Kaiserslautern - Fachbereich Informatik
Year of publication
- 1999 (228)
- 1994 (33)
- 1996 (32)
- 1998 (32)
- 1995 (31)
- 1997 (26)
- 2021 (24)
- 2016 (23)
- 2019 (23)
- 2015 (22)
- 2022 (22)
- 2001 (20)
- 2023 (20)
- 2007 (18)
- 2018 (18)
- 2013 (17)
- 2020 (16)
- 2014 (15)
- 1993 (14)
- 2009 (11)
- 2012 (11)
- 2000 (10)
- 2002 (10)
- 2003 (7)
- 2004 (7)
- 2006 (7)
- 2008 (7)
- 2017 (7)
- 2010 (6)
- 2005 (5)
- 1991 (4)
- 1992 (4)
- 2011 (4)
- 1979 (2)
- 1980 (1)
- 1990 (1)
- 2024 (1)
Document Type
- Preprint (290)
- Doctoral Thesis (212)
- Report (111)
- Article (99)
- Conference Proceeding (8)
- Master's Thesis (7)
- Study Thesis (5)
- Bachelor Thesis (4)
- Habilitation (2)
- Part of a Book (1)
Language
- English (739)
Has Fulltext
- yes (739)
Keywords
- AG-RESY (47)
- PARO (25)
- Visualisierung (16)
- SKALP (15)
- Case-Based Reasoning (11)
- RODEO (11)
- HANDFLEX (9)
- Visualization (8)
- Deep Learning (7)
- Robotics (7)
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
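The cyclic mapping of configuration-space hypercubes onto processing units is the core of the load distribution. The following is a minimal Python sketch of that idea under an assumed discretisation; all constants and names are illustrative, not the paper's.

    import itertools
    from collections import Counter

    JOINTS = 6          # degrees of freedom of the robot arm
    CELLS_PER_AXIS = 4  # assumed discretisation per joint axis
    WORKERS = 9         # e.g. the 9-PC workstation cluster

    def hypercube_owner(cube_index, workers=WORKERS):
        """Cyclic mapping: hypercube k is handled by processing unit k mod P."""
        return cube_index % workers

    # Enumerate all hypercubes of the 6D grid and assign them to workers.
    assignment = {
        coords: hypercube_owner(k)
        for k, coords in enumerate(
            itertools.product(range(CELLS_PER_AXIS), repeat=JOINTS))
    }

    # Neighbouring cubes land on different workers, which is what yields
    # the good load distribution reported above.
    print(Counter(assignment.values()))  # near-uniform cube counts per worker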
The World Wide Web is a medium through which a manufacturer may allow Internet visitors to customize or compose his products. Due to missing or rapidly changing standards these applications are often restricted to relatively simple CGI or JAVA based scripts. Usually, results like images or movies are stored in a database and are transferred on demand to the web-user. Viper (Visualisierung parametrisch editierbarer Raumkomponenten) is a Toolkit [VIP96] written in C++ and JAVA which provides 3D-modeling and visualization methods for developing complex web-based applications. The Toolkit has been designed to build a prototype which can be used to construct and visualize prefabricated homes on the Internet. Alternative applications are outlined in this paper. Within Viper, all objects are stored in a scene graph (VSSG), which is the basic data structure of the Toolkit. To show the concept and structure of the Toolkit, the functionality and implementation of the prototype are described.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
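How a pair (CB, sim) describes a concept implicitly can be made concrete in a few lines of Python. The similarity measure and the nearest-neighbour decision rule below are illustrative assumptions, not the construction used in the article.

    # A concept represented implicitly by a case base plus a similarity measure.

    def sim(x, y):
        """Toy similarity on boolean feature vectors: fraction of agreeing features."""
        return sum(a == b for a, b in zip(x, y)) / len(x)

    def classify(cb, x):
        """x belongs to the concept iff its most similar stored case is positive."""
        case, label = max(cb, key=lambda cl: sim(cl[0], x))
        return label

    CB = [((1, 1, 0), True), ((0, 0, 1), False)]
    print(classify(CB, (1, 0, 0)))  # True: closer to the positive case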
Retrieving multiple cases is supposed to be an adequate retrieval strategy for guiding partial-order planners because of the recognized flexibility of these planners to interleave steps in the plans. Cases are combined by merging them. In this paper, we examine two different kinds of merging cases in the context of partial-order planning. We will see that merging cases can be very difficult if the cases are merged eagerly. On the other hand, if cases are merged by avoiding redundant steps, the guidance of the additional cases tends to decrease with the number of covered goals and retrieved cases in domains having a certain kind of interactions. Thus, retrieving a single case covering many of the goals of the problem, or retrieving fewer cases covering many of the goals, is at least as effective as retrieving several cases covering all goals in these domains.
A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)
We present an approach for the treatment of Feature Interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.
A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it. Some of the steps needed to do so are explained. As an example, the catalogue is applied within the context of the application domain of building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.
A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.
Coloring terms (rippling) is a technique developed for inductive theorem proving which uses syntactic differences of terms to guide the proof search. Annotations (colors) to terms are used to maintain this information. This technique has several advantages, e.g. it is highly goal oriented and involves little search. In this paper we give a general formalization of coloring terms in a higher-order setting. We introduce a simply-typed lambda calculus with color annotations and present an appropriate (pre-)unification algorithm. Our work is a formal basis for the implementation of rippling in a higher-order setting which is required e.g. in case of middle-out reasoning. Another application is in the construction of natural language semantics, where the color annotations rule out linguistically invalid readings that are possible using standard higher-order unification.
This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.
This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated in the whole lifecycle of a software development project. The negotiations really influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multi-server workflow engines.
The concept of the Virtual Software Corporation (VSC) has recently become a practical reality as a result of advances in communication and distributed technologies. However, there are significant difficulties with the management of the software development process within a VSC. The main problem is the significantly increased communicational complexity of the process model for such developments. The more classic managerial hierarchy is generally replaced by a "flatter" network of commitments. Therefore new solution approaches are required to provide the necessary process support. The purpose of this paper is to present a solution approach which models the process based on deontic logic. The approach has been validated against a case study where it was used to model commitments and inter-human communications within the software development process of a VSC. The use of the formalism is exemplified through a prototype system using a layered multi-agent architecture.
This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMS into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.
In this paper we present an interpreter which supports the validation of conceptual models in early stages of the development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can directly be used to solve real world problems. If there exist formal (operational) specifications for subtasks, then these are delegated to computers. Therefore, our approach makes it possible to specify and validate distributed, cooperative systems where some subtasks are solved by humans and other subtasks are solved automatically by computers.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server-principle, the control unit establishes an open user interface for including application specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
Medical cyber-physical systems (MCPS) emerged as an evolution of the relations between connected health systems, healthcare providers, and modern medical devices. Such systems combine independent medical devices at runtime in order to render new patient monitoring/control functionalities, such as physiological closed loops for controlling drug infusion or optimization of alarms. Despite the advances regarding alarm precision, healthcare providers still struggle with alarm flooding caused by the limited risk assessment models. Furthermore, these limitations also impose severe barriers on the adoption of automated supervision through autonomous actions, such as safety interlocks for avoiding overdosage. The literature has focused on the verification of safety parameters to assure the safety of treatment at runtime and thus optimize alarms and automated actions. Such solutions have relied on the definition of actuation ranges based on thresholds for a few monitored parameters. Given the very dynamic nature of the relevant context conditions (e.g., the patient's condition, treatment details, system configurations, etc.), fixed thresholds are a weak means for assessing the current risk. This thesis presents an approach for enabling dynamic risk assessment for cooperative MCPS based on an adaptive Bayesian Network (BN) model. The main aim of the approach is to support continuous runtime risk assessment of the current situation based on relevant context and system information. The presented approach comprises (i) a dynamic risk analysis constituent, which corresponds to the elicitation of relevant risk parameters, risk metric building, and risk metric management; and (ii) a runtime risk classification constituent, which aims to analyze the current situation risk, establish risk classes, and identify and deploy mitigation measures. The proposed approach was evaluated, and its feasibility was demonstrated, by means of simulated experiments guided by an international team of medical experts with a focus on the requirements of efficacy, efficiency, and availability of patient treatment.
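To make the notion of model-based runtime risk assessment concrete, the following Python sketch scores a situation with a toy Bayesian network by summing out an unobserved variable and mapping the result to a risk class. The network, its variables, and all probabilities are invented for illustration; the thesis' adaptive BN model is far richer.

    # Toy BN: an unobserved patient condition and an observed infusion rate
    # jointly determine the probability of an overdose event.
    p_condition_critical = 0.1
    p_overdose = {  # P(overdose=True | condition_critical, rate_high)
        (True, True): 0.6, (True, False): 0.1,
        (False, True): 0.2, (False, False): 0.01,
    }

    def risk_of_overdose(rate_high):
        """P(overdose | rate observation), summing out the patient condition."""
        return sum(
            (p_condition_critical if cond else 1 - p_condition_critical)
            * p_overdose[(cond, rate_high)]
            for cond in (True, False)
        )

    def risk_class(p, threshold=0.1):
        """Map the continuous risk value to a discrete risk class."""
        return "high" if p > threshold else "low"

    print(risk_class(risk_of_overdose(True)))   # high (0.24)
    print(risk_class(risk_of_overdose(False)))  # low  (0.019)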
Information technology support for complex, dynamic, and distributed business processes as they occur in engineering domains requires an advanced process management system which enhances currently available workflow management services with respect to integration, flexibility, and adaptation. We present a uniform and flexible framework for advanced process management on an abstract level which uses and adapts agent technology from distributed artificial intelligence for both modelling and enacting of processes. We identify two different frameworks for applying agent technology to process management: first, as a multi-agent system with the domain of process management; second, as a key infrastructure technology for building a process management system. We then follow the latter approach and introduce different agent types for managing activities, products, and resources which capture specific views on the process.
Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with some environment. Because of this restriction, open systems can only be specified together with and incorporated with an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.
This paper describes the architecture and concept of operation of a Framework for Adaptive Process Modeling and Execution (FAME). The research addresses the absence of robust methods for supporting the software process management life cycle. FAME employs a novel, model-based approach in providing automated support for different activities in the software development life cycle including project definition, process design, process analysis, process enactment, process execution status monitoring, and execution status-triggered process redesign. FAME applications extend beyond the software development domain to areas such as agile manufacturing, project management, logistics planning, and business process reengineering.
The research for this thesis was conducted to develop a framework which supports the automatic configuration of project-specific software development processes by selecting and combining different technologies: the Process Configuration Framework. The research draws attention to the problem that while the research community develops new technologies, industrial companies continue to use only their well-known ones. Because of this, technology transfer takes decades. In addition, there is the fact that no single solution solves all problems in a software development project. This leads to a number of technologies which need to be combined for one project.
The framework developed and explained in this research mainly addresses those problems by building a bridge between research and industry as well as by supporting software companies in selecting the most appropriate technologies combined into a software process. The technology transfer gap is filled by a repository of (new) technologies which is used as a foundation of the Process Configuration Framework. The process is configured by providing a SPEM process pattern for each technology, so that companies can build their process by plugging the patterns into each other.
The technologies of the repository were specified in a schema including a technology model, a context model, and an impact model. With context and impact it is possible to provide information about a technology, for example, its benefits to quality, cost, or schedule. The process patterns offered as output of the Process Configuration Framework are produced in several stages (a sketch of the ranking and chaining idea follows the list):
I. Technology Ranking:
   1. Ranking based on Application Domain, Project & Impact
   2. Ranking based on Environment
   3. Ranking based on Static Context
II. Technology Combination:
   4. Creation of all possible Technology Chains
   5. Restriction of the Technology Chains
   6. Ranking based on Static and Dynamic Context
   7. Extension of the Chains by Quality Assurance
III. Process Configuration:
   8. Process Component Diagram
   9. Extension of the Process Component Diagram
   10. Instantiation of the Components by Technologies of the Technology Chain
   11. Providing Process Patterns
   12. Creation of the Process based on Patterns
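The following Python sketch illustrates the flavour of stages I and II under invented data: technologies carry a context-fit score and typed input/output artefacts, ranking sorts by fit, and chains are those sequences whose artefact types plug into each other. It is an illustration of the idea, not the framework's actual algorithm.

    from itertools import permutations

    technologies = {  # name: (context-fit score in [0,1], input artefact, output artefact)
        "Inspection":  (0.8, "code", "defect report"),
        "UnitTesting": (0.7, "code", "test report"),
        "CodeGen":     (0.9, "model", "code"),
    }

    # Stage I: rank technologies by their (static) context fit.
    ranking = sorted(technologies, key=lambda t: technologies[t][0], reverse=True)

    # Stage II: create all technology chains whose artefact types match
    # (output of one technology = input of the next).
    def chains(names):
        for length in range(2, len(names) + 1):
            for seq in permutations(names, length):
                if all(technologies[a][2] == technologies[b][1]
                       for a, b in zip(seq, seq[1:])):
                    yield seq

    print(ranking)                        # ['CodeGen', 'Inspection', 'UnitTesting']
    print(list(chains(list(technologies))))
    # [('CodeGen', 'Inspection'), ('CodeGen', 'UnitTesting')]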
The effectiveness and quality of the Process Configuration Framework have additionally been evaluated in a case study. Here, the Technology Chains manually created by experts were compared to the chains automatically created by the framework after it was configured by those experts. This comparison showed that the framework results are similar and can therefore be used as a recommendation.
We conclude from our research that support during the configuration of a process for software projects is important, especially for non-experts. This support is provided by the Process Configuration Framework developed in this research. In addition, our research has shown that this framework offers a possibility to narrow the technology transfer gap between the research community and industrial companies.
A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and, thus, is a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees (considering XML structure either in isolation or combined with textual information), mechanisms to support the selection of relevant information from XML trees and organization of this information into a suitable format for similarity calculation, and efficient algorithms for large-scale identification of similar, set-represented objects. Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
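The core of a similarity join over set-represented objects can be stated in a few lines. In the Python sketch below, XML fragments are assumed to be already reduced to token sets, and a naive quadratic join pairs fragments whose Jaccard similarity reaches a threshold; the thesis' similarity functions and algorithms are considerably more elaborate and efficient.

    def jaccard(a, b):
        """Set-overlap similarity: |intersection| / |union|."""
        return len(a & b) / len(a | b)

    def similarity_join(fragments, threshold=0.5):
        """Naive O(n^2) join over set-represented XML fragments."""
        items = list(fragments.items())
        for i, (id1, s1) in enumerate(items):
            for id2, s2 in items[i + 1:]:
                if jaccard(s1, s2) >= threshold:
                    yield id1, id2

    fragments = {  # invented token sets mixing path and text tokens
        "a": {"/dblp/article/title", "xml", "similarity", "join"},
        "b": {"/dblp/article/title", "xml", "similarity", "joins"},
        "c": {"/dblp/book/title", "databases"},
    }
    print(list(similarity_join(fragments)))  # [('a', 'b')]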
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that each type of segmented information is processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) lead to high performance, i.e., better than that of previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework was tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set which is based on the intensity gradient, is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds like printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera captured document images is the lack of labeled camera captured document image datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera captured / scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera captured) documents in any language, e.g., English, Russian, Arabic, Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCR systems, which already achieve very high accuracy on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and outperforms all of the existing OCR engines on camera captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e., non MVP-L specialists) review models much more easily by using graphical representations. Although several graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic symbols into a graphical representation of a model.
We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.
Cooperative decision making involves a continuous process of assessing the validity of data, information, and knowledge acquired and inferred by the colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts - constituting the world model of the expert - onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise and non-arbitrary reconstruction of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with the one of the application. Robot-human interaction is awkward and dangerous for the human because of the possibly large size and power of the robot, so robot movements must be predictable and natural. We present a novel hybrid control algorithm which provides the needed precision in small scale movements while allowing for fast and intuitive large scale translations.
Data is the new gold and serves as a key to answer the five W's (Who, What, Where, When, Why) and How's of any business. Companies are now mining data more than ever, and one of the most important aspects while analyzing this data is to detect anomalous patterns and identify critical points. To tackle the vital aspects of time-series analysis, this thesis presents a novel hybrid framework that stands on three pillars: Anomaly Detection, Uncertainty Estimation, and Interpretability and Explainability.
The first pillar comprises contributions in the area of time-series anomaly detection. Deep Anomaly Detection for Time-series (DeepAnT), a novel deep learning-based anomaly detection method, lies at the foundation of the proposed hybrid framework and addresses the inadequacy of traditional anomaly detection methods. To the best of the author's knowledge, DeepAnT is the first method to use a Convolutional Neural Network (CNN) to robustly detect multiple types of anomalies in tricky and continuously changing time-series data. To further improve the anomaly detection performance, a fusion-based method, Fusion of Statistical and Deep Learning for Anomaly Detection (FuseAD), is proposed. This method aims to combine the strengths of existing well-founded statistical methods and powerful data-driven methods.
In the second pillar of this framework, a hybrid approach that combines the high accuracy of deterministic models with the posterior distribution approximation of Bayesian neural networks is proposed.
In the third pillar of the proposed framework, mechanisms to enable both the HOW and the WHY parts are presented.
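The forecasting-based detection principle behind DeepAnT can be sketched without any deep learning machinery: predict the next value from a history window and use the prediction error as the anomaly score. In the dependency-free Python sketch below, the CNN predictor is deliberately replaced by a moving-average stand-in; only the predict-then-score scheme is kept.

    def anomaly_scores(series, window=5):
        """Anomaly score = distance between predicted and observed value."""
        scores = []
        for t in range(window, len(series)):
            prediction = sum(series[t - window:t]) / window  # stand-in predictor
            scores.append(abs(series[t] - prediction))
        return scores

    series = [1.0] * 20
    series[12] = 9.0                    # injected point anomaly
    window = 5
    scores = anomaly_scores(series, window)
    threshold = 3.0                     # would be calibrated on training data
    print([t + window for t, s in enumerate(scores) if s > threshold])  # [12]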
Embedded systems have become ubiquitous in everyday life, and especially in the automotive industry. New applications challenge their design by introducing a new class of problems that are based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane, which contains polygons to represent the objects of the environment. Usual implementations of computational geometry algorithms cannot be directly used for safety-critical systems. First, a strict analysis of their correctness is indispensable, and second, non-functional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane sweep and polygon algorithms. These layers do not only divide the whole system into manageable parts but make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan and this representative results in the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
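A minimal Python illustration of the correlation idea: each scan is reduced to an angle histogram (a statistical representation, with no geometric primitives), and the circular cross-correlation of two histograms peaks at their relative rotation. Bin counts and data are invented for this sketch.

    def circular_xcorr(a, b):
        """Circular cross-correlation of two equally long histograms."""
        n = len(a)
        return [sum(a[i] * b[(i + shift) % n] for i in range(n))
                for shift in range(n)]

    BINS = 36  # 10-degree bins
    reference = [0.0] * BINS
    reference[0] = reference[9] = 1.0   # e.g. two dominant wall directions
    current = [0.0] * BINS
    current[3] = current[12] = 1.0      # same walls, robot rotated by 30 degrees

    corr = circular_xcorr(reference, current)
    rotation = corr.index(max(corr)) * 10  # degrees per bin
    print(rotation)  # 30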
We present a mathematical knowledge base containing the factual knowledge of the first of three parts of a textbook on semi-groups and automata, namely "P. Deussen: Halbgruppen und Automaten". Like almost all mathematical textbooks, this textbook is not self-contained; there are some algebraic and set-theoretical concepts that are not explained. These concepts are added to the knowledge base. Furthermore there is knowledge about the natural numbers, which is formalized following the first paragraph of "E. Landau: Grundlagen der Analysis". The data base is written in a sorted higher-order logic, a variant of POST, the working language of the proof development environment Omega-MKRP. We distinguish three different types of knowledge: axioms, definitions, and theorems. Up to now, there are only 2 axioms (natural numbers and cardinality), 149 definitions (like that for a semi-group), and 165 theorems. The consistency of such knowledge bases cannot be proved in general, but inconsistencies may be imported only by the axioms. Definitions and theorems should not lead to any inconsistency since definitions form conservative extensions and theorems are proved to be consequences.
Nowadays, accounting, charging and billing users' network resource consumption are commonly used for the purpose of facilitating reasonable network usage, controlling congestion, allocating cost, gaining revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are some situations in which IP addresses cannot be used to identify users uniquely, for example, in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of these hosts instead of the corresponding real users who have consumed the network resources. Therefore, accurate accountability in these systems is practically impossible. This is a flaw of the traditional IP address based IP traffic accounting technique. This dissertation proposes a user based IP traffic accounting model which can facilitate collecting network resource usage information on the basis of users. With user based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. In this dissertation, three different schemes, which can achieve the user based IP traffic accounting mechanism, are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it. Then it identifies the users of these IP packets and inserts user information into the IP packets. With this mechanism, a meter located in a key position of the network can intercept the IP packets tagged with user information, and extract not only statistic information but also IP addresses and user information from the IP packets to generate accounting records with user information. The out-of-band scheme is the counterpart of the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic. However, the user information is transferred through a separate channel, distinct from the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying users of IP traffic. It assigns each user in a measured host a unique IP address. Thereby, an IP address can be used to identify a user uniquely without ambiguity. This way, traditional IP address based accounting techniques can be applied to achieve the goal of user based IP traffic accounting. In this dissertation, a user based IP traffic accounting prototype system developed according to the out-of-band scheme is also introduced. The application of the user based IP traffic accounting model in distributed computing environments is also discussed.
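The effect of user identification on metering can be sketched in a few lines of Python: once the Accounting Agent has attached a user to each intercepted packet, the meter aggregates consumption per user rather than per IP address. The record layout and names are illustrative only.

    from collections import defaultdict

    # Packets after the Accounting Agent has identified the responsible user.
    identified_packets = [  # (src_ip, user, bytes)
        ("192.0.2.10", "alice", 1500),
        ("192.0.2.10", "bob", 400),    # same multi-user host, different user
        ("192.0.2.11", "carol", 900),
    ]

    usage = defaultdict(int)
    for ip, user, size in identified_packets:
        usage[user] += size            # per-user instead of per-IP accounting

    print(dict(usage))  # {'alice': 1500, 'bob': 400, 'carol': 900}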
This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.
The well-known and powerful proof principle by well-founded induction says that for verifying \(\forall x : P(x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x : P(y)] \Rightarrow P(x)]\), provided \(<\) is a well-founded partial ordering on the domain of interest. Here we investigate a more general formulation of this proof principle which allows for a kind of parameterized partial orderings \(<_x\) which naturally arises in some cases. More precisely, we develop conditions under which the parameterized proof principle \(\forall x : [[\forall y <_x x : P(y)] \Rightarrow P(x)]\) is sound in the sense that \(\forall x : [[\forall y <_x x : P(y)] \Rightarrow P(x)] \Rightarrow \forall x : P(x)\) holds, and give counterexamples demonstrating that these conditions are indeed essential.
We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.
The main problem in computer graphics is to solve the global illumination problem, which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in a faster convergence.
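The Monte Carlo versus quasi-Monte Carlo comparison can be illustrated on a toy one-dimensional integral rather than the radiance equation itself. The Python sketch below replaces pseudo-random samples by the base-2 van der Corput sequence, a standard low-discrepancy sequence, which typically shrinks the integration error for the same sample count.

    import random

    def van_der_corput(i, base=2):
        """i-th element of the van der Corput sequence (radical inverse)."""
        f, result = 1.0, 0.0
        while i > 0:
            f /= base
            result += f * (i % base)
            i //= base
        return result

    def f(x):
        return x * x  # exact integral over [0, 1] is 1/3

    N = 1024
    mc = sum(f(random.random()) for _ in range(N)) / N
    qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
    print(abs(mc - 1 / 3), abs(qmc - 1 / 3))  # QMC error is usually much smaller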
We investigate one of the classical problems of the theory of term rewriting, namely termination. We present an ordering for comparing higher-order terms that can be utilized for testing termination and decreasingness of higher-order conditional term rewriting systems. The ordering relies on a first-order interpretation of higher-order terms and a suitable extension of the RPO.
The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy that is addressed in this paper is about the implicit assumption of what exists. For instance, in the case of universal quantification natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it makes no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can do. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.
One of the many features needed to support the activities of autonomous systems is the ability of motion planning. It enables robots to move in their environment securely and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason involves the long execution times of the motion planning component. A solution for this problem is offered by the use of highly parallel computation. Thus, an important task is the parallelization of existing motion planning algorithms for robots so that they are suitable for highly parallel execution. In several cases, completely new algorithms have to be designed so that a parallelization is feasible. In this survey, we review recent approaches to motion planning using parallel computation.
Due to continuously increasing demands in the area of advanced robot control, it became necessary to speed up the computation. One way to reduce the computation time is to distribute the computation onto several processing units. In this survey we present different approaches to the parallel computation of robot kinematics and the Jacobian. Thereby, we discuss both the forward and the reverse problem. We introduce a classification scheme and classify the references by this scheme.
The term enterprise modelling, synonymous with enterprise engineering, refers to methodologies developed for modelling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modelling activities, resources, etc. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modelling computer integrated environments, and its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. PERA is another methodology for developing models of computer integrated manufacturing environments. The department of industrial engineering in Toronto proposed the development of ontologies as a vehicle for enterprise integration. The paper reviews the work carried out by various researchers and computing departments in the area of enterprise modelling and points out other modelling problems related to enterprise integration.
In recent years, more and more publications and material for studying and teaching, e.g. for Web-based teaching (WBT), appear "online", and digital libraries are built to manage such publications and online materials. Therefore, the most important concerns are related to the problem of durable, sustained storage and the management of content together with its metadata existing in heterogeneous styles and formats. In this paper, we present specific techniques and their use to support metadata-based catalog services. Such semistructured metadata (represented as XML fragments), which belong to online learning resources, need efficient XML-based query support, scalable result set processing, and comprehensive facilities for personalization purposes. We discuss the associated problems, subsequently derive the concepts of a suitable architecture, and finally outline the realization by means of our prototype system that is based on the J2EE component model.
This report presents the properties of a specification of the domain of process planning for rotary symmetrical workpieces. The specification results from a model for problem solving in this domain that involves different reasoners, one of which is an AI planner that achieves goals corresponding to machining workpieces by considering certain operational restrictions of the domain. When planning with SNLP (McAllester and Rosenblitt, 1991), we will show that the resulting plans have the property of minimizing the use of certain key operations. Further, we will show that, for elastic protected plans (Kambhampati et al., 1996) such as the ones produced by SNLP, the goals corresponding to machining parts of a workpiece are OE-constrained trivial serializable, a special form of trivial serializability (Barrett and Weld, 1994). However, we will show that planning with SNLP in this domain can be very difficult: elastic protected plans for machining parts of a workpiece are non-mergeable. Finally, we will show that, for suffix, prefix, or suffix-and-prefix plans such as the ones produced by state-space planners, it is not possible to have both properties, being OE-constrained trivial serializable and minimizing the use of the key operations, at the same time.
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value due to its position in the network. It is often disregarded that they implicitly assume a network process flowing through the network, and also make assumptions about how the network process flows through the network. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. In order to do so, we introduce several variants of the betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that, in general, the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation of ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of results of those centrality measures. Since a mismatch of the behaviour of the real network process and the assumed process model does even affect the highly-ranked nodes, resulting rankings need to be interpreted with care.
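The shortest-path assumption of standard closeness centrality, as discussed above, can be made explicit in a few lines of Python; a process-aware variant in the spirit of the paper's measure variants would substitute observed trajectory lengths for the BFS distances. The graph and values are illustrative.

    from collections import deque

    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

    def bfs_distances(graph, source):
        """Shortest-path (hop) distances from source in an unweighted graph."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def closeness(graph, node):
        """Standard closeness: assumes the process flows along shortest paths."""
        dist = bfs_distances(graph, node)
        return (len(dist) - 1) / sum(d for d in dist.values() if d > 0)

    print({n: round(closeness(graph, n), 2) for n in graph})
    # 'b' and 'c' rank highest: central under the shortest-path assumption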
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
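For reference, the strong Kleene connectives mentioned above behave as min/max over the truth order F < U < T, with U marking undefinedness (e.g., a partial function applied outside its domain). A small Python sketch of these standard truth tables follows; the tableau calculus itself is beyond this illustration.

    T, F, U = "T", "F", "U"
    ORDER = {F: 0, U: 1, T: 2}  # truth order F < U < T

    def k_not(a):
        return {T: F, F: T, U: U}[a]

    def k_and(a, b):
        return min(a, b, key=ORDER.get)

    def k_or(a, b):
        return max(a, b, key=ORDER.get)

    print(k_and(T, U))  # U: the conjunction's truth depends on the gap
    print(k_or(T, U))   # T: already settled regardless of the undefined part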
A Tailored Real Time Temporal Logic for Specifying Requirements of Building Automation Systems
(1999)
A tailored real time temporal logic for specifying requirements of building automation systems is introduced and analyzed. The logic features several new real time operators, which are chosen with regard to the application area. The new operators improve the conciseness and readability of requirements as compared to a general-purpose real time temporal logic. In addition, some of the operators also enhance the expressiveness of the logic. A number of properties of the new operators are presented and proven.
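To give a feel for such tailored operators, the Python sketch below evaluates a bounded-response property ("every request is answered within d time units") over a finite timed trace. The operator, trace format, and events are invented for illustration and are not taken from the paper's logic.

    def bounded_response(trace, request, response, d):
        """trace: list of (timestamp, event) pairs, sorted by time.
        True iff each `request` is followed by a `response` no more
        than d time units later."""
        return all(
            any(e == response and t <= t_req + d for t, e in trace if t >= t_req)
            for t_req, ev in trace if ev == request
        )

    # A lighting scenario: every button press must switch the light on
    # within 3 time units.
    trace = [(0, "press"), (2, "light_on"), (10, "press"), (15, "light_on")]
    print(bounded_response(trace, "press", "light_on", 3))
    # False: the second press is answered only after 5 time units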