In function-oriented testing of electronic control units (ECUs) in the automotive domain, expert knowledge is indispensable because of the high complexity of the test cases. For basic test techniques such as boundary value analysis, the intent of a test case is given implicitly by the technique itself. When test cases are created from expert knowledge, however, a prose description is currently written for each test case to state its intent. This prose description is prone to ambiguity, varies from one test developer to another, and is only loosely connected to the content of the test case. The goal of this work is to design a specification language for describing test cases that minimizes the drawbacks of natural language and defines test-flow-specific language elements, so that the description can serve as a skeleton for a test case. To this end, language elements for the description are derived from the operating environment (system specification, test implementation, and test process concerns), and approaches for transforming the description into the test implementation are examined. The result is a test case specification language that rests on a formal foundation and can, among other things, be mapped to a graphical view. As with UML, the added value only becomes apparent through tool-supported input: test developers are then able to define uniform, formal, reusable, and comprehensible test cases.
In robotics, information is often regarded as a means to an end. The question of how to structure information and how to bridge the semantic gap between different levels of abstraction in a uniform way is still widely regarded as a technical issue. Ignoring these challenges appears to lead robotics into a stasis similar to that experienced by the software industry in the late 1960s. From the beginning of the software crisis until today, numerous methods, techniques, and tools for managing the increasing complexity of software systems have evolved. Attempts to transfer several of these ideas to applications in robotics have yielded various control architectures, frameworks, and process models. These attempts mainly provide modularisation schemata which suggest how to decompose a complex system into less complex subsystems. The schematisation of representation and information flow, however, is mostly ignored. In this work, a set of design schemata is proposed which is embedded into an action/perception-oriented design methodology to promote thorough abstractions between distinct levels of control. Action-oriented design decomposes control systems top-down, and sensor data is extracted from the environment as required. This comes with the problem that information is often condensed prematurely. As a result, sensor processing depends on the control system design, leading to a monolithic system structure with limited options for reuse. In contrast, perception-oriented design constructs control systems bottom-up, starting with the extraction of environment information from sensor data. The extracted entities are placed into structures which evolve with the development of the sensor processing algorithms. In consequence, the control system is strictly dependent on the sensor processing algorithms, which again results in a monolithic system. In their particular domains, both design approaches have great advantages but fail to create inherently modular systems. The design approach proposed in this work combines the strengths of action orientation and perception orientation into one coherent methodology without inheriting their weaknesses. More precisely, design schemata for representation, translation, and fusion of environmental information are developed which establish thorough abstraction mechanisms between components. The explicit introduction of abstractions particularly supports extensibility and scalability of robot control systems by design.
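The following is a minimal, hypothetical sketch of how such representation, translation, and fusion schemata might decouple perception from control; all class and function names are illustrative assumptions, not the thesis's actual design.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Obstacle:           # low-level representation extracted from sensor data
    x: float
    y: float
    radius: float


@dataclass
class OccupancyCell:      # higher-level representation consumed by the control layer
    x: float
    y: float
    occupied: bool


def translate(obstacles: List[Obstacle], resolution: float) -> List[OccupancyCell]:
    """Translation schema: map one representation level onto the next."""
    cells = []
    for o in obstacles:
        cells.append(OccupancyCell(x=round(o.x / resolution) * resolution,
                                   y=round(o.y / resolution) * resolution,
                                   occupied=True))
    return cells


def fuse(a: List[OccupancyCell], b: List[OccupancyCell]) -> List[OccupancyCell]:
    """Fusion schema: combine information produced by several perception components."""
    merged: Dict[Tuple[float, float], OccupancyCell] = {(c.x, c.y): c for c in a}
    for c in b:
        if (c.x, c.y) not in merged or c.occupied:
            merged[(c.x, c.y)] = c
    return list(merged.values())
```

Because control components only depend on the shared representation (here, occupancy cells), either side can be replaced or extended without touching the other, which is the kind of abstraction the proposed schemata aim to enforce.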
Modern science utilizes advanced measurement and simulation techniques to analyze phenomena from fields such as medicine, physics, or mechanics. The data produced by applying these techniques takes the form of multi-dimensional functions or fields, which have to be processed in order to provide meaningful parts of the data to domain experts. The definition and implementation of such processing techniques with the goal of producing visual representations of portions of the data are a topic of research in scientific visualization or, in the case of multiple fields, in multi-field visualization. In this thesis, we contribute novel feature extraction and visualization techniques that are able to convey data from multiple fields created by scientific simulations or measurements. Furthermore, our scalar-, vector-, and tensor field processing techniques contribute to scattered field processing in general and introduce novel ways of analyzing and processing tensorial quantities such as strain and displacement in flow fields, providing insights into field topology. We introduce novel mesh-free extraction techniques for visualization of complex-valued scalar fields in acoustics that aid in understanding wave topology in low-frequency sound simulations. The resulting structures represent regions with locally minimal sound amplitude and convey wave node evolution and sound cancellation in time-varying sound pressure fields, which is considered an important feature in acoustics design. Furthermore, methods for flow field feature extraction are presented that facilitate analysis of velocity and strain field properties by visualizing the deformation of infinitesimal Lagrangian particles and the macroscopic deformation of surfaces and volumes in flow. The resulting adaptive manifolds are used to perform flow field segmentation, which supports multi-field visualization by selective visualization of scalar flow quantities. The effects of continuum displacement in scattered moment tensor fields can be studied by a novel method for multi-field visualization presented in this thesis. The visualization method demonstrates the benefit of clustering and separate views for the visualization of multiple fields.
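As background for the deformation analysis mentioned above, a standard textbook formulation describes the deformation of an infinitesimal Lagrangian particle via the gradient of the flow map; this is an illustrative sketch in common notation, and the thesis's concrete definitions may differ.

```latex
% Illustrative, standard formulation of Lagrangian particle deformation.
\[
  F = \nabla_{x_0}\,\phi_{t_0}^{t}(x_0),
  \qquad
  C = F^{\mathsf{T}} F,
\]
% where $\phi_{t_0}^{t}$ maps a particle from its seed position $x_0$ at time $t_0$
% to its position at time $t$; the right Cauchy--Green tensor $C$ characterizes the
% local stretching (strain) experienced by the particle.
```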
Due to remarkable technological advances in the last three decades, the capacity of computer systems has improved tremendously. According to Moore's law, the number of transistors on integrated circuits has doubled approximately every two years, and the trend is continuing. Likewise, developments in storage density, network bandwidth, and compute capacity show similar patterns. As a consequence, the amount of data that can be processed by today's systems has increased by orders of magnitude. At the same time, however, the resolution of screens has increased by hardly a factor of ten. Thus, there is a gap between the amount of data that can be processed and the amount of data that can be visualized. Large high-resolution displays offer a way to deal with this gap and provide a significantly increased screen area by combining the images of multiple smaller display devices. The main objective of this dissertation is the development of new visualization and interaction techniques for large high-resolution displays.
We study the extension of techniques from Inductive Logic Programming (ILP) to temporal logic programming languages. To this end, we present two temporal logic programming languages and analyse the learnability of programs in these languages from finite sets of examples. For first-order temporal logic, we analyse the following questions: How can we characterize the denotational semantics of programs? Which proof techniques are best suited? How complex is the learning task? For propositional temporal logic, we analyse the following questions: How can we use well-known techniques from model checking in order to refine programs? How complex is the learning task? In both cases we present estimates of the VC-dimension of selected classes of programs.
A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple, non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets because the structure, besides the textual information, may exhibit variations across document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and is therefore a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees, considering either XML structure in isolation or in combination with textual information; mechanisms to support the selection of relevant information from XML trees and the organization of this information into a format suitable for similarity calculation; and efficient algorithms for large-scale identification of similar, set-represented objects. Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
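As a simplified illustration of the last contribution, a set-similarity join correlates objects whose token sets overlap above a threshold. The sketch below uses plain Jaccard similarity and a naive nested-loop join; it is not the optimized algorithm developed in the thesis, and all names are illustrative.

```python
from typing import Dict, List, Set, Tuple


def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity of two token sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def similarity_join(objects: Dict[str, Set[str]],
                    threshold: float) -> List[Tuple[str, str, float]]:
    """Naive set-similarity self-join: report all pairs above the threshold.

    Large-scale joins would add filters (e.g. size or prefix filtering)
    to prune candidate pairs before verification.
    """
    ids = sorted(objects)
    result = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            sim = jaccard(objects[a], objects[b])
            if sim >= threshold:
                result.append((a, b, sim))
    return result


# Example: two fuzzy duplicates of the same real-world entity.
docs = {
    "d1": {"john", "smith", "london"},
    "d2": {"jon", "smith", "london"},
    "d3": {"mary", "jones", "paris"},
}
print(similarity_join(docs, threshold=0.4))   # d1 and d2 are reported as similar
```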
We tackle the problem of obtaining statistics on the content and structure of XML documents by using summaries that provide cardinality estimates for XML query expressions. Our focus is a data-centric processing scenario in which a query engine processes such query expressions. We provide three new summary structures called LESS (Leaf-Element-in-Subtree), LWES (Level-Wide Element Summarization), and EXsum (Element-centered XML Summarization), which are designed to serve as the basis of the estimation process in an XML query optimizer. Each of them collects structural statistical information about XML documents, and the latter (EXsum) additionally gathers statistics on document content. Estimation procedures and/or heuristics for specific types of query expressions are developed for each proposed approach. We have incorporated and implemented our proposals in XTC, a native XML database management system (XDBMS). On this common implementation base, we present an empirical, comparative study in which our proposals are evaluated against other approaches published in the literature, which are also incorporated into XTC. Furthermore, an analysis is made based on criteria pertinent to the query optimization process.
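To illustrate the general idea of such structural summaries (not the specific LESS, LWES, or EXsum structures), a simple path synopsis counts element occurrences per rooted path and answers cardinality estimates for simple path expressions. This is a hedged sketch with assumed names, not the thesis's data structures.

```python
from collections import Counter
from typing import Iterable, List, Tuple


def build_path_synopsis(paths: Iterable[Tuple[str, ...]]) -> Counter:
    """Count how often each rooted element path occurs in the document.

    `paths` yields one rooted path per element, e.g. ('dblp', 'article', 'author').
    """
    return Counter(paths)


def estimate_cardinality(synopsis: Counter, query_path: Tuple[str, ...]) -> int:
    """Estimated result size of a simple path query such as /dblp/article/author."""
    return synopsis.get(query_path, 0)


# Example document structure, flattened into rooted element paths.
element_paths: List[Tuple[str, ...]] = [
    ("dblp",),
    ("dblp", "article"), ("dblp", "article", "author"),
    ("dblp", "article", "author"),
    ("dblp", "article"), ("dblp", "article", "author"),
]
syn = build_path_synopsis(element_paths)
print(estimate_cardinality(syn, ("dblp", "article", "author")))  # -> 3
```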
Most software systems are described in high-level modeling or programming languages. Their runtime behavior, however, is determined by the compiled code. For non-critical software, it may be sufficient to test the runtime behavior of the code. For safety-critical software, an additional requirement arises: the code must satisfy the formal specification that reflects the safety policy of the software consumer, and the software producer is obliged to demonstrate, using formal verification techniques, that the code is correct with respect to this specification. In this scenario, it is of great importance that static analyses and formal methods can be applied at the source code level, because this level is more abstract and better suited for such techniques. However, the results of the analyses and the verification can only be carried over to the machine code level if we can establish the correctness of the translation. Thus, compilation is a crucial step in the development of software systems, and formally verified translation correctness is essential to close the formalization chain from high-level formal methods down to the machine code level. In this thesis, I propose an approach to certifying compilers which closes this formalization chain by applying techniques from mathematical logic and programming language semantics. In the proposed approach, called foundational translation validation (FTV), the software producer implements an FTV system comprising a compiler and a specification and verification framework (SVF) which is implemented in higher-order logic (HOL). The most important part of the SVF is an explicit translation contract, which comprises the formalizations of the source and target languages of the compiler and the formalization of a binary translation correctness predicate corrTrans(S,T) for source programs S and target programs T. The formalizations of the languages are realized as deep embeddings in HOL, which makes it possible to declare a whole program in a formalized language as a HOL constant. The predicate formally specifies when T is considered a correct translation of S; its definition is explicitly based on the program semantics definitions provided by the translation contract. After the translation, the compiler converts the source and target programs into their syntactic representations as HOL constants, S and T, and generates a proof of corrTrans(S,T). We call a compiler which follows the FTV approach a proof-generating compiler. Our approach borrows the idea of representing programs in correctness proofs as logic constants from the foundational proof-carrying code (FPCC) approach. The features that distinguish our approach from other approaches to certifying compilers, such as proof-carrying code (PCC) and translation validation (TV), are the following: Firstly, the presence of an explicit translation contract formalized in HOL: PCC and TV do not formalize a translation contract explicitly; instead, they incorporate the operational semantics and the translation correctness criterion into translation validation tools at the programming language level. Secondly, the representation of programs in correctness proofs as logic constants: PCC and TV translate programs into representations as semantic abstractions that serve as inputs for translation validation tools.
Thirdly, certification of program transformation chains: unlike the TV approach, which certifies single program transformations, the FTV approach certifies whole chains of program transformations. This is possible because the translation contract provides, for all programming languages involved in the program transformation chain, definitions of program semantics functions which map programs to mathematical objects that are elements of a set with an (at least) partial order "<=". The proof then makes use of the fact that the relation "<=" is transitive. In this thesis, the feasibility of the FTV approach is demonstrated by the implementation of an FTV system. The system comprises a compiler front-end that certifies its optimization phase and an accompanying SVF implemented in the theorem prover Isabelle/HOL. The compiler front-end translates programs in a small C-like programming language, performs three optimizations (constant folding, dead assignment elimination, and loop invariant hoisting), and generates translation certificates in the form of Isabelle/HOL theories. The main focus of the thesis is on the description of the SVF and its translation verification techniques.
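A minimal sketch of how such a correctness predicate and its composition along a transformation chain might be stated, assuming semantics functions into a partially ordered set as described above; the direction of the refinement order and the notation are assumptions, and the exact Isabelle/HOL definitions in the thesis may differ.

```latex
% Illustrative sketch: correctness of a single translation step, and its
% composition over a chain via transitivity of the refinement order.
\[
  \mathit{corrTrans}(S, T) \;\equiv\;
  \mathit{sem}_{\mathrm{tgt}}(T) \,\le\, \mathit{sem}_{\mathrm{src}}(S)
\]
\[
  \mathit{corrTrans}(S, I) \;\wedge\; \mathit{corrTrans}(I, T)
  \;\Longrightarrow\; \mathit{corrTrans}(S, T)
  \qquad \text{(by transitivity of } \le\text{)}
\]
```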
This PhD thesis aims to find a global robot navigation strategy for rugged off-road terrain which is robust against inaccurate self-localization, scalable to large environments, and also cost-efficient, i.e., able to generate navigation paths which optimize a cost measure closely related to terrain traversability. To meet this goal, aspects of both metrical and topological navigation techniques are combined. A primarily topological map is extended with the previously lacking capability of cost-efficient path planning and map extension. Further innovations include a multi-dimensional cost measure for topological edges, a method to learn these costs based on live feedback from the robot, and a set of extrapolation methods to predict the traversability costs of untraversed edges. The thesis presents two sophisticated new image analysis techniques to optimize cost prediction based on the shape and appearance of the surrounding terrain. Experimental results indicate that the proposed global navigation system is indeed able to perform cost-efficient, large-scale path planning, while avoiding the need to maintain a fine-grained, global world model which would reduce the scalability of the approach.
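As a simplified illustration of cost-efficient planning on such a topological map, the sketch below runs Dijkstra's algorithm over a graph whose edges carry a scalarized traversability cost; the cost components and their weights are assumptions for the example, not the thesis's actual cost model.

```python
import heapq
from typing import Dict, List, Tuple


def edge_cost(distance: float, roughness: float, slope: float,
              weights: Tuple[float, float, float] = (1.0, 2.0, 3.0)) -> float:
    """Collapse a multi-dimensional traversability cost into one scalar.

    The weights are illustrative; a learned cost model would replace them.
    """
    w_d, w_r, w_s = weights
    return w_d * distance + w_r * roughness + w_s * slope


def plan(graph: Dict[str, List[Tuple[str, float]]], start: str, goal: str) -> List[str]:
    """Dijkstra's algorithm on a topological map with scalar edge costs."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, c in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + c, neighbor, path + [neighbor]))
    return []


# Example topological map: nodes are places, edges carry scalarized costs.
topo_map = {
    "A": [("B", edge_cost(10, 0.2, 0.0)), ("C", edge_cost(4, 0.8, 0.3))],
    "B": [("D", edge_cost(5, 0.1, 0.0))],
    "C": [("D", edge_cost(5, 0.9, 0.4))],
}
print(plan(topo_map, "A", "D"))   # path with the lowest accumulated cost
```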