## Department of Computer Science

### Filter

#### Year of publication

#### Document type

- Preprint (346)
- Dissertation (110)
- Scientific article (109)
- Master's thesis (45)
- Report (27)
- Study thesis (13)
- Conference publication (6)
- Bachelor's thesis (2)
- Habilitation (2)
- Part of a book (chapter) (1)

#### Keywords

- AG-RESY (64)
- PARO (26)
- Visualization (13)
- Case-Based Reasoning (12)
- CoMo-Kit (12)
- SKALP (12)
- META-AKAD (9)
- Case-Based Reasoning (8)
- HANDFLEX (8)
- Robotics (8)

- Advances in Theory and Applicability of Stochastic Network Calculus (2016)
- Stochastic Network Calculus (SNC) emerged in the late 90s from two branches: the theory of effective bandwidths and its predecessor, Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and to support their design and control. In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective of computing exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a maximal queue length of a system that is violated only with a known, small probability. This work includes several contributions towards the theory of SNC: (1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow deriving the performance bounds and simplifying complex scenarios. (2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach; in a second step, this solution is numerically optimized. A modular design allows users to implement their own functions, such as traffic models or analysis methods. (3) The problem of the initial modeling step is addressed with the development of a statistical network calculus. In many applications the properties of the included elements are largely unknown. To that end, assumptions about the underlying processes are made and backed by measurement-based statistical methods. This thesis presents a way to integrate possible modeling errors into the bounds of SNC. As a byproduct, a dynamic view of the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although valid for every time t, hold for only one time instance at a time; sample-path bounds are achieved only by using Boole's inequality. This thesis presents an alternative method by adapting the theory of extreme values. (5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago but remained open for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
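The flavor of MGF-based bound that such a tool optimizes can be illustrated with a minimal sketch (this is a generic textbook-style construction, not the thesis's implementation): for a discrete-time queue with i.i.d. exponential per-slot arrivals and a constant-rate server, a Chernoff-type backlog bound is minimized numerically over the free parameter theta.

```python
import math

def backlog_bound(lam, C, b):
    """Chernoff-type backlog bound for a discrete-time queue with i.i.d.
    exp(lam) arrivals per slot and constant service C per slot:
      P(q > b) <= min_theta  e^{-theta*b} / (1 - e^{theta*(rho(theta) - C)})
    where rho(theta) = ln(lam / (lam - theta)) / theta is the effective
    bandwidth of the arrivals, valid for theta in (0, lam)."""
    best = float("inf")
    n = 200
    for i in range(1, n):
        theta = lam * i / n                       # keep theta below lam
        rho = math.log(lam / (lam - theta)) / theta
        if rho >= C:                              # stability requirement
            continue
        geo = math.exp(theta * (rho - C))         # geometric-sum ratio < 1
        best = min(best, math.exp(-theta * b) / (1.0 - geo))
    return min(best, 1.0)                         # a probability bound <= 1
```

For example, with mean arrival rate 0.5 (lam = 2.0) at a unit-rate server, the bound decays geometrically in the backlog threshold b.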

- An Adaptive and Dynamic Simulation Framework for Incremental, Collaborative Classifier Fusion (2016)
- To investigate incremental collaborative classifier fusion techniques, we have developed a comprehensive simulation framework. It is highly flexible and customizable, and can be adapted to various settings and scenarios. The toolbox is realized as an extension to the NetLogo multi-agent-based simulation environment using its comprehensive Java API. The toolbox has been integrated in two different environments: one for demonstration purposes and another, modeled on persons using realistic motion data from Zurich, who communicate in an ad hoc fashion using mobile devices.
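One plausible fusion rule underlying such collaborative classifier fusion is a (weighted) average of the agents' class posteriors; the sketch below is illustrative only, and the function name and weighting scheme are not the framework's API.

```python
def fuse_predictions(posteriors, weights=None):
    """Fuse per-agent class-probability vectors by weighted averaging.
    posteriors: list of probability vectors, one per agent.
    Returns the fused distribution and the index of the winning class."""
    n = len(posteriors)
    k = len(posteriors[0])
    if weights is None:
        weights = [1.0 / n] * n          # equal trust in every agent
    total = sum(weights)
    fused = [sum(w * p[c] for w, p in zip(weights, posteriors)) / total
             for c in range(k)]
    return fused, max(range(k), key=fused.__getitem__)
```

An incremental variant would update the weights as agents observe each other's accuracy over time.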

- Symbolic Simulation of Mixed-Signal Systems with Extended Affine Arithmetic (2016)
- Mixed-signal systems combine analog circuits with digital hardware and software systems. A particular challenge is the sensitivity of analog parts to even small deviations in parameters or inputs. Parameters of circuits and systems such as process, voltage, and temperature are never accurate; we hence model them as uncertain values ('uncertainties'). Uncertain parameters and inputs can modify the dynamic behavior and lead to properties of the system that are not in specified ranges. For verification of mixed-signal systems, the analysis of the impact of uncertainties on the dynamical behavior plays a central role. Verification of mixed-signal systems is usually done by numerical simulation. A single numerical simulation run allows designers to verify single parameter values out of what are often ranges of uncertain values. Multi-run simulation techniques such as Monte Carlo simulation, corner-case simulation, and enhanced techniques such as importance sampling or design of experiments allow verifying ranges, at the cost of a high number of simulation runs and with the risk of not finding potential errors. Formal and symbolic approaches are an interesting alternative. Such methods allow a comprehensive verification. However, formal methods do not scale well with heterogeneity and complexity, and they do not support existing and established modeling languages. This complicates their integration into industrial design flows. In previous work on verification of mixed-signal systems, Affine Arithmetic is used for symbolic simulation. This allows combining the high coverage of formal methods with the ease of use and applicability of simulation. Affine Arithmetic computes the propagation of uncertainties through mostly linear analog circuits and DSP methods in an accurate way. However, Affine Arithmetic is currently only able to compute with contiguous regions and does not permit the representation of and computation with discrete behavior, e.g.
as introduced by software. This is a serious limitation: in mixed-signal systems, uncertainties in the analog part are often compensated by embedded software; hence, verification of system properties must consider both analog circuits and embedded software. The objective of this work is to provide an extension to Affine Arithmetic that allows symbolic computation also for digital hardware and software systems, and to demonstrate its applicability and scalability. Compared with related work and the state of the art, this thesis provides the following achievements: 1. The thesis introduces extended Affine Arithmetic Forms (XAAF) for the representation of branch and merge operations. 2. The thesis describes arithmetic and relational operations on XAAF, and reduces over-approximation by using an LP solver. 3. The thesis shows and discusses ways to integrate XAAF into existing modeling languages, in particular SystemC. This way, breaks in the design flow can be avoided. The applicability and scalability of the approach are demonstrated by symbolic simulation of a Delta-Sigma modulator and a PLL circuit of an IEEE 802.15.4 transceiver system.
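Plain (non-extended) Affine Arithmetic can be sketched in a few lines; the class below is illustrative and deliberately omits the XAAF branch/merge operations the thesis introduces. The key property is that shared noise symbols preserve correlations, so x - x collapses to exactly zero, which interval arithmetic cannot do.

```python
class Affine:
    """Minimal affine form: c + sum_i t[i] * eps_i with eps_i in [-1, 1].
    Linear operations are exact; interval bounds follow from |eps_i| <= 1."""
    def __init__(self, center, terms=None):
        self.c = center
        self.t = dict(terms or {})     # noise symbol id -> coefficient

    def __add__(self, other):
        if isinstance(other, Affine):
            t = dict(self.t)
            for k, v in other.t.items():
                t[k] = t.get(k, 0.0) + v   # shared symbols combine exactly
            return Affine(self.c + other.c, t)
        return Affine(self.c + other, self.t)

    def scale(self, a):
        return Affine(a * self.c, {k: a * v for k, v in self.t.items()})

    def bounds(self):
        r = sum(abs(v) for v in self.t.values())
        return (self.c - r, self.c + r)
```

For x = 1.0 + 0.1*eps1, the expression x + (-x) yields the exact bounds (0, 0), while interval arithmetic would report [-0.2, 0.2].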

- Verification Techniques for TSO-Relaxed Programs (2016)
- Knowing the extent to which we rely on technology, one might think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, the reasons why program correctness is difficult often come hand in hand with solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, the instructions of each of a program's concurrent components are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC's atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a memory model implemented in SPARC and Intel x86 multiprocessors that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. To be precise, due to TSO's unbounded store buffers, a program's semantics under TSO might be infinite. This, for example, turns reachability, a PSPACE-complete task under SC, into a non-primitive-recursive-complete problem under TSO. This thesis develops verification techniques targeting TSO-relaxed programs. To be precise, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs as well as state-reducing methods for speeding up such heuristics. As a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO's buffers along instruction sequences suggested by an oracle. The oracle's deciding characteristic is that if it returns the empty sequence, then the program's SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers.
Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space-reducing methods, including dynamic and Cartesian partial order reduction.
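The effect of TSO's store buffers can be made concrete with the classic store-buffer litmus test (T0: x=1; r0=y and T1: y=1; r1=x). The simulator below is a didactic sketch of TSO semantics, not the thesis's algorithm: stores enter a per-thread FIFO buffer, loads check the own buffer first (store forwarding) and then memory, and a separate flush action drains one buffered store to memory.

```python
from collections import deque

def run_tso(schedule):
    """Execute the store-buffer litmus test under TSO-like semantics.
    Threads: T0: x=1; r0=y    T1: y=1; r1=x
    schedule: list of ('w', tid), ('r', tid), ('flush', tid) actions."""
    mem = {"x": 0, "y": 0}
    buf = [deque(), deque()]            # per-thread FIFO store buffers
    var = [("x", "y"), ("y", "x")]      # (store target, load source) per thread
    regs = [None, None]
    for act, tid in schedule:
        if act == "w":
            buf[tid].append((var[tid][0], 1))          # store is buffered
        elif act == "r":
            src = var[tid][1]
            fwd = [v for (a, v) in buf[tid] if a == src]
            regs[tid] = fwd[-1] if fwd else mem[src]   # forward, else memory
        elif act == "flush":
            a, v = buf[tid].popleft()                  # oldest store hits memory
            mem[a] = v
    return regs

# Both loads run before either buffered store reaches memory:
# r0 = r1 = 0, an outcome impossible under SC.
relaxed = run_tso([("w", 0), ("w", 1), ("r", 0), ("r", 1),
                   ("flush", 0), ("flush", 1)])
```

Flushing each store immediately after it is issued recovers the SC outcomes, which is essentially what the auxiliary-variable simulation of buffers has to distinguish.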

- Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential (2016)
- Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7's runtime library adopted a new dual-pivot method that far outperforms the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient use of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort. In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization of choosing pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework. A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
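The dual-pivot scheme behind Java 7's sort can be sketched with a simplified Yaroslavskiy-style partition: two pivots p <= q split the array into elements < p, elements between the pivots, and elements > q. This is an illustration of the partitioning idea, not the thesis's parametric template algorithm.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Sort a in place using a simplified dual-pivot partition:
    [ < p | p <= . <= q | > q ], then recurse on the three segments."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]                 # the two pivots, p <= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                    # goes to the left segment
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:                  # goes to the right segment
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]         # pivots into final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a
```

Counting how often elements are inspected during such a partition gives exactly the "scanned elements" cost measure analyzed in the dissertation.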

- Integrating Security Concerns into Safety Analysis of Embedded Systems Using Component Fault Trees (2016)
- Nowadays, almost every newly developed system contains embedded systems for controlling system functions. An embedded system perceives its environment via sensors and interacts with it using actuators such as motors. For systems that might damage their environment through faulty behavior, a safety analysis is usually performed; security properties of embedded systems, in contrast, are usually not analyzed at all. New developments in the area of Industry 4.0 and the Internet of Things lead to more and more networking of embedded systems. This gives rise to new causes of system failures: vulnerabilities in software and communication components might be exploited by attackers to obtain control over a system, and targeted actions may bring a system into a critical state in which it might harm itself or its environment. Examples of such vulnerabilities, and also of successful attacks, have become known over the last few years. For this reason, in embedded systems safety as well as security has to be analyzed, at least insofar as security problems may cause safety-critical failures of system components. The goal of this thesis is to describe in one model how vulnerabilities from the security point of view might influence the safety of a system. The focus lies on safety analysis, so the safety analysis is extended to encompass security problems that may have an effect on the safety of a system. Component Fault Trees are very well suited to examining the causes of a failure and to finding failure scenarios composed of combinations of faults. A Component Fault Tree of an analyzed system is extended by additional Basic Events that may be caused by targeted attacks. Qualitative and quantitative analyses are extended to take the additional security events into account. Thereby, causes of failures that are based on safety as well as security problems may be found. Quantitative, or at least semi-quantitative, analyses allow evaluating security measures in more detail and justifying the need for them.
The approach was applied to several example systems: the safety chain of the off-road robot RAVON, an adaptive cruise control, a smart farming scenario, and a model of a generic infusion pump were analyzed. In all example analyses, additional failure causes were found that would not have been detected in traditional Component Fault Trees. The analyses also revealed failure scenarios that are caused solely by attacks and do not depend on failures of system components. These are especially critical scenarios, as they are not found by a classical safety analysis. The approach thus shows its additional benefit to a safety analysis, achieved by the application of established techniques with only little additional effort.
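The core idea of treating attacks as additional Basic Events can be sketched with a toy qualitative fault-tree analysis; the gate structure and event names below are invented for illustration and do not come from the thesis's case studies.

```python
def cut_sets(node):
    """Cut sets of a fault tree given as nested ('AND', ...)/('OR', ...)
    tuples with string leaves (basic events). Returns frozensets of events."""
    if isinstance(node, str):
        return {frozenset([node])}
    op, *children = node
    sets = [cut_sets(c) for c in children]
    if op == "OR":                         # any child suffices
        return set().union(*sets)
    acc = {frozenset()}                    # AND: combine one cut set per child
    for s in sets:
        acc = {a | b for a in acc for b in s}
    return acc

def minimize(sets):
    """Keep only the minimal cut sets (no proper subset is also a cut set)."""
    return {s for s in sets if not any(o < s for o in sets)}

# Hypothetical tree: the watchdog masks a sensor fault, but a spoofing
# attack (an added security Basic Event) bypasses the sensor the same way.
tree = ("OR",
        ("AND", "sensor_fault", "watchdog_fault"),
        ("AND", "attack:spoof_sensor", "watchdog_fault"),
        "actuator_fault")
mcs = minimize(cut_sets(tree))
```

The attack event shows up in a cut set just like a random fault, so established qualitative analysis finds attack-induced failure scenarios with no change to the algorithm.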

- Worst-Case Performance Analysis of Feed-Forward Networks – An Efficient and Accurate Network Calculus (2016)
- Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems: (a) the end-to-end delay data flows experience and (b) the buffer space required by a server to queue all incoming data. NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft. The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. 
These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers. This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and, most significantly, algNC is extended with optimization principles. These principles allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
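The kind of deterministic bound NC computes can be illustrated with the textbook single-server case (a standard result, not a contribution of this thesis): for a token-bucket arrival curve α(t) = b + r·t and a rate-latency service curve β(t) = R·max(0, t - T) with r <= R, the delay bound is the horizontal deviation T + b/R and the backlog bound is the vertical deviation b + r·T.

```python
def nc_bounds(b, r, R, T):
    """Deterministic NC bounds for a token-bucket flow (burst b, rate r)
    traversing a rate-latency server (rate R, latency T), r <= R:
      delay   <= T + b / R   (horizontal deviation of alpha and beta)
      backlog <= b + r * T   (vertical deviation of alpha and beta)"""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R, b + r * T
```

End-to-end analyses chain such per-server curves via (min,+) convolution; the algebraic branch of NC manipulates exactly these objects.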

- Assuring Functional Safety in Open Systems of Systems (2016)
- Interconnected, autonomously driving cars shall realize the vision of zero-accident, low-energy mobility in spite of a rapidly increasing traffic volume. Tightly interconnected medical devices and health care systems shall ensure the health of an aging society. And interconnected virtual power plants based on renewable energy sources shall ensure a clean energy supply in a society that consumes more energy than ever before. Such open systems of systems will play an essential role for economy and society. Open systems of systems dynamically connect to each other in order to collectively provide a superordinate functionality which could not be provided by a single system alone. The structure as well as the behavior of an open system of systems dynamically emerge at runtime, leading to very flexible solutions working under various environmental conditions. This flexibility and adaptivity of systems of systems are key to realizing the above-mentioned scenarios. On the other hand, however, this leads to uncertainties, since the emerging structure and behavior of a system of systems can hardly be anticipated at design time. This impedes the indispensable safety assessment of such systems in safety-critical application domains. Existing safety assurance approaches presume that a system is completely specified and configured prior to a safety assessment; therefore, they cannot be applied to open systems of systems. In consequence, safety assurance of open systems of systems could easily become a bottleneck impeding or even preventing the success of this promising new generation of embedded systems. For this reason, this thesis introduces an approach for the safety assurance of open systems of systems. To this end, we shift parts of the safety assurance lifecycle to runtime in order to dynamically assess the safety of the emerging system of systems.
We use so-called safety models at runtime to enable systems to assess the safety of an emerging system of systems themselves. This leads to a very flexible runtime safety assurance framework. The thesis first describes the fundamental knowledge on safety assurance and model-driven development, which are the indispensable prerequisites for defining safety models at runtime. Based on these fundamentals, we illustrate how we modularized and formalized conventional safety assurance techniques using model-based representations and analyses. Finally, we explain how we advanced these design-time safety models to safety models that can be used by the systems themselves at runtime, and how we use these safety models at runtime to create an efficient and flexible runtime safety assurance framework for open systems of systems.

- Interactive Visualizations Supporting Minimal Cut Set Analysis II (2016)
- The Context and Its Importance: In safety and reliability analysis, the information generated by Minimal Cut Set (MCS) analysis is large. The top-level event (TLE) at the root of the fault tree (FT) represents a hazardous state of the system being analyzed. MCS analysis helps in analyzing the fault tree qualitatively, and quantitatively when accompanied by quantitative measures. The information reveals the bottlenecks in the fault tree design, leading to the identification of weaknesses of the system being examined. Safety analysis (containing the MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans during system usage, causing injuries or even death. Minimal Cut Set analysis is performed using computers and generates a large amount of information; this phase is called MCS analysis I in this thesis. The information is then examined by analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible; this phase is called MCS analysis II in this thesis. The goal of my thesis was to develop interactive visualizations to support MCS analysis II of one fault tree. The Methodology: As safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field, and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities, I conducted multiple studies using different methods with different data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II. Thus, the following approach was taken in my thesis: 1. First, a triangulation of mixed methods and data sources was conducted. 2. Then, four novel interactive visualizations and one novel interaction widget were developed.
3. Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools), from the point of view of users and developers of the safety tools that perform MCS analysis I with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people using empirical strategies. The Spiral tool supports analysts with different color visions, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) requirements obtained from the triangulation and fills 37 out of 39 (95%) gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool, and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was rated higher than that of the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all equal order) was less than a minute, with 99.5% accuracy. The scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issues. The tool further provides: a 3D model view of the system being analyzed; the ability to change the coloring of MCSs according to the color vision of the user; selection of a BE (i.e., multi-selection of MCSs), so that the BEs' NoO can be observed together with their quality; two interaction speeds for panning and zooming in the MCS, BE, and model views; and an MCS tab, a BE tab, and a physical tab to support starting the analysis from the MCSs, the BEs, or the physical parts.
It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and it provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs. Verifications and Assessments: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the resulting Spiral visualization tool, also both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.
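The quantitative side of MCS analysis II, ranking cut sets by how critical they are, can be sketched as follows, assuming independent basic events. This is a generic illustration of cut-set ranking, not the Spiral tool's computation.

```python
def rank_mcs(mcs_list, probs):
    """Rank minimal cut sets by occurrence probability (product of the
    probabilities of their independent basic events) and bound the
    top-level event probability from above with the union bound."""
    def p(cs):
        out = 1.0
        for e in cs:
            out *= probs[e]
        return out
    ranked = sorted(mcs_list, key=p, reverse=True)
    tle_bound = min(1.0, sum(p(cs) for cs in mcs_list))
    return ranked, [p(cs) for cs in ranked], tle_bound

# Hypothetical cut sets and basic-event probabilities:
ranked, ps, top = rank_mcs([{"A", "B"}, {"C"}],
                           {"A": 0.1, "B": 0.2, "C": 0.001})
```

A visualization like the Spiral can then map these probabilities (and cut-set order) to position or color so that the most critical MCSs stand out.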

- Verification & Performance Measurement for Transport Protocol Parallel Routing of an AUTOSAR Gateway System (2016)
- A wide range of methods and techniques has been developed over the years to manage the increasing complexity of automotive electrical/electronic systems. Standardization is an example of such complexity-managing techniques; it aims to minimize costs, avoid compatibility problems, and improve the efficiency of development processes. A well-known and widely practiced standard in the automotive industry is AUTOSAR (Automotive Open System Architecture). AUTOSAR is a common standard among OEMs (Original Equipment Manufacturers), suppliers, and other involved companies. It was originally developed with the goal of simplifying the overall development and integration process of electrical/electronic artifacts from different functional domains, such as hardware, software, and vehicle communication. However, the AUTOSAR standard, in its current state, is not able to manage the problems in some areas of system development. The validation and optimization of system configurations, handled in this thesis, are examples of such areas in which the AUTOSAR standard so far offers no mature solutions. Generally, systems developed on the basis of AUTOSAR must be configured in a way that meets all defined requirements. In most cases, the number of configuration parameters and their possible settings in AUTOSAR systems is large, especially if the developed system is complex, with modules from various knowledge domains. The verification process can consume a lot of resources to test all possible combinations of configuration settings, and ideally to find the optimal configuration variant, since the number of test cases can be very high. This problem is referred to in the literature as the combinatorial explosion problem. Combinatorial testing is an active and promising area of functional testing that offers ideas for solving the combinatorial explosion problem.
The focus is on covering interaction errors by selecting a sample of system input parameters or configuration settings for test case generation. However, the industrial acceptance of combinatorial testing is still weak because of the scarcity of real industrial examples. This thesis attempts to fill this gap between industry and academia in the area of combinatorial testing and to emphasize the effectiveness of combinatorial testing in verifying complex configurable systems. The particular intention of the thesis is to provide a new, applicable approach to combinatorial testing to fight the combinatorial explosion problem that emerges during the verification and performance measurement of transport protocol parallel routing of an AUTOSAR gateway. The proposed approach has been validated and evaluated by means of two real industrial examples of AUTOSAR gateways with multiple communication buses and two different degrees of complexity, illustrating its applicability.
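The combinatorial-testing idea, covering every pair of parameter values with far fewer tests than the full Cartesian product, can be sketched with a simple greedy generator. This is a generic illustration of pairwise (2-way) testing, with made-up parameter names, not the thesis's approach.

```python
from itertools import combinations

def pairwise_tests(params):
    """Greedy pairwise (2-way) covering suite for {parameter: [values]}:
    each uncovered value pair is packed into a compatible existing test
    when possible, otherwise a new test is opened."""
    names = list(params)
    pairs = sorted(((a, va), (b, vb))
                   for a, b in combinations(names, 2)
                   for va in params[a] for vb in params[b])
    tests = []
    for (a, va), (b, vb) in pairs:
        if any(t.get(a) == va and t.get(b) == vb for t in tests):
            continue                       # pair already covered
        for t in tests:                    # extend a compatible partial test
            if t.get(a, va) == va and t.get(b, vb) == vb:
                t[a], t[b] = va, vb
                break
        else:
            tests.append({a: va, b: vb})   # open a new test
    for t in tests:                        # fix remaining free parameters
        for n in names:
            t.setdefault(n, params[n][0])
    return tests

suite = pairwise_tests({"os": ["linux", "win"],
                        "db": ["pg", "my"],
                        "net": ["tcp", "udp"]})
```

For these three binary parameters, the greedy suite covers all 12 value pairs with fewer tests than the 8 exhaustive combinations; the gap widens dramatically as parameters and values grow, which is exactly how combinatorial testing fights combinatorial explosion.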