Department of Electrical Engineering and Information Technology (Fachbereich Elektrotechnik und Informationstechnik)
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction (2017)
- Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the driver's fatigue/drowsiness state without compromising the driving experience still remains challenging. In general, the choice of input sensorial information is limited in state-of-the-art work. At the same time, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated behavioral and bioinformatical data perception of the driver. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources to increase robustness and reliability. This thesis presents an effective multi-sensor approach with novel features for driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive -- which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well. Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to classify driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels, and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interaction. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive, aimed at gaining market acceptance in the future, are discussed in the thesis as well.
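As a rough illustration of the fusion-plus-hierarchical-SVM classification idea summarized above, the following Python sketch cascades two binary SVMs over a concatenated multi-sensor feature vector. The feature names, the two-stage split of drowsiness levels, the toy data, and the use of scikit-learn are illustrative assumptions, not the DeCaDrive implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Feature-level fusion: concatenate per-sensor features into one vector
# (e.g., eye/blink statistics from the depth camera, steering-angle statistics, pulse rate).
def fuse(depth_feat, steering_feat, pulse_feat):
    return np.concatenate([depth_feat, steering_feat, pulse_feat], axis=1)

# Toy data: 300 samples with labels 0 = awake, 1 = drowsy, 2 = very drowsy (hypothetical levels).
X = fuse(rng.normal(size=(300, 6)), rng.normal(size=(300, 3)), rng.normal(size=(300, 1)))
y = rng.integers(0, 3, size=300)

# Hierarchical (cascaded) SVM: stage 1 separates awake vs. drowsy,
# stage 2 refines the drowsy samples into two levels.
stage1 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, (y > 0).astype(int))
drowsy = y > 0
stage2 = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[drowsy], y[drowsy])

def predict_level(x):
    x = np.atleast_2d(x)
    if stage1.predict(x)[0] == 0:
        return 0                       # awake
    return int(stage2.predict(x)[0])   # drowsiness level 1 or 2

print(predict_level(X[0]))
```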
Buffer analysis and message scheduling for real-time networks (2017)
- For many years, most distributed real-time systems employed data communication systems specially tailored to the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and, in some cases, not well understood. Driven mostly by ever decreasing costs, the application of such distributed real-time systems has drastically increased in recent years across different domains. Consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but the number of nodes per system has also grown drastically, which in turn increases their network bandwidth requirements. Furthermore, system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from federated to integrated modular architectures, again increasing the network bandwidth requirements. Ethernet (IEEE 802.3) [iee12] is a well established network standard. Furthermore, it is fast, easy to install, and its interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantees. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet with the temporal guarantees required by distributed real-time systems. Two of these protocols are Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple-priority traffic, and scheduling of TTEthernet networks. AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not too pessimistic, amount of memory for each buffer. The current AFDX standard allows the classification of network traffic with two priorities. Nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds on the buffer backlog of AFDX networks with multiple-priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method. In TTEthernet, nodes are synchronized, allowing message transmission at well defined points in time, computed off-line and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which should nevertheless be solved in reasonable time for industrial-size networks.
We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree. As the scheduler traverses the search tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf. If, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
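The search-tree formulation summarized in this abstract can be illustrated with a small, self-contained sketch. The Python code below performs a depth-first traversal with backtracking that assigns each virtual link a transmission offset on one shared link so that no two strictly periodic transmissions ever collide; the `Frame` data structure, the gcd-based collision test, and the example periods are illustrative assumptions, and the sketch deliberately ignores AFDX/TTEthernet details such as multi-hop routing and rate-constrained traffic.

```python
from dataclasses import dataclass
from math import gcd
from typing import Dict, List, Optional

@dataclass(frozen=True)
class Frame:
    name: str        # virtual-link identifier (illustrative)
    period_us: int   # transmission period in microseconds
    length_us: int   # transmission time on the shared link in microseconds

def collide(f1: Frame, o1: int, f2: Frame, o2: int) -> bool:
    """True if two strictly periodic transmissions ever overlap on the link."""
    g = gcd(f1.period_us, f2.period_us)
    d = (o2 - o1) % g
    # Classic periodic non-collision condition: l1 <= d <= g - l2.
    return d < f1.length_us or g - d < f2.length_us

def schedule(frames: List[Frame]) -> Optional[Dict[str, int]]:
    """Depth-first search with backtracking over candidate offsets."""
    assignment: Dict[str, int] = {}

    def place(i: int) -> bool:
        if i == len(frames):
            return True                                    # valid leaf: every link scheduled
        f = frames[i]
        for offset in range(f.period_us - f.length_us + 1):
            ok = all(not collide(frames[j], assignment[frames[j].name], f, offset)
                     for j in range(i))
            if ok:
                assignment[f.name] = offset                # extend the partial schedule
                if place(i + 1):
                    return True
                del assignment[f.name]                     # invalid subtree: backtrack
        return False

    return assignment if place(0) else None                # None: deemed infeasible

if __name__ == "__main__":
    virtual_links = [Frame("vl1", 500, 50), Frame("vl2", 1000, 100), Frame("vl3", 2000, 250)]
    print(schedule(virtual_links))
```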
Conception and First Implementation of Novel Sensory Signal Conditioning and Digital Conversion Electronics Based on Spiking Neuron Ensembles for Inherently Robust Processing in Aggressively Scaled Integration Technologies (2017)
- "In contemporary electronics 80% of a chip may perform digital functions but the 20% of analog functions may take 80% of the development time." [1]. Aggravating this, the demands on analog design are increasing with rapid technology scaling. Most designs have moved from the analog to the digital domain where possible; however, interacting with the environment will always require analog-to-digital data conversion. Adding to this problem, the number of sensors used in consumer and industry-related products is rapidly increasing. Designers of ADCs are dealing with this problem in several ways, the most important being the migration towards digital designs and time-domain techniques. Time-to-Digital Converters (TDC) are becoming increasingly popular for robust signal processing. Biological neurons make use of spikes, which carry spike-timing information and are not affected by the problems related to technology scaling. Neuromorphic ADCs still remain exotic, with few implementations in sub-micron technologies (see Table 2.7). Even among these few designs, the strengths of biological neurons are rarely exploited. In a previous work [2], LUCOS, a high-dynamic-range image sensor, the efficiency of spike processing was validated. The ideas from this work can be generalized to create a highly effective sensor signal conditioning system, which carries the promise of being robust to technology scaling. The goal of this work is to create a novel spiking neural ADC as a novel form of Multi-Sensor Signal Conditioning and Conversion system, which • will be able to interface with or be a part of a System on Chip with traditional analog or advanced digital components, • will degrade gracefully, • will be robust to noise- and jitter-related problems, • will be able to learn and adapt to static and dynamic errors, and • will be capable of self-repair, self-monitoring and self-calibration. Sensory systems in humans and other animals analyze the environment using several techniques. These techniques have evolved and been perfected to help the animal survive. Different animals specialize in different sense organs; however, the peripheral neural network architectures remain similar among various animal species, with few exceptions. While many biological sensing techniques exist, the most popularly used engineering techniques are based on intensity detection, frequency detection, and edge detection. These techniques are used with traditional analog processing (e.g., color sensors using filters) and with biological techniques (e.g., the LUCOS chip [2]). The localization capability of animals has never been fully utilized. One of the most important capabilities of animals, vertebrates or invertebrates, is the capability for localization. The object of localization can be a predator, prey, or a source of water or food. Since these are basic necessities for survival, they evolve much faster due to the survival of the fittest. In fact, localization capabilities, even where the sensors differ, have convergently evolved to use the same processing method (coincidence detection) in the peripheral neurons (e.g., the forked tongue of a snake, the antennae of a cockroach, acoustic localization in fishes and mammals). This convergent evolution increases the validity of the technique. In this work, localization concepts based on acoustic localization and tropotaxis are investigated and employed for the creation of novel ADCs. Unlike intensity and frequency detection, which are not linear (e.g.,
eyes saturate in bright light and lose color perception in low light), localization is inherently linear. This is mainly because the accurate localization of predator or prey can be the difference between life and death for an animal. Figure 1 visually explains the ADC concept proposed in this work, depicting the ADC as a localization problem alongside a visualization of the Jeffress model of sound localization, where the values t1 and t2 indicate the time taken from the source to the sensors s1 and s2, respectively. The concept has two parts: (1) Sensor-to-Spike(time) Conversion (SSC) and (2) Spike(time)-to-Digital Conversion (SDC). Both structures have been designed with models of biological neurons. The combination of these two structures is called SSDC. To efficiently implement the proposed concept, a comparison of several biological neural models is made and two models are shortlisted. Various synapse structures are also studied. From this study, the Leaky Integrate-and-Fire (LIF) neuron is chosen since it fulfills all the requirements of the proposed structure. The analog neuron and synapse designs from Indiveri et al. [3], [4] were taken, simulations were conducted using Cadence, and the behavioral equivalence with the biological counterpart was checked. The LIF neuron had features that were not required for the proposed approach. A simple LIF neuron, stripped of these features, was designed to be as fast as the technology allows. The SDC was designed with the neural building blocks, and the delays were implemented using buffer chains. This SDC converts an incoming Time Interval Code (TIC) to a sparse place code using coincidence detection. Coincidence detection is a property of spiking neurons that is the time-domain equivalent of a Gaussian kernel. The SDC is designed to have an online reconfigurable Gaussian kernel width, weight, threshold, and refractory period. The advantage of sparse place codes, which contain rank order coding, was described in our work [5]. A time-based winner-take-all circuit with memory, based on a previous work [6], was created for reading out the sparse place codes asynchronously. The SSC was initially designed with the same building blocks. Additionally, a differential synapse was designed for a better SSC. The sensor element considered was a Wheatstone full-bridge AMR sensor, the AFF755 from Sensitec GmbH. A reconfigurable version of the synapse was also designed for a more generic sensor interface. The first prototype chip, SSDCα, was designed with 257 modules of coincidence detectors realizing the SDC and the SSC. Since the spike times are the most important information, the spikes can be treated as digital pulses. This provides the capability for digital communication between analog modules and creates a lot of freedom for the use of digital processing between the discussed analog modules. This advantage is fully exploited in the design of SSDCα. Three SSC modules are multiplexed to the SDC. These SSC modules also provide outputs from the chip simultaneously. A rising-edge-detecting, fixed-pulse-width generation circuit is used to create pulses that are best suited for efficient performance of the SDC. The delay lines are made reconfigurable to increase robustness and to modify the span of the SDC. The readout technique used in the first prototype is a relatively slow but safe shift register. It is used to analyze the characteristics of the core work and will be replaced by faster alternatives discussed in the work. The area of the chip is 8.5 mm².
The chip supports sampling rates from DC to 150 kHz and resolutions from 8 to 13 bits, contains 28,200 transistors, and has been designed in a 350 nm CMOS technology from ams. The chip has been manufactured and tested with a sampling rate of 10 kHz and a theoretical resolution of 8 bits. However, due to the limitations of our Time-Interval Generator, we were able to confirm only 4 bits of resolution. The key novel contributions of this work are: • a neuromorphic implementation of AD conversion as a localization problem based on sound-localization and tropotaxis concepts found in nature; • coincidence detection with sparse place coding to enhance resolution; • graceful degradation without redundant elements and inherent robustness to noise, which helps with the scaling of technologies; • amenability to local adaptation and self-x features. The conceptual goals have all been fulfilled, with the exception of adaptation. The feasibility of local adaptation has been shown with promising results, and further investigation is required in future work. This thesis work acts as a baseline, paving the way for R&D in a new direction. The chip design used the 350 nm ams hitkit as a vehicle to prove the functionality of the core concept. The concept can easily be ported to present aggressively scaled technologies and future technologies.
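As a behavioral illustration of the SDC idea described above (converting a time interval into a sparse place code via anti-parallel delay lines and coincidence detection), the following Python sketch models an idealized Jeffress-style detector array; the number of channels, the delay step, and the coincidence window are illustrative assumptions and do not reflect the SSDCα circuit parameters.

```python
import numpy as np

def place_code(t1_us: float, t2_us: float,
               n_channels: int = 15,
               delay_step_us: float = 1.0,
               window_us: float = 0.5) -> np.ndarray:
    """Idealized Jeffress-style conversion of a time interval to a place code.

    The first spike travels along an increasing delay line, the second along a
    decreasing one; the channel at which the two delayed spikes coincide
    (within `window_us`) encodes the interval t1 - t2 as a one-hot place code.
    """
    delays_a = np.arange(n_channels) * delay_step_us   # delay line for spike 1
    delays_b = delays_a[::-1]                           # anti-parallel delay line for spike 2
    arrival_a = t1_us + delays_a
    arrival_b = t2_us + delays_b
    # Coincidence detection: the time-domain analogue of a Gaussian kernel,
    # reduced here to a rectangular window for simplicity.
    return (np.abs(arrival_a - arrival_b) <= window_us).astype(int)

if __name__ == "__main__":
    print(place_code(10.0, 10.0))   # zero interval: the centre channel fires
    print(place_code(14.0, 10.0))   # a 4 us interval shifts the firing channel
```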
Coverage of Compositional Property Sets for Hardware and Hardware-dependent Software in Formal System-on-Chip Verification (2017)
- Divide-and-conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is verified separately. Usually an SoC module is reactive: it interacts with its environment modules. This interaction is normally modeled by environment constraints, which are applied when verifying the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system; therefore the correctness of the environment constraints is very important for module verification. Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed. However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA). This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment. Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints. At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component of an SoC, since other software layers, such as the operating system, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems. The interactions between HW and HWDS are often reactive and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces. This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS. Furthermore, a method for checking the completeness of software properties specified in RSPL is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
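For context, the classical non-circular assume-guarantee rule that such compositional approaches build on can be written as follows; this is the textbook rule, not the new SVA-compatible rule proposed in the thesis:

```latex
% Classical assume-guarantee rule (textbook form): if M_2 guarantees the
% assumption A, and M_1 satisfies P under assumption A, then the
% composition M_1 || M_2 satisfies P.
\[
\frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]
```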
Formal Verification of Firmware-Based System-on-Chip Modules (2017)
- In current practices of system-on-chip (SoC) design, a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC's hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges, since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new verification methods based on a joint analysis of hardware and software. This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.
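To give a flavor of what a joint hardware/software analysis of the speed-independent kind involves, the sketch below explores all interleavings of a tiny firmware polling routine and a simple peripheral model, checking a handshake property by explicit-state enumeration. The register names, the peripheral behavior, the property, and the explicit-state technique are invented for illustration and do not correspond to the models or the equivalence-checking approach developed in the thesis.

```python
from typing import Dict, List, Tuple

# Joint state: (firmware program counter, peripheral state, register file).
State = Tuple[str, str, Tuple[Tuple[str, int], ...]]

def fw_step(pc: str, regs: Dict[str, int]) -> Tuple[str, Dict[str, int]]:
    """One step of a tiny polling routine (hypothetical firmware)."""
    if pc == "start":
        return "poll", {**regs, "CTRL": 1}                  # request a conversion
    if pc == "poll":
        return ("read", regs) if regs["STATUS"] == 1 else ("poll", regs)
    if pc == "read":
        return "done", {**regs, "RESULT": regs["DATA"]}     # latch the result
    return "done", regs

def hw_step(st: str, regs: Dict[str, int]) -> List[Tuple[str, Dict[str, int]]]:
    """One step of a toy peripheral: arbitrary delay, then DATA plus STATUS flag."""
    if st == "idle" and regs["CTRL"] == 1:
        return [("busy", regs)]
    if st == "busy":
        # Speed-independent: the peripheral may take any number of steps to finish.
        return [("busy", regs), ("ready", {**regs, "DATA": 42, "STATUS": 1})]
    return [(st, regs)]

def explore() -> bool:
    """Enumerate all HW/SW interleavings and check that the firmware only ever
    latches RESULT after the peripheral has actually produced the data."""
    init: State = ("start", "idle",
                   tuple(sorted({"CTRL": 0, "STATUS": 0, "DATA": 0, "RESULT": -1}.items())))
    seen, stack = {init}, [init]
    while stack:
        pc, st, regs_t = stack.pop()
        regs = dict(regs_t)
        if pc == "done" and regs["RESULT"] != 42:
            return False                                     # property violated
        npc, nregs = fw_step(pc, regs)
        successors = [(npc, st, nregs)]                      # firmware moves
        for nst, hregs in hw_step(st, regs):                 # peripheral moves
            successors.append((pc, nst, hregs))
        for spc, sst, sregs in successors:
            s: State = (spc, sst, tuple(sorted(sregs.items())))
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return True                                              # property holds in all interleavings

if __name__ == "__main__":
    print("handshake property holds:", explore())
```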