Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik
We study the sensor fault estimation and accommodation problems in a data-driven \(\mathcal{H}_\infty\) setting, leading to a data-driven sensor fault-tolerant control scheme. First, we formulate the fault estimation problem as a finite-horizon minimax \(\mathcal{H}_\infty\)-optimization problem in a data-driven setup, whose solution yields the fault estimate. The estimated fault is then used for output compensation. This compensated output and the experimental input are used to achieve certain control objectives in a data-driven \(\mathcal{H}_\infty\) setting. Next, the data-driven \(\mathcal{H}_\infty\) fault estimation and control problems are solved using a subspace predictor-based approach. Finally, the proposed algorithm is applied to the steering subsystem of a remotely operated underwater vehicle.
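As a hedged sketch (the exact cost functional used in this work may differ), a finite-horizon minimax \(\mathcal{H}_\infty\) fault-estimation problem is typically posed as
\[
\min_{\hat{f}} \; \max_{w \neq 0} \; \frac{\sum_{k=0}^{N} \lVert f_k - \hat{f}_k \rVert^2}{\sum_{k=0}^{N} \lVert w_k \rVert^2} \;\le\; \gamma^2,
\]
where \(f_k\) is the sensor fault at time \(k\), \(\hat{f}_k\) its estimate computed from measured input-output data, \(w_k\) the exogenous disturbance, and \(\gamma\) the prescribed attenuation level; all symbols here are generic placeholders rather than the paper's notation.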
This article proposes a new clock-dependent gain-scheduled dynamic output feedback controller for delayed linear parameter-varying (LPV) systems with piecewise constant parameters. The proposed controller guarantees ℒ2-performance. By employing a clock-dependent Lyapunov–Krasovskii functional, a sufficient condition for the existence of the controller is provided in terms of clock- and parameter-dependent linear matrix inequalities. A case study on output feedback control of delayed switched systems is also provided. To illustrate the efficacy of the result, the controller is applied to a practical VTOL helicopter model.
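For orientation, a delayed LPV plant of the kind treated here can be written generically (notation assumed, not taken from the article) as
\[
\dot{x}(t) = A(\rho(t))\,x(t) + A_d(\rho(t))\,x(t-h) + B(\rho(t))\,w(t), \qquad z(t) = C(\rho(t))\,x(t),
\]
with piecewise constant scheduling parameter \(\rho(t)\) and delay \(h\); the controller then has to guarantee the \(\mathcal{L}_2\)-gain bound \(\lVert z \rVert_2 \le \gamma \lVert w \rVert_2\) for all admissible parameter trajectories.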
The simulation of Dynamic Random Access Memories (DRAMs) on system level requires highly accurate models due to their complex timing and power behavior. However, conventional cycle-accurate DRAM subsystem models often become a bottleneck for the overall simulation speed. A promising alternative is offered by simulators based on Transaction Level Modeling, which can be fast and accurate at the same time. In this paper, we present DRAMSys4.0, which is, to the best of our knowledge, the fastest and most extensive open-source cycle-accurate DRAM simulation framework. DRAMSys4.0 features a novel software architecture that enables fast adaptation to different hardware controller implementations and new JEDEC standards. In addition, it already supports the latest standards DDR5 and LPDDR5. We explain how to apply optimization techniques for increased simulation speed while maintaining full temporal accuracy. Furthermore, we demonstrate the simulator's accuracy and analysis tools with two application examples. Finally, we provide a detailed investigation and comparison of the most prominent cycle-accurate open-source DRAM simulators with regard to their supported features, analysis capabilities and simulation speed.
In recent years, optical character recognition (OCR) systems have been used to digitally preserve historical archives. To transcribe historical archives into a machine-readable form, the documents are first scanned and then processed by an OCR system. In order to digitize documents without the need to remove them from where they are archived, it is valuable to have a portable device that combines scanning and OCR capabilities. Nowadays, many commercial and open-source document digitization techniques exist, which are optimized for contemporary documents. However, they fail to provide sufficient text recognition accuracy for historical documents due to the severe quality degradation of such documents. In contrast, the anyOCR system, which is designed mainly to digitize historical documents, provides high accuracy. However, this comes at the cost of high computational complexity, resulting in long runtime and high power consumption. To tackle these challenges, we propose a low-power, energy-efficient accelerator with real-time capabilities called iDocChip, a configurable hybrid hardware-software programmable System-on-Chip (SoC) based on anyOCR for digitizing historical documents. In this paper, we focus on one of the most crucial processing steps in the anyOCR system: text and image segmentation, which makes use of a multi-resolution morphology-based algorithm. Moreover, an optimized FPGA-based hybrid architecture of this anyOCR step is presented along with its optimized software implementations. We demonstrate our results on multiple embedded and general-purpose platforms with respect to runtime and power consumption. The resulting hardware accelerator outperforms the existing anyOCR by 6.2×, while achieving 207× higher energy efficiency and maintaining its high accuracy.
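The paper itself targets an FPGA implementation; purely for illustration, the following Python sketch shows the general idea behind multi-resolution morphology-based text/image segmentation: large solid blobs that survive coarse-scale morphology are treated as image regions, and the remaining foreground as text. All thresholds, scales and function names are illustrative assumptions, not the anyOCR algorithm.

```python
import numpy as np
from scipy import ndimage

def segment_text_image(page, fg_threshold=0.5, scale=16):
    """Toy multi-resolution morphology-based text/image segmentation.

    page: 2D float array in [0, 1], dark foreground on light background.
    Returns boolean masks (text_mask, image_mask).
    """
    # Binarize: foreground pixels are darker than the threshold.
    fg = page < fg_threshold

    # Coarse scale: downsample, close gaps, then keep only large solid
    # blobs, which correspond to photos/halftones rather than text.
    coarse = fg[::scale, ::scale]
    blobs = ndimage.binary_closing(coarse, structure=np.ones((3, 3)))
    blobs = ndimage.binary_opening(blobs, structure=np.ones((3, 3)))

    # Upsample the blob mask back to full resolution.
    image_mask = np.kron(blobs.astype(np.uint8),
                         np.ones((scale, scale), dtype=np.uint8)).astype(bool)
    image_mask = image_mask[: page.shape[0], : page.shape[1]]

    # Foreground that is not part of an image blob is treated as text.
    text_mask = fg & ~image_mask
    return text_mask, image_mask
```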
Recurrent Neural Networks, in particular One-dimensional and Multidimensional Long Short-Term Memory (1D-LSTM and MD-LSTM), have achieved state-of-the-art classification accuracy in many applications such as machine translation, image caption generation, handwritten text recognition, medical imaging and many more. However, high classification accuracy comes at high compute, storage, and memory bandwidth requirements, which make their deployment challenging, especially for energy-constrained platforms such as portable devices. In comparison to CNNs, few investigations exist on efficient hardware implementations for 1D-LSTM, especially under energy constraints, and there is no research publication on hardware architectures for MD-LSTM. In this article, we present two novel architectures for LSTM inference: a hardware architecture for MD-LSTM, and a DRAM-based Processing-in-Memory (DRAM-PIM) hardware architecture for 1D-LSTM. We present for the first time a hardware architecture for MD-LSTM, and show a trade-off analysis for accuracy and hardware cost for various precisions. We implement the new architecture as an FPGA-based accelerator that outperforms an NVIDIA K80 GPU implementation in terms of runtime by up to 84× and energy efficiency by up to 1238× on a challenging dataset for historical document image binarization from the DIBCO 2017 contest, and on the well-known MNIST dataset for handwritten digit recognition. Our accelerator demonstrates the highest accuracy and comparable throughput in comparison to state-of-the-art FPGA-based implementations of multilayer perceptrons for the MNIST dataset. Furthermore, we present a new DRAM-PIM architecture for 1D-LSTM targeting energy-efficient compute platforms such as portable devices. The DRAM-PIM architecture integrates the computation units in close proximity to the DRAM cells in order to maximize data parallelism and energy efficiency. The proposed DRAM-PIM design is 16.19× more energy efficient compared to the FPGA implementation, with a total chip area overhead of 18% compared to a commodity 8 Gb DRAM chip. Our experiments show that the DRAM-PIM implementation delivers a throughput of 1309.16 GOp/s for an optical character recognition application.
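As background, a 2D MD-LSTM cell differs from a 1D cell in that it receives hidden and cell states from two predecessors (left and top neighbour) and uses one forget gate per direction. A minimal numpy sketch of one cell update (generic textbook notation, not the precision-optimized hardware datapath from the article) could look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mdlstm2d_step(x, h_left, c_left, h_up, c_up, W, U_l, U_u, b):
    """One 2D MD-LSTM cell update at pixel (i, j).

    x: input vector; (h_left, c_left) and (h_up, c_up): states of the
    left and top neighbours. W, U_l, U_u project the input and the two
    recurrent states onto the 5 gates (input, forget-left, forget-up,
    cell candidate, output), each of size n.
    """
    n = h_left.shape[0]
    z = W @ x + U_l @ h_left + U_u @ h_up + b   # shape (5n,)
    i  = sigmoid(z[0 * n:1 * n])                # input gate
    fl = sigmoid(z[1 * n:2 * n])                # forget gate, left predecessor
    fu = sigmoid(z[2 * n:3 * n])                # forget gate, top predecessor
    g  = np.tanh(z[3 * n:4 * n])                # candidate cell state
    o  = sigmoid(z[4 * n:5 * n])                # output gate
    c = fl * c_left + fu * c_up + i * g         # one forget term per direction
    h = o * np.tanh(c)
    return h, c
```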
Machine Learning (ML) is expected to become an integrated part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge of their input data, which in many scenarios is delivered to them through wireless links. Transmission of a massive amount of such input data can impose a huge burden on the mobile network. On the other hand, it is known that ML algorithms can tolerate different levels of distortion on their input components while the quality of their predictions remains unaffected. Therefore, utilization of the conventional approaches implies a waste of radio resources, since they target an exact reconstruction of transmitted data, i.e., the input of ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of final ML outputs instead of such syntax-based reconstruction of transmitted inputs. To this end, we quantify the semantics or relevancy of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies less relevancy. A lower relevance level is translated into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundations for efficiently supplying ML models with their required data in the inference phase, while wireless resources are employed efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of a ML algorithm. The ML model is seen as a black box, which is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is hence easily adjustable for application to other problems. An extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevancy level of each input attribute changes instantaneously. In another extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. The novel quality-of-service metric takes outputs of ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
The introduced relevance-based framework is comprehensively investigated by considering various case studies, real and synthetic data, regression and classification, different estimators for the KLD, and various ML models and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of the relevance-based compression. In all of the simulations, the relevance-based solutions deliver the best outcome in terms of the carefully chosen key performance indicators. In most of them, significant gains are also achieved compared to the conventional techniques, motivating further research on the subject.
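To make the core idea concrete, the following toy Python sketch assigns each input feature the smallest bit depth whose average output KLD against the full-precision predictions stays below a tolerance, so that less relevant features receive fewer bits. All names, the uniform quantizer, and the tolerance-based search are illustrative assumptions, not the dissertation's actual allocation algorithm.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two categorical
    distributions (e.g., softmax outputs of a classifier)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def quantize(x, bits, lo, hi):
    """Uniform scalar quantization of x to 2**bits levels on [lo, hi]."""
    levels = 2 ** bits - 1
    x = np.clip(x, lo, hi)
    return lo + np.round((x - lo) / (hi - lo) * levels) * (hi - lo) / levels

def allocate_bits(model, X, lo, hi, candidate_bits=(2, 4, 6, 8), tol=1e-2):
    """Per-feature bit allocation: pick the smallest bit depth whose
    average output KLD stays below `tol`. `model(X)` is assumed to be a
    black-box classifier returning one probability vector per row of X."""
    ref = model(X)                      # full-precision reference outputs
    bits = []
    for j in range(X.shape[1]):
        chosen = candidate_bits[-1]     # fall back to the highest depth
        for b in candidate_bits:
            Xq = X.copy()
            Xq[:, j] = quantize(X[:, j], b, lo[j], hi[j])
            d = np.mean([kld(p, q) for p, q in zip(ref, model(Xq))])
            if d < tol:                 # distortion tolerated -> less relevant
                chosen = b
                break
        bits.append(chosen)
    return bits
```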
Sensing location information in indoor scenes requires high accuracy and is a challenging task, mainly because of multipath and NLoS (non-line-of-sight) propagation. GNSS signals do not penetrate well into indoor environments, so satellite-based navigation and positioning systems cannot be used for indoor positioning. Other technologies have been suggested for indoor usage, among them Wi-Fi (802.11) and 5G NR (New Radio). The primary aim of this study is to discuss the advantages and drawbacks of 5G and Wi-Fi positioning techniques for indoor localization.
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels, where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively.
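For intuition only, the following toy Python sketch injects a single-event upset into a register of an abstract straight-line program and compares the result against the fault-free run; the thesis performs this analysis formally on program netlists rather than by simulation, and all names here are hypothetical.

```python
def run(program, regs, fault=None):
    """Execute a toy register-transfer program. `program` is a list of
    (dst, op, src1, src2) tuples; `fault=(step, reg, bit)` flips one bit
    of register `reg` before executing the given step (a single-event upset)."""
    regs = dict(regs)
    for step, (dst, op, a, b) in enumerate(program):
        if fault is not None and fault[0] == step:
            _, reg, bit = fault
            regs[reg] ^= 1 << bit          # inject the SEU
        if op == "add":
            regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF
        elif op == "xor":
            regs[dst] = regs[a] ^ regs[b]
    return regs

program = [("r2", "add", "r0", "r1"),      # r2 = r0 + r1
           ("r3", "xor", "r2", "r2")]      # r3 = r2 ^ r2 (always 0)
init = {"r0": 5, "r1": 7, "r2": 0, "r3": 0}

golden = run(program, init)
faulty = run(program, init, fault=(1, "r2", 3))  # flip bit 3 of r2 before step 1
# The upset is masked by the application: r2 ^ r2 == 0 for any value of r2,
# so with respect to output r3 this fault is "application-redundant".
print(golden["r3"] == faulty["r3"])  # True -> the fault has no effect on r3
```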
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as exploiting certain properties that are characteristic for embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138 k lines of C code.
Model Identification of Power Electronic Systems for Interaction Studies and Small-Signal Analysis (2023)
The rapid growth in offshore wind brings various challenges to power system research and industry, such as the development of multi-terminal multi-vendor HVDC grids. To ensure interoperability in those power-converter-dominated systems, suitable models are needed to efficiently perform stability and interaction studies. With state-space based small-signal methods, stability and interaction phenomena can be assessed globally for a complex system, yet detailed models are needed. However, in multi-vendor projects most likely only black-boxed models will be available to protect intellectual property, so that identification techniques are necessary to obtain suitable models. This thesis contributes to the research activities on state-space model identification of black-boxed power electronic systems.
In the first part of the thesis, a method was developed and tested in which the matrix elements of linearized state-space models were fitted in dependency of the operating point, based on input sweeps performed on the model of a grid-forming power converter controlled as a virtual synchronous machine. It was discussed how changes in multiple inputs can be approximated by the superposition of the individual input dependencies, and a fully operating-point-dependent state-space model approximation was created. The results were validated in time- and frequency-domain analyses. It was found that the method can provide a good approximation, especially for the operating range around the default operating point.
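A minimal numpy sketch of this first-part idea, under the simplifying assumptions of a scalar operating point and a polynomial fit (the thesis's actual fitting procedure and model structure may differ), could look like:

```python
import numpy as np

def fit_parametric_statespace(ops, A_samples, degree=2):
    """Fit every entry of A(p) as a polynomial in the scalar operating
    point p, from linearized state matrices A_samples[k] taken at ops[k]."""
    ops = np.asarray(ops)
    n = A_samples[0].shape[0]
    coeffs = np.empty((n, n, degree + 1))
    for i in range(n):
        for j in range(n):
            vals = [A[i, j] for A in A_samples]
            coeffs[i, j] = np.polyfit(ops, vals, degree)
    return coeffs

def eval_A(coeffs, p):
    """Evaluate the fitted, operating-point-dependent A(p) at point p."""
    n = coeffs.shape[0]
    return np.array([[np.polyval(coeffs[i, j], p) for j in range(n)]
                     for i in range(n)])

# Example: linearizations of a toy system at three operating points.
ops = [0.2, 0.5, 0.8]
A_samples = [np.array([[-1.0 - p, 0.5],
                       [0.0, -2.0 + p ** 2]]) for p in ops]
coeffs = fit_parametric_statespace(ops, A_samples)
A_mid = eval_A(coeffs, 0.5)   # interpolated state matrix at p = 0.5
```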
In the second part, identification of a power electronic system was performed based on measurement data generated experimentally from a low-voltage laboratory system. A sequence of input perturbations was applied to the laboratory system, and frequency response data was calculated from the corresponding output perturbations. The data served as the basis for model identification with N4SID and a soon-to-be-published vector fitting method. The identified models were validated by a visual inspection of the transfer function and by comparison of the calculated step responses to the step responses measured in the laboratory. It was found that the treatment of incomplete data sets, the generation of substitute data and the impact of time delays on the identification might be worth further investigation.
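As a simplified illustration of the measurement-based workflow (single-input single-output, no averaging or windowing; the laboratory procedure in the thesis is more involved), frequency response data can be computed from input/output perturbations as the empirical transfer function estimate \(H(f) = Y(f)/U(f)\), which then feeds identification routines such as N4SID or vector fitting:

```python
import numpy as np

def empirical_frd(u, y, fs):
    """Empirical transfer function estimate H(f) = Y(f)/U(f) from an
    input perturbation u and the measured output y, sampled at fs Hz.
    A single-shot SISO estimate; in practice one would average over
    repeated experiments and window the records."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    f = np.fft.rfftfreq(len(u), d=1.0 / fs)
    keep = np.abs(U) > 1e-6 * np.max(np.abs(U))   # avoid dividing by ~0
    return f[keep], Y[keep] / U[keep]

# Example: first-order low-pass excited by a random perturbation.
fs, N = 1000.0, 4096
u = np.random.randn(N)
y = np.zeros(N)
a = np.exp(-2 * np.pi * 50.0 / fs)                # pole at roughly 50 Hz
for k in range(1, N):
    y[k] = a * y[k - 1] + (1 - a) * u[k]
f, H = empirical_frd(u, y, fs)                    # frequency response data
```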
This work provides a valuable contribution to the research on state-space model identification of black-boxed power electronic systems. It points out challenges and presents promising approaches to enable state-space based methods for stability analysis and interaction studies in future multi-terminal multi-vendor HVDC grids.
Augmented (AR), Virtual (VR) and Mixed Reality (MR) are on their way into everyday life. The recent emergence of consumer-friendly hardware to access this technology has greatly benefited the community. Research and application examples for AR, VR and MR can be found in many fields, such as medicine, sports, cultural heritage, teleworking, entertainment and gaming. Although this technology has been around for decades, immersive applications using it are still in their infancy. As manufacturers increase accessibility to these technologies by introducing consumer-grade hardware with natural input modalities such as eye gaze or hand tracking, new opportunities but also problems and challenges arise. Researchers strive to develop and investigate new techniques for dynamic content creation or novel interaction techniques. It has yet to be found out which interactions users can perform intuitively. A major issue is that the possibilities for easy prototyping and rapid testing of new interaction techniques are limited and largely unexplored.
In this thesis, different solutions are proposed to improve gesture-based interaction in immersive environments by introducing gesture authoring tools and developing novel applications. Specifically, hand gestures should be made more accessible to people outside this specialised domain. First, a survey is introduced which explores one of the largest and most promising application scenarios for AR, VR and MR, namely remote collaboration. Based on the results of this survey, the thesis focuses on several important issues to consider when developing and creating applications. At its core, the thesis is about rapid prototyping based on panorama images and the use of hand gestures for interaction. Therefore, a technique to create immersive applications with panorama-based virtual environments, including hand gestures, is introduced. A framework to rapidly design, prototype, implement, and create arbitrary one-handed gestures is presented. Based on a user study, the potential of the framework as well as the efficacy and usability of hand gestures are investigated. Next, the potential of hand gestures for locomotion tasks in VR is investigated. Additionally, it is analysed how lay people can adapt to the use of hand-tracking technology in this context. Lastly, the use of hand gestures for grasping virtual objects is explored and compared to state-of-the-art techniques. Within this thesis, different input modalities and techniques are compared in terms of usability, effort, accuracy, task completion time, user rating, and naturalness.