Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik
Machine Learning (ML) is expected to become an integral part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge from their input data, which in many scenarios is delivered to them over wireless links. Transmission of such massive amounts of input data can impose a huge burden on the mobile network. On the other hand, ML algorithms are known to tolerate different levels of distortion on their input components while the quality of their predictions remains unaffected. Conventional approaches therefore waste radio resources, since they target an exact reconstruction of the transmitted data, i.e., the input of the ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of the final ML outputs instead of such syntax-based reconstruction of the transmitted inputs. To this end, we quantify the semantics or relevance of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies lower relevance. A lower relevance level translates into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundation for supporting ML models with their required data in the inference phase while employing wireless resources efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of an ML algorithm. The ML model is treated as a black box that is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is hence easily adjustable for application to other problems. An extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevance level of each input attribute changes instantaneously. In a further extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. A novel quality-of-service metric takes the outputs of the ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
The introduced relevance-based framework is comprehensively investigated across various case studies: real and synthetic data, regression and classification, different estimators for the KLD, and various ML models and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of relevance-based compression. In all simulations, the relevance-based solutions deliver the best outcome in terms of carefully chosen key performance indicators. In most of them, significant gains are also achieved compared to conventional techniques, motivating further research on the subject.
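The core idea of relevance quantification can be illustrated with a small, purely illustrative Python sketch. Everything concrete here is a stand-in invented for illustration, not the thesis's actual estimators or codebooks: a fixed linear classifier plays the "black box" ML model, a uniform quantizer plays the codebook, and the bit budget is split roughly in proportion to the output KLD caused by coarsely quantizing each input component.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def quantize(x, bits):
    """Uniform quantizer on [-1, 1] -- a stand-in for a real codebook."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

# A fixed "black box" linear classifier standing in for the trained ML model.
W = rng.normal(size=(4, 3))
x = rng.uniform(-1, 1, size=4)   # input components from remote sources
p_ref = softmax(x @ W)           # model output with exact inputs

# Relevance of component i: output KLD when only component i is coarsely quantized.
relevance = np.empty(4)
for i in range(4):
    xq = x.copy()
    xq[i] = quantize(x[i], bits=2)
    relevance[i] = kld(p_ref, softmax(xq @ W))

# Bit allocation: split the budget roughly in proportion to relevance.
budget = 16
bits = np.floor(budget * relevance / relevance.sum()).astype(int)
for i in np.argsort(relevance)[::-1][: budget - bits.sum()]:
    bits[i] += 1   # hand leftover bits to the most relevant components
assert bits.sum() == budget
```

Components whose quantization barely moves the model output receive few bits, which is exactly the "lower relevance means fewer radio resources" principle stated above.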
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels, where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively.
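The two fault models can be made concrete with a toy Python sketch. The register value and the safety check are hypothetical examples, not taken from the thesis: a single-event upset is a transient bit flip, a stuck-at fault forces a bit to a fixed level, and whether a fault matters depends on the software consuming the register, which is the application-dependent view argued for above.

```python
def seu(value, bit):
    """Single-event upset: transiently flip one bit of a register value."""
    return value ^ (1 << bit)

def stuck_at(value, bit, level):
    """Stuck-at fault: force one bit of the register to 0 or 1."""
    return value | (1 << bit) if level else value & ~(1 << bit)

def safety_function(reg):
    """Hypothetical software consuming a program-visible register."""
    return reg > 100          # e.g. an overspeed check

reg = 100                     # 0b1100100

# A fault whose effect propagates to the safety function's output:
assert safety_function(seu(reg, 7)) != safety_function(reg)   # 100 -> 228

# Faults with no observable effect on this application ("application-redundant"):
assert safety_function(seu(reg, 5)) == safety_function(reg)   # 100 -> 68
assert stuck_at(reg, 2, 1) == reg   # bit 2 is already 1: fault is masked
```

A fault list pruned this way is exactly what the deductive and inductive analyses above produce at scale, with formal guarantees instead of example-based simulation.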
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as exploiting certain properties that are characteristic of embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138k lines of C code.
Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) are on their way into everyday life. The recent emergence of consumer-friendly hardware to access this technology has greatly benefited the community. Research and application examples for AR, VR and MR can be found in many fields, such as medicine, sports, cultural heritage, teleworking, entertainment and gaming. Although this technology has been around for decades, immersive applications using it are still in their infancy. As manufacturers increase accessibility to these technologies by introducing consumer-grade hardware with natural input modalities such as eye gaze or hand tracking, new opportunities but also problems and challenges arise. Researchers strive to develop and investigate new techniques for dynamic content creation and novel interaction techniques. It remains to be determined which interactions users can perform intuitively. A major issue is that the possibilities for easy prototyping and rapid testing of new interaction techniques are limited and largely unexplored.
In this thesis, different solutions are proposed to improve gesture-based interaction in immersive environments by introducing gesture authoring tools and developing novel applications. Specifically, hand gestures should be made more accessible to people outside this specialised domain. First, a survey is presented which explores one of the largest and most promising application scenarios for AR, VR and MR, namely remote collaboration. Based on the results of this survey, the thesis focuses on several important issues to consider when developing and creating applications. At its core, the thesis is about rapid prototyping based on panorama images and the use of hand gestures for interaction. Therefore, a technique to create immersive applications with panorama-based virtual environments, including hand gestures, is introduced. A framework to rapidly design, prototype, implement, and create arbitrary one-handed gestures is presented. Based on a user study, the potential of the framework as well as the efficacy and usability of hand gestures are investigated. Next, the potential of hand gestures for locomotion tasks in VR is investigated. Additionally, it is analysed how laypeople adapt to the use of hand tracking technology in this context. Lastly, the use of hand gestures for grasping virtual objects is explored and compared to state-of-the-art techniques. Throughout this thesis, different input modalities and techniques are compared in terms of usability, effort, accuracy, task completion time, user rating, and naturalness.
This thesis focuses on the development and analysis of Stochastic Model Predictive Control (SMPC) strategies for both distributed stochastic systems and centralized stochastic systems with partially known distributional information. The first part deals with the development of distributed SMPC schemes that can be synthesized and operated in a fully distributed manner, establishing rigorous theoretical guarantees such as recursive feasibility, stability, and closed-loop chance constraint satisfaction. We study several control problems of practical interest, such as the output-feedback regulation problem and the state-feedback tracking problem under additive stochastic noise, as well as the regulation problem under multiplicative noise. In the second part of this thesis, a novel research topic known as distributionally robust MPC (DR-MPC) is explored, which enhances the applicability of SMPC to real-world problems. DR-MPC is advantageous in that it requires only partial knowledge in the form of samples of the uncertainty, which is usually available in practical scenarios, whereas SMPC mandates exact knowledge of the (unknown) distributional information. We investigate different so-called ambiguity sets to immunize the DR-MPC optimization problem against sampling inaccuracies, leading to tractable optimization problems with strong theoretical guarantees. Altogether, both parts provide rigorous theoretical guarantees together with practical design procedures demonstrated by numerical examples, which constitute the main contributions of this thesis.
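The sample-based setting that DR-MPC targets can be illustrated with a deliberately minimal one-step Python sketch. The scalar system, the noise data set, and the constraint are invented for illustration, and the per-sample enforcement below is a naive scenario-style device, not the thesis's ambiguity-set formulation: the point is only that the controller knows the noise solely through samples and tightens the constraint accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar system x+ = a*x + u + w; the distribution of w is unknown and is
# available only through a data set of samples (the DR-MPC setting).
a, x, x_max = 0.9, 1.0, 1.5
w_samples = rng.normal(0.0, 0.1, size=200)   # hypothetical noise data

# Naive sample-based tightening: require x+ <= x_max for every sampled
# realization, which upper-bounds the admissible one-step input.
u_bound = x_max - a * x - w_samples.max()

u = min(0.5, u_bound)   # nominal input, clipped to the tightened bound
assert a * x + u + w_samples.max() <= x_max + 1e-12
```

In the thesis's DR-MPC formulations, an ambiguity set of distributions built around the empirical one replaces this raw per-sample enforcement, additionally guarding against the samples under-representing the true noise distribution.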
The fifth generation (5G) of wireless networks promises new advances, such as a huge increase in mobile data rates, a plunge in communication latency, and an increase in the quality of experience perceived by users, to cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy neighbor problem in a vRAN deployment.
The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders.
Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open up, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed.
End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthaul/backhaul links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenue is maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects.
To close this gap, machine learning-aided solutions are quickly arising as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, is a key problem underlying the network slicing paradigm and is addressed in this thesis.
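The forecasting-plus-allocation loop described above can be sketched in a few lines of Python. Everything concrete is an invented stand-in: the slice names, the traffic traces, the exponential-smoothing predictor (standing in for the learned forecasting models the thesis studies), and the simple proportional sharing rule.

```python
import numpy as np

# Hypothetical per-slice traffic histories (normalized load samples).
history = {
    "eMBB":  np.array([0.6, 0.7, 0.8, 0.9]),
    "URLLC": np.array([0.2, 0.2, 0.3, 0.2]),
    "mMTC":  np.array([0.1, 0.2, 0.1, 0.2]),
}

CAPACITY = 100  # resource units available in the shared infrastructure

def forecast(x, alpha=0.5):
    """Exponential smoothing, standing in for a learned traffic predictor."""
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

# Forecast next-step demand per slice, then share capacity proportionally.
demand = {k: forecast(v) for k, v in history.items()}
total = sum(demand.values())
alloc = {k: round(CAPACITY * d / total) for k, d in demand.items()}
print(alloc)
```

A production orchestrator would additionally enforce per-slice SLAs, admission control and overbooking policies; the sketch only shows how a demand forecast drives the split of shared resources among slices.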
This thesis addresses the need for a new approach to hardware sign-off verification which guarantees the security of processors at the Register Transfer Level (RTL). To this end, we introduce a formal definition of security with respect to microarchitectural vulnerabilities, formulated as a hardware property.
We present a formal proof methodology based on Unique Program Execution Checking (UPEC) which can be used to systematically detect all vulnerabilities to transient execution attacks in RTL designs. UPEC does not exploit any a priori knowledge of known attacks and can therefore also detect vulnerabilities based on new, so far unknown, types of channels. This is demonstrated by the new attack scenarios discovered in our experiments with UPEC. UPEC operates on a verification model consisting of two identical instances of the SoC design under verification. The SoC instances in the model execute the same program; the only difference between the two instances is the content of the protected part of the memory, i.e., the secret.
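The two-instance construction can be mimicked in a toy Python sketch. The miniature "processor" and its leaky variant are invented for illustration, and real UPEC is a formal property check on the RTL rather than a simulation: two copies run the same program and differ only in the secret, and any difference in observable state exposes a leak.

```python
def run(secret, leaky=False):
    """Toy core: returns the trace of a program-visible register."""
    trace, reg = [], 0
    for t in range(8):
        reg = (reg + t) & 0xFF          # computation independent of the secret
        if leaky and (secret >> t) & 1:
            reg ^= 1                    # secret-dependent state change (a leak)
        trace.append(reg)
    return trace

# Secure design: identical observable behavior for any two secrets.
assert run(secret=0x00) == run(secret=0xFF)

# Leaky design: the traces diverge, so information about the secret has
# entered program-visible state -- exactly the condition UPEC checks for.
assert run(secret=0x00, leaky=True) != run(secret=0xFF, leaky=True)
```

Because UPEC proves the equivalence of the two instances exhaustively rather than testing sampled secrets, it needs no list of known attack patterns.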
With the growing support for features such as hardware virtualization, tied to the boost in hardware capacity, embedded systems are now able to regroup many software components on the same hardware platform to save costs. This evolution has raised system complexity, motivating the introduction of Mixed-Criticality Systems (MCS) to consolidate applications of different criticality levels on one hardware target: in critical environments such as an aircraft or a factory floor, high-criticality functions are now regrouped with other non-critical functions. A key requirement of such systems is to guarantee that the execution of a critical function cannot be compromised by other functions, especially by ones with a lower criticality level. In this context, runtime intrusion detection contributes to securing system execution against intentional misbehavior in critical applications.
Host Intrusion Detection Systems (HIDS) have been an active field of computer security research for more than two decades. The goal of a HIDS is to detect traces of malicious activity in the execution of monitored software at runtime. While this topic has been extensively investigated for general-purpose computers, its application in the specific context of embedded MCS is comparatively more recent.
We extend the domain of HIDS research towards HIDS deployment in industrial embedded MCS. To this end, we provide a review of state-of-the-art HIDS solutions and evaluate the main obstacles to their deployment in an industrial embedded MCS.
We present several HIDS approaches based on solutions for general-purpose computers, which we apply to protect the execution of an application running on an embedded MCS. We introduce two main HIDS methods to protect the execution of a given user-level application. Because of possible criticality constraints on the monitored application, such as industrial certification aspects, our solutions support transparent monitoring; i.e., they do not require application instrumentation. On the one hand, we propose a machine learning (ML) based framework to monitor low-level system events transparently. On the other hand, we introduce a hardware-assisted control-flow monitoring framework to deploy control-flow integrity monitoring without instrumentation of the monitored application.
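The hardware-assisted control-flow monitoring idea can be sketched in Python. The addresses and edges below are hypothetical, and in the actual framework the branch events come from processor trace hardware rather than from instrumented code: a whitelist of valid control-flow edges is built offline, and observed branches are checked against it at runtime.

```python
# Transparent control-flow integrity monitoring: the monitor holds a
# whitelist of valid control-flow edges (extracted offline from the
# binary) and checks observed branch events at runtime, without
# instrumenting the monitored application.

VALID_EDGES = {               # hypothetical (source addr, target addr) edges
    (0x1000, 0x1020),
    (0x1020, 0x1040),
    (0x1040, 0x1000),
}

def check_trace(branch_events):
    """Return the first edge violating control-flow integrity, or None."""
    for edge in branch_events:
        if edge not in VALID_EDGES:
            return edge
    return None

benign = [(0x1000, 0x1020), (0x1020, 0x1040)]
attack = [(0x1000, 0x1020), (0x1020, 0x2000)]   # control-flow diversion

assert check_trace(benign) is None
assert check_trace(attack) == (0x1020, 0x2000)
```

Keeping the check outside the application is what preserves the certification status of the monitored software, since its binary is never modified.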
We provide a methodology to integrate and evaluate HIDS mechanisms in an embedded MCS. We implement and evaluate our monitoring solutions on a practical industrial platform, using a generic hardware system and SYSGO's industrial real-time hypervisor.
Multicore processors and Multiprocessor System-on-Chip (MPSoC) have become essential in Real-Time Systems (RTS) and Mixed-Criticality Systems (MCS) because of their additional computing capabilities that help reduce Size, Weight, and Power (SWaP), required wiring, and associated costs. In distributed systems, a single shared multicore or MPSoC node executes several applications, possibly of different criticality levels. However, there is interference between applications due to contention in shared resources such as CPU core, cache, memory, and network.
Existing allocation and scheduling methods for RTS and MCS often rely on implicit assumptions of the constant availability of individual resources, especially the CPU, to provide guaranteed progress of tasks. Most existing approaches aim to resolve contention in only a specific shared resource or a set of specific shared resources. Moreover, they handle a limited number of events such as task arrivals and task completions.
In distributed RTS and MCS with several nodes, each having multiple resources, such assumptions become hard to establish whenever the applications, the resource availability, or the system configuration changes. Thus, it is challenging to meet end-to-end constraints by considering each node, resource, or application individually.
Such RTS and MCS need global resource management to coordinate and dynamically adapt the system-wide allocation of resources. In addition, the resource management dynamically adapts applications to the changing availability of resources and maintains a system-wide (global) view of resources and applications.
The overall aim of global resource management is twofold.
Firstly, it must ensure real-time applications meet their end-to-end deadlines even in the presence of faults and changing environmental conditions. Secondly, it must provide efficient resource utilization to improve the Quality of Service (QoS) of co-executing Best-Effort (BE) (or non-critical) applications.
A single fault in the global resource management can render it useless. In the worst case, the resource management can make faulty decisions leading to deadline misses in real-time applications. With the advent of Industry 4.0, cloud computing, and the Internet of Things (IoT), it has become essential to combine stringent real-time constraints and reliability requirements with the need for an open-world assumption, and to ensure that the global resource management does not become an inviting target for attackers.
In this dissertation, we propose a domain-independent global resource management framework for distributed RTS and MCS consisting of heterogeneous nodes based on multicore processors or MPSoC. We initially developed the framework with the French Aerospace Lab -- ONERA and Thales Research & Technology during the DREAMS project and later extended it during SECREDAS and other internal projects. Unlike previous resource management frameworks for RTS and MCS, we consider both safety and security for the framework itself.
To enable real-time industries to use cloud computing and enter a new market segment -- real-time operation as a cloud-based service -- we propose a Real-Time-Cloud (RT-Cloud) based on global resource management for hosting RTS and MCS.
Finally, we present a mixed-criticality avionics use case to evaluate the capabilities of the global resource management framework in handling permanent core failures and temporary overload conditions, and a railway use case to motivate the use of RT-Cloud with global resource management.
The nondestructive testing of multilayered materials is increasingly applied in both scientific and industrial fields. In particular, developments in millimeter wave and terahertz technology open up novel measurement applications, which benefit from the nonionizing properties of this frequency range. One example is the noncontact inspection of layer thicknesses. Frequently used measuring and analysis methods lead to a resolution limit that is determined by the bandwidth of the setup. This thesis analyzes the reliable evaluation of thinner layer thicknesses using model-based signal processing.
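The gain from model-based evaluation can be illustrated with a small Python sketch. The sinc pulse, the echo parameters, and the grid-search fit are illustrative stand-ins for the thesis's actual signal model and estimators: two layer echoes separated by far less than the classical bandwidth-limited resolution are recovered by least-squares fitting a parametric two-echo model to the measurement.

```python
import numpy as np

rng = np.random.default_rng(2)

# Band-limited measurement pulse (sinc of bandwidth B): classical
# bandwidth-determined resolution is on the order of 1/B.
B = 1.0
t = np.linspace(-10, 10, 2001)
g = lambda tau: np.sinc(B * (t - tau))

# Two layer echoes separated by far less than the 1/B resolution limit.
t1, t2, a1, a2 = 0.0, 0.3, 1.0, 0.7
y = a1 * g(t1) + a2 * g(t2) + 0.001 * rng.normal(size=t.size)

# Model-based estimation: least-squares fit of the two-echo model over a
# grid of candidate delays (amplitudes solved in closed form per candidate).
best = (np.inf, None)
for d1 in np.arange(-0.5, 0.5, 0.02):
    for d2 in np.arange(d1 + 0.02, 1.0, 0.02):
        G = np.column_stack([g(d1), g(d2)])
        amps, *_ = np.linalg.lstsq(G, y, rcond=None)
        err = np.sum((y - G @ amps) ** 2)
        if err < best[0]:
            best = (err, (d1, d2))

d1_hat, d2_hat = best[1]
print(round(d2_hat - d1_hat, 2))   # separation estimate, well below ~1/B
```

Exploiting a parametric model of the layered structure in this way is what allows thicknesses below the bandwidth-determined limit to be evaluated, at the cost of requiring the model to describe the measurement accurately.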