Modern society relies on convenience services and mobile communication. Cloud computing is the current trend to make data and applications available at any time on every device. Data centers concentrate computation and storage at central locations, while claiming to be green thanks to optimized maintenance and increased energy efficiency. The key enabler for this evolution is the microelectronics industry. The trend towards power-efficient mobile devices has forced this industry to change its design dogma to: "keep data locally and reduce data communication whenever possible". Therefore we ask: is cloud computing repeating the aberrations of its enabling industry?
3D integration of solid-state memories and logic, as demonstrated by the Hybrid Memory Cube (HMC), offers major opportunities for revisiting near-memory computation and gives new hope to mitigate the power and performance losses caused by the "memory wall". In this paper we present the first exploration steps towards the design of the Smart Memory Cube (SMC), a new Processor-in-Memory (PIM) architecture that enhances the capabilities of the logic base (LoB) in HMC. An accurate simulation environment has been developed, along with a full-featured software stack. All offloading and dynamic overheads caused by the operating system, cache coherence, and memory management are considered as well. Benchmarking results demonstrate up to 2X performance improvement in comparison with the host SoC, and around 1.5X against a similar host-side accelerator. Moreover, by scaling down the voltage and frequency of the PIM's processor it is possible to reduce energy by around 70% and 55% in comparison with the host and the accelerator, respectively.
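For intuition, the reported energy savings are in line with the first-order CMOS scaling relations (a textbook approximation given here for orientation, not a formula from the paper):

\[
E_{\mathrm{dyn}} \propto C_{\mathrm{eff}} V_{DD}^{2}, \qquad P_{\mathrm{dyn}} \propto C_{\mathrm{eff}} V_{DD}^{2} f .
\]

For example, lowering the supply from 1.0 V to 0.6 V (illustrative values) cuts the dynamic energy per operation by roughly \(1 - 0.6^2 = 64\%\), which, together with reduced leakage at the lower voltage and frequency, is of the same order as the reported savings of around 70%.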
Beamforming performs spatial filtering to preserve the signal from given directions of interest while suppressing interfering signals and noise arriving from other directions.
For example, a microphone array equipped with a beamforming algorithm can preserve the sound coming from a target speaker and suppress sounds coming from other speakers.
Beamformers have been widely used in many applications such as radar, sonar, communication, and acoustic systems.
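To make the idea concrete, the following is a minimal narrowband delay-and-sum beamformer sketch for a uniform linear array; the geometry, parameter names, and values are illustrative assumptions and not taken from the dissertation.

    import numpy as np

    def delay_and_sum(snapshots, look_angle_deg, spacing, freq, c=343.0):
        """snapshots: (num_sensors, num_samples) complex baseband data."""
        num_sensors = snapshots.shape[0]
        theta = np.deg2rad(look_angle_deg)
        # Steering vector: phase shifts that align the wavefront from the look direction.
        n = np.arange(num_sensors)
        steering = np.exp(-1j * 2 * np.pi * freq * n * spacing * np.sin(theta) / c)
        # Data-independent weights (uniform taper) steered to the look direction.
        weights = steering / num_sensors
        return weights.conj() @ snapshots  # beamformed output, shape (num_samples,)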
A data-independent beamformer is a beamformer whose coefficients are independent of the sensor signals. It normally requires less computation, since the coefficients are computed only once. Moreover, because its coefficients are derived from well-defined statistical models, it produces fewer artifacts. The major drawback of this beamforming class is its limited interference suppression.
On the other hand, an adaptive beamformer is a beamformer whose coefficients depend on, or adapt to, the sensor signals. It is capable of suppressing interference better than a data-independent beamformer, but it suffers from either excessive distortion of the signal of interest or reduced noise reduction when the update rate of the coefficients is not synchronized with the rate at which the noise model changes. Besides, it is computationally intensive, since the coefficients need to be updated frequently.
In acoustic applications, the bandwidth of the signals of interest extends over several octaves, yet the characteristic of the beamformer is expected to remain invariant over the bandwidth of interest. This can be achieved by so-called broadband beamforming.
Since the beam pattern of conventional beamformers depends on the frequency of the signal, it is common to use a dense and uniform array for broadband beamforming in order to guarantee several essential properties simultaneously, such as frequency independence, low sensitivity to white noise, a high directivity factor, or a high front-to-back ratio. In this dissertation, we mainly focus on sparse arrays, whose aim is to use fewer sensors while simultaneously ensuring several important performance properties of the beamformer.
In the past few decades, many design methodologies for sparse arrays have been proposed and applied in a variety of practical applications.
Although good results were presented, some restrictions remain: the number of sensors is still large, the designed beam pattern must be fixed, the steering ability is limited, and the computational complexity is high.
In this work, two novel approaches for sparse array design, taking a hypothesized uniform array as a basis, are proposed: one for data-independent beamformers and another for adaptive beamformers.
As an underlying component of the proposed methods, the dissertation introduces some new insights into the uniform array with broadband beamforming. In this context, a function formulating the relation between the sensor coefficients and the beam pattern over frequency is proposed. The function essentially consists of a coordinate transform and an inverse Fourier transform.
Furthermore, from the perspective of the bijectivity of this function and of broadband beamforming, we propose lower and upper bounds for the inter-sensor distance. Within these bounds, the function is bijective and can be utilized to design a uniform array with broadband beamforming.
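As an illustration of such a relation, written here in the standard far-field form for a uniform linear array (not necessarily the exact formulation used in the dissertation), the beam pattern is the spatial Fourier transform of the weights:

\[
B(f,\theta) \;=\; \sum_{n=0}^{N-1} w_n(f)\, e^{-\mathrm{j}\, 2\pi f n d \sin\theta / c},
\]

where \(d\) is the sensor spacing and \(c\) the propagation speed. With the substitution \(u = (fd/c)\sin\theta\) this is a discrete-time Fourier transform of the weight sequence; it is invertible on one period only if spatial aliasing is avoided, which is the usual sampling argument behind an upper bound of the form \(d \le c/(2 f_{\max})\). The dissertation's exact bounds may differ.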
For data-independent beamforming, many studies have focused on optimization procedures to seek the sparse array deployment. This dissertation presents an alternative approach to determine the sensor locations.
Starting with the weight spectrum of a virtual dense and uniform array, several techniques are applied: the weight spectrum is analyzed to determine the critical sensors, a clustering technique groups the sensors, and a representative sensor is selected for each group.
Once the sparse array deployment is specified, an optimization technique is applied to find the beamformer coefficients. The proposed method saves computation time in the design phase, and its beamformer performance outperforms other state-of-the-art methods in several respects, such as higher white noise gain, a higher directivity factor, or better frequency independence.
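A toy sketch of the selection idea just described (weight-magnitude analysis, clustering, representative selection), using k-means as a stand-in for the clustering step; all thresholds, names, and the choice of k-means are assumptions for illustration only.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_sparse_array(positions, weights, num_sensors):
        """positions: (N, dims) coordinates of a dense virtual array,
        weights: (N,) reference beamformer coefficients, num_sensors: target count."""
        importance = np.abs(weights)
        # Keep only sensors whose weight magnitude is non-negligible ("critical" sensors).
        critical = importance > 0.05 * importance.max()
        pos_c, imp_c = positions[critical], importance[critical]
        # Group the critical sensors spatially, weighting the clustering by importance
        # (assumes more critical sensors than clusters).
        km = KMeans(n_clusters=num_sensors, n_init=10, random_state=0)
        labels = km.fit_predict(pos_c, sample_weight=imp_c)
        # Pick the most important sensor of each group as its representative.
        chosen = [np.flatnonzero(labels == k)[np.argmax(imp_c[labels == k])]
                  for k in range(num_sensors)]
        return pos_c[np.array(chosen)]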
For adaptive beamforming, the dissertation attempts to design a versatile sparse microphone array that can be used for different beam patterns.
Furthermore, we aim to reduce the number of microphones in the sparse array while ensuring that its performance can continue to compete with a highly dense and uniform array in terms of broadband beamforming.
An irregular microphone array on a planar surface with the maximum number of distinct distances between the microphones is proposed.
It is demonstrated that the irregular microphone array is well suited to sparse recovery algorithms, which are used to solve underdetermined systems subject to sparse solutions. Here, the sparse solution is the sound sources' spatial spectrum, which needs to be reconstructed from the microphone signals.
From the reconstructed sound sources, a method for array interpolation is presented to obtain an interpolated dense and uniform microphone array that performs well with broadband beamforming.
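A compact sketch of a sparse-recovery step under common assumptions: a narrowband frequency-domain snapshot y, a steering dictionary A over a grid of candidate directions, and orthogonal matching pursuit as a representative greedy solver. The dissertation's actual recovery algorithm and signal model may differ.

    import numpy as np

    def omp(A, y, num_sources):
        """Greedy recovery of a sparse spatial spectrum s with y ~ A @ s."""
        residual, support = y.copy(), []
        for _ in range(num_sources):
            # Pick the direction whose steering vector best matches the residual.
            support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
            As = A[:, support]
            coef, *_ = np.linalg.lstsq(As, y, rcond=None)
            residual = y - As @ coef
        spectrum = np.zeros(A.shape[1], dtype=complex)
        spectrum[support] = coef
        return spectrum  # non-zero entries mark the estimated source directions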
In addition, two alternative approaches for the generalized sidelobe canceler (GSC) beamformer are proposed: one is a data-independent beamforming variant, the other an adaptive beamforming variant. The GSC decomposes beamforming into two paths: the upper path preserves the desired signal, while the lower path suppresses it. From a beam pattern viewpoint, we propose an improvement to the GSC: instead of using a blocking matrix in the lower path to suppress the desired signal, we design a beamformer that places nulls at the look direction and at some other directions. Both approaches are simple beamforming design methods and can be applied to either sparse or uniform arrays.
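In its textbook form (given here for orientation; the dissertation's modification replaces the blocking matrix with a null-steering beamformer), the GSC output combines the two paths as

\[
y(k) \;=\; \mathbf{w}_q^{\mathrm H}\,\mathbf{x}(k)\;-\;\mathbf{w}_a^{\mathrm H}\,\mathbf{B}\,\mathbf{x}(k),
\]

where \(\mathbf{w}_q\) is the fixed quiescent beamformer of the upper path, \(\mathbf{B}\) is the blocking matrix that removes the desired signal in the lower path, and \(\mathbf{w}_a\) is adapted to cancel the residual interference.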
Lastly, a new technique for direction-of-arrival (DOA) estimation based on the annihilating filter is also presented in this dissertation.
It builds on the idea of finite rate of innovation for reconstructing a stream of Diracs: an annihilating (locator) filter is identified from a few uniform samples, and the positions of the Diracs are then related to the roots of the filter. Here, an annihilating filter is a filter that suppresses the signal, since its coefficient vector is orthogonal to every frame of the signal.
In the DOA context, we regard an active source as a Dirac associated with its arrival direction, so the directions of the active sources can be derived from the roots of the annihilating filter. However, the DOAs obtained by this method are sensitive to noise, and the number of detectable DOAs is limited.
To address these issues, the dissertation proposes a robust method to design the annihilating filter and to increase the degrees of freedom of the measurement system (so that more active sources can be detected) by observing multiple data frames.
Furthermore, we analyze the DOA performance under diffuse noise and propose an extended multiple signal classification algorithm that takes diffuse noise into account. Simulations show that, in the case of diffuse noise, only the extended multiple signal classification algorithm estimates the DOAs properly.
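A minimal numerical sketch of the annihilating-filter idea for a uniform linear array (single snapshot, noiseless, illustrative parameter values; not the robust multi-frame method of the dissertation): the filter is found as a null-space vector of a convolution matrix and its roots are mapped to directions.

    import numpy as np

    def doa_annihilating_filter(x, spacing, wavelength, num_sources):
        """x: (N,) uniform-array snapshot modeled as a sum of complex exponentials."""
        L = num_sources
        # Convolution (Toeplitz) system whose null space holds the annihilating filter.
        rows = [x[i:i + L + 1][::-1] for i in range(len(x) - L)]
        T = np.array(rows)
        _, _, Vh = np.linalg.svd(T)
        h = Vh[-1].conj()              # filter coefficients (length L+1)
        roots = np.roots(h)            # roots lie near e^{j 2*pi*d*sin(theta)/lambda}
        sin_theta = np.angle(roots) * wavelength / (2 * np.pi * spacing)
        return np.degrees(np.arcsin(np.clip(sin_theta, -1, 1)))

    # Example: two sources at -20 and 35 degrees, half-wavelength spacing, 8 sensors.
    wl, d = 1.0, 0.5
    n = np.arange(8)
    x = sum(np.exp(1j * 2 * np.pi * d / wl * n * np.sin(np.radians(a))) for a in (-20, 35))
    print(doa_annihilating_filter(x, d, wl, num_sources=2))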
A counter-based read circuit tolerant to process variation for low-voltage operating STT-MRAM
(2016)
The capacity of embedded memory on LSIs has kept increasing. It is important to reduce the leakage power of embedded memory for low-power LSIs. In fact, the ITRS predicts that the leakage power in embedded memory will account for 40% of all power consumption by 2024 [1]. A spin transfer torque magneto-resistive random access memory (STT-MRAM) is promising for use as non-volatile memory to reduce the leakage power. It is useful because it can function at low voltages and has a lifetime of over 10^16 write cycles [2]. In addition, the STT-MRAM technology has a smaller bit cell than an SRAM, which makes the STT-MRAM suitable for use in high-density products [3-7]. The STT-MRAM uses a magnetic tunnel junction (MTJ). The MTJ has two states: a parallel state and an anti-parallel state, in which the magnetization directions of the MTJ's layers are the same or different, respectively. This pair of directions determines the MTJ's magneto-resistance value. The state of the MTJ can be changed by the current flowing through it. The MTJ resistance becomes low in the parallel state and high in the anti-parallel state. The MTJ can potentially operate at less than 0.4 V [8]. On the other hand, it is difficult to design peripheral circuitry for an STT-MRAM array at such a low voltage. In this paper, we propose a counter-based read circuit that functions at 0.4 V and is tolerant of process variation and temperature fluctuation.
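As a purely behavioral illustration of the counter-based principle (not the circuit proposed in the paper), the read decision can be modeled as counting clock cycles until an RC discharge through the cell resistance crosses a threshold and comparing that count with a reference; component values, thresholds, and the clock frequency below are invented for illustration.

    import math

    def counter_read(r_mtj, r_ref=4.5e3, c=100e-12, vdd=0.4, vth=0.2, f_clk=100e6):
        """Return 1 (anti-parallel) if the cell discharges more slowly than the reference."""
        def cycles(r):
            t = r * c * math.log(vdd / vth)    # time to discharge from vdd to vth
            return math.ceil(t * f_clk)
        # Anti-parallel (high-resistance) cells discharge more slowly -> larger count.
        return 1 if cycles(r_mtj) > cycles(r_ref) else 0

    # Illustrative resistances: ~3 kOhm parallel, ~6 kOhm anti-parallel.
    print(counter_read(3.0e3), counter_read(6.0e3))    # -> 0 1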
For many years, real-time task models have expressed timing constraints as execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among scenarios that all yield correct behavior, and maximizing this utility improves the resource utilization.
For example, target sensitive applications have a target point where execution results in maximized utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The intensity of the utility decay accounts for the importance of the application.
Examples of such applications include multimedia and control; multimedia applications are very popular nowadays, and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides for easy abstractions to express the timing constraints of target sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications.
We consider jobs as objects in a pendulum system, and the target points as the central point.
Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems are helpful to fill in the gap between application requirements and theoretical abstractions used in task models.
For instance, the so-called nature algorithms use key elements of physical processes to form the basis of an optimization algorithm.
Examples include ant colony optimization and simulated annealing, applied to problems such as the knapsack and traveling salesman problems.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target sensitive RT applications must account for timing constraints, and the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs, and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs and dominates the complexity of the problem; finding the optimum solution is NP-hard.
We show the efficacy of our approach through simulation results and three target sensitive RT applications enhanced with the gravitational task model.
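One way to read the equilibrium analogy (an illustrative formalization assuming a quadratic utility decay, which the abstract does not specify) is that, for a fixed execution order, the best compromise minimizes the importance-weighted squared deviations from the target points:

\[
\min_{t_1,\dots,t_n} \; \sum_{i=1}^{n} m_i \,(t_i - \tau_i)^2
\quad \text{s.t.} \quad t_{i+1} \ge t_i + C_i, \;\; e_i \le t_i \le d_i - C_i,
\]

where \(\tau_i\) is the target point, \(m_i\) the "mass" (importance) of job \(i\), \(C_i\) its execution time, and \([e_i, d_i]\) its execution window; as in the pendulum analogy, heavier jobs pull the schedule closer to their targets.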
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)
Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of a driver without compromising the driving experience still remains challenging. In general, the choice of input sensorial information is limited in the state-of-the-art work. On the other hand, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated perception of behavioral and biosignal data of the driver. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for increased robustness/reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well. Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels, and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system computation power are required to address this issue. Possible new features and development considerations for DeCaDrive aiming to gain market acceptance in the future are discussed in the thesis as well.
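A toy sketch of the decision-level idea, hierarchical SVM classification on fused multi-sensor features, using scikit-learn; the feature names, the two-stage split (awake vs. drowsy, then drowsiness level), and the synthetic data are illustrative assumptions and not the DeCaDrive implementation.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Fused feature vectors, e.g. [blink rate, eyelid closure, steering variance, pulse rate].
    X = rng.normal(size=(300, 4))
    levels = rng.integers(0, 3, size=300)        # 0 = awake, 1 = mildly drowsy, 2 = drowsy

    # Stage 1: awake vs. any drowsiness.
    stage1 = SVC(kernel="rbf").fit(X, (levels > 0).astype(int))
    # Stage 2: grade the drowsiness level, trained on the drowsy samples only.
    drowsy = levels > 0
    stage2 = SVC(kernel="rbf").fit(X[drowsy], levels[drowsy])

    def classify(x):
        x = x.reshape(1, -1)
        if stage1.predict(x)[0] == 0:
            return 0                              # awake
        return int(stage2.predict(x)[0])          # drowsiness level 1 or 2

    print(classify(X[0]))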
The authors explore the intrinsic trade-off in a DRAM between the power consumption (due to refresh) and the reliability. Their unique measurement platform allows tailoring to the design constraints depending on whether power consumption, performance or reliability has the highest design priority. Furthermore, the authors show how this measurement platform can be used for reverse engineering the internal structure of DRAMs and how this knowledge can be used to improve DRAM’s reliability.
This study presents an energy-efficient ultra-low-voltage standard-cell-based memory in 28nm FD-SOI. The storage element (standard-cell latch) is replaced with a full-custom designed latch with 50% less area. Error-free operation is demonstrated down to 450 mV @ 9 MHz. By utilizing body bias (BB) @ VDD = 0.5 V, performance spans from 20 MHz @ BB = 0 V to 110 MHz @ BB = 1 V.
At present, the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique developed for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI, and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA. Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas to further improve the system performance compared to single antennas. These techniques concern the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from 6.0 to 10.7 for the spectrum efficiency, and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for increasing the capacity of future mobile communication systems.
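For reference, joint detection in TD-CDMA is commonly formulated as a zero-forcing block linear equalizer (the standard textbook form; whitening and MMSE variants exist, and the thesis may use a different one):

\[
\hat{\mathbf d} \;=\; \left(\mathbf A^{\mathrm H}\,\mathbf R_n^{-1}\,\mathbf A\right)^{-1} \mathbf A^{\mathrm H}\,\mathbf R_n^{-1}\,\mathbf e,
\]

where \(\mathbf e\) is the received vector, \(\mathbf A\) the system matrix built from the spreading codes and the (Steiner-)estimated channel impulse responses, stacked per antenna element when adaptive antennas are used, and \(\mathbf R_n\) the noise covariance matrix.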
Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints.
Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems.
With the advent of multicore platforms, new challenges on the efficient exploitation of real-time systems have arisen:
First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems.
The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees.
Additionally, many systems require their scheduling algorithm to flexibly react to unforeseen events.
Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis.
At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system.
As a result, there is a need for an adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task.
The main goal of resource management is to guarantee a minimum resource availability to real-time applications.
A further goal is to fulfill global optimization objectives, e.g., maximization of the global system performance, or the user perceived quality of service.
In this thesis, we derive methods based on the slot shifting algorithm.
Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems.
For this reason, we aim at designing slot shifting based algorithms targeted for multicore systems to tackle the aforementioned challenges.
The main contribution of this thesis is to present two global slot shifting algorithms targeted for multicore systems.
Additionally, we extend slot shifting algorithms to improve their runtime behavior, or to handle non-preemptive firm aperiodic tasks.
In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed.
Finally, the thesis presents an implementation of a slot-shifting-based logic into a resource management framework for multicore systems.
Thus, the thesis closes the circle and successfully bridges the gap between real-time scheduling theory and real-world implementations.
We prove applicability of the slot shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.
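A simplified sketch of the classic uniprocessor slot-shifting bookkeeping that the thesis builds on: spare capacities of deadline-defined intervals are computed offline and then consumed by a guarantee test for firm aperiodic arrivals. Interval boundaries, task parameters, unit-length slots, and the simplified acceptance test are illustrative assumptions.

    def spare_capacities(intervals):
        """intervals: list of (length_in_slots, total_wcet_of_tasks_ending_in_interval),
        ordered by time. Returns the spare capacity sc(I_i) of each interval."""
        sc = [0] * len(intervals)
        nxt = 0  # spare capacity of the following interval (0 after the last one)
        for i in range(len(intervals) - 1, -1, -1):
            length, wcet = intervals[i]
            # Negative spare capacity of the next interval is "borrowed" from this one.
            sc[i] = length - wcet + min(nxt, 0)
            nxt = sc[i]
        return sc

    def guarantee(sc_until_deadline, aperiodic_wcet):
        """Firm aperiodic acceptance test: enough positive spare capacity up to its deadline?"""
        available = sum(max(s, 0) for s in sc_until_deadline)
        return available >= aperiodic_wcet

    print(spare_capacities([(5, 3), (4, 5), (6, 2)]))   # -> [1, -1, 4]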
The recently established technologies in the areas of distributed measurement and intelligent
information processing systems, e.g., Cyber Physical Systems (CPS), Ambient
Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things
(IoT), and Industry 4.0 have increased the demand for the development of intelligent
integrated multi-sensory systems to serve rapidly growing markets [1, 2]. These trends increase the significance of complex measurement systems that incorporate numerous advanced methodological implementations, including electronic circuits, signal processing, and multi-sensory information fusion. In particular, in multi-sensory cognition applications, the design of such systems involves skill-demanding tasks, e.g., method selection, parameterization, model analysis, and processing chain construction, which require immense effort and are conventionally carried out manually by an expert designer. Moreover, the strong technological competition imposes even more complicated design problems with multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability. Thus, the conventional human-expert-based design approach may not be able to cope with the increasing demand in numbers, complexity, and diversity. To alleviate this issue,
the design automation approach has been the topic of numerous research works [3-14] and has been commercialized in several products [15-18]. Additionally, the dynamic adaptation of intelligent multi-sensor systems is a potential solution for developing dependable and robust systems. The intrinsic evolution approach and self-x properties [19], which include self-monitoring, -calibrating/trimming, and -healing/repairing, are among the best candidates for this purpose. Motivated by the ongoing research trends and based on the background of our research work [12, 13], which is among the pioneers in this topic, the research work of this thesis contributes to the design automation of intelligent integrated multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-X properties (DAICOX) architecture is presented with the aim of tackling the design effort and providing high-quality and robust solutions for multi-sensor intelligent systems. Therefore, the DAICOX architecture is conceived with the goals listed below:
- Perform front-to-back complete processing chain design with automated method selection and parameterization,
- Provide a rich choice of pattern recognition methods in the design method pool,
- Associate design information via an interactive user interface and visualization along with intuitive visual programming,
- Deliver high-quality solutions outperforming conventional approaches by using multi-objective optimization,
- Gain adaptability, reliability, and robustness of the designed solutions with self-x properties.
Derived from the goals, several scientific methodological developments and implementations,
particularly in the areas of pattern recognition and computational intelligence,
will be pursued as part of the DAICOX architecture in the research work of this thesis.
The method pool is aimed to contain a rich choice of methods and algorithms covering
data acquisition and sensor configuration, signal processing and feature computation,
dimensionality reduction, and classification. These methods will be selected and parameterized
automatically by the DAICOX design optimization to construct a multi-sensory
cognition processing chain. A collection of non-parametric feature quality assessment
functions for the Dimensionality Reduction (DR) process will be presented. In addition to standard DR methods, variations of the feature selection method, in particular feature weighting, will be proposed. Three different classification categories shall be incorporated in the method pool. A hierarchical classification approach will be proposed and developed to serve as a multi-sensor fusion architecture at the decision level. Besides multi-class classification, one-class classification methods, e.g., One-Class SVM and NOVCLASS, will be presented to extend the functionality of the solutions, in particular for anomaly and novelty detection. DAICOX is conceived to effectively handle the
problem of method selection and parameter setting for a particular application yielding
high performance solutions. The processing chain construction tasks will be carried
out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle
Swarm Optimization (PSO), with multi-objective optimization approach and model
analysis for robust solutions. In addition to the automated system design mechanisms, DAICOX will facilitate the design tasks with intuitive visual programming and various visualization options. The design database concept of DAICOX is aimed at allowing the reusability and extensibility of designed solutions gained from previous knowledge. Thus, cooperative design combining the machine with the knowledge of the design expert can also be utilized to obtain fully enhanced solutions. In particular, the integration of self-x
properties as well as intrinsic optimization into the system is proposed to gain enduring
reliability and robustness. Hence, DAICOX will allow the inclusion of dynamically
reconfigurable hardware instances to the designed solutions in order to realize intrinsic
optimization and self-x properties.
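As an illustration of the kind of wrapper-style, multi-objective feature selection the method pool is meant to provide, the sketch below runs a small genetic algorithm whose fitness trades cross-validated accuracy against the number of selected features; the dataset, encoding, operators, and penalty weight are illustrative assumptions, not DAICOX internals.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=1)

    def fitness(mask):
        if not mask.any():
            return 0.0
        acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
        return acc - 0.01 * mask.sum()           # penalize large feature subsets

    pop = rng.random((30, X.shape[1])) < 0.5     # random bit-string population
    for _ in range(20):                          # generations
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]  # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, X.shape[1])    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < 0.02   # bit-flip mutation
            children.append(child)
        pop = np.array(children)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected features:", np.flatnonzero(best))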
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor system design architecture with automated method selection, parameterization, and model analysis has been developed in compliance with open-source multi-platform software. It is integrated with an intuitive design environment, which includes a visual programming concept and design information visualizations. Thus, the design effort is minimized, as investigated in three case studies with different application backgrounds, namely food analysis (LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX achieved better solution quality compared to the manual approach in all cases, where the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX, DeCaDrive, and magnetic localization case, respectively. The design time was reduced by 81.87% compared to the conventional approach by using DAICOX in the LoX case study. At the current state of development, a number of novel contributions of the thesis are outlined below.
- Automated processing chain construction and parameterization for the design of signal processing and feature computation.
- Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection and feature weighting with multi-objective feature quality assessment.
- A modification of the non-parametric compactness measure for feature space quality assessment.
- A decision-level sensor fusion architecture based on the proposed hierarchical classification approach, i.e., H-SVM.
- A collection of one-class classification methods and a novel variation, i.e., NOVCLASS-R.
- Automated design toolboxes supporting front-to-back design with automated model selection and information visualization.
In this research work, due to the complexity of the task, not all of the identified goals have been comprehensively reached yet, nor has the complete architecture definition been fully implemented. Based on the currently implemented tools and frameworks, the ongoing development of DAICOX is progressing towards the complete architecture. Potential future improvements are the extension of the method pool with a richer choice of methods and algorithms, processing chain breeding via a graph-based evolution approach, the incorporation of intrinsic optimization, and the integration of self-x properties. With these features, DAICOX will improve its aptness in designing advanced systems to serve the rapidly growing technologies of distributed intelligent measurement systems, in particular CPS and Industry 4.0.
Wireless sensor networks are the driving force behind many popular and interdisciplinary research areas, such as environmental monitoring, building automation, healthcare, and assisted living applications. Requirements like compactness, high integration of sensors, flexibility, and power efficiency are often very different and cannot be fulfilled by state-of-the-art node platforms at once. In this paper, we present and analyze AmICA: a flexible, compact, easy-to-program, and low-power node platform. Developed from scratch and including a node, a basic communication protocol, and a debugging toolkit, it supports user-friendly rapid application development. The general-purpose nature of AmICA was evaluated in two practical applications with diametrically opposed requirements. Our analysis shows that AmICA nodes are 67% smaller than BTnodes, have five times more sensors than Mica2Dot, and consume 72% less energy than the state-of-the-art TelosB mote in sleep mode.
Wireless Sensor Networks (WSN) are dynamically-arranged networks typically composed of a large number of arbitrarily-distributed sensor nodes with computing capabilities contributing to at least one common application. The main characteristic of these networks is that of being functionally constrained due to a scarce availability of resources and a strong dependence on uncontrollable environmental factors. These conditions introduce severe restrictions on the applicability of classic real-time methods aimed at guaranteeing time-bounded communications. Existing real-time solutions tend to apply concepts that were originally not conceived for sensor networks, idealizing realistic application scenarios and overlooking important design limitations. This results in a number of misleading practices contributing to approaches of restricted validity in real-world scenarios. Amending the confrontation between WSNs and real-time objectives starts with a review of the basic fundamentals of existing approaches. In doing so, this thesis presents an alternative approach based on a generalized timeliness notion suitable to the particularities of WSNs. The new conceptual notion allows the definition of feasible real-time objectives, opening a new scope of possibilities not constrained to idealized systems. The core of this thesis is based on the definition and application of Quality of Service (QoS) trade-offs between timeliness and other significant QoS metrics. The analysis of local and global trade-offs provides a step-by-step methodology identifying the correlations between these quality metrics. This association enables the definition of alternative trade-off configurations (set points) influencing the quality performance of the network at selected instants of time. With the basic grounds established, the above concepts are embedded in a simple routing protocol constituting a proof of concept for the validity of the presented analysis. Extensive evaluations under realistic scenarios are driven on simulation environments as well as real testbeds, validating the consistency of this approach.
An interrupter for use in a daisy-chained VME bus interrupt system has been designed and implemented as an asynchronous sequential circuit. The concurrency of the processes posed a design problem that was solved by means of a systematic design procedure that uses Petri nets for specifying system and interrupter behaviour, and for deriving a primitive flow table. Classical design and additional measures to cope with non-fundamental mode operation yielded a coded state-machine representation. This was implemented on a GAL 22V10, chosen for its hazard-preventing structure and for rapid prototyping in student laboratories.
Networked automation systems (NAS) are the result of the increasing decentralization of automation systems by means of newer network structures. However, a whole range of influencing factors leads to a spectrum of non-deterministic delays that have a direct impact on the quality, safety, and reliability of automation plants. A precise analysis of these influencing factors is therefore not only a prerequisite for the responsible use of this technology, but also makes it possible to clarify questions of dependability ahead of restructurings or extensions. This contribution shows the influence that individual components, as well as network-induced behavioral modes such as synchronization and the shared use of resources, have on the response times of the overall system. Probabilistic model checking (PMC) is used for the analysis. Extensive measurements were carried out to validate the results.
Phase-gradient metasurfaces can be designed to manipulate electromagnetic waves according to the generalized Snell’s law. Here, we show that a phased parallel-plate waveguide array (PPWA) can be devised to act in the same manner as a phase-gradient metasurface. We derive an analytic model that describes the wave propagation in the PPWA and calculate both the angle and amplitude distribution of the diffracted waves. The analytic model provides an intuitive understanding of the diffraction from the PPWA. We verify the (semi-)analytically calculated angle and amplitude distribution of the diffracted waves by numerical 3-D simulations and experimental measurements in a microwave goniometer.
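For reference, the generalized Snell's law that governs such phase-gradient structures relates the refracted and incident angles to the imposed phase gradient (standard form, given here for orientation):

\[
n_t \sin\theta_t \;-\; n_i \sin\theta_i \;=\; \frac{\lambda_0}{2\pi}\,\frac{\mathrm d\Phi}{\mathrm dx},
\]

where \(\mathrm d\Phi/\mathrm dx\) is the phase gradient along the interface and \(\lambda_0\) the free-space wavelength.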
Radar cross section reducing (RCSR) metasurfaces or coding metasurfaces were primarily designed for normally incident radiation in the past. It is evident that the performance of coding metasurfaces for RCSR can be significantly improved by additional backscattering reduction of obliquely incident radiation, which requires a valid analytic conception tool. Here, we derive an analytic current density distribution model for the calculation of the backscatter far-field of obliquely incident radiation on a coding metasurface for RCSR. For demonstration, we devise and fabricate a metasurface for a working frequency of 10.66 GHz and obtain good agreement between the measured, simulated, and analytically calculated backscatter far-fields. The metasurface significantly reduces backscattering for incidence angles between −40° and 40° in a spectral working range of approximately 1 GHz.
The energy efficiency of today's microcontrollers is supported by the extensive usage of low-power mechanisms. A full power-down requires in many cases a complex, and maybe error-prone, administration scheme, because data from the volatile memory have to be stored in a flash-based back-up memory. New types of non-volatile memory, e.g. in RRAM technology, are faster and consume a fraction of the energy compared to flash technology. This paper evaluates power gating for WSN with RRAM as back-up memory.
Photonic crystals are inhomogeneous dielectric media with a periodic variation of the refractive index. A photonic crystal gives us new tools for the manipulation of photons and has thus received great interest in a variety of fields. Photonic crystals are expected to be used in novel optical devices such as thresholdless laser diodes, single-mode light emitting diodes, small waveguides with low-loss sharp bends, small prisms, and small integrated optical circuits. They can in some respects be operated as "left-handed materials", which are capable of focusing transmitted waves into a sub-wavelength spot due to negative refraction. The thesis is focused on the applications of photonic crystals in communications and optical imaging:
- Photonic crystal structures for potential dispersion management in optical telecommunication systems
- 2D non-uniform photonic crystal waveguides with a square lattice for wide-angle beam refocusing using negative refraction
- 2D non-uniform photonic crystal slabs with a triangular lattice for all-angle beam refocusing
- A compact phase-shifted band-pass transmission filter based on photonic crystals
The design of the fifth generation (5G) cellular network should take account of the emerging services with divergent quality of service requirements. For instance, vehicle-to-everything (V2X) communication is required to facilitate local data exchange and therefore improve the automation level in automated driving applications. In this work, we inspect the performance of two different air interfaces (i.e., LTE-Uu and PC5) which are proposed by the third generation partnership project (3GPP) to enable V2X communication. With these two air interfaces, V2X communication can be realized by transmitting data packets either over the network infrastructure or directly among traffic participants. In addition, the ultra-high reliability requirement in some V2X communication scenarios cannot be fulfilled with any single transmission technology (i.e., either LTE-Uu or PC5). Therefore, we discuss how to efficiently apply multiple radio access technologies (multi-RAT) to improve the communication reliability. In order to exploit the multi-RAT in an efficient manner, both independent and coordinated transmission schemes are designed and inspected. Subsequently, the conventional uplink is also extended to the case where a base station can receive data packets through both the LTE-Uu and PC5 interfaces. Moreover, different multicast-broadcast single-frequency network (MBSFN) area mapping approaches are also proposed to improve the communication reliability in the LTE downlink. Last but not least, a system level simulator is implemented in this work. The simulation results not only provide insights into the performance of the different technologies but also validate the effectiveness of the proposed multi-RAT scheme.
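The basic motivation for duplicating a packet over both interfaces can be seen from a simple independence argument (an idealization; the coordinated schemes studied in the work do not assume independent links):

\[
P_{\mathrm{fail}} \;=\; p_{1}\, p_{2},
\]

where \(p_1\) and \(p_2\) are the (assumed independent) packet failure probabilities over LTE-Uu and PC5; for example, \(p_1 = p_2 = 10^{-2}\) gives a combined failure probability of \(10^{-4}\).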
Three-dimensional (3D) integration using through-silicon vias (TSVs) has been used for memory designs. Content addressable memory (CAM) is an important component in digital systems. In this paper, we propose an evaluation tool for 3D CAMs, which can aid the designer in exploring the delay and power of various partitioning strategies. Delay, power, and energy models of 3D CAM with respect to different architectures are built as well.
Industrial surface inspection, and in particular defect detection, is an important application area for automatic image processing (IP). For the design and configuration of the corresponding software systems, which are usually application-specific one-off solutions, mostly either in-house image processing libraries, or commercial or free toolboxes, are used in the industrial environment. These typically contain, among other things, standard image processing algorithms in modular form, e.g., filter or threshold operators. The individual IP methods are usually selected in a graphical development environment following the principle of visual programming and assembled into an IP chain or graph. This principle also enables someone without programming skills to create and configure IP systems. However, a certain basic knowledge of image processing methods is necessary. Depending on the task and the experience of the system developer, the manual design and configuration of an IP system require considerable time. This work deals with automatic design, configuration, and optimization possibilities for these modular IP systems, enabling even an inexperienced end user to generate adequate solutions, with the goal of creating a more efficient design tool for image processing systems with new and improved properties. The method selection and parameter optimization range from image preprocessing and enhancement using IP algorithms to the classifiers employed, such as nearest-neighbor classifiers (NNK) and support vector machines (SVM), and various evaluation functions. The flexible use of different classification and evaluation methods enables an automatic problem-specific design and optimization of the IP system for defect detection and texture analysis tasks on 2D images, as well as the separation of objects and background for 2D and 3D gray-scale images. Evolutionary algorithms (EA) and particle swarm optimization (PSO) are used for the structure and parameter optimization of the IP system.
Hardware prototyping is an essential part of the hardware design flow. Furthermore, hardware prototyping usually relies on system-level design and hardware-in-the-loop simulations in order to develop, test, and evaluate intellectual property cores. One common task in this process consists of interfacing cores with different port specifications. Data width conversion is used to overcome this issue. This work presents two open-source hardware cores compliant with the AXI4-Stream bus protocol, where each core performs upsizing/downsizing data width conversion.
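A behavioral sketch of the conversion such cores perform, packing narrow words into wide ones (upsizing) and splitting wide words into narrow ones (downsizing); this models the data path only, not the AXI4-Stream handshaking, and the LSB-first packing order is an assumption, not taken from the cited cores.

    def upsize(words, in_width, ratio):
        """Pack `ratio` consecutive in_width-bit words into one wide word (first word = LSBs)."""
        assert len(words) % ratio == 0
        out = []
        for i in range(0, len(words), ratio):
            wide = 0
            for k in range(ratio):
                wide |= (words[i + k] & ((1 << in_width) - 1)) << (k * in_width)
            out.append(wide)
        return out

    def downsize(words, out_width, ratio):
        """Split each wide word into `ratio` out_width-bit words (LSBs first)."""
        mask = (1 << out_width) - 1
        return [(w >> (k * out_width)) & mask for w in words for k in range(ratio)]

    # Round trip: four 8-bit beats -> one 32-bit beat -> four 8-bit beats.
    beats = [0x11, 0x22, 0x33, 0x44]
    assert downsize(upsize(beats, 8, 4), 8, 4) == beats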
Long talks:
- T. Schorr, A. Dittrich, W. Sauer-Greff, R. Urbansky (Lehrstuhl für Nachrichtentechnik, TU Kaiserslautern): Iterative Equalization in Fibre Optical Systems Using High-Rate RCPR, BCH and LDPC Codes
- A. Doenmez, T. Hehn, J. B. Huber (Lehrstuhl für Informationsübertragung, Universität Erlangen-Nürnberg): Analytical Calculation of Thresholds for LDPC Codes transmitted over Binary Erasure Channels
- S. Deng, T. Weber (Institut für Nachrichtentechnik und Informationselektronik, Universität Rostock), M. Meurer (Lehrstuhl für hochfrequente Signalübertragung und -verarbeitung, TU Kaiserslautern): Dynamic Resource Allocation in Future OFDM Based Mobile Radio Systems
- J. Hahn, M. Meurer, T. Weber (Lehrstuhl für hochfrequente Signalübertragung und -verarbeitung, TU Kaiserslautern): Receiver Oriented FEC Coding (RFC) for Selective Channels
- C. Stierstorfer, R. Fischer (Lehrstuhl für Informationsübertragung, Universität Erlangen-Nürnberg): Comparison of Code Design Requirements for Single- and Multicarrier Transmission over Frequency-Selective MIMO Channels
- A. Scherb (Arbeitsbereich Nachrichtentechnik, Universität Bremen): Unbiased Semiblind Channel Estimation for Coded Systems
- T.-J. Liang, W. Rave, G. Fettweis (Vodafone Stiftungslehrstuhl Mobile Nachrichtensysteme, Technische Universität Dresden): Iterative Joint Channel Estimation and Decoding Using Superimposed Pilots in OFDM-WLAN
- A. Dittrich, T. Schorr, W. Sauer-Greff, R. Urbansky (Lehrstuhl für Nachrichtentechnik, TU Kaiserslautern): DIORAMA - An Iterative Decoding Real-Time MATLAB Receiver for the Multicarrier-Based Digital Radio DRM
Short talks:
- S. Plass, A. Dammann (German Aerospace Center (DLR)): Radio Resource Management for MC-CDMA over Correlated Rayleigh Fading Channels
- S. Heilmann, M. Meurer, S. Abdellaoui, T. Weber (Lehrstuhl für hochfrequente Signalübertragung und -verarbeitung, TU Kaiserslautern): Concepts for Accurate Low-Cost Signature Based Localisation of Mobile Terminals
- M. Siegrist, A. Dittrich, W. Sauer-Greff, R. Urbansky (Lehrstuhl für Nachrichtentechnik, TU Kaiserslautern): SIMO and MIMO Concepts for Fibre Optical Communications
- C. Bockelmann (Arbeitsbereich Nachrichtentechnik, Universität Bremen): Sender- und Empfängerstrukturen für codierte MIMO-Übertragung (Transmitter and Receiver Structures for Coded MIMO Transmission)
To meet the increasing demands placed on feed axes in automation with respect to dynamics, precision, and maintenance effort, at low overall height and ever smaller construction volume, permanent-magnet-excited synchronous linear motors with tooth-coil windings are being used more and more in machine tools. The main advantage over the rotary drive solution with gear transmission and ball screw is the direct force transmission without a motion converter. The transition from the conventional linear drive system to the direct drive system opens up a multitude of new possibilities for machine tool manufacturers and industrial applications through impressive traverse speeds, high acceleration capability, and positioning and repeat accuracy, and furthermore offers the chance of further increases in productivity and quality. To exploit all of these advantages, the drive must first be optimized with respect to the force ripple that is typical for linear motors. The search for economical and practical countermeasures is a current research topic in drive technology. In the present work, the force fluctuations due to slotting, end effects, and the electrical loading in PM synchronous linear motors are investigated computationally and by measurement. The causes and characteristics of the force ripple are described and influencing parameters are identified. It is possible to influence the force ripple by certain measures, e.g., by means of a force ripple compensator made of ferromagnetic material or by mutual compensation of several coupled primary parts. As the investigations have shown, tuning the influencing parameters analytically is hardly possible; in practice this leads to an experimental-iterative optimization supported by FEM. The good agreement between measurement and simulation provides a clear indication that the measures presented here can be regarded as suitable; they enable a reduction of the force ripple from the original 3-5% down to 1%, although a slight reduction of the force density must be accepted. During machine design, it must be determined in good time which compensation method is favorable with respect to the intended applications.
Analog sensor electronics requires special care during design in order to increase the quality and precision of the signal and the lifetime of the product. Nevertheless, it can experience static deviations due to manufacturing tolerances and dynamic deviations due to operation in a non-ideal environment. Therefore, advanced applications such as MEMS technology employ a calibration loop to deal with the deviations; unfortunately, this is considered only in the digital domain, which cannot cope with all analog deviations, such as saturation of the analog signal. On the other hand, rapid prototyping is essential to decrease the development time and the cost of products in small quantities. Recently, evolvable hardware has been developed with the motivation of coping with the mentioned sensor electronics problems. However, industrial specifications and requirements are not considered in the hardware learning loop; instead, it merely minimizes the error between the required output and the real output generated for a given test signal. The aim of this thesis is to synthesize generic organic-computing sensor electronics that return hardware with predictable behavior for embedded system applications and thus gain industrial acceptance; therefore, the hardware topology is constrained to standard topologies, the standard hardware specifications are included in the optimization, and a hierarchical optimization is abstracted from the synthesis tools, which first evolves the building blocks and then the abstract level that employs these optimized blocks. On the other hand, measuring some of the industrial specifications requires expensive equipment, and measuring others is time consuming, which is unfortunate for embedded system applications. Therefore, the novel approach of "mixtrinsic multi-objective optimization" is proposed, which simulates/estimates the set of specifications that is hard to measure due to cost or time requirements, while it measures intrinsically the set of specifications that has high sensitivity to deviations. These approaches succeed in optimizing the hardware to meet the industrial specifications with a low-cost measurement setup, which is essential for embedded system applications.
For many years, most distributed real-time systems employed data communication systems specially tailored to address the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and, in some cases, not well understood.
Mostly driven by ever decreasing costs, the application of such distributed real-time systems has drastically increased in the last years in different domains. Consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but also the number of nodes per system, which in turn increases their network bandwidth requirements. Further, the system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from federated to integrated modular architectures, also increasing the network bandwidth requirements.
Ethernet (IEEE 802.3) [iee12] is a well established network standard. Further, it is fast, easy to install, and the interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantee. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet and the temporal guarantees required by distributed real-time systems. Two of these protocols are: Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple priority traffic, and scheduling of TTEthernet networks.
AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not too pessimistic, amount of memory for each buffer. The current AFDX standard allows for the classification of the network traffic with two priorities. Nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds for the buffer backlog of AFDX networks with multiple priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method.
In TTEthernet, nodes are synchronized, allowing for message transmission at well-defined points in time, computed off-line and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which should be solved in reasonable time for industrial-size networks. We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree. As the scheduler traverses the search tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf. If, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
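A generic sketch of the search-tree idea: depth-first backtracking assigns each TT communication channel an offset on a single physical link and undoes the choice when no conflict-free slot remains. Periods, slot granularity, and the exhaustive conflict test are simplified assumptions, not the scheduler described in the dissertation.

    from math import lcm

    def occupied(offset, period, duration, hyper):
        """Slots used by a periodic channel over the hyperperiod."""
        return {offset + k * period + t
                for k in range(hyper // period) for t in range(duration)}

    def schedule(channels):
        """channels: list of (name, period, duration) in slots, all on one physical link.
        Returns {name: offset} or None if the set is infeasible."""
        hyper = lcm(*(p for _, p, _ in channels))
        assignment, used = {}, set()

        def descend(idx):
            if idx == len(channels):
                return True                          # valid leaf reached
            name, period, duration = channels[idx]
            for offset in range(period - duration + 1):
                slots = occupied(offset, period, duration, hyper)
                if used.isdisjoint(slots):           # conflict-free on this link
                    assignment[name] = offset
                    used.update(slots)
                    if descend(idx + 1):
                        return True
                    used.difference_update(slots)    # backtrack
                    del assignment[name]
            return False                             # invalid leaf: backtrack further up

        return assignment if descend(0) else None

    # Example: three channels with periods 4, 8, and 8 slots and 1-slot frames.
    print(schedule([("vl1", 4, 1), ("vl2", 8, 1), ("vl3", 8, 1)]))
    # -> {'vl1': 0, 'vl2': 1, 'vl3': 2}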
In embedded systems, there is a trend of integrating several different functionalities on a common platform. This has been enabled by increasing processing power and the rise of integrated systems-on-chip.
The composition of safety-critical and non-safety-critical applications results in mixed-criticality systems. Certification Authorities (CAs) demand the certification of safety-critical applications with strong confidence in the execution time bounds. As a consequence, CAs use conservative assumptions in the worst-case execution time (WCET) analysis which result in more pessimistic WCETs than the ones used by designers. The existence of certified safety-critical and non-safety-critical applications can be represented by dual-criticality systems, i.e., systems with two criticality levels.
In this thesis, we focus on the scheduling of mixed-criticality systems which are subject to certification. Scheduling policies cognizant of the mixed-criticality nature of the systems and the certification requirements are needed for efficient and effective scheduling. Furthermore, we aim at reducing the certification costs to allow faster modification and upgrading, and less error-prone certification. Besides certification aspects, requirements of different operational modes result in challenging problems for the scheduling process. Despite the mentioned problems, schedulers require a low runtime overhead for an efficient execution at runtime.
The presented solutions are centered around time-triggered systems, which feature a low runtime overhead. We present a transformation to include event-triggered activities, represented by sporadic tasks, directly in the offline scheduling process. This transformation can also be applied to periodic tasks to shorten the length of schedule tables, which reduces certification costs. These results are used in our method for constructing schedule tables, which creates two schedule tables to fulfill the requirements of dual-criticality systems using mode changes at runtime. Finally, we present a scheduler based on the slot-shifting algorithm for mixed-criticality systems. In a first version, the method schedules dual-criticality jobs without the need for mode changes: an already certified schedule table can be used, and at runtime the scheduler reacts to the actual behavior of the jobs and thus makes effective use of the available resources. We then extend this method to schedule mixed-criticality job sets with different operational modes. As a result, we can schedule jobs with varying parameters in different modes.
To continue reducing voltage in scaled technologies, both circuit- and architecture-level resiliency techniques are needed to tolerate process-induced defects, variation, and aging in SRAM cells. Many different resiliency schemes have been proposed and evaluated, but most prior results focus on voltage reduction instead of energy reduction. At the circuit level, device cell architectures and assist techniques have been shown to lower Vmin for SRAM, while at the architecture level, redundancy and cache-disable techniques have been used to improve resiliency at low voltages. This paper presents a unified study of error tolerance for both circuit and architecture techniques and estimates their area and energy overheads. Optimal techniques are selected by evaluating both the error-correcting ability at low supplies and the overheads of each technique in a 28 nm technology. The results can be applied to many of the emerging memory technologies.
The dissertation describes a practically proven, particularly efficient approach to the verification of digital circuit designs. The approach outperforms simulation-based verification with respect to final circuit quality as well as with respect to the required verification effort. In the dissertation, the paradigm of transaction-based verification is ported from simulation to formal verification. One consequence is a particular format of formal properties, called operation properties. Circuit descriptions are verified by proving operation properties with Interval Property Checking (IPC), a particularly strong SAT-based formal verification algorithm. Furthermore, a completeness checker is presented that identifies all verification gaps in sets of operation properties. This completeness checker can handle the large operation properties that arise when this approach is applied to realistic circuits. The methodology of operation properties, Interval Property Checking, and the completeness checker form a symbiosis that is of particular benefit to the verification of digital circuit designs. On top of this symbiosis, an approach to completely verify the interaction of completely verified modules has been developed by adapting the modelling theories of digital systems. The approach presented in the dissertation has proven in multiple commercial application projects that it indeed completely verifies modules: after reaching a termination criterion that is well defined by completeness checking, no further bugs were found in the verified modules. The approach is marketed by OneSpin Solutions GmbH, Munich, under the names "Operation Based Verification" and "Gap Free Verification".
”In contemporary electronics 80% of a chip may perform digital functions but the 20% of analog functions may take 80% of the development time.” [1]. Aggravating this, the demands on analog design are increasing with rapid technology scaling. Most designs have moved away from the analog to the digital domain where possible; however, interacting with the environment will always require analog-to-digital data conversion. Adding to this problem, the number of sensors used in consumer and industrial products is rapidly increasing. Designers of ADCs are dealing with this problem in several ways, the most important being the migration towards digital designs and time-domain techniques. Time-to-Digital Converters (TDCs) are becoming increasingly popular for robust signal processing. Biological neurons make use of spikes, which carry spike-timing information and are not affected by the problems related to technology scaling. Neuromorphic ADCs still remain exotic, with few implementations in sub-micron technologies (Table 2.7). Even among these few designs, the strengths of biological neurons are rarely exploited. A previous work [2], LUCOS, a high-dynamic-range image sensor, validated the efficiency of spike processing. The ideas from this work can be generalized to build a highly effective sensor signal conditioning system, which carries the promise of being robust to technology scaling.
The goal of this work is to create a novel spiking neural ADC as a new form of Multi-Sensor Signal Conditioning and Conversion system, which
• will be able to interface with, or be part of, a System on Chip with traditional analog or advanced digital components;
• will degrade gracefully;
• will be robust to noise- and jitter-related problems;
• will be able to learn and adapt to static errors and dynamic errors;
• will be capable of self-repair, self-monitoring and self-calibration.
Sensory systems in humans and other animals analyze the environment using several techniques. These techniques have evolved and been perfected to help the animal survive. Different animals specialize in different sense organs; however, the peripheral neural network architectures remain similar among various animal species, with few exceptions. While many biological sensing techniques exist, the most popular engineering techniques are based on intensity detection, frequency detection, and edge detection. These techniques are used with traditional analog processing (e.g., color sensors using filters) and with biological techniques (e.g., the LUCOS chip [2]). The localization capability of animals has never been fully utilized.
One of the most important capabilities of animals, vertebrates or invertebrates, is localization. The object of localization can be a predator, prey, or a source of water or food. Since these are basic necessities for survival, localization evolves particularly fast under the pressure of natural selection. In fact, even where the sensors differ, localization capabilities have convergently evolved toward the same processing method, coincidence detection, in the peripheral neurons (e.g., the forked tongue of a snake, the antennae of a cockroach, acoustic localization in fishes and mammals). This convergent evolution increases the validity of the technique. In this work, localization concepts based on acoustic localization and tropotaxis are investigated and employed for the creation of novel ADCs.
Unlike intensity and frequency detection, which are not linear (e.g., eyes saturate in bright light and lose color perception in low light), localization is inherently linear. This is mainly because the accurate localization of predator or prey can be the difference between life and death for an animal.
Figure 1 visually explains the ADC concept proposed in this work. It has two parts: (1) Sensor-to-Spike(time) Conversion (SSC) and (2) Spike(time)-to-Digital Conversion (SDC). Both structures have been designed with models of biological neurons. The combination of these two structures is called SSDC.
To implement the proposed concept efficiently, several biological neuron models are compared and two models are shortlisted. Various synapse structures are also studied. From this study, the Leaky Integrate-and-Fire (LIF) neuron is chosen, since it fulfills all the requirements of the proposed structure. The analog neuron and synapse designs from Indiveri et al. [3], [4] were taken as a starting point, simulations were conducted in Cadence, and the behavioral equivalence with the biological counterpart was checked. The LIF neuron had features that are not required for the proposed approach. A simplified LIF neuron, stripped of these features, was designed to be as fast as the technology allows.
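For illustration, a minimal LIF neuron of the kind referred to above can be simulated in a few lines; the time constant, threshold and input current below are assumed values, not those of the designed circuit.

```python
import numpy as np

# A plain leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates the input current, and emits a spike when it crosses the
# threshold, after which it is reset and held during a refractory period.
def lif_spike_times(i_in, dt=1e-5, tau=2e-3, r=1e6,
                    v_rest=0.0, v_thresh=0.02, t_refr=1e-4):
    v, spikes, refr = v_rest, [], 0.0
    for k, i in enumerate(i_in):
        if refr > 0.0:                            # inside the refractory period
            refr -= dt
            continue
        v += dt / tau * (-(v - v_rest) + r * i)   # leaky integration step
        if v >= v_thresh:                         # threshold crossing -> spike
            spikes.append(k * dt)
            v, refr = v_rest, t_refr
    return spikes

# A constant 30 nA input drives regular spiking; the spike timing carries the information.
current = np.full(2000, 30e-9)
print(lif_spike_times(current)[:5])
```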
The SDC was designed with these neural building blocks, with the delays realized as buffer chains. The SDC converts an incoming Time Interval Code (TIC) into sparse place coding using coincidence detection, a property of spiking neurons that is the time-domain equivalent of a Gaussian kernel. The SDC is designed to have an online reconfigurable Gaussian kernel width, weight, threshold, and refractory period. The advantage of sparse place codes, which contain rank order coding, was described in our work [5]. A time-based winner-take-all circuit with memory, based on a previous work [6], was created for reading out the sparse place codes asynchronously.
Figure 1: ADC as a localization problem (right); the Jeffress model of sound localization visualized (left). The values t1 and t2 indicate the time taken from the source to s1 and s2, respectively.
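The Jeffress-style conversion of a time interval into a sparse place code can be illustrated with a small sketch; the number of taps, tap delay and coincidence window are assumed values, not those of the SSDC design.

```python
# Jeffress-style place coding sketch: two spikes travel along opposing delay
# lines; the detector at which both arrive within a small coincidence window
# fires, so the index of the firing detector encodes the time interval.
def place_code(t1, t2, n_taps=17, tap_delay=1e-6, window=0.5e-6):
    code = []
    for k in range(n_taps):
        # spike 1 accumulates k taps of delay, spike 2 the remaining taps
        arrival_1 = t1 + k * tap_delay
        arrival_2 = t2 + (n_taps - 1 - k) * tap_delay
        code.append(1 if abs(arrival_1 - arrival_2) <= window else 0)
    return code

# A 4 us interval between the two spikes activates exactly one place in the code.
print(place_code(0.0, 4e-6))
```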
The SSC was initially designed with the same building blocks. Additionally, a differential synapse was designed for a better SSC. The sensor element considered was a Wheatstone full-bridge AMR sensor, the AFF755 from Sensitec GmbH. A reconfigurable version of the synapse was also designed for a more generic sensor interface.
The first prototype chip, SSDCα, was designed with 257 coincidence-detector modules realizing the SDC and the SSC. Since the spike times are the most important information, the spikes can be treated as digital pulses. This enables digital communication between analog modules and creates a lot of freedom for the use of digital processing between them, an advantage fully exploited in the design of SSDCα. Three SSC modules are multiplexed to the SDC; these SSC modules also provide outputs from the chip simultaneously. A rising-edge-detecting, fixed-pulse-width generation circuit is used to create pulses that are best suited for efficient performance of the SDC. The delay lines are made reconfigurable to increase robustness and to modify the span of the SDC. The readout technique used in the first prototype is a relatively slow but safe shift register; it is used to analyze the characteristics of the core work and will be replaced by the faster alternatives discussed in this work. The chip occupies 8.5 mm², contains 28,200 transistors, supports sampling rates from DC to 150 kHz and resolutions from 8 to 13 bit, and was designed in a 350 nm CMOS technology from ams. The chip has been manufactured and tested at a sampling rate of 10 kHz with a theoretical resolution of 8 bits; however, due to the limitations of our Time-Interval-Generator, we were able to confirm only 4 bits of resolution.
The key novel contributions of this work are:
• Neuromorphic implementation of AD conversion as a localization problem based on sound localization and tropotaxis concepts found in nature.
• Coincidence detection with sparse place coding to enhance resolution.
• Graceful degradation without redundant elements and inherent robustness to noise, which helps in scaling of technologies.
• Amenability to local adaptation and self-x features.
The conceptual goals have all been fulfilled, with the exception of adaptation. The feasibility of local adaptation has been shown with promising results, and further investigation is left for future work. This thesis acts as a baseline, paving the way for R&D in a new direction. The chip design used the 350 nm ams hit kit as a vehicle to prove the functionality of the core concept. The concept can easily be ported to present aggressively scaled technologies and to future technologies.
The heterogeneity of today's access possibilities to wireless networks imposes challenges for efficient mobility support and resource management across different Radio Access Technologies (RATs). The current situation is characterized by the coexistence of various wireless communication systems, such as GSM, HSPA, LTE, WiMAX, and WLAN. These RATs greatly differ with respect to coverage, spectrum, data rates, Quality of Service (QoS), and mobility support.
In real systems, mobility-related events, such as Handover (HO) procedures, directly affect resource efficiency and End-To-End (E2E) performance, in particular with respect to signaling efforts and users' QoS. In order to lay a basis for realistic multi-radio network evaluation, a novel evaluation methodology is introduced in this thesis.
A central hypothesis of this thesis is that the consideration and exploitation of additional information characterizing user, network, and environment context, is beneficial for enhancing Heterogeneous Access Management (HAM) and Self-Optimizing Networks (SONs). Further, Mobile Network Operator (MNO) revenues are maximized by tightly integrating bandwidth adaptation and admission control mechanisms as well as simultaneously accounting for user profiles and service characteristics. In addition, mobility robustness is optimized by enabling network nodes to tune HO parameters according to locally observed conditions.
For establishing all these facets of context awareness, various schemes and algorithms are developed and evaluated in this thesis. System-level simulation results demonstrate the potential of context information exploitation for enhancing resource utilization, mobility support, self-tuning network operations, and users' E2E performance.
In essence, the conducted research activities and presented results motivate and substantiate the consideration of context awareness as key enabler for cognitive and autonomous network management. Further, the performed investigations and aspects evaluated in the scope of this thesis are highly relevant for future 5G wireless systems and current discussions in the 5G infrastructure Public Private Partnership (PPP).
The fifth-generation mobile telecommunication network is expected to support multi-access edge computing (MEC), which intends to distribute computation tasks and services from the central cloud to the edge clouds. Toward ultra-responsive, ultra-reliable, and ultra-low-latency MEC services, the current mobile network security architecture should enable a more decentralized approach for authentication and authorization processes. This paper proposes a novel decentralized authentication architecture that supports flexible and low-cost local authentication with the awareness of context information of network elements such as user equipment and virtual network functions. Based on a Markov model for backhaul link quality as well as a random walk mobility model with mixed mobility classes and traffic scenarios, numerical simulations have demonstrated that the proposed approach is able to achieve a flexible balance between the network operating cost and the MEC reliability.
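For illustration, a two-state Markov chain for backhaul link quality of the kind used in the evaluation can be simulated as follows; the transition probabilities are assumed values, not those of the paper.

```python
import random

# Illustrative two-state Markov chain for backhaul link quality ("good"/"bad");
# the transition probabilities are assumed values, not those of the paper.
P = {"good": {"good": 0.95, "bad": 0.05},
     "bad":  {"good": 0.40, "bad": 0.60}}

def simulate(steps, state="good", seed=1):
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        history.append(state)
        state = "good" if rng.random() < P[state]["good"] else "bad"
    return history

trace = simulate(10_000)
print("fraction of time the backhaul link is good:",
      trace.count("good") / len(trace))
```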
Context-Enabled Optimization of Energy-Autarkic Networks for Carrier-Grade Wireless Backhauling
(2015)
This work establishes the novel category of coordinated Wireless Backhaul Networks (WBNs) for energy-autarkic point-to-point radio backhauling. The networking concept is based on three major building blocks: cost-efficient radio transceiver hardware, a self-organizing network operations framework, and power supply from renewable energy sources. The aim of this novel backhauling approach is to combine carrier-grade network performance with reduced maintenance effort as well as independent and self-sufficient power supply. In order to facilitate the success prospects of this concept, the thesis comprises the following major contributions.
Formal, multi-domain system model and evaluation methodology:
First, adapted from the theory of cyber-physical systems, the author devises a multi-domain evaluation methodology and a system-level simulation framework for energy-autarkic coordinated WBNs, including a novel balanced scorecard concept. Second, the thesis specifically addresses the topic of Topology Control (TC) in point-to-point radio networks and how it can be exploited for network management purposes. Given a set of network nodes equipped with multiple radio transceivers and known locations, TC continuously optimizes the setup and configuration of radio links between network nodes, thus supporting initial network deployment, network operation, as well as topology re-configuration. In particular, the author shows that TC in WBNs belongs to the class of NP-hard quadratic assignment problems and that it has significant impact in operational practice, e.g., on routing efficiency, network redundancy levels, service reliability, and energy consumption. Two novel algorithms focusing on maximizing edge connectivity of network graphs are developed.
Finally, this work carries out an analytical benchmarking and a numerical performance analysis of the introduced concepts and algorithms. The author analytically derives minimum performance levels of the developed TC algorithms. For the analyzed scenarios of remote Alpine communities and rural Tanzania, the evaluation shows that the algorithms improve energy efficiency and balance energy consumption more evenly across backhaul nodes, thus significantly increasing the number of available backhaul nodes compared to state-of-the-art TC algorithms.
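As an illustration of why edge connectivity matters for topology control (this is not one of the algorithms developed in the thesis), the following sketch compares two candidate topologies with the same number of radio links using the networkx library.

```python
import networkx as nx

# Topology control toy example: from a set of candidate point-to-point radio
# links, two topologies with the same number of links can differ strongly in
# edge connectivity, i.e. in how many link failures they survive.
nodes = range(6)

ring = nx.Graph([(i, (i + 1) % 6) for i in nodes])     # 6 links forming a ring
chain_plus = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4),
                       (4, 5), (0, 2)])                 # 6 links, chain plus one chord

print("ring edge connectivity:       ", nx.edge_connectivity(ring))        # 2
print("chain+chord edge connectivity:", nx.edge_connectivity(chain_plus))  # 1
```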
This thesis proposes measures to increase the power efficiency of OFDM transmission systems. Compared to OFDM transmission over AWGN channels, OFDM transmission over frequency-selective radio channels requires a significantly larger transmit power to achieve a given transmission quality. It is well known that this detrimental impact of frequency selectivity can be combated by frequency diversity. We revisit and further investigate an approach to frequency diversity based on spreading subsets of the data elements over corresponding subsets of the OFDM subcarriers and term this approach Partial Data Spreading (PDS). The size of these subsets, which we designate the spreading factor, is a design parameter of PDS; by properly choosing it, depending on the system designer's requirements, an adequate compromise between good system performance and low complexity can be found. We show how PDS can be combined with ML, MMSE and ZF data detection, and it is recognized that MMSE data detection offers a good compromise between performance and complexity. After presenting the use of PDS in OFDM transmission without FEC encoding, we also show that PDS readily lends itself to FEC-encoded OFDM transmission. We show that in this case the system performance can be significantly enhanced by specific schemes of interleaving and of utilizing reliability information developed in the thesis.
A severe problem of OFDM transmission is the large Peak-to-Average Power Ratio (PAPR) of the OFDM symbols, which hampers the application of power-efficient transmit amplifiers. Our investigations reveal that PDS inherently reduces the PAPR. Another approach to PAPR reduction is the well-known scheme Selective Data Mapping (SDM). The thesis shows that PDS can be beneficially combined with SDM into the scheme PDS-SDM, with a view to jointly exploiting the PAPR reduction potential of both schemes. However, even when such a PAPR reduction is achieved, the amplitude maximum of the resulting OFDM symbols is not constant, but depends on the data content. This entails the disadvantage that the power amplifier cannot be designed for a fixed amplitude maximum, which would be desirable for high power efficiency. To overcome this problem, we propose the scheme Optimum Clipping (OC), in which the desired fixed amplitude maximum is obtained by a specific combination of clipping, filtering and rescaling.
In OFDM transmission, a certain number of OFDM subcarriers have to be sacrificed for pilot transmission to enable channel estimation in the receiver. For a given energy of the OFDM symbols, the question arises how this energy should be divided between the pilots and the data-carrying OFDM subcarriers. If a large portion of the available transmit energy goes to the pilots, the quality of channel estimation is good, but the data detection performs poorly. Data detection also performs poorly if the energy provided for the pilots is too small, because then the channel estimate indispensable for data detection is not accurate enough. We present a scheme to assign the energy to pilot and data OFDM subcarriers in an optimum way that minimizes the symbol error probability as the ultimate quality measure of the transmission.
The major part of the thesis is dedicated to point-to-point OFDM transmission systems. Towards the end of the thesis we show that PDS can also be applied to multipoint-to-point OFDM transmission systems, encountered for instance in the uplinks of mobile radio systems.
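For illustration, the PAPR of a single OFDM symbol can be computed directly from its time-domain samples; the subcarrier count and QPSK mapping below are assumed values.

```python
import numpy as np

# PAPR of one OFDM symbol: map random QPSK data onto N subcarriers, form the
# time-domain symbol with an IFFT, and compare peak power to average power.
rng = np.random.default_rng(0)
N = 256                                            # number of subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)                 # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")
```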
Contributions to the application of adaptive antennas and CDMA code pooling in the TD CDMA downlink
(2002)
TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent; corresponding studies for the downlink are still missing. This thesis has the goal of contributing to fill this lack of information. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink with multi-element antennas at the BS is investigated both on the system level and on the link level. The system-level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As the result of the link-level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas. These improvements include:
• the concept of Partial Joint Detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those of interest to this MS, are jointly detected,
• a blind channel estimation algorithm,
• CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates,
• maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels,
• specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes.
As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
This paper presents a completely systematic design procedure for asynchronous controllers. The initial step is the construction of a signal transition graph (STG, an interpreted Petri net) of the dialog between data path and controller: a formal representation without reference to time or internal states. To implement concurrently operating control structures, and also to reduce design effort and circuit cost, this STG can be decomposed into overlapping subnets. A universal initial solution is then obtained by algorithmically constructing a primitive flow table from each component net. This step links the procedure to classical asynchronous design, in particular to its proven optimization methods, without restricting the set of solutions. In contrast to other approaches, there is no need to extend the original STG intuitively.
Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
Colloquially, the word "data" was in use long before the computer was invented and the abbreviation EDV for "electronic data processing" entered everyday German. A tax advisor, for example, would say to a client: "Before I can finish your tax return, I still need a few data from you." Or a city's road construction officer would write to the mayor: "To decide which of the two roads in question should be upgraded first, we still have to carry out a data collection." These data were often numbers, such as amounts of money, the number of children, the number of months of employment, or counted cars, but equating data with numbers would be wrong. On the one hand, numbers without accompanying words such as "monthly income" or "number of children" would be useless to the tax advisor; on the other hand, the tax office also wants to know, among other things, the taxpayer's employer, and for that an address must be given, not a number.
For systems theory, the notion of state is a very central concept. The word "state" is used quite frequently in everyday language, but if one asked people what they mean when they use it, one would certainly not obtain the precise definition that systems theory requires.
Multiple-channel die-stacked DRAMs have been used for maximizing the performance and minimizing the power of memory access in 2.5D/3D system chips. Stacked DRAM dies can be used as a cache for the processor die in 2.5D/3D system chips. Typically, modern processor systems-on-chip (SoCs) have three cache levels, L1, L2, and L3. Which level of cache could the DRAM cache replace? In this paper, we derive an inequality that helps the designer check whether the designed DRAM cache can provide better performance than the L3 cache. Design considerations for DRAM caches to satisfy the inequality are also discussed. We find that a dilemma between DRAM cache access time and associativity exists when trying to provide better performance than the L3 cache. Organizing multiple channels into a DRAM cache is proposed to cope with this dilemma.
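The paper's exact inequality is not reproduced here, but the underlying trade-off can be illustrated with a simple average-memory-access-time comparison under assumed hit times, hit rates and miss penalty.

```python
# Illustrative check (not the paper's exact inequality): a DRAM cache pays off
# over an L3 only if its higher hit rate outweighs its longer hit latency.
def amat(hit_time_ns, hit_rate, miss_penalty_ns):
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

dram_main_memory_ns = 80.0                       # assumed miss penalty

l3   = amat(hit_time_ns=12.0, hit_rate=0.60, miss_penalty_ns=dram_main_memory_ns)
dram = amat(hit_time_ns=25.0, hit_rate=0.90, miss_penalty_ns=dram_main_memory_ns)

print(f"L3 cache AMAT:   {l3:.1f} ns")
print(f"DRAM cache AMAT: {dram:.1f} ns")
print("DRAM cache wins" if dram < l3 else "L3 cache wins")
```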
In philosophy it is taken for granted that authors who pass on or comment on the insights of earlier philosophers know the original literature and refer explicitly in their argumentation to specific passages in the original texts. In engineering, by contrast, it is generally accepted practice that authors of textbooks presenting or commenting on the findings of earlier researchers do not base their accounts on the original publications but content themselves with the presentations in the secondary literature. Consider the findings of Boole or Maxwell, which are taught in a great many textbooks on digital technology or theoretical electrical engineering without the authors of these textbooks referring to the original writings of Boole or Maxwell. One will, however, hardly find a book on the insights of Aristotle or Kant whose author does not refer explicitly to specific passages in the writings of these philosophers.
Autonomous driving is disrupting conventional automotive development. In fact, autonomous driving kicks off the consolidation of control units, i.e. the transition from distributed Electronic Control Units (ECUs) to centralized domain controllers. Platforms like Audi’s zFAS demonstrate this very clearly, where GPUs, custom SoCs, microcontrollers, and FPGAs are integrated on a single domain controller in order to perform sensor fusion, processing and decision making on a single Printed Circuit Board (PCB). The communication between these heterogeneous components and the algorithms for Advanced Driver Assistance Systems (ADAS) itself requires a huge amount of memory bandwidth, which will bring the Memory Wall from High Performance Computing (HPC) and data centers directly into our cars. In this paper we highlight the roles and issues of Dynamic Random Access Memories (DRAMs) for future autonomous driving architectures.
In this thesis we study a very common and long-standing noise problem and provide a solution to it. The task is to deal with different types of noise that occur simultaneously, which we call hybrid noise. Although there are individual solutions for specific noise types, one cannot simply combine them, because each solution affects the whole speech signal. We developed an automatic speech recognition system, DANSR (Dynamic Automatic Noisy Speech Recognition System), for hybrid environmental noise. For this we had to study the whole speech chain, from the production of sounds to their recognition. Central elements are the feature vectors, to which we pay much attention. As an additional outcome, we worked on the production of quantitative measures for psychoacoustic speech elements.
The thesis has four parts:
1) In the first part we give an introduction. Chapters 2 and 3 give an overview of speech generation and recognition by machines; noise is also considered.
2) In the second part we describe our general system for speech recognition in a noisy environment. This is contained in chapters 4-10. In chapter 4 we deal with data preparation. Chapter 5 is concerned with very strong noise and its modeling using a Poisson distribution. In chapters 5-8 we deal with parameter-based modeling. Chapter 7 is concerned with autoregressive methods in relation to the vocal tract. In chapters 8 and 9 we discuss linear prediction and its parameters. Chapter 9 is also concerned with quadratic errors and the decomposition into sub-bands, and chapter 10 with the use of Kalman filters for non-stationary colored noise. There one finds classical approaches insofar as we have used and modified them, including covariance methods, the method of Burg, and others.
3) The third part first deals with psychoacoustic questions. We look at quantitative magnitudes that describe them, which has serious consequences for the perception models. For hearing we use different scales and filters. At the center of chapters 12 and 13 are the features and their extraction; the features are the only elements that carry information for further use. We consider cepstrum features, mel-frequency cepstral coefficients (MFCC), shift-invariant local trigonometric transforms (SILTT), linear predictive coefficients (LPC), linear predictive cepstral coefficients (LPCC), and perceptual linear predictive (PLP) cepstral coefficients. In chapter 13 we present our extraction methods in DANSR and how they use window techniques and the discrete cosine transform (DCT-IV) as well as their inverses.
4) The fourth part considers classification and the ultimate speech recognition. Here we use the hidden Markov model (HMM) for describing the speech process and the Gaussian mixture model (GMM) for the acoustic modeling. For the recognition we use the forward algorithm, the Viterbi search and the Baum-Welch algorithm (a small decoding sketch follows below). We also draw the connection to dynamic time warping (DTW). The remainder presents experimental results and conclusions.
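As an illustration of the decoding step (the thesis uses GMM emission densities; a discrete emission table keeps the sketch short), a minimal Viterbi decoder looks as follows; all probabilities are assumed values.

```python
import numpy as np

# Minimal Viterbi decoder for a discrete-observation HMM: find the most likely
# state sequence for an observation sequence via dynamic programming.
def viterbi(obs, pi, A, B):
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best log-probability so far
    psi = np.zeros((T, n_states), dtype=int)  # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1, :, None] + np.log(A)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], range(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])            # state transition probabilities
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # emission probabilities
print(viterbi([0, 1, 2, 2], pi, A, B))
```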
Modern applications in the realms of wireless communication and mobile broadband Internet increase the demand for compact antennas with well defined directivity. Here, we present an approach for the design and implementation of hybrid antennas consisting of a classic feeding antenna that is near-field-coupled to a subwavelength resonator. In such a combined structure, the composite antenna always radiates at the resonance frequency of the subwavelength oscillator as well as at the resonance frequency of the feeding antenna. While the classic antenna serves as impedance-matched feeding element, the subwavelength resonator induces an additional resonance to the composite antenna. In general, these near-field coupled structures are known for decades and are lately published as near-field resonant parasitic antennas. We describe an antenna design consisting of a high-frequency electric dipole antenna at fd = 25 GHz that couples to a low-frequency subwavelength split-ring resonator, which emits electromagnetic waves at fSRR = 10.41 GHz. The radiating part of the antenna has a size of approximately 3.2 mm × 8 mm × 1 mm and thus is electrically small at this frequency with a product k·a = 0.5. The input return loss of the antenna was moderate at −18 dB and it radiated with a spectral bandwidth of 120 MHz. The measured main lobe of the antenna was observed at 60° with a −3 dB angular width of 65° in the E-plane, and at 130° with a −3 dB angular width of 145° in the H-plane.
This paper aims to improve the traditional calibration method for reconfigurable self-X (self-calibration, self-healing, self-optimization, etc.) sensor interface readout circuits for Industry 4.0. A cost-effective test stimulus is applied to the device under test, and the transient response of the system is analyzed to correlate with the circuit's characteristic parameters. Due to the complexity of the search and objective space of smart sensory electronics, a novel experience replay particle swarm optimization (ERPSO) algorithm is proposed and shown to have better searching capability than some currently well-known PSO algorithms. The newly proposed ERPSO expands the selection procedure of classical PSO by introducing an experience replay buffer (ERB), intending to reduce the probability of becoming trapped in local minima. The ERB reflects the archive of previously visited global-best particles, and its selection is based upon an adaptive epsilon-greedy method in the velocity-update model. The performance of the proposed ERPSO algorithm is verified using eight popular benchmark functions. Furthermore, an extrinsic evaluation of the ERPSO algorithm is also carried out on a reconfigurable wide-swing indirect current-feedback instrumentation amplifier (CFIA). For the latter test, we propose an efficient optimization procedure that uses total harmonic distortion analysis of the CFIA output to reduce the total number of measurements and save considerable optimization time and cost. The proposed optimization methodology is roughly 3 times faster than the classical optimization process. The circuit is implemented using Cadence design tools and 0.35 µm CMOS technology from Austria Microsystems (AMS). Efficiency and robustness are the key features of the proposed methodology toward implementing reliable sensory electronic systems for Industry 4.0 applications.
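A minimal sketch of the experience-replay idea (not the authors' exact ERPSO formulation) is shown below: with probability epsilon the social term of the velocity update is drawn from an archive of earlier global bests; the objective function and all parameters are illustrative.

```python
import random

# Sketch of PSO with an experience replay buffer (ERB) of previous global
# bests: with probability eps the social term pulls toward a randomly replayed
# old global best instead of the current one. Sphere objective is illustrative.
def sphere(x):
    return sum(v * v for v in x)

def erpso(dim=5, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, eps=0.2):
    rng = random.Random(0)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    erb = [gbest[:]]                                   # experience replay buffer
    for _ in range(iters):
        for i in range(n_particles):
            # epsilon-greedy choice of the social attractor
            social = rng.choice(erb) if rng.random() < eps else gbest
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (social[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
                    erb.append(gbest[:])               # archive the new global best
    return gbest, sphere(gbest)

print(erpso())
```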
Modern mobile radio systems based on the cellular concept are interference-limited systems. A key goal in the design of future mobile radio concepts is therefore the reduction of the interference that occurs; only in this way can the spectral efficiency of future systems still be increased significantly beyond the state of the art. The elimination of intracell interference, i.e. the mutual disturbance between signals of several users served by the same cell, through Joint Detection (JD) is already an essential feature of the TD-CDMA air interface concept. A so far largely neglected potential for increasing spectral efficiency and capacity, however, lies in the reduction of intercell interference, i.e. the interference mutually caused by users of different cells. Particularly in systems with small cluster sizes, reducing the then very strong intercell interference promises substantial gains; intercell interference reduction is therefore the logical next step after intracell interference reduction. This thesis contributes to the development of beneficial techniques for reducing intercell interference in future mobile radio systems by appropriately taking into account and eliminating the influence of intercell interference signals in the receiver-side signal processing. The goal is an improved estimate of the transmitted user data; to this end, signals from intercell interference sources are taken into account in data estimation. The information required for this is obtained with the likewise presented methods for identifying and selecting strong intercell interference sources and with a channel estimation extended with respect to the existing system design. It is shown that a low-complexity detector can reliably identify the relevant intercell interference sources. With a channel estimation method optimized for short mobile radio channels, which are increasingly to be expected in hotspots, the current channel impulse responses are determined for all relevant users. To be able to perform data estimation for many users, the estimation scheme Multi-Step Joint Detection is designed, which reduces the SNR degradation known from conventional joint detection. The simulation results demonstrate the performance of the designed system concept. The intercell interference reduction techniques can be used profitably both to increase the spectral efficiency of the system and to improve the quality of service at constant spectral efficiency.
In view of the ever-increasing demand for mobile communication on the one hand and the limited resource of frequency spectrum on the other, third-generation (3G) mobile radio systems must offer high spectral economy. This applies in particular to the downlinks of these systems, in which packet-oriented services with high data rates are also to be offered. On the base station side, the spectral efficiency of the downlink can be increased by using multi-element adaptive transmit antennas. This requires powerful signal processing concepts that allow the adaptive antennas to be combined effectively with the transmit power control in use. The most important aspects in the design of signal processing concepts for adaptive transmit antennas are guaranteeing mobile-station-specific minimum data rates and reducing the required transmit powers. This thesis contributes to advancing the use of multi-element adaptive transmit antennas in third-generation mobile radio systems. Existing concepts are presented, unified, analyzed and extended by the author's own approaches. Signal processing concepts for adaptive antennas require at least a certain degree of knowledge about the downlink mobile radio channels. In the 3G partial standard WCDMA intended for the FDD mode, the problem arises that, because of the frequency offset between uplink and downlink, the results of uplink channel estimation cannot be used directly to adjust the adaptive transmit antennas. One way of making a certain degree of knowledge about the spatial properties of the downlink channels available at the base stations of FDD systems is to exploit the spatial correlation matrices of the uplink channels, which can be determined at the base stations. This approach is only reasonable if the relevant directions of arrival in the uplink coincide with the relevant directions of departure in the downlink. For this case, a low-complexity method for adjusting the adaptive transmit antennas is developed in this thesis which does not rely on complex direction-of-arrival estimation algorithms. A more reliable way of making knowledge about the spatial properties of the downlink channels available at the base stations is to signal channel state information obtained at the mobile stations back to the serving base station over a feedback channel. Since this feedback is time-critical and the transmission capacity of the feedback channel is limited, a low-complexity method for preprocessing and feeding back channel state information is developed in this thesis.
Ein Beitrag zur Zustandsschätzung in Niederspannungsnetzen mit niedrigredundanter Messwertaufnahme
(2020)
Due to the growing share of generation units and high-power loads from the transport and heating sectors, low-voltage grids are approaching their operating limits. Since no measurement infrastructure has so far been foreseen for low-voltage grids, grid operators cannot detect limit violations. To change this, German connection users will in future be equipped nationwide with modern metering devices or intelligent metering systems (also referred to as smart meters). These are able to send measurement data to the grid operators via a communication unit, the smart meter gateway. If, however, measurement data are declared personal grid state data, collecting them is largely prohibited for data protection reasons.
The goal of this work is to develop a state estimation that delivers results usable for the operation of low-voltage grids even with low-redundancy measurement acquisition. Besides suitable state estimation algorithms, the focus is on the generation of substitute values.
The investigations and findings of this work help distribution system operators make the key decisions concerning state estimation in low-voltage grids. Only when low-voltage grids become observable by means of state estimation can control concepts building on it be developed to support the energy transition.
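As an illustration of the estimation principle (not the algorithms developed in this work), the following sketch shows a linear weighted-least-squares state estimation in which an inaccurate substitute value complements a low-redundancy measurement set; all numbers are assumed, and real distribution grid state estimation uses a nonlinear power flow model.

```python
import numpy as np

# Toy weighted-least-squares state estimation: a few accurate measurements plus
# a coarse substitute value with a large variance. Model z = H x + e, estimate
# x_hat = (H^T W H)^{-1} H^T W z.
H = np.array([[1.0, 0.0],      # smart-meter measurement of state 1
              [0.0, 1.0],      # smart-meter measurement of state 2
              [1.0, 1.0]])     # substitute value for the sum (load estimate)
z = np.array([0.98, 1.02, 2.10])
sigma = np.array([0.01, 0.01, 0.10])       # substitute value is far less accurate
W = np.diag(1.0 / sigma**2)

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated state:", x_hat)
```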
Since the advent of semiconductor technology there has been a trend toward the miniaturization of electronic systems. This, together with rising requirements and the increasing integration of various sensors for interacting with the environment, makes such embedded systems, as found for example in mobile devices or vehicles, ever more complex. The consequences are longer development times and an ever higher component count, while reductions in size and energy consumption are demanded at the same time. The design of multi-sensor systems in particular requires dedicated sensor electronics for each sensor type used and thus runs counter to the demands for miniaturization and low power consumption.
This research work addresses the problem described above and discusses the development of a universal sensor interface for precisely such multi-sensor systems. As a single integrated device, this interface can serve as the sensor electronics for up to nine different sensors of different types. The measurable quantities comprise voltage, current, resistance, capacitance, inductance and impedance.
Dynamic reconfigurability and application-specific programming allow a variable configuration according to the respective requirements. Both the development effort and the component count can be reduced considerably thanks to this interface, which also includes a power-saving mode.
The flexible structure enables the construction of intelligent systems with so-called self-x characteristics. These concern capabilities for autonomous system monitoring, calibration or repair and thus contribute to increased robustness and fault tolerance. As a further innovation, the universal interface contains novel circuit and sensor concepts, for example for measuring the chip temperature or compensating thermal influences on the sensors.
Two different applications demonstrate the functionality of the manufactured prototypes. The realized applications concern food analysis and three-dimensional magnetic localization.
A method for determining the earth-fault distance in high-impedance earthed networks is presented. After the transient phenomena of the fault have decayed, a steady state is reached in which the network can initially remain in operation.
Starting from this steady-state fault condition, the line model of a radial feeder fed from one end with a load is developed in a four-conductor representation on the basis of a Π-section. The circuit analysis uses complex phasor calculus and Kirchhoff's laws. The considerations are based on a network with an isolated neutral point.
The resulting system of equations is nonlinear in its basic form, but it can be reduced to an elementarily solvable cubic equation in the sought fault-distance parameter. The Newton-Raphson method offers a further solution approach. By moving the load-side line-to-earth capacitances to the beginning of the feeder, the complete nonlinear system can be transformed into a linear one; here the two variants "direct solution with unsymmetrical load" and "least-squares adjustment with symmetrical load" are possible. A MATLAB® implementation of these four algorithms forms the basis of the further analyses.
All measurements were carried out on the network and power plant model of TU Kaiserslautern. Various fault scenarios with respect to fault distance, fault resistance and size of the healthy remaining network were set up, recorded in 480 individual measurements and evaluated with the algorithms. Measurements on fault-free feeders were also taken in order to test the detection capability of the algorithms.
Besides fundamental-frequency considerations, the evaluation of all data sets with the 5th and 7th harmonics is a central topic; the focus is on the usability of these harmonics for earth-fault distance measurement and detection with the above algorithms. Of particular importance is the question of how far the algorithms, designed for a network with an isolated neutral point, are suitable for earth-fault distance measurement in a compensated (resonant-earthed) network when the higher harmonics are used.
Finally, the method is extended to feeders with inhomogeneous conductor material, since this configuration is also of practical relevance.
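The cubic equation in the fault-distance parameter can be solved in closed form or iteratively; as an illustration of the latter, a generic Newton-Raphson iteration is sketched below with purely illustrative coefficients, not those of the derived line model.

```python
# Newton-Raphson iteration on a cubic f(k) = a3*k^3 + a2*k^2 + a1*k + a0 in the
# fault-distance parameter k; the coefficients here are illustrative only.
def newton_raphson(coeffs, k0=0.5, tol=1e-9, max_iter=50):
    a3, a2, a1, a0 = coeffs
    k = k0
    for _ in range(max_iter):
        f = ((a3 * k + a2) * k + a1) * k + a0        # Horner evaluation of f(k)
        df = (3 * a3 * k + 2 * a2) * k + a1          # derivative f'(k)
        step = f / df
        k -= step
        if abs(step) < tol:
            break
    return k

# Cubic with a root at k = 0.3 (i.e. the fault at 30 % of the feeder length):
print(newton_raphson((1.0, -0.3, 1.0, -0.3)))
```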
Starting from state-of-the-art batteries, the influences acting on them are examined. It turns out that batteries for traction and stationary applications in particular can only be manufactured with tolerances that are far from negligible. Added to this is the influence of operation, which, together with manufacturing tolerance and storage stress, makes it mandatory to monitor the battery with an electronic monitoring and control unit, a so-called battery management system; only in this way can the current quality of a battery be maintained or improved. A classification of battery management systems follows, subdivided into electrical battery management, thermal battery management and charge equalization; for each of these three subunits a number of topologies are defined. The battery management system with distributed data acquisition units and energy feed for charge equalization is described in detail as an example of a device development. Fundamental to battery management systems are their algorithms: after defining the electrical and thermal operating ranges of various batteries, typical algorithms for charging and discharging operation are presented, and methods for determining the current state of charge of a battery are discussed. In the case of operation in an islanded grid, the insulation resistance also enters into the current quality of a battery; a device for measuring the insulation resistance is described in more detail. Battery management demands high accuracy in data acquisition, so typical data acquisition units and sensors are examined for their error sources and the resulting tolerances, and calibration options are discussed. A further point is the testing of battery management systems. A test with a real battery takes a long time and involves the risk of damaging the battery; a computer-based system for testing the electrical battery management is therefore presented. Finally, integrated battery management systems, as used in consumer devices, are classified, and it is shown that these integrated solutions can be used as a replacement for data acquisition units for batteries in traction and stationary operation. Overall it is demonstrated that electronic monitoring and control units, independent of battery type and application, are not only useful for maintaining or improving the current quality of batteries but are absolutely necessary for newer battery technologies.
Emerging Memories (EMs) could benefit from Error Correcting Codes (ECCs) able to correct a few errors within a few nanoseconds. The low latency is necessary to meet the DRAM-like and/or eXecute-in-Place requirements of Storage Class Memory devices. The error correction capability would help manufacturers cope with unknown failure mechanisms and fulfill the market demand for a rapid increase in density. This paper shows the design of an ECC decoder for a shortened BCH code with a 256-data-bit page able to correct three errors in less than 3 ns. The tight latency constraint is met by pre-computing the coefficients of carefully chosen error locator polynomials, by optimizing the operations in the Galois fields, and by resorting to a fully parallel combinatorial implementation of the decoder. The latency and the area occupancy are first estimated by the number of elementary gates to traverse and by the total number of elementary gates of the decoder. Eventually, the implementation of the solution with the Synopsys topographical synthesis methodology in a 54 nm logic gate length CMOS technology gives a latency lower than 3 ns and a total area of less than \(250 \cdot 10^3 \mu m^2\).
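As an illustration of the kind of Galois-field arithmetic such a decoder optimizes (the actual field and reduction polynomial depend on the chosen BCH code), a bit-serial GF(2^8) multiplication looks as follows.

```python
# Bit-serial multiplication in GF(2^8) (shift-and-add with modular reduction);
# the reduction polynomial x^8+x^4+x^3+x+1 (0x11B) is illustrative only.
def gf_mul(a, b, poly=0x11B, width=8):
    result = 0
    while b:
        if b & 1:
            result ^= a                 # carry-less "add" is XOR
        b >>= 1
        a <<= 1
        if a & (1 << width):            # reduce modulo the field polynomial
            a ^= poly
    return result

print(hex(gf_mul(0x57, 0x83)))          # well-known GF(2^8) test vector: 0xc1
```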
Today's mobile radio systems use exclusively transmitter-oriented radio communication. In transmitter-oriented radio communication, the system design starts at the transmitter: the algorithms used at the transmitter for generating the transmit signals are chosen a priori, and the algorithm used at the receiver for data estimation is fixed a posteriori, possibly taking channel state information into account. This is necessary, for example, to exploit at the receiver as large a share as possible of the energy invested at the transmitter, i.e. to be energy efficient, while at the same time avoiding or limiting harmful interference signals. With transmitter orientation, very simple algorithms can be chosen and implemented at the transmitter, but this advantage typically has to be paid for by a considerably higher implementation complexity of the receiver-side algorithms fixed a posteriori. Considering the economically important cellular mobile radio systems, such radio communication is advantageous in the uplink, because there the terminals of the mobile subscribers, the mobile stations, are the simple transmitters, whereas the fixed base stations are the receivers, where a higher complexity can typically be tolerated. In the downlink of such systems, however, the base stations are the simple transmitters, whereas the mobile stations are the complex receivers. This is not advantageous, because in practical mobile radio systems weight, volume, energy consumption and cost of the terminal hardware, and hence of the mobile stations, grow with the implementation complexity. As the author proposes in this thesis, this problem can be circumvented, because radio communication in mobile radio systems can also be organized in a novel, receiver-oriented way. Receiver-oriented radio communication is characterized by the system design starting at the receiver side: the receiver-side data estimation algorithms are fixed a priori, and the transmitter-side algorithms for transmit signal generation then follow a posteriori by adaptation, again possibly taking channel state information into account. With receiver orientation, very simple algorithms can be chosen and implemented at the receiver, at the price of a higher implementation complexity at the transmitter. In view of these complexity characteristics, the author proposes to use receiver orientation in the downlink and transmitter orientation in the uplink of future mobile radio systems. This is particularly advantageous because receiver orientation in the downlink offers, among others, the following further advantages over conventional transmitter orientation: 1) The power of the signals radiated by the base stations can be reduced. This lowers the performance-limiting, system-inherent impairment known as intercell interference and is, moreover, desirable in view of the population's growing concern about electromagnetic radiation.
2) Channel state information is not required at the receiver, so that the transmission of resource-consuming training signals can be dispensed with and user data can be transmitted instead. 3) No channel estimator is needed at the receiver, which further benefits the implementation complexity of the receiver. Mobile radio systems can therefore be upgraded considerably by applying the basic concept of receiver orientation. This is a clear motivation to study in depth, in this thesis, the fundamentals, the potential and the possible realizations of this concept in mobile communications. To clarify these points in the context of mobile communications, it is essential to answer the question of the choice of the receivers and that of the adaptation of the transmitters. The question of transmitter adaptation is equivalent to the question of joint transmit signal generation, which is in general based on all data. After the introduction of a suitable general model of the downlink transmission of a cellular mobile radio system, which also covers recently proposed multi-antenna configurations at the base stations and mobile stations, it is shown with respect to the a-priori choice of the receivers that, in view of the desired low implementation complexity, structuring the receiver-side signal processing as a serial concatenation of a linear signal processing stage and a nonlinear quantizer is advantageous. The principles governing the choice of both the linear signal processing and the nonlinear quantizer are then worked out. It turns out that designing the receiver-side linear signal processing according to code division multiplexing is advantageous with respect to the exploitable frequency, time and space diversity, but requires powerful schemes of joint transmit signal generation that prevent the emergence of harmful interference. Furthermore, it becomes clear that the nonlinear quantizers can usefully be divided into the class of conventional and the class of unconventional quantizers; the same holds for the receivers using these quantizers. Conventional quantizers are based on simply connected decision regions, each of which is uniquely assigned to one possible value of a transmitted message element. Unconventional quantizers, in contrast, have multiply connected decision regions, each composed of several partial decision regions. The availability of several partial decision regions per decision region, and hence per value of a transmitted message element, constitutes an additional degree of freedom of unconventional quantizers that can be exploited in joint transmit signal generation to reduce the aforementioned power radiated by the base stations. A main focus of this thesis is the study of schemes for joint transmit signal generation, which are therefore systematically classified and developed. It turns out that schemes for joint transmit signal generation can in principle be divided into schemes for conventional receivers and schemes for unconventional receivers.
Concerning schemes of the first kind, it is worked out how an optimum joint transmit signal generation has to be performed that, under certain constraints, achieves an optimum transmission quality in the sense of minimum transmission error probability. Such a joint transmit signal generation is in general quite complex, so that subsequently the suboptimum linear schemes Transmit Matched Filter (TxMF), Transmit Zero-Forcing (TxZF) and Transmit Minimum Mean Square Error (TxMMSE) are proposed, each of which offers a different compromise between implementation complexity, interference suppression and robustness against noise. The author proposes to assess the performance of such suboptimum schemes, among other criteria, by the total energy of the transmit signals radiated in a given time interval, the total transmit energy, which is an important aspect not only in a technical but also in a societal sense, and by the criterion of transmit efficiency. Transmit efficiency judges the interplay of interference suppression on the one hand and energy-efficient transmission on the other. Analytical and numerical considerations show that both quantities are primarily determined by two factors: the number of degrees of freedom in joint transmit signal generation, i.e. the number of samples of all transmit signals to be determined, and the number of constraints to be fulfilled. Since the number of constraints cannot be influenced if minimum mutual interference is demanded, the author proposes to increase the performance of receiver-oriented radio communication by increasing the number of degrees of freedom, which is preferably achieved by following the principle of unconventional receivers. It is shown how, under certain constraints, a joint transmit signal generation that is optimum with respect to the transmission error probabilities has to be performed in principle, and which substantial performance gains in terms of total transmit energy and transmit efficiency become possible. Since this optimum approach is very complex, low-complexity suboptimum high-performance alternatives of joint transmit signal generation for unconventional receivers are additionally proposed and studied. Joint transmit signal generation presupposes channel state information at the transmitter. Therefore, the principal ways of providing this information are discussed, with preference given to exploiting channel reciprocity, if present, in the case of duplex transmission: the channel state information obtained in the uplink is then used for joint transmit signal generation in the downlink. If the channel state information used is not exact, the performance of receiver-oriented radio communication degrades. Analytical and/or numerical considerations allow this degradation to be quantified; it turns out to be comparable to the degradation known from conventional transmitter-oriented radio communication systems.
A consideration of possible further developments of the basic principle of receiver orientation completes the investigations of this thesis. The results show that receiver orientation is an interesting candidate for organizing the downlink transmission of future mobile radio systems. Moreover, it becomes clear which fundamental principles and effects are at work in receiver-oriented radio communication and by which design measures the influences of the various effects can be balanced against each other. This thesis thus provides the system designer of tomorrow's mobile radio systems with a valuable reference that helps to turn the stated principal advantages of receiver orientation into practical radio technologies.
Receiver-oriented transmission schemes are characterized by the fact that the signal processing algorithm used in the transmitter is adapted to the signal processing algorithm used in the receiver. This is usually done with additional channel information that is available only at the transmitter and not at the receiver. In receiver-oriented systems, particularly simple algorithms can be implemented in the receivers, which, in the case of downlink transmission in a mobile radio system, are the mobile stations. This translates into low production cost and low energy consumption of the mobile stations. To nevertheless guarantee a certain quality of the data transmission, receiver orientation invests more effort in the base station of the mobile radio system. The transmission schemes currently in use and foreseen for the third mobile radio generation (UMTS) are transmitter-oriented, meaning that the signal processing algorithm in the receiver is adapted to the signal processing algorithm of the transmitter. With transmitter orientation, too, channel information is usually included in the adaptation process in the receiver. To obtain this channel information, test signals are required from which the channel information can be estimated. Such test signals can be omitted in the downlink of a receiver-oriented mobile radio system; data can be transmitted instead, which increases the data rate compared to transmitter-oriented systems. Suitable criteria are required to assess the performance of transmission schemes; usually bit error probabilities or signal-to-interference ratios are used. Since the amount of transmit energy to be spent is an important aspect of future mobile radio systems not only technically but also socially, the author proposes the criterion of energy efficiency. Energy efficiency assesses the interplay of the signal processing algorithms of the transmitter and the receiver while taking the channel properties into account, relating the usable receive energy to the invested transmit energy. From the energy efficiencies determined and the analytical considerations in this work, it can be concluded that receiver-oriented transmission schemes are preferable to transmitter-oriented ones for downlink transmission in mobile radio systems whenever relatively many antennas are available at the base station and relatively few at the mobile stations. This is already the case today and is also to be expected in future mobile radio systems. Furthermore, the channel-oriented transmission scheme investigated in passing, in which the signal processing algorithms of both the transmitter and the receiver are adapted to the channel information, opens up a wide field for future research.
In this paper, we show the feasibility of low supply voltage for SRAM (Static Random Access Memory) by adding error correction coding (ECC). In SRAM, the memory matrix needs to be powered for data-retentive standby operation, resulting in standby leakage current. Particularly for low duty-cycle systems, the energy consumed due to standby leakage current can become significant. Lowering the supply voltage (VDD) during standby mode to below the specified data retention voltage (DRV) helps decrease the leakage current. At these VDD levels errors start to appear, which we can remedy by adding ECC. We show in this paper that the addition of a simple single error correcting (SEC) ECC enables us to decrease the leakage current by 45% and the leakage power by 72%. We verify this on a large set of commercially available standard 40 nm SRAMs.
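For illustration only, the following Python sketch shows single-error correction with a toy Hamming(7,4) code, where the syndrome of a single flipped bit directly encodes its position; the actual SRAM design presumably protects wider words, and the codeword and flipped position below are arbitrary assumptions.

import numpy as np

# Columns of H are the binary representations of 1..7, so the syndrome of a
# single-bit error directly encodes the (1-based) position of the flipped bit.
H = np.array([[(col >> k) & 1 for col in range(1, 8)] for k in range(2, -1, -1)])

def correct_single_error(word):
    """Correct at most one flipped bit in a 7-bit Hamming codeword."""
    syndrome = H.dot(word) % 2
    pos = int("".join(map(str, syndrome)), 2)   # 0 means no error detected
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

codeword = np.zeros(7, dtype=int)       # the all-zero word is always a codeword
corrupted = codeword.copy()
corrupted[4] ^= 1                       # emulate a retention failure at low VDD
print(correct_single_error(corrupted))  # -> all zeros again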
Entwicklung eines Verfahrens zur dreiphasigen Zustandsschätzung in vermaschten Niederspannungsnetzen
(2018)
In the course of the energy transition, operators of low-voltage grids are confronted with rising grid loads due to the continuing expansion of distributed generation and the advent of electromobility. In the future, secure grid operation without line overloads can in principle only be guaranteed if the grid state is determined by suitable systems and, on this basis, intelligent grid management with controlling interventions is performed.
This work deals with the development and testing of a method for three-phase state estimation in meshed low-voltage grids. The input data are voltage and current measurements, which are essentially acquired by smart meters at household connection points. The method aims at detecting limit violations with a high probability.
Besides the overall system concept, the main topics are, on the one hand, the pre-processing of the system input data in the form of generating substitute measurement values and detecting topology errors and, on the other hand, the development of an estimation algorithm with a linear measurement model and the capability to localize grossly erroneous measurement data.
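As a hedged illustration of an estimator with a linear measurement model, the following Python sketch solves a tiny weighted-least-squares problem z = H x + e and computes residuals, which are commonly used to flag gross measurement errors; the matrices, measurements and weights are made-up toy values, not taken from the thesis.

import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # linear measurement model (3 measurements, 2 states)
z = np.array([1.02, 0.98, 2.01])    # measured values
W = np.diag([1 / 0.01**2, 1 / 0.01**2, 1 / 0.02**2])  # inverse measurement variances

# Weighted least squares: x = (H^T W H)^{-1} H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Residuals are a common basis for detecting grossly erroneous measurements.
residuals = z - H @ x_hat
print(x_hat, residuals)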
The typical task of a short-range radar network is to detect, locate and track vehicles within a defined surveillance area, for example the apron of an airport. Because of the strongly differing radar cross sections of the targets, the requirements on the available dynamic range of the individual radar receivers are very high. At low radar signal power, the use of a pulse compression scheme is therefore necessary. In the short-range radar network NRN, in whose development this work originated, a novel localization principle is additionally employed, so that the radar stations can be equipped with fixed, i.e. non-rotating, antennas with broad antenna patterns. Radar signals are composed of the echoes from objects reflecting the radar transmit pulse and of noise. The reflecting objects are not only the radar targets of interest, i.e. the vehicles to be detected. Because a short-range radar network operates close to the ground, and because of the broad antenna patterns used at least in the NRN, the radar beam illuminates a multitude of further reflectors, whose echo signal, called the clutter signal, is superimposed on the actual useful signal. Moreover, the use of a pulse compression scheme generally creates an artificial interfering signal component, the pulse compression sidelobes, also referred to as self-clutter. By using an unbiased pulse compression scheme in the NRN, theoretically no self-clutter component is generated. However, there are effects that destroy this freedom from self-clutter. These are investigated in the first part of this work, and it is shown how the freedom from self-clutter can be restored. In the second part, the clutter signal from reflecting objects is analyzed on the basis of signal time series measured with the NRN, and a model for describing the clutter signal is developed. Using the methods of detection theory, an optimum filtering and detection scheme is derived for a completely unknown useful signal in an interfering signal that can be described by this model. To apply this scheme, the model parameters must be known. In principle, various methods exist for estimating the model parameters, which change over time; the filtering and detection scheme can then be continuously adapted to the current estimates of the parameters of the clutter signal model. However, if useful signal components are present, the estimation yields falsified parameter values. In this work, several methods of adaptation control are proposed that minimize the influence of these falsified parameter estimates on the detection of the useful signal. The result is an algorithm that adaptively determines clutter-describing parameters from the echo signal, which are in turn used by a filtering and detection algorithm to optimally detect a useful signal possibly present in the echo signal. Finally, the performance of the developed adaptive filtering and detection scheme with adaptation control when used in a short-range radar network is demonstrated on radar echo signals recorded with the NRN during measurement campaigns as well as on simulations.
In this work, a heart rate controller for a bicycle home trainer was designed and implemented in Matlab/Simulink. The rider's heart rate is acquired wirelessly and regulated via an underlying power control loop. An eddy current brake that brakes the rear wheel of the bicycle serves as the actuator. The work describes the controller design, the modeling of the human, the system setup and various tests.
The objective of this thesis is to develop systematic event-triggered control designs for specified event generators, an important alternative to traditional periodic sampling control. The sporadic sampling inherent to event-triggered control is determined by the event-triggering conditions. This feature calls for a new control theory playing the role that traditional sampled-data theory plays in computer control.
Developing a controller coupled with the applied event-triggering condition so as to maximize the control performance is the essence of event-triggered control design. In this design, the stability of the control system must be ensured with the highest priority. The various control aims should be clearly incorporated in the design procedures. For applications in embedded control systems, efficient implementation requires a low complexity of the embedded software architecture. The thesis aims at offering such designs and thereby further completing the theory of event-triggered control.
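The following Python sketch illustrates one common relative-threshold event-triggering rule from the literature (not necessarily the condition developed in the thesis): the state-feedback input is only updated when the deviation from the last sampled state exceeds a fraction of the current state norm. The plant parameters and the threshold sigma are assumptions for a scalar toy example.

import numpy as np

a, b, k = 1.0, 1.0, 2.0          # unstable plant dx = a*x + b*u, feedback u = -k*x
sigma, dt = 0.3, 1e-3            # triggering threshold and simulation step

x, x_event, events = 1.0, 1.0, 0
for step in range(int(5 / dt)):
    # Event condition: measurement error exceeds a fraction of the state norm.
    if abs(x - x_event) >= sigma * abs(x):
        x_event = x              # sample the state and update the controller
        events += 1
    u = -k * x_event             # control based on the last sampled state
    x += dt * (a * x + b * u)    # Euler step of the plant

print(f"final state {x:.4f}, control updates {events} of {int(5 / dt)} steps")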
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely base station to user in the downlink and user to base station in the uplink. These sequences are designed for optimum periodic correlation properties: sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, allocating spreading sequences based on the users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated with respect to the gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
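As a small illustration of the periodic correlation properties mentioned above, the following Python sketch computes the cyclic auto- and cross-correlation of two length-7 ±1 sequences; the sequences are illustrative m-sequence-type examples, not the ones used in the thesis.

import numpy as np

def periodic_correlation(a, b):
    """Periodic (cyclic) correlation of two equal-length ±1 sequences
    for all cyclic shifts."""
    n = len(a)
    return np.array([np.sum(a * np.roll(b, shift)) for shift in range(n)])

s1 = np.array([1, 1, 1, -1, 1, -1, -1])   # length-7 m-sequence in ±1 form
s2 = np.roll(s1, 3) * -1                  # a second, related sequence

print("auto :", periodic_correlation(s1, s1))   # peak 7 at shift 0, -1 elsewhere
print("cross:", periodic_correlation(s1, s2))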
In cryptography, secret keys are used to ensure the confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract a secret key. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key in the form of a vector of bits (0, 1). Due to errors in measurement and quantisation and due to additive Gaussian noise, the bits of the preliminary keys disagree at some positions. These errors are corrected using error detection and correction methods to obtain a synchronised key at both nodes. Further, the entropy of the key is enhanced by secure hashing in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. If, instead of quantising the channel measurements directly, their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made more efficient and faster. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The results are quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. The methods are evaluated and compared in terms of bit disagreement rate, key generation rate, tests of randomness, a robustness test, and an eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
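The following Python sketch illustrates the quantisation step in a toy setting: two nodes observe noisy but reciprocal channel measurements and quantise them against their own mean to obtain preliminary key bits, whose remaining disagreement would subsequently be removed by information reconciliation. All signal and noise values are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
channel = rng.normal(0.0, 1.0, 128)              # common reciprocal fading profile
rssi_a = channel + rng.normal(0.0, 0.1, 128)     # node A's noisy observation
rssi_b = channel + rng.normal(0.0, 0.1, 128)     # node B's noisy observation

bits_a = (rssi_a > rssi_a.mean()).astype(int)    # mean-threshold binary quantisation
bits_b = (rssi_b > rssi_b.mean()).astype(int)

disagreement = np.mean(bits_a != bits_b)         # to be removed by reconciliation
print(f"bit disagreement rate: {disagreement:.2%}")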
The advances in sensor technology have introduced smart electronic products with a high degree of integration of multi-sensor elements, sensor electronics and sophisticated signal processing algorithms, resulting in intelligent sensor systems with a significant level of complexity. This complexity makes them more vulnerable when performing their respective functions in a dynamic environment. System dependability can be improved by implementing self-x features in reconfigurable systems. The reconfiguration capability requires capable switching elements, typically in the form of a CMOS switch or a miniaturized electromagnetic relay. The emerging DC-MEMS switch has the potential to complement the CMOS switch in System-in-Package as well as integrated circuit solutions. The aim of this thesis is to study the feasibility of using DC-MEMS switches to enable self-x functionality at the system level. The self-x implementation is also extended to the component level, in which the ISE-DC-MEMS switch is equipped with self-monitoring and self-repairing features. The MEMS electrical behavioural model generated by the design tool is inadequate, so additional electrical models have been proposed, simulated and validated. The simplification of the mechanical MEMS model produced inaccurate simulation results that led to the occurrence of stiction in the actual device. A stiction conformity test has been proposed, implemented and successfully validated to compensate for the inaccurate mechanical model. Four different system simulations of representative applications were carried out using the improved behavioural MEMS model to show the aptness and the performance of the ISE-DC-MEMS switch in sensitive reconfiguration tasks and to compare it with transmission gates. The current design of the ISE-DC-MEMS switch needs further optimization in terms of size, driving voltage and robustness to guarantee a high output yield and to match the performance of commercial DC-MEMS switches.
Bees are recognized as an indispensable link in the human food chain and in the general ecological system. Numerous threats, from pesticides to parasites, endanger bees and frequently lead to hive collapse. The Varroa destructor mite is a key threat to beekeeping, and monitoring the hive infestation level is of major concern for effective treatment. Sensors and automation, as established in condition monitoring and Industry 4.0, combined with machine learning can help. In numerous activities, a rich variety of sensors has been applied to apiary/hive instrumentation and bee monitoring. Quite recent activities try to estimate the varroa infestation level by hive air analysis based on gas sensing and gas sensor systems. In our work in the IndusBee4.0 project [8, 11], a hive-integrated, compact, autonomous gas sensing system for varroa infestation level estimation based on low-cost, highly integrated gas sensors was conceived and applied. This paper extends [11] with the first results of a mid-term investigation from July to September 2020, up to the formic acid treatment. For the hive considered, a detection probability of more than 79 % was achieved based on the SGP30 gas sensor readings.
Small embedded devices are highly specialized platforms that integrate several peripherals alongside the CPU core. Embedded devices rely extensively on firmware (FW) to control and access the peripherals as well as other important functionality. Customizing embedded computing platforms to specific application domains often necessitates optimizing the firmware and/or the HW/SW interface under tight resource constraints. Such optimizations frequently alter the communication between the firmware and the peripheral devices, possibly compromising the functional correctness of the input/output behavior of the embedded system. This poses challenges to the development and verification of such systems: the system must be adapted to, and verified for, each specific device configuration.
This thesis presents a formal approach to formulate these verification tasks at several levels of abstraction, along with corresponding HW/SW co-equivalence checking techniques for verifying correct I/O behavior of peripherals under a modified firmware. The feasibility of the approach is shown on several case studies, including industrial driver software as well as open-source peripherals. In addition, a subtle bug in one of the peripherals and several undocumented preconditions for correct device behavior were detected by the verification method.
In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.
The architectures of many technical systems are currently undergoing fundamental change. The progressive use of networks of intelligent computing nodes leads to new requirements for the design and analysis of the resulting systems. The analysis of the timing behaviour, with its relations to safety and performance, plays a central role here. Networked automation systems (NAS) differ from other distributed real-time systems in their cyclic component behaviour. The overall behaviour emerging from the asynchronous interconnection can hardly be analysed with classical methods. For the analysis of NAS, the use of probabilistic model checking (PMC) is therefore proposed. PMC allows detailed, quantitative statements about the system behaviour. For the required modelling of the system on the basis of probabilistic timed automata, the description language DesLaNAS is introduced. As an example, the influence of different components and behavioural modes on the response time of a NAS is investigated, and the results are validated by laboratory measurements.
Formalismen und Anschauung
(1999)
Mobile radio systems are interference limited. A significant increase in the performance of future mobile radio systems can therefore only be achieved by using techniques that reduce the harmful effect of interference. A particularly attractive class of such techniques are those of joint receive signal processing; however, the systematic design and analysis of such techniques for CDMA mobile radio systems with infinite or quasi-infinite data transmission, a class of future mobile radio systems of particular interest in view of the third-generation cellular systems currently going into operation, has so far remained unclear. This thesis contributes to systematizing the design and optimization process of joint receive signal processing techniques for mobile radio systems of this kind. To this end, it is shown that the task of joint receive signal processing can be decomposed into the five subtasks of block forming, data assignment, interblock signal processing, intrablock signal processing, and combining & deciding. After clearly defining and delimiting these five subtasks in a first step, solution proposals that are optimum or suboptimum according to certain criteria are developed for each subtask in a second step. Novel approaches are proposed for each subtask, with the focus both on optimizing their performance and on aspects relevant to practical realizability. A key role falls to the intrablock signal processing techniques, whose task is to determine, from segments of the received signal, estimates of the data contributing to the respective segment. The proposed intrablock signal processing techniques essentially rely on iterative versions of known linear estimators extended by a nonlinear estimate refiner. The nonlinear estimate refiner exploits a-priori information, such as knowledge of the data symbol alphabet and of the a-priori probabilities of the data to be transmitted, to increase the reliability of the data estimates. The various versions of the iteratively realized linear estimators and the various estimate refiners form a kind of construction kit that allows a tailored intrablock signal processing technique to be built for many applications. Building on the developed systematic design principle, a joint receive signal processing technique tailored to an exemplary CDMA mobile radio system with synchronous multiple access is finally proposed. The presented simulation results show that, starting from currently favoured data estimation techniques that do not follow the principle of joint receive signal processing, the number of simultaneously active CDMA codes in typical mobile radio scenarios can be increased by almost an order of magnitude through the proposed joint receive signal processing technique, without degrading the reliability of the estimates observed at the reference receiver for a given signal-to-interference ratio.
Therefore, the use of joint receive signal processing techniques is a promising measure for increasing the capacity of future mobile radio systems.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks of an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in a task's execution time due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU scheduling of the other cores, but also between the WCET and the memory bandwidth assigned to the cores over time. Thus, solutions are needed that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory-bandwidth assignment to active cores over time, where the number of active cores denotes the cores with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge sort. Second, we demonstrate, using a real avionics-certified safety-critical application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework for WCET analysis under dynamic memory bandwidth partitioning when the changes of the memory bandwidth assignment to cores are time-triggered and known. It provides a stall-maximization algorithm that has a complexity similar to a concave optimization problem and efficiently implements the WCET analysis. Last, we demonstrate, using an Integrated Modular Avionics scenario, that dynamic memory assignments and WCET analysis with our method significantly improve schedulability compared to the state of the art.
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of the possible types of prior knowledge.
Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, such that its dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small-sized time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of the oxidative stress response.
Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
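Since the approach rests on the semi-tensor product (STP), a small Python sketch of its standard definition, A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with n = cols(A), p = rows(B) and t = lcm(n, p), may be helpful. The Boolean example that follows uses the usual vector form of Boolean variables and the structure matrix of logical AND; it is an illustration of the notation, not of the identification algorithm itself.

import numpy as np

def stp(A, B):
    """Semi-tensor product of two matrices."""
    n, p = A.shape[1], B.shape[0]
    t = np.lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Vector form of Boolean variables: True = [1,0]^T, False = [0,1]^T.
TRUE, FALSE = np.array([[1], [0]]), np.array([[0], [1]])
M_AND = np.array([[1, 0, 0, 0],      # structure matrix of logical AND
                  [0, 1, 1, 1]])

print(stp(stp(M_AND, TRUE), FALSE))  # -> [[0], [1]], i.e. True AND False = False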
After an introduction to the methods used for acquiring intracorporeal measurands and for transmitting the measurement data, this work presents a concept that supports the development of implantable measurement systems in the early design phases. Subsequently, the development of a multisensory measurement and monitoring system carried out with this concept is described. To derive the design concept, the requirements placed on implantable measurement systems are analyzed and divided into class-specific and application-specific requirements. From the class-specific requirements of this compilation, a structural model that is generally valid for the measurement systems investigated here is created, subdivided into function blocks and function groups. For the design of the interconnected components of this structural model, an order is specified that takes the existing dependencies between the functional units into account. Then, following this design order and taking the application-specific requirements into account, the individual function blocks are specified in detail and a requirements list is compiled for each function block. Using these requirements lists, the function groups are determined. One of these function groups is the wireless energy supply; for its design, a method is presented with which inductive transmission links can be calculated. By using the described concept, the developer of an implantable measurement system is supported in particular in the system specification and system partitioning. The compiled catalogue of requirements facilitates the specification of the measurement system. For each function block of the generally valid structural model, the developer obtains an application-dependent detailed specification that enables the determination of the function groups required to build the function blocks. The result is a detailed structure of the measurement system to be designed in which all requirements placed on the system are taken into account, providing a sound starting point for the development of the function groups. The method for designing inductive energy transmission links supports the development of wirelessly powered measurement systems; the design of the remaining function groups is carried out with already established methods and tools. The verification of the complete measurement system is performed with a prototype built from the function groups. Using the design concept, an implantable measurement system was developed for a new osteosynthesis plate used to treat bone fractures. The plate has special mechanical properties that promise improved fracture healing. Until now, X-ray images have been used to monitor bone healing. The measurement system integrated into the new osteosynthesis plate, in contrast, enables several measurands to be acquired directly at the bone, resulting in considerably improved diagnostics without radiation exposure of the patient. Besides research into the bone-implant compound, the improved diagnostic possibilities allow more targeted rehabilitation measures to be initiated. In conjunction with the improved fracture healing, optimal treatment results are to be achieved and treatment times shortened.
In today's world, mobile communication has become one of the most widely used technologies, corroborated by a growing number of mobile subscriptions and the extensive usage of mobile multimedia services. Accommodating such a large number of users and such high traffic volumes is a key challenge for network operators. Further, several day-to-day scenarios, such as public transportation or public events, are now characterized by high mobile data usage. A large number of users avail themselves of cellular services in such situations, posing a high load to the respective base stations. This results in an increased number of dropped connections, blocking of new access attempts and blocking of handovers (HO). The users in such a system are thus subjected to poor Quality of Experience (QoE). Advance knowledge of the changing data traffic dynamics associated with such practical situations assists in designing radio resource management schemes that aim to ease the forthcoming congestion. The key hypothesis of this thesis is that the consideration and utilization of additional context information regarding the user, the network and the environment is valuable in designing such smart Radio Resource Management (RRM) schemes.
Methods are developed to predict user cell transitions, exploiting the fact that user mobility is not purely random but rather direction-oriented. This is particularly used in the case of a traffic-dense moving network, i.e. a group of users moving jointly in the same vehicle (e.g., bus or train), to predict the propagation of a high-load situation among cells well in advance. This enables the proactive triggering of load balancing (LB) in cells anticipating the arrival of the high-load situation, so that the incoming user group or moving network can be accommodated. The evaluated KPIs, such as blocked access attempts, dropped connections and blocked HO, are reduced.
Further, the everyday scenario of dynamic crowd formation is considered as another potential cause of high load. In real-world scenarios such as open-air festivals, shopping malls, stadiums or public events, many mobile users gather to form a crowd. This imposes a high load on the serving base station at the site of crowd formation, thereby leading to congestion; as a consequence, mobile users are subjected to poor QoE due to high dropping and blocking rates. A framework to predict crowd formation in a cell is developed based on a coalition of user cell transition prediction, cluster detection and trajectory prediction. This framework is used to prompt a context-aware load balancing mechanism and to activate a small cell at the probable site of crowd formation. Simulations show that proactive LB reduces the dropping of users (23%), the blocking of users (10%) and blocked HO (15%). In addition, activating a Small Cell (SC) at the site of frequent crowd formation leads to further reductions in the dropping of users (60%), the blocking of users (56%) and blocked HO (59%).
Similar to the framework for crowd formation prediction, a concept is developed for predicting vehicular traffic jams. Many vehicular users avail themselves of broadband cellular services on a daily basis while traveling. The density of such vehicular users changes dynamically in a cell, and at certain sites (e.g., traffic lights) jams arise frequently, leading to a high-load situation at the respective serving base station. A traffic prediction algorithm is developed from the cellular network perspective as a coalition strategy consisting of schemes for user cell transition prediction, vehicular cluster/moving network detection, user velocity monitoring, etc. The traffic status indicated by the algorithm is then used to trigger LB and to activate or deactivate a small cell as appropriate. The evaluated KPIs, such as blocked access attempts, dropped connections and blocked HO, are reduced by approximately 10%, 18% and 18%, respectively, due to LB. In addition, switching on the SC reduces blocked access attempts, dropped connections and blocked HO by circa 42%, 82% and 81%, respectively.
Amidst an increasing number of connected devices and growing traffic volume, another key issue for today's networks is to provide uniform service quality despite high mobility. Moreover, urban scenarios are often characterized by coverage holes, which hinder service continuity. A context-aware resource allocation scheme is proposed that uses enhanced mobility prediction to facilitate service continuity. The mobility prediction takes additional information about the user's origin and possible destination into account to predict the next road segment. If a coverage hole is anticipated on the upcoming road, additional resources are allocated to the respective user and data is buffered accordingly. The buffered data is used while the user is in the coverage hole to improve service continuity. Simulations show an improvement in throughput in the coverage hole of circa 80%, and the service interruption is reduced by around 90% for a non-real-time streaming service. Additionally, context-aware procedures are investigated with a focus on user mobility in order to find commonalities among the different procedures, and a general framework to support mobility context awareness is proposed. The new information and interfaces required from the various entities (e.g., vehicular infrastructure) are discussed as well.
Device-to-Device (D2D) communication commonly refers to the technology that enables direct communication between devices, relieving the base station from traffic routing. D2D communication is thus a feasible solution in crowded situations: users in proximity who request to communicate with one another can be granted D2D links, easing the traffic load on the serving base station. D2D links can reuse the radio resources of cellular users (known as D2D underlay), leading to better spectral utilization. However, the mutual interference can hinder system performance: if D2D links reuse cellular uplink resources, D2D transmissions interfere with the cellular uplink at the base station, while cellular transmissions interfere with the D2D receivers. To cope with these issues, location-aware resource allocation (RA) schemes are proposed for D2D communication. The key aim of such RA schemes is to reuse resources with minimal interference. An RA scheme based on virtual sectoring of a cell leads to approximately 15% more established links and 25% more capacity compared with random resource allocation. D2D transmissions cause significant interference to the cellular links with which they share physical resource blocks, thereby degrading cellular performance. Simply throttling D2D transmissions to mitigate this problem would mean a sub-optimal exploitation of D2D communication. As a solution, post-resource-allocation power control at the cellular users is proposed. Three schemes are discussed, namely interference-aware power control, blind power control and threshold-based power control. Simulation results show a reduction in the dropping of cellular users due to interference from D2D transmissions and an improvement in throughput at the base station (uplink), while not hindering D2D performance.
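Purely as a sketch of the threshold-based flavour of power control named above (not the thesis' algorithm), the following Python snippet raises a cellular user's uplink power in fixed steps until its SINR, degraded by D2D interference on the shared resource block, reaches a target threshold; all powers, gains and thresholds are invented values.

import math

def sinr_db(p_tx_dbm, gain_db, interference_mw, noise_mw):
    """SINR of the cellular uplink at the base station."""
    rx_mw = 10 ** ((p_tx_dbm + gain_db) / 10)
    return 10 * math.log10(rx_mw / (interference_mw + noise_mw))

p_ue_dbm, p_max_dbm = 10.0, 23.0      # current and maximum UE transmit power
target_sinr_db, step_db = 6.0, 1.0    # SINR threshold and power-control step
gain_db = -100.0                      # assumed uplink path gain
d2d_interference_mw = 2e-10           # interference from the resource-sharing D2D link
noise_mw = 1e-10

while (sinr_db(p_ue_dbm, gain_db, d2d_interference_mw, noise_mw) < target_sinr_db
       and p_ue_dbm < p_max_dbm):
    p_ue_dbm += step_db               # raise power in fixed steps until the threshold is met

print(f"uplink power {p_ue_dbm:.0f} dBm, "
      f"SINR {sinr_db(p_ue_dbm, gain_db, d2d_interference_mw, noise_mw):.1f} dB")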
With electrical energy and modern sensors available on electrically assisted bicycles, new possibilities open up for developing rider assistance systems for pedelecs that increase safety and riding comfort. The capability of such systems can be increased further by using inertial sensors. However, especially for safety-relevant assistance systems, such sensors must deliver reliable, robust and plausible sensor data. This leads to the topic of this work: the evaluation of inertial sensors for rider assistance systems on pedelecs by means of a systematic investigation of the riding dynamics.
Through simulative and experimental investigations of the MEMS sensors and the riding dynamics, based on test catalogues, the requirements on inertial sensors are derived and the susceptibility of the yaw rate to disturbances is analyzed. The consideration of different sensor types, riding scenarios and mounting positions leads to the finding that, for example, the mounting positions at the seat tube and in the drive unit are particularly suitable. Above all, the automotive MEMS sensor considered reliably delivers plausible sensor data even under potentially critical vibrations when riding over cobblestones or stairs, and under brake squeal.
In addition, an analysis of the effect of sensor errors on data fusion, i.e. on the computation of the orientation angles, shows that minimizing the offset error in particular, for example by a long-term correction, appears sensible and can minimize the resulting angle errors.
The investigation of the riding dynamics focuses on the riding scenario of (critical) cornering. Based on the riding data of numerous pedelec users, a method for detecting cornering as well as theoretical approaches for avoiding critical cornering by an active steering intervention are realized.
The trend towards the availability of several mobile radio networks in the same coverage area, not only of different operators but also of different mobile radio standards in possibly different hierarchy levels, leads to a multitude of coexistence scenarios in which intersystem and interoperator MAI can impair the individual mobile radio networks. In this work, a systematic approach to the coexistence problem is developed by classifying the MAI. One type of MAI can belong to several MAI classes. The classification serves, on the one hand, to better understand the effects influencing a type of MAI from its membership in certain MAI classes and, on the other hand, to assess the harmfulness of a type of MAI, about which statements can be made based on its membership in certain MAI classes. The term harmfulness of a type of MAI covers, besides the mean power, further properties such as the variance or the origin of the MAI. Simple worst-case estimates, as commonly used in the literature, can easily lead to misjudgements of the harmfulness of a type of MAI. Knowing the MAI classes to which a type of MAI belongs makes the risk of such misjudgements visible. In addition to the worst-case estimates taking the MAI classes into account, simulations are carried out in this work to verify the estimates. For this purpose, tools in the form of mathematical models for computing the power of the various types of MAI are developed, taking into account the various techniques considered for mitigating MAI. A concept for reducing the required computing power is also presented. The investigation of the coexistence of the exemplary mobile radio systems WCDMA and TD-CDMA shows that the occurrence of extremely high intersystem or interoperator MAI can usually be avoided by a suitable choice of system parameters, such as cell radii and antenna heights, and by MAI mitigation techniques such as efficient power control and dynamic channel allocation. It is essential, however, that the coexistence problem is adequately taken into account already in the radio network planning phase. Cooperation between the operators involved is usually not necessary; only particularly critical cases, such as the collocation of BSs of different TDD mobile radio networks, e.g. according to the 3G substandard TD-CDMA, have to be avoided by mutual agreement between the operators. Since, for coexisting mobile radio networks in macro cells, particularly high interoperator MAI can occur in the case of same-link-direction MAI because of the large cell radius, a novel concept for mitigating this MAI based on antenna techniques is presented in this work. The concept shows a promising potential for mitigating interoperator MAI.
Interferenzreduktion in CDMA-Mobilfunksystemen - ein aktuelles Problem und Wege zu seiner Lösung
(2003)
A significant increase in the performance of mobile radio systems, and the associated increase in the economic gain achievable with limited frequency spectrum resources, requires interference reduction. Since the disturbing effect produced by a received interference signal depends both on the power of the interference signal and on its structure compared to the structure of the useful signal, two principal approaches to interference reduction result. In interference reduction techniques at the system level, the power of the received interference signals is reduced, for example by judicious control of the transmit powers or by adjusting the directional characteristics of antennas. Interference reduction techniques at the system level are relatively easy to realize and can already be used successfully in today's mobile radio systems. Interference reduction techniques at the link level aim at favourably influencing or taking into account the signal structures. Starting from generally valid properties of the mobile radio channel, such as linearity, signal structures can be found that a priori lead to little or even no harmful interference. The simplest such link-level interference reduction techniques, independent of the current state of the mobile radio channel, are for example the multiple access schemes used in every mobile radio system. Recently, link-level interference reduction techniques that exploit knowledge of the current channel state have also been increasingly investigated. Such techniques require complicated computations in the transmitter or receiver, involving the individual signal samples and the rapidly time-variant channel impulse responses. The resulting high computational effort prevented, until recently, a realization in commercial products. Link-level interference reduction techniques can be divided into transmitter-side and receiver-side techniques. Transmitter-side techniques attempt to avoid harmful interference by judicious shaping of the transmit signals. A focus of this work is the investigation of receiver-side link-level interference reduction techniques; besides joint channel estimation, joint data estimation is of particular interest here. An essential problem of receiver-side link-level interference reduction is the increased number of mobile stations to be considered in joint data estimation. Compared to receivers without interference reduction, more data have to be estimated from an unchanged number of available received samples, which leads to a reduced multiuser coding gain of the data estimator. Joint data estimation techniques can only be used profitably if the negative effect of the reduced multiuser coding gain is at least compensated by the positive effect of the reduced interference. This requirement is particularly critical for intercell interference reduction, since the individual intercell interferers are often received only with low power, i.e. the positive effect of the reduced interference when taking an intercell interferer into account is relatively small.
A prerequisite for successful interference reduction, and in particular for intercell interference reduction, is therefore a data estimator with a high multiuser coding gain. The known linear joint data estimators, such as the zero-forcing estimator, cannot meet this requirement for larger numbers of jointly detected mobile stations. A possible solution for achieving high multiuser coding gains with moderate computational effort are the iterative joint data estimators based on the turbo principle investigated in this work. In principle, the data estimators investigated here are iterative versions of the known linear joint data estimators, extended by a nonlinear estimate refiner. The nonlinear estimate refiner exploits knowledge of the modulation alphabet and, optionally, of the employed error protection code to improve the estimation results. The many presented variants of the iterative joint data estimators and the various estimate refiners form a kind of construction kit that allows a tailored joint data estimator to be constructed for each application.
Investigate the hardware description language Chisel - A case study implementing the Heston model
(2013)
This paper presents a case study comparing the hardware description language "Constructing Hardware in a Scala Embedded Language" (Chisel) to VHDL. For a thorough comparison, the Heston model, a stochastic model used in financial mathematics to calculate option prices, was implemented in both languages. Metrics like hardware utilization and maximum clock rate were extracted from the resulting designs and compared to each other. The results showed a 30% reduction in code size compared to VHDL, while the resulting circuits had about the same hardware utilization. Using Chisel, however, proved to be difficult because a few features were not available for this case study.
Chisel (Constructing Hardware in a Scala Embedded Language) is a new programming language which, embedded in Scala, is used for hardware synthesis. It aims to increase productivity when creating hardware by enabling designers to use features present in higher-level programming languages to build complex hardware blocks. In this paper, the most advertised features of Chisel are investigated and compared to their VHDL counterparts, where present. Afterwards, the authors' opinion on whether a switch to Chisel is worth considering is presented. Additionally, results from a related case study on Chisel are briefly summarized. The authors conclude that, while Chisel has promising features, it is not yet ready for use in industry.
In this thesis the task of channel estimation in beyond-3G, service-area-based mobile radio air interfaces is considered. A system concept named Joint Transmission and Detection Integrated Network (JOINT) forms the target platform for the investigations. A single service area of JOINT is considered, in which a number of mobile terminals are supported by a number of radio access points connected to a central unit responsible for the signal processing. The modulation scheme of JOINT is OFDM. Pilot-aided channel estimation is considered, which has to be performed only in the uplink of JOINT because the duplexing scheme TDD is applied. In this way, the complexity of the mobile terminals is reduced, because they do not need a channel estimator. Based on the signals received by the access points, the central unit estimates the channel transfer functions jointly for all mobile terminals. This is done by resorting to the a priori knowledge of the radiated pilot signals and by applying the technique of joint channel estimation developed in the thesis. The quality of the obtained estimates is judged by the degradation of their signal-to-noise ratio as compared to the signal-to-noise ratio of the respective estimates obtained when a single mobile terminal radiates its pilots. In the case of single-element receive antennas at the access points, this degradation depends solely on the structure of the applied pilots. The thesis shows how the SNR degradation can be minimized by a proper design of the pilots. Besides using appropriate pilots, the performance of joint channel estimation can be further improved by including additional a priori information in the estimation process. An example of such additional information is knowledge of the directional properties of the radio channels, which can be gained if multi-element antennas are applied at the access points. Furthermore, a priori channel state information in the form of the power delay profiles of the radio channels can be included in the estimation process by applying the minimum mean square error estimation principle to joint channel estimation. After the intensive study of joint channel estimation in JOINT, the thesis concludes by considering the impact of the unavoidable channel estimation errors on the performance of data estimation in JOINT. For the case of small channel estimation errors caused by noise at the access points, the performance of joint detection in the uplink and of joint transmission in the downlink of JOINT is investigated by simulation. For the uplink, which utilizes joint detection, it is shown to which degree the bit error probability increases due to channel estimation errors. For the downlink, which utilizes joint transmission, channel estimation errors lead to an increase of the required transmit power, which is quantified by the simulation results.
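As a rough illustration (not the estimator from the thesis), the following Python sketch assumes a linear pilot model r = P h + n, where P stacks the known pilots of all terminals; for a least-squares joint estimate, the noise enhancement, and hence the SNR degradation relative to the single-terminal case, is governed by the pilot structure alone:

```python
import numpy as np

# Illustrative sketch of pilot-aided joint channel estimation.
# Assumed model: r = P @ h + n, with P (observations x terminals) the
# known pilot matrix and h the stacked channel coefficients.

def joint_ls_channel_estimate(P, r):
    """Least-squares joint estimate of all terminals' channel coefficients."""
    return np.linalg.pinv(P) @ r

def snr_degradation_db(P):
    """Per-terminal SNR degradation of the LS estimate relative to the case
    where that terminal alone radiates its pilots. The degradation is
    determined purely by the pilot structure via (P^H P)^-1."""
    G = np.linalg.inv(P.conj().T @ P)
    pilot_energy = np.sum(np.abs(P) ** 2, axis=0)   # per-terminal ||p_k||^2
    return 10 * np.log10(np.real(np.diag(G)) * pilot_energy)
```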
The growing deployment of distributed generation and the rising number of electric vehicles confront low-voltage grids with new challenges. Besides compliance with the permissible voltage band, generation units and new loads lead to an increasing thermal loading of the lines. Simple, conventional measures such as topology changes towards meshed low-voltage grids are a first helpful and cost-effective approach, but offer no fundamental protection against thermal overloading of the equipment. This work deals with the design of a voltage and active power controller for meshed low-voltage grids. The controller measures voltages and currents at individual measurement points of the low-voltage grid. Using a special characteristic-curve method, a power shift can be induced in individual grid meshes and given setpoints or limits can be observed. This thesis presents the analytical foundations of the controller, its hardware, and the characteristic-curve method together with the realizable control concepts. The results of simulation studies, laboratory tests, and field tests clearly demonstrate the effectiveness of the controller and are discussed.
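As a rough, hypothetical illustration of a characteristic-curve (droop-like) control law (parameter names and values are assumptions, not those of the controller described above):

```python
# Illustrative only: map a measured voltage onto an active-power shift via
# a piecewise linear characteristic with a dead band around the nominal
# voltage. Values are invented for the example.

def power_setpoint(v_measured, v_nominal=400.0, v_deadband=8.0, gain_kw_per_v=5.0):
    """No action inside the dead band, proportional counter-action outside."""
    dv = v_measured - v_nominal
    if abs(dv) <= v_deadband:
        return 0.0
    excess = dv - v_deadband if dv > 0 else dv + v_deadband
    return -gain_kw_per_v * excess   # shift power against the voltage deviation

# Example: an overvoltage of 12 V beyond the dead band limit triggers a
# power shift of -20 kW in the affected mesh.
print(power_setpoint(420.0))
```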
The introduction of the Internet has caused a steady transformation of everyday private and professional life, with a clear shift towards the virtual space (the Internet). In addition, the introduction of social networks such as Facebook has markedly strengthened the user's desire to be "online" at all times. On top of this come continuously growing data volumes caused, for example, by video streaming (YouTube or Internet Protocol Television (IPTV)) or the exchange of images. New services introduced in the context of the Internet of Things and Industrie 4.0 generate additional data volumes. Current technologies such as Long Term Evolution Advanced (LTE-A) in the radio domain and Very High Speed Digital Subscriber Line (VDSL) or fiber optics in wired networks try to meet these demands.
Given the increasing demands on user mobility, the use of radio technologies is indispensable. Together with the steadily growing data volume and rising data rates comes a growing demand for spectrum, i.e., free or unused frequency ranges. To identify suitable ranges, however, a multitude of parameters and influencing factors must be considered. One of the decisive parameters is the attenuation in the considered frequency range, since it increases with frequency and thus reduces the resulting coverage at constant transmit power. Current radio systems use frequencies below 6 GHz, since these offer suitable propagation characteristics. Furthermore, existing usage rights, spectrum owners, usage conditions, and so on have to be clarified in advance. In Germany this coordination is performed by the Bundesnetzagentur.
Given the variety of existing services and applications, it is evident that the frequency range below 6 GHz is heavily utilized. Besides continuously loaded services such as Long Term Evolution (LTE) or Digital Video Broadcast (DVB), there are spectral ranges with only low temporal utilization. Typical examples are frequency ranges reserved exclusively for military use. On closer inspection, this is not limited to the time domain; rather, a combination of temporal and spatial restrictions arises, since the usage can usually be confined to a limited geographic area. A further restriction results from the currently rigid assignment of frequency ranges: allocation is based on lengthy application procedures, which makes short-term, flexible assignment impossible.
To address this problem, this work develops a generic spectrum management system (SMS) for the dynamic allocation of available resources. One requirement on the system is the support of already known spectrum sharing schemes such as Licensed Shared Access (LSA) / Authorized Shared Access (ASA) or Spectrum Load Smoothing (SLS). To this end, the currently known sharing schemes are analyzed and characterized with respect to their applicability. Furthermore, the frequency ranges below 6 GHz are examined with regard to their usability and regulatory requirements. In addition, an extended catalogue of requirements for the spectrum management system (SMS) is developed, which serves as the basis for the system design. It is essential that all (potential) users or owners of a spectral range can use the functionality of such a system, which immediately implies the requirement of scalability. To develop a suitable system architecture, existing approaches to managing and storing data are compared and evaluated with respect to their applicability. Moreover, the geographic position is taken into account; to ensure this adequately, hierarchical structures in networks are examined and checked for their usability.
The goal of this work is the development of a spectrum management system (SMS) by adapting existing technologies and methods while taking all defined requirements into account. It turned out that a centralized broker solution is not suitable, since its delay grows roughly exponentially with the number of requests and therefore does not scale. This can be overcome by a Distributed Hash Table (DHT)-based extension without restricting the functionality of the broker solution. For incorporating geographic information, a hierarchical structure comparable to the Domain Name System (DNS) has proven suitable.
The resulting access time, i.e., the time the system needs to process requests, and the resulting number of users that can be served emerged as the evaluation parameters. The simulation considers an urban area with five buildings. In the center there is a six-story office building equipped with one Wireless Local Area Network Access Point (WLAN-AP) per floor. It is surrounded by four private houses, each equipped with one WLAN-AP. The complete area is served by three mobile network operators with one base station (BS) each. As the starting point of the evaluation, operation without an SMS is considered. The results clearly show that the Long Term Evolution base stations (LTE BSs) are overloaded (in particular those of operators A and B). In the second run, the scenario is considered with an SMS. In this case, micro base stations (micro BSs), comparable in specification to Wireless Local Area Network (WLAN), are additionally deployed. Here, a considerably more balanced system behavior emerges: all BSs and access points (APs) remain well below the full-load limit.
The investigations in this work demonstrate that a heterogeneous, temporarily overloaded radio system can be fully harmonized. Furthermore, the use of an SMS enables the efficient use of temporarily unused frequency ranges (so-called white and gray spaces).
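As a loose illustration of the DNS-like hierarchical lookup idea (registry contents, region names, and bands below are invented for the example, not taken from the thesis):

```python
# Illustrative only: a DNS-like hierarchical lookup for spectrum resources.
# Requests are resolved along a geographic hierarchy; entries registered at
# broader regions remain visible at more specific locations.

SPECTRUM_REGISTRY = {
    ("de",): [{"band_mhz": (3400, 3600), "scheme": "LSA"}],
    ("de", "rlp"): [{"band_mhz": (2300, 2400), "scheme": "LSA"}],
    ("de", "rlp", "kaiserslautern"): [{"band_mhz": (5725, 5875), "scheme": "SLS"}],
}

def resolve_spectrum(region_path):
    """Walk from the most specific region up the hierarchy and collect all
    spectrum entries visible at that location."""
    offers = []
    for depth in range(len(region_path), 0, -1):
        offers.extend(SPECTRUM_REGISTRY.get(tuple(region_path[:depth]), []))
    return offers

# A request originating in Kaiserslautern sees local, regional, and national entries.
print(resolve_spectrum(["de", "rlp", "kaiserslautern"]))
```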
Due to the growing share of renewable generation and the ongoing electrification of the transport and heating sectors, the ability to know and control the grid state in the distribution network is becoming increasingly important for grid operators. In the research project "SmartAPO", a network automation system was developed that can estimate and control the state of the low-voltage grid based on smart meter data.
The goal of this work is to analyze and evaluate the system ahead of a field test by implementing it in the laboratory. The investigations contribute to the research activities in the field of smart distribution grids.
After presenting the functions used in the network automation system, suitable use cases for the laboratory investigations are developed and qualitative evaluation criteria are formulated. The use cases are based on realistic conditions and include daily and real-time profiles. In addition to household loads, the load caused by charging electric vehicles and by heat pumps is included. Different configurations of the grid emulator are used in order to test the control in different network topologies. The handling of disturbances, such as long measurement cycles and missing measurement data, is also considered.
By evaluating the investigations and applying the defined criteria, the voltage control with an on-load tap-changing distribution transformer and the current control with a mesh current controller are validated. The considered internal functions are verified. The verification of the handling of disturbances is achieved with restrictions that indicate the need for further investigations.
This work is an important part of the research project and contributes to advancing network automation in the low-voltage grid, which is required for the successful implementation of the energy transition in Germany.
We present new algorithms and provide an overall framework for the interaction of the classically separate steps of logic synthesis and physical layout in the design of VLSI circuits. Due to the continuous development of smaller sized fabrication processes and the subsequent domination of interconnect delays, the traditional separation of logical and physical design results in increasingly inaccurate cost functions and aggravates the design closure problem. Consequently, the interaction of physical and logical domains has become one of the greatest challenges in the design of VLSI circuits. To address this challenge, we propose different solutions for the control and datapath logic of a design, and show how to combine them to reach design closure.
The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these communication schemes, the receiver has to extract the corresponding clock from the serial data stream by means of a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, the transceivers always work in a plesiochronous condition, i.e., an operation with slightly different reference frequencies, and the difference of the data rates is covered by an elastic buffer. In a data readout system for experiments in particle physics, such as a particle detector, the data of the analog-to-digital converters (ADCs) in all detector nodes are transmitted over the network. A plesiochronous condition in such networks is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication network can also provide the clock distribution function, the system complexity can be reduced considerably. The CDRs on all detector nodes then have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It possesses a two-loop topology consisting of a clock and data recovery loop and a clock-jitter-filter loop. In the CDR loop, a CDR with a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock becomes possible. Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. A 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock-jitter-filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage-controlled oscillator (VCO), and its loop bandwidth is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock-jitter-filter with LC-VCO were implemented in 0.18µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170mW from a 1.8V supply. The CDR can cover data rates from 1 to 2Gb/s. Its loop bandwidth is configurable from 700kHz to 4MHz, and its jitter tolerance complies with the SONET standard. The clock-jitter-filter has configurable input/output frequencies from 9.191 to 78.125MHz and a loop bandwidth adjustable from 100kHz to 3MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock-jitter-filter can generate a clock with 4.2ps rms jitter from an incoming serial data stream with 150ps peak-to-peak inter-symbol-interference jitter.
This paper briefly discusses a new architecture, Computation-In-Memory (CIM Architecture), which performs “processing-in-memory”. It is based on the integration of storage and computation in the same physical location (crossbar topology) and the use of non-volatile resistive-switching technology (memristive devices or memristors in short) instead of CMOS technology. The architecture has the potential of improving the energy-delay product, computing efficiency and performance area by at least two orders of magnitude.
This work shall provide a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised, which includes principles for the modeling, analysis, design, and realization of low cost but high performance and intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized taking into consideration the limited communication, computing, memory, and energy resources of the low performance, low power, and low cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity) and which are based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs) are investigated. The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. Obtained results show the advantages of the proposed cross-design approach compared to standard approaches which are less flexible.
The complexity of modern real-time systems is increasing day by day. This inevitable rise in complexity predominantly stems from two contradicting requirements, i.e., an ever increasing demand for functionality and a required low cost for the final product. The development of modern multi-processors and a variety of network protocols and architectures have made such a leap in complexity and functionality possible. However, efficient use of these multi-processors and network architectures is still a major problem. Moreover, the software design and its development process need improvements in order to support rapid prototyping for ever changing system designs. Therefore, in this dissertation, we provide solutions for different problems faced in the development and deployment process of real-time systems. The contributions presented in this thesis enable efficient utilization of system resources, rapid design & development, and component modularity & portability.
In order to ease the certification process, time-triggered computation model is often used in distributed systems. However, time-triggered scheduling is NP-hard, due to which the process of schedule generation for complex large systems becomes convoluted. Large scheduler run-times and low scalability are two major problems with time-triggered scheduling. To solve these problems, we present a modular real-time scheduler based on a novel search-tree pruning technique, which consumes less time (compared to the state-of-the-art) in order to schedule tasks on large distributed time-triggered systems. In order to provide end-to-end guarantees, we also extend our modular scheduler to quickly generate schedules for time-triggered network traffic in large TTEthernet based networks. We evaluate our schedulers on synthetic but practical task-sets and demonstrate that our pruning technique efficiently reduces scheduler run-times and exhibits adequate scalability for future time-triggered distributed systems.
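As a minimal illustration of the general idea of schedule construction with search-tree pruning (not the scheduler from the dissertation), the following Python sketch places non-preemptive tasks on one resource and abandons a branch as soon as a deadline can no longer be met:

```python
# Illustrative only: depth-first construction of a non-preemptive
# time-triggered schedule on a single resource with two pruning rules.

def schedule(tasks, start_time=0, placed=None):
    """tasks: list of (name, wcet, deadline) in a common time base.
    Returns [(name, start_time), ...] or None if the branch is infeasible."""
    placed = placed if placed is not None else []
    if not tasks:
        return placed
    for i, (name, wcet, deadline) in enumerate(tasks):
        finish = start_time + wcet
        if finish > deadline:
            continue  # prune: this placement already misses its deadline
        rest = tasks[:i] + tasks[i + 1:]
        # prune: regardless of order, the last remaining task completes at
        # finish + sum(wcets); if that exceeds the latest remaining deadline,
        # no completion of this branch can succeed.
        if rest and finish + sum(w for _, w, _ in rest) > max(d for _, _, d in rest):
            continue
        result = schedule(rest, finish, placed + [(name, start_time)])
        if result is not None:
            return result
    return None

# Example with a hypothetical task set:
print(schedule([("sensor", 2, 5), ("control", 3, 10), ("actuate", 1, 6)]))
```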
In safety-critical systems, the certification process also requires strict isolation between independent components. This isolation is enforced by a resource partitioning approach, where components of different criticality execute in different partitions (each temporally and spatially isolated from the others). However, existing partitioning approaches use periodic servers or tasks to service aperiodic activities, which leads to utilization loss and potentially to large latencies. In contrast to the periodic approaches, state-of-the-art aperiodic task admission algorithms do not suffer from problems like utilization loss, but they support neither partitioned scheduling nor a mixed-criticality execution environment. To solve this problem, we propose an algorithm for the online admission of aperiodic tasks which provides job execution flexibility and jitter control and leads to lower latencies for aperiodic tasks.
For safety-critical systems, fault tolerance is one of the most important requirements. In time-triggered systems, modes are often used to ensure survivability against faults, i.e., when a fault is detected, the current system configuration (or mode) is changed such that the overall system performance is either unaffected or degrades gracefully. In the literature it has been asserted that a task set might be schedulable in the individual modes but unschedulable during a mode change. Moreover, conventional mode-change execution strategies might cause significant delays until the next mode is established. To address these issues, this dissertation presents an approach for the schedulability analysis of mode changes and proposes mode-change delay reduction techniques in the distributed system architecture defined by the DREAMS project. We evaluate our approach on an avionics use case and demonstrate that it can drastically reduce mode-change delays.
In order to manage increasing system complexity, real-time applications also require new design and development technologies. Other than fulfilling the technical requirements, the main features required from such technologies include modularity and re-usability. AUTOSAR is one of these technologies in automotive industry, which defines an open standard for software architecture of a real-time operating system. However, being an industrial standard, the available proprietary tools do not support model extensions and/or new developments by third-parties and, therefore, hinder the software evolution. To solve this problem, we developed an open-source AUTOSAR toolchain which supports application development and code generation for several modules. In order to exhibit the capabilities of our toolchain, we developed two case studies. These case studies demonstrate that our toolchain generates valid artifacts, avoids dirty workarounds and supports application development.
In order to cope with evolving system designs and hardware platforms, rapid-development of scheduling and analysis algorithms is required. In order to ease the process of algorithm development, a number of scheduling and analysis frameworks are proposed in literature. However, these frameworks focus on a specific class of applications and are limited in functionality. In this dissertation, we provide the skeleton of a scheduling and analysis framework for real-time systems. In order to support rapid-development, we also highlight different development components which promote code reuse and component modularity.
Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)
The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High-performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology; both system and sensor faults are studied in the thesis. Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by exploiting the redundant information contained in the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is more compact and transparent than other model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution, which indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm, the local weak observability of the transformed system is ensured, and a fault detection observer for the transformed system can be constructed to generate the residual. This method cannot isolate sensor faults; therefore, a special decision-making logic (DML) is designed, based on the individual signal analysis of the residuals, to isolate the faults. The reconfigurable controller is designed with the backstepping technique. Backstepping is a recursive Lyapunov-based approach that can deal with nonlinear systems. Some system variables are considered as "virtual controls" during the design procedure; the feedback control laws and the associated Lyapunov function can then be constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as a fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modeled as an uncertainty of the system and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller and can be modeled as a parameter uncertainty of the controller; all parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration, the pre-specified control performance can be recovered. The FDI-integrated FTC based on the backstepping technique has been implemented successfully on an electro-hydraulic testbed. On-line robust FDI and controller reconfiguration can be achieved, the tracking performance of the controlled system is guaranteed, and the considered faults can be tolerated. The theoretical robustness analysis for the time delay caused by the fault diagnosis, however, remains an open problem.
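As a minimal, hypothetical illustration of residual-based fault detection with a simple decision-making logic (the thesis relies on a differential-geometric observer design, which is not reproduced here):

```python
import numpy as np

# Illustrative only: a residual is the difference between a measured signal
# and its model-based prediction; a fault is declared when the residual
# stays above a threshold for a minimum number of samples, which reduces
# false alarms caused by noise. Parameters are assumptions for the example.

def detect_fault(measured, predicted, threshold, hold_samples=5):
    residual = np.abs(np.asarray(measured) - np.asarray(predicted))
    above = residual > threshold
    count = 0
    for k, flag in enumerate(above):
        count = count + 1 if flag else 0
        if count >= hold_samples:
            return k       # sample index at which the fault is declared
    return None            # no fault detected
```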
In this thesis, the software development principles of Model-Driven Architecture have been adopted for developing a generation flow for properties. The approach taken for property generation introduces three models, namely the Model-of-Things, the Model-of-Property, and the Model-of-View. Each model belongs to a distinct model layer in the generation flow, and each model layer addresses a specific concern of the property generation. The separation of concerns through model layers ensures modular flow development and enables uncomplicated enhancements and feature extensions. The properties are generated through a series of model-to-model transformations between these model layers. Python is used as the domain-specific language for describing the intermediate transformations. A metamodel-based automation framework is utilized to generate an infrastructure that facilitates the description of transformations. The APIs that form the central part of the infrastructure are generated from the metamodel definitions of the models mentioned before. The generated APIs are further extended with domain-specific APIs to significantly reduce the effort required for developing the transformations. The property generation solution developed in this thesis is termed "MetaProp".
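As a minimal sketch of what a Python model-to-model transformation in such a flow might look like (the class names and the SVA-like output are illustrative assumptions, not the metamodels or generated APIs of MetaProp):

```python
from dataclasses import dataclass

# Illustrative only: a specification-level element (stand-in for a
# Model-of-Things entry) is transformed into a property model (stand-in
# for a Model-of-Property entry) and finally rendered as text (the "view").

@dataclass
class RegisterSpec:
    name: str
    reset_value: int

@dataclass
class PropertySpec:
    label: str
    assumption: str
    commitment: str

def to_property(reg: RegisterSpec) -> PropertySpec:
    """Model-to-model transformation: register description -> reset property."""
    return PropertySpec(
        label=f"p_reset_{reg.name}",
        assumption="reset == 1",
        commitment=f"{reg.name} == {reg.reset_value}",
    )

def render(prop: PropertySpec) -> str:
    """Render the property model as SVA-like text."""
    return (f"property {prop.label};\n"
            f"  {prop.assumption} |=> {prop.commitment};\n"
            f"endproperty")

print(render(to_property(RegisterSpec("ctrl_reg", 0))))
```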
A key aspect of the property generation flow is the translation of informal specifications into formal specification models. Due to the diverse nature of hardware designs, the methodology includes different modeling paradigms to formalize the specifications. The metamodel MetaExpression provides features to describe the behavior of combinational designs in the form of expression trees and dataflow expressions. The MetaExpression metamodel is modular in nature and can be integrated into other metamodel definitions that capture the specification-level configurations of the design. For modeling the behavior of sequential designs, a formalism using finite-state-machine-like notations for traces is introduced and defined by the metamodel MetaSTS. The MetaSTS metamodel makes it possible to define the behavior of sequential designs with annotated timing information for transitions between important states. Annotation is also used to map abstract states in the Model-of-Things to the Model-of-Property and, finally, to the design implementation. Such an annotation or binding mechanism enables Model-of-Properties to be applicable to a variety of design implementations.
Another important contribution of this thesis is a complete processor verification methodology, which is based on the aforementioned generation approach. The introduced methods for specification modeling are employed to formalize the ISA and the behavior of instructions within the processor pipelines. However, it requires substantial manual effort and in-depth knowledge of the microarchitectural details of the processor implementation to describe the transformations that define the Model-of-Properties. The prime reason for this is the overlapped execution of instructions within the pipelined architectures of processors and the numerous internal and external pipeline stall scenarios. For a complete processor verification, a set of generated properties must consider all combinations of instruction overlapping coupled with all scenarios of pipeline stalls; consequently, the Model-of-Properties from which the properties are generated are required to consider all combinations of the aforementioned scenarios. To address these aspects, the C-S²QED method, an extension of the S²QED method, has been developed to completely verify a processor. The C-S²QED method is also applicable to exceptions within the processor pipelines and to superscalar pipeline architectures. It detects all functional bugs in a processor implementation and requires significantly less manual effort compared to state-of-the-art processor verification methods. The completeness hypothesis of the C-S²QED method, based on the completeness criterion defined by C-IPC, and a completeness proof are also part of this thesis. The property generation flow has been leveraged to generate a set of C-S²QED properties to further enhance the effectiveness of the methodology.
The applicability and effectiveness of the introduced modeling paradigms and developed methods have been demonstrated with the formal verification of several industry strength designs. Numerous logic bugs including the bugs that are typically regarded as difficult to find have been detected during the formal verification with generated properties. Most IPs of an SoC called “RiVal” including the RISC-V core and excluding the legacy IPs have been formally verified only with the proposed methods in this thesis. The Rival SoC is used in the powertrain and safety automotive applications. The manufactured chip works “first time right” and no logic bug has been detected during the post-manufacturing tests. Various architectural alternatives of the RISC-V based processor designs are verified with the generated C-S²QED properties. The property generation is built in a configurable manner such that any changes in microarchitecture of the processor — that may be caused by the changes in specifications — are implicitly covered by the generation flow. Thus, additional manual efforts are not required and the functional flaws due to the changes in specifications are neutralized. Furthermore, the proposed methods have also been applied to communication protocol IPs, bus bridges, interrupt controllers and safety-relevant designs.
As more and more functionality is integrated into future SoC designs, functional verification at the block level gains importance. Only block designs with an extremely low error rate allow fast integration into an SoC design. These high quality requirements cannot be met by simulation-based verification. For this reason, methods of formal design verification are moving into focus. At the block level, property checking based on the iterative circuit model (BIMC) has proven to be a successful technology. Nevertheless, there are still some design classes that are hard to handle for BIMC, among them circuits with large sequential depth and arithmetic blocks. The ongoing improvement of the underlying proof methods, e.g., the employed SAT solvers, will not by itself keep up with the growing complexity of ever larger blocks. This thesis therefore shows how, already during problem preparation in the front end of a formal verification tool, measures can be taken to simplify the resulting proof problems. For the two problem areas mentioned above, suitable degrees of freedom in the model generation of the front end are identified and exploited to simplify the proof tasks for the back end.
The growing adoption of Ethernet-based structures with decentralized and distributed applications in automation leads to so-called networked automation systems (NAS). While these are cheaper to purchase and operate, more modern, and more flexible than conventional structures, they exhibit non-deterministic delays. A precise analysis of the resulting response times is therefore not only a prerequisite for the responsible use of this technology, but also makes it possible to clarify questions of dependability already before restructurings or extensions are carried out. In this first of two contributions, the modeling language DesLaNAS, developed for the special needs of describing the structure of networked automation systems, is introduced and applied to an introductory example. The second contribution builds on this and shows the influence of the individual system components (PLC, network, I/O cards) as well as of network-induced behavioral modes such as synchronization and the shared use of resources on the response times of the overall system. For the analysis itself, probabilistic model checking (PMC) is applied.
The thesis is focused on modelling and simulation of a Joint Transmission and Detection Integrated Network (JOINT), a novel air interface concept for B3G mobile radio systems. Besides the utilization of the OFDM transmission technique, which is a promising candidate for future mobile radio systems, and of the duplexing scheme time division duplexing (TDD), the subdivision of the geographical domain to be supported by mobile radio communications into service areas (SAs) is a highlighted concept of JOINT. An SA consists of neighboring sub-areas, which correspond to the cells of conventional cellular systems. The signals in an SA are jointly processed in a Central Unit (CU) in each SA. The CU performs joint channel estimation (JCE) and joint detection (JD) in the form of the receive zero-forcing (RxZF) filter for the uplink (UL) transmission, and joint transmission (JT) in the form of the transmit zero-forcing (TxZF) filter for the downlink (DL) transmission. By these algorithms, intra-SA multiple access interference (MAI) can be eliminated within the limits of the used model so that unbiased data estimates are obtained, and most of the computational effort is moved from the mobile terminals (MTs) to the CU, so that the MTs can be kept simple. A simulation chain of JOINT has been established by the author in the software MLDesigner, based on time-discrete equivalent lowpass modelling. In this simulation chain, all key functionalities of JOINT are implemented, and the chain is designed for link-level investigations. A number of channel models are implemented both for the single-SA scenario and for the multiple-SA scenario, so that the system performance of JOINT can be studied comprehensively. It is shown that in JOINT a duality or symmetry of the MAI elimination in the UL and in the DL exists. Therefore, the typical noise enhancement going along with the MAI elimination by JD and JT, respectively, is the same in both links. In the simulations, the impact of channel estimation errors on the system performance is also studied. In the multiple-SA scenario, due to the existence of inter-SA MAI, which cannot be suppressed by the algorithms of JD and JT, the system performance in terms of the average bit error rate (BER) and the BER statistics degrades. A collection of simulation results shows the potential of JOINT with respect to the improvement of the system performance and the enhancement of the spectrum efficiency as compared to conventional cellular systems.
The current procedures for achieving industrial process surveillance, waste reduction, and prognosis of critical process states are still insufficient in some parts of the manufacturing industry. Increasing competitive pressure, falling margins, increasing cost, just-in-time production, environmental protection requirements, and guidelines concerning energy savings pose new challenges to manufacturing companies, from the semiconductor to the pharmaceutical industry.
New, more intelligent technologies adapted to the current technical standards provide companies with improved options to tackle these situations. Here, knowledge-based approaches open up pathways that have not yet been exploited to their full extent. The Knowledge-Discovery-Process for knowledge generation describes such a concept. Based on an understanding of the problems arising during production, it derives conclusions from real data, processes these data, transfers them into evaluated models and, by this open-loop approach, reiteratively reflects the results in order to resolve the production problems. Here, the generation of data through control units, their transfer via field bus for storage in database systems, their formatting, and the immediate querying of these data, their analysis and their subsequent presentation with its ensuing benefits play a decisive role.
The aims of this work result from the lack of systematic approaches to the above-mentioned issues, such as process visualization, the generation of recommendations, the prediction of unknown sensor and production states, and statements on energy cost.
Both science and commerce offer mature statistical tools for data preprocessing, analysis and modeling, and for the final reporting step. Since their creation, the insurance business, the world of banking, market analysis, and marketing have been the application fields of these software types; they are now expanding to the production environment.
Appropriate modeling can be achieved via specific machine learning procedures, which have been established in various industrial areas, e.g., in process surveillance by optical control systems. Here, state-of-the-art classification methods are used, with multiple applications comprising sensor technology, process areas, and production site data. Manufacturing companies now intend to establish a more holistic surveillance of process data, such as sensor failures or process deviations, to identify dependencies. The causes of quality problems must be recognized and selected in real time from about 500 attributes of a highly complex production machine. Based on these identified causes, recommendations for improvement must then be generated for the operator at the machine, in order to enable timely measures to avoid these quality deviations.
Unfortunately, the ability to meet the required increases in efficiency – with simultaneous consumption and waste minimization – still depends on data that are, for the most part, not available. There is an overrepresentation of positive examples whereas the number of definite negative examples is too low.
The acquired information can be influenced by sensor drift effects and the occurrence of quality degradation may not be adequately recognized. Sensorless diagnostic procedures with dual use of actuators can be of help here.
Moreover, in the course of a process, critical states with sometimes unexplained behavior can occur. Also in these cases, deviations could be reduced by early countermeasures.
The generation of data models using appropriate statistical methods is of advantage here.
Conventional classification methods sometimes reach their limits. Supervised learning methods are mostly used in areas of high information density with sufficient data available for the classes under examination. However, there is a growing trend (e.g., spam filtering) to apply supervised learning methods to underrepresented classes, whose datasets are, at best, outliers or do not exist at all.
The application field of One-Class Classification (OCC) deals with this issue. Standard classification procedures (e.g., the k-nearest-neighbor classifier or support vector machines) can be adapted to such problems, so that a monitoring system becomes able to classify changing process states or sensor deviations. The knowledge discovery process described above was employed in a case study from the polymer film industry, at Mondi Gronau GmbH, comprising a real-data survey at the production site and subsequent data preprocessing, modeling, evaluation, and deployment as a system for the generation of recommendations. To this end, questions regarding the following topics had to be clarified: data sources, datasets and their formatting, transfer pathways, storage media, query sequences, the employed classification methods, their adjustment to the problems at hand, evaluation of the results, construction of a dynamic cycle, and the final implementation in the production process, along with its surplus value for the company.
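As a minimal sketch of one-class classification for process surveillance (using scikit-learn's OneClassSVM on invented data; feature dimensions and parameters are assumptions, not the models of the case study):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Illustrative only: the classifier is trained on "normal" process vectors
# alone and then flags deviating sensor vectors as potentially abnormal.

rng = np.random.default_rng(0)
normal_process = rng.normal(loc=0.0, scale=1.0, size=(500, 6))   # 6 features

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(normal_process)

new_samples = np.vstack([rng.normal(0, 1, (3, 6)),    # normal-looking samples
                         rng.normal(6, 1, (2, 6))])   # strong deviations
print(model.predict(new_samples))   # +1 = normal, -1 = abnormal
```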
Pivotal options for optimization with respect to ecological and economical aspects can be found here. Capacity for improvement is given in the reduction of energy consumption, CO₂ emissions, and waste at all machines. At this one site, savings of several million euros per month can be achieved.
One major difficulty so far has been poorly accessible process data which, distributed across various data sources and unconnected, in some areas led to increased analysis effort and a lack of holistic real-time quality surveillance. The resulting limited monitoring of specifications, and thus limited support for the operator at the installation, was a clear disadvantage with regard to cost minimization.
The data of the case study, captured according to their purposes and in coordination with process experts, amounted to 21,900 process datasets from cast film extrusion during 2 years’ time, including sensor data from dosing facilities and 300 site-specific energy datasets from the years 2002–2014.
In the following, the investigation sequence is displayed:
1. In the first step, industrial approaches according to Industrie 4.0 and related to Big Data were investigated. The applied statistical software suites and their functions were compared with a focus on real-time data acquisition from database systems, different data formats, their sensor locations at the machines, and the data processing part. The linkage of datasets from various data sources for, e.g., labeling and downstream exploration according to the knowledge discovery process is of high importance for polymer manufacturing applications.
2. In the second step, the aims were defined according to the industrial requirements, i.e. the critical production problem called “cut-off” as the main selection, and with regard to their investigation with machine learning methods. Therefore, a system architecture corresponding to the polymer industry was developed, containing the following processing steps: data acquisition, monitoring & recommendation, and self-configuration.
3. The novel sensor datasets, with 160–2,500 real and synthetic attributes, were acquired within 1-min intervals via PLC and field bus from an Oracle database. The 160 features were reduced to 6 dimensions with feature reduction methods. Due to underrepresentation of the critical class, the learning approaches had to be modified and optimized for one-class classification, which achieved 99% accuracy after training, testing and evaluation with real datasets.
4. In the next step, the 6-dimensional dataset was scaled into lower 1-, 2-, or 3-dimensional space with classical and non-classical mapping approaches for downstream visualization. The mapped view was separated into zones of normal and abnormal process conditions by threshold setting.
5. Afterwards, the boundary zone was investigated and an approach for trajectory extraction, consisting of condition points in sequence, was developed to optimize the prediction behavior of the model. The extracted trajectories were used to train, test and evaluate state-of-the-art classification methods, achieving a 99% recognition ratio.
6. In the last step, the best methods and processing parts were converted into a specifically developed domain-specific graphical user interface for real-time visualization of process condition changes. The requirements of such an interface were discussed with the operators with regard to intuitive handling, interactive visualization and recommendations (as e.g., messaging and traffic lights), and implemented.
The software prototype was tested at a laboratory machine, where abnormal process problems were correctly recognized at a 90% ratio. The software was afterwards transferred to a group of on-line production machines.
As demonstrated, the monthly amount of waste arising at machine M150 could be decreased from 20.96% to 12.44% during the application time. The frequency of occurrence of the specific problem was reduced by 30%, corresponding to monthly savings of 50,000 EUR.
In the approach pertaining to the energy prognosis of load profiles, monthly energy data from 2002 to 2014 (about 36 trajectories with three to eight real parameters each) were used as the basis and analyzed and modeled systematically. The prognosis quality increased as the target date approached; the site-specific load profile for 2014 could thus be predicted with an accuracy of 99%.
The achievement of sustained cost reductions of several 100,000 euros, combined with additional savings of EUR 2.8 million, could be demonstrated.
The process improvements achieved while pursuing the scientific targets could be successfully and permanently integrated at the case study plant. The increase in methodical and experimental knowledge was reflected in first economic results and could be verified numerically. The expectations of the company were more than fulfilled, and further developments based on the new findings were initiated. Among the new findings are the transfer of the scientific results to more machines and the initiation of further studies expanding into the diagnostics area.
Considering the size of the enterprise, future enhanced success should also be possible for other locations. In the course of the grid charge exemption according to EEG, the energy savings at further German locations can amount to 4–11% on a monetary basis and at least 5% based on energy. Up to 10% of materials and cost can be saved with regard to waste reduction related to specific problems. According to projections, material savings of 5–10 t per month and time savings of up to 50 person-hours are achievable. Important synergy effects can be created by the knowledge transfer.
The increasing complexity of modern SoC designs makes the tasks of SoC formal verification much more complex and challenging. This motivates the research community to develop more robust approaches that enable efficient formal verification for such designs. It is a common scenario to apply a correctness-by-integration strategy while an SoC design is being verified. This strategy implements formal verification in two major steps. First, each module of the SoC is considered and verified separately from the other blocks of the system. In the second step, when functional correctness has been successfully proved for every individual module, the communication behavior between all modules of the SoC has to be verified. In industrial applications, SAT/SMT-based interval property checking (IPC) has become widely adopted for SoC verification. Using IPC approaches, a verification engineer can solve a wide range of important verification problems and prove the functional correctness of diverse complex components in a modern SoC design. However, there are critical parts of a design where formal methods often lack robustness. State-of-the-art property checkers fail to prove correctness for the data path of an industrial central processing unit (CPU). In particular, arithmetic circuits of realistic size (32 or 64 bits), especially those implementing multiplication algorithms, are well-known examples where SAT/SMT-based formal verification quickly reaches its capacity limits. In such cases, formal verification is replaced with simulation-based approaches in practice. Simulation is a good methodology that can discover a high number of bugs hidden in an SoC design. However, in contrast to formal methods, a simulation-based technique cannot guarantee the absence of errors in a design. Thus, simulation may still miss so-called corner-case bugs, which can potentially lead to very expensive additional costs in terms of time, effort, and investments spent on redesigns, refabrications, and reshipments of new chips.
The work of this thesis concentrates on studying and developing robust algorithms for solving hard arithmetic decision problems. Such decision problems often originate from RTL property checking for data-path designs. Proving properties of those designs can be performed efficiently by solving SMT decision problems formulated in the quantifier-free logic over fixed-size bit vectors (QF-BV).
This thesis first proposes an effective algebraic approach based on Gröbner basis theory that allows arithmetic problems to be decided efficiently. Second, for the case of custom-designed components, it describes a sophisticated modeling technique that recovers the necessary arithmetic description from these components. Furthermore, the thesis explains how the methods from computer algebra and the modeling techniques can be integrated into a common SMT solver. Finally, a new QF-BV SMT solver is introduced.
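As a minimal sketch of the algebraic idea (illustrative only; it uses a toy 2-bit multiplier over the rationals rather than the bit-vector modeling developed in the thesis), ideal membership against a Gröbner basis decides whether a conjectured arithmetic property follows from the polynomial circuit relations:

```python
from sympy import symbols, groebner

# Illustrative only: encode a tiny circuit as polynomial relations, compute
# a Groebner basis of the ideal they generate, and test whether a conjectured
# property polynomial lies in that ideal (i.e., is implied by the circuit).

a0, a1, b0, b1, p = symbols("a0 a1 b0 b1 p")

circuit = [
    a0 * (a0 - 1), a1 * (a1 - 1),        # operand bits are Boolean (0/1)
    b0 * (b0 - 1), b1 * (b1 - 1),
    p - (a0 + 2 * a1) * (b0 + 2 * b1),   # word-level product output
]

gb = groebner(circuit, a0, a1, b0, b1, p, order="lex")

# Property: a 2-bit x 2-bit product can only take values {0,1,2,3,4,6,9}.
prop = p * (p - 1) * (p - 2) * (p - 3) * (p - 4) * (p - 6) * (p - 9)
print(gb.contains(prop))   # True: the property is implied by the circuit relations
```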