The increasing expansion of distributed generation plants and the growing number of electric vehicles confront low-voltage grids with new challenges. Besides compliance with the permissible voltage band, generation plants and new loads lead to increasing thermal loading of the lines. Simple, conventional measures such as topology changes towards meshed operation of low-voltage grids are a first helpful and inexpensive approach, but they offer no fundamental protection against thermal overloading of the equipment. This thesis deals with the design of a voltage and active power controller for meshed low-voltage grids. The controller measures voltages and currents at individual measurement points of the low-voltage grid. Using a dedicated characteristic-curve method, a power shift can be induced in individual network meshes so that specified setpoints or limit values are maintained. This thesis presents the analytical foundations of the controller, its hardware, and the characteristic-curve method together with the control concepts that can be realized with it. The results of simulation studies, laboratory tests, and field tests clearly demonstrate the effectiveness of the controller and are discussed.
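The abstract does not specify the characteristic curves themselves, so the following minimal sketch only illustrates the general idea behind such a characteristic-curve (droop-style) controller; the voltage thresholds, the dead-band behaviour, and the maximum power shift are hypothetical values, not those of the thesis.

```python
import numpy as np

def power_shift_from_voltage(v_pu, v_min=0.94, v_max=1.06, p_shift_max=50.0):
    """Piecewise-linear characteristic: map a measured voltage (p.u.) to an
    active-power shift in kW for one network mesh. Thresholds and the maximum
    shift are illustrative assumptions, not the thesis's characteristic."""
    if v_pu <= v_min:          # undervoltage: shift power into the mesh
        return p_shift_max
    if v_pu >= v_max:          # overvoltage: shift power out of the mesh
        return -p_shift_max
    # between the thresholds the shift is interpolated linearly around 1.0 p.u.
    return -p_shift_max * (v_pu - 1.0) / (v_max - 1.0)

# Example: voltages measured at three points of a meshed LV grid
for v in (0.93, 1.00, 1.07):
    print(f"U = {v:.2f} p.u. -> delta P = {power_shift_from_voltage(v):+.1f} kW")
```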
The supply tasks of low-voltage grids are expected to change substantially over the coming decades compared with those of 2018, driven by the further spread of photovoltaic systems, heat-pump heating, and electric vehicles. Planning principles commonly used in practice for building new low-voltage grids are outdated: in many cases their foundations date from times when the new loads and infeeds were not anticipated and therefore not taken into account. The need for new planning principles coincides with the availability of regulated distribution transformers (rONT), which can be used to improve the voltage conditions in the grid. The new planning principles developed here require the use of rONTs for rural and suburban supply tasks (but not for urban supply tasks) in order to handle the high loads expected for 2040 at low cost. A suitable rONT standard control characteristic is given. In all cases, cables with a cross-section of 240 mm², laid in parallel in sections, are recommended.
The findings and experience concerning climate change have led to enormous changes in energy and climate policy worldwide in recent years. This is causing an ever stronger transformation of the generation, consumption, and supply structures of our energy systems. Focusing energy generation on fluctuating renewable energy sources requires a far more extensive use of flexibilities than has been the case so far.
This thesis discusses the use of heat pumps and storage systems as flexibilities in the context of the cellular approach to energy supply. The flexibility potential of heat-pump/storage systems is examined and validated on three levels of consideration. The first considers the heat pump, the thermal storage, and the thermal loads in a general potential analysis. Building on this, the heat-pump/storage systems are considered within a household cell as an energetic unit, followed by investigations in the context of a low-voltage grid cell. To model the flexibility behaviour, detailed models of the converters and storage units and of their controls are developed and then analysed and evaluated by means of time-series simulations.
The central question of whether heat pumps with storage systems can contribute as a flexibility to the success of the energy transition can be answered with a clear yes. Nevertheless, the boundary conditions to be observed when using heat-pump/storage systems as a flexibility are manifold and, depending on the purpose of the flexibility, require careful consideration. The decisive factors are the outdoor temperature, the temporal context, the grid, and economic viability.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks in independent partitioned task sets execute on a homogeneous multi-core processor with dynamic, time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between WCET and the CPU scheduling of the other cores, but also between WCET and the memory bandwidth assigned to the cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory bandwidth assignment to the active cores over time, where the active cores are those with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real certified safety-critical avionics application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when the changes in the memory bandwidth assignment to cores are time-triggered and known. It provides a stall maximization algorithm whose complexity is similar to that of a concave optimization problem and which efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state-of-the-art, using an Integrated Modular Avionics scenario.
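The schedulability test and the stall maximization algorithm of the dissertation are not reproduced in the abstract; the following deliberately coarse sketch only illustrates how a WCET bound can be assembled from pure computation time plus memory stalls under a time-triggered, periodic memory-server budget table. The function, the pessimism assumptions, and all parameters are hypothetical.

```python
def wcet_upper_bound(c_exec, m_requests, budgets, period):
    """Coarse, pessimistic WCET bound for a task with c_exec time units of pure
    computation and m_requests worst-case memory requests on a core whose
    periodic memory server grants budgets[i] requests in the i-th server period
    (the table repeats over time). Every partially used server period is charged
    in full. This is a toy model, not the dissertation's analysis."""
    remaining, periods_used = m_requests, 0
    while remaining > 0:
        grant = budgets[periods_used % len(budgets)]
        remaining -= min(grant, remaining)
        periods_used += 1
        if periods_used > 1_000_000:          # guard against an all-zero table
            raise ValueError("budget table never serves the memory demand")
    return c_exec + periods_used * period

# Example (all values invented): 2 ms of computation, 500 requests, alternating
# per-period budgets of 200 and 50 requests, server period of 0.5 ms.
print(wcet_upper_bound(c_exec=2.0, m_requests=500, budgets=[200, 50], period=0.5))
```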
The complexity of modern real-time systems is increasing day by day. This inevitable rise in complexity predominantly stems from two contradicting requirements, i.e., the ever increasing demand for functionality and the required low cost of the final product. The development of modern multi-processors and of a variety of network protocols and architectures has made such a leap in complexity and functionality possible. However, efficient use of these multi-processors and network architectures is still a major problem. Moreover, software design and its development process need improvements in order to support rapid prototyping for ever changing system designs. Therefore, in this dissertation, we provide solutions for different problems faced in the development and deployment process of real-time systems. The contributions presented in this thesis enable efficient utilization of system resources, rapid design & development, and component modularity & portability.
In order to ease the certification process, the time-triggered computation model is often used in distributed systems. However, time-triggered scheduling is NP-hard, due to which the process of schedule generation for large, complex systems becomes convoluted. Large scheduler run-times and low scalability are two major problems with time-triggered scheduling. To solve these problems, we present a modular real-time scheduler based on a novel search-tree pruning technique, which consumes less time (compared to the state-of-the-art) to schedule tasks on large distributed time-triggered systems. In order to provide end-to-end guarantees, we also extend our modular scheduler to quickly generate schedules for time-triggered network traffic in large TTEthernet-based networks. We evaluate our schedulers on synthetic but practical task-sets and demonstrate that our pruning technique efficiently reduces scheduler run-times and exhibits adequate scalability for future time-triggered distributed systems.
In safety-critical systems, the certification process also requires strict isolation between independent components. This isolation is enforced by a resource partitioning approach, where components of different criticality execute in different partitions (each temporally and spatially isolated from the others). However, existing partitioning approaches use periodic servers or tasks to service aperiodic activities. This leads to utilization loss and potentially to large latencies. In contrast to the periodic approaches, state-of-the-art aperiodic task admission algorithms do not suffer from problems like utilization loss. However, these approaches do not support partitioned scheduling or a mixed-criticality execution environment. To solve this problem, we propose an algorithm for the online admission of aperiodic tasks which provides job execution flexibility and jitter control and leads to lower latencies for aperiodic tasks.
For safety-critical systems, fault tolerance is one of the most important requirements. In time-triggered systems, modes are often used to ensure survivability against faults, i.e., when a fault is detected, the current system configuration (or mode) is changed such that the overall system performance is either unaffected or degrades gracefully. In the literature, it has been asserted that a task-set might be schedulable in the individual modes but unschedulable during a mode-change. Moreover, conventional mode-change execution strategies might cause significant delays until the next mode is established. In order to address these issues, in this dissertation, we present an approach for the schedulability analysis of mode-changes and propose mode-change delay reduction techniques in the distributed system architecture defined by the DREAMS project. We evaluate our approach on an avionics use case and demonstrate that it can drastically reduce mode-change delays.
In order to manage increasing system complexity, real-time applications also require new design and development technologies. Apart from fulfilling the technical requirements, the main features required of such technologies include modularity and re-usability. AUTOSAR is one of these technologies in the automotive industry; it defines an open standard for the software architecture of a real-time operating system. However, being an industrial standard, the available proprietary tools do not support model extensions and/or new developments by third parties and therefore hinder software evolution. To solve this problem, we developed an open-source AUTOSAR toolchain which supports application development and code generation for several modules. To exhibit the capabilities of our toolchain, we developed two case studies. These case studies demonstrate that our toolchain generates valid artifacts, avoids dirty workarounds, and supports application development.
In order to cope with evolving system designs and hardware platforms, rapid development of scheduling and analysis algorithms is required. To ease the process of algorithm development, a number of scheduling and analysis frameworks have been proposed in the literature. However, these frameworks focus on specific classes of applications and are limited in functionality. In this dissertation, we provide the skeleton of a scheduling and analysis framework for real-time systems. In order to support rapid development, we also highlight different development components which promote code reuse and component modularity.
The introduction of the Internet has caused a steady change in daily and professional life, with a clear shift into the virtual space (the Internet). In addition, the introduction of social networks such as Facebook has markedly strengthened users' desire to be permanently online. Added to this are the continuously growing data volumes caused, for example, by video streaming (YouTube or Internet Protocol Television (IPTV)) or by the exchange of images. Furthermore, new services introduced in the context of the Internet of Things and Industry 4.0 generate additional data volumes. Current technologies such as Long Term Evolution Advanced (LTE-A) in the radio domain and Very High Speed Digital Subscriber Line (VDSL) or optical fibre in wired networks attempt to meet these demands.
Given the increasing demands on user mobility, the use of radio technologies is indispensable. The steadily growing data traffic and rising data rates are accompanied by a growing demand for spectrum, i.e. free or unused frequency ranges. To identify suitable ranges, however, a large number of parameters and influencing factors must be considered. One of the decisive parameters is the attenuation in the frequency range under consideration, since it increases with frequency and thus the resulting coverage decreases for a constant transmit power. Current radio systems use frequencies below 6 GHz, since these offer suitable propagation characteristics. Furthermore, existing usage rights, spectrum owners, usage conditions and so on must be clarified in advance. In Germany, this coordination is carried out by the Bundesnetzagentur.
Given the multitude of existing services and applications, it is evident that the frequency range below 6 GHz is heavily utilized. Besides continuously loaded services such as Long Term Evolution (LTE) or Digital Video Broadcast (DVB), there are spectral ranges with only a low temporal utilization. Typical examples are frequency ranges reserved exclusively for military use. On closer inspection, this is not restricted to the temporal domain alone; rather, it is a combination of temporal and spatial restrictions, since the usage can usually be confined to a limited geographic area. A further restriction results from the currently rigid assignment of frequency ranges: the allocation is based on lengthy application procedures, making short-term, flexible assignment impossible.
To address this problem, this thesis develops a generic spectrum management system (SMS) for the dynamic allocation of existing resources. One requirement of the system is to support known spectrum sharing schemes such as Licensed Shared Access (LSA), Authorized Shared Access (ASA), or Spectrum Load Smoothing (SLS). For this purpose, the currently known sharing schemes are analysed and characterized with respect to their applicability. Furthermore, the frequency ranges below 6 GHz are examined with regard to their usability and regulatory requirements. In addition, an extended catalogue of requirements for the spectrum management system (SMS) is developed, which serves as the basis for the system design. It is essential that all (potential) users or owners of a spectral range can use the functionality of such a system, which immediately implies the requirement of scalability. To develop a suitable system architecture, existing approaches for managing and storing data are compared and evaluated with respect to their applicability. The geographic position is also taken into account; to ensure this adequately, hierarchical structures in networks are examined and checked for their usability.
The goal of this thesis is the development of a spectrum management system (SMS) by adapting existing technologies and methods while taking all defined requirements into account. It turned out that a centralized broker solution is not suitable, since the delay time grows exponentially with the number of requests and therefore does not scale. This can be overcome by a Distributed Hash Table (DHT)-based extension without restricting the functionality of the broker solution. For incorporating geographic information, a hierarchical structure comparable to the Domain Name System (DNS) proved suitable.
The evaluation parameters are the resulting access time, i.e. the time the system needs to process requests, and the resulting number of users that can be served. The simulation considers an urban area with five buildings. In the centre is a six-storey office building equipped with a Wireless Local Area Network Access Point (WLAN-AP) on each floor. Around it are four private houses, each also equipped with a WLAN-AP. The entire area is covered by three mobile network operators with one base station (BS) each. The starting point for the evaluation is operation without an SMS. The results clearly show an overload of the Long Term Evolution base stations (LTE BSs), in particular for operators A and B. In the second run, the scenario is considered with an SMS. In this case, micro base stations (micro BSs), which are comparable to Wireless Local Area Network (WLAN) in their specification, are additionally deployed. Here, a much more balanced system behaviour is observed: all BSs and access points (APs) remain well below the full-load limit.
The investigations in this thesis demonstrate that a heterogeneous, temporarily overloaded radio system can be fully harmonized. Furthermore, the use of an SMS enables the efficient use of temporarily unused frequency ranges (so-called white and grey spaces).
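The abstract states that a Distributed Hash Table (DHT)-based extension of the broker overcame the scalability problem. The following toy sketch (hypothetical, not the thesis implementation) shows the core idea of a consistent-hashing lookup that maps a spectrum resource key, e.g. a frequency band combined with a geographic tile, to the responsible management node.

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Map a string to a point on the hash ring (SHA-1 based)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class SpectrumDHT:
    """Toy consistent-hashing ring: each management node is responsible for the
    keys between its predecessor's hash and its own hash. Purely illustrative."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def responsible_node(self, resource_key: str) -> str:
        pos = h(resource_key)
        idx = bisect_right([p for p, _ in self.ring], pos) % len(self.ring)
        return self.ring[idx][1]

# Example: which node manages the 2.6 GHz band in geographic tile "49.44,7.77"?
dht = SpectrumDHT(["broker-a.example", "broker-b.example", "broker-c.example"])
print(dht.responsible_node("2600MHz/49.44,7.77"))
```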
Field-effect transistor (FET) sensors and in particular their nanoscale variant of silicon nanowire transistors are very promising technology platforms for label-free biosensor applications. These devices directly detect the intrinsic electrical charge of biomolecules at the sensor’s liquid-solid interface. The maturity of micro fabrication techniques enables very large FET sensor arrays for massive multiplex detection. However, the direct detection of charged molecules in liquids faces a significant limitation due to a charge screening effect in physiological solutions, which inhibits the realization of point-of-care applications. As an alternative, impedance spectroscopy with FET devices has the potential to enable measurements in physiological samples. Even though promising studies were published in the field, impedimetric detection with silicon FET devices is not well understood.
The first goal of this thesis was to understand the device performances and to relate the effects seen in biosensing experiments to device and biomolecule types. A model approach should help to understand the capability and limitations of the impedimetric measurement method with FET biosensors. In addition, to obtain experimental results, a high precision readout device was needed. Consequently, the second goal was to build up multi-channel, highly accurate amplifier systems that would also enable future multi-parameter handheld devices.
A PSPICE FET model for potentiometric and impedimetric detection was adapted to the experiments and further expanded to investigate the sensing mechanism, the working principle, and effects of side parameters for the biosensor experiments. For potentiometric experiments, the pH sensitivity of the sensors was also included in this modelling approach. For impedimetric experiments, solutions of different conductivity were used to validate the suggested theories and assumptions. The impedance spectra showed two pronounced frequency domains: a low-pass characteristic at lower frequencies and a resonance effect at higher frequencies. The former can be interpreted as a contribution of the source and double layer capacitances. The latter can be interpreted as a combined effect of the drain capacitance with the operational amplifier in the transimpedance circuit.
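As a purely illustrative companion to the two frequency domains described above, the following sketch evaluates the closed-loop transimpedance of a generic readout stage lumped into an input capacitance (source plus double layer), a feedback network, and a single-pole op-amp; it reproduces a low-pass roll-off together with a resonance-like peak. The component values and the simplified circuit are assumptions, not the thesis's PSPICE model.

```python
import numpy as np

# Hypothetical lumped model: input capacitance C_in (source + double layer),
# transimpedance amplifier with feedback R_f || C_f, op-amp modelled as an
# integrator with gain-bandwidth product f_gbw.
R_f, C_f = 1e6, 0.2e-12          # feedback network
C_in = 100e-12                   # input (source + double-layer) capacitance
f_gbw = 10e6                     # op-amp gain-bandwidth product [Hz]

f = np.logspace(2, 7, 500)       # 100 Hz ... 10 MHz
s = 2j * np.pi * f
A = 2 * np.pi * f_gbw / s        # single-pole (integrator) open-loop gain
Z_f = R_f / (1 + s * R_f * C_f)
Z_t = -1.0 / (s * C_in / A + (1 + 1 / A) / Z_f)   # closed-loop transimpedance

mag_db = 20 * np.log10(np.abs(Z_t))
print(f"low-frequency gain : {mag_db[0]:.1f} dB(Ohm)")
print(f"peak gain          : {mag_db.max():.1f} dB(Ohm) "
      f"at {f[np.argmax(mag_db)]/1e3:.0f} kHz")
```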
Two readout systems, one as a laboratory system and one as a point-of-care demonstrator, were developed and used for several chemical and biosensing experiments. The PSPICE model applied to the sensors and circuits was utilized to optimize the systems and to explain the sensor responses. The systems as well as the developed modelling approach were a significant step towards portable instruments with combined transducer principles in future healthcare applications.
Development of a Method for Three-Phase State Estimation in Meshed Low-Voltage Grids
(2018)
In the course of the energy transition, operators of low-voltage grids are confronted with rising grid loading due to the continued expansion of distributed generation and the advent of electromobility. In the future, secure grid operation without line overloads can in principle only be guaranteed if the grid state is determined by suitable systems and intelligent grid management with controlling interventions takes place on that basis.
This thesis deals with the development and testing of a method for three-phase state estimation in meshed low-voltage grids. The input data are voltage and current measurements, essentially acquired by smart meters at household connection points. The method aims to detect limit violations with a high probability.
Besides the system concept, the focus is on the pre-processing of the system input data, i.e. the generation of substitute measurement values and the detection of topology errors, as well as on the development of an estimation algorithm with a linear measurement model and the ability to localize grossly erroneous measurement data.
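The estimation algorithm itself is not given in the abstract; the sketch below is a textbook linear weighted-least-squares estimator with a largest-normalized-residual check, shown only to make the terms "linear measurement model" and "localization of grossly erroneous measurement data" concrete. The measurement matrix, noise levels, and the screening rule are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Generic linear weighted-least-squares state estimation:
    z = H x + e, W = diag(1/sigma^2). Returns the estimate and the
    normalized residuals used for simple bad-data screening."""
    W = np.diag(1.0 / sigma**2)
    G = H.T @ W @ H                      # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    r = z - H @ x_hat                    # measurement residuals
    R = np.diag(sigma**2)
    Omega = R - H @ np.linalg.solve(G, H.T)   # residual covariance
    r_norm = np.abs(r) / np.sqrt(np.clip(np.diag(Omega), 1e-12, None))
    return x_hat, r_norm

# Tiny example: 2 states, 4 redundant measurements, one of them grossly wrong
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
x_true = np.array([1.02, 0.98])
sigma = np.full(4, 0.01)
z = H @ x_true + np.random.normal(0, 0.01, 4)
z[2] += 0.2                              # gross error in the third measurement
x_hat, r_norm = wls_state_estimate(H, z, sigma)
print("estimate:", np.round(x_hat, 3))
print("suspect measurement:", int(np.argmax(r_norm)))
```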
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)
Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of the driver without compromising the driving experience remains challenging. In general, the choice of input sensory information is limited in the state-of-the-art work. On the other hand, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction as well as the associated perception of the driver's behavioral and bio-information data. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement achieved by merging/fusing this information with other salient customized information sources to gain robustness/reliability. This thesis presents an effective multi-sensor approach with novel features for driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well. Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive aimed at gaining market acceptance in the future are discussed in the thesis as well.
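To make the fusion-and-classification step tangible, the sketch below trains a plain (non-hierarchical) SVM on hypothetical per-window features such as blink rate and steering-angle variance; the feature set, the dummy data, and the flat classifier are assumptions for illustration and not the DeCaDrive pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-window features: blink rate, gaze variance, steering-angle
# variance, pulse rate. Random dummy data stands in for real recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # 300 analysis windows, 4 features
y = rng.integers(0, 3, size=300)              # drowsiness levels 0..2 (dummy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:200], y[:200])
print("toy accuracy:", clf.score(X[200:], y[200:]))
```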
In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.
"In contemporary electronics 80% of a chip may perform digital functions but the 20% of analog functions may take 80% of the development time." [1] Aggravating this, the demands on analog design are increasing with rapid technology scaling. Where possible, most designs have moved from the analog to the digital domain; however, interacting with the environment will always require analog-to-digital data conversion. Adding to this problem, the number of sensors used in consumer and industrial products is rapidly increasing. Designers of ADCs are dealing with this problem in several ways, the most important being the migration towards digital designs and time-domain techniques. Time-to-Digital Converters (TDCs) are becoming increasingly popular for robust signal processing. Biological neurons make use of spikes, which carry spike-timing information and are not affected by the problems related to technology scaling. Neuromorphic ADCs still remain exotic, with few implementations in sub-micron technologies (Table 2.7). Even among these few designs, the strengths of biological neurons are rarely exploited. A previous work [2], LUCOS, a high-dynamic-range image sensor, validated the efficiency of spike processing. The ideas from this work can be generalized to build a highly effective sensor signal conditioning system, which carries the promise of being robust to technology scaling.
The goal of this work is to create a novel spiking neural ADC as a new form of a Multi-Sensor Signal Conditioning and Conversion system, which
• Will be able to interface with, or be a part of, a System on Chip with traditional analog or advanced digital components.
• Will degrade gracefully.
• Will be robust to noise- and jitter-related problems.
• Will be able to learn and adapt to static and dynamic errors.
• Will be capable of self-repair, self-monitoring, and self-calibration.
Sensory systems in humans and other animals analyze the environment using several techniques. These techniques have evolved and been perfected to help the animal survive. Different animals specialize in different sense organs; however, the peripheral neural network architectures remain similar among various animal species, with few exceptions. While there are many biological sensing techniques, the most popular engineering techniques are based on intensity detection, frequency detection, and edge detection. These techniques are used with traditional analog processing (e.g., color sensors using filters) and with biological techniques (e.g., the LUCOS chip [2]). The localization capability of animals has never been fully utilized.
One of the most important capabilities of animals, vertebrates or invertebrates, is localization. The object of localization can be a predator, prey, or a source of water or food. Since these are basic necessities for survival, they evolve much faster due to the survival of the fittest. In fact, localization capabilities, even where the sensors differ, have convergently evolved to use the same processing method (coincidence detection) in the peripheral neurons (e.g., the forked tongue of a snake, the antennae of a cockroach, acoustic localization in fishes and mammals). This convergent evolution increases the validity of the technique. In this work, localization concepts based on acoustic localization and tropotaxis are investigated and employed for the creation of novel ADCs.
Unlike intensity and frequency detection, which are not linear (e.g., eyes saturate in bright light and lose color perception in low light), localization is inherently linear. This is mainly because the accurate localization of predator or prey can be the difference between life and death for an animal.
Figure 1 visually explains the ADC concept proposed in this work. It has two parts: (1) Sensor-to-Spike(time) Conversion (SSC) and (2) Spike(time)-to-Digital Conversion (SDC). Both structures have been designed with models of biological neurons. The combination of these two structures is called SSDC.
To efficiently implement the proposed concept, several biological neural models are compared and two models are shortlisted. Various synapse structures are also studied. From this study, the Leaky Integrate-and-Fire (LIF) neuron is chosen, since it fulfills all the requirements of the proposed structure. The analog neuron and synapse designs from Indiveri et al. [3], [4] were taken, simulations were conducted using Cadence, and the behavioral equivalence with the biological counterpart was checked. The LIF neuron had features that were not required for the proposed approach; a simple LIF neuron stripped of these features was designed to be as fast as the technology allows.
The SDC was designed with the neural building blocks, and the delays were realized using buffer chains. This SDC converts the incoming Time Interval Code (TIC) to sparse place coding using coincidence detection. Coincidence detection is a property of spiking neurons that is a time-domain equivalent of a Gaussian kernel. The SDC is designed to have an online reconfigurable Gaussian kernel width, weight, threshold, and refractory period. The advantage of sparse place codes, which contain rank order coding, was described in our work [5]. A time-based winner-take-all circuit with memory, based on a previous work [6], was created for reading out the sparse place codes asynchronously.
[Figure 1: ADC as a localization problem (right); the Jeffress model of sound localization visualized (left). The values t1 and t2 indicate the time taken from the source to s1 and s2, respectively.]
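As an illustration of the coincidence-detection behaviour described above (a time-domain analogue of a Gaussian kernel), the sketch below simulates a minimal leaky integrate-and-fire neuron driven by two spike trains; it fires only when the two input spikes arrive sufficiently close in time. The time constant, weights, threshold, and time step are illustrative assumptions, not the parameters of the SSDC chip.

```python
import numpy as np

def lif_coincidence(spike_times_a, spike_times_b, tau=1e-3, w=0.6,
                    v_th=1.0, dt=1e-5, t_end=10e-3):
    """Return output spike times of a toy LIF neuron driven by two inputs;
    it fires only when the two input spikes arrive close enough in time."""
    v, out = 0.0, []
    inputs = sorted([(t, w) for t in spike_times_a] +
                    [(t, w) for t in spike_times_b])
    i = 0
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= np.exp(-dt / tau)              # membrane leak
        while i < len(inputs) and inputs[i][0] <= t:
            v += inputs[i][1]               # synaptic kick
            i += 1
        if v >= v_th:
            out.append(t)
            v = 0.0                         # reset; refractory period ignored
    return out

print(lif_coincidence([1e-3], [1.2e-3]))    # near-coincident -> fires
print(lif_coincidence([1e-3], [6e-3]))      # far apart       -> stays silent
```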
The SSC was also initially designed with the same building blocks. Additionally, a differential synapse was designed for a better SSC. The sensor element considered was a Wheatstone full-bridge AMR sensor, the AFF755 from Sensitec GmbH. A reconfigurable version of the synapse was also designed for a more generic sensor interface.
The first prototype chip, SSDCα, was designed with 257 coincidence-detector modules realizing the SDC and the SSC. Since the spike times are the most important information, the spikes can be treated as digital pulses. This enables digital communication between analog modules and creates a lot of freedom for the use of digital processing between the analog modules discussed. This advantage is fully exploited in the design of SSDCα. Three SSC modules are multiplexed to the SDC; these SSC modules also provide outputs from the chip simultaneously. A rising-edge-detecting, fixed-pulse-width generation circuit is used to create pulses that are best suited for efficient operation of the SDC. The delay lines are made reconfigurable to increase robustness and to modify the span of the SDC. The readout technique used in the first prototype is a relatively slow but safe shift register; it is used to analyze the characteristics of the core work and will be replaced by faster alternatives discussed in the work. The chip area is 8.5 mm²; the chip supports sampling rates from DC to 150 kHz and resolutions from 8 to 13 bit, contains 28,200 transistors, and was designed in 350 nm CMOS technology from ams. The chip has been manufactured and tested with a sampling rate of 10 kHz and a theoretical resolution of 8 bits. However, due to the limitations of our time-interval generator, we were able to confirm only 4 bits of resolution.
The key novel contributions of this work are:
• Neuromorphic implementation of AD conversion as a localization problem, based on the sound localization and tropotaxis concepts found in nature.
• Coincidence detection with sparse place coding to enhance resolution.
• Graceful degradation without redundant elements and inherent robustness to noise, which helps with technology scaling.
• Amenability to local adaptation and self-x features.
The conceptual goals have all been fulfilled, with the exception of adaptation. The feasibility of local adaptation has been shown with promising results, and further investigation is required in future work. This thesis acts as a baseline, paving the way for R&D in a new direction. The chip design has used the 350 nm ams hitkit as a vehicle to prove the functionality of the core concept. The concept can easily be ported to present aggressively scaled technologies and to future technologies.
Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industry-standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
For many years, most distributed real-time systems employed data communication systems specially tailored to the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and, in some cases, not well understood.
Mostly driven by ever decreasing costs, the application of such distributed real-time systems has drastically increased in recent years in different domains. Consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but also the number of nodes per system, which in turn increases their network bandwidth requirements. Further, the system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from a federated to an integrated modular architecture, also increasing the network bandwidth requirements.
Ethernet (IEEE 802.3) [iee12] is a well established network standard. Further, it is fast, easy to install, and the interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantee. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet and the temporal guarantees required by distributed real-time systems. Two of these protocols are: Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple priority traffic, and scheduling of TTEthernet networks.
AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not overly pessimistic, amount of memory for each buffer. The current AFDX standard allows the classification of network traffic with two priorities. Nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds for the buffer backlog of AFDX networks with multiple-priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method.
In TTEthernet, nodes are synchronized, allowing message transmission at well-defined points in time, computed off-line and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which should be solved in reasonable time for industrial-size networks. We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree. As the scheduler traverses the search tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf; if, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
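The following toy sketch illustrates the search-tree idea described above for a single physical link: each tree level places one time-triggered frame at some offset, and the scheduler backtracks when it reaches an invalid leaf. The frame set, the integer time raster, and the single-link scope are invented for the example; the thesis's scheduler handles complete TTEthernet networks and far larger inputs.

```python
def schedule_link(frames, hyperperiod):
    """frames: list of (name, period, transmission_time); all integers, periods
    dividing the hyperperiod. Returns {name: offset} with non-overlapping
    transmissions, or None if no conflict-free table exists."""
    slots = [None] * hyperperiod                     # occupancy per time unit

    def place(idx):
        if idx == len(frames):
            return True                              # reached a valid leaf
        name, period, length = frames[idx]
        for offset in range(period - length + 1):    # children of this vertex
            windows = [(k * period + offset, k * period + offset + length)
                       for k in range(hyperperiod // period)]
            if all(slots[t] is None for a, b in windows for t in range(a, b)):
                for a, b in windows:
                    for t in range(a, b):
                        slots[t] = name              # tentatively schedule
                if place(idx + 1):
                    return True
                for a, b in windows:                 # invalid leaf: backtrack
                    for t in range(a, b):
                        slots[t] = None
        return False

    return ({f[0]: next(i for i, s in enumerate(slots) if s == f[0])
             for f in frames} if place(0) else None)

print(schedule_link([("vl1", 4, 2), ("vl2", 8, 2), ("vl3", 8, 2)], hyperperiod=8))
```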
Safety-related systems (SRS) protect against the unacceptable risk resulting from failures of technical systems. The average probability of dangerous failure on demand (PFD) of such SRS in low-demand mode is limited by standards. Probabilistic models are applied to determine the average PFD and to verify the specified limits. In this thesis, an effective framework for the probabilistic modeling of complex SRS is provided. This framework enables the computation of the average, instantaneous, and maximum PFD. In SRS, preventive maintenance (PM) is essential to achieve an average PFD in compliance with the specified limits. PM intends to reveal dangerous undetected failures and provides repair if necessary. The introduced framework pays special attention to the precise and detailed modeling of PM. Multiple previously neglected degrees of freedom of the PM are considered, such as two types of element-wise PM at arbitrarily variable times. As the analyses show, these degrees of freedom have a significant impact on the average, instantaneous, and maximum PFD. The PM is optimized to improve the average or maximum PFD or both. A well-known heuristic nonlinear optimization method (the Nelder-Mead method) is applied to minimize the average or maximum PFD or a weighted trade-off. A significant improvement of the objectives and improved protection are achieved. These improvements are obtained via the available degrees of freedom of the PM and without additional effort. Moreover, a set of rules is presented to decide, for a given SRS, whether significant improvements will be achieved by optimizing the PM. These rules are based on well-known characteristics of the SRS, e.g. redundancy or no redundancy, complete or incomplete coverage of PM. The presented rules aim to support the decision whether the optimization is advantageous for a given SRS and should be applied.
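To illustrate how PM times can be optimized with the Nelder-Mead method, the sketch below minimizes the average PFD of a deliberately simple single-channel model with perfect, complete proof tests (a sawtooth PFD approximation). The failure rate, mission time, and the toy PFD model are assumptions; the thesis's framework models PM in far more detail.

```python
import numpy as np
from scipy.optimize import minimize

LAMBDA_DU = 2e-6      # dangerous undetected failure rate [1/h] (illustrative)
T_MISSION = 87600.0   # 10 years [h]

def average_pfd(pm_times):
    """Average PFD of a single-channel element with perfect, complete proof
    tests at the given times: PFD(t) ~ lambda * (t - last test). Toy model."""
    times = np.sort(np.clip(pm_times, 0.0, T_MISSION))
    edges = np.concatenate(([0.0], times, [T_MISSION]))
    seg = np.diff(edges)
    return LAMBDA_DU * np.sum(seg**2) / (2.0 * T_MISSION)

# Optimize two proof-test times with Nelder-Mead, starting from a poor guess
res = minimize(average_pfd, x0=[10000.0, 20000.0], method="Nelder-Mead")
print("optimized PM times [h]:", np.round(res.x))
print("average PFD:", f"{res.fun:.2e}")
```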
The recently established technologies in the areas of distributed measurement and intelligent information processing systems, e.g., Cyber-Physical Systems (CPS), Ambient Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things (IoT), and Industry 4.0, have increased the demand for the development of intelligent integrated multi-sensory systems to serve rapidly growing markets [1, 2]. They increase the significance of complex measurement systems that incorporate numerous advanced methodological implementations, including electronic circuits, signal processing, and multi-sensory information fusion. In particular, in multi-sensory cognition applications, designing such systems involves skill-intensive tasks, e.g., method selection, parameterization, model analysis, and processing chain construction, which require immense effort and are conventionally carried out manually by an expert designer. Moreover, strong technological competition imposes even more complicated design problems with multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability. Thus, the conventional human-expert-based design approach may not be able to cope with the increasing demand in numbers, complexity, and diversity. To alleviate this issue, the design automation approach has been the topic of numerous research works [3-14] and has been commercialized in several products [15-18]. Additionally, the dynamic adaptation of intelligent multi-sensor systems is a potential solution for developing dependable and robust systems. The intrinsic evolution approach and self-x properties [19], which include self-monitoring, self-calibrating/trimming, and self-healing/repairing, are among the best candidates for this purpose. Motivated by the ongoing research trends and based on the background of our research work [12, 13], which is among the pioneers in this topic, the research work of this thesis contributes to the design automation of intelligent integrated multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-X properties (DAICOX) architecture is presented with the aim of tackling the design effort and providing high-quality and robust solutions for multi-sensor intelligent systems. The DAICOX architecture is therefore conceived with the goals listed below:
• Perform complete front-to-back processing chain design with automated method selection and parameterization.
• Provide a rich choice of pattern recognition methods in the design method pool.
• Associate design information via an interactive user interface and visualization along with intuitive visual programming.
• Deliver high-quality solutions outperforming conventional approaches by using multi-objective optimization.
• Gain adaptability, reliability, and robustness of the designed solutions with self-x properties.
Derived from these goals, several scientific methodological developments and implementations, particularly in the areas of pattern recognition and computational intelligence, will be pursued as part of the DAICOX architecture in the research work of this thesis. The method pool is intended to contain a rich choice of methods and algorithms covering data acquisition and sensor configuration, signal processing and feature computation, dimensionality reduction, and classification. These methods will be selected and parameterized automatically by the DAICOX design optimization to construct a multi-sensory cognition processing chain. A collection of non-parametric feature quality assessment functions for the dimensionality reduction (DR) process will be presented. In addition to standard DR methods, variations of feature selection, in particular feature weighting, will be proposed. Three different classification categories will be incorporated in the method pool. A hierarchical classification approach will be proposed and developed to serve as a multi-sensor fusion architecture at the decision level. Besides multi-class classification, one-class classification methods, e.g., One-Class SVM and NOVCLASS, will be presented to extend the functionality of the solutions, in particular for anomaly and novelty detection. DAICOX is conceived to effectively handle the problem of method selection and parameter setting for a particular application, yielding high-performance solutions. The processing chain construction tasks will be carried out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), with a multi-objective optimization approach and model analysis for robust solutions. In addition to the automated system design mechanisms, DAICOX will facilitate the design tasks with intuitive visual programming and various options for visualization. The design database concept of DAICOX is intended to allow the reusability and extensibility of designed solutions gained from previous knowledge. Thus, the cooperative design combining the machine and knowledge from the design expert can also be utilized to obtain fully enhanced solutions. In particular, the integration of self-x properties as well as intrinsic optimization into the system is proposed to gain enduring reliability and robustness. Hence, DAICOX will allow the inclusion of dynamically reconfigurable hardware instances in the designed solutions in order to realize intrinsic optimization and self-x properties.
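As a small illustration of the meta-heuristic, wrapper-style optimization named above, the sketch below runs a toy genetic algorithm for feature selection with the classification rate as fitness; the dataset, the k-NN wrapper, and all GA parameters are arbitrary choices for the example and not part of DAICOX.

```python
import numpy as np
from sklearn.datasets import load_iris           # stand-in dataset
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = load_iris(return_X_y=True)
n_feat, pop_size, generations = X.shape[1], 12, 15

def fitness(mask):
    """Cross-validated classification rate of a k-NN wrapper on the subset."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]         # selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)
        child = np.concatenate([a[:cut], b[cut:]])              # crossover
        flip = rng.random(n_feat) < 0.1
        children.append(np.where(flip, ~child, child))          # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```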
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor system design architecture with automated method selection, parameterization, and model analysis has been developed in compliance with open-source, multi-platform software. It is integrated with an intuitive design environment, which includes a visual programming concept and design information visualizations. Thus, the design effort is minimized, as investigated in three case studies with different application backgrounds, e.g., food analysis (LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX achieved better solution quality than the manual approach in all cases: the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX, DeCaDrive, and magnetic localization cases, respectively. By using DAICOX in the LoX case study, the design time was reduced by 81.87% compared to the conventional approach. At the current state of development, a number of novel contributions of the thesis are outlined below.
• Automated processing chain construction and parameterization for the design of signal processing and feature computation.
• Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection and feature weighting with multi-objective feature quality assessment.
• A modification of a non-parametric compactness measure for feature space quality assessment.
• A decision-level sensor fusion architecture based on the proposed hierarchical classification approach, i.e., H-SVM.
• A collection of one-class classification methods and a novel variation, i.e., NOVCLASS-R.
• Automated design toolboxes supporting front-to-back design with automated model selection and information visualization.
In this research work, due to the complexity of the task, not all of the identified goals have been comprehensively reached yet, nor has the complete architecture definition been fully implemented. Based on the currently implemented tools and frameworks, the ongoing development of DAICOX is progressing towards the complete architecture. Potential future improvements are the extension of the method pool with a richer choice of methods and algorithms, processing chain breeding via a graph-based evolution approach, the incorporation of intrinsic optimization, and the integration of self-x properties. With these features, DAICOX will improve its aptness for designing advanced systems that serve the rapidly growing technologies of distributed intelligent measurement systems, in particular CPS and Industry 4.0.
The advances in sensor technology have introduced smart electronic products with a high integration of multi-sensor elements, sensor electronics, and sophisticated signal processing algorithms, resulting in intelligent sensor systems with a significant level of complexity. This complexity leads to a higher vulnerability in performing their respective functions in a dynamic environment. The system dependability can be improved via the implementation of self-x features in reconfigurable systems. The reconfiguration capability requires capable switching elements, typically in the form of a CMOS switch or a miniaturized electromagnetic relay. The emerging DC-MEMS switch has the potential to complement the CMOS switch in System-in-Package as well as integrated-circuit solutions. The aim of this thesis is to study the feasibility of using DC-MEMS switches to enable self-x functionality at the system level. The self-x implementation is also extended to the component level, in which the ISE-DC-MEMS switch is equipped with self-monitoring and self-repairing features. The MEMS electrical behavioural model generated by the design tool is inadequate, so additional electrical models have been proposed, simulated, and validated. The simplification of the mechanical MEMS model produced inaccurate simulation results that led to the occurrence of stiction in the actual device. A stiction conformity test has been proposed, implemented, and successfully validated to compensate for the inaccurate mechanical model. Four different system simulations of representative applications were carried out using the improved behavioural MEMS model to show the aptness and the performance of the ISE-DC-MEMS switch in sensitive reconfiguration tasks and to compare it with transmission gates. The current design of the ISE-DC-MEMS switch needs further optimization in terms of size, driving voltage, and design robustness to guarantee a high yield in order to match the performance of commercial DC-MEMS switches.
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely base station to user in the downlink and user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help with frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference between them. In addition, they are designed to have reduced implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the allocation of spreading sequences based on the users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
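The thesis's optimum and sub-optimum allocation models are not detailed in the abstract; the sketch below only makes the underlying quantities concrete: it computes the worst-case periodic cross-correlation of candidate spreading sequences (Walsh-Hadamard rows here) and then applies a simple, hypothetical heuristic that assigns the most benign sequences to the users with the weakest channels.

```python
import numpy as np
from scipy.linalg import hadamard

def periodic_xcorr_max(a, b):
    """Maximum absolute periodic cross-correlation over all cyclic shifts."""
    return max(abs(int(np.dot(a, np.roll(b, k)))) for k in range(len(a)))

N = 8
codes = hadamard(N)                               # rows: candidate sequences
worst = [max(periodic_xcorr_max(codes[i], codes[j])
             for j in range(N) if j != i) for i in range(N)]

# Hypothetical per-user downlink channel gains (dB); weakest channel first
channel_gain_db = np.array([-3.0, -12.5, -7.1, -1.0])
users_by_need = np.argsort(channel_gain_db)
codes_by_quality = np.argsort(worst)              # most benign sequence first

allocation = {int(u): int(c) for u, c in zip(users_by_need, codes_by_quality)}
print("user -> code index:", allocation)
```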
In cryptography, secret keys are used to ensure confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the
broadcast nature of the channel necessitates robust key management systems for
secure functioning of the network. Physical layer security is a novel method of
profitably utilising the random and reciprocal variations of the wireless channel to
extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key in the form of a bit vector. Due to errors in measurement and quantisation, and to additive Gaussian noise, the bits of the preliminary keys disagree. These errors are corrected by using error detection and correction methods to obtain a synchronised key at both nodes. Further, by means of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. If, instead of quantising the channel measurements directly, their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made efficient and fast. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The enhanced measurements are quantised by binary and adaptive quantisation. Then the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. The performance evaluation compares the methods in terms of bit disagreement rate, key generation rate, tests of randomness, a robustness test, and an eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
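As a small, self-contained illustration of the quantisation and bit-disagreement steps described above (not the l1-norm, clustering, Kalman or regression reciprocity enhancement of the thesis), the sketch below quantises two noisy, reciprocal channel traces with a mean threshold and reports the bit disagreement rate; all signal parameters are invented for the example.

    import numpy as np

    def binary_quantise(samples):
        """1 where the sample exceeds the mean of the trace, else 0."""
        return (samples > samples.mean()).astype(int)

    def bit_disagreement_rate(key_a, key_b):
        return np.mean(key_a != key_b)

    rng = np.random.default_rng(1)
    channel = rng.normal(size=256)                 # common reciprocal component
    alice = channel + 0.1 * rng.normal(size=256)   # independent measurement noise
    bob = channel + 0.1 * rng.normal(size=256)

    key_a, key_b = binary_quantise(alice), binary_quantise(bob)
    print("preliminary key length:", key_a.size)
    print("bit disagreement rate :", bit_disagreement_rate(key_a, key_b))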
Seit Aufkommen der Halbleiter-Technologie existiert ein Trend zur Miniaturisierung elektronischer Systeme. Dies, steigende Anforderungen sowie die zunehmende Integration verschiedener Sensoren zur Interaktion mit der Umgebung lassen solche eingebetteten Systeme, wie sie zum Beispiel in mobilen Geräten oder Fahrzeugen vorkommen, zunehmend komplexer werden. Die Folgen sind ein Anstieg der Entwicklungszeit und ein immer höherer Bauteileaufwand, bei gleichzeitig geforderter Reduktion von Größe und Energiebedarf. Insbesondere der Entwurf von Multi-Sensor-Systemen verlangt für jeden verwendeten Sensortyp jeweils gesondert nach einer spezifischen Sensorelektronik und steht damit den Forderungen nach Miniaturisierung und geringem Leistungsverbrauch entgegen.
In dieser Forschungsarbeit wird das oben beschriebene Problem aufgegriffen und die Entwicklung eines universellen Sensor-Interfaces für eben solche Multi-Sensor-Systeme erörtert. Als ein einzelner integrierter Baustein kann dieses Interface bis zu neun verschiedenen Sensoren unterschiedlichen Typs als Sensorelektronik dienen. Die aufnehmbaren Messgrößen umfassen: Spannung, Strom, Widerstand, Kapazität, Induktivität und Impedanz.
Durch dynamische Rekonfigurierbarkeit und applikationsspezifische Programmierung wird eine variable Konfiguration entsprechend der jeweiligen Anforderungen ermöglicht. Sowohl der Entwicklungs- als auch der Bauteileaufwand können dank dieser Schnittstelle, die zudem einen Energiesparmodus beinhaltet, erheblich reduziert werden.
Die flexible Struktur ermöglicht den Aufbau intelligenter Systeme mit sogenannten Self-x Charakteristiken. Diese betreffen Fähigkeiten zur eigenständigen Systemüberwachung, Kalibrierung oder Reparatur und tragen damit zu einer erhöhten Robustheit und Fehlertoleranz bei. Als weitere Innovation enthält das universelle Interface neuartige Schaltungs- und Sensorkonzepte, beispielsweise zur Messung der Chip-Temperatur oder Kompensation thermischer Einflüsse auf die Sensorik.
Zwei unterschiedliche Anwendungen demonstrieren die Funktionalität der hergestellten Prototypen. Die realisierten Applikationen haben die Lebensmittelanalyse sowie die dreidimensionale magnetische Lokalisierung zum Gegenstand.
The current procedures for achieving industrial process surveillance, waste reduction, and prognosis of critical process states are still insufficient in some parts of the manufacturing industry. Increasing competitive pressure, falling margins, increasing cost, just-in-time production, environmental protection requirements, and guidelines concerning energy savings pose new challenges to manufacturing companies, from the semiconductor to the pharmaceutical industry.
New, more intelligent technologies adapted to the current technical standards provide companies with improved options to tackle these situations. Here, knowledge-based approaches open up pathways that have not yet been exploited to their full extent. The Knowledge-Discovery-Process for knowledge generation describes such a concept. Based on an understanding of the problems arising during production, it derives conclusions from real data, processes these data, transfers them into evaluated models and, by this open-loop approach, reiteratively reflects the results in order to resolve the production problems. Here, the generation of data through control units, their transfer via field bus for storage in database systems, their formatting, and the immediate querying of these data, their analysis and their subsequent presentation with its ensuing benefits play a decisive role.
The aims of this work result from the lack of systematic approaches to the above-mentioned issues, such as process visualization, the generation of recommendations, the prediction of unknown sensor and production states, and statements on energy cost.
Both science and commerce offer mature statistical tools for data preprocessing, analysis and modeling, and for the final reporting step. Since their creation, the insurance business, the world of banking, market analysis, and marketing have been the application fields of these software types; they are now expanding to the production environment.
Appropriate modeling can be achieved via specific machine learning procedures, which have been established in various industrial areas, e.g., in process surveillance by optical control systems. Here, state-of-the-art classification methods are used, with multiple applications comprising sensor technology, process areas, and production site data. Manufacturing companies now intend to establish a more holistic surveillance of process data, such as sensor failures or process deviations, to identify dependencies. The causes of quality problems must be recognized and selected in real time from about 500 attributes of a highly complex production machine. Based on these identified causes, recommendations for improvement must then be generated for the operator at the machine, in order to enable timely measures that avoid these quality deviations.
Unfortunately, the ability to meet the required increases in efficiency – with simultaneous consumption and waste minimization – still depends on data that are, for the most part, not available. There is an overrepresentation of positive examples whereas the number of definite negative examples is too low.
The acquired information can be influenced by sensor drift effects and the occurrence of quality degradation may not be adequately recognized. Sensorless diagnostic procedures with dual use of actuators can be of help here.
Moreover, in the course of a process, critical states with sometimes unexplained behavior can occur. Also in these cases, deviations could be reduced by early countermeasures.
The generation of data models using appropriate statistical methods is of advantage here.
Conventional classification methods sometimes reach their limits. Supervised learning methods are mostly used in areas of high information density with sufficient data available for the classes under examination. However, there is a growing trend (e.g., spam filtering) to apply supervised learning methods to underrepresented classes, for which the available data are, at best, a few outliers or do not exist at all.
The application field of One-Class Classification (OCC) deals with this issue. Standard classification procedures (e.g., the k-nearest-neighbor classifier or support vector machines) can be modified to suit such problems. In this way, a control system is able to make statements on changing process states or sensor deviations. The above-described knowledge discovery process was employed in a case study from the polymer film industry, at Mondi Gronau GmbH, and was accomplished by a real-data survey at the production site with subsequent data preprocessing, modeling, evaluation, and deployment as a system for the generation of recommendations. To this end, questions regarding the following topics had to be clarified: data sources, datasets and their formatting, transfer pathways, storage media, query sequences, the employed classification methods, their adjustment to the problems at hand, evaluation of the results, construction of a dynamic cycle, and the final implementation in the production process, along with its added value for the company.
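A minimal sketch of the one-class classification idea referred to above, using scikit-learn's OneClassSVM on synthetic data; the feature dimensions, parameters and class sizes are illustrative only and do not reproduce the models built in the case study.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(42)
    normal_ops = rng.normal(loc=0.0, scale=1.0, size=(500, 6))   # "good" process states
    new_batch = np.vstack([rng.normal(0.0, 1.0, size=(10, 6)),
                           rng.normal(5.0, 1.0, size=(3, 6))])   # contains deviations

    # Train only on the well-represented normal class.
    clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_ops)

    # +1 = consistent with training data, -1 = potential process deviation.
    print(clf.predict(new_batch))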
Pivotal options for optimization with respect to ecological and economic aspects can be found here. There is room for improvement in the reduction of energy consumption, CO₂ emissions, and waste at all machines. At this one site, savings of several million euros per month can be achieved.
One major difficulty so far has been process data that are hard to access: distributed over various unconnected data sources, they led in some areas to an increased analysis effort and a lack of holistic real-time quality surveillance. The resulting limited monitoring of specifications and limited support for the operator at the installation were a clear disadvantage with regard to cost minimization.
The data of the case study, captured according to their purposes and in coordination with process experts, amounted to 21,900 process datasets from cast film extrusion during 2 years’ time, including sensor data from dosing facilities and 300 site-specific energy datasets from the years 2002–2014.
In the following, the investigation sequence is displayed:
1. In the first step, industrial approaches according to Industrie 4.0 and related to Big Data were investigated. The applied statistical software suites and their functions were compared with a focus on real-time data acquisition from database systems, different data formats, their sensor locations at the machines, and the data processing part. The linkage of datasets from various data sources for, e.g., labeling and downstream exploration according to the knowledge discovery process is of high importance for polymer manufacturing applications.
2. In the second step, the aims were defined according to the industrial requirements, with the critical production problem called “cut-off” as the main selection, and with regard to their investigation with machine learning methods. To this end, a system architecture suited to the polymer industry was developed, containing the following processing steps: data acquisition, monitoring & recommendation, and self-configuration.
3. The novel sensor datasets, with 160–2,500 real and synthetic attributes, were acquired within 1-min intervals via PLC and field bus from an Oracle database. The 160 features were reduced to 6 dimensions with feature reduction methods. Due to underrepresentation of the critical class, the learning approaches had to be modified and optimized for one-class classification, which achieved 99% accuracy after training, testing and evaluation with real datasets.
4. In the next step, the 6-dimensional dataset was scaled into lower 1-, 2-, or 3-dimensional space with classical and non-classical mapping approaches for downstream visualization. The mapped view was separated into zones of normal and abnormal process conditions by threshold setting.
5. Afterwards, the boundary zone was investigated and an approach for extracting trajectories consisting of sequences of condition points was developed to optimize the prediction behavior of the model. The extracted trajectories were trained, tested and evaluated with state-of-the-art classification methods, achieving a 99% recognition ratio.
6. In the last step, the best methods and processing parts were converted into a specifically developed domain-specific graphical user interface for real-time visualization of process condition changes. The requirements of such an interface were discussed with the operators with regard to intuitive handling, interactive visualization and recommendations (as e.g., messaging and traffic lights), and implemented.
The software prototype was tested on a laboratory machine. Correct recognition of abnormal process problems was achieved at a 90% ratio. The software was afterwards transferred to a group of on-line production machines.
As demonstrated, the monthly amount of waste arising at machine M150 could be decreased from 20.96% to 12.44% during the application time. The frequency of occurrence of the specific problem was reduced by 30%, corresponding to monthly savings of 50,000 EUR.
In the approach pertaining to the energy prognosis of load profiles, monthly energy data from 2002 to 2014 (about 36 trajectories with three to eight real parameters each) were used as the basis and analyzed and modeled systematically. The prognosis quality increased as the target date approached. Thereby, the site-specific load profile for 2014 could be predicted with an accuracy of 99%.
The achievement of sustained cost reductions of several hundred thousand euros, combined with additional savings of EUR 2.8 million, could be demonstrated.
The process improvements achieved while pursuing scientific targets could be successfully and permanently integrated at the case study plant. The increase in methodical and experimental knowledge was reflected by first economic results and could be verified numerically. The expectations of the company were more than fulfilled, and further developments based on the new findings were initiated, among them the transfer of the scientific results to additional machines and the initiation of further studies expanding into the diagnostics area.
Considering the size of the enterprise, future enhanced success should also be possible for other locations. In the course of the grid charge exemption according to EEG, the energy savings at further German locations can amount to 4–11% on a monetary basis and at least 5% based on energy. Up to 10% of materials and cost can be saved with regard to waste reduction related to specific problems. According to projections, material savings of 5–10 t per month and time savings of up to 50 person-hours are achievable. Important synergy effects can be created by the knowledge transfer.
Vorgestellt wird ein Verfahren zur Bestimmung der Erdschlussentfernung in hochohmig geerdeten
Netzen. Nach Abklingen der transienten Vorgänge im Fehlerfall stellt sich ein stationärer
Zustand ein, in dem das Netz zunächst weiter betrieben werden kann.
Ausgehend von diesem stationären Fehlerfall wird auf der Basis eines Π-Glieds das Leitungsmodell
des einseitig gespeisten Stichabgangs mit einer Last in der Vier-Leiter-Darstellung
entwickelt. Die Schaltungsanalyse erfolgt mit Hilfe komplexer Rechnung und der Kirchhoffschen
Gesetze. Grundlage der Betrachtungen bildet das Netz mit isoliertem Sternpunkt.
Das entstehende Gleichungssystem ist in seiner Grundform nichtlinear, lässt sich jedoch auf eine
elementar lösbare kubische Gleichung im gesuchten Fehlerentfernungsparameter zurückführen.
Eine weitere Lösungsmöglichkeit bietet das Newton-Raphson-Verfahren.
Durch Verlegen der lastseitigen Leiter-Erd-Kapazitäten an den Abgangsanfang kann das vollständige,
nichtlineare System in ein lineares System überführt werden. Hierbei sind die beiden
Ausprägungen „direkte Lösung mit unsymmetrischer Last“ oder „Ausgleichsrechnung mit
symmetrischer Last“ möglich.
Eine MATLAB®-Implementierung dieser vier Rechenalgorithmen bildet die Basis der weiteren
Analysen.
Alle messtechnischen Untersuchungen erfolgten am Netz-Kraftwerksmodell der TU Kaiserslautern.
Hier wurden verschiedene Fehlerszenarien hinsichtlich Fehlerentfernung, -widerstand und
Größe des gesunden Restnetzes hergestellt, in 480 Einzelmessungen erfasst und mit den Algorithmen
ausgewertet. Dabei wurden auch Messungen an fehlerfreien Abgängen erhoben, um das
Detektionsvermögen der Algorithmen zu testen.
Neben Grundschwingungsbetrachtungen ist die Auswertung aller Datensätze mit der 5. und der
7. Harmonischen ein zentrales Thema. Im Fokus steht die Verwendbarkeit dieser Oberschwingungen
zur Erdschlussentfernungsmessung bzw. -detektion mit den o.g. Algorithmen.
Besondere Bedeutung kommt der Fragestellung zu, inwieweit die für ein Netz mit isoliertem
Sternpunkt konzipierten Algorithmen unter Benutzung der höheren Harmonischen zur Erdschlussentfernungsmessung
in einem gelöschten Netz geeignet sind.
Schließlich wird das Verfahren auf Abgänge mit inhomogenem Leitermaterial erweitert, da auch
diese Konstellation von praktischer Bedeutung ist.
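Zur Veranschaulichung des oben erwähnten Newton-Raphson-Verfahrens zeigt die folgende Python-Skizze die Iteration für eine kubische Gleichung im Fehlerentfernungsparameter. Die Koeffizienten und der Startwert sind frei gewählte Platzhalter und entsprechen nicht den in der Arbeit hergeleiteten Gleichungen.

    def newton_raphson(f, df, x0, tol=1e-9, max_iter=50):
        """Einfache Newton-Iteration x_{k+1} = x_k - f(x_k)/df(x_k)."""
        x = x0
        for _ in range(max_iter):
            dx = f(x) / df(x)
            x -= dx
            if abs(dx) < tol:
                break
        return x

    # Beispielhafte kubische Gleichung a3*x^3 + a2*x^2 + a1*x + a0 = 0
    a3, a2, a1, a0 = 1.0, -2.5, 1.2, -0.1        # Platzhalter-Koeffizienten
    f  = lambda x: ((a3 * x + a2) * x + a1) * x + a0
    df = lambda x: (3 * a3 * x + 2 * a2) * x + a1

    print(newton_raphson(f, df, x0=0.5))          # Startwert im erwarteten Bereich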
Context-Enabled Optimization of Energy-Autarkic Networks for Carrier-Grade Wireless Backhauling
(2015)
This work establishes the novel category of coordinated Wireless Backhaul Networks (WBNs) for energy-autarkic point-to-point radio backhauling. The networking concept is based on three major building blocks: cost-efficient radio transceiver hardware, a self-organizing network operations framework, and power supply from renewable energy sources. The aim of this novel backhauling approach is to combine carrier-grade network performance with reduced maintenance effort as well as independent and self-sufficient power supply. In order to facilitate the success prospects of this concept, the thesis comprises the following major contributions.
Formal, multi-domain system model and evaluation methodology:
First, adapted from the theory of cyber-physical systems, the author devises a multi-domain evaluation methodology and a system-level simulation framework for energy-autarkic coordinated WBNs, including a novel balanced scorecard concept. Second, the thesis specifically addresses the topic of Topology Control (TC) in point-to-point radio networks and how it can be exploited for network management purposes. Given a set of network nodes equipped with multiple radio transceivers and known locations, TC continuously optimizes the setup and configuration of radio links between network nodes, thus supporting initial network deployment, network operation, as well as topology re-configuration. In particular, the author shows that TC in WBNs belongs to the class of NP-hard quadratic assignment problems and that it has significant impact in operational practice, e.g., on routing efficiency, network redundancy levels, service reliability, and energy consumption. Two novel algorithms focusing on maximizing edge connectivity of network graphs are developed.
Finally, this work carries out an analytical benchmarking and a numerical performance analysis of the introduced concepts and algorithms. The author analytically derives minimum performance levels of the developed TC algorithms. For the analyzed scenarios of remote Alpine communities and rural Tanzania, the evaluation shows that the algorithms improve energy efficiency and more evenly balance energy consumption across backhaul nodes, thus significantly increasing the number of available backhaul nodes compared to state-of-the-art TC algorithms.
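To illustrate the kind of topology-control reasoning described above (not the thesis' own algorithms), the following sketch greedily activates candidate point-to-point links, cheapest first, until a desired edge connectivity is reached, using the networkx library; node names, link costs and the connectivity target are invented for the example.

    import networkx as nx

    def greedy_topology(nodes, candidate_links, target_connectivity=2):
        """Activate the cheapest candidate links first until the graph is connected
        with at least the target edge connectivity (or candidates run out)."""
        g = nx.Graph()
        g.add_nodes_from(nodes)
        for u, v, cost in sorted(candidate_links, key=lambda link: link[2]):
            g.add_edge(u, v, cost=cost)
            if nx.is_connected(g) and nx.edge_connectivity(g) >= target_connectivity:
                break
        return g

    nodes = ["A", "B", "C", "D"]
    costs = {("A", "B"): 1, ("A", "C"): 4, ("A", "D"): 3,
             ("B", "C"): 2, ("B", "D"): 5, ("C", "D"): 1}
    links = [(u, v, c) for (u, v), c in costs.items()]
    topo = greedy_topology(nodes, links)
    print(sorted(topo.edges()), "edge connectivity:", nx.edge_connectivity(topo))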
The heterogeneity of today's access possibilities to wireless networks imposes challenges for efficient mobility support and resource management across different Radio Access Technologies (RATs). The current situation is characterized by the coexistence of various wireless communication systems, such as GSM, HSPA, LTE, WiMAX, and WLAN. These RATs greatly differ with respect to coverage, spectrum, data rates, Quality of Service (QoS), and mobility support.
In real systems, mobility-related events, such as Handover (HO) procedures, directly affect resource efficiency and End-To-End (E2E) performance, in particular with respect to signaling efforts and users' QoS. In order to lay a basis for realistic multi-radio network evaluation, a novel evaluation methodology is introduced in this thesis.
A central hypothesis of this thesis is that the consideration and exploitation of additional information characterizing user, network, and environment context, is beneficial for enhancing Heterogeneous Access Management (HAM) and Self-Optimizing Networks (SONs). Further, Mobile Network Operator (MNO) revenues are maximized by tightly integrating bandwidth adaptation and admission control mechanisms as well as simultaneously accounting for user profiles and service characteristics. In addition, mobility robustness is optimized by enabling network nodes to tune HO parameters according to locally observed conditions.
For establishing all these facets of context awareness, various schemes and algorithms are developed and evaluated in this thesis. System-level simulation results demonstrate the potential of context information exploitation for enhancing resource utilization, mobility support, self-tuning network operations, and users' E2E performance.
In essence, the conducted research activities and presented results motivate and substantiate the consideration of context awareness as key enabler for cognitive and autonomous network management. Further, the performed investigations and aspects evaluated in the scope of this thesis are highly relevant for future 5G wireless systems and current discussions in the 5G infrastructure Public Private Partnership (PPP).
In embedded systems, there is a trend of integrating several different functionalities on a common platform. This has been enabled by increasing processing power and the rise of integrated systems-on-chip.
The composition of safety-critical and non-safety-critical applications results in mixed-criticality systems. Certification Authorities (CAs) demand the certification of safety-critical applications with strong confidence in the execution time bounds. As a consequence, CAs use conservative assumptions in the worst-case execution time (WCET) analysis which result in more pessimistic WCETs than the ones used by designers. The existence of certified safety-critical and non-safety-critical applications can be represented by dual-criticality systems, i.e., systems with two criticality levels.
In this thesis, we focus on the scheduling of mixed-criticality systems which are subject to certification. Scheduling policies cognizant of the mixed-criticality nature of the systems and the certification requirements are needed for efficient and effective scheduling. Furthermore, we aim at reducing the certification costs to allow faster modification and upgrading, and less error-prone certification. Besides certification aspects, requirements of different operational modes result in challenging problems for the scheduling process. Despite the mentioned problems, schedulers require a low runtime overhead for an efficient execution at runtime.
The presented solutions are centered around time-triggered systems which feature a low runtime overhead. We present a transformation to include event-triggered activities, represented by sporadic tasks, already into the offline scheduling process. Further, this transformation can also be applied on periodic tasks to shorten the length of schedule tables which reduces certification costs. These results can be used in our method to construct schedule tables which creates two schedule tables to fulfill the requirements of dual-criticality systems using mode changes at runtime. Finally, we present a scheduler based on the slot-shifting algorithm for mixed-criticality systems. In a first version, the method schedules dual-criticality jobs without the need for mode changes. An already certified schedule table can be used and at runtime, the scheduler reacts to the actual behavior of the jobs and thus, makes effective use of the available resources. Next, we extend this method to schedule mixed-criticality job sets with different operational modes. As a result, we can schedule jobs with varying parameters in different modes.
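The following sketch illustrates, in a strongly simplified form, the kind of mode change between two precomputed schedule tables described above: execution follows the low-criticality table until a high-criticality assumption is violated, after which dispatching switches to the high-criticality table. The table contents, budgets and job names are invented and do not reflect the construction method of the thesis.

    # Toy dual-criticality dispatcher: two precomputed tables, switch on overrun.
    LO_TABLE = [("J1", 2), ("J2", 2), ("J3", 2)]     # (job, optimistic LO budget)
    HI_TABLE = [("J1", 4), ("J3", 4)]                # only HI-critical jobs, padded budgets

    def dispatch(actual_times):
        """actual_times: observed execution time per job (hypothetical runtime monitor)."""
        mode, executed = "LO", []
        for job, lo_budget in LO_TABLE:
            executed.append((job, mode))
            if actual_times.get(job, 0) > lo_budget:  # LO assumption violated
                mode = "HI"
                remaining = [j for j, _ in HI_TABLE if j not in dict(executed)]
                executed.extend((j, mode) for j in remaining)
                break
        return mode, executed

    print(dispatch({"J1": 1, "J2": 3, "J3": 1}))      # J2 overruns -> switch to HI mode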
Specification of asynchronous circuit behaviour becomes more complex as the
complexity of today’s System-On-a-Chip (SOC) design increases. This also causes
the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification
of asynchronous circuit behaviour – to become bigger and more complex, which
makes it more difficult, sometimes even impossible, to synthesize an asynchronous
circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00].
It has, therefore, been suggested to decompose the STG as a first step; this
leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis effort by possibly avoiding state explosion or by allowing the use of library
elements. A decomposition approach for STGs was presented in [VW02] [KKT93]
[Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based
on that of Chu [Chu87a] but is much more generally applicable than the one in
[KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background described in chapter 2.
It starts with a class of Petri nets called place/transition (P/T) nets. Then STGs, a subclass of P/T nets, are reviewed. Background in net decomposition
is presented in chapter 3. It begins with the structural decomposition of P/T
nets for analysis purposes – liveness and boundedness of the net. Then STG
decomposition for synthesis from [VW02] is described.
The decomposition method from [VW02] could still be improved to deal with STGs from real applications and to give better decomposition results. Improvements to [VW02] that yield better decomposition results and increase the algorithm's efficiency are discussed in chapter 4. These improvement ideas were suggested in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find
an output block component. A large amount of work has to be done to reduce
an initial specification until the final component is found. This reduction is not
always possible, which causes input initially classified as irrelevant to become
relevant input for the component. But under certain conditions (e.g. if structural
auto-conflicts turn out to be non-dynamic) some of them could be reclassified as
irrelevant. If this is not done, the specifications become unnecessarily large, which in turn leads to unnecessarily large implemented circuits. Instead of reduction, a
new approach, presented in chapter 5, decomposes the original net into structural
components first. An initial output block component is found by composing the
structural components. Then, a final output block component is obtained by net
reduction.
As we cope with the structure of a net most of the time, it would be useful
to have a structural abstraction of the net. A structural abstraction algorithm
[Kan03] is presented in chapter 6. It can improve the performance in finding an
output block component in most cases [War05] [Taw04]. Also, the structure graph is in most cases smaller than the net itself. This increases the efficiency of the
decomposition algorithm because it allows the transitions contained in a node of
the structure graph to be contracted at the same time if the structure graph is
used as internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous circuit design. The application to speed-independent circuits is discussed first. After that, 3D circuits synthesized from extended burst mode (XBM) specifications are discussed. An algorithm for translating STG specifications to XBM specifications was first suggested in [BEW99]. This algorithm first derives the state machine from the STG specification and then translates the state machine into an XBM specification. An XBM specification, though it is a state machine, allows some concurrency. This concurrency can be translated directly, without deriving all of the possible states. An algorithm which directly translates STG specifications to XBM specifications is presented in chapter 7.3.1. Finally, DESI, a tool for decomposing STGs, and its decomposition results are presented.
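As a small illustration of one structural notion mentioned above, the sketch below detects structural conflicts in a toy place/transition net given as a preset relation: two transitions are in structural conflict if they share an input place. This is a deliberate simplification and does not implement the decomposition, contraction or abstraction algorithms of the thesis; the net itself is an invented example.

    from itertools import combinations

    def structural_conflicts(preset):
        """preset: dict transition -> set of input places.
        Returns pairs of transitions sharing at least one input place."""
        return [(t1, t2) for t1, t2 in combinations(sorted(preset), 2)
                if preset[t1] & preset[t2]]

    # toy net: t1 and t2 compete for place p1, t3 is independent
    preset = {"t1": {"p1"}, "t2": {"p1", "p2"}, "t3": {"p3"}}
    print(structural_conflicts(preset))   # [('t1', 't2')]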
Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints.
Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems.
With the advent of multicore platforms, new challenges in the efficient exploitation of real-time systems have arisen:
First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems.
The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees.
Additionally, many systems require their scheduling algorithm to flexibly react to unforeseen events.
Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis.
At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system.
As a result, there is a need for an adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task.
The main goal of resource management is to guarantee a minimum resource availability to real-time applications.
A further goal is to fulfill global optimization objectives, e.g., maximization of the global system performance, or the user perceived quality of service.
In this thesis, we derive methods based on the slot shifting algorithm.
Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems.
For this reason, we aim at designing slot shifting based algorithms targeted for multicore systems to tackle the aforementioned challenges.
The main contribution of this thesis is to present two global slot shifting algorithms targeted for multicore systems.
Additionally, we extend slot shifting algorithms to improve their runtime behavior, or to handle non-preemptive firm aperiodic tasks.
In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed.
Finally, the thesis presents an implementation of a slot-shifting-based logic into a resource management framework for multicore systems.
Thus, the thesis closes the circle and successfully bridges the gap between real-time scheduling theory and real-world implementations.
We prove applicability of the slot shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.
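A minimal sketch of the spare-capacity bookkeeping that underlies slot shifting, reduced to one core and a handful of invented intervals: the spare capacity of an interval is its length minus the demand assigned to it, plus any borrowed (negative) capacity of the following interval. It is meant only to convey the flavour of the algorithm, not the multicore extensions developed in the thesis.

    def spare_capacities(intervals):
        """intervals: list of (length_in_slots, demand_in_slots), ordered by deadline.
        Computes spare capacities back to front, as in slot shifting:
        sc(I_i) = |I_i| - demand(I_i) + min(0, sc(I_{i+1}))."""
        sc = [0] * len(intervals)
        for i in range(len(intervals) - 1, -1, -1):
            borrow = min(0, sc[i + 1]) if i + 1 < len(intervals) else 0
            sc[i] = intervals[i][0] - intervals[i][1] + borrow
        return sc

    # three consecutive intervals: (length, guaranteed demand)
    print(spare_capacities([(5, 3), (4, 5), (6, 2)]))   # -> [1, -1, 4]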
The objective of this thesis is to develop systematic event-triggered control designs for specified event generators, an important alternative to traditional periodic sampling control. The sporadic sampling inherent in event-triggered control is determined by the event-triggering conditions. This feature creates the need for a new control theory, analogous to the traditional sampled-data theory in computer control.
Developing a controller coupled with the applied event-triggering condition so as to maximize the control performance is the essence of event-triggered control design. In the design, the stability of the control system needs to be ensured with the first priority. The various control aims should be clearly incorporated in the design procedures. With applications in embedded control systems in mind, efficient implementation requires a low complexity of the embedded software architecture. The thesis aims at offering such a design and thereby further completing the theory of event-triggered control.
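The sketch below simulates a scalar event-triggered state-feedback loop with a relative triggering condition |x - x_event| > sigma * |x|; the plant, gain and threshold are arbitrary illustrative choices and not the designs derived in the thesis.

    # Toy event-triggered control of a discretised scalar plant x+ = a*x + b*u.
    a, b = 1.05, 0.5          # unstable open-loop plant (illustrative values)
    k = 1.3                   # state-feedback gain, u = -k * x_event
    sigma = 0.2               # relative triggering threshold

    x, x_event, events = 1.0, 1.0, 0
    for step in range(50):
        if abs(x - x_event) > sigma * abs(x):   # event-triggering condition
            x_event = x                         # sample and update the controller
            events += 1
        u = -k * x_event
        x = a * x + b * u

    print(f"final state {x:.4f} after 50 steps with {events} events")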
In this thesis we studied and investigated a very common but long-standing noise problem and provided a solution to it. The task is to deal with different types of noise that occur simultaneously and which we call hybrid. Although there are individual solutions for specific noise types, one cannot simply combine them because each solution affects the whole speech signal. We developed an automatic speech recognition system, DANSR (Dynamic Automatic Noisy Speech Recognition System), for hybrid environmental noise. For this we had to study the whole speech chain, from the production of sounds to their recognition. Central elements are the feature vectors, to which we pay much attention. In addition, we worked on deriving quantitative measures for psychoacoustic speech elements.
The thesis has four parts:
1) In the first part we give an introduction. Chapters 2 and 3 give an overview of speech generation and recognition by machines; noise is also considered.
2) In the second part we describe our general system for speech recognition in a noisy environment. This is contained in chapters 4-10. In chapter 4 we deal with data preparation. Chapter 5 is concerned with very strong noise and its modeling using the Poisson distribution. In chapters 5-8 we deal with parameter-based modeling. Chapter 7 is concerned with autoregressive methods in relation to the vocal tract. In chapters 8 and 9 we discuss linear prediction and its parameters; chapter 9 is also concerned with quadratic errors and the decomposition into sub-bands, while chapter 10 deals with the use of Kalman filters for non-stationary colored noise. There one finds classical approaches as far as we have used and modified them, including covariance methods, the method of Burg and others.
3) The third part deals firstly with psychoacoustic questions. We look at quantitative magnitudes that describe them. This has serious consequences for the perception models. For hearing we use different scales and filters. At the center of chapters 12 and 13 are the features and their extraction. The features are the only elements that carry information for further use. We consider cepstrum features, mel frequency cepstral coefficients (MFCC), shift-invariant local trigonometric transforms (SILTT), linear predictive coefficients (LPC), linear predictive cepstral coefficients (LPCC), and perceptual linear predictive (PLP) cepstral coefficients. In chapter 13 we present our extraction methods in DANSR and how they use window techniques and the discrete cosine transform (DCT-IV) as well as their inverses.
4) The fourth part considers classification and the ultimate speech recognition. Here we use the hidden Markov model (HMM) for describing the speech process and the Gaussian mixture model (GMM) for the acoustic modelling. For the recognition we use the forward algorithm, the Viterbi search and the Baum-Welch algorithm. We also draw the connection to dynamic time warping (DTW). Finally, we show experimental results and conclusions.
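For readers unfamiliar with cepstral features, the following sketch computes MFCC-like coefficients for a single frame (Hamming window, power spectrum, triangular mel filterbank, log, DCT-II). It is a generic textbook pipeline with invented parameters, not the SILTT/DCT-IV based extraction implemented in DANSR.

    import numpy as np
    from scipy.fft import dct

    def mfcc_frame(signal, fs=16000, n_fft=512, n_mels=26, n_ceps=13):
        """Very small MFCC sketch for one frame; no framing loop, no liftering."""
        frame = signal[:n_fft] * np.hamming(n_fft)
        power = np.abs(np.fft.rfft(frame)) ** 2

        # triangular mel filterbank between 0 Hz and fs/2
        mel = np.linspace(0, 2595 * np.log10(1 + (fs / 2) / 700), n_mels + 2)
        hz = 700 * (10 ** (mel / 2595) - 1)
        bins = np.floor((n_fft + 1) * hz / fs).astype(int)

        fbank = np.zeros((n_mels, len(power)))
        for m in range(1, n_mels + 1):
            left, center, right = bins[m - 1], bins[m], bins[m + 1]
            for k in range(left, center):
                fbank[m - 1, k] = (k - left) / max(center - left, 1)
            for k in range(center, right):
                fbank[m - 1, k] = (right - k) / max(right - center, 1)

        log_energy = np.log(fbank @ power + 1e-10)
        return dct(log_energy, type=2, norm="ortho")[:n_ceps]

    rng = np.random.default_rng(0)
    print(mfcc_frame(rng.normal(size=16000)).round(2))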
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern, poses a technological challenge to MCP development, and will be a main performance bottleneck for future MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior for both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged with the 2D technique, which means that there is only one layer of dies in the chip. The chip thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D heat conduction PDE, is proposed to describe the thermal behavior of each core. The thermal behavior of the MCP is thus described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced packaging technology in which at least two layers of dies are stacked in one chip. In contrast to the 2D package, a cooling system has to be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered, and the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, and the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell 8-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ serve as the experimental subjects.
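A minimal sketch of the block-wise thermal R-C modelling mentioned above: two thermally coupled blocks are integrated with explicit Euler steps. All resistances, capacitances, power inputs and the coolant temperature are invented illustrative numbers, not the IBM Cell / micro-channel model used in the thesis.

    import numpy as np

    # Two blocks coupled to each other and to the coolant through thermal resistances.
    C = np.array([2.0, 3.0])          # thermal capacitances [J/K]
    R_amb = np.array([1.5, 1.0])      # block-to-coolant resistances [K/W]
    R_12 = 0.5                        # block-to-block resistance [K/W]
    T_amb = 45.0                      # coolant / ambient temperature [degC]

    def simulate(power, t_end=50.0, dt=0.01):
        T = np.full(2, T_amb)
        for _ in range(int(t_end / dt)):
            q_amb = (T - T_amb) / R_amb                   # heat flow to the coolant
            q_12 = (T[0] - T[1]) / R_12                   # inter-block heat flow
            dT = (power - q_amb - np.array([q_12, -q_12])) / C
            T = T + dt * dT                               # explicit Euler step
        return T

    print(simulate(power=np.array([8.0, 2.0])).round(2))  # steady temperatures [degC]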
This work shall provide a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised, which includes principles for the modeling, analysis, design, and realization of low cost but high performance and intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized taking into consideration the limited communication, computing, memory, and energy resources of the low performance, low power, and low cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity) and which are based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs) are investigated. The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. Obtained results show the advantages of the proposed cross-design approach compared to standard approaches which are less flexible.
The increasing complexity of modern SoC designs makes tasks of SoC formal verification
a lot more complex and challenging. This motivates the research community to develop
more robust approaches that enable efficient formal verification for such designs.
It is a common scenario to apply a correctness-by-integration strategy while a SoC
design is being verified. This strategy assumes formal verification to be implemented in
two major steps. First of all, each module of a SoC is considered and verified separately
from the other blocks of the system. At the second step – when the functional correctness
is successfully proved for every individual module – the communicational behavior has
to be verified between all the modules of the SoC. In industrial applications, SAT/SMT-based interval property checking(IPC) has become widely adopted for SoC verification. Using IPC approaches, a verification engineer is able to afford solving a wide range of important verification problems and proving functional correctness of diverse complex components in a modern SoC design. However, there exist critical parts of a design where formal methods often lack their robustness. State-of-the-art property checkers fail in proving correctness for a data path of an industrial central processing unit (CPU). In particular, arithmetic circuits of a realistic size (32 bits or 64 bits) – especially implementing multiplication algorithms – are well-known examples when SAT/SMT-based
formal verification may reach its capacity very fast. In cases like this, formal verification
is replaced with simulation-based approaches in practice. Simulation is a good methodology that may assure a high rate of discovered bugs hidden in a SoC design. However, in contrast to formal methods, a simulation-based technique cannot guarantee the absence of errors in a design. Thus, simulation may still miss some so-called corner-case bugs in the design. This may potentially lead to additional and very expensive costs in terms of time, effort, and investments spent for redesigns, refabrications, and reshipments of new chips.
The work of this thesis concentrates on studying and developing robust algorithms
for solving hard arithmetic decision problems. Such decision problems often originate from RTL property checking tasks for data-path designs. Proving properties of those
designs can efficiently be performed by solving SMT decision problems formulated with
the quantifier-free logic over fixed-sized bit vectors (QF-BV).
This thesis, firstly, proposes an effective algebraic approach based on Gröbner basis theory that allows arithmetic problems to be decided efficiently. Secondly, for the case of custom-designed components, this thesis describes a sophisticated modeling technique required to recover the necessary arithmetic description of these components. Further, this thesis explains how the methods from computer algebra and the modeling technique can be integrated into a common SMT solver. Finally, a new QF-BV SMT solver is introduced.
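For readers unfamiliar with QF-BV reasoning, the fragment below uses the off-the-shelf SMT solver Z3 via its Python bindings (not the Gröbner-basis-enhanced solver developed in this thesis) to prove a small bit-vector property, the commutativity of 8-bit multiplication; widening the bit width quickly illustrates how such arithmetic problems become hard for bit-level reasoning.

    # Requires the z3-solver package. Generic QF-BV example, not the thesis' solver.
    from z3 import BitVec, prove

    a, b = BitVec("a", 8), BitVec("b", 8)
    prove(a * b == b * a)                 # prints "proved"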
For many years, real-time task models have focused the timing constraints on execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among scenarios that all yield correct behavior, and maximizing this utility improves resource utilization.
For example, target-sensitive applications have a target point where execution results in maximized utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The intensity of the utility decay accounts for the importance of the application.
Examples of such applications include multimedia and control; multimedia applications are very popular nowadays and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides for easy abstractions to express the timing constraints of target sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications.
We consider jobs as objects in a pendulum system, and the target points as the central point.
Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems are helpful to fill in the gap between application requirements and theoretical abstractions used in task models.
For instance, the so-called nature algorithms use key elements of physical processes to form the basis of an optimization algorithm.
Examples include the knapsack problem, traveling salesman problem, ant colony optimization, and simulated annealing.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target-sensitive RT applications must account for timing constraints and for the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs and dominates the complexity of the problem; finding the optimum solution is NP-hard.
We show the efficacy of our approach through simulation results and three target-sensitive RT applications enhanced with the gravitational task model.
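To give a concrete flavour of the equilibrium idea (a simplification of the pendulum analogy above, not the thesis' scheduling algorithms): if jobs are executed back to back in a fixed order, the block start that minimises the weighted squared deviation of each job's midpoint from its target has a closed-form, weighted-average solution. The job parameters below are invented.

    def equilibrium_start(jobs):
        """jobs: list of (execution_time, target_midpoint, weight) in execution order.
        Returns the block start s minimising sum w_i * (s + o_i - t_i)^2,
        where o_i is the fixed midpoint offset of job i inside the block."""
        offsets, elapsed = [], 0.0
        for c, _, _ in jobs:
            offsets.append(elapsed + c / 2.0)
            elapsed += c
        num = sum(w * (t - o) for (_, t, w), o in zip(jobs, offsets))
        den = sum(w for _, _, w in jobs)
        return num / den

    # three jobs with conflicting targets and different importance weights
    jobs = [(2.0, 4.0, 1.0), (1.0, 4.5, 3.0), (3.0, 9.0, 1.0)]
    print(round(equilibrium_start(jobs), 3))   # best compromise start of the block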
This thesis has the goal to propose measures which allow an increase of the power efficiency of OFDM transmission systems. As compared to OFDM transmission over AWGN channels, OFDM transmission over frequency selective radio channels requires a significantly larger transmit power in order to achieve a certain transmission quality. It is well known that this detrimental impact of frequency selectivity can be combated by frequency diversity. We revisit and further investigate an approach to frequency diversity based on the spreading of subsets of the data elements over corresponding subsets of the OFDM subcarriers and term this approach Partial Data Spreading (PDS). The size of said subsets, which we designate as the spreading factor, is a design parameter of PDS, and by properly choosing it, depending on the system designer's requirements, an adequate compromise between good system performance and low complexity can be found. We show how PDS can be combined with ML, MMSE and ZF data detection, and it is found that MMSE data detection offers a good compromise between performance and complexity. After having presented the utilization of PDS in OFDM transmission without FEC encoding, we also show that PDS readily lends itself to FEC encoded OFDM transmission. We show that in this case the system performance can be significantly enhanced by specific schemes of interleaving and utilization of reliability information developed in the thesis.
A severe problem of OFDM transmission is the large Peak-to-Average-Power Ratio (PAPR) of the OFDM symbols, which hampers the application of power efficient transmit amplifiers. Our investigations reveal that PDS inherently reduces the PAPR. Another approach to PAPR reduction is the well known scheme Selective Data Mapping (SDM). In the thesis it is shown that PDS can be beneficially combined with SDM to form the scheme PDS-SDM, with a view to jointly exploiting the PAPR reduction potentials of both schemes. However, even when such a PAPR reduction is achieved, the amplitude maximum of the resulting OFDM symbols is not constant, but depends on the data content. This entails the disadvantage that the power amplifier cannot be designed for a fixed amplitude maximum, which would be desirable with a view to achieving a high power efficiency. In order to overcome this problem, we propose the scheme Optimum Clipping (OC), in which we obtain the desired fixed amplitude maximum by a specific combination of the measures clipping, filtering and rescaling.
In OFDM transmission a certain number of OFDM subcarriers have to be sacrificed for pilot transmission in order to enable channel estimation in the receiver. For a given energy of the OFDM symbols, the question arises in which way this energy should be subdivided among the pilot and the data carrying OFDM subcarriers. If a large portion of the available transmit energy goes to the pilots, then the quality of channel estimation is good, but the data detection performs poorly. Data detection also performs poorly if the energy provided for the pilots is too small, because then the channel estimate indispensable for data detection is not accurate enough. We present a scheme to assign the energy to pilot and data OFDM subcarriers in an optimum way which minimizes the symbol error probability as the ultimate quality measure of the transmission. The major part of the thesis is dedicated to point-to-point OFDM transmission systems.
Towards the end of the thesis we show that PDS can also be applied to multipoint-to-point OFDM transmission systems, encountered for instance in the uplinks of mobile radio systems.
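The sketch below shows the basic spreading step behind such partial data spreading in a simplified form: blocks of M data symbols are multiplied by a unitary DFT matrix and mapped onto M subcarriers, which an ideal receiver can undo with the conjugate transpose. The spreading factor, symbol alphabet and the choice of a DFT spreading matrix are illustrative assumptions, not the exact scheme of the thesis.

    import numpy as np

    def partial_spread(data, m):
        """Spread consecutive blocks of m data symbols with a unitary DFT matrix."""
        f = np.fft.fft(np.eye(m)) / np.sqrt(m)          # unitary spreading matrix
        blocks = data.reshape(-1, m)
        return (blocks @ f.T).reshape(-1), f

    rng = np.random.default_rng(0)
    qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
    spread, f = partial_spread(qpsk, m=4)

    # ideal channel: despreading with the conjugate transpose recovers the data
    recovered = (spread.reshape(-1, 4) @ f.conj()).reshape(-1)
    print(np.allclose(recovered, qpsk))                 # True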
Die industrielle Oberflächeninspektion und insbesondere die Defekterkennung ist ein wichtiges Anwendungsgebiet für die automatische Bildverarbeitung (BV). Für den Entwurf und die Konfiguration der entsprechenden Softwaresysteme, in der Regel anwendungsspezifische Einzellösungen, werden im industriellen Umfeld zumeist entweder firmeneigene Bildverarbeitungsbibliotheken, kommerzielle oder freie Toolboxen verwendet. In der Regel beinhalten diese u.a. Standardalgorithmen der Bildverarbeitung in modularer Form, z. B. Filter- oder Schwellwertoperatoren. Die einzelnen BV-Methoden werden in der Regel nach dem Prinzip der visuellen Programmierung in einer grafischen Entwicklungsumgebung ausgewählt und zu einer BV-Kette bzw. einem -Graph zusammengesetzt. Dieses Prinzip ermöglicht es auch einem Programmierunkundigen, BV-Systeme zu erstellen und zu konfigurieren. Eine gewisse Grundkenntnis der Methoden der Bildverarbeitung ist jedoch notwendig. Je nach Aufgabenstellung und Erfahrung des Systementwicklers erfordern manueller Entwurf und Konfiguration eines BV-Systems erheblichen Zeiteinsatz. Diese Arbeit beschäftigt sich mit automatischen Entwurfs-, Konfigurations- und Optimierungsmöglichkeiten dieser modularen BV-Systeme, die es auch einem ungeübten Endnutzer ermöglichen, adäquate Lösungen zu generieren mit dem Ziel, ein effizienteres Entwurfswerkzeug für Bildverarbeitungssysteme mit neuen und verbesserten Eigenschaften zu schaffen. Die Methodenauswahl und Parameteroptimierung reicht von der Bildvorverarbeitung und -verbesserung mittels BV-Algorithmen bis hin zu ggf. eingesetzten Klassifikatoren, wie Nächste-Nachbar-Klassifikator (NNK) und Support-Vektor-Maschinen (SVM) und verschiedenen Bewertungsfunktionen. Der flexible Einsatz verschiedener Klassifikations- und Bewertungsmethoden ermöglicht einen automatischen problemspezifischen Entwurf und die Optimierung des BV-Systems für Aufgaben der Fehlerdetektion und Texturanalyse für 2d-Bilder, sowie die Trennung von Objekten und Hintergrund für 2d- und 3d-Grauwertbilder. Für die Struktur- und Parameteroptimierung des BV-Systems werden Evolutionäre Algorithmen (EA) und Partikelschwarmoptimierung (PSO) verwendet.
Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)
The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and the system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis.
Fault diagnosis consists of fault detection and isolation (FDI). A model-based residual generation is realized by calculating the redundant information from the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is among the most compact and transparent of the various model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution. It indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can be constructed to generate the residual. This method cannot isolate sensor faults. In the thesis a special decision-making logic (DML) is designed, based on the individual signal analysis of the residuals, to isolate the fault.
The reconfigurable controller is designed with the backstepping technique. The backstepping method is a recursive Lyapunov-based approach and can deal with nonlinear systems. Some system variables are considered as "virtual controls" during the design procedure. Then the feedback control laws and the associated Lyapunov function can be constructed by following a step-by-step routine. For the electro-hydraulic system an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as the fault is identified, the controller can be reconfigured according to the new model of the faulty system. The system fault is modeled as an uncertainty of the system and can be tolerated by parameter adaptation. The sensor fault acts on the system via the controller; it can be modeled as a parameter uncertainty of the controller. All parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration the pre-specified control performance can be recovered.
The FDI-integrated FTC based on the backstepping technique was implemented successfully on the electro-hydraulic testbed. On-line robust FDI and controller reconfiguration can be achieved. The tracking performance of the controlled system is guaranteed and the considered faults can be tolerated. However, the problem of a theoretical robustness analysis for the time delay caused by the fault diagnosis is still open.
Wireless Sensor Networks (WSN) are dynamically-arranged networks typically composed of a large number of arbitrarily-distributed sensor nodes with computing capabilities contributing to at least one common application. The main characteristic of these networks is that of being functionally constrained due to a scarce availability of resources and a strong dependence on uncontrollable environmental factors. These conditions introduce severe restrictions on the applicability of classic real-time methods aiming at guaranteeing time-bounded communications. Existing real-time solutions tend to apply concepts that were originally not conceived for sensor networks, idealizing realistic application scenarios and overlooking important design limitations. This results in a number of misleading practices contributing to approaches of restricted validity in real-world scenarios. Amending the confrontation between WSNs and real-time objectives starts with a review of the basic fundamentals of existing approaches. In doing so, this thesis presents an alternative approach based on a generalized timeliness notion suitable to the particularities of WSNs. The new conceptual notion allows the definition of feasible real-time objectives, opening a new scope of possibilities not constrained to idealized systems. The core of this thesis is based on the definition and application of Quality of Service (QoS) trade-offs between timeliness and other significant QoS metrics. The analysis of local and global trade-offs provides a step-by-step methodology identifying the correlations between these quality metrics. This association enables the definition of alternative trade-off configurations (set points) influencing the quality performance of the network at selected instants of time. With the basic grounds established, the above concepts are embedded in a simple routing protocol constituting a proof of concept for the validity of the presented analysis. Extensive evaluations under realistic scenarios are carried out in simulation environments as well as on real testbeds, validating the consistency of this approach.
Die Paarungsstörung mit Pheromonen ist ein etabliertes Verfahren der ökologischen Schädlingsbekämpfung in vielen Bereichen der Landwirtschaft. Um dieses Verfahren zu optimieren, ist es erforderlich, genauere Erkenntnisse über die Verteilung des Pheromons über den behandelten Agrarflächen zu erhalten. Die Messung dieser Duftstoffe mit dem EAG-System ist eine Methode, mit der man schnell und zuverlässig Pheromonkonzentrationen im Freiland bestimmen kann. Diese Arbeit beschreibt Beiträge, die zur Weiterentwicklung des Systems von großer Bedeutung sind. Die Steuerung des Messablaufs durch eine Ablaufdatei, die erst zur Laufzeit ins Programm geladen wird, ermöglicht eine zeitgenaue und flexible Steuerung des Messsystems. Die Auswertung der Messergebnisse wird durch Methoden der Gesamtdarstellung der Konzentrationsberechnung und durch rigorose Fehlerbetrachtung auf eine solide Grundlage gestellt. Die für die Konzentrationsberechnung erforderlichen Grundvoraussetzungen werden anhand experimenteller Beispiele ausführlich erläutert und verifiziert. Zusätzlich wird durch ein iteratives Verfahren die Konzentrationsberechnung von der mathematischen oder empirischen Darstellung der Dosis-Wirkungskurve unabhängig gemacht. Zur Nutzung einer erweiterten EAG-Apparatur zur Messung komplexer Duftstoffgemische wurde das Messsystem im Bereich der Steuerung und der Auswertung tiefgreifend umgestaltet und vollständig einsatztauglich gemacht. Dazu wurde das Steuerungssystem erweitert, das Programm für die Messwerterfassung neu strukturiert, eine Methode zur Konzentrationsberechnung für Duftstoffgemische entwickelt und in einer entsprechenden Auswertesoftware implementiert. Das wichtigste experimentelle Ergebnis besteht in der Durchführung und Auswertung einer speziellen Messung, bei der das EAG-System parallel mit einer klassischen Gaschromatograph-Methode eingesetzt wurde. Die Ergebnisse ermöglichen erstmals eine absolute Festlegung der Konzentrations-Messergebnisse des EAG-Messsystems für das Pheromon des Apfelwicklers. Bisher konnten nur Ergebnisse in relativen Einheiten angegeben werden.
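Zur Veranschaulichung des oben erwähnten iterativen Prinzips (nicht der konkreten Implementierung dieser Arbeit) zeigt die folgende kleine Python-Skizze, wie eine Konzentration aus einer gemessenen EAG-Antwort durch Bisektion zurückgerechnet werden kann; die hier verwendete Hill-Kurve und ihre Parameter sind frei gewählte Annahmen und stehen nur stellvertretend für eine monotone Dosis-Wirkungskurve.

# Beispielhafte (frei gewählte) Dosis-Wirkungskurve: Hill-Funktion
def response(c, r_max=1.0, c50=10.0, n=1.5):
    return r_max * c**n / (c50**n + c**n)

def concentration_from_response(r_meas, c_lo=1e-3, c_hi=1e4, tol=1e-6):
    """Iterative Inversion der Kurve per Bisektion (Monotonie vorausgesetzt)."""
    for _ in range(200):
        c_mid = 0.5 * (c_lo + c_hi)
        if response(c_mid) < r_meas:
            c_lo = c_mid
        else:
            c_hi = c_mid
        if c_hi - c_lo < tol:
            break
    return 0.5 * (c_lo + c_hi)

print(concentration_from_response(0.5))   # liefert ungefähr c50 = 10

Da nur die Monotonie der Kurve ausgenutzt wird, funktioniert dieselbe Skizze unabhängig davon, ob die Dosis-Wirkungskurve mathematisch oder empirisch (tabelliert) vorliegt.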
Photonic crystals are inhomogeneous dielectric media with a periodic variation of the refractive index. A photonic crystal gives us new tools for the manipulation of photons and thus has received great interest in a variety of fields. Photonic crystals are expected to be used in novel optical devices such as thresholdless laser diodes, single-mode light emitting diodes, small waveguides with low-loss sharp bends, small prisms, and small integrated optical circuits. In some respects they can operate as "left-handed materials", which are capable of focusing transmitted waves into a sub-wavelength spot due to negative refraction. The thesis is focused on the applications of photonic crystals in communications and optical imaging: • Photonic crystal structures for potential dispersion management in optical telecommunication systems • 2D non-uniform photonic crystal waveguides with a square lattice for wide-angle beam refocusing using negative refraction • 2D non-uniform photonic crystal slabs with triangular lattice for all-angle beam refocusing • Compact phase-shifted band-pass transmission filter based on photonic crystals
Diese Arbeit beschreibt einen in der Praxis bereits vielfach erprobten, besonders leistungsfähigen Ansatz zur Verifikation digitaler Schaltungsentwürfe. Der Ansatz ist der simulationsbasierten Schaltungsverifikation sowohl im Hinblick auf die Schaltungsqualität nach der Verifikation als auch in Bezug auf den Verifikationsaufwand deutlich überlegen. Die Arbeit überträgt zunächst das Paradigma der transaktionsbasierten Verifikation aus der Simulation in die formale Verifikation. Ein Ergebnis dieser Übertragung ist eine bestimmte Form von formalen Eigenschaften, die Operationseigenschaften genannt werden. Schaltungen werden mit Operationseigenschaften durch Interval Property Checking (IPC) untersucht, eine besonders leistungsfähige SAT-basierte funktionale Verifikation. Dadurch können Schaltungen untersucht werden, die sonst als zu komplex für formale Verifikation gelten. Ferner beschreibt diese Arbeit ein für Mengen von Operationseigenschaften geeignetes Werkzeug, das alle Verifikationslücken aufdeckt, komplexitätsmäßig mit den Fähigkeiten der IPC-basierten Schaltungsuntersuchung Schritt hält und als Vollständigkeitsprüfer bezeichnet wird. Die Methodik der Operationseigenschaften und die Technologie des IPC-basierten Eigenschaftsprüfers und des Vollständigkeitsprüfers gehen eine Symbiose zum Vorteil der funktionalen Verifikation digitaler Schaltungen ein. Darauf aufbauend wird ein Verfahren zur lückenlosen Überprüfung der Verschaltung derartig verifizierter Module entwickelt, das aus den Theorien zur Modellierung digitaler Systeme abgeleitet ist. Der in dieser Arbeit vorgestellte Ansatz hat in vielen kommerziellen Anwendungsprojekten unter Beweis gestellt, dass er den Namen "vollständige funktionale Verifikation" zu Recht trägt, weil in diesen Anwendungsprojekten nach dem Erreichen eines durch die Vollständigkeitsprüfung wohldefinierten Abschlusses keine Fehler mehr gefunden wurden. Der Ansatz wird von OneSpin Solutions GmbH unter den Namen "Operation Based Verification" und "Gap Free Verification" vermarktet.
Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and robustness to maintain signal integrity. Field-programmable analog approaches and the recent evolvable hardware approaches offer partial solutions for advanced compensation as well as for rapid prototyping. The recent research field of evolutionary concepts focuses predominantly on the digital domain and is still advancing in the analog domain. Thus, the main research goal is to combine the ever increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies to yield a smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work suggests and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable for supporting a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and the common industrial application scenarios and circuits, in particular including resource and sizing analysis for proper motivation of design decisions. The pursued intermediate granular level approach, called Field Programmable Medium-granular mixed signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities. The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and merges it with reconfigurable or programmable analog concepts as well as industrial electronics standards and needs for next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in 0.35 µm Austriamicrosystems (C35B4) technology, can be used as single-instance, off-the-shelf chips at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely the PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, ranging from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. More extensive low-power design issues, e.g., sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine granular approaches were alleviated by the proposed intermediate granular approach. The sensor signal conditioning instrumentation amplifier designed in this way was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250.
In an adaptive prototype, evolutionary approaches, in particular multi-objective particle swarm optimization, were deployed to all the test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift. The variations observed in the performance of the test samples were compensated through reconfiguration to meet the desired specification.
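For illustration of the class of algorithm referred to above, here is a minimal Python particle swarm optimizer on a scalarized objective; it is not the multi-objective variant or the fitness functions actually used for FPMA1/FPMA2, and the toy objective merely stands in for a deviation-from-specification measure.

import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal particle swarm optimizer (single scalar objective)."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# toy objective standing in for a scalarized deviation-from-specification measure
print(pso(lambda x: sum(xi**2 for xi in x), dim=3))

In the multi-objective setting the scalar objective is replaced by a set of objectives and an archive of non-dominated particles; the swarm update itself keeps the same structure.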
In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid-1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper adoption of IPC. First, IPC may return false counterexamples since its bounded computational circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable for different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, the work proposes a decomposition-based reachability analysis to solve the problem of identifying reachability information automatically. Second, this work develops a generic, reusable set of properties for protocol compliance verification.
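To make the notion of a false counterexample due to missing reachability information concrete, here is a small self-contained Python sketch on an invented toy state machine; it is not IPC (brute force is used instead of a SAT solver), but it reproduces the situation described above in which every state, including an unreachable one, is taken as a potential starting state of the interval.

# Toy FSM invented for illustration: states 0..3, next-state function below.
# From the reset state 0 only states {0, 1, 2} are reachable; state 3 is a dead state.
def next_state(s):
    return {0: 1, 1: 2, 2: 0, 3: 3}[s]

def property_holds(start, k=3):
    """Interval-style property: within k cycles the design returns to state 0."""
    s = start
    for _ in range(k):
        s = next_state(s)
        if s == 0:
            return True
    return False

# Bounded check over *all* states as starting points, i.e. without reachability
# information, the way a purely bounded circuit model sees the design:
fails = [s for s in range(4) if not property_holds(s)]
print("counterexamples:", fails)   # reports state 3, a *false* counterexample,
                                   # because state 3 is unreachable from reset

Ruling out the spurious counterexample requires exactly the kind of reachability invariant (here: "state 3 is never reached from reset") that the decomposition-based reachability analysis proposed above is meant to supply automatically.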
The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these data communication schemes, the receiver has to extract the corresponding clock from the serial data stream by means of a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. The reference clocks are normally slightly different even if they are specified to have the same frequency. Therefore, the data communication transceivers always work in a plesiochronous condition, i.e. an operation with slightly different reference frequencies. The difference of the data rates is covered by an elastic buffer. In a data readout system of a particle physics experiment, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the networks. The plesiochronous condition in these networks is undesirable because it complicates the time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication networks can support the clock distribution function, the system complexity can be largely reduced. The CDRs on all detector nodes have to operate without a local reference clock and provide recovered clocks of sufficiently good quality for use as the reference timing of their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It possesses a two-loop topology consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a CDR with a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible. Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. A 1/4-rate time-interleaving architecture is used to reduce the operation frequency and optimize the power consumption. The clock-jitter-filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage controlled oscillator (VCO). The loop bandwidth of the clock-jitter-filter is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock-jitter-filter with LC-VCO were implemented in 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR covers data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz. Its jitter tolerance complies with the SONET standard. The clock-jitter-filter has configurable input/output frequencies from 9.191 to 78.125 MHz. Its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock-jitter-filter generates a clock with a jitter of 4.2 ps rms from an incoming serial data stream with an inter-symbol-interference jitter of 150 ps peak-to-peak.
Die Architekturen vieler technischer Systeme sind derzeit im Umbruch. Der fortschreitende Einsatz von Netzwerken aus intelligenten rechnenden Knoten führt zu neuen Anforderungen an den Entwurf und die Analyse der resultierenden Systeme. Dabei spielt die Analyse des Zeitverhaltens mit seinen Bezügen zu Sicherheit und Performanz eine zentrale Rolle. Netzbasierte Automatisierungssysteme (NAS) unterscheiden sich hierbei von anderen verteilten Echtzeitsystemen durch ihr zyklisches Komponentenverhalten. Das aus der asynchronen Verknüpfung entstehende Gesamtverhalten ist mit klassischen Methoden kaum analysierbar. Zur Analyse von NAS wird deshalb der Einsatz der wahrscheinlichkeitsbasierten Modellverifikation (PMC) vorgeschlagen. PMC erlaubt detaillierte, quantitative Aussagen über das Systemverhalten. Für die dazu notwendige Modellierung des Systems auf Basis wahrscheinlichkeitsbasierter, zeitbewerteter Automaten wird die Beschreibungssprache DesLaNAS eingeführt. Exemplarisch werden der Einfluss verschiedener Komponenten und Verhaltensmodi auf die Antwortzeit eines NAS untersucht und die Ergebnisse mittels Labormessungen validiert.
Analog sensor electronics requires special care during design in order to increase the quality and precision of the signal and the lifetime of the product. Nevertheless, it can experience static deviations due to manufacturing tolerances and dynamic deviations due to operation in a non-ideal environment. Therefore, advanced applications such as MEMS technology employ a calibration loop to deal with the deviations; unfortunately, this loop is considered only in the digital domain, which cannot cope with all analog deviations, such as saturation of the analog signal. On the other hand, rapid prototyping is essential to decrease the development time and the cost of products in small quantities. Recently, evolvable hardware has been developed with the motivation to cope with the mentioned sensor electronics problems. However, industrial specifications and requirements are not considered in the hardware learning loop; it merely minimizes the error between the required output and the real output generated for a given test signal. The aim of this thesis is to synthesize generic organic-computing sensor electronics and to return hardware with predictable behavior for embedded system applications that gains industrial acceptance; therefore, the hardware topology is constrained to standard hardware topologies, the standard hardware specifications are included in the optimization, and hierarchical optimization is abstracted from the synthesis tools to evolve first the building blocks and then the abstract level that employs these optimized blocks. On the other hand, measuring some of the industrial specifications needs expensive equipment, and measuring others is time-consuming, which is unfavorable for embedded system applications. Therefore, the novel approach of "mixtrinsic multi-objective optimization" is proposed, which simulates/estimates the set of specifications that is hard to measure due to cost or time requirements, while it intrinsically measures the set of specifications that has a high sensitivity to deviations. These approaches succeed in optimizing the hardware to meet the industrial specifications with a low-cost measurement setup, which is essential for embedded system applications.
The present thesis deals with multi-user mobile radio systems, and more specifically, with the downlinks (DL) of such systems. As a key demand on future mobile radio systems, they should enable the highest possible spectrum and energy efficiency. It is well known that, in principle, the utilization of multiple antennas in the form of MIMO systems offers considerable potential to meet this demand. Concerning the energy issue, the DL is more critical than the uplink. This is due to the growing importance of wireless Internet applications, in which the DL data rates and, consequently, the radiated DL energies tend to be substantially higher than the corresponding uplink quantities. In this thesis, precoding schemes for MIMO multi-user mobile radio DLs are considered, where, in order to keep the complexity of the mobile terminals as low as possible, the rationale receiver orientation (RO) is adopted, with the main focus on further reducing the required transmit energy in such systems. Unfortunately, besides the mentioned low receiver complexity, conventional RO schemes, such as Transmit Zero Forcing (TxZF), do not offer any transmit energy reductions as compared to conventional transmitter oriented schemes. Therefore, the main goal of this thesis is the design and analysis of precoding schemes in which such transmit energy reductions become feasible, while virtually maintaining the low receiver complexity, by means of replacing the conventional unique mappings by selectable representations of the data. Concerning the channel access scheme, Orthogonal Frequency Division Multiplex (OFDM) is presently favored as the most promising candidate in the standardization process of the enhanced 3G and forthcoming 4G systems, because it allows a very flexible resource allocation and low receiver complexity. Receiver oriented MIMO OFDM multi-user downlink transmission, in which channel equalization is already performed in the transmitter of the access point, further contributes to low receiver complexity in the mobile terminals. For these reasons, OFDM is adopted in the target system of the considered receiver oriented precoding schemes. In the precoding schemes considered, knowledge of the channel state information (CSI) in the access point in the form of the channel matrix is essential. Independently of the applied duplexing scheme, FDD or TDD, the provision of this information to the access point is always erroneous. However, it is shown that the impact of such deviations scales not only with the variance of the channel estimation errors, but also with the required transmit energies. Accordingly, the reduced transmit energies of the precoding schemes with selectable data representation also have the advantage of a reduced sensitivity to imperfect knowledge of the CSI. In fact, these two advantages are coupled with each other.
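For reference, the conventional transmit zero-forcing (TxZF) precoding mentioned above can be sketched in a few lines of NumPy for a flat-fading model y = H W d + n with a random channel and unit-energy QPSK data; this is the conventional unique-mapping baseline, not the selectable-representation schemes developed in the thesis, and all dimensions are toy values.

import numpy as np

rng = np.random.default_rng(0)
K, M = 2, 4                        # data streams (users) and transmit antennas
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
d = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)   # QPSK data

# TxZF precoder: right pseudo-inverse of H, so that H @ W = I (perfect CSI assumed)
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
t = W @ d                          # transmit signal
print("transmit energy:", np.linalg.norm(t) ** 2)
print("received data (noise-free):", H @ t)   # equals d up to numerical precision

The transmit energy printed above is exactly the quantity the selectable-representation schemes of the thesis aim to reduce further, by choosing for each data vector the cheapest admissible representation instead of a unique mapping.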
Mit zunehmender Integration von immer mehr Funktionalität in zukünftigen SoC-Designs erhöht sich die Bedeutung der funktionalen Verifikation auf der Blockebene. Nur Blockentwürfe mit extrem niedriger Fehlerrate erlauben eine schnelle Integration in einen SoC-Entwurf. Diese hohen Qualitätsansprüche können durch simulationsbasierte Verifikation nicht erreicht werden. Aus diesem Grund rücken Methoden zur formalen Entwurfsverifikation in den Fokus. Auf der Blockebene hat sich die Eigenschaftsprüfung basierend auf dem iterativen Schaltungsmodell (BIMC) als erfolgreiche Technologie herausgestellt. Trotzdem gibt es immer noch einige Design-Klassen, die für BIMC schwer zu handhaben sind. Hierzu gehören Schaltungen mit hoher sequentieller Tiefe sowie arithmetische Blöcke. Die fortlaufende Verbesserung der verwendeten Beweismethoden, z.B. der verwendeten SAT-Solver, wird der zunehmenden Komplexität immer größer werdender Blöcke alleine nicht gewachsen sein. Aus diesem Grund zeigt diese Arbeit auf, wie bereits in der Problemaufbereitung des Front-Ends eines Werkzeugs zur formalen Verifikation Maßnahmen zur Vereinfachung der entstehenden Beweisprobleme ergriffen werden können. In den beiden angesprochenen Problemfeldern werden dazu exemplarisch geeignete Freiheitsgrade bei der Modellgenerierung im Front-End identifiziert und zur Vereinfachung der Beweisaufgaben für das Back-End ausgenutzt.
Um den in der Automatisierung zunehmenden Anforderungen an Vorschubachsen hinsichtlich Dynamik, Präzision und Wartungsaufwand bei niedriger Bauhöhe und kleiner werdendem Bauvolumen gerecht zu werden, kommen immer mehr Synchron-Linearmotoren in Zahnspulentechnik mit Permanentmagneterregung in Werkzeugmaschinen zum Einsatz. Als hauptsächlicher Vorteil gegenüber der rotierenden Antriebslösung mit Getriebeübersetzung und Kugelrollspindel wird die direkte Kraftübertragung ohne Bewegungswandler genannt. Der Übergang vom konventionellen linearen Antriebssystem zum Direktantriebssystem eröffnet den Werkzeugmaschinenherstellern und den Industrieanwendungen eine Vielzahl neuer Möglichkeiten durch beeindruckende Verfahrgeschwindigkeit und hohes Beschleunigungsvermögen sowie Positionier- und Wiederholgenauigkeit und bietet darüber hinaus die Chance zu einer weiteren Produktivitäts- und Qualitätssteigerung. Um all diese Vorteile ausnutzen zu können, muss der Antrieb zuerst hinsichtlich der für Linearmotoren typischen Kraftwelligkeit optimiert werden. Die Suche nach wirtschaftlichen und praxistauglichen Gegenmaßnahmen ist ein aktuelles Forschungsthema in der Antriebstechnik. In der vorliegenden Arbeit werden die Kraftschwankungen infolge Nutung, Endeffekt und elektrischer Durchflutung im PM-Synchron-Linearmotor rechnerisch und messtechnisch untersucht. Ursachen und Eigenschaften der Kraftwelligkeit werden beschrieben und Einflussparameter aufgezeigt. Es besteht die Möglichkeit, die Kraftwelligkeit durch bestimmte Maßnahmen zu beeinflussen, z. B. mit Hilfe eines Kraftwelligkeitsausgleichs bestehend aus ferromagnetischem Material oder durch gegenseitigen Ausgleich mehrerer zusammengekoppelter Primärteile. Wie die Untersuchungen gezeigt haben, ist eine Abstimmung der Einflussparameter auf analytischem Weg kaum möglich; in der Praxis führt das auf eine experimentell-iterative Optimierung mit FEM-Unterstützung. Die gute Übereinstimmung zwischen Messung und Simulation ist ein klarer Hinweis darauf, dass die hier vorgestellten Maßnahmen als geeignet angesehen werden können; sie ermöglichen eine Reduzierung der Kraftwelligkeit von ursprünglich 3-5 % auf 1 %, wobei eine leichte Herabsetzung der Kraftdichte in Kauf genommen werden muss. Beim Maschinenentwurf muss rechtzeitig ermittelt werden, welches Kompensationsverfahren bezüglich der vorgesehenen Anwendungen günstig ist.
The present thesis deals with a novel approach to increase the resource usage in digital communications. In digital communication systems, each information-bearing data symbol is associated with a waveform which is transmitted over a physical medium. The time or frequency separations among the waveforms associated with the information data have always been chosen to avoid or limit the interference among them. By doing so, in the presence of a distortionless ideal channel, a single received waveform is affected as little as possible by the presence of the other waveforms. The conditions necessary to guarantee the absence of any interference among the waveforms are well known and consist of a relationship between the minimum time separation among the waveforms and their bandwidth occupation or, equivalently, the minimum frequency separation and their time occupation. These conditions are referred to as the Nyquist assumptions. The key idea of this work is to relax the Nyquist assumptions and to transmit with a time and/or frequency separation between the waveforms smaller than the minimum required to avoid interference. The reduction of the time and/or frequency separation generates not only an increment of the resource usage, but also a degradation in the quality of the received data. Therefore, to maintain a certain quality of the received signal, we have to increase the amount of transmitted power. We investigate the trade-off between the increment of the resource usage and the corresponding performance degradation in three different cases. The first case is the single-carrier case, in which all waveforms have the same spectrum but different temporal locations. The second one is the multi-carrier case, in which each waveform has its distinct spectrum and occupies all the available time. The third is the hybrid case, in which each waveform has its unique time and frequency location. These different cases are framed within the general system modelling developed in the thesis so that they can be easily compared. We evaluate the potential of the key idea of the thesis by choosing a set of four possible waveforms with different characteristics. By doing so, we study the influence of the waveform characteristics in the three system configurations. We propose an interpretation of the results by modifying the well-known Shannon capacity formula and by explicitly expressing its dependency on the increment of resource usage and on the performance degradation. The results are very promising. We show that both in the case of a single-carrier system with a time-limited waveform and in the case of a multi-carrier system with a frequency-limited waveform, the reduction of the time or frequency separation, respectively, has a positive effect on the channel capacity. The latter, depending on the actual SNR, can double or increase even more significantly.
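For orientation, the standard conditions that are being relaxed here can be written down in textbook form (the notation below is generic and not taken from the thesis). For waveforms of one-sided bandwidth \(B\), interference-free transmission over an ideal channel requires a symbol spacing \(T\) and, for a multi-carrier system with symbol duration \(T\), an orthogonal subcarrier spacing \(\Delta f\) obeying

\[ T \ge \frac{1}{2B}, \qquad \Delta f \ge \frac{1}{T}. \]

Transmitting beyond these limits, as proposed above, means choosing \( T' = \tau T \) or \( \Delta f' = \nu \Delta f \) with compression factors \( \tau, \nu < 1 \), which increases the resource usage by \( 1/\tau \) or \( 1/\nu \) at the price of controlled self-interference that has to be compensated by additional transmit power.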
In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e. the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver can be considered the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arriving at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. As opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimum, and the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems the implementation complexity corresponds to the weight, volume, cost etc. of the equipment. Therefore, complexity is an important aspect which should be taken into account when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS). With the above-mentioned complexity features of the rationales Tx orientation and Rx orientation in mind, in the uplink (UL), i.e. in the radio link from the MT to the BS, the quasi-natural choice would be Tx orientation, which leads to low-cost transmitters at the MTs, whereas in the downlink (DL), i.e. in the radio link from the BS to the MTs, the rationale Rx orientation would be the favorite alternative, because this results in simple receivers at the MTs. Mobile radio downlinks following the rationale Rx orientation are considered in the thesis. Modern mobile radio communication systems are cellular systems, in which both intracell and intercell interference exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated or at least reduced by joint signal processing with consideration of all the signals in the considered cell. However, such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is required also with respect to the growing electro-phobia of the public. The transmit energy reduction for multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in the thesis. Among the questions still open in this research area, two questions of major importance are considered here.
MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns linear Rx oriented transmission schemes combined with MIMO antenna structures. The benefit of MIMO for linear Rx oriented transmission schemes is investigated in the thesis. The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question concerns the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.
The thesis is focused on the modelling and simulation of a Joint Transmission and Detection Integrated Network (JOINT), a novel air interface concept for B3G mobile radio systems. Besides the utilization of the OFDM transmission technique, which is a promising candidate for future mobile radio systems, and of the duplexing scheme time division duplexing (TDD), the subdivision of the geographical domain to be supported by mobile radio communications into service areas (SAs) is a highlighted concept of JOINT. A SA consists of neighboring sub-areas, which correspond to the cells of conventional cellular systems. The signals in a SA are jointly processed in a Central Unit (CU) in each SA. The CU performs joint channel estimation (JCE) and joint detection (JD) in the form of the receive zero-forcing (RxZF) filter for the uplink (UL) transmission and joint transmission (JT) in the form of the transmit zero-forcing (TxZF) filter for the downlink (DL) transmission. By these algorithms intra-SA multiple access interference (MAI) can be eliminated within the limits of the used model so that unbiased data estimates are obtained, and most of the computational effort is moved from the mobile terminals (MTs) to the CU so that the MTs can manage with low complexity. A simulation chain of JOINT has been established by the author in the software MLDesigner, based on time-discrete equivalent lowpass modelling. In this simulation chain, all key functionalities of JOINT are implemented. The simulation chain is designed for link level investigations. A number of channel models are implemented both for the single-SA scenario and the multiple-SA scenario so that the system performance of JOINT can be comprehensively studied. It is shown that in JOINT a duality or symmetry of the MAI elimination in the UL and in the DL exists. Therefore, the typical noise enhancement going along with the MAI elimination by JD and JT, respectively, is the same in both links. In the simulations the impact of channel estimation errors on the system performance is also studied. In the multiple-SA scenario, due to the existence of inter-SA MAI, which cannot be suppressed by the algorithms of JD and JT, the system performance in terms of the average bit error rate (BER) and the BER statistics degrades. A collection of simulation results shows the potential of JOINT with respect to the improvement of the system performance and the enhancement of the spectrum efficiency as compared to conventional cellular systems.
In the thesis the task of channel estimation in beyond 3G service area based mobile radio air interfaces is considered. A system concept named Joint Transmission and Detection Integrated Network (JOINT) forms the target platform for the investigations. A single service area of JOINT is considered, in which a number of mobile terminals is supported by a number of radio access points, which are connected to a central unit responsible for the signal processing. The modulation scheme of JOINT is OFDM. Pilot-aided channel estimation is considered, which has to be performed only in the uplink of JOINT, because the duplexing scheme TDD is applied. In this way, the complexity of the mobile terminals is reduced, because they do not need a channel estimator. Based on the signals received by the access points, the central unit estimates the channel transfer functions jointly for all mobile terminals. This is done by resorting to the a priori knowledge of the radiated pilot signals and by applying the technique of joint channel estimation, which is developed in the thesis. The quality of the gained estimates is judged by the degradation of their signal-to-noise ratio as compared to the signal-to-noise ratio of the respective estimates gained in the case of a single mobile terminal radiating its pilots. In the case of single-element receive antennas at the access points, said degradation depends solely on the structure of the applied pilots. In the thesis it is shown how the SNR degradation can be minimized by a proper design of the pilots. Besides using appropriate pilots, the performance of joint channel estimation can be further improved by the inclusion of additional a priori information in the estimation process. An example of such additional information would be the knowledge of the directional properties of the radio channels. This knowledge can be gained if multi-element antennas are applied at the access points. Further, a priori channel state information in the form of the power delay profiles of the radio channels can be included in the estimation process by applying the minimum mean square error estimation principle for joint channel estimation. After having intensively studied the problem of joint channel estimation in JOINT, the thesis is rounded off by considering the impact of the unavoidable channel estimation errors on the performance of data estimation in JOINT. For the case of small channel estimation errors occurring due to the presence of noise at the access points, the performance of joint detection in the uplink and of joint transmission in the downlink of JOINT is investigated based on simulations. For the uplink, which utilizes joint detection, it is shown to which degree the bit error probability increases due to channel estimation errors. For the downlink, which utilizes joint transmission, channel estimation errors lead to an increase of the required transmit power, which can be quantified by the simulation results.
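A minimal NumPy sketch of pilot-aided least-squares joint channel estimation in the spirit described above (toy dimensions, random pilot matrix, perfect knowledge of the pilots at the central unit); it illustrates only the basic estimator structure, not the pilot designs or the MMSE extension studied in the thesis, and the degradation measure at the end is one common definition stated here as an assumption rather than the thesis's exact one.

import numpy as np

rng = np.random.default_rng(1)
K, W, L = 2, 3, 8            # terminals, channel taps per terminal, pilot length
# Toy pilot matrix G (random here; in practice built from the terminals' pilot sequences)
G = (rng.normal(size=(L, K * W)) + 1j * rng.normal(size=(L, K * W))) / np.sqrt(2)
h = (rng.normal(size=K * W) + 1j * rng.normal(size=K * W)) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=L) + 1j * rng.normal(size=L))
e = G @ h + noise            # received pilot signal at the central unit

# Joint least-squares channel estimate for all terminals at once
h_hat = np.linalg.solve(G.conj().T @ G, G.conj().T @ e)
print("estimation error:", np.linalg.norm(h_hat - h))

# One common way to quantify the per-coefficient SNR degradation relative to a
# single terminal transmitting its pilots alone (assumption, not the thesis's text):
A = G.conj().T @ G
degradation = np.real(np.diag(np.linalg.inv(A)) * np.diag(A))
print("per-coefficient SNR degradation factors:", degradation)

For well-designed pilots the matrix G^H G becomes diagonal, and the degradation factors above approach one.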
Channel estimation is of great importance in many wireless communication systems, since it influences the overall performance of a system significantly. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals, or so-called pilots, that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems more and more attention is paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference and also the interference between the channel coefficients. The performance of JCE can be evaluated in noise limited systems by the SNR degradation and in interference limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, no matter whether the signals are represented in the time domain or in the frequency domain. Optimum pilots like disjoint pilots, Walsh code based pilots or CAZAC code based pilots, whose constructions are described in this thesis, do not show any SNR degradation when being applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transformation. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transformation. However, even for optimum pilots different variation coefficients are obtained in interference limited systems. Furthermore, especially for OFDM-based transmission schemes the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots. CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the transmit signal that originates from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to one single service area, and considering the impact of interference from adjacent service areas that occurs due to a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes that originate from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is oriented towards the evaluated implementation complexity. According to the chosen parameterization of the regarded OFDM-based and FMT-based systems it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT applying CAZAC codes as pilots. The results of this thesis can be used as a basis for further theoretical research and also for future JCE implementations in wireless systems.
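The constant-amplitude zero-autocorrelation (CAZAC) property referred to above can be illustrated with a Zadoff-Chu sequence, one well-known CAZAC family; the concrete pilot constructions of the thesis may differ, and root and length below are arbitrary example values.

import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence of odd length with root index coprime to the length."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

z = zadoff_chu(root=5, length=63)

# Constant amplitude -> PAPR of 0 dB for the pilot-only transmit signal
papr_db = 10 * np.log10(np.max(np.abs(z) ** 2) / np.mean(np.abs(z) ** 2))
print("PAPR [dB]:", round(papr_db, 12))

# Zero cyclic autocorrelation at all non-zero lags
acf = np.array([np.vdot(z, np.roll(z, lag)) for lag in range(1, 63)])
print("max |cyclic autocorrelation| at non-zero lag:", np.max(np.abs(acf)))

The DFT of a Zadoff-Chu sequence is again a constant-amplitude sequence, which is consistent with the statement above that optimum time-domain pilots remain optimum pilots after a discrete Fourier transformation.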
Der Trend zur Verfügbarkeit mehrerer Mobilfunknetze im gleichen Versorgungsgebiet nicht nur unterschiedlicher Operatoren, sondern auch unterschiedlicher Mobilfunkstandards in möglicherweise unterschiedlichen Hierarchieebenen führt zu einer Vielzahl von Koexistenzszenarien, in denen Intersystem- und Interoperator-MAI die einzelnen Mobilfunknetze beeinträchtigen können. In der vorliegenden Arbeit wird ein systematischer Zugang zur Koexistenzproblematik durch die Klassifizierung der MAI erarbeitet. Eine MAI-Art kann dabei mehreren MAI-Klassen angehören. Durch die Einteilung in Klassen wird angestrebt, zum einen die eine MAI-Art beeinflussenden Effekte anhand der Zugehörigkeit zu bestimmten MAI-Klassen besser verstehen zu können. Zum anderen dient die Einteilung der MAI in Klassen zum Abschätzen der Gefährlichkeit einer MAI-Art, über die sich Aussagen machen lassen anhand der Zugehörigkeit zu bestimmten MAI-Klassen. Der Begriff Gefährlichkeit einer MAI-Art schließt neben der mittleren Leistung auch weitere Eigenschaften wie Varianz oder Ursache der MAI ein. Einfache Schlimmstfall-Abschätzungen, wie sie in der Literatur gebräuchlich sind, können leicht zu Fehleinschätzungen der Gefährlichkeit einer MAI-Art führen. Durch die Kenntnis der zugehörigen MAI-Klassen einer MAI-Art wird die Gefahr solcher Fehleinschätzungen erkennbar. Neben den Schlimmstfall-Abschätzungen unter Berücksichtigung der MAI-Klassen werden in der vorliegenden Arbeit auch Simulationen durchgeführt, anhand derer die Abschätzungen verifiziert werden. Dazu werden Werkzeuge in Form von mathematischen Modellen zum Berechnen der Leistung der verschiedenen MAI-Arten unter Einbeziehen der verschiedenen betrachteten Verfahren zum Mindern von MAI erarbeitet. Dabei wird auch ein Konzept zum Vermindern der erforderlichen Rechenleistung vorgestellt. Anhand der Untersuchung der Koexistenz der beispielhaften Mobilfunksysteme WCDMA und TD-CDMA wird gezeigt, daß sich das Auftreten extrem hoher Intersystem- bzw. Interoperator-MAI durch geeignete Wahl der Systemparameter wie Zellradien und Antennenhöhen, sowie durch Verfahren zum Mindern von MAI wie effizienten Leistungsregelungsverfahren und dynamische Kanalzuweisung meist vermeiden läßt. Es ist jedoch essentiell, daß die Koexistenzproblematik bereits in der Phase der Funknetzplanung adäquat berücksichtigt wird. Dabei ist eine Kooperation der beteiligten Operatoren meist nicht notwendig, lediglich besonders kritische Fälle wie Kollokation von BSen verschiedener TDD-Mobilfunknetze z.B. nach dem 3G-Teilstandard TD-CDMA müssen von den Operatoren einvernehmlich vermieden werden. Da bei der Koexistenz von Mobilfunknetzen in Makrozellen aufgrund ihres hohen Zellradius besonders hohe Interoperator-MAI für den Fall der Gleichstrecken-MAI auftreten kann, wird in der vorliegenden Arbeit ein neuartiges Konzept zum Vermindern dieser MAI basierend auf Antennentechniken vorgestellt. Das Konzept zeigt ein vielversprechendes Potential zum Mindern der Interoperator-MAI.
The present thesis deals with a novel air interface concept for beyond 3G mobile radio systems. Signals received at a certain reference cell in a cellular system which originate in neighboring cells of the same cellular system are undesired and constitute the intercell interference. Due to intercell interference, the spectrum capacity of cellular systems is limited and therefore the reduction of intercell interference is an important goal in the design of future mobile radio systems. In the present thesis, a novel service area based air interface concept is investigated in which interference is combated by joint detection and joint transmission, providing an increased spectrum capacity as compared to state-of-the-art cellular systems. Various algorithms are studied, with the aid of which intra service area interference can be combated. In the uplink transmission, by optimum joint detection the probability of erroneous decision is minimized. Alternatively, suboptimum joint detection algorithms can be applied offering reduced complexity. By linear receive zero-forcing joint detection interference in a service area is eliminated, while by linear minimum mean square error joint detection a trade-off is performed between interference elimination and noise enhancement. Moreover, iterative joint detection is investigated and it is shown that convergence of the data estimates of iterative joint detection without data estimate refinement towards the data estimates of linear joint detection can be achieved. Iterative joint detection can be further enhanced by the refinement of the data estimates in each iteration. For the downlink transmission, the reciprocity of uplink and downlink channels is used by joint transmission eliminating the need for channel estimation and therefore allowing for simple mobile terminals. A novel algorithm for optimum joint transmission is presented and it is shown how transmit signals can be designed which result in the minimum possible average bit error probability at the mobile terminals. By linear transmit zero-forcing joint transmission interference in the downlink transmission is eliminated, whereas by iterative joint transmission transmit signals are constructed in an iterative manner. In a next step, the performance of joint detection and joint transmission in service area based systems is investigated. It is shown that the price to be paid for the interference suppression in service area based systems is the suboptimum use of the receive energy in the uplink transmission and of the transmit energy in the downlink transmission, with respect to the single user reference system. In the case of receive zero-forcing joint detection in the uplink and transmit zero-forcing joint transmission in the downlink, i.e., in the case of linear unbiased data transmission, it is shown that the same price, quantified by the energy efficiency, has to be paid for interference elimination in both uplink and downlink. Finally it is shown that if the system load is fixed, the number of active mobile terminals in a SA and hence the spectrum capacity can be increased without any significant reduction in the average energy efficiency of the data transmission.
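The difference between linear receive zero-forcing and linear minimum mean square error joint detection mentioned above comes down to a single regularization term; the following NumPy sketch with a toy system matrix is illustrative only and assumes unit-energy data symbols.

import numpy as np

rng = np.random.default_rng(2)
N, K = 8, 3                                 # received samples, data symbols
A = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
d = rng.choice([1, -1], size=K).astype(complex)
sigma2 = 0.1
e = A @ d + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Receive zero-forcing: unbiased, eliminates interference, may enhance noise
d_zf = np.linalg.solve(A.conj().T @ A, A.conj().T @ e)

# Linear MMSE: trades residual interference against noise enhancement
d_mmse = np.linalg.solve(A.conj().T @ A + sigma2 * np.eye(K), A.conj().T @ e)

print("ZF estimate:  ", np.round(d_zf, 3))
print("MMSE estimate:", np.round(d_mmse, 3))
print("sent data:    ", d)

For sigma2 tending to zero the MMSE solution converges to the zero-forcing solution; for larger noise it performs exactly the trade-off between interference elimination and noise enhancement described above.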
Das moderne Wohngebäude zeichnet sich durch einen niedrigen Heizwärmebedarf aus. Mit Zunahme der Sensitivität des Wohngebäudes bezüglich der Solarstrahlung aufgrund neuartiger Systeme wie transparenter Wärmedämmung, Phasenwechselmaterialien oder großer Fensterflächen, erweitert sich der herkömmliche Regelungsansatz zur Einhaltung des behaglichen Raumklimas. Das thermische Gebäudeverhalten definiert sich weitaus komplexer. Es kommen neben der notwendigen Heizung weitere Aktoren (Sonnenschutzeinrichtung, Lüftung) ins Spiel. Die Zunahme der realisierbaren solaren Erträge bewirkt im Winter fossile Energieeinsparungen. Im Sommer sind jedoch ohne geeignete Maßnahmen Überhitzungen die Folge. Mit Hilfe moderner und vernetzter Regelungstechnik können Wirtschaftlichkeit des Systems und Komfort optimiert werden. Hierzu wurden bewährte Simulationswerkzeuge erweitert. Moderne Komponenten wie Phasenwechselmaterialien, transparente Wärmedämmung und Verschattungssysteme auf Basis einer schaltenden Schicht im Glasverbund erfahren eine Modellbildung. Umfangreiche Validierungen zu den Teilmodellen und zum Gesamtmodell zeigen, dass eine realitätsnahe Abbildung erreicht wird. Grundlage dieser Validierungssequenzen waren Feldtestmessungen an bewohnten Gebäuden, sowie Ergebnisse von Systemtestständen. Die Aussagesicherheit des gesamten Gebäudemodells wurde durch eine sogenannte "Cross-Validation" mit anderen etablierten Simulationsprogrammen hergestellt. Mit der Schaffung der realitätsnahen Abbildung eines solaroptimierten Wohngebäudes, welches an eine heizungsunterstützende, solarthermische Anlage gekoppelt ist, wurde die Grundlage zur Entwicklung einer prädiktiven Wärmeflussregelung in Wohngebäuden mit erweitertem thermischen Verhalten gelegt. Eine Untersuchung zum dynamischen Verhalten auf periodische Anregung von Einflussgrößen zeigt die dominanten Zeitkonstanten des Gebäudesystems auf. Einfallende Solarstrahlung durch Fenster wirkt sich am schnellsten auf die empfundene Raumtemperatur aus. Dies hat Auswirkung auf die prädiktive Regelung. Während im Winter die Solarstrahlung zur Heizungsunterstützung herangezogen werden soll, gilt es, im Sommer die Bewohner vor Überhitzung zu schützen. Damit die Regelung in der Heizperiode an sonnigen Tagen nicht unnötig vorheizt (d. h. frühzeitig abgeschaltet wird, da der zukünftige Heizbedarf von der Sonne gedeckt werden kann) und damit Überhitzungen (vor allem im Sommer) vermieden werden können, wurde für das untersuchte Gebäudemodell ein Prognosehorizont für den Prädiktor bestimmt. Mit dem modellbasierten Regelungskonzept wurde ein übergreifendes Wärmemanagementsystem entwickelt, welches mit der Information einer lokalen Wettervorhersage den thermischen Zustand des Gebäudes vorhersagt. Aufgrund des im Regler implementierten, reduzierten Modells berücksichtigt die Prädiktion die besonderen Eigenschaften der eingesetzten Fassadenkomponenten. Umfangreiche, simulationsgestützte Untersuchungen bewerten das Regelungskonzept. Als Referenzsystem dient ein Gebäude mit herkömmlichem Regelungskonzept.
We present new algorithms and provide an overall framework for the interaction of the classically separate steps of logic synthesis and physical layout in the design of VLSI circuits. Due to the continuous development of smaller sized fabrication processes and the subsequent domination of interconnect delays, the traditional separation of logical and physical design results in increasingly inaccurate cost functions and aggravates the design closure problem. Consequently, the interaction of physical and logical domains has become one of the greatest challenges in the design of VLSI circuits. To address this challenge, we propose different solutions for the control and datapath logic of a design, and show how to combine them to reach design closure.
As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, their economic realization is a challenge for the chip-making industry. This is, however, barely possible today, as the ability to design and verify such complex systems could not keep up with the rapid technological development. Owing to this productivity gap, a design methodology mainly using pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. Cost-effectively, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows for using backtrack search procedures, such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, enjoying regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackle these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL). This work describes a new approach for property checking on the RTL, preserving the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence, from the search space. Several approaches for symmetry reduction in search problems, based on invariance of a function under permutation of variables, have been previously proposed. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases. Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a variable permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), they are either all removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we found a counter-example either with \(x'=0\) or \(x'=1\). Extending this basic idea to vectors of variables allows it to be elevated to the RTL. There, self-similarities in the function representation, resulting from the regular structure preserved, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof, are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs previously intractable.
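To make the definition above concrete, here is a brute-force Python check of symmetrical values for a small invented Boolean function; it enumerates permutations of the remaining variables and compares the two cofactors on all assignments, which is only feasible for tiny examples.

from itertools import permutations, product

def f(x, y, z):
    # small example function (invented): f = (~x & y & ~z) | (x & ~y & z)
    return (not x and y and not z) or (x and not y and z)

def symmetrical_values(f, nvars, var):
    """Check whether 0 and 1 are symmetrical values for variable `var` of `f`,
    i.e. whether some permutation of the remaining variables maps the cofactor
    f|var=1 onto the cofactor f|var=0 (brute force over all permutations)."""
    others = [i for i in range(nvars) if i != var]
    for perm in permutations(others):
        mapping = dict(zip(others, perm))
        ok = True
        for bits in product([False, True], repeat=nvars - 1):
            assign0 = {v: b for v, b in zip(others, bits)}
            assign1 = {mapping[v]: assign0[v] for v in others}
            a0 = [assign0.get(i, False) for i in range(nvars)]
            a1 = [assign1.get(i, True) for i in range(nvars)]
            a0[var], a1[var] = False, True
            if f(*a0) != f(*a1):
                ok = False
                break
        if ok:
            return True
    return False

print(symmetrical_values(f, 3, 0))   # True: swapping y and z relates the two cofactors

The thesis detects such symmetries syntactically, via bitvector term rewriting and term-graph isomorphism, rather than by enumeration, which is what makes the reduction tractable for RTL bitvector functions.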
The main contributions of this work are in providing a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
In Anbetracht der ständig steigenden Nachfrage nach Mobilkommunikation einerseits und der nur begrenzt zur Verfügung stehenden Ressource Frequenzspektrum andererseits müssen Mobilfunksysteme der dritten Generation (3G) eine hohe Frequenzökonomie haben. Dies trifft insbesondere auf die Abwärtsstrecken dieser Systeme zu, in denen auch paketorientierte Dienste mit hohen Datenraten angeboten werden sollen. Seitens der Basisstationen kann die spektrale Effizienz der Abwärtsstrecke durch das Verwenden mehrelementiger adaptiver Sendeantennen erhöht werden. Hierzu sind leistungsfähige Signalverarbeitungskonzepte erforderlich, die die effektive Kombination der adaptiven Antennen mit der eingesetzten Sendeleistungsregelung ermöglichen. Die wichtigsten Aspekte beim Entwerfen von Signalverarbeitungskonzepten für adaptive Sendeantennen sind das Gewährleisten mobilstationsspezifischer Mindestdatenraten sowie das Reduzieren der aufzuwendenden Sendeleistungen. Die vorliegende Arbeit trägt dazu bei, den Einsatz mehrantennenelementiger adaptiver Sendeantennen in Mobilfunksystemen der dritten Generation voranzutreiben. Existierende Konzepte werden dargestellt, vereinheitlicht, analysiert und durch eigene Ansätze des Autors erweitert. Signalverarbeitungskonzepte für adaptive Antennen benötigen als Wissensbasis zumindest einen gewissen Grad an Kenntnis über die Mobilfunkkanäle der Abwärtsstrecke. Beim für den FDD-Modus angedachten 3G-Teilstandard WCDMA ergibt sich das Problem, daß wegen des Frequenzversatzes zwischen der Auf- und der Abwärtsstrecke die Ergebnisse der Kanalschätzung in der Aufwärtsstrecke nicht direkt zum Einstellen der adaptiven Sendeantennen verwendet werden können. Eine Möglichkeit, in FDD-Systemen an den Basisstationen ein gewisses Maß an Kenntnis über die räumlichen Eigenschaften der Mobilfunkkanäle der Abwärtsstrecke verfügbar zu machen, besteht im Ausnutzen der an den Basisstationen ermittelbaren räumlichen Korrelationsmatrizen der Mobilfunkkanäle der Aufwärtsstrecke. Diese Vorgehensweise ist nur dann sinnvoll, wenn die relevanten Einfallsrichtungen der Aufwärtsstrecke mit den relevanten Abstrahlungsrichtungen der Abwärtsstrecke übereinstimmen. Für diesen Fall wird in der vorliegenden Arbeit ein aufwandsgünstiges Verfahren zum Anpassen der adaptiven Sendeantennen erarbeitet, das nicht auf komplexen Richtungsschätzalgorithmen beruht. Eine verläßlichere Methode, an den Basisstationen ein gewisses Maß an Kenntnis über die räumlichen Eigenschaften der Mobilfunkkanäle der Abwärtsstrecke verfügbar zu machen, ist das Signalisieren von Kanalzustandsinformation, die an den Mobilstationen gewonnen wird, über einen Rückkanal an die versorgende Basisstation. Da dieses Rücksignalisieren zeitkritisch ist und die Übertragungskapazität des Rückkanals begrenzt ist, wird in der vorliegenden Arbeit ein aufwandsgünstiges Verfahren zum Vorverarbeiten und Rücksignalisieren von Kanalzustandsinformation erarbeitet.
Receiver-oriented transmission schemes are characterized by the fact that the signal processing algorithm used in the transmitter is adapted to the signal processing algorithm used in the receiver. This is usually done with additional channel information that is available only at the transmitter and not at the receiver. In receiver-oriented systems, particularly simple algorithms can be implemented in the receivers, which, in the case of the downlink of a mobile radio system, are the mobile stations. This leads to low production costs and low energy consumption of the mobile stations. In order to nevertheless guarantee a certain quality of the data transmission, receiver orientation places more effort in the fixed station of the mobile radio system. The transmission schemes currently in use and envisaged for the third mobile radio generation (UMTS) are transmitter oriented. This means that the signal processing algorithm in the receiver is adapted to the signal processing algorithm of the transmitter. With transmitter orientation, too, the channel information is usually included in the adaptation process in the receiver. To obtain the channel information, test signals are necessary from which the channel information can be estimated. Such test signals can be dispensed with in the downlink of a receiver-oriented mobile radio system. Instead of the test signals, data can be transmitted, thus increasing the data rate compared to transmitter-oriented systems. In order to assess the performance of transmission schemes, suitable criteria are necessary. Usually, bit error probabilities or signal-to-interference ratios are used for this assessment. Since the transmit energy to be expended is an important aspect of future mobile radio systems not only technically but also socially, the author proposes the criterion of energy efficiency. The energy efficiency assesses the interplay of the signal processing algorithms of the transmitter and the receiver while taking the channel properties into account; the useful received energy is related to the invested transmit energy. On the basis of the energy efficiencies determined and the analytical considerations in the present work, one can conclude that receiver-oriented transmission schemes are preferable to transmitter-oriented ones for downlink transmission in mobile radio systems if relatively many antennas are available at the fixed station and relatively few at the mobile stations. This is already the case today and is also to be expected in future mobile radio systems. Furthermore, the channel-oriented transmission scheme, which is briefly investigated and in which the signal processing algorithms of both the transmitter and the receiver are adapted to the channel information, opens up a wide field for future research.
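The energy-efficiency criterion described in the preceding abstract relates the useful received energy to the invested transmit energy; a minimal formalization (the symbol names are assumptions, not the thesis's notation) reads \[ \eta = \frac{E_{\mathrm{Rx,useful}}}{E_{\mathrm{Tx}}}, \] so that a combination of transmitter and receiver algorithms is the more energy efficient, the larger the fraction of the transmitted energy that arrives as usable received energy over the given channel.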
Mobile radio systems are interference limited. A significant increase in the performance of future mobile radio systems can therefore only be achieved by employing techniques that reduce the detrimental effect of interference. A particularly attractive class of such techniques is joint receive signal processing; however, the systematic design and systematic analysis of such techniques for CDMA mobile radio systems with infinite or quasi-infinite data transmission - a class of future mobile radio systems of particular interest with respect to the third-generation cellular systems currently going into operation - has so far remained unclear. The present work contributes to systematizing the design and optimization process of joint receive signal processing techniques for mobile radio systems of this kind. To this end, it is shown that the task of joint receive signal processing can be decomposed into the five subtasks block forming, data assignment, inter-block signal processing, intra-block signal processing, and combining & deciding. After all five subtasks are clearly defined and delimited from one another in a first step, solution proposals that are optimum or suboptimum according to certain criteria are developed for each subtask in a second step. Novel approaches are proposed for solving each individual subtask, with the focus both on optimizing the performance of the respective approaches and on aspects relevant to practical realizability. A key role is played by the intra-block signal processing techniques, whose task is to determine, starting from sections of the received signal, estimates of the data contributing to the respective section. The proposed intra-block signal processing techniques are essentially based on iterative versions of known linear estimators, which are extended by a nonlinear estimate refiner. The nonlinear estimate refiner exploits a priori information, such as knowledge of the data symbol alphabet and of the a priori probabilities of the data to be transmitted, to increase the reliability of the data estimates to be determined. The different versions of the iteratively realized linear estimators and the different estimate refiners form a kind of modular toolkit that allows a tailor-made intra-block signal processing technique to be constructed for many applications. Building on the developed systematic design principle, a joint receive signal processing technique tailored to an exemplary CDMA mobile radio system with synchronous multiple access is finally proposed. The presented simulation results show that, compared with currently favored data estimation techniques that do not follow the principle of joint receive signal processing, the number of simultaneously active CDMA codes in typical mobile radio scenarios can be increased by nearly an order of magnitude by employing the proposed joint receive signal processing technique, without degrading the reliability of the determined estimates observable at the reference receiver for a given signal-to-interference ratio.
The use of joint receive signal processing techniques is therefore a promising measure for increasing the capacity of future mobile radio systems.
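As a hedged sketch of the idea of an iterative linear estimator extended by a nonlinear estimate refiner (the concrete system matrices, estimators and refiners of the thesis are not reproduced; the BPSK alphabet, the matrix A and all parameters below are assumptions for illustration only):

    import numpy as np

    rng = np.random.default_rng(1)
    K = 8
    alphabet = np.array([-1.0, 1.0])                   # assumed data symbol alphabet (BPSK)

    A = np.eye(K) + 0.15 * rng.standard_normal((K, K)) # assumed system (correlation) matrix
    d_true = rng.choice(alphabet, K)                   # transmitted data
    e = A @ d_true + 0.1 * rng.standard_normal(K)      # received, matched-filtered signal

    def refine(d_soft, strength=0.5):
        # Nonlinear refiner: pull each soft estimate toward the nearest alphabet symbol,
        # exploiting a priori knowledge of the alphabet.
        nearest = alphabet[np.argmin(np.abs(d_soft[:, None] - alphabet[None, :]), axis=1)]
        return (1 - strength) * d_soft + strength * nearest

    d_hat = np.zeros(K)
    mu = 1.0 / np.linalg.norm(A, 2)                    # step size of the linear iteration
    for _ in range(20):
        d_hat = d_hat + mu * (e - A @ d_hat)           # iterative linear estimation step
        d_hat = refine(d_hat)                          # nonlinear refinement step
    print(np.sign(d_hat) == d_true)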
For the development and planning of energy-saving buildings and for the design of suitable control algorithms, detailed knowledge of the thermal and energetic behaviour of a building, which interacts with its environment and its occupants, is required. A mathematical model provides this. The description of large, complex technical systems leads to highly complex, extensive mathematical models which, once implemented for simulation, result in large software systems. It is therefore natural to use concepts from computer science in mathematical modelling as well. Besides decomposition into subsystems and the structuring concepts for mastering complexity, a current research topic of computer science is of particular interest here: the use of reuse as a methodical element of the software development process for large systems. A model library for the simulation of thermal building behaviour was created in Modelica. It is subdivided into a building, a thermohydraulics, an environment and an algorithm library. The object-oriented model components, implemented without a fixed computational causality, are hierarchically structured. Their implementation follows the intuitive physical understanding of the technical process to be described: a building aggregates individual rooms, windows and walls, and these in turn aggregate individual wall layers.
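The library itself is an acausal Modelica component library; purely to illustrate the described aggregation hierarchy, the following Python sketch uses assumed class and attribute names:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class WallLayer:
        thickness_m: float
        conductivity_W_mK: float

    @dataclass
    class Wall:
        layers: List[WallLayer] = field(default_factory=list)

    @dataclass
    class Window:
        area_m2: float

    @dataclass
    class Room:
        walls: List[Wall] = field(default_factory=list)
        windows: List[Window] = field(default_factory=list)

    @dataclass
    class Building:
        rooms: List[Room] = field(default_factory=list)

    # A building aggregates rooms; rooms aggregate walls and windows; walls aggregate layers.
    house = Building(rooms=[Room(walls=[Wall(layers=[WallLayer(0.24, 0.8)])],
                                 windows=[Window(1.5)])])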
Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)
It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas, which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis, the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. Especially, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated with a view to improving the system performance as compared to state-of-the-art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging at the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented which rely on such antennas under consideration of interference covariance matrices. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection. Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases where the impinging interference is highly directional, whereas in cases where the interference directions are distributed more homogeneously over the azimuth, the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
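As a hedged sketch of an MMSE-type estimator that incorporates both a channel correlation matrix and an interference covariance matrix, in the spirit of the abstract above (the training model e = G h + n, all matrix names, dimensions and values are assumptions and not the thesis's formulation):

    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_h = 16, 6

    G = rng.standard_normal((n_obs, n_h))            # assumed known training/system matrix
    R_h = np.eye(n_h)                                # assumed channel correlation matrix
    R_i = 0.5 * np.eye(n_obs)                        # assumed interference-plus-noise covariance

    h = rng.multivariate_normal(np.zeros(n_h), R_h)  # true channel realization
    n = rng.multivariate_normal(np.zeros(n_obs), R_i)
    e = G @ h + n                                    # received training observation

    # MMSE estimate: h_hat = R_h G^T (G R_h G^T + R_i)^{-1} e
    h_hat = R_h @ G.T @ np.linalg.solve(G @ R_h @ G.T + R_i, e)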
Contributions to the application of adaptive antennas and CDMA code pooling in the TD CDMA downlink
(2002)
TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent. However, corresponding studies for the downlink are still missing. This thesis aims to contribute to filling this gap. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink utilizing multi-element antennas at the BS is investigated both on the system level and on the link level. The system level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As the result of the link level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas. These improvements include * the concept of partial joint detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those of interest to this MS, is jointly detected, * a blind channel estimation algorithm, * CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates, * maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels, * specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes. As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
Modern mobile radio systems operating according to the cellular concept are interference-limited systems. An essential goal in designing future mobile radio concepts is therefore the reduction of the occurring interference. Only in this way can the spectral efficiency of future mobile radio systems still be increased significantly beyond the state of the art. The elimination of intracell interference, i.e. the interactions between the signals of several users served by the same cell, through joint detection (JD) is already an essential feature of the air interface concept TD-CDMA. A so far largely unconsidered potential for increasing spectral efficiency and capacity, however, lies in the reduction of intercell interference, i.e. the interference mutually caused by users of different cells. Particularly in systems with small cluster sizes, a reduction of the then very strong intercell interference promises considerable gains. Intercell interference reduction is therefore the logical next step after intracell interference reduction. The present work contributes to developing beneficial techniques for reducing intercell interference in future mobile radio systems by appropriately taking into account and eliminating the influence of the intercell interference signals in the receiver-side signal processing. The goal is to obtain an improved estimate of the transmitted user data; to this end, signals from intercell interference sources are taken into account in the data estimation. The information required for this is obtained with the likewise presented techniques for identifying and selecting strong intercell interference sources and with a channel estimation extended compared to the previous system design. It is shown that the relevant intercell interference sources can be reliably identified with a low-complexity detector. With a channel estimation technique optimized for short mobile radio channels, which are increasingly to be expected in hotspots, the current mobile radio channel impulse responses are determined for all relevant users. In order to be able to perform the data estimation for many users, the estimation technique Multi-Step Joint Detection is designed, which reduces the SNR degradation known from conventional joint detection. The simulation results demonstrate the performance of the designed system concept. The intercell interference reduction techniques can be beneficially employed both to increase the spectral efficiency of the system and to improve the quality of service at constant spectral efficiency.
In this thesis, a new family of codes for use in optical high bit rate transmission systems with a direct-sequence code division multiple access component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The codes commonly known to date have the problem of needing very long sequence lengths in order to accommodate an adequate number of users. Thus, code sequence lengths of 1000 or more were necessary to reach sufficiently low bit error ratios with only about 10 simultaneous users. However, such sequence lengths are unacceptable if signals with data rates higher than 100 MBit/s are to be transmitted, let alone if larger numbers of simultaneous users are to be supported. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters - justified for high bit rate WDM transmission systems - a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each pair of sequences. By this, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced into numerical simulations of an 80 GBit/s DWDM transmission system with 8 channels, each carrying a 10 GBit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings in combination with the high bit rates lead to very strict demands on system components like laser diodes, filters, multiplexers etc. Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent degradations of the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that - very different from the pure WDM system - by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively mitigated. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows a very robust behavior against a variety of influences.
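The construction above minimizes the cross-correlation between code pairs; the following Python sketch merely evaluates that figure of merit, the maximum periodic cross-correlation of two binary on-off sequences (the sequences shown are made up for illustration and are not MOOC sequences from the thesis):

    import numpy as np

    def max_periodic_crosscorrelation(a, b):
        a, b = np.asarray(a), np.asarray(b)
        n = len(a)
        # Correlate a with every cyclic shift of b and return the largest value.
        return max(int(np.dot(a, np.roll(b, s))) for s in range(n))

    c1 = [1, 0, 1, 0, 0, 1, 0, 0, 0]   # illustrative sequences only
    c2 = [1, 0, 0, 1, 0, 0, 0, 1, 0]
    print(max_periodic_crosscorrelation(c1, c2))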
The European third-generation mobile radio system is called UMTS. UTRA - the terrestrial radio access of UMTS - provides two harmonized air interfaces: the TDD-based TD-CDMA and the FDD-based WCDMA. The duplexing scheme TDD offers considerable advantages over FDD; for example, TDD-based air interfaces can generally provide different data rates in the uplink and downlink more efficiently than FDD-based air interfaces. TD-CDMA is the subject of this work, and the most important details of this air interface are presented. Propagation delay and interference are essential aspects when using TDD; these aspects are investigated in depth for the case of the considered TD-CDMA. In UMTS, besides voice transmission, high-rate data services and multimedia services in particular play an important role. The different quality requirements of these services are a great challenge for UMTS, especially at the physical layer. To meet the quality requirements of different services, UTRA defines the L1/L2 interface through different transport channels. Each transport channel guarantees a certain transmission quality through its specified data rate, delay and maximum permissible bit error rate. This raises the problem of realizing these transport channels at the physical layer, which is investigated in depth for TD-CDMA in the present work. The UTRA standard refers to the realization of a transport channel as a transport format. Important parameters of the transport format are the pooling concept used, the FEC scheme employed and the associated code rate. In order to compare the performance of different transport formats quantitatively, a suitable evaluation measure is given. The measurement values required for this evaluation can only be determined by link-level simulation. Therefore, a program for the simulation of transport formats in TD-CDMA is developed. In developing this program, concepts, techniques, methods and principles of computer science for software development are applied in order to support the reusability and modifiability of the program. In addition, important techniques for reducing the bit error rate - fast power control and antenna diversity - are implemented. The performance of an exemplary selection of transport formats is determined by simulation and compared using the evaluation measure. As FEC schemes, turbo codes and the concatenation of an inner convolutional code and an outer RS code are employed. It is shown that the investigated techniques for reducing the bit error rate have a substantial influence on the performance of the transport formats. Furthermore, it is shown that the transport formats with turbo codes achieve better results than the transport formats with code concatenation.
After an introduction to the methods used for acquiring in-body measurement quantities and for transmitting measurement data, this work presents a concept that supports the development of implantable measurement systems in the early design phases. Subsequently, the development of a multi-sensor measurement and monitoring system carried out with this concept is described. To derive the design concept, the requirements placed on implantable measurement systems are analyzed and subdivided into class-specific and application-specific requirements. From the class-specific requirements of this compilation, a structural model is created that is generally valid for the measurement systems investigated here and is subdivided into function blocks and function groups. For the design of the interconnected components of this structural model, an order is specified that takes existing dependencies between the functional units into account. Subsequently, in this design order and taking the application-specific requirements into account, the individual function blocks are specified in detail and a requirements list is created for each function block. Using these requirements lists, the function groups are determined. One of these function groups is the wireless energy supply; for its design, a method is presented with which inductive transmission links can be calculated. By using the described concept, the developer of an implantable measurement system is supported in particular in system specification and system partitioning. The compiled requirements catalogue facilitates the specification of the measurement system. For each function block of the generally valid structural model, the developer obtains an application-dependent detailed specification that enables the determination of the function groups required to build the function blocks. The result is a detailed structure of the measurement system to be designed, in which all requirements placed on the system are taken into account. This provides a sound starting point for the development of the function groups. The method for designing inductive energy transmission links supports the development of wirelessly powered measurement systems. The design of the remaining function groups is carried out with already established methods and tools. The verification of the entire measurement system is performed with a prototype built from the function groups. Using the design concept, an implantable measurement system was developed for a new osteosynthesis plate used for treating bone fractures. The plate has special mechanical properties that promise improved fracture healing. Up to now, X-ray images have been used to monitor bone healing. The measurement system integrated into the new osteosynthesis plate, by contrast, enables the acquisition of several measurement values directly at the bone. This results in considerably improved diagnostics without radiation exposure for the patient. In addition to research on the bone-implant compound, the improved diagnostic possibilities allow more targeted rehabilitation measures to be initiated. In conjunction with the improved fracture healing, optimum treatment results are to be achieved and treatment times shortened.
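The thesis's own calculation method for inductive transmission links is not reproduced here; as a hedged point of reference only, a standard textbook expression for the maximum achievable efficiency of an inductively coupled link with coupling factor \(k\) and coil quality factors \(Q_1\), \(Q_2\) is \[ \eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2}, \] which illustrates why both tight coupling and high-quality coils are decisive for wirelessly powered implants.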
The use of hands-free equipment for speech communication in vehicles requires the reduction of the ambient noise captured together with the speech signal. The acoustic disturbances generally impair the intelligibility of the speech signal to be transmitted. Numerous methods and approaches for noise reduction have been proposed and described in the literature. In principle, these approaches can be divided into three categories: single-channel noise reduction systems, such as the spectral subtraction method; multi-channel noise compensation methods, which require at least one noise reference signal; and adaptive microphone arrays, which employ a direction-selective method (beamforming) for capturing the speech signal. This work focuses exclusively on the problem of single-channel noise reduction systems, as they are frequently found in motor vehicles or telephones for cost and design reasons. Multi-channel methods are treated only marginally, for the sake of completeness. Single-channel methods are characterized by the compromise between the attenuation of the disturbing noise on the one hand and the unavoidable distortions of the speech signal and of the remaining residual noise on the other. These distortions are perceptible as sporadically occurring tone-like residual disturbances (musical tones) or as colorations of the speech signal. Such errors in the output signal are perceived as extremely annoying because of their tonal structure and degrade the subjective listening impression. Recently, therefore, methods have been developed with the goal of suppressing as many of the occurring distortions as possible. For example, nonlinear methods known from image processing or special detection algorithms have been designed to solve the problem in a closed manner. Particularly new are methods that exploit psychoacoustic properties of the human ear in order to mask at least part of the occurring distortions. Here, methods are employed which, by formulating a psychoacoustic weighting rule, attempt to find an optimum compromise between the amount of noise attenuation, the residual disturbances and the resulting speech intelligibility. In the present work, a classical single-channel noise reduction method served as the starting point for the development of a new psychoacoustic-parametric method. Models of speech production and of the perception of human speech were taken as a basis in order to find suitable methods for psychoacoustic noise reduction and signal enhancement. The result is three new methods which, depending on the input signal, adapt to the characteristics of the ear and thereby keep distortions of the speech signal and of the residual noise below the psychoacoustic threshold of perceptibility, the so-called masking threshold. This leads to a noticeable improvement of the subjective listening impression and has a positive influence on speech intelligibility. Essential parts of this work exploit aspects of the psychological perception of acoustic signals and known psychoacoustic properties of the human ear for auditory signal enhancement, noise reduction and the identification of acoustic systems. Accordingly, the first part gives a brief introduction to the theory of signal processing and psychoacoustics.
This is followed by the presentation of a method for auditory signal enhancement and noise reduction that exploits psychoacoustic masking effects. This section is particularly detailed, as it forms the main part of this work. The third part describes experimental investigations and the evaluation of the different methods. A summary and a scientific outlook conclude the work.
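As a hedged sketch of the classical single-channel starting point named above, plain magnitude spectral subtraction with a spectral floor is shown below; the psychoacoustically motivated weighting rules developed in the thesis are not reproduced, and frame length, overlap and floor value are illustrative choices:

    import numpy as np

    def spectral_subtraction(x, noise_mag, frame_len=256, hop=128, floor=0.05):
        window = np.hanning(frame_len)
        out = np.zeros(len(x))
        for start in range(0, len(x) - frame_len, hop):
            frame = x[start:start + frame_len] * window
            spec = np.fft.rfft(frame)
            mag, phase = np.abs(spec), np.angle(spec)
            # Subtract the estimated noise magnitude; keep a spectral floor to limit musical tones.
            clean_mag = np.maximum(mag - noise_mag, floor * mag)
            out[start:start + frame_len] += np.fft.irfft(clean_mag * np.exp(1j * phase)) * window
        return out

    # Usage with a placeholder noisy signal; the noise magnitude is estimated from a
    # segment assumed to contain noise only (e.g. a speech pause).
    fs = 8000
    x = np.random.randn(fs)
    noise_mag = np.abs(np.fft.rfft(np.hanning(256) * x[:256]))
    y = spectral_subtraction(x, noise_mag)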
The typical task of a short-range radar network is to detect, locate and track vehicles within a defined surveillance area, for example the airfield of an airport. Because of the strongly differing radar cross sections of the radar targets, the requirements on the available dynamic range of the individual radar receivers are very high. At low radar signal power, the use of a pulse compression method is therefore necessary. The short-range radar network NRN, in the course of whose development the present work was carried out, additionally uses a novel localization principle, which is why the radar stations can be equipped with stationary, i.e. non-rotating, antennas with a broad antenna pattern. Radar signals are composed of the echo signals from objects reflecting the transmitted radar pulse and of noise. The reflecting objects are not only the radar targets of interest, i.e. the vehicles to be detected. Because of the proximity to the ground at which a short-range radar network is operated, and because of the broad antenna patterns used at least in the NRN, the radar beam covers a multitude of further radar reflectors whose echo signal, called the clutter signal, is superimposed on the actual useful signal. In addition, the use of a pulse compression method generally causes an artificial disturbing signal component, the pulse compression sidelobes, also referred to as self-clutter. By using an unbiased pulse compression method in the NRN, theoretically no self-clutter component is generated. However, effects exist that destroy this freedom from self-clutter. These are investigated in the first part of this work, and it is shown how the freedom from self-clutter can be restored. In the second part of the work, the clutter signal from reflecting objects is analyzed on the basis of signal time series measured with the NRN. A model for describing the clutter signal is developed. Using the methods of detection theory, an optimum filtering and detection scheme for a completely unknown useful signal in a disturbance signal describable by this model is derived. To apply this scheme, knowledge of the model parameters is required. In principle, various methods exist for estimating the model parameters, which change over time; the filtering and detection scheme can then be continuously adapted to the current estimates of the parameters of the clutter signal model. However, the estimation yields falsified parameter values if useful signal components are present. In the present work, several methods for adaptation control are proposed which minimize the influence of these falsified parameter estimates on the detection of the useful signal. The result is an algorithm that adaptively determines clutter-describing parameters from the echo signal, which in turn are used by a filtering and detection algorithm to optimally detect a useful signal possibly present in the echo signal. Finally, the performance of the developed adaptive filtering and detection scheme with adaptation control when used in a short-range radar network is demonstrated using radar echo signals recorded with the NRN during measurement campaigns as well as using simulations.
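The thesis's clutter model and detector are not reproduced here. Purely as a heavily hedged illustration of the adapt-then-detect idea, the following sketch assumes clutter describable as an autoregressive (AR) process, estimates the AR parameters from the echo signal, whitens the signal with them, and applies a simple energy test to the whitened output; all model orders, coefficients and thresholds are assumptions:

    import numpy as np

    def estimate_ar_coeffs(x, order=2):
        # Least-squares estimate of AR coefficients from the (assumed clutter-only) signal.
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        y = x[order:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a

    def whiten(x, a):
        order = len(a)
        return np.array([x[n] - a @ x[n - order:n][::-1] for n in range(order, len(x))])

    rng = np.random.default_rng(3)
    clutter = np.zeros(2000)
    for n in range(2, 2000):                      # synthetic AR(2) clutter for illustration
        clutter[n] = 1.5 * clutter[n - 1] - 0.7 * clutter[n - 2] + rng.standard_normal()

    a = estimate_ar_coeffs(clutter)               # adaptation: estimated clutter parameters
    residual = whiten(clutter, a)                 # whitened echo signal
    detect = np.mean(residual**2) > 2.0           # simple energy test against a threshold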
At present, the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique developed for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI, and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA. Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas for additionally improving the system performance compared to single antennas. These techniques address the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from six to 10.7 for the spectrum efficiency and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for increasing the capacity of future mobile communication systems.
Both the increased complexity of the signal processing algorithms and the wider range of services, as well as the parallel processing required to achieve the necessary high computational performance, will lead to a strongly increasing complexity of digital signal processing in future mobile radio systems. This complexity can only be mastered with a hierarchical modelling and design process. While the lower hierarchy levels of programming and hardware design are already well mastered today, there is still uncertainty about the design methods at the higher system level. The present work contributes to systematizing the design at higher hierarchy levels. For this purpose, the system-level design of an experimental system for the JD-CDMA mobile radio concept is considered. It is shown that the control loop model (Steuerkreismodell) is an appropriate system-level model for the digital signal processing in a mobile radio system. On the one hand, the control loop model can be mapped directly onto the multiprocessor systems to be employed in future mobile radio systems; on the other hand, it also corresponds to the communications engineering view of the task, in which the mobile radio system is described by the algorithms to be executed. The control loop model is thus a suitable link for getting from the task description to an implementation. Furthermore, it is shown that the control loop model has great modelling power and that, in contrast to many known design methods, its use is not limited to systems describable by dataflow models. The classical design steps of allocation, scheduling and binding, known from system synthesis based on dataflow models, can be understood in the context of control loop modelling as methods for constructing the controller task. Specifically for the experimental system, two different scheduling strategies are modelled and investigated. Fully dynamic scheduling is performed at run time and therefore does not depend on the schedules to be executed being known a priori. With self-timed scheduling, the schedules, which are known a priori here, are planned at system construction time, and at run time this plan is merely executed. Finally, the effects of packet-switched, burst-like message transmission on the digital signal processing in future mobile radio systems are investigated. It is shown that data buffering makes it very well possible to average out the computational load in a mobile radio system.