In today’s world, mobile communication has become one of the most widely used technologies, as corroborated by the growing number of mobile subscriptions and the extensive usage of mobile multimedia services. Accommodating such a large number of users and such a high traffic volume is a key challenge for network operators. Furthermore, several day-to-day scenarios, such as public transportation and public events, are now characterized by high mobile data usage: a large number of users avail themselves of cellular services in such situations, imposing a high load on the respective base stations. This results in an increased number of dropped connections, blocked access attempts and blocked handovers (HO). The users in such a system are thus subjected to poor Quality of Experience (QoE). Advance knowledge of the changing data traffic dynamics associated with such practical situations assists in designing radio resource management schemes that aim to ease forthcoming congestion. The key hypothesis of this thesis is that the consideration and utilization of additional context information about the user, the network and the user's environment is valuable in designing such smart Radio Resource Management (RRM) schemes. Methods are developed to predict user cell transitions, exploiting the fact that user mobility is not purely random but rather direction-oriented. This is particularly useful in the case of a traffic-dense moving network, i.e., a group of users moving jointly in the same vehicle (e.g., bus, train), to predict the propagation of a high-load situation among cells well in advance. This enables proactive triggering of load balancing (LB) in cells anticipating the arrival of a high-load situation, so that the incoming user group or moving network can be accommodated. The evaluated KPIs, such as blocked access attempts, dropped connections and blocked HO, are reduced.
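As an illustration of the cell-transition prediction idea, the following minimal Python sketch (an assumption of this summary, not the thesis implementation) learns a first-order Markov model over observed handover sequences and returns the most likely next cell, which could then be the target of proactive LB:

```python
# Illustrative sketch: first-order Markov prediction of a user's next cell,
# exploiting the direction-oriented (non-random) nature of mobility.
from collections import Counter, defaultdict

class CellTransitionPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # cell -> Counter(next_cell)

    def observe(self, path):
        """Record one user trajectory given as a sequence of cell IDs."""
        for cur, nxt in zip(path, path[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, cell):
        """Most likely next cell, or None if the cell was never observed."""
        counts = self.transitions[cell]
        return max(counts, key=counts.get) if counts else None

# Trajectories of a user group moving jointly (e.g., on a bus line):
predictor = CellTransitionPredictor()
predictor.observe(["A", "B", "C", "D"])
predictor.observe(["A", "B", "C", "E"])
print(predictor.predict("B"))  # -> "C": trigger proactive LB in cell C
```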
Furthermore, the everyday scenario of dynamic crowd formation is considered as another potential source of high load. In real-world scenarios such as open-air festivals, shopping malls, stadiums or public events, many mobile users gather to form a crowd. This imposes a high load on the serving base station at the site of crowd formation, leading to congestion; as a consequence, mobile users are subjected to poor QoE due to high dropping and blocking rates. A framework to predict crowd formation in a cell is developed as a coalition of user cell transition prediction, cluster detection and trajectory prediction. This framework is used to prompt a context-aware load balancing mechanism and to activate a small cell at the probable site of crowd formation. Simulations show that proactive LB reduces the dropping of users (23%), the blocking of users (10%) and blocked HO (15%). In addition, activating a Small Cell (SC) at the site of frequent crowd formation leads to further reductions in the dropping of users (60%), the blocking of users (56%) and blocked HO (59%).
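The cluster-detection step of the framework can be pictured with a toy density check; the grid size and threshold in this sketch are illustrative assumptions, not the thesis algorithm:

```python
# Hedged sketch: flag probable crowd-formation sites by counting users per
# grid cell and reporting cells whose density exceeds a threshold.
from collections import Counter

def detect_crowd(positions, cell_size=50.0, min_users=30):
    """positions: list of (x, y) in meters. Returns dense grid cells."""
    bins = Counter((int(x // cell_size), int(y // cell_size))
                   for x, y in positions)
    return [cell for cell, n in bins.items() if n >= min_users]

# A dense spot around (100, 100) -> candidate site for activating a small cell
positions = [(100 + i % 7, 100 + i % 5) for i in range(40)] + [(500, 500)]
print(detect_crowd(positions))  # -> [(2, 2)]
```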
Similar to the framework for crowd formation prediction, a concept is developed for predicting vehicular traffic jams. Many vehicular users use broadband cellular services on a daily basis while traveling. The density of such vehicular users in a cell changes dynamically, and at certain sites (e.g., traffic lights) jams arise frequently, leading to a high-load situation at the respective serving base station. A traffic prediction algorithm is developed from the cellular network perspective as a coalition strategy consisting of schemes for user cell transition prediction, vehicular cluster/moving network detection, user velocity monitoring, etc. The traffic status indication provided by the algorithm is then used to trigger LB and to activate or deactivate a small cell as appropriate. The evaluated KPIs, such as blocked access attempts, dropped connections and blocked HO, are reduced by approximately 10%, 18% and 18%, respectively, due to LB. In addition, switching on the SC reduces blocked access attempts, dropped connections and blocked HO by circa 42%, 82% and 81%, respectively.
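As a hedged illustration of the velocity-monitoring ingredient, a jam indicator could be as simple as counting slow vehicular users in a cell; both thresholds below are assumed values:

```python
# Minimal assumed logic (not the thesis algorithm): flag a probable traffic
# jam when many vehicular users in a cell are moving slowly at once.
def jam_indicator(user_speeds_kmh, speed_thresh=10.0, count_thresh=20):
    slow = sum(1 for v in user_speeds_kmh if v < speed_thresh)
    return slow >= count_thresh  # True -> trigger LB / activate small cell

print(jam_indicator([5.0] * 25 + [60.0] * 10))  # -> True
```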
Amidst the increasing number of connected devices and the growing traffic volume, another key issue for today's networks is to provide uniform service quality despite high mobility. Urban scenarios are, moreover, often characterized by coverage holes, which hinder service continuity. A context-aware resource allocation scheme is therefore proposed that uses enhanced mobility prediction to facilitate service continuity. The mobility prediction takes into account additional information about the user's origin and possible destination to predict the next road segment. If a coverage hole is anticipated on the upcoming road, additional resources are allocated to the respective user and data is buffered accordingly. The buffered data is used while the user is in the coverage hole to improve service continuity. Simulations show that throughput (in the coverage hole) improves by circa 80% and service interruption is reduced by around 90% for a non-real-time streaming service. Additionally, context-aware procedures are investigated with a focus on user mobility to find commonalities among the different procedures, and a general framework is proposed to support mobility context awareness. The new information and interfaces required from various entities (e.g., vehicular infrastructure) are discussed as well.
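The buffer dimensioning behind this idea can be conveyed with a back-of-the-envelope calculation; the hole length, vehicle speed and stream rate below are assumed example values, not thesis parameters:

```python
# Rough sketch: how much data must be buffered before a predicted coverage
# hole so that a non-real-time stream plays on uninterrupted inside it.
def required_buffer_bits(hole_length_m, speed_mps, stream_rate_bps):
    time_in_hole_s = hole_length_m / speed_mps
    return time_in_hole_s * stream_rate_bps

# 200 m hole, vehicle at 15 m/s, 2 Mbit/s stream -> ~26.7 Mbit to pre-buffer
print(required_buffer_bits(200, 15, 2e6) / 1e6, "Mbit")
```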
Device-to-Device (D2D) communication commonly refers to the technology that enables direct communication between devices, thereby relieving the base station from traffic routing. D2D communication is thus a feasible solution in crowded situations: users in proximity who request to communicate with one another can be granted D2D links, easing the traffic load on the serving base station. D2D links can potentially reuse the radio resources of cellular users (known as D2D underlay), leading to better spectral utilization. However, the resulting mutual interference can hinder system performance: if D2D links reuse cellular uplink resources, D2D transmissions cause interference to the cellular uplink at the base station, while cellular transmissions cause interference at the D2D receivers. To cope with such issues, location-aware resource allocation (RA) schemes are proposed for D2D communication, with the key aim of reusing resources with minimal interference. An RA scheme based on virtual sectoring of a cell leads to approximately 15% more established links and 25% more capacity compared to random resource allocation.
D2D transmissions can still cause significant interference to the cellular links whose physical resource blocks they reuse, thereby hindering cellular performance, and simply throttling D2D transmissions to mitigate this problem would mean sub-optimal exploitation of D2D communication. As a solution, post-resource-allocation power control at the cellular users is proposed. Three schemes, namely interference-aware power control, blind power control and threshold-based power control, are discussed. Simulation results show a reduction in the dropping of cellular users due to interference from D2D transmissions and an improvement in uplink throughput at the base station, while not hindering D2D performance.
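A minimal sketch of the threshold-based variant, under an assumed log-distance path-loss model with assumed constants, could look as follows; the other two schemes would differ mainly in how the power reduction is chosen:

```python
# Hedged sketch of threshold-based power control: a cellular user sharing
# resource blocks with a D2D pair steps its power down until the estimated
# interference at the D2D receiver falls below a limit.
import math

def rx_power_dbm(tx_dbm, distance_m, exponent=3.5):
    """Received power under an assumed log-distance path-loss model."""
    return tx_dbm - 10 * exponent * math.log10(max(distance_m, 1.0))

def threshold_power_control(tx_dbm, dist_to_d2d_rx_m,
                            interference_limit_dbm=-90.0, p_min_dbm=0.0):
    """Reduce power until interference is below the limit or p_min is hit."""
    while (rx_power_dbm(tx_dbm, dist_to_d2d_rx_m) > interference_limit_dbm
           and tx_dbm > p_min_dbm):
        tx_dbm -= 1.0  # 1 dB step, assumed
    return tx_dbm

# A 23 dBm cellular user 50 m from a D2D receiver is clamped to p_min here:
print(threshold_power_control(23.0, 50.0))  # -> 0.0
```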
This thesis presents a formal model for describing hardware-dependent software: the program netlist.
The program netlist (PN) consists of instruction cells connected in a directed acyclic graph that contains all execution paths of the program under consideration. Each instruction cell represents an instruction or an instruction sequence. The PN provides an explicit representation of the program flow and an implicit model of the data path, and can be used as a model for software verification. The software is considered at machine-code level.
Model generation consists of a few, well-automatable steps. The starting point is a (possibly incomplete) control flow graph (CFG) that can be generated from the software. Model generation then proceeds in two steps.
The first step produces the explicit program flow by unrolling the CFG. This yields a so-called execution graph (EXG) that contains all possible execution paths of the program under consideration. To keep this model as compact as possible, various techniques are applied, such as merging common paths and detecting "dead" branches that can never be executed at the corresponding program location.
In the second step, the execution graph is translated into the program netlist (PN): every node of the EXG is replaced by a corresponding instruction cell. The edges of the graph correspond to the program state, which is composed of the variables in memory as well as the architectural state of the underlying processor.
The program state in the program netlist is augmented by a so-called active bit, which makes it possible to mark the active path in the netlist. This is necessary because the software can only ever execute one path at a time, while the PN contains all possible paths. On the program netlist, various properties can then be proven with hardware property checkers based on BMC or IPC.
In addition, the program netlist is extended with the capability to model interrupts.
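To make the unrolling step concrete, here is a minimal, illustrative Python sketch (not the thesis tool) that unrolls a CFG into an execution-graph edge set up to a bound; labeling nodes with (pc, depth) pairs makes common paths merge naturally:

```python
# Illustrative sketch: unroll a control flow graph into an execution graph
# (EXG) containing all paths up to a depth bound.
def unroll_cfg(cfg, entry, max_depth):
    """cfg: dict pc -> list of successor pcs. Returns sorted EXG edges."""
    exg_edges = set()
    frontier = {(entry, 0)}
    while frontier:
        pc, depth = frontier.pop()
        if depth >= max_depth:
            continue
        for succ in cfg.get(pc, []):
            edge = ((pc, depth), (succ, depth + 1))
            if edge not in exg_edges:
                exg_edges.add(edge)
                frontier.add((succ, depth + 1))
    return sorted(exg_edges)

# Tiny CFG with a loop 1 -> 2 -> 1 and an exit 2 -> 3:
cfg = {1: [2], 2: [1, 3]}
for e in unroll_cfg(cfg, entry=1, max_depth=4):
    print(e)
```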
Ethernet has become an established communication technology in industrial automation, thanks to tremendous technological advances and enhancements such as increased link speed, full-duplex transmission and the use of switches. However, these enhancements were still not enough for certain highly deterministic industrial applications such as motion control, which requires cycle times below one millisecond and jitter (delay deviation) below one microsecond. To meet these strict timing requirements, machine and plant manufacturers had to extend standard Ethernet with real-time capability. As a result, vendor-specific, non-IEEE-standard-compliant "Industrial Ethernet" (IE) solutions have emerged.
The IEEE Time-Sensitive Networking (TSN) Task Group specifies new IEEE-conformant functionalities and mechanisms to provide the determinism that Ethernet lacks. Standard-compliant systems are very attractive to industry because they guarantee investment security and sustainable solutions. TSN is therefore considered an opportunity to increase the performance of established Industrial Ethernet systems and to move towards Industry 4.0, which requires standard mechanisms.
The challenge remains, however, for the Industrial Ethernet organizations to combine their protocols with the TSN standards without running the risk of creating incompatible technologies. TSN comprises nine standards and enhancements that handle multiple communication aspects. In this thesis, the evaluation of TSN for industrial real-time communication is restricted to four deterministic standards: IEEE 802.1AS-Rev, IEEE 802.1Qbu, IEEE 802.3br and IEEE 802.1Qbv. The specification of these TSN sub-standards was finished at an early research stage of the thesis, and hardware prototypes were available.
Integrating TSN into the Industrial Ethernet protocols is considered a substantial strategic challenge for the industry. The benefits, limits and risks are too complex to estimate without a thorough investigation, and the large number of standard enhancements makes it hard to select the required and appropriate functionalities.
In order to cover all real-time classes in automation [9], four established Industrial Ethernet protocols were selected for evaluation and combination with TSN, along with other performance-relevant communication features.
The objectives of this thesis are to:
(1) provide theoretical, simulation-based and experimental evaluation methodologies for the timing performance analysis of the deterministic TSN standards mentioned above. Multiple test plans are specified to evaluate the performance and compatibility of early-version TSN prototypes from different providers.
(2) investigate multiple approaches and deduce migration strategies to integrate these features into the established Industrial Ethernet protocols Sercos III, Profinet IRT, Profinet RT and EtherNet/IP. A scenario in which time-critical traffic coexists with other traffic in a TSN network shows that the timing performance required by highly deterministic applications, e.g., motion control, can only be guaranteed by the TSN scheduling algorithm IEEE 802.1Qbv.
Based on a requirements survey of highly deterministic industrial applications, multiple network scenarios and experiments are presented. The results are summarized in two case studies. The first shows that TSN alone is not enough to meet these requirements. The second investigates the benefits of additional mechanisms (Gigabit link speed, minimum cycle time modeling, frame forwarding mechanisms, frame structure, topology migration, etc.) in combination with the TSN features. An implementation prototype of the proposed system and a simulation case study are used to evaluate the approach; the prototype is used to evaluate and validate the simulation model. Due to the scalability constraints of the prototype (no cut-through functionality, limited number of TSN prototypes, etc.), a realistic simulation model was built using the network simulation tool OMNEST/OMNeT++.
The obtained evaluation results show that a minimum cycle time ≤1 ms and a maximum jitter ≤1 μs can be achieved with the presented approaches.
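The effect of frame preemption (IEEE 802.1Qbu/802.3br) on IEEE 802.1Qbv guard bands can be conveyed with a rough timing calculation; the 143-byte worst-case non-preemptable remainder used below is an illustrative assumption:

```python
# Rough, illustrative sketch: how guard bands before a time-triggered
# 802.1Qbv window shrink when frame preemption limits the blocking caused
# by an in-flight best-effort frame.
def tx_time_us(payload_bytes, link_bps, overhead_bytes=20):  # preamble + IFG
    return (payload_bytes + overhead_bytes) * 8 / link_bps * 1e6

LINK = 1e9  # 1 Gbit/s
print("guard band w/o preemption:", tx_time_us(1522, LINK), "us")  # ~12.3
print("guard band w/  preemption:", tx_time_us(143, LINK), "us")   # ~1.3
```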
A Contribution to State Estimation in Low-Voltage Grids with Low-Redundancy Measurement Acquisition
(2020)
Due to the growing share of generation units and high-power consumers from the transport and heating sectors, low-voltage grids are moving ever closer to their operational limits. Since low-voltage grids have so far not been equipped with measurement infrastructure, grid operators cannot detect limit violations. To change this, German connection users will in future be equipped nationwide with modern metering devices or intelligent metering systems (also known as smart meters). These are able to send measurement data to the grid operators via a communication unit, the smart meter gateway. However, if measurement data are declared to be personal grid status data, collecting them is largely prohibited for data protection reasons.
The goal of this thesis is to develop a state estimation that delivers results usable for the operation of low-voltage grids even with low-redundancy measurement acquisition. Besides suitable state estimation algorithms, the focus is on the generation of substitute values.
The investigations and findings of this work help distribution grid operators make the key decisions regarding state estimation in low-voltage grids. Only when low-voltage grids become observable by means of state estimation can control concepts building on it be developed to support the energy transition.
Neural networks have been used extensively for tasks based on image sensors and have, over the past decade, consistently outperformed other machine learning methods on computer vision tasks. It is understood that transfer learning from neural networks trained on large datasets can reduce the total data requirement when training new models. However, these methods tend not to perform well when the recording sensor or the recording environment differs substantially from those of the existing large datasets. The machine learning literature provides various ways to include prior information in a learning model, for example by designing biases into the data representation vectors or by enforcing priors or physical constraints on the models. Including such information in neural networks for image-frame and image-sequence classification is hard because of the very high-dimensional neural network mapping function and the limited knowledge about the relations between the network parameters. In this thesis, we introduce methods for evaluating the statistically learned data representations and for combining these information descriptors, and, in a series of experiments, we demonstrate methods for adding existing model or task information to neural networks. This is done by 1) adding architectural constraints based on the physical shape information of the input data, 2) including weight priors on neural networks by training them to mimic statistical and physical properties of the data (hand shapes), and 3) including knowledge about the classes involved in the classification task to modify the neural network outputs. These methods are demonstrated, and their positive influence on hand-shape and hand-gesture classification tasks is reported. The thesis also proposes methods for combining statistical and physical models with parametrized learning models, showing improved performance at constant data size. Eventually, these proposals are tied together to develop an in-car hand-shape and hand-gesture classifier based on a Time-of-Flight sensor.
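Method (2), the weight prior, can be pictured as a penalty pulling network parameters toward values pre-trained to mimic statistical or physical properties of the data; the following NumPy sketch on a linear model is an illustrative assumption, not the thesis code:

```python
# Hedged sketch: task loss plus an L2 pull toward prior weights.
import numpy as np

def loss_with_prior(w, X, y, w_prior, lam=0.1):
    """Least-squares task loss plus a quadratic penalty toward the prior."""
    task = np.mean((X @ w - y) ** 2)
    prior = lam * np.sum((w - w_prior) ** 2)
    return task + prior

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
w, w_prior = np.zeros(4), np.ones(4)  # w_prior: assumed pre-trained values
print(loss_with_prior(w, X, y, w_prior))
```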
Based on the findings and experience regarding climate change, enormous changes in energy and climate policy have taken place worldwide in recent years. This is leading to an ever faster transformation of the generation, consumption and supply structures of our energy systems. The focus of energy generation on fluctuating renewable energy sources requires a far more extensive use of flexibilities than has been the case so far.
This thesis discusses the use of heat pumps and storage systems as flexibilities in the context of the cellular approach to energy supply. The flexibility potentials of heat pump/storage systems are investigated and validated on three levels of consideration. The first considers the heat pump, the thermal storage and the thermal loads in a general potential analysis. Building on this, heat pump/storage systems are considered within a household cell as an energetic unit, followed by investigations in the context of a low-voltage cell. To capture the flexibility behavior, detailed models of the converters and storage units as well as their controls are developed, analyzed and evaluated by means of time series simulations.
The central question of whether heat pumps with storage systems can contribute as a flexibility to the success of the energy transition can be answered with a clear yes. Nevertheless, the boundary conditions to be observed when using heat pump/storage systems as flexibility are manifold and, depending on the purpose of the flexibility, require close examination. The decisive factors are the outdoor temperature, the temporal context, the grid and the economic viability.
Development of a Method for Three-Phase State Estimation in Meshed Low-Voltage Grids
(2018)
In the course of the energy transition, operators of low-voltage grids are confronted with rising grid loads due to the continuing expansion of distributed generation and the advent of electromobility. In future, secure grid operation without line overloads can in principle only be guaranteed if the grid state is determined by suitable systems and intelligent grid management with controlling interventions takes place on that basis.
This thesis deals with the development and testing of a method for three-phase state estimation in meshed low-voltage grids. The input data are voltage and current measurements, acquired essentially by smart meters at house connection points. The method aims to detect limit violations with a high probability.
Besides the system design, the main focus is, on the one hand, on the preprocessing of the system input data in the course of generating substitute measurement values and detecting topology errors and, on the other hand, on the development of an estimation algorithm with a linear measurement model and the capability to localize grossly erroneous measurement data.
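The core of an estimator with a linear measurement model is a weighted least-squares solve; this small sketch (with a toy two-state system, not a grid model) illustrates how trusted smart meter data and substitute values can be weighted differently:

```python
# Illustrative sketch of linear weighted least squares:
# x = argmin (z - H x)' W (z - H x), solved via the normal equations.
import numpy as np

def wls_estimate(H, z, weights):
    W = np.diag(weights)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # measurement model
z = np.array([1.02, 0.98, 2.05])                    # measurements
w = np.array([100.0, 100.0, 1.0])  # real measurements weighted higher than
                                   # the substitute (pseudo) measurement
print(wls_estimate(H, z, w))
```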
The increasing expansion of distributed generation and the growing number of electric vehicles confront low-voltage grids with new challenges. Besides compliance with the permissible voltage band, generation units and new loads lead to an increasing thermal loading of the lines. Simple conventional measures, such as topology changes towards meshed low-voltage grid operation, are a first helpful and inexpensive approach, but offer no fundamental protection against thermal overloading of the equipment. This thesis deals with the design of a voltage and active power controller for meshed low-voltage grids. The controller measures voltages and currents at individual measurement points of the low-voltage grid. With the help of a special characteristic curve method, a power shift can be induced in individual grid meshes and specified setpoints or limit values can be maintained. This thesis presents the analytical foundations of the controller, its hardware, and the characteristic curve method together with the realizable control concepts. The results of simulation studies, laboratory tests and field tests clearly demonstrate the effectiveness of the controller and are discussed.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks of an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in task execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU scheduling of the other cores, but also between the WCET and the memory bandwidth assigned to the cores over time. Thus, solutions are needed that allow tailoring memory bandwidth assignments to workloads over time while computing safe WCETs. Obtaining WCET estimates from static WCET analysis tools for multi-core processors is pragmatically infeasible due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory bandwidth assignment to active cores over time, where the active cores are those with a non-zero memory bandwidth assignment; its computational complexity is similar to that of merge sort. Second, we demonstrate, using a real avionics-certified safety-critical application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification through composability and requires no source-code modification.
Next, we provide a general framework for WCET analysis under dynamic memory bandwidth partitioning when the changes of the memory bandwidth assignment to cores are time-triggered and known. It provides a stall-maximization algorithm with a complexity similar to that of a concave optimization problem and efficiently implements the WCET analysis. Last, using an Integrated Modular Avionics scenario, we demonstrate that dynamic memory assignments and WCET analysis with our method significantly improve schedulability compared to the state of the art.
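To convey the flavor of WCET inflation under a periodic memory server, a deliberately coarse and pessimistic toy bound (not the thesis analysis) might look like this:

```python
# Hedged toy bound: with a periodic server granting a core a budget of Q
# memory accesses every period P, a segment issuing A accesses completes at
# the latest after enough periods have elapsed to serve all of them.
import math

def wcet_bound_us(exec_us, accesses, budget_per_period, period_us):
    periods_needed = math.ceil(accesses / budget_per_period)
    return exec_us + periods_needed * period_us

# 500 us of pure computation, 10_000 accesses, 2_000 accesses per 100 us slot
print(wcet_bound_us(500, 10_000, 2_000, 100), "us")  # -> 1000 us
```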
The supply tasks of low-voltage grids are expected to change considerably over the coming decades, compared to 2018, due to the further spread of photovoltaic systems, heat pump heating and electric vehicles. The planning principles commonly used in practice for building new low-voltage grids are outdated: in many cases their foundations date from times when the new loads and infeeds were not expected and accordingly not taken into account. The need for new planning principles coincides with the availability of voltage-regulated distribution transformers (rONT), which can be used to improve the voltage conditions in the grid. The new planning principles developed here require the use of rONT for rural and suburban supply tasks (but not for urban supply tasks) in order to handle the high loads expected for 2040 at low cost. A suitable rONT standard control characteristic is given. In all cases, cables with a cross-section of 240 mm², laid in parallel in sections, are recommended.
The introduction of the Internet has caused a constant change in both daily and professional life, with a clear shift into the virtual space (the Internet). In addition, the introduction of social networks such as Facebook has significantly strengthened users' desire to be "online" at all times. Added to this are the continuously growing data volumes caused, for example, by video streaming (YouTube or Internet Protocol Television (IPTV)) or the exchange of images. New services introduced in the context of the Internet of Things and Industry 4.0 generate additional data volumes. Current technologies such as Long Term Evolution Advanced (LTE-A) in the radio domain and Very High Speed Digital Subscriber Line (VDSL) or fiber optics in wired networks attempt to meet these demands.
Given the increasing demands on user mobility, the use of radio technologies is indispensable. The steadily growing data volume and the rising data rates entail a growing demand for spectrum, i.e., free or unused frequency ranges. To identify suitable ranges, however, a multitude of parameters and influencing factors must be considered. One of the decisive parameters is the attenuation in the considered frequency range, since it increases with frequency, so that the resulting coverage at constant transmit power decreases. Current radio systems use frequencies below 6 GHz, since these offer suitable propagation characteristics. Furthermore, existing usage rights, spectrum owners, usage conditions and so on must be clarified in advance. In Germany, this coordination is carried out by the Bundesnetzagentur.
Given the variety of existing services and applications, it is evident that the frequency range below 6 GHz is heavily utilized. Besides continuously loaded services such as Long Term Evolution (LTE) or Digital Video Broadcast (DVB), there are spectral ranges with only low temporal utilization. Striking examples are frequency ranges reserved exclusively for military use. On closer inspection, this is not limited to the temporal domain; rather, a combination of temporal and spatial restriction arises, since the usage can usually be confined to a spatial region. A further restriction results from the currently rigid assignment of frequency ranges: allocation is based on lengthy application procedures, making short-term, flexible assignment impossible.
To address this problem, this thesis develops a generic spectrum management system (SMS) for the dynamic allocation of available resources. One requirement for the system is support for already known spectrum sharing schemes, such as Licensed Shared Access (LSA) / Authorized Shared Access (ASA) or Spectrum Load Smoothing (SLS). To this end, the currently known sharing schemes are analyzed and characterized with respect to their applicability. Furthermore, the frequency ranges below 6 GHz are examined with regard to their usability and regulatory requirements. In addition, an extended catalogue of requirements for the spectrum management system (SMS) is developed, which serves as the basis for the system design. It is essential that all (potential) users or owners of a spectral range can use the functionality of such a system, which immediately implies the requirement of scalability. To develop a suitable system architecture, existing approaches to managing and storing data are compared and evaluated with respect to their applicability. Furthermore, the geographic position is taken into account; to ensure this adequately, hierarchical structures in networks are examined for their usability.
The goal of this work is the development of a spectrum management system (SMS) by adapting existing technologies and methods while taking all defined requirements into account. It has been shown that a centralized broker solution is not suitable, since its latency grows exponentially with the number of requests and therefore does not scale. This can be overcome with a Distributed Hash Table (DHT)-based extension without restricting the functionality of the broker solution. For incorporating geographic information, a hierarchical structure comparable to the Domain Name System (DNS) has proven suitable.
The evaluation parameters are the resulting access time, i.e., the time the system needs to process requests, and the resulting number of users that can be served. The simulation considers an urban area with five buildings. In the center is a six-storey office building equipped with a Wireless Local Area Network Access Point (WLAN-AP) on each floor; around it are four private houses, each equipped with one WLAN-AP. The entire area is served by three mobile network operators with one base station (BS) each. The starting point of the evaluation is operation without an SMS; the results clearly show that the Long Term Evolution base stations (LTE BSs) are overloaded (in particular those of operators A and B). In the second run, the scenario is considered with an SMS. Additionally, micro base stations (micro BSs), comparable in specification to Wireless Local Area Network (WLAN), are deployed. Here, a much more balanced system behavior emerges: all BSs and access points (APs) remain well below the full-load limit.
The investigations in this thesis demonstrate that a heterogeneous, temporarily overloaded radio system can be fully harmonized. Furthermore, the use of an SMS enables the efficient use of temporarily unused frequency ranges (so-called white and gray spaces).
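Why a DHT removes the central bottleneck can be sketched with a toy consistent-hashing ring, in which each spectrum record resolves directly to the responsible node; the node names and resource key below are illustrative assumptions:

```python
# Illustrative sketch: spectrum records spread over DHT nodes by consistent
# hashing, so a request touches one responsible node instead of a broker.
import bisect
import hashlib

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class SpectrumDHT:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)  # hash ring

    def node_for(self, resource_key):
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, h(resource_key)) % len(self.ring)
        return self.ring[i][1]

dht = SpectrumDHT(["node-a", "node-b", "node-c"])
# A request for a band at a location resolves directly to one node:
print(dht.node_for("5.9GHz/49.44N-7.77E"))
```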
The complexity of modern real-time systems is increasing day by day. This inevitable rise in complexity predominantly stems from two contradicting requirements: an ever-increasing demand for functionality and a required low cost for the final product. The development of modern multi-processors and the variety of network protocols and architectures have made such a leap in complexity and functionality possible. However, the efficient use of these multi-processors and network architectures is still a major problem. Moreover, the software design and development process needs improvements in order to support rapid prototyping for ever-changing system designs. Therefore, in this dissertation, we provide solutions for different problems faced in the development and deployment of real-time systems. The contributions presented in this thesis enable efficient utilization of system resources, rapid design and development, and component modularity and portability.
In order to ease the certification process, the time-triggered computation model is often used in distributed systems. However, time-triggered scheduling is NP-hard, which makes schedule generation for complex, large systems convoluted: large scheduler run-times and low scalability are two major problems. To solve these problems, we present a modular real-time scheduler based on a novel search-tree pruning technique, which takes less time (compared to the state of the art) to schedule tasks on large distributed time-triggered systems. In order to provide end-to-end guarantees, we also extend our modular scheduler to quickly generate schedules for time-triggered network traffic in large TTEthernet-based networks. We evaluate our schedulers on synthetic but practical task sets and demonstrate that our pruning technique efficiently reduces scheduler run-times and scales adequately for future time-triggered distributed systems.
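The search-tree idea behind such a scheduler can be illustrated with a toy backtracking scheduler that prunes a branch as soon as a partial offset assignment conflicts; the pruning rule here is deliberately simple and is not the technique of the thesis:

```python
# Toy sketch: assign offsets to time-triggered tasks on one resource so no
# two jobs overlap within the hyperperiod; conflicting branches are pruned.
from math import gcd

def schedule(tasks):  # tasks: list of (wcet, period), unit-time slots
    hyper = 1
    for _, p in tasks:
        hyper = hyper * p // gcd(hyper, p)
    busy = set()

    def place(i):
        if i == len(tasks):
            return []
        wcet, period = tasks[i]
        for offset in range(period - wcet + 1):
            slots = {offset + j * period + k
                     for j in range(hyper // period) for k in range(wcet)}
            if slots & busy:
                continue  # prune: this subtree cannot yield a valid schedule
            busy.update(slots)
            rest = place(i + 1)
            if rest is not None:
                return [offset] + rest
            busy.difference_update(slots)  # backtrack
        return None

    return place(0)

print(schedule([(1, 2), (1, 4), (1, 4)]))  # -> [0, 1, 3]
```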
In safety-critical systems, the certification process also requires strict isolation between independent components. This isolation is enforced by resource partitioning, where components of different criticality execute in different partitions (each temporally and spatially isolated from the others). However, existing partitioning approaches use periodic servers or tasks to service aperiodic activities, which leads to utilization loss and potentially to large latencies. State-of-the-art aperiodic task admission algorithms, in contrast, do not suffer from problems like utilization loss, but they support neither partitioned scheduling nor mixed-criticality execution environments. To solve this problem, we propose an algorithm for the online admission of aperiodic tasks which provides job execution flexibility and jitter control, and leads to lower latencies for aperiodic tasks.
For safety-critical systems, fault tolerance is one of the most important requirements. In time-triggered systems, modes are often used to ensure survivability against faults: when a fault is detected, the current system configuration (or mode) is changed such that the overall system performance is either unaffected or degrades gracefully. In the literature, it has been asserted that a task set might be schedulable in the individual modes yet unschedulable during a mode change. Moreover, conventional mode-change execution strategies can cause significant delays until the next mode is established. To address these issues, this dissertation presents an approach for the schedulability analysis of mode changes and proposes mode-change delay reduction techniques in the distributed system architecture defined by the DREAMS project. We evaluate our approach on an avionics use case and demonstrate that it can drastically reduce mode-change delays.
In order to manage the increasing system complexity, real-time applications also require new design and development technologies. Besides fulfilling the technical requirements, the main features required of such technologies are modularity and re-usability. AUTOSAR is one such technology in the automotive industry; it defines an open standard for the software architecture of a real-time operating system. However, as it is an industrial standard, the available proprietary tools do not support model extensions and/or new developments by third parties and therefore hinder software evolution. To solve this problem, we developed an open-source AUTOSAR toolchain that supports application development and code generation for several modules. To exhibit the capabilities of our toolchain, we developed two case studies, which demonstrate that the toolchain generates valid artifacts, avoids dirty workarounds and supports application development.
In order to cope with evolving system designs and hardware platforms, rapid development of scheduling and analysis algorithms is required. A number of scheduling and analysis frameworks have been proposed in the literature to ease algorithm development; however, these frameworks focus on specific classes of applications and are limited in functionality. In this dissertation, we provide the skeleton of a scheduling and analysis framework for real-time systems and, to support rapid development, highlight the development components that promote code reuse and component modularity.
Field-effect transistor (FET) sensors and in particular their nanoscale variant of silicon nanowire transistors are very promising technology platforms for label-free biosensor applications. These devices directly detect the intrinsic electrical charge of biomolecules at the sensor’s liquid-solid interface. The maturity of micro fabrication techniques enables very large FET sensor arrays for massive multiplex detection. However, the direct detection of charged molecules in liquids faces a significant limitation due to a charge screening effect in physiological solutions, which inhibits the realization of point-of-care applications. As an alternative, impedance spectroscopy with FET devices has the potential to enable measurements in physiological samples. Even though promising studies were published in the field, impedimetric detection with silicon FET devices is not well understood.
The first goal of this thesis was to understand the device performances and to relate the effects seen in biosensing experiments to device and biomolecule types. A model approach should help to understand the capability and limitations of the impedimetric measurement method with FET biosensors. In addition, to obtain experimental results, a high precision readout device was needed. Consequently, the second goal was to build up multi-channel, highly accurate amplifier systems that would also enable future multi-parameter handheld devices.
A PSPICE FET model for potentiometric and impedimetric detection was adapted to the experiments and further expanded to investigate the sensing mechanism, the working principle, and effects of side parameters for the biosensor experiments. For potentiometric experiments, the pH sensitivity of the sensors was also included in this modelling approach. For impedimetric experiments, solutions of different conductivity were used to validate the suggested theories and assumptions. The impedance spectra showed two pronounced frequency domains: a low-pass characteristic at lower frequencies and a resonance effect at higher frequencies. The former can be interpreted as a contribution of the source and double layer capacitances. The latter can be interpreted as a combined effect of the drain capacitance with the operational amplifier in the transimpedance circuit.
Two readout systems, one as a laboratory system and one as a point-of-care demonstrator, were developed and used for several chemical and biosensing experiments. The PSPICE model applied to the sensors and circuits was utilized to optimize the systems and to explain the sensor responses. The systems, as well as the developed modelling approach, were a significant step towards portable instruments with combined transducer principles in future healthcare applications.
For many years, most distributed real-time systems employed data communication systems specially tailored to the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and, eventually, not well understood.
Mostly driven by ever-decreasing costs, the application of such distributed real-time systems has drastically increased in recent years in different domains; consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but also the number of nodes per system, which in turn increases the network bandwidth requirements. Further, system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from federated to integrated modular architectures, also increasing the network bandwidth requirements.
Ethernet (IEEE 802.3) [iee12] is a well-established network standard. Furthermore, it is fast, easy to install, and its interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantees. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet with the temporal guarantees required by distributed real-time systems. Two of these protocols are Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple-priority traffic, and the scheduling of TTEthernet networks.
AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not too pessimistic, amount of memory for each buffer. The current AFDX standard allows for the classification of the network traffic with two priorities; nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds for the buffer backlog of AFDX networks with multiple-priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method.
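A drastically simplified, hedged sketch of a busy-period-style backlog bound for a single buffer (single priority, all flows releasing from t = 0, illustrative numbers) conveys the flavor of the analysis:

```python
# Toy bound: backlog = max over time of (bytes arrived - bytes served),
# with arrivals limited by each flow's BAG (minimum inter-arrival time)
# and service at line rate. Real multi-priority AFDX analysis is far
# more involved.
def backlog_bound_bytes(flows, link_bps, horizon_us, step_us=1):
    # flows: list of (frame_bytes, bag_us)
    worst = 0
    for t in range(0, horizon_us + 1, step_us):
        arrived = sum(fb * (t // bag + 1) for fb, bag in flows)
        served = link_bps / 8 * t * 1e-6
        worst = max(worst, arrived - served)
    return worst

flows = [(1500, 1000), (500, 2000), (1500, 4000)]  # bytes, BAG in us
print(int(backlog_bound_bytes(flows, 100e6, 8000)), "bytes")  # -> 3500
```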
In TTEthernet, nodes are synchronized, allowing message transmission at well-defined points in time, computed offline and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which must nevertheless be solved in reasonable time for industrial-size networks. We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree: as the scheduler traverses the tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf; if, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
"In contemporary electronics 80% of a chip may perform digital functions but the 20% of analog functions may take 80% of the development time." [1]. Aggravating this, the demands on analog design are increasing with rapid technology scaling. Most designs have moved from the analog to the digital domain where possible; however, interacting with the environment will always require analog-to-digital data conversion. Adding to this problem, the number of sensors used in consumer and industrial products is rapidly increasing. Designers of ADCs are dealing with this problem in several ways, the most important being the migration towards digital designs and time-domain techniques. Time-to-Digital Converters (TDCs) are becoming increasingly popular for robust signal processing. Biological neurons make use of spikes, which carry spike-timing information and are not affected by the problems related to technology scaling. Neuromorphic ADCs still remain exotic, with few implementations in sub-micron technologies (Table 2.7).
Even among these few designs, the strengths of biological neurons are rarely exploited. A previous work [2], LUCOS, a high-dynamic-range image sensor, validated the efficiency of spike processing. The ideas from that work can be generalized into a highly effective sensor signal conditioning system, which carries the promise of being robust to technology scaling.
The goal of this work is to create a novel spiking neural ADC as a novel form of multi-sensor signal conditioning and conversion system, which
• will be able to interface with, or be part of, a System on Chip with traditional analog or advanced digital components;
• will degrade gracefully;
• will be robust to noise- and jitter-related problems;
• will be able to learn and adapt to static and dynamic errors;
• will be capable of self-repair, self-monitoring and self-calibration.
Sensory systems in humans and other animals analyze the environment using several techniques, evolved and perfected to help the animal survive. Different animals specialize in different sense organs; however, the peripheral neural network architectures remain similar among animal species, with few exceptions. While many biological sensing techniques exist, the most popular engineering techniques are based on intensity detection, frequency detection and edge detection. These techniques are used with traditional analog processing (e.g., color sensors using filters) and with biological techniques (e.g., the LUCOS chip [2]). The localization capability of animals, however, has never been fully utilized.
One of the most important capabilities of animals, vertebrate or invertebrate, is localization. The object of localization can be a predator, prey, or a source of water or food. Since these are basic necessities for survival, localization capabilities evolve particularly fast under the pressure of survival of the fittest. In fact, even where the sensors differ, localization has convergently evolved to use the same processing method (coincidence detection) in the peripheral neurons (e.g., the forked tongue of a snake, the antennae of a cockroach, acoustic localization in fishes and mammals). This convergent evolution increases the validity of the technique. In this work, localization concepts based on acoustic localization and tropotaxis are investigated and employed for the creation of novel ADCs.
Unlike intensity and frequency detection, which are not linear (e.g., eyes saturate in bright light and lose color perception in low light), localization is inherently linear. This is mainly because the accurate localization of predator or prey can be the difference between life and death for an animal.
Figure 1 visually explains the ADC concept proposed in this work. It has two parts: (1) Sensor-to-Spike(time) Conversion (SSC) and (2) Spike(time)-to-Digital Conversion (SDC). Both structures have been designed with models of biological neurons; the combination of the two structures is called SSDC.
[Figure 1: ADC as a localization problem (right); the Jeffress model of sound localization visualized (left). The values t1 and t2 indicate the time taken from the source to s1 and s2, respectively.]
To efficiently implement the proposed concept, several biological neural models are compared and two models are shortlisted. Various synapse structures are also studied. From this study, the Leaky Integrate-and-Fire (LIF) neuron is chosen, since it fulfills all the requirements of the proposed structure. The analog neuron and synapse designs from Indiveri et al. [3], [4] were taken, simulations were conducted using Cadence, and the behavioral equivalence with the biological counterpart was checked. The LIF neuron had features that were not required for the proposed approach; a simple LIF neuron, stripped of these features, was designed to be as fast as the technology allows.
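A minimal behavioral model of the LIF neuron used as a building block can be written in a few lines; the parameters below are illustrative placeholders, not the chip's values:

```python
# Illustrative leaky integrate-and-fire (LIF) model: leaky integration of
# the input, a spike when the threshold is crossed, then a reset.
def lif_spike_times(input_current, dt=1e-4, tau=0.02, r=1.0,
                    v_thresh=1.0, v_reset=0.0):
    v, t, spikes = 0.0, 0.0, []
    for i in input_current:
        v += dt / tau * (-v + r * i)   # leaky integration
        if v >= v_thresh:              # fire and reset
            spikes.append(round(t, 5))
            v = v_reset
        t += dt
    return spikes

# Constant input: a regular spike train whose timing encodes the input level
print(lif_spike_times([1.5] * 1000)[:5])
```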
The SDC was designed with these neural building blocks, with the delays realized as buffer chains. The SDC converts an incoming Time Interval Code (TIC) to sparse place coding using coincidence detection, a property of spiking neurons that is the time-domain equivalent of a Gaussian kernel. The SDC is designed to have an online-reconfigurable Gaussian kernel width, weight, threshold and refractory period. The advantage of sparse place codes, which contain rank order coding, was described in our work [5]. A time-based winner-take-all circuit with memory, based on a previous work [6], was created for reading out sparse place codes asynchronously.
The SSC was initially designed with the same building blocks; additionally, a differential synapse was designed for a better SSC. The sensor element considered was a Wheatstone full-bridge AMR sensor, the AFF755 from Sensitec GmbH. A reconfigurable version of the synapse was also designed for a more generic sensor interface.
The first prototype chip, SSDCα, was designed with 257 coincidence-detector modules realizing the SDC and the SSC. Since the spike times carry the essential information, the spikes can be treated as digital pulses. This enables digital communication between analog modules and creates considerable freedom for digital processing between them, an advantage that is fully exploited in the design of SSDCα. Three SSC modules are multiplexed to the SDC; these SSC modules simultaneously provide outputs from the chip. A rising-edge-detecting, fixed-pulse-width generation circuit creates pulses best suited for efficient operation of the SDC. The delay lines are reconfigurable to increase robustness and to modify the span of the SDC. The readout technique used in the first prototype is a relatively slow but safe shift register; it is used to analyze the characteristics of the core work and will be replaced by the faster alternatives discussed in this work. The chip has an area of 8.5 mm², contains 28,200 transistors, supports sampling rates from DC to 150 kHz and resolutions from 8 to 13 bit, and was designed in 350 nm CMOS technology from ams. The chip has been manufactured and tested at a sampling rate of 10 kHz with a theoretical resolution of 8 bits; however, due to the limitations of our time-interval generator, we were able to confirm only 4 bits of resolution.
The key novel contributions of this work are:
• a neuromorphic implementation of AD conversion as a localization problem, based on the sound localization and tropotaxis concepts found in nature;
• coincidence detection with sparse place coding to enhance resolution;
• graceful degradation without redundant elements and inherent robustness to noise, which helps in the scaling of technologies;
• amenability to local adaptation and self-x features.
The conceptual goals have all been fulfilled, with the exception of adaptation; the feasibility of local adaptation has been shown with promising results, and further investigation is left for future work. This thesis acts as a baseline, paving the way for R&D in a new direction. The chip design used the 350 nm ams hitkit as a vehicle to prove the functionality of the core concept; the concept can easily be ported to present aggressively scaled technologies and to future technologies.
Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the property set is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore, there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between the HW and the HWDS are often reactive and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW/SW interface.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, this thesis presents a method for checking the completeness of software properties specified in RSPL, motivated by the approach of checking the completeness of hardware properties.
In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.
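The following Python sketch illustrates the basic proof obligation behind such equivalence checking; the peripheral command decoder, its specification, and the firmware routine are hypothetical, and the exhaustive enumeration stands in for the symbolic methods an actual equivalence checker would use.

    # Minimal sketch of equivalence checking by exhaustive co-simulation:
    # a hypothetical hardware specification of a command decoder is
    # compared against a firmware re-implementation over all 8-bit inputs.

    def hw_spec(cmd: int):
        """Specification: 2-bit opcode from the upper nibble, 4-bit payload."""
        return ((cmd >> 4) & 0x3, cmd & 0x0F)

    def firmware_impl(cmd: int):
        """Firmware version using arithmetic instead of bit operations."""
        payload = cmd % 16
        opcode = (cmd // 16) % 4
        return (opcode, payload)

    # Bounded equivalence check over the complete 8-bit input space.
    assert all(hw_spec(c) == firmware_impl(c) for c in range(256))
    print("spec and firmware agree on all 256 inputs")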
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)
Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the driver's fatigue/drowsiness state without compromising the driving experience remains challenging. In general, the choice of input sensory information in state-of-the-art work is limited. On the other hand, smart and safe driving, as representative future trends in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated perception of the driver's behavioral and bioinformatical data. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for increased robustness/reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive -- which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well. Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be captured by the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive are discussed in the thesis as well, with the aim of gaining market acceptance in the future.
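A minimal sketch of a two-stage (hierarchical) SVM in the spirit of the drowsiness-level classification described above is given below, using scikit-learn; the feature names, the synthetic data, and the two-stage split are assumptions for illustration and do not reproduce DeCaDrive's actual feature set or training procedure.

    # Hierarchical SVM sketch: stage 1 separates awake from drowsy, stage 2
    # grades the drowsiness level. Features and labels are synthetic.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical features: [blink rate, eyelid closure, steering variance]
    X = rng.normal(size=(300, 3))
    y = rng.integers(0, 3, size=300)   # 0 = awake, 1 = drowsy, 2 = very drowsy

    stage1 = SVC(kernel="rbf").fit(X, y > 0)              # awake vs. drowsy
    drowsy = y > 0
    stage2 = SVC(kernel="rbf").fit(X[drowsy], y[drowsy])  # level among drowsy

    def predict(x):
        x = x.reshape(1, -1)
        if not stage1.predict(x)[0]:
            return 0                        # awake
        return int(stage2.predict(x)[0])    # drowsiness level 1 or 2

    print(predict(X[0]))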
Safety-related Systems (SRS) protect against the unacceptable risk resulting from failures of technical systems. The average probability of dangerous failure on demand (PFD) of these SRS in low-demand mode is limited by standards. Probabilistic models are applied to determine the average PFD and to verify the specified limits. In this thesis, an effective framework for the probabilistic modeling of complex SRS is provided. This framework enables the computation of the average, instantaneous, and maximum PFD. In SRS, preventive maintenance (PM) is essential to achieve an average PFD in compliance with the specified limits. PM aims to reveal dangerous undetected failures and provides repair if necessary. The introduced framework pays special attention to the precise and detailed modeling of PM. Multiple previously neglected degrees of freedom of the PM are considered, such as two types of element-wise PM at arbitrarily variable times. As the analyses show, these degrees of freedom have a significant impact on the average, instantaneous, and maximum PFD. The PM is optimized to improve the average PFD, the maximum PFD, or both. A well-known heuristic nonlinear optimization method (the Nelder-Mead method) is applied to minimize the average PFD, the maximum PFD, or a weighted trade-off between them. A significant improvement of the objectives and thus improved protection are achieved. These improvements are obtained through the available degrees of freedom of the PM alone, without additional effort. Moreover, a set of rules is presented to decide, for a given SRS, whether significant improvements can be achieved by optimizing the PM. These rules are based on well-known characteristics of the SRS, e.g., redundancy or no redundancy, and complete or incomplete coverage of PM. The presented rules support the decision of whether the optimization is advantageous for a given SRS and should be applied.
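A minimal sketch of such a PM-time optimization is given below, assuming a hypothetical 1oo2 architecture with a constant dangerous undetected failure rate, perfect and instantaneous proof tests, and independent channels; it applies SciPy's Nelder-Mead implementation to place one proof test per channel within the mission time.

    # Optimizing proof-test times to minimize the average PFD of a toy
    # 1oo2 system. All parameter values are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    LAMBDA_DU = 1e-6      # dangerous undetected failure rate [1/h]
    T_MISSION = 87600.0   # mission time [h], roughly ten years
    t = np.linspace(0.0, T_MISSION, 2001)

    def q(test_time):
        """Unavailability of one channel with a single perfect proof test."""
        age = np.where(t < test_time, t, t - test_time)
        return 1.0 - np.exp(-LAMBDA_DU * age)

    def avg_pfd(test_times):
        t1, t2 = np.clip(test_times, 0.0, T_MISSION)
        return float(np.mean(q(t1) * q(t2)))   # 1oo2: both channels must fail

    res = minimize(avg_pfd, x0=[0.3 * T_MISSION, 0.7 * T_MISSION],
                   method="Nelder-Mead")
    print(res.x, res.fun)

The optimizer typically settles on staggered test times, which lower the average PFD compared to testing both channels simultaneously.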
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely the base station to user link in the downlink and the user to base station link in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed for low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the method of allocating spreading sequences based on the users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in the gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
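The periodic correlation properties that such sequence designs rely on can be illustrated with the classic length-7 m-sequence; the Python sketch below computes circular correlations via the FFT and does not reproduce the thesis's allocation models.

    # Periodic (circular) correlation of +/-1 spreading sequences.
    import numpy as np

    def periodic_correlation(a, b):
        """Circular cross-correlation of two equal-length sequences."""
        return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

    m = np.array([1, 1, 1, 0, 1, 0, 0])   # one period of a length-7 m-sequence
    s = 1.0 - 2.0 * m                     # map {0,1} -> {+1,-1}

    print(periodic_correlation(s, s))        # two-valued: 7 at lag 0, else -1
    print(periodic_correlation(s, s[::-1]))  # cross-correlation with a
                                             # different (reversed) sequence

The sharp autocorrelation peak is what makes frame synchronisation reliable, while low cross-correlation between distinct sequences keeps the inter-user interference small.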
In cryptography, secret keys are used to ensure the confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key in the form of a bit vector (0, 1). Due to errors in measurement and quantisation and due to additive Gaussian noise, the bits of the preliminary keys disagree. These errors are corrected by using error detection and correction methods to obtain a synchronised key at both nodes. Further, by means of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. Instead of quantising the channel measurements directly, the key generation process can be made efficient and fast if their reciprocity is first enhanced and they are then quantised appropriately. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The measurements are appropriately quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. The performance evaluation is done by comparing the methods in terms of bit disagreement rate, key generation rate, tests of randomness, a robustness test, and an eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
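A minimal end-to-end sketch of the four-step extraction process on synthetic data is given below; the noise model, the median-threshold quantiser, and the toy reconciliation step are assumptions for illustration, and a real system would use error-correcting codes for reconciliation rather than discarding mismatched positions.

    # Key extraction on synthetic reciprocal channel measurements:
    # measurement -> quantisation -> (toy) reconciliation -> privacy
    # amplification via SHA-256.
    import hashlib
    import numpy as np

    rng = np.random.default_rng(1)
    h = rng.normal(size=256)              # common reciprocal channel profile
    a = h + 0.1 * rng.normal(size=256)    # node A's noisy measurement
    b = h + 0.1 * rng.normal(size=256)    # node B's noisy measurement

    bits_a = (a > np.median(a)).astype(int)   # binary quantisation at node A
    bits_b = (b > np.median(b)).astype(int)   # binary quantisation at node B
    bdr = np.mean(bits_a != bits_b)           # bit disagreement rate
    print(f"bit disagreement rate: {bdr:.3f}")

    # Toy reconciliation: keep only agreeing positions (in practice the
    # nodes cannot compare bits directly; coding-based methods are used).
    keep = bits_a == bits_b
    key_bits = bits_a[keep]

    # Privacy amplification: hash the reconciled bits down to 256 bits.
    key = hashlib.sha256(np.packbits(key_bits).tobytes()).hexdigest()
    print("key:", key)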