Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks in an independent, partitioned task set execute on a homogeneous multi-core processor with dynamic, time-triggered shared-memory-bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between WCET and the CPU scheduling of the other cores, but also between WCET and the memory bandwidth assigned to the cores over time. Thus, solutions are needed that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory-bandwidth assignment to active cores over time, where the active cores are those with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real avionics certified safety-critical application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification via composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when the changes in memory-bandwidth-to-core assignments are time-triggered and known. It provides a stall-maximization algorithm whose complexity is similar to that of a concave optimization problem and which efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state of the art, using an Integrated Modular Avionics scenario.
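As an illustration of the kind of bound such bandwidth regulation enables, consider the following toy model. It is our own simplification for exposition, not the dissertation's actual analysis; all names and numbers are invented. A periodic memory server grants each core at most a fixed budget of memory transactions per regulation period, so a task's execution time can be inflated by, at worst, the time spent waiting for budget replenishment:

```python
# Toy model of a periodic memory server: each core may issue at most
# `budget` memory transactions per regulation period. A task's WCET is
# inflated by the worst-case stall spent waiting for budget refills.
# Illustrative sketch only, not the dissertation's analysis.

def wcet_under_regulation(cpu_time, mem_accesses, budget, period, access_time):
    """Upper-bound the execution time when at most `budget` accesses
    (each taking `access_time`) are served per `period`."""
    if mem_accesses <= 0:
        return cpu_time
    full_periods = (mem_accesses - 1) // budget        # periods fully consumed
    service_time = mem_accesses * access_time          # raw memory service time
    stall = full_periods * (period - budget * access_time)  # waiting for refills
    return cpu_time + service_time + stall

# Example: 2 ms of pure computation, 5000 accesses,
# budget of 100 accesses per 0.1 ms period, 0.5 us per access.
print(wcet_under_regulation(cpu_time=2.0, mem_accesses=5000,
                            budget=100, period=0.1, access_time=0.0005))
```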
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely the base station to the user in the downlink and the user to the base station in the uplink. These sequences are designed for optimum periodic correlation properties: sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the method of allocating spreading sequences based on the users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated with respect to the improvement in the gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
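For context, the periodic correlation measures these sequences are optimised for can be computed directly. The following sketch is illustrative only (it uses an arbitrary length-7 m-sequence, not a sequence family from the thesis):

```python
import numpy as np

def periodic_correlation(x, y):
    """Periodic (cyclic) cross-correlation of equal-length bipolar
    sequences: theta[tau] = sum_n x[n] * y[(n + tau) mod N]."""
    N = len(x)
    return np.array([np.dot(x, np.roll(y, -tau)) for tau in range(N)])

# Length-7 m-sequence (1110100 mapped to +1/-1) and a cyclic shift of it.
x = np.array([1, 1, 1, -1, 1, -1, -1])
y = np.roll(x, 3)

print(periodic_correlation(x, x))  # [ 7 -1 -1 -1 -1 -1 -1]: sharp sync peak
print(periodic_correlation(x, y))  # peak of 7 at the shift offset tau = 3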
In cryptography, secret keys are used to ensure the confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key, a vector of bits (0, 1). Due to errors in measurement and quantisation, and to additive Gaussian noise, disagreements exist between the bits of the preliminary keys. These errors are corrected by using error detection and correction methods to obtain a synchronised key at both nodes. Further, the entropy of the key is enhanced by secure hashing in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. Instead of quantising the channel measurements directly, if their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made efficient and fast. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. They are appropriately quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated using real-world channel measurements. The performance evaluation is done by comparing the methods in terms of bit disagreement rate, key generation rate, randomness tests, robustness tests, and eavesdropper tests. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
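A minimal sketch of the measurement-and-quantisation front end of this pipeline might look as follows. The reciprocal channel and noise levels here are invented toy assumptions, not the thesis's measurement setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reciprocal channel: both nodes observe the same slowly varying
# fading profile plus independent measurement noise (invented model).
profile = np.cumsum(rng.normal(size=256))
alice = profile + 0.3 * rng.normal(size=256)
bob   = profile + 0.3 * rng.normal(size=256)

def binary_quantise(samples):
    """1 bit per sample: above the mean -> 1, below -> 0."""
    return (samples > samples.mean()).astype(int)

key_a = binary_quantise(alice)   # preliminary key at node A
key_b = binary_quantise(bob)     # preliminary key at node B

# Bit disagreement rate (BDR): the fraction of preliminary key bits
# that differ and must be fixed by information reconciliation.
print(f"BDR = {np.mean(key_a != key_b):.3f}")
```

Enhancing reciprocity before quantisation (e.g. by filtering, as the thesis does) lowers this BDR and thus speeds up the later reconciliation stage.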
This study presents an energy-efficient ultra-low-voltage standard-cell-based memory in 28 nm FD-SOI. The storage element (a standard-cell latch) is replaced with a full-custom-designed latch with 50 % less area. Error-free operation is demonstrated down to 450 mV @ 9 MHz. By utilizing body bias (BB) @ VDD = 0.5 V, performance spans from 20 MHz @ BB = 0 V to 110 MHz @ BB = 1 V.
The supply tasks for low-voltage grids are expected to change considerably over the coming decades relative to those of 2018, as photovoltaic systems, heat-pump heating, and electric vehicles become more widespread. The planning principles commonly used in practice for building new low-voltage grids are outdated, since their foundations often date from times when the new loads and feed-ins were not anticipated and accordingly not taken into account. The need for new planning principles coincides with the availability of voltage-regulated distribution transformers (rONT), which can be used to improve the voltage conditions in the grid. The new planning principles developed here require the use of an rONT for rural and suburban supply tasks (but not for urban supply tasks) in order to handle the high loads expected in 2040 at low cost. A suitable rONT standard control characteristic is given. In all cases, cables with a cross-section of 240 mm², laid in parallel in some sections, are recommended.
3D integration of solid-state memories and logic, as demonstrated by the Hybrid Memory Cube (HMC), offers major opportunities for revisiting near-memory computation and gives new hope for mitigating the power and performance losses caused by the "memory wall". In this paper we present the first exploration steps towards the design of the Smart Memory Cube (SMC), a new Processor-in-Memory (PIM) architecture that enhances the capabilities of the logic base (LoB) in HMC. An accurate simulation environment has been developed, along with a full-featured software stack. All offloading and dynamic overheads caused by the operating system, cache coherence, and memory management are considered as well. Benchmarking results demonstrate up to 2X performance improvement in comparison with the host SoC, and around 1.5X against a similar host-side accelerator. Moreover, by scaling down the voltage and frequency of the PIM processor, it is possible to reduce energy by around 70% and 55% in comparison with the host and the accelerator, respectively.
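The reported savings are consistent with the usual first-order dynamic-energy argument. A back-of-the-envelope sketch, with purely illustrative coefficients and voltages (not the paper's measured operating points):

```python
# First-order dynamic energy model: E ~ C * V^2 per cycle, so running
# the same cycle count at a lower voltage/frequency point cuts energy
# roughly quadratically in V. All numbers below are placeholders.

def dynamic_energy(capacitance, voltage, cycles):
    return capacitance * voltage**2 * cycles

e_host = dynamic_energy(capacitance=1.0, voltage=1.0, cycles=1e9)
e_pim  = dynamic_energy(capacitance=1.0, voltage=0.55, cycles=1e9)
print(f"energy ratio PIM/host: {e_pim / e_host:.2f}")  # ~0.30, i.e. ~70% saving
```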
Divide-and-conquer is a common strategy for managing the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is verified separately. Usually an SoC module is reactive: it interacts with the modules in its environment. This interaction is normally modeled by environment constraints, which are applied when verifying the SoC module. Environment constraints are assumed to be always true during the verification of the individual modules of a system; therefore the correctness of the environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
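To make the role of environment constraints concrete, here is a toy sketch in which a module is proven safe only under an assumed environment automaton. The module, constraint, and failure condition are all invented for illustration and are far simpler than the SVA-constrained C-IPC setting discussed above:

```python
from collections import deque

def module_step(state, req):
    """Toy module: accepts a request when 'idle'; a second request
    while 'busy' is the failure we check for."""
    if state == "idle":
        return ("busy" if req else "idle"), None
    # state == "busy"
    return ("busy" if req else "idle"), ("overflow" if req else None)

def env_inputs(env_state):
    """Environment constraint: never two consecutive req=1."""
    if env_state == "ready":
        return [(1, "cooldown"), (0, "ready")]
    return [(0, "ready")]          # cooldown step: req must stay low

def check():
    """Explore the module only over input sequences the assumed
    environment can produce (breadth-first product reachability)."""
    start = ("idle", "ready")
    seen, frontier = {start}, deque([start])
    while frontier:
        m, e = frontier.popleft()
        for req, e2 in env_inputs(e):
            m2, err = module_step(m, req)
            if err:
                return f"violation: {err}"
            if (m2, e2) not in seen:
                seen.add((m2, e2))
                frontier.append((m2, e2))
    return "module safe under the assumed environment constraint"

print(check())
```

If the constraint is dropped (the environment may raise req in every step), the state (busy, req=1) becomes reachable and the check reports the violation, which is exactly why a wrong constraint silently invalidates a module proof.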
At present, there is a trend that more and more functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as operating systems, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
This paper presents a completely systematic design procedure for asynchronous controllers. The initial step is the construction of a signal transition graph (STG, an interpreted Petri net) of the dialog between data path and controller: a formal representation without reference to time or internal states. To implement concurrently operating control structures, and also to reduce design effort and circuit cost, this STG can be decomposed into overlapping subnets. A universal initial solution is then obtained by algorithmically constructing a primitive flow table from each component net. This step links the procedure to classical asynchronous design, in particular to its proven optimization methods, without restricting the set of solutions. In contrast to other approaches, there is no need to extend the original STG intuitively.
Memory accesses are the bottleneck of modern computer systems, both in terms of performance and energy. This barrier, known as "the Memory Wall", can be broken by utilizing memristors. Memristors are novel passive electrical components with varying resistance based on the charge passing through the device [1]. In this abstract, the term "memristor" also covers an extension of the definition, memristive devices, which vary their resistance depending on a state variable [2]. While memristors are naturally used as memory cells, they can also be used for other applications, such as logic circuits [3].
We present a novel architecture that redefines the relationship between the memory and the processor by enabling data processing within the memory itself. Our architecture is based on a memristive memory array, in which we perform two basic logic operations: Imply (material implication) [4] and False.
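That Imply and False suffice as a logic basis can be checked directly with truth tables. The following sketch is illustrative only and independent of any memristor implementation; it derives NOT, OR, and AND from the two operations:

```python
# IMPLY plus the constant False form a functionally complete basis:
# every other Boolean connective is expressible from the two.

def imply(p, q):          # material implication: p -> q == (not p) or q
    return (not p) or q

def false():
    return False

def not_(p):              # NOT p  ==  p -> False
    return imply(p, false())

def or_(p, q):            # p OR q  ==  (p -> False) -> q
    return imply(not_(p), q)

def and_(p, q):           # p AND q  ==  NOT (p -> NOT q)
    return not_(imply(p, not_(q)))

for p in (False, True):
    for q in (False, True):
        assert or_(p, q) == (p or q)
        assert and_(p, q) == (p and q)
print("IMPLY + False express NOT, OR, and AND")
```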
Specification of asynchronous circuit behaviour becomes more complex as the complexity of today's System-On-a-Chip (SOC) designs increases. This also causes the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification of asynchronous circuit behaviour – to become bigger and more complex, which makes it more difficult, sometimes even impossible, to synthesize an asynchronous circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00]. It has, therefore, been suggested to decompose the STG as a first step; this leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis effort by possibly avoiding state explosion or by allowing the use of library elements. A decomposition approach for STGs was presented in [VW02] [KKT93] [Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based on that of Chu [Chu87a] but is much more generally applicable than the ones in [KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background, described in chapter 2. It starts with a class of Petri nets called place/transition (P/T) nets. Then STGs, a subclass of P/T nets, are reviewed. Background in net decomposition is presented in chapter 3. It begins with the structural decomposition of P/T nets for analysis purposes – liveness and boundedness of the net. Then STG decomposition for synthesis from [VW02] is described.
The decomposition method from [VW02] can still be improved to deal with STGs from real applications and to give better decomposition results. Some improvements of [VW02] that improve the decomposition results and increase algorithm efficiency are discussed in chapter 4. These improvement ideas were suggested in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find an output block component. A large amount of work has to be done to reduce an initial specification until the final component is found. This reduction is not always possible, which causes input initially classified as irrelevant to become relevant input for the component. But under certain conditions (e.g. if structural auto-conflicts turn out to be non-dynamic) some of them could be reclassified as irrelevant. If this is not done, the specifications become unnecessarily large, which in turn leads to unnecessarily large implemented circuits. Instead of reduction, a new approach, presented in chapter 5, decomposes the original net into structural components first. An initial output block component is found by composing the structural components. Then, a final output block component is obtained by net reduction.
As we deal with the structure of a net most of the time, it is useful to have a structural abstraction of the net. A structural abstraction algorithm [Kan03] is presented in chapter 6. It can improve the performance of finding an output block component in most cases [War05] [Taw04]. Also, the structure net is in most cases smaller than the net itself. This increases the efficiency of the decomposition algorithm, because it allows the transitions contained in a node of the structure graph to be contracted at the same time if the structure graph is used as the internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous circuit design. Application to speed-independent circuits is discussed first. After that, 3D circuits synthesized from extended burst mode (XBM) specifications are discussed. An algorithm for translating STG specifications to XBM specifications was first suggested in [BEW99]. This algorithm first derives the state machine from the STG specification, then translates the state machine to an XBM specification. An XBM specification, though it is a state machine, allows some concurrency. This concurrency can be translated directly, without deriving all of the possible states. An algorithm which directly translates STG to XBM specifications is presented in chapter 7.3.1. Finally DESI, a tool to decompose STGs, and its decomposition results are presented.
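For readers unfamiliar with the P/T net formalism used throughout, the token-game semantics can be stated in a few lines. The sketch below uses an invented one-transition net, not an example from the thesis:

```python
# Minimal place/transition (P/T) net semantics: a transition is
# enabled iff every input place carries enough tokens, and firing
# moves tokens from the input places to the output places.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    assert enabled(marking, pre)
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Tiny net: place p1 holds one token; transition t1 consumes it
# and produces a token on p2.
marking = {"p1": 1, "p2": 0}
t1_pre, t1_post = {"p1": 1}, {"p2": 1}
print(fire(marking, t1_pre, t1_post))   # {'p1': 0, 'p2': 1}
```

STGs refine this model by interpreting each transition as a rising or falling edge of a circuit signal, which is what the decomposition operates on.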
The third-generation European mobile radio system is called UMTS. UTRA – the terrestrial radio access of UMTS – provides two harmonised air interfaces: the TDD-based TD-CDMA and the FDD-based WCDMA. The TDD duplexing scheme offers considerable advantages over FDD; for example, TDD-based air interfaces can generally provide different data rates in the uplink and downlink more efficiently than FDD-based air interfaces. TD-CDMA is the subject of this work, and the most important details of this air interface are presented. Propagation delay and interference are essential aspects when using TDD; these aspects are examined in depth for the TD-CDMA case under consideration. In UMTS, besides voice transmission, high-rate data services and multimedia services in particular play an important role. The different quality requirements of these services are a great challenge for UMTS, especially at the physical level. To meet the quality requirements of different services, UTRA defines the L1/L2 interface through different transport channels. Each transport channel guarantees a certain quality of transmission through its specified data rate, delay, and maximum admissible bit error rate. This raises the problem of realising these transport channels at the physical level, which is examined in depth for TD-CDMA in the present work. The UTRA standard refers to the realisation of a transport channel as a transport format. Important parameters of the transport format are the pooling concept used, the FEC scheme employed, and the associated code rate. To compare the performance of different transport formats quantitatively, a suitable evaluation measure is given. The measurements required for this evaluation can only be obtained through link-level simulation. Therefore, a program for the simulation of transport formats in TD-CDMA is developed. In developing this program, concepts, techniques, methods, and principles of computer science for software development are applied in order to support the reusability and modifiability of the program. In addition, important methods for reducing the bit error rate – fast power control and antenna diversity – are implemented. The performance of an exemplary selection of transport formats is determined by simulation and compared using the evaluation measure. Turbo codes and the concatenation of an inner convolutional code with an outer RS code are used as FEC schemes. It is shown that the investigated methods for reducing the bit error rate have a substantial influence on the performance of the transport formats. Furthermore, it is shown that the transport formats with turbo codes achieve better results than the transport formats with code concatenation.
The dissertation describes a practically proven, particularly efficient approach for the verification of digital circuit designs. The approach outperforms simulation-based verification with respect to final circuit quality as well as required verification effort. In the dissertation, the paradigm of transaction-based verification is ported from simulation to formal verification. One consequence is a particular format of formal properties, called operation properties. Circuit descriptions are verified by proving operation properties with Interval Property Checking (IPC), a particularly strong SAT-based formal verification algorithm. Furthermore, a completeness checker is presented that identifies all verification gaps in sets of operation properties. This completeness checker can handle the large operation properties that arise if this approach is applied to realistic circuits. The methodology of operation properties, Interval Property Checking, and the completeness checker form a symbiosis that is of particular benefit to the verification of digital circuit designs. On top of this symbiosis, an approach to completely verify the interaction of completely verified modules has been developed by adapting the modelling theories of digital systems. The approach presented in the dissertation has proven in multiple commercial application projects that it indeed completely verifies modules: after reaching a termination criterion that is well defined by completeness checking, no further bugs were found in the verified modules. The approach is marketed by OneSpin Solutions GmbH, Munich, under the names "Operation Based Verification" and "Gap Free Verification".
As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, their economic realization is a challenge for the chip-making industry. This is, however, barely possible today, as the ability to design and verify such complex systems has not kept up with the rapid technological development. Owing to this productivity gap, a design methodology mainly using pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. Cost-effectively, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows the use of backtrack search procedures, such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, featuring regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackling these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL).
This work describes a new approach for property checking on the RTL, preserving the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence from the search space. Several approaches for symmetry reduction in search problems, based on invariance of a function under permutation of variables, have previously been proposed. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases. Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a variable permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), they are either all removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we found a counter-example either with \(x'=0\) or with \(x'=1\).
Extending this basic idea to vectors of variables allows elevating it to the RTL. There, self-similarities in the function representation, resulting from the regular structure preserved, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs previously intractable.
The main contributions of this work are in providing a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
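The symmetrical-values definition can be made concrete with a brute-force check on small functions. The sketch below is pure enumeration over a three-variable example, whereas the work itself detects such symmetries syntactically on the RTL:

```python
from itertools import permutations, product

# Brute-force check of the symmetrical-values criterion: 0 and 1 are
# symmetrical values for variable x in f iff some permutation of the
# remaining variables maps the cofactor f|x=1 onto f|x=0.

def truth_table(f, n):
    return {bits: f(*bits) for bits in product((0, 1), repeat=n)}

def cofactor(table, x, val):
    """Truth table of f with variable x fixed to val."""
    return {bits[:x] + bits[x + 1:]: v
            for bits, v in table.items() if bits[x] == val}

def symmetrical_values(f, n, x):
    table = truth_table(f, n)
    f0, f1 = cofactor(table, x, 0), cofactor(table, x, 1)
    for perm in permutations(range(n - 1)):
        # (pi f1)(b) = f1(b[perm[0]], ..., b[perm[n-2]])
        if all(f1[tuple(b[p] for p in perm)] == f0[b] for b in f0):
            return True
    return False

# Example: f = (x or a) and (not x or b). Swapping a and b maps the
# cofactor f|x=1 = b onto f|x=0 = a, so x can be removed when checking f=1.
print(symmetrical_values(lambda x, a, b: (x or a) and (not x or b), 3, 0))  # True
```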
This work provides a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised which includes principles for the modeling, analysis, design, and realization of low-cost but high-performance, intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized, taking into consideration the limited communication, computing, memory, and energy resources of the low-performance, low-power, and low-cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of the wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity), based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs), are investigated. The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. The obtained results show the advantages of the proposed cross-design approach compared to standard approaches, which are less flexible.
Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)
The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High-performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and the system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis.
Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by exploiting the redundant information from the system model and the available signals. In this thesis the differential-geometric approach is employed, which gives a general formulation of the FDI problem and is more compact and transparent than other model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution, which indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm, the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can then be constructed to generate the residual. This method cannot isolate sensor faults; in the thesis, a special decision-making logic (DML) is designed, based on individual signal analysis of the residuals, to isolate the fault.
The reconfigurable controller is designed with the backstepping technique. The backstepping method is a recursive Lyapunov-based approach and can deal with nonlinear systems. Some system variables are considered as "virtual controls" during the design procedure; the feedback control laws and the associated Lyapunov function can then be constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as a fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modelled as a system uncertainty and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller; it can be modelled as a parameter uncertainty of the controller, and all parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration the pre-specified control performance can be recovered.
FDI-integrated FTC based on the backstepping technique is implemented successfully on the electro-hydraulic testbed. On-line robust FDI and controller reconfiguration are achieved. The tracking performance of the controlled system is guaranteed and the considered faults can be tolerated, but the problem of theoretical robustness analysis for the time delay caused by the fault diagnosis remains open.
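The basic residual idea underlying the FDI part can be illustrated with a toy example. The sketch below uses an invented first-order plant and a simple threshold test, not the thesis's observer-based, disturbance-decoupled design:

```python
import numpy as np

rng = np.random.default_rng(0)

def nominal_model(u):
    """First-order nominal plant, discretised (illustrative dynamics)."""
    y, a, b = np.zeros(len(u)), 0.9, 0.5
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

u = np.ones(200)                        # step input
y_meas = nominal_model(u) + 0.02 * rng.normal(size=200)
y_meas[120:] += 0.8                     # additive sensor fault from sample 120 on

# Residual: discrepancy between measurement and model prediction.
residual = np.abs(y_meas - nominal_model(u))
threshold = 0.1                         # set from the fault-free noise level
alarm = residual > threshold
print("fault detected at sample", np.argmax(alarm))   # ~120
```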
Three-dimensional (3D) integration using through-silicon vias (TSVs) has been used for memory designs. Content-addressable memory (CAM) is an important component in digital systems. In this paper, we propose an evaluation tool for 3D CAMs which can aid the designer in exploring the delay and power of various partitioning strategies. Delay, power, and energy models of 3D CAMs with respect to different architectures are built as well.
Mating disruption with pheromones is an established method of ecological pest control in many areas of agriculture. To optimise this method, more precise knowledge of the distribution of the pheromone over the treated agricultural areas is required. Measuring these semiochemicals with the EAG system is a method for determining pheromone concentrations in the field quickly and reliably. This work describes contributions that are of great importance for the further development of the system. Controlling the measurement sequence through a sequence file that is loaded into the program only at runtime enables precisely timed and flexible control of the measurement system. The evaluation of the measurement results is put on a solid foundation by methods for the overall representation of the concentration calculation and by rigorous error analysis. The basic prerequisites for the concentration calculation are explained and verified in detail using experimental examples. In addition, an iterative procedure makes the concentration calculation independent of the mathematical or empirical representation of the dose-response curve. To use an extended EAG apparatus for measuring complex odorant mixtures, the measurement system was profoundly redesigned in the areas of control and evaluation and made fully operational. To this end, the control system was extended, the program for data acquisition was restructured, and a method for calculating the concentrations of odorant mixtures was developed and implemented in corresponding evaluation software. The most important experimental result is the execution and evaluation of a special measurement in which the EAG system was used in parallel with a classical gas chromatograph method. The results enable, for the first time, an absolute calibration of the concentration measurements of the EAG system for the pheromone of the codling moth. Previously, results could only be given in relative units.
Modern applications in the realms of wireless communication and mobile broadband Internet increase the demand for compact antennas with well-defined directivity. Here, we present an approach for the design and implementation of hybrid antennas consisting of a classic feeding antenna that is near-field-coupled to a subwavelength resonator. In such a combined structure, the composite antenna always radiates at the resonance frequency of the subwavelength oscillator as well as at the resonance frequency of the feeding antenna. While the classic antenna serves as an impedance-matched feeding element, the subwavelength resonator induces an additional resonance in the composite antenna. In general, these near-field-coupled structures have been known for decades and were lately published as near-field resonant parasitic antennas. We describe an antenna design consisting of a high-frequency electric dipole antenna at f_d = 25 GHz that couples to a low-frequency subwavelength split-ring resonator, which emits electromagnetic waves at f_SRR = 10.41 GHz. The radiating part of the antenna has a size of approximately 3.2 mm × 8 mm × 1 mm and is thus electrically small at this frequency, with a product k·a = 0.5. The input return loss of the antenna was moderate at −18 dB, and it radiated with a spectral bandwidth of 120 MHz. The measured main lobe of the antenna was observed at 60° with a −3 dB angular width of 65° in the E-plane, and at 130° with a −3 dB angular width of 145° in the H-plane.
In this thesis a new family of codes for use in optical high-bit-rate transmission systems with a direct-sequence code division multiple access scheme component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The common codes known to date have the problem of needing very long sequence lengths in order to accommodate an adequate number of users: code sequence lengths of 1000 or more were necessary to reach the required bit error ratios with only about 10 simultaneous users. Such sequence lengths are unacceptable if signals with data rates higher than 100 Mbit/s are to be transmitted, let alone for larger numbers of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters – justified for high-bit-rate WDM transmission systems – a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each pair of sequences. By this, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced into numerical simulations of an 80 Gbit/s DWDM transmission system with 8 channels, each carrying a 10 Gbit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings, in combination with the high bit rates, lead to very strict demands on system components like laser diodes, filters, multiplexers, etc. Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent sudden degradations of the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that – very differently from the pure WDM system – by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings, and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively combated. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows very robust behavior against a variety of influences.
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.