The European third-generation mobile radio system is called UMTS. UTRA, the terrestrial radio access of UMTS, provides two harmonized air interfaces: the TDD-based TD-CDMA and the FDD-based WCDMA. The duplexing scheme TDD offers considerable advantages over FDD; for example, TDD-based air interfaces can generally provide different data rates in the uplink and downlink more efficiently than FDD-based air interfaces. TD-CDMA is the subject of this thesis, and the most important details of this air interface are presented. Propagation delay and interference are essential aspects when using TDD; both are examined in depth for the considered TD-CDMA system. Besides voice transmission, high-rate data services and multimedia services play an important role in UMTS. The different quality requirements of these services are a major challenge for UMTS, particularly at the physical layer. To meet the quality requirements of different services, UTRA defines the L1/L2 interface through different transport channels. Each transport channel guarantees a certain transmission quality through its specified data rate, delay and maximum admissible bit error rate. This raises the problem of realizing these transport channels at the physical layer, which is investigated in depth for TD-CDMA in the present thesis. The UTRA standard refers to the realization of a transport channel as a transport format. Important parameters of a transport format are the pooling concept used, the FEC scheme employed and the associated code rate. To compare the performance of different transport formats quantitatively, a suitable evaluation measure is given. The measurement values required for this evaluation can only be obtained by link-level simulation; therefore, a program for simulating transport formats in TD-CDMA is developed. In developing this program, concepts, techniques, methods and principles of computer science for software development are applied in order to support the reusability and modifiability of the program. In addition, important techniques for reducing the bit error rate, namely fast power control and antenna diversity, are implemented. The performance of an exemplary selection of transport formats is determined by simulation and compared using the evaluation measure. As FEC schemes, turbo codes and the concatenation of an inner convolutional code with an outer RS code are employed. It is shown that the investigated techniques for reducing the bit error rate have a substantial influence on the performance of the transport formats, and that the transport formats with turbo codes achieve better results than those with code concatenation.
Programming involves the identification of individuals in many forms: memory locations, data types, values, classes, objects, functions and the like must be identified definingly or selectively. The remarks on identification by showing or naming are kept relatively short, whereas identification by paraphrasing is given considerably more room. The reason is that showing and naming require no structured linguistic forms, whereas paraphrasing does. The different forms of functional paraphrasing are treated in such detail because of their importance for the conceptual world of functional programming. These forms could also have been treated in the essay "Programmzweck versus Programmform" in the context of the concept of functional programs presented there, but the author considers the present essay the more appropriate place.
Based on the findings and experience concerning climate change, enormous changes in energy and climate policy have taken place worldwide in recent years. This is leading to an ever stronger transformation of the generation, consumption and supply structures of our energy systems. Focusing energy generation on fluctuating renewable energy sources requires a far more extensive use of flexibilities than has been the case so far.
This thesis discusses the use of heat pumps and storage systems as flexibilities in the context of the cellular approach to energy supply. The flexibility potential of heat pump and storage systems is examined and validated on three levels of consideration. The first considers the heat pump, the thermal storage and the thermal loads in a general assessment of the potential. Building on this, heat pump and storage systems are considered within a household cell as an energetic unit, followed by investigations in the context of a low-voltage cell. To model the flexibility behavior, detailed models of the converters and storages as well as their controls are developed, analyzed and evaluated by means of time-series simulations.
The central question of whether heat pumps with storage systems can contribute as a flexibility to the success of the energy transition can be answered with a clear yes. Nevertheless, the boundary conditions to be observed when using heat pump and storage systems as a flexibility are manifold and, depending on the purpose for which the flexibility is used, require careful consideration. The decisive factors are the outdoor temperature, the temporal context, the grid and the economic viability.
This thesis describes a particularly powerful approach to the verification of digital circuit designs that has already been proven many times in practice. With respect to both the circuit quality after verification and the verification effort, the approach is clearly superior to simulation-based circuit verification. The thesis first transfers the paradigm of transaction-based verification from simulation to formal verification. One result of this transfer is a particular form of formal properties called operation properties. Circuits are examined with operation properties by interval property checking (IPC), a particularly powerful SAT-based functional verification technique. This makes it possible to examine circuits that would otherwise be considered too complex for formal verification. Furthermore, this thesis describes a tool suitable for sets of operation properties that uncovers all verification gaps, keeps pace in terms of complexity with the capabilities of IPC-based circuit examination, and is called a completeness checker. The methodology of operation properties and the technology of the IPC-based property checker and the completeness checker enter into a symbiosis to the benefit of the functional verification of digital circuits. Building on this, a method for the gapless verification of the interconnection of modules verified in this way is developed, derived from the theories of modeling digital systems. The approach presented in this thesis has demonstrated in many commercial application projects that it rightly bears the name "complete functional verification", because in these projects no further bugs were found after reaching a well-defined closure established by the completeness check. The approach is marketed by OneSpin Solutions GmbH under the names "Operation Based Verification" and "Gap Free Verification".
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)
It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. Especially, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated with a view to improve the system performance as compared to state of the art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging at the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In the one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas under the consideration of interference covariance matrices. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection. Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases, where the impinging interference is highly directional, whereas in cases with the interference directions being more homogeneously distributed over the azimuth the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
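For orientation, the linear MMSE estimator that such covariance-aware channel estimation builds on can be written compactly as below, with G the pilot system matrix, R_h the correlation matrix of the channel impulse responses, and R_n the interference-plus-noise covariance matrix; this notation is generic and not taken from the thesis.

```latex
% Generic linear MMSE channel estimate from the received pilot signal e = G h + n
\[
  \hat{\mathbf{h}}_{\mathrm{MMSE}}
    = \mathbf{R}_{\mathbf{h}}\,\mathbf{G}^{\mathrm{H}}
      \bigl(\mathbf{G}\,\mathbf{R}_{\mathbf{h}}\,\mathbf{G}^{\mathrm{H}} + \mathbf{R}_{\mathbf{n}}\bigr)^{-1}\mathbf{e}
\]
```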
This thesis investigates mechanisms for energy harvesting in ambient intelligence systems. First, an overview of the existing options and their underlying physical effects is given. Then, energy harvesting by means of thermoelectric generators is examined in more detail.
In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e. the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver can be considered the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arrive at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. Opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimum, and the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems the implementation complexity corresponds to the weight, volume, cost etc of the equipment. Therefore, the complexity is an important aspect which should be taken into account, when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS). Having in mind the above mentioned complexity features of the rationales Tx orientation and Rx orientation, this means that in the uplink (UL), i.e. in the radio link from the MT to the BS, the quasi natural choice would be Tx orientation, which leads to low cost transmitters at the MTs, whereas in the downlink (DL), i.e. in the radio link from the BS to the MTs, the rationale Rx orientation would be the favorite alternative, because this results in simple receivers at the MTs. Mobile radio downlinks with the rationale Rx orientation are considered in the thesis. Modern mobile radio communication systems are cellular systems, in which both the intracell and intercell interferences exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated or at least reduced by joint signal processing with consideration of all the signals in the considered cell. However such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is required also with respect to the growing electro-phobia of the public. The transmit energy reduction for multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in the thesis. Among the questions still open in this research area, two questions of major importance are considered here. 
MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns linear Rx oriented transmission schemes combined with MIMO antenna structures, and the benefit of MIMO for such linear Rx oriented schemes is studied in the thesis. The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question considers the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.
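The MIMO benefit for linear Rx oriented transmission can be illustrated with a small numerical sketch (generic transmit zero-forcing with assumed dimensions, not the schemes developed in the thesis): with the receivers fixed a priori to trivial detection, the transmitter solves a posteriori for the minimum-energy signal that delivers the data interference-free, and the required transmit energy drops as transmit antennas are added.

```python
# Illustrative sketch: receiver-oriented downlink precoding by transmit zero-forcing,
# showing how extra transmit antennas (MIMO) reduce the radiated energy needed
# to deliver the same data. Dimensions and channel statistics are assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = 4                                  # single-antenna mobile terminals with trivial receivers
d = rng.choice([-1.0, 1.0], size=K)    # data symbols to be delivered interference-free

for M in (4, 8, 16):                   # number of base-station transmit antennas
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    # a posteriori transmitter: minimum-energy signal t with H @ t = d
    t = H.conj().T @ np.linalg.solve(H @ H.conj().T, d)
    assert np.allclose(H @ t, d)       # each terminal sees its symbol without intracell interference
    print(f"M = {M:2d} Tx antennas: transmit energy = {np.linalg.norm(t)**2:.2f}")
```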
Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and robustness to maintain signal integrity. Field programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. Research on evolutionary concepts has focused predominantly on the digital domain and is still at an early stage in the analog domain. Thus, the main research goal is to combine the ever increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies to yield an intelligent and smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work proposes and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable to support a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and common industrial application scenarios and circuits, in particular including resource and sizing analysis to properly motivate design decisions. The pursued intermediate-granularity approach, called Field Programmable Medium-granular mixed signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities. The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and merges them with reconfigurable or programmable analog concepts as well as industrial electronics standards and needs for next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in 0.35 µm (C35B4) Austriamicrosystems technology, can be used as single-instance, off-the-shelf chips at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely a PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, ranging from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. More extensive low-power design issues, e.g. sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine-granular approaches were overcome by the proposed intermediate-granularity approach. The sensor signal conditioning instrumentation amplifier designed in this way was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250.
In an adaptive prototype, evolutionary approaches, in particular multi-objective particle swarm optimization, were deployed on all test samples of FPMA1/FPMA2 (15 each) to exhibit the self-x properties and to recover from manufacturing variations and drift. The variations observed in the performance of the test samples were compensated for through reconfiguration to meet the desired specification.
Ethernet has become an established communication technology in industrial automation. This was possible thanks to the tremendous technological advances and enhancements of Ethernet, such as increasing the link speed, integrating full-duplex transmission and the use of switches. However, these enhancements were still not enough for certain highly deterministic industrial applications such as motion control, which requires a cycle time below one millisecond and a jitter or delay deviation below one microsecond. To meet these high timing requirements, machine and plant manufacturers had to extend standard Ethernet with real-time capability. As a result, vendor-specific and non-IEEE-standard-compliant "Industrial Ethernet" (IE) solutions have emerged.
The IEEE Time-Sensitive Networking (TSN) Task Group specifies new IEEE-conformant functionalities and mechanisms to enable the determinism missing from Ethernet. Standard-compliant systems are very attractive to the industry because they guarantee investment security and sustainable solutions. TSN is therefore considered an opportunity to increase the performance of established Industrial Ethernet systems and to move towards Industry 4.0, which requires standard mechanisms.
The challenge remains, however, for the Industrial Ethernet organizations to combine their protocols with the TSN standards without running the risk of creating incompatible technologies. TSN specifies nine standards and enhancements that handle multiple communication aspects. In this thesis, the evaluation of the use of TSN in industrial real-time communication is restricted to four deterministic standards: IEEE802.1AS-Rev, IEEE802.1Qbu, IEEE802.3br and IEEE802.1Qbv. The specification of these TSN sub-standards was finished at an early research stage of the thesis, and hardware prototypes were available.
Integrating TSN into the Industrial Ethernet protocols is considered a substantial strategic challenge for the industry. The benefits, limits and risks are too complex to estimate without a thorough investigation. The large number of standard enhancements makes it hard to select the required and appropriate functionalities.
In order to cover all real-time classes in automation [9], four established Industrial Ethernet protocols have been selected for evaluation and combination with TSN, along with other performance-relevant communication features.
The objectives of this thesis are to
(1) Provide theoretical, simulation-based and experimental evaluation methodologies for the timing performance analysis of the deterministic TSN standards mentioned above. Multiple test plans are specified to evaluate the performance and compatibility of early-version TSN prototypes from different providers.
(2) Investigate multiple approaches and derive migration strategies to integrate these features into the established Industrial Ethernet protocols Sercos III, Profinet IRT, Profinet RT and Ethernet/IP. A scenario in which time-critical traffic coexists with other traffic in a TSN network shows that the timing performance for highly deterministic applications, e.g. motion control, can only be guaranteed by the TSN scheduling algorithm IEEE802.1Qbv.
Based on a requirements survey of highly deterministic industrial applications, multiple network scenarios and experiments are presented. The results are summarized in two case studies. The first case study shows that TSN alone is not enough to meet these requirements. The second case study investigates the benefits of additional mechanisms (Gigabit link speed, minimum cycle time modeling, frame forwarding mechanisms, frame structure, topology migration, etc.) in combination with the TSN features. An implementation prototype of the proposed system and a simulation case study are used for the evaluation of the approach. The prototype is used for the evaluation and validation of the simulation model. Due to the scalability constraints of the prototype (no cut-through functionality, limited number of TSN prototypes, etc.), a realistic simulation model using the network simulation tool OMNEST / OMNeT++ is employed.
The obtained evaluation results show that a minimum cycle time ≤1 ms and a maximum jitter ≤1 μs can be achieved with the presented approaches.
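As a rough illustration of why Gigabit link speed and the forwarding mechanism matter for such cycle times, the sketch below compares store-and-forward with cut-through forwarding over a small line topology; the frame size, header bytes and hop count are assumptions, not values from the thesis.

```python
# Back-of-the-envelope sketch (assumed frame size, header bytes and hop count):
# per-path delay with store-and-forward vs. cut-through forwarding at Gigabit speed.
LINK_BPS = 1_000_000_000                 # Gigabit Ethernet

def wire_time(nbytes, bps=LINK_BPS):
    return nbytes * 8 / bps              # seconds needed to clock nbytes onto the wire

frame_on_wire = 128 + 8 + 12             # assumed cyclic frame + preamble/SFD + interframe gap
header = 8 + 14                          # preamble/SFD + Ethernet header, enough to start forwarding
hops = 5                                 # assumed line-topology depth

store_and_forward = hops * wire_time(frame_on_wire)            # full frame received at every hop
cut_through = wire_time(frame_on_wire) + (hops - 1) * wire_time(header)
print(f"store-and-forward: {store_and_forward * 1e6:.2f} us over {hops} hops")
print(f"cut-through:       {cut_through * 1e6:.2f} us over {hops} hops")
```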
Channel estimation is of great importance in many wireless communication systems, since it influences the overall performance of a system significantly. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals, or so-called pilots, that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems, more and more attention is being paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference and also the interference between the channel coefficients. The performance of JCE can be evaluated in noise limited systems by the SNR degradation and in interference limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, regardless of whether the signals are represented in the time domain or in the frequency domain. Optimum pilots like disjoint pilots, Walsh code based pilots or CAZAC code based pilots, whose constructions are described in this thesis, do not show any SNR degradation when being applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transformation. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transformation. However, even for optimum pilots different variation coefficients are obtained in interference limited systems. Furthermore, especially for OFDM-based transmission schemes the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots. CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the part of the transmit signal that originates from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to one single service area, and considering the impact of interference from adjacent service areas that occurs due to a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes that originate from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is guided by the evaluated implementation complexity. According to the chosen parameterization of the regarded OFDM-based and FMT-based systems, it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT when applying CAZAC codes as pilots. The results of this thesis can be used as a basis for further theoretical research and also for future JCE implementation in wireless systems.
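As an illustration of why CAZAC codes are attractive as pilots, the following sketch generates a Zadoff-Chu sequence, one common CAZAC construction, and checks its constant amplitude (0 dB PAPR) and zero periodic autocorrelation; the length and root index are arbitrary assumptions, not parameters from the thesis.

```python
# Illustrative sketch: a Zadoff-Chu sequence as an example of a CAZAC code.
import numpy as np

N, u = 31, 1                                     # odd length and root index (assumptions)
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)   # Zadoff-Chu (CAZAC) sequence

papr = np.max(np.abs(zc) ** 2) / np.mean(np.abs(zc) ** 2)
print(f"constant amplitude -> PAPR = {10 * np.log10(papr):.2f} dB")   # 0.00 dB

# zero periodic autocorrelation for all non-zero lags
acf = np.array([np.vdot(zc, np.roll(zc, k)) for k in range(N)])
print(np.round(np.abs(acf), 6))                  # N at lag 0, ~0 elsewhere
```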
The nondestructive testing of multilayered materials is increasingly applied in both scientific and industrial fields. In particular, developments in millimeter wave and terahertz technology open up novel measurement applications, which benefit from the nonionizing properties of this frequency range. One example is the noncontact inspection of layer thicknesses. Frequently used measuring and analysis methods lead to a resolution limit that is determined by the bandwidth of the setup. This thesis analyzes the reliable evaluation of thinner layer thicknesses using model-based signal processing.
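The idea behind such model-based evaluation can be sketched with synthetic numbers (pulse shape, layer delays and noise level are assumptions, not the measurement setup of the thesis): two echoes spaced closer than the pulse width are resolved by fitting a two-reflection signal model rather than by picking peaks in the raw reflectogram.

```python
# Conceptual sketch with synthetic data: resolve two closely spaced layer echoes
# by least-squares fitting of a parametric signal model.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 50.0, 2000)                    # time axis in picoseconds

def pulse(t, tau, a):
    return a * np.exp(-((t - tau) / 4.0) ** 2)      # ~4 ps wide probe pulse (assumption)

def model(t, tau1, a1, tau2, a2):                   # two-reflection signal model
    return pulse(t, tau1, a1) + pulse(t, tau2, a2)

true = (20.0, 1.0, 23.0, -0.6)                      # echoes only 3 ps apart
rng = np.random.default_rng(1)
y = model(t, *true) + 0.01 * rng.standard_normal(t.size)

p0 = (18.0, 1.0, 25.0, -0.5)                        # coarse initial guess
est, _ = curve_fit(model, t, y, p0=p0)
print("estimated echo delays [ps]:", round(est[0], 2), round(est[2], 2))
```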
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Thermal and power management/balancing is of increasing concern, poses a technological challenge to MCP development, and will be a main performance bottleneck for future MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior for both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged using 2D techniques, which means that there is only one layer of dies in the chip. The chip thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D PDE heat conduction equation, is proposed to describe the thermal behavior of each core. Therefore, the thermal behavior of the MCP is described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced packaging technology in which at least two layers of dies are stacked in one chip. Unlike in a 2D package, a cooling system has to be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered, and the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, and the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
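A minimal sketch of the kind of lumped thermal R-C building block such models are assembled from is given below; the resistance, capacitance, power and coolant values are generic assumptions, not the identified parameters of the thesis.

```python
# Minimal sketch: one lumped thermal R-C node per core/block, stepped forward in time.
import numpy as np

R, C = 2.0, 0.05          # K/W and J/K per block (assumed values)
dt, T_amb = 1e-3, 45.0    # time step [s] and ambient/coolant temperature [C]

def step(T, P):
    # C dT/dt = P - (T - T_amb) / R  ->  forward-Euler update
    return T + dt * (P - (T - T_amb) / R) / C

T = np.full(4, T_amb)                      # four cores start at coolant temperature
P = np.array([10.0, 10.0, 2.0, 2.0])       # per-core power [W] (assumed workload)
for _ in range(2000):                      # 2 s of simulated time (>> time constant R*C)
    T = step(T, P)
print(np.round(T, 2))                      # steady state approaches T_amb + R * P
```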
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell 8-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ are used as the experimental objects.
Software defined radios can be implemented on general purpose processors (CPUs), e.g. based on a PC. A processor offers high flexibility: it can not only be used to process the data samples, but also to control receiver functions, display a waterfall or run demodulation software. However, processors can only handle signals of limited bandwidth due to their comparatively low processing speed. For signals of high bandwidth, the SDR algorithms have to be implemented as custom designed digital circuits on an FPGA chip. An FPGA provides a very high processing speed, but lacks flexibility and user interfaces. Recently, the FPGA manufacturer Xilinx has introduced a hybrid system on chip called Zynq that combines both approaches. It features a dual ARM Cortex-A9 processor and an FPGA, offering the flexibility of a processor together with the processing speed of an FPGA on a single chip. The Zynq is therefore very interesting for use in SDRs. In this paper, the application of the Zynq and its evaluation board (Zedboard) is discussed. As an example, a direct sampling receiver has been implemented on the Zedboard using a high-speed 16 bit ADC with 250 Msps.
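The processing such a direct sampling receiver performs in the FPGA fabric can be sketched in a few lines of NumPy (signal frequency, decimation factor and filter are assumptions chosen for illustration, not the Zedboard design itself): mix with an NCO, low-pass filter, and decimate the 250 Msps stream.

```python
# Illustrative NumPy model of a digital down-conversion (DDC) chain.
import numpy as np

fs, f_rf = 250e6, 7.1e6                  # ADC rate and an assumed signal frequency
n = np.arange(2**16)
adc = np.cos(2 * np.pi * f_rf / fs * n)  # stand-in for the 16-bit ADC samples

nco = np.exp(-2j * np.pi * f_rf / fs * n)    # numerically controlled oscillator
baseband = adc * nco                          # mix the signal of interest down to 0 Hz

decim = 1000                                  # 250 Msps -> 250 ksps
taps = np.hanning(2 * decim) / decim          # crude low-pass (a real design would use CIC/FIR stages)
narrowband = np.convolve(baseband, taps, mode="same")[::decim]
print(narrowband.shape, np.abs(narrowband[10:20]).round(3))   # magnitude ~0.5 after mixing
```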
In system theory, state is a key concept. Here, the word state refers to condition, as in the example "Since he went into the hospital, his state of health worsened daily." This colloquial meaning was the starting point for defining the concept of state in system theory. System theory describes the relationship between input X and output Y, that is, between influence and reaction. In system theory, a system is something that shows an observable behavior that may be influenced. Therefore, apart from the system, there must be something else influencing and observing the reaction of the system. This is called the environment of the system.
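A minimal sketch of this notion of state: the same input can produce different reactions depending on the condition the system is currently in (the turnstile below is a generic illustration, not an example taken from the text).

```python
# Tiny sketch: output depends not only on the input X but on the current state.
class Turnstile:
    def __init__(self):
        self.state = "locked"                 # the internal condition of the system

    def step(self, x):                        # input X in {"coin", "push"}
        if self.state == "locked":
            self.state = "unlocked" if x == "coin" else "locked"
            return "blocked" if x == "push" else "ok"
        else:
            self.state = "locked" if x == "push" else "unlocked"
            return "ok"

t = Turnstile()
print([t.step(x) for x in ["push", "coin", "push", "push"]])
# -> ['blocked', 'ok', 'ok', 'blocked']: identical "push" inputs, different reactions
```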
Code coverage analysis plays an important role in the software testing process. More recently, the remarkable effectiveness of coverage feedback has triggered a broad interest in feedback-guided fuzzing. In this work, we discuss static instrumentation techniques for binary-level coverage analysis without compiler support. We show that the proposed techniques are precise, efficient, and transparent significantly beyond the state of the art.
We implement these techniques in two tools, namely, Spedi and bcov. Both tools are open source and publicly available. Spedi shows that the disassembly and function identification of stripped binaries can be highly accurate without resorting to any external information. We build on these results in bcov, where we statically instrument x86-64 ELF binaries to track code coverage. However, improving efficiency and scaling to large real-world software required an orchestrated effort combining several techniques.
First, we bring a well-known probe pruning technique, for the first time, to binary-level instrumentation and effectively leverage its notion of superblocks to reduce overhead. Second, we introduce sliced microexecution, a robust technique for jump table analysis which improves CFG precision and enables us to instrument jump table entries. Additionally, smaller instructions in x86-64 pose a challenge for inserting detours. To address this challenge, we aggressively exploit padding bytes. Also, we introduce a greedy scheme to systematically host detours in neighboring basic blocks.
We evaluate bcov on a corpus of 95 binaries compiled from eight popular and well-tested packages like FFmpeg and LLVM. Two instrumentation policies, with different edge-level precision, are used to patch all functions in this corpus - over 1.6 million functions. Our precise policy has average performance and memory overheads of 14% and 22%, respectively. Instrumented binaries do not introduce any test regressions. The reported coverage is highly accurate with an average F-score of 99.86%. Finally, our jump table analysis is comparable to that of IDA Pro on gcc binaries and outperforms it on clang binaries.
Our work demonstrates that static instrumentation can offer unique advantages in comparison to established methods like compiler instrumentation and dynamic binary instrumentation. It also opens the door for many interesting applications of static instrumentation, which can go well beyond coverage analysis.
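As a small illustration of how the accuracy of a coverage report can be scored, the sketch below computes precision, recall and the F-score of a set of reported covered blocks against a ground truth; the addresses are made up for illustration, not bcov output.

```python
# Toy sketch: scoring a coverage report against ground truth with the F-score.
reported     = {0x400510, 0x400530, 0x400560, 0x4005a0}   # blocks the tool marked covered
ground_truth = {0x400510, 0x400530, 0x400560, 0x4005c0}   # blocks actually executed

tp = len(reported & ground_truth)
precision = tp / len(reported)
recall = tp / len(ground_truth)
f_score = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F-score={f_score:.2f}")
```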
Ambient intelligence (AmI) refers to the integration of various technologies into a (nearly) invisible whole that surrounds people. This intelligent environment is made possible by the miniaturization of highly integrated components (sensors, actuators and computers), by their increasing intelligence and, above all, by their increasingly wireless local and global networking. Under the title Man-u-Faktur 2012 (man and factoring in 2012), a scenario was developed at the Technische Universität Kaiserslautern within the Ambient Intelligence research focus that paints an impressive overall picture of a technology that puts people at the center. Man-u-Faktur 2012 stands for turning the wheel of industrialization further, away from today's variant-rich, technology-centered mass production towards customer-individual, employee-centered customized production. Specifically, this means building massively distributed, customer- and employee-friendly production facilities that can adapt to the respective conditions in a highly dynamic environment. People are present wherever flexible work or flexible decisions are paramount. In this report, the influence of ambient intelligence is applied, by way of example, to the vision of a bicycle production in the Man-u-Faktur 2012. From this scenario, both the key technologies to be developed and the effects on the economy and society are derived.
In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid-1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper the adoption of IPC. First, IPC may return false counterexamples, since its computational bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable across different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, it proposes a decomposition-based reachability analysis to identify reachability information automatically. Second, it develops a generic, reusable set of properties for protocol compliance verification.
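The flavor of interval property checking, proving an operation between abstract starting and ending states on a bounded unrolling rather than from a reset sequence, can be illustrated with a toy transition system and an SMT solver; the four-state FSM and the use of Z3 below are illustrative assumptions, not the tool flow of this work.

```python
# Hedged toy illustration of an interval-style safety property on a bounded unrolling.
from z3 import And, BitVec, BitVecVal, Bool, If, Not, Solver, unsat

IDLE, BUSY1, BUSY2, DONE = (BitVecVal(i, 2) for i in range(4))

def next_state(s, start):
    # toy operation: a started request finishes after two busy cycles
    return If(s == IDLE, If(start, BUSY1, IDLE),
           If(s == BUSY1, BUSY2,
           If(s == BUSY2, DONE, IDLE)))

K = 3                                             # length of the operation interval
state = [BitVec(f"state_{t}", 2) for t in range(K + 1)]
start = [Bool(f"start_{t}") for t in range(K + 1)]

solver = Solver()
for t in range(K):                                # unrolled transition relation
    solver.add(state[t + 1] == next_state(state[t], start[t]))

assumption = And(state[0] == IDLE, start[0])      # abstract starting state, no reset sequence
commitment = state[K] == DONE                     # ending state of the operation
solver.add(assumption, Not(commitment))           # search for a counterexample
print("property holds on the interval:", solver.check() == unsat)
```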
Due to the steady increase in decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation of low-voltage (LV) grids in Germany are facing major challenges. In recent years, many studies, research and demonstration projects on the above topics have therefore been carried out, and the results as well as the developed methods have been published. However, the published methods usually cannot be reproduced or validated, since the underlying models or the assumed scenarios are not comprehensible to third parties. There is a lack of uniform grid models that represent German LV grids and can be used for comparative investigations, similar to the example of the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to derive because of the large number of LV grids and distribution system operators (DSOs). Furthermore, a detailed description of real LV grids is usually not desired in scientific publications for data privacy reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible were therefore created, based on common German settlement structures and usual grid planning principles. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for medium-voltage (MV) grids are difficult to derive because of the large number of MV grids and distribution system operators (DSOs). Furthermore, a detailed description of real MV grids is usually not desired in scientific publications for data privacy reasons. In this work, MV grid models and their development are explained in detail. For the first time, comprehensible MV grid models for the German-speaking area are thus available to the public. They can be used as a benchmark for scientific investigations and for method development.
Due to the steadily increasing number of decentralized generation units, the upcoming smart meter rollout and the expected electrification of the transport sector (e-mobility), grid planning and grid operation at the low-voltage (LV) level are facing major challenges. Therefore, many studies, research and demonstration projects on the above topics have been carried out in recent years, and the results and the methods developed have been published. However, the published methods usually cannot be replicated or validated, since the majority of the examination models or the scenarios used are incomprehensible to third parties. There is a lack of uniform grid models that map the German LV grids and can be used for comparative investigations, similar to the example of the North American distribution grid models of the IEEE. In contrast to the transmission grid, whose structure is known with high accuracy, suitable grid models for LV grids are difficult to map because of the high number of LV grids and distribution system operators. Furthermore, a detailed description of real LV grids is usually not available in scientific publications for data privacy reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible have been created, based on common settlement structures and usual grid planning principles in Germany. In this work, these LV grid models and their development are explained in detail. For the first time, comprehensible LV grid models for the middle European area are available to the public, which can be used as a benchmark for further scientific research and method development.
This document is an English version of the paper, which was originally written in German. In addition, this paper discusses a few more aspects, especially regarding the planning process of distribution grids in Germany.
Specification of asynchronous circuit behaviour becomes more complex as the complexity of today's System-On-a-Chip (SOC) design increases. This also causes the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification of asynchronous circuit behaviour – to become bigger and more complex, which makes it more difficult, sometimes even impossible, to synthesize an asynchronous circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00]. It has, therefore, been suggested to decompose the STG as a first step; this leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis effort by possibly avoiding state explosion or by allowing the use of library elements. A decomposition approach for STGs was presented in [VW02] [KKT93] [Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based on that of Chu [Chu87a] but is much more generally applicable than the one in [KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background described in chapter 2. It starts with the class of Petri nets called place/transition (P/T) nets. Then STGs, a subclass of P/T nets, are introduced. Background on net decomposition is presented in chapter 3. It begins with the structural decomposition of P/T nets for analysis purposes – liveness and boundedness of the net. Then the STG decomposition for synthesis from [VW02] is described.
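For readers less familiar with P/T nets, the token-game semantics used throughout these chapters can be sketched in a few lines (the two-transition net below is a toy example, not one of the STG benchmarks): a transition is enabled if all places of its preset carry enough tokens, and firing it moves tokens from the preset to the postset.

```python
# Minimal sketch of the P/T-net firing rule on a toy net.
pre = {"t1": {"p1": 1}, "t2": {"p2": 1, "p3": 1}}       # weighted preset of each transition
post = {"t1": {"p2": 1, "p3": 1}, "t2": {"p1": 1}}      # weighted postset of each transition
marking = {"p1": 1, "p2": 0, "p3": 0}                   # initial marking

def enabled(t, m):
    return all(m[p] >= w for p, w in pre[t].items())

def fire(t, m):
    assert enabled(t, m), f"{t} is not enabled"
    m = dict(m)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

m1 = fire("t1", marking)      # {'p1': 0, 'p2': 1, 'p3': 1}
m2 = fire("t2", m1)           # back to the initial marking
print(m1, m2)
```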
The decomposition method from [VW02] can still be improved to deal with STGs from real applications and to give better decomposition results. Some improvements of [VW02] that yield better decomposition results and increase the efficiency of the algorithm are discussed in chapter 4. These improvement ideas were suggested in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find an output block component. A large amount of work has to be done to reduce an initial specification until the final component is found. This reduction is not always possible, which causes inputs initially classified as irrelevant to become relevant inputs for the component. But under certain conditions (e.g. if structural auto-conflicts turn out to be non-dynamic) some of them could be reclassified as irrelevant. If this is not done, the specifications become unnecessarily large, which in turn leads to unnecessarily large implemented circuits. Instead of reduction, a new approach, presented in chapter 5, decomposes the original net into structural components first. An initial output block component is found by composing the structural components. Then, a final output block component is obtained by net reduction.
As we deal with the structure of a net most of the time, it is useful to have a structural abstraction of the net. A structural abstraction algorithm [Kan03] is presented in chapter 6. It can improve the performance of finding an output block component in most cases [War05] [Taw04]. Also, the structure graph is in most cases smaller than the net itself. This increases the efficiency of the decomposition algorithm, because it allows the transitions contained in a node of the structure graph to be contracted at the same time if the structure graph is used as the internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous circuit design. Application to speed-independent circuits is discussed first. After that, 3D circuits synthesized from extended burst mode (XBM) specifications are discussed. An algorithm for translating STG specifications to XBM specifications was first suggested in [BEW99]. This algorithm first derives the state machine from the STG specification and then translates the state machine into an XBM specification. An XBM specification, though it is a state machine, allows some concurrency. This concurrency can be translated directly, without deriving all of the possible states. An algorithm which directly translates STGs to XBM specifications is presented in chapter 7.3.1. Finally, DESI, a tool to decompose STGs, and its decomposition results are presented.
Mating disruption with pheromones is an established method of ecological pest control in many areas of agriculture. To optimize this method, more precise knowledge of the distribution of the pheromone over the treated agricultural areas is required. Measuring these scents with the EAG system is a method for determining pheromone concentrations in the field quickly and reliably. This thesis describes contributions that are of great importance for the further development of the system. Controlling the measurement sequence by a schedule file that is loaded into the program only at runtime allows precise and flexible timing control of the measurement system. The evaluation of the measurement results is put on a solid footing by methods for an integrated representation of the concentration calculation and by a rigorous treatment of errors. The basic prerequisites for the concentration calculation are explained and verified in detail using experimental examples. In addition, an iterative procedure makes the concentration calculation independent of the mathematical or empirical representation of the dose-response curve. To use an extended EAG apparatus for measuring complex scent mixtures, the measurement system was fundamentally redesigned in the areas of control and evaluation and made fully operational. For this purpose, the control system was extended, the program for data acquisition was restructured, and a method for calculating concentrations of scent mixtures was developed and implemented in corresponding evaluation software. The most important experimental result is the execution and evaluation of a special measurement in which the EAG system was used in parallel with a classical gas chromatograph method. The results allow, for the first time, an absolute calibration of the concentration results of the EAG measurement system for the codling moth pheromone. Previously, results could only be given in relative units.
Magnetic spin-based memory technologies are a promising solution to overcome the incoming limits of microelectronics. Nevertheless, the long write latency and high write energy of these memory technologies compared to SRAM make it difficult to use them for fast microprocessor memories, such as L1 caches. However, the recent advent of the Spin Orbit Torque (SOT) technology changed the story: indeed, it potentially offers a writing speed comparable to SRAM with a much better density than SRAM and an infinite endurance, paving the way to a new paradigm in processor architectures, with the introduction of non-volatility at all levels of the memory hierarchy towards fully normally-off and instant-on processors. This paper presents a full design flow, from device to system, that allows evaluating the potential of SOT for microprocessor cache memories, together with very encouraging simulation results obtained using this framework.
Sokrates und das Nichtwissen
(1997)
Both the increased complexity of the signal processing algorithms and the more extensive range of services, as well as the parallel processing required to achieve the necessary high computing power, will lead to a sharply rising complexity of the digital signal processing in future mobile radio systems. This complexity can only be mastered with a hierarchical modeling and design process. While the lower levels of the hierarchy, programming and hardware design, are already well mastered today, the design methods at the higher system level are still unclear. The present thesis contributes to systematizing the design at higher levels of the hierarchy. To this end, the system-level design of an experimental system for the JD-CDMA mobile radio concept is considered. It is shown that the control loop model (Steuerkreismodell) is an appropriate system-level model for the digital signal processing in a mobile radio system. On the one hand, the control loop model can be mapped directly onto the multiprocessor systems to be used in future mobile radio systems; on the other hand, it also corresponds to the communications engineering view of the task, in which the mobile radio system is described by the algorithms to be executed. The control loop model is thus a suitable link for getting from the task description to an implementation. Furthermore, it is shown that the control loop model has great modeling power and that its use, in contrast to many known design methods, is not limited to systems that can be described by data flow models. The classical design steps known from data-flow-based system synthesis, namely allocation, scheduling and binding, can be understood in the context of control loop modeling as methods for constructing the task of the controller. Specifically for the experimental system, two different scheduling strategies are modeled and investigated. Fully dynamic scheduling is performed at runtime and therefore does not rely on the executed sequences being known a priori. With self-timed scheduling, the sequences, which are known a priori here, are planned at system construction time, and at runtime this plan is merely executed. Finally, the effects of packet-switched, bursty message transmission on the digital signal processing in future mobile radio systems are examined. It is shown that data buffering makes it very well possible to average the computational load in a mobile radio system.
Programs are linguistic structures which contain identifications of individuals: memory locations, data types, classes, objects, relations, functions etc. must be identified selectively or definingly. The first part of the essay which deals with identification by showing and designating is rather short, whereas the remaining part dealing with paraphrasing is rather long. The reason is that for an identification by showing or designating no linguistic compositions are needed, in contrast to the case of identification by paraphrasing. The different types of functional paraphrasing are covered here in great detail because the concept of functional paraphrasing is the foundation of functional programming. The author had to decide whether to cover this subject here or in his essay Purpose versus Form of Programs where the concept of functional programming is presented. Finally, the author came to the conclusion that this essay on identification is the more appropriate place.
The present thesis deals with a novel air interface concept for beyond 3G mobile radio systems. Signals received at a certain reference cell in a cellular system which originate in neighboring cells of the same cellular system are undesired and constitute the intercell interference. Due to intercell interference, the spectrum capacity of cellular systems is limited and therefore the reduction of intercell interference is an important goal in the design of future mobile radio systems. In the present thesis, a novel service area based air interface concept is investigated in which interference is combated by joint detection and joint transmission, providing an increased spectrum capacity as compared to state-of-the-art cellular systems. Various algorithms are studied, with the aid of which intra service area interference can be combated. In the uplink transmission, by optimum joint detection the probability of erroneous decision is minimized. Alternatively, suboptimum joint detection algorithms can be applied offering reduced complexity. By linear receive zero-forcing joint detection interference in a service area is eliminated, while by linear minimum mean square error joint detection a trade-off is performed between interference elimination and noise enhancement. Moreover, iterative joint detection is investigated and it is shown that convergence of the data estimates of iterative joint detection without data estimate refinement towards the data estimates of linear joint detection can be achieved. Iterative joint detection can be further enhanced by the refinement of the data estimates in each iteration. For the downlink transmission, the reciprocity of uplink and downlink channels is used by joint transmission eliminating the need for channel estimation and therefore allowing for simple mobile terminals. A novel algorithm for optimum joint transmission is presented and it is shown how transmit signals can be designed which result in the minimum possible average bit error probability at the mobile terminals. By linear transmit zero-forcing joint transmission interference in the downlink transmission is eliminated, whereas by iterative joint transmission transmit signals are constructed in an iterative manner. In a next step, the performance of joint detection and joint transmission in service area based systems is investigated. It is shown that the price to be paid for the interference suppression in service area based systems is the suboptimum use of the receive energy in the uplink transmission and of the transmit energy in the downlink transmission, with respect to the single user reference system. In the case of receive zero-forcing joint detection in the uplink and transmit zero-forcing joint transmission in the downlink, i.e., in the case of linear unbiased data transmission, it is shown that the same price, quantified by the energy efficiency, has to be paid for interference elimination in both uplink and downlink. Finally it is shown that if the system load is fixed, the number of active mobile terminals in a SA and hence the spectrum capacity can be increased without any significant reduction in the average energy efficiency of the data transmission.
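In the generic notation commonly used for such systems (uplink received signal e = A d + n with system matrix A, downlink system matrix B and transmit signal t; the symbols are assumptions here, not the thesis' own notation), the two linear zero-forcing schemes mentioned above take the familiar forms:

```latex
% Receive zero-forcing joint detection (uplink):
\[
  \hat{\mathbf{d}} = \bigl(\mathbf{A}^{\mathrm{H}}\mathbf{A}\bigr)^{-1}\mathbf{A}^{\mathrm{H}}\,\mathbf{e}
\]
% Transmit zero-forcing joint transmission (downlink), minimum-energy signal with B t = d:
\[
  \mathbf{t} = \mathbf{B}^{\mathrm{H}}\bigl(\mathbf{B}\mathbf{B}^{\mathrm{H}}\bigr)^{-1}\mathbf{d}
\]
```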
In search of new technologies for optimizing the performance and space requirements of electronic and optical micro-circuits, the concept of spoof surface plasmon polaritons (SSPPs) has come to the fore of research in recent years. Due to the ability of SSPPs to confine and guide the energy of electromagnetic waves in a subwavelength space below the diffraction limit, SSPPs deliver all the tools to implement integrated circuits with a high integration rate. However, in order to guide SSPPs in the terahertz frequency range, it is necessary to carefully design metasurfaces that allow one to manipulate the spatio-temporal and spectral properties of the SSPPs at will. Here, we propose a specifically designed cut-wire metasurface that sustains strongly confined SSPP modes at terahertz frequencies. As we show by numerical simulations and also prove in experimental measurements, the proposed metasurface can tightly guide SSPPs on straight and curved pathways while maintaining their subwavelength field confinement perpendicular to the surface. Furthermore, we investigate the dependence of the spatio-temporal and spectral properties of the SSPP modes on the width of the metasurface lanes that can be composed of one, two or three cut-wires in the transverse direction. Our investigations deliver new insights into downsizing effects of guiding structures for SSPPs.
Multicore processors and Multiprocessor System-on-Chip (MPSoC) have become essential in Real-Time Systems (RTS) and Mixed-Criticality Systems (MCS) because of their additional computing capabilities that help reduce Size, Weight, and Power (SWaP), required wiring, and associated costs. In distributed systems, a single shared multicore or MPSoC node executes several applications, possibly of different criticality levels. However, there is interference between applications due to contention in shared resources such as CPU core, cache, memory, and network.
Existing allocation and scheduling methods for RTS and MCS often rely on implicit assumptions of the constant availability of individual resources, especially the CPU, to provide guaranteed progress of tasks. Most existing approaches aim to resolve contention in only a specific shared resource or a set of specific shared resources. Moreover, they handle a limited number of events such as task arrivals and task completions.
In distributed RTS and MCS with several nodes, each having multiple resources, valid assumptions about resource availability become difficult to establish if the applications, the available resources, or the system configurations change. Thus, it is challenging to meet end-to-end constraints by considering each node, resource, or application individually.
Such RTS and MCS need global resource management to coordinate and dynamically adapt the system-wide allocation of resources. In addition, the resource management can dynamically adapt applications to the changing availability of resources and maintain a system-wide (global) view of resources and applications.
The overall aim of global resource management is twofold.
Firstly, it must ensure real-time applications meet their end-to-end deadlines even in the presence of faults and changing environmental conditions. Secondly, it must provide efficient resource utilization to improve the Quality of Service (QoS) of co-executing Best-Effort (BE) (or non-critical) applications.
A single fault in global resource management can render it useless. In the worst case, the resource management can make faulty decisions leading to a deadline miss in real-time applications. With the advent of Industry 4.0, cloud computing, and Internet-of-Things (IoT), it has become essential to combine stringent real-time constraints and reliability requirements with the need for an open-world assumption and ensure that the global resource management does not become an inviting target for attackers.
In this dissertation, we propose a domain-independent global resource management framework for distributed RTS and MCS consisting of heterogeneous nodes based on multicore processors or MPSoC. We initially developed the framework with the French Aerospace Lab -- ONERA and Thales Research & Technology during the DREAMS project and later extended it during SECREDAS and other internal projects. Unlike previous resource management frameworks for RTS and MCS, we consider both safety and security for the framework itself.
To enable real-time industries to use cloud computing and enter a new market segment -- real-time operation as a cloud-based service, we propose a Real-Time-Cloud (RT-Cloud) based on global resource management for hosting RTS and MCS.
Finally, we present a mixed-criticality avionics use case for evaluating the capabilities of the global resource management framework in handling permanent core failures and temporary overload conditions, and a railway use case to motivate the use of RT-Cloud with global resource management.
Control concept for low-voltage grid automation using the merit-order principle
(2022)
The growing generation capacity at the low-voltage (LV) grid level due to photovoltaic systems, together with the electrification of the heating and transport sectors, makes investments in the LV grids necessary. A higher degree of digitalization in the LV grid offers the potential to identify the required investments more precisely and thus possibly to reduce or postpone them. The market introduction of intelligent metering systems, so-called smart meters, provides a new way to obtain measurements from the LV grid and to optimize the setpoints of the available actuators on this basis. This raises the question of how measurement data with different measurement cycles can be used in a grid automation system and how the non-linear integer optimization problem of setpoint optimization can be solved efficiently. This thesis addresses the solution of this optimization problem, applying a setpoint optimization based on the merit-order principle.
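A minimal sketch of the merit-order idea described above, assuming a hypothetical list of actuators with a specific cost per unit of flexibility and a required corrective power; the actuator names and cost figures are illustrative only and not taken from the thesis.

    # Minimal merit-order dispatch sketch (illustrative, not the thesis implementation).
    # Actuators are sorted by specific cost; the cheapest flexibility is used first
    # until the required corrective power is covered.

    from dataclasses import dataclass

    @dataclass
    class Actuator:
        name: str            # hypothetical identifier
        available_kw: float  # controllable power in kW
        cost_per_kw: float   # specific cost of using this flexibility

    def merit_order_dispatch(actuators, required_kw):
        """Return a list of (actuator, dispatched kW) covering required_kw at minimum cost."""
        plan = []
        remaining = required_kw
        for act in sorted(actuators, key=lambda a: a.cost_per_kw):
            if remaining <= 0:
                break
            used = min(act.available_kw, remaining)
            plan.append((act.name, used))
            remaining -= used
        return plan, remaining  # remaining > 0 means the need could not be fully covered

    if __name__ == "__main__":
        acts = [Actuator("PV curtailment", 30.0, 0.8),
                Actuator("Heat pump shift", 10.0, 0.3),
                Actuator("EV charging delay", 20.0, 0.2)]
        plan, rest = merit_order_dispatch(acts, required_kw=40.0)
        print(plan, rest)

In the full integer problem described in the thesis, discrete setpoints and grid constraints make the selection harder than this greedy sketch; the merit order only fixes the priority in which flexibility is considered.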
Field-effect transistor (FET) sensors and in particular their nanoscale variant of silicon nanowire transistors are very promising technology platforms for label-free biosensor applications. These devices directly detect the intrinsic electrical charge of biomolecules at the sensor’s liquid-solid interface. The maturity of micro fabrication techniques enables very large FET sensor arrays for massive multiplex detection. However, the direct detection of charged molecules in liquids faces a significant limitation due to a charge screening effect in physiological solutions, which inhibits the realization of point-of-care applications. As an alternative, impedance spectroscopy with FET devices has the potential to enable measurements in physiological samples. Even though promising studies were published in the field, impedimetric detection with silicon FET devices is not well understood.
The first goal of this thesis was to understand the device performances and to relate the effects seen in biosensing experiments to device and biomolecule types. A model approach should help to understand the capability and limitations of the impedimetric measurement method with FET biosensors. In addition, to obtain experimental results, a high precision readout device was needed. Consequently, the second goal was to build up multi-channel, highly accurate amplifier systems that would also enable future multi-parameter handheld devices.
A PSPICE FET model for potentiometric and impedimetric detection was adapted to the experiments and further expanded to investigate the sensing mechanism, the working principle, and effects of side parameters for the biosensor experiments. For potentiometric experiments, the pH sensitivity of the sensors was also included in this modelling approach. For impedimetric experiments, solutions of different conductivity were used to validate the suggested theories and assumptions. The impedance spectra showed two pronounced frequency domains: a low-pass characteristic at lower frequencies and a resonance effect at higher frequencies. The former can be interpreted as a contribution of the source and double layer capacitances. The latter can be interpreted as a combined effect of the drain capacitance with the operational amplifier in the transimpedance circuit.
Two readout systems, one as a laboratory system and one as a point-of-care demonstrator, were developed and used for several chemical and biosensing experiments. The PSPICE models of the sensors and circuits were utilized to optimize the systems and to explain the sensor responses. The systems, as well as the developed modelling approach, were a significant step towards portable instruments with combined transducer principles in future healthcare applications.
Lowering the supply voltage of Static Random-Access Memories (SRAM) is key to reducing power consumption; however, since this degrades circuit performance, it can lead to various forms of loss of functionality. In this work, we present silicon results showing significant yield improvement achieved with write and read assist techniques on a 6T high-density bitcell manufactured in 40 nm technology. The data is successfully modeled with an original SPICE-based method that reproduces, at high computational efficiency, the effects of static negative-bitline write assist and of static wordline-underdrive read assist, while the effects on yield of read-ability losses due to low-voltage operation are not taken into account in the model.
The use of hands-free equipment for speech communication in vehicles requires the reduction of the ambient noise picked up together with the speech signal. The acoustic disturbances generally impair the intelligibility of the speech signal to be transmitted. Numerous methods and approaches for noise reduction have been proposed and described in the literature. In principle, these approaches can be divided into three categories: single-channel noise reduction systems, such as the spectral subtraction method; multi-channel noise compensation methods, which require at least one noise reference signal; and adaptive microphone arrays, which use a direction-selective technique (beamforming) to capture the speech signal. This work focuses exclusively on single-channel noise reduction systems, as they are frequently found in motor vehicles or telephones for cost and design reasons. Multi-channel methods are treated only briefly for the sake of completeness. Single-channel methods are characterized by the compromise between the attenuation of the disturbing noise on the one hand and the unavoidable distortions of the speech signal and of the residual noise on the other. These distortions are perceivable as sporadically occurring tonal artifacts (musical tones) or as colorations of the speech signal. Because of their tonal structure, such errors in the output signal are perceived as extremely annoying and degrade the subjective listening impression. Recently, methods have therefore been developed with the aim of suppressing, as far as possible, all distortions that occur. For example, non-linear methods known from image processing or special detection algorithms have been designed to solve the problem in a unified way. Particularly new are methods that exploit psychoacoustic properties of the human ear in order to mask at least part of the distortions. Here, methods are used that, by formulating a psychoacoustic weighting rule, try to find an optimal compromise between the amount of noise attenuation, the residual noise and the resulting speech intelligibility. In the present work, a classical single-channel noise reduction method served as the starting point for the development of a new psychoacoustic-parametric method. Models of human speech production and perception were used to find suitable methods for psychoacoustic noise reduction and signal enhancement. The result is three new methods that adapt to the characteristics of the human ear depending on the input signal and thereby keep distortions of the speech signal and of the residual noise below the psychoacoustic threshold of perceptibility, the so-called masking threshold. This leads to a noticeable improvement of the subjective listening impression and has a positive influence on speech intelligibility. In substantial parts of this work, aspects of the psychological perception of acoustic signals and known psychoacoustic properties of the human ear are exploited for auditory signal enhancement, noise reduction and the identification of acoustic systems. Accordingly, the first part gives a short introduction to the theory of signal processing and psychoacoustics.
This is followed by the presentation of a method for auditory signal enhancement and noise reduction exploiting psychoacoustic masking effects. This section is particularly detailed, since it forms the main part of this work. The third part describes experimental investigations and the evaluation of the different methods. Finally, a summary and a scientific outlook are given.
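As a point of reference for the spectral subtraction approach mentioned above (a textbook formulation, not the specific psychoacoustic weighting rules developed in the thesis), the short-time spectrum of the enhanced signal is often obtained as
\[
|\hat{S}(k,\lambda)|^{2} = \max\!\left(|Y(k,\lambda)|^{2} - \alpha\,|\hat{N}(k,\lambda)|^{2},\; \beta\,|Y(k,\lambda)|^{2}\right),
\]
where \(Y\) is the noisy input spectrum, \(\hat{N}\) the estimated noise spectrum, \(k\) the frequency bin, \(\lambda\) the frame index, \(\alpha\) the oversubtraction factor and \(\beta\) the spectral floor. Choosing \(\alpha\) and \(\beta\) too aggressively is precisely what produces the musical tones discussed above, which the psychoacoustic weighting rules aim to keep below the masking threshold.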
The modern residential building is characterized by a low heating demand. As the sensitivity of the building to solar radiation increases due to novel systems such as transparent thermal insulation, phase change materials or large window areas, the conventional control approach for maintaining a comfortable indoor climate has to be extended. The thermal behavior of the building becomes far more complex. In addition to the necessary heating, further actuators (shading devices, ventilation) come into play. The increase in realizable solar gains leads to savings of fossil energy in winter. In summer, however, overheating results unless suitable measures are taken. With the help of modern, networked control technology, the economic efficiency of the system and the comfort can be optimized. For this purpose, established simulation tools were extended. Modern components such as phase change materials, transparent thermal insulation and shading systems based on a switching layer in the glazing were modeled. Extensive validations of the sub-models and of the overall model show that a realistic representation is achieved. The basis for these validation sequences were field-test measurements on occupied buildings as well as results from system test benches. The reliability of the complete building model was established by a so-called "cross-validation" against other established simulation programs. With this realistic model of a solar-optimized residential building coupled to a heating-supporting solar thermal system, the foundation was laid for the development of a predictive heat flow control in residential buildings with extended thermal behavior. An investigation of the dynamic response to periodic excitation of the influencing variables reveals the dominant time constants of the building system. Solar radiation entering through windows has the fastest effect on the perceived room temperature. This has consequences for the predictive control. While in winter the solar radiation is to be used to support the heating, in summer the occupants must be protected from overheating. So that the controller does not preheat unnecessarily on sunny days during the heating period (i.e., it switches off early because the future heating demand can be covered by the sun) and so that overheating (especially in summer) can be avoided, a prediction horizon for the predictor was determined for the investigated building model. Based on the model-based control concept, an overarching heat management system was developed which predicts the thermal state of the building using the information of a local weather forecast. Owing to the reduced model implemented in the controller, the prediction takes the particular properties of the employed facade components into account. Extensive simulation-based investigations evaluate the control concept. A building with a conventional control concept serves as the reference system.
Property-Driven Design
(2021)
We introduce Property-Driven Design, a tool-flow that guarantees formal soundness between ESL and RTL and thus enables a shift-left of general functional verification by moving HW verification to higher abstraction layers. In addition, by generating a formal Verification IP (VIP) automatically from ESL descriptions, the entry hurdle to formal methods is reduced considerably, opening them to a wider audience, which effectively 'democratizes' them. Short feedback cycles reduce time spent on RTL verification and lead to higher-quality designs.
This essay is about a classification of programs according to two orthogonal criteria. Program and software are not regarded as synonyms here; being a program is equated with being executable, i.e., something is a program if and only if one can answer the question of what it would mean for this something to be executed. There are indeed software artifacts for which this question makes no sense and which are therefore not programs - for example, a function library or a class library. A classification is useful if it makes diversity easier to survey - the diversity of the pupils of a large school becomes easier to survey when the pupils are "classified", i.e., when they are sitting in their classrooms. The classification presented in the following is intended to make the diversity of programs easier to survey.
In this work, a formal model for the description of low-level software is presented: the program netlist. The program netlist (PN) consists of instruction cells connected in a directed acyclic graph that contains all execution paths of the program under consideration. Each instruction cell represents an instruction or an instruction sequence. The PN provides an explicit representation of the program flow and an implicit modelling of the data path, and it can be used as a model for software verification. The software is considered at the machine-code level. The model generation consists of a few, easily automatable steps. The starting point is a - possibly incomplete - control flow graph (CFG) that can be generated from the software. The model generation consists of two steps. The first step is the creation of the explicit program flow by unrolling the CFG. This produces a so-called execution graph (EXG) that contains all possible execution paths of the program under consideration. To keep this model as compact as possible, different techniques are applied, such as merging common paths and detecting "dead" branches in the program that can never be executed at the corresponding location. In the second step, the execution graph is translated into the program netlist (PN). In this translation, every node of the EXG is replaced by a corresponding instruction cell. The edges of the graph correspond to the program state. The program state is composed of the variables in memory as well as the architectural state of the underlying processor. The program state in the program netlist is complemented by a so-called active bit, which makes it possible to mark the active path in the netlist. This is necessary because the software can only execute one path at a time, whereas the PN contains all possible paths. On the program netlist, various properties can then be proven with the aid of hardware property checkers based on BMC or IPC. In addition, the program netlist is extended by the capability to model interrupts.
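A minimal sketch of the unrolling step described above, assuming a toy CFG representation with basic-block successors and a bounded unrolling depth; the data structures and names are illustrative and not taken from the thesis.

    # Illustrative sketch: unroll a control flow graph (CFG) into an execution graph (EXG)
    # up to a bounded depth, merging nodes that share (block, depth) so that common
    # paths are represented only once. Not the thesis implementation.

    # Toy CFG: basic block -> list of successor blocks (empty list marks program exit).
    CFG = {
        "entry": ["check"],
        "check": ["then", "else"],   # a branch
        "then":  ["join"],
        "else":  ["join"],
        "join":  [],                 # program end
    }

    def unroll(cfg, entry="entry", max_depth=16):
        """Return EXG nodes (block, depth) and the edges between them."""
        nodes, edges = set(), set()
        frontier = [(entry, 0)]
        while frontier:
            block, depth = frontier.pop()
            if (block, depth) in nodes or depth > max_depth:
                continue
            nodes.add((block, depth))
            for succ in cfg.get(block, []):
                edges.add(((block, depth), (succ, depth + 1)))
                frontier.append((succ, depth + 1))
        return nodes, edges

    if __name__ == "__main__":
        nodes, edges = unroll(CFG)
        print(len(nodes), "EXG nodes;", len(edges), "edges")

In the translated program netlist, each such EXG node would become an instruction cell, and an additional active bit on the edges would mark which of the enumerated paths is actually taken in a concrete execution.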
Memory accesses are the bottleneck of modern computer systems, both in terms of performance and energy. This barrier, known as "the Memory Wall", can be broken by utilizing memristors. Memristors are novel passive electrical components whose resistance varies with the charge passing through the device [1]. In this abstract, the term "memristor" also covers an extension of the definition, memristive devices, which vary their resistance depending on a state variable [2]. While memristors are naturally used as memory cells, they can also be used for other applications, such as logic circuits [3].
We present a novel architecture that redefines the relationship between the memory and the processor by enabling data processing within the memory itself. Our architecture is based on a memristive memory array, in which we perform two basic logic operations: Imply (material implication) [4] and False.
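As a brief illustration of why the two operations named above suffice (a standard completeness argument for material implication logic, not a description of the specific array operations), note that with False writing the constant 0, negation and NAND follow directly:
\[
\neg p \;=\; p \rightarrow 0, \qquad \mathrm{NAND}(p,q) \;=\; p \rightarrow (q \rightarrow 0),
\]
and since NAND is functionally complete, any Boolean function can in principle be computed by a sequence of Imply and False operations within the memristive array.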
The Power and Energy Student Summit (PESS) is designed for students, young professionals and PhD-students in the field of power engineering. PESS offers the possibility to gain first experience in presentation, publication and discussion with a renowned audience of specialists. Therefore, the conference is accompanied and supervised by established scientists and experts. The venue changes every year. In 2018, the University of Kaiserslautern held the eighth PESS conference. This document presents the submissions of this conference.
Neural networks have been extensively used for tasks based on image sensors. Over the past decade, these models have consistently performed better than other machine learning methods on computer vision tasks. It is understood that transfer learning from neural networks trained on large datasets can reduce the total data requirement when training new neural network models. These methods, however, tend not to perform well when the recording sensor or the recording environment differs strongly from the existing large datasets. The machine learning literature provides various methods for including prior information in a learning model, for example by designing biases into the data representation vectors or by enforcing priors or physical constraints on the models. Including such information in neural networks for image-frame and image-sequence classification is hard because of the very high-dimensional neural network mapping function and the limited insight into the relations among the neural network parameters. In this thesis, we introduce methods for evaluating the statistically learned data representation and for combining these information descriptors. We have introduced methods for including information in neural networks. In a series of experiments, we demonstrate methods for adding existing model or task information to neural networks. This is done by 1) adding architectural constraints based on the physical shape information of the input data, 2) including weight priors on neural networks by training them to mimic statistical and physical properties of the data (hand shapes), and 3) using knowledge about the classes involved in the classification task to modify the neural network outputs. These methods are demonstrated, and their positive influence on hand-shape and hand-gesture classification tasks is reported. This thesis also proposes methods for combining statistical and physical models with parametrized learning models and shows improved performance at constant data size. Finally, these proposals are tied together to develop an in-car hand-shape and hand-gesture classifier based on a Time-of-Flight sensor.
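A minimal sketch of the second idea listed above (weight priors obtained by training a network to mimic physical properties of the data), written as a generic two-stage procedure; the network sizes, descriptor dimensions and data are placeholders and not the models of the thesis.

    # Illustrative sketch of a "weight prior by mimicking" scheme: pretrain a small
    # backbone to reproduce hand-crafted physical descriptors of the input, then
    # reuse its weights to initialize a classifier. Not the thesis implementation.

    import torch
    import torch.nn as nn

    FEATURE_DIM = 128    # assumed size of a flattened depth patch
    DESCRIPTOR_DIM = 8   # assumed number of physical descriptors (e.g., hand extent)
    NUM_CLASSES = 10     # assumed number of hand-shape classes

    backbone = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, 32), nn.ReLU())

    # Stage 1: teach the backbone to mimic physical descriptors (regression).
    mimic_head = nn.Linear(32, DESCRIPTOR_DIM)
    mimic_loss = nn.MSELoss()
    opt = torch.optim.Adam(list(backbone.parameters()) + list(mimic_head.parameters()), lr=1e-3)
    x = torch.randn(256, FEATURE_DIM)          # stand-in for depth-sensor patches
    phys = torch.randn(256, DESCRIPTOR_DIM)    # stand-in for computed physical descriptors
    for _ in range(100):
        opt.zero_grad()
        loss = mimic_loss(mimic_head(backbone(x)), phys)
        loss.backward()
        opt.step()

    # Stage 2: reuse the backbone (now carrying the "physical" prior) for classification.
    classifier = nn.Sequential(backbone, nn.Linear(32, NUM_CLASSES))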
Safety-related systems (SRS) protect from the unacceptable risk resulting from failures of technical systems. The average probability of dangerous failure on demand (PFD) of these SRS in low-demand mode is limited by standards. Probabilistic models are applied to determine the average PFD and to verify the specified limits. In this thesis, an effective framework for the probabilistic modeling of complex SRS is provided. This framework enables the computation of the average, instantaneous, and maximum PFD. In SRS, preventive maintenance (PM) is essential to achieve an average PFD in compliance with the specified limits. PM is intended to reveal dangerous undetected failures and provides repair if necessary. The introduced framework pays special attention to the precise and detailed modeling of PM. Multiple previously neglected degrees of freedom of the PM are considered, such as two types of element-wise PM at arbitrarily variable times. As shown by the analyses, these degrees of freedom have a significant impact on the average, instantaneous, and maximum PFD. The PM is optimized to improve the average or maximum PFD or both. A well-known heuristic nonlinear optimization method (the Nelder-Mead method) is applied to minimize the average or maximum PFD or a weighted trade-off. A significant improvement of the objectives and an improved protection are achieved. These improvements are obtained via the available degrees of freedom of the PM and without additional effort. Moreover, a set of rules is presented to decide, for a given SRS, whether significant improvements will be achieved by optimization of the PM. These rules are based on well-known characteristics of the SRS, e.g. redundancy or no redundancy, complete or incomplete coverage of PM. The presented rules aim to support the decision whether the optimization is advantageous for a given SRS and whether it should be applied or not.
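A minimal sketch of the optimization idea described above, assuming a deliberately simplified single-channel PFD model (constant dangerous undetected failure rate, ideal proof tests); the model and parameter values are illustrative and do not reflect the detailed framework of the thesis.

    # Minimize the average PFD of a single channel over the proof-test instants.
    # Simplified model: ideal tests at times 0 < t1 < ... < tn < T reset the PFD;
    # between tests the PFD grows approximately as lambda_DU * (t - t_last_test).

    import numpy as np
    from scipy.optimize import minimize

    LAMBDA_DU = 1e-6   # dangerous undetected failure rate per hour (illustrative)
    T = 87600.0        # mission time: 10 years in hours
    N_TESTS = 3        # number of preventive maintenance actions

    def average_pfd(test_times):
        times = np.concatenate(([0.0], np.sort(np.clip(test_times, 0.0, T)), [T]))
        intervals = np.diff(times)
        # time-average of lambda*(t - t_last) over one interval is lambda*interval/2
        return float(np.sum(LAMBDA_DU * intervals**2 / 2.0) / T)

    x0 = np.array([0.2, 0.4, 0.6]) * T            # unevenly spaced initial guess
    res = minimize(average_pfd, x0, method="Nelder-Mead")
    print("optimized test times [h]:", np.sort(res.x))
    print("average PFD:", average_pfd(res.x))     # optimum approaches equally spaced tests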
As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, their economic realization is a challenge for the chip making industry. This is, however, barely possible today, as the ability to design and verify such complex systems could not keep up with the rapid technological development. Owing to this productivity gap, a design methodology mainly using pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. In a cost-effective way, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows for using backtrack search procedures, such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, featuring regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackle these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL). This work describes a new approach for property checking on the RTL which preserves the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence from the search space. Several approaches for symmetry reduction in search problems, based on the invariance of a function under permutation of variables, have been proposed previously. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases. Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a variable permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), either all of them are removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we found a counter-example either with \(x'=0\) or \(x'=1\). Extending this basic idea to vectors of variables allows elevating it to the RTL. There, self-similarities in the function representation, resulting from the preserved regular structure, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs that were previously intractable.
The main contributions of this work are in providing a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
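As a small worked example of the definition of symmetrical values given above (our own illustration, not taken from the thesis), consider \(f(x,y,z) = (x \wedge y) \vee (\neg x \wedge z)\). Then
\[
f|_{x=1} = y, \qquad f|_{x=0} = z,
\]
and the permutation \(\pi\) that swaps \(y\) and \(z\) while fixing \(x\) maps \(f|_{x=1}\) to \(f|_{x=0}\). Hence \(0\) and \(1\) are symmetrical values for \(x\), and the question whether \(f=1\) holds can be decided on the single cofactor \(f|_{x=1} = y\); since \(y\) is not a tautology, \(f=1\) does not hold.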
The present thesis deals with multi-user mobile radio systems, and more specifically, with the downlinks (DL) of such systems. As a key demand on future mobile radio systems, they should enable the highest possible spectrum and energy efficiency. It is well known that, in principle, the utilization of multiple antennas in the form of MIMO systems offers considerable potential to meet this demand. Concerning the energy issue, the DL is more critical than the uplink. This is due to the growing importance of wireless Internet applications, in which the DL data rates and, consequently, the radiated DL energies tend to be substantially higher than the corresponding uplink quantities. In this thesis, precoding schemes for MIMO multi-user mobile radio DLs are considered, where, in order to keep the complexity of the mobile terminals as low as possible, the rationale of receiver orientation (RO) is adopted, with the main focus on further reducing the required transmit energy in such systems. Unfortunately, besides the mentioned low receiver complexity, conventional RO schemes, such as Transmit Zero Forcing (TxZF), do not offer any transmit energy reductions as compared to conventional transmitter-oriented schemes. Therefore, the main goal of this thesis is the design and analysis of precoding schemes in which such transmit energy reductions become feasible - while virtually maintaining the low receiver complexity - by means of replacing the conventional unique mappings by selectable representations of the data. Concerning the channel access scheme, Orthogonal Frequency Division Multiplex (OFDM) is presently being favored as the most promising candidate in the standardization process of the enhanced 3G and forthcoming 4G systems, because it allows a very flexible resource allocation and low receiver complexity. Receiver-oriented MIMO OFDM multi-user downlink transmission, in which channel equalization is already performed in the transmitter of the access point, further contributes to low receiver complexity in the mobile terminals. For these reasons, OFDM is adopted in the target system of the considered receiver-oriented precoding schemes. In the precoding schemes considered, knowledge of channel state information (CSI) in the access point in the form of the channel matrix is essential. Independently of the applied duplexing scheme, FDD or TDD, the provision of this information to the access point is always erroneous. However, it is shown that the impact of such deviations scales not only with the variance of the channel estimation errors, but also with the required transmit energies. Accordingly, the reduced transmit energies of the precoding schemes with selectable data representation also have the advantage of a reduced sensitivity to imperfect knowledge of the CSI. In fact, these two advantages are coupled with each other.
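As an illustration of the transmit zero-forcing idea mentioned above (a standard formulation under the assumption of perfect CSI, not necessarily the notation of the thesis), with downlink channel matrix \(\mathbf{H}\) and data vector \(\mathbf{d}\), the transmit signal can be chosen as
\[
\mathbf{s} = \mathbf{H}^{\mathrm{H}}\!\left(\mathbf{H}\mathbf{H}^{\mathrm{H}}\right)^{-1}\mathbf{d},
\]
so that \(\mathbf{H}\mathbf{s} = \mathbf{d}\) and each mobile terminal receives its data free of intra-system interference without performing any channel equalization itself; the required transmit energy \(\lVert\mathbf{s}\rVert^{2}\) is exactly the quantity that the precoding schemes with selectable data representation aim to reduce.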
The supply tasks of low-voltage grids will presumably change strongly in the coming decades compared with those of the year 2018, due to the further spread of photovoltaic systems, heat pump heating and electric vehicles. The planning principles for the construction of new low-voltage grids that are common in practice are outdated, since in many cases their fundamentals date from times in which the new loads and infeeds were not expected and accordingly not taken into account. The need for new planning principles coincides with the availability of voltage-regulated distribution transformers (rONT), which can be used to improve the voltage conditions in the grid. The new planning principles developed here require the use of an rONT for rural and suburban supply tasks (but not for urban supply tasks) in order to cope with the high loads expected for the year 2040 at low cost. A suitable standard rONT control characteristic is given. In all cases, cables with a cross-section of 240 mm², laid in parallel in some sections, are recommended.
This scenario is an extension of a sub-scenario of Human Centered Manufacturing. It concerns the assembly of the power electrics for industrial plants. In the year 2015, the equipment of an electrician wiring switch cabinets includes, among other things, a protective helmet with an integrated color camera, an integrated microphone and a loudspeaker near the ear, as well as an automatically controlled laser pointer. No plans are needed on the construction site any more. The fitter does not need a plan during assembly.
In this thesis, a new family of codes for the use in optical high-bit-rate transmission systems with a direct-sequence code division multiple access component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The common codes known to date have the problem of needing very long sequence lengths in order to accommodate an adequate number of users. Thus, code sequence lengths of 1000 or more were necessary to reach the required bit error ratios with only about 10 simultaneous users. However, such sequence lengths are unacceptable if signals with data rates higher than 100 MBit/s are to be transmitted, not to speak of the number of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters - justified for high-bit-rate WDM transmission systems - a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each pair of sequences. By this, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced into numerical simulations of an 80 GBit/s DWDM transmission system with 8 channels, each carrying a 10 GBit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings, in combination with the high bit rates, lead to very strict demands on system components like laser diodes, filters, multiplexers etc. Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent degradations of the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that - very different from the pure WDM system - by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively combated. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows a very robust behavior against a variety of influences.
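A minimal sketch of the selection criterion described above (keeping only candidate sequences whose pairwise cross-correlation stays low), with random candidate generation and a placeholder threshold; this illustrates the criterion only and is not the MOOC construction itself.

    # Greedily build a code family from candidate 0/1 sequences such that the maximum
    # periodic cross-correlation between any two accepted sequences stays below a bound.

    import numpy as np

    def max_periodic_crosscorrelation(a, b):
        """Maximum of the periodic cross-correlation of two equal-length 0/1 sequences."""
        n = len(a)
        return max(int(np.dot(a, np.roll(b, s))) for s in range(n))

    def greedy_code_family(candidates, threshold):
        family = []
        for cand in candidates:
            if all(max_periodic_crosscorrelation(cand, c) <= threshold for c in family):
                family.append(cand)
        return family

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, weight = 64, 4                      # sequence length and code weight (illustrative)
        candidates = []
        for _ in range(500):
            seq = np.zeros(n, dtype=int)
            seq[rng.choice(n, size=weight, replace=False)] = 1
            candidates.append(seq)
        family = greedy_code_family(candidates, threshold=1)
        print("accepted sequences:", len(family))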
The system-theoretic justification for the introduction of the concept of state can be found in the mosaic piece "Der Zustandsbegriff in der Systemtheorie". While the considerations there deal with both continuous and discrete systems, the considerations here are restricted to discrete systems.
For the development and planning of energy-saving buildings and for the design of suitable control algorithms, detailed knowledge of the thermal and energetic behavior of a building, which interacts with its environment and its occupants, is required. This is provided by a mathematical model. The description of large, complex technical systems leads to highly complex, extensive mathematical models which, once implemented for simulation, result in large software systems. It is therefore natural to use concepts from computer science in mathematical modeling as well. Besides the decomposition into subsystems and the structuring concepts for mastering complexity, a current research topic of computer science is of particular interest here: the use of reuse as a methodical element of the software development process for large systems. A model library for the simulation of the thermal behavior of buildings was created in Modelica. It is subdivided into a building library, a thermo-hydraulics library, an environment library and an algorithm library. The object-oriented, acausal model components are structured hierarchically. Their implementation follows the intuitive physical understanding of the technical process to be described. Thus, a building aggregates individual rooms, windows and walls, and these in turn aggregate individual wall layers.
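A minimal sketch of the hierarchical aggregation described above (plain Python rather than Modelica, with invented component names and a crude series thermal resistance as the only physics), just to illustrate how a building composes rooms, walls and wall layers.

    # Illustrative hierarchy: Building -> Room -> Wall -> WallLayer.
    # The only computation is a series thermal resistance per wall; values are invented.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class WallLayer:
        thickness_m: float
        conductivity_w_mk: float
        def resistance(self, area_m2: float) -> float:
            return self.thickness_m / (self.conductivity_w_mk * area_m2)

    @dataclass
    class Wall:
        area_m2: float
        layers: List[WallLayer]
        def resistance(self) -> float:
            return sum(layer.resistance(self.area_m2) for layer in self.layers)

    @dataclass
    class Room:
        walls: List[Wall] = field(default_factory=list)

    @dataclass
    class Building:
        rooms: List[Room] = field(default_factory=list)

    wall = Wall(area_m2=12.0, layers=[WallLayer(0.02, 0.7), WallLayer(0.2, 0.04)])
    house = Building(rooms=[Room(walls=[wall])])
    print(f"wall thermal resistance: {wall.resistance():.3f} K/W")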