The European third-generation mobile radio system is called UMTS. UTRA, the terrestrial radio access of UMTS, provides two harmonized air interfaces: the TDD-based TD-CDMA and the FDD-based WCDMA. The TDD duplexing scheme offers considerable advantages over FDD; for example, TDD-based air interfaces can in general provide different data rates in the uplink and downlink more efficiently than FDD-based air interfaces. TD-CDMA is the subject of this thesis, and the most important details of this air interface are presented. Propagation delay and interference are essential aspects when using TDD; these aspects are examined in depth for the TD-CDMA case considered here. Besides voice transmission, high-rate data services and multimedia services play a particularly important role in UMTS. The differing quality requirements of these services are a major challenge for UMTS, especially at the physical layer. To meet the quality requirements of different services, UTRA defines the L1/L2 interface in terms of different transport channels. Each transport channel guarantees a certain transmission quality through its specified data rate, delay, and maximum permissible bit error rate. This raises the problem of realizing these transport channels at the physical layer, which is investigated in detail for TD-CDMA in this thesis. The UTRA standard refers to the realization of a transport channel as a transport format. Important parameters of the transport format are the pooling concept used, the FEC scheme employed, and the associated code rate. To compare the performance of different transport formats quantitatively, a suitable evaluation measure is given. The measurements required for this evaluation can only be obtained by link-level simulation. Therefore, a program for the simulation of transport formats in TD-CDMA is developed. In developing this program, concepts, techniques, methods, and principles of computer science for software development are applied in order to support the reusability and modifiability of the program. In addition, important techniques for reducing the bit error rate, namely fast power control and antenna diversity, are implemented. The performance of an exemplary selection of transport formats is determined by simulation and compared using the evaluation measure. As FEC schemes, turbo codes and the concatenation of an inner convolutional code with an outer RS code are employed. It is shown that the investigated techniques for reducing the bit error rate have a significant influence on the performance of the transport formats. Furthermore, it is shown that the transport formats with turbo codes achieve better results than the transport formats with code concatenation.
Programming involves the identification of individuals in many forms: memory locations, data types, values, classes, objects, functions, and the like must be identified either by definition or by selection. The remarks on identification by pointing or naming are kept relatively short, whereas identification by description is given considerably more space. The reason is that pointing and naming require no structured language forms, whereas describing does. The different forms of functional descriptions are treated in such detail because of their importance for the conceptual world of functional programming. The forms of functional descriptions could also have been treated in the mosaic piece "Programmzweck versus Programmform" in the context of the concept of functional programs presented there, but the author believes that the present essay is the more appropriate place for them.
Based on the findings and experience concerning climate change, enormous changes in energy and climate policy have taken place worldwide in recent years. This is leading to an ever stronger transformation of the generation, consumption, and supply structures of our energy systems. Focusing energy generation on fluctuating renewable energy sources requires a far more extensive use of flexibilities than has been the case so far.
This thesis discusses the use of heat pumps and storage systems as flexibilities in the context of the cellular approach to energy supply. The flexibility potentials of heat pump and storage systems are investigated and validated on three levels of consideration. The first level takes the heat pump, the thermal storage, and the thermal loads into account in a general assessment of their potential. Building on this, heat pump and storage systems are considered within a household cell as an energetic unit, followed by investigations in the context of a low-voltage cell. To capture the flexibility behavior, detailed models of the converters and storage units as well as their controls are developed and then analyzed and evaluated by means of time-series simulations.
The central question of whether heat pumps with storage systems can contribute as a flexibility to the success of the energy transition can be answered with a clear yes. Nevertheless, the boundary conditions to be observed when using heat pump and storage systems as a flexibility are manifold and, depending on the purpose for which the flexibility is used, require careful consideration. The decisive factors are the outdoor temperature, the temporal context, the grid, and economic viability.
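A simple back-of-envelope calculation illustrates the kind of flexibility potential discussed above; the buffer volume, temperature band and heat pump rating in the following sketch are assumed example values, not figures from the thesis.

# Illustrative estimate (assumed numbers): how much heat pump operation a
# domestic hot-water buffer can shift in time.

RHO_WATER = 1000.0      # kg/m^3
C_WATER = 4186.0        # J/(kg*K)

def storage_energy_kwh(volume_l: float, delta_t_k: float) -> float:
    """Usable thermal energy of a water buffer for a given temperature swing."""
    mass = RHO_WATER * volume_l / 1000.0          # kg of water in the buffer
    energy_j = mass * C_WATER * delta_t_k          # J
    return energy_j / 3.6e6                        # convert J -> kWh

if __name__ == "__main__":
    volume_l = 300.0        # assumed buffer size
    delta_t_k = 10.0        # assumed usable temperature band
    p_th_kw = 6.0           # assumed thermal output of the heat pump
    e_kwh = storage_energy_kwh(volume_l, delta_t_k)
    print(f"storage capacity: {e_kwh:.2f} kWh_th")
    print(f"shiftable run time: {60 * e_kwh / p_th_kw:.0f} min at {p_th_kw} kW_th")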
This thesis describes a particularly powerful approach to the verification of digital circuit designs that has already been proven many times in practice. With regard to circuit quality after verification as well as verification effort, the approach is clearly superior to simulation-based circuit verification. The thesis first transfers the paradigm of transaction-based verification from simulation to formal verification. One result of this transfer is a particular form of formal properties called operation properties. Circuits are examined with operation properties by interval property checking (IPC), a particularly powerful SAT-based form of functional verification. This makes it possible to examine circuits that would otherwise be considered too complex for formal verification. Furthermore, this thesis describes a tool suited to sets of operation properties that uncovers all verification gaps, keeps pace in terms of complexity with the capabilities of IPC-based circuit examination, and is referred to as a completeness checker. The methodology of operation properties and the technology of the IPC-based property checker and the completeness checker enter into a symbiosis to the benefit of the functional verification of digital circuits. Building on this, a method for the gapless verification of the interconnection of modules verified in this way is developed, derived from the theories of modeling digital systems. The approach presented in this thesis has demonstrated in many commercial application projects that it rightly bears the name "complete functional verification", because in these projects no further errors were found after reaching a completion point well defined by the completeness check. The approach is marketed by OneSpin Solutions GmbH under the names "Operation Based Verification" and "Gap Free Verification".
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)
It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas, which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis, the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. In particular, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated with a view to improving the system performance as compared to state-of-the-art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging at the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas under the consideration of interference covariance matrices. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection. Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases where the impinging interference is highly directional, whereas in cases with the interference directions more homogeneously distributed over the azimuth the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
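To illustrate the basic idea of exploiting an interference covariance matrix in array processing, the following sketch applies classical MVDR-style weights to an assumed uniform linear array with one strong directional interferer. It is a generic textbook illustration with assumed geometry, angles and powers, not the MSJD or MMSE receiver structures developed in the thesis.

# Minimal sketch: exploiting an interference-plus-noise covariance matrix R via
# MVDR-style weights w = R^{-1} a / (a^H R^{-1} a) for an assumed uniform linear array.

import numpy as np

def steering_vector(n_elems: int, angle_deg: float, spacing: float = 0.5) -> np.ndarray:
    """Array response of a uniform linear array (element spacing in wavelengths)."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(angle_deg)))

rng = np.random.default_rng(0)
M = 8                                   # antenna elements (assumed)
a_des = steering_vector(M, 10.0)        # desired user at 10 degrees (assumed)
a_int = steering_vector(M, -40.0)       # strong interferer at -40 degrees (assumed)

# Estimate R from interference-plus-noise snapshots (10 dB interferer plus unit noise).
snaps = (np.sqrt(10.0) * a_int[:, None]
         * (rng.standard_normal(500) + 1j * rng.standard_normal(500)) / np.sqrt(2)
         + (rng.standard_normal((M, 500)) + 1j * rng.standard_normal((M, 500))) / np.sqrt(2))
R = snaps @ snaps.conj().T / snaps.shape[1]

w_mvdr = np.linalg.solve(R, a_des)
w_mvdr /= a_des.conj() @ w_mvdr          # distortionless response toward the desired direction
w_conv = a_des / M                       # conventional, covariance-blind beamformer

def out_sinr(w):
    sig = np.abs(w.conj() @ a_des) ** 2          # desired signal power at the output
    i_n = np.real(w.conj() @ R @ w)              # interference-plus-noise power at the output
    return 10 * np.log10(sig / i_n)

print(f"conventional beamformer SINR : {out_sinr(w_conv):5.1f} dB")
print(f"covariance-based beamformer  : {out_sinr(w_mvdr):5.1f} dB")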
This thesis investigates mechanisms for energy harvesting in ambient intelligence systems. First, an overview of the existing possibilities and their underlying physical effects is given. Then, energy harvesting by means of thermoelectric generators is examined in more detail.
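As a rough orientation for this kind of harvesting, the maximum electrical power a thermoelectric generator can deliver into a matched load follows from the Seebeck voltage and the internal resistance; the numbers in the following sketch are assumed example values, not results from the thesis.

# Rough, illustrative estimate: maximum power of a thermoelectric generator into
# a matched load, P_max = (S * dT)^2 / (4 * R_int).

def teg_max_power(seebeck_v_per_k: float, delta_t_k: float, r_internal_ohm: float) -> float:
    v_oc = seebeck_v_per_k * delta_t_k            # open-circuit voltage
    return v_oc ** 2 / (4.0 * r_internal_ohm)     # power delivered to a matched load

if __name__ == "__main__":
    # assumed module parameters: 50 mV/K, 10 K temperature difference, 5 ohm internal resistance
    p = teg_max_power(seebeck_v_per_k=0.05, delta_t_k=10.0, r_internal_ohm=5.0)
    print(f"P_max = {p * 1e3:.1f} mW")            # 0.5 V open circuit -> 12.5 mW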
In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e. the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver can be considered the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arriving at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. As opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimum, and the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems the implementation complexity corresponds to the weight, volume, cost, etc. of the equipment. Therefore, complexity is an important aspect which should be taken into account when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS). Bearing in mind the above-mentioned complexity features of the rationales Tx orientation and Rx orientation, the quasi-natural choice in the uplink (UL), i.e. in the radio link from the MT to the BS, would be Tx orientation, which leads to low cost transmitters at the MTs, whereas in the downlink (DL), i.e. in the radio link from the BS to the MTs, the rationale Rx orientation would be the favorite alternative, because this results in simple receivers at the MTs. Mobile radio downlinks with the rationale Rx orientation are considered in the thesis. Modern mobile radio communication systems are cellular systems, in which both intracell and intercell interference exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated or at least reduced by joint signal processing with consideration of all the signals in the considered cell. However, such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is required also with respect to the growing electro-phobia of the public. The reduction of the transmit energy of multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in the thesis. Among the questions still open in this research area, two questions of major importance are considered here.
MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns linear Rx oriented transmission schemes combined with MIMO antenna structures; the benefit of MIMO for such linear Rx oriented transmission schemes is studied in the thesis. The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question concerns the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.
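The following sketch illustrates one classical linear Rx-oriented scheme, transmit zero-forcing (joint transmission) over an assumed flat MIMO channel, in which the receivers are fixed a priori to be plain quantizers and the transmitter absorbs all channel knowledge. It is a generic illustration of the rationale, not the specific transmission schemes investigated in the thesis.

# Minimal sketch of a linear Rx-oriented downlink with an assumed flat MIMO channel:
# the transmitter pre-equalizes with t = H^H (H H^H)^{-1} d, so that H t = d and the
# mobile terminals only need to quantize their received samples.

import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 6                      # K single-antenna users, N BS transmit antennas (assumed)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

d = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)   # QPSK data symbols

t = H.conj().T @ np.linalg.solve(H @ H.conj().T, d)   # pre-equalized transmit vector

noise = 0.05 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
r = H @ t + noise                                     # what the simple receivers observe

# Each MT only quantizes its own received sample -- no equalizer needed at the MT.
detected = (np.sign(r.real) + 1j * np.sign(r.imag)) / np.sqrt(2)
print("transmit energy    :", np.round(np.linalg.norm(t) ** 2, 3))
print("symbols detected ok:", np.allclose(detected, d))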
Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and the robustness required to maintain signal integrity. Field programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. The recent research field of evolvable hardware concepts focuses predominantly on the digital domain and is still at an early stage in the analog domain. Thus, the main research goal is to combine the ever increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies, to yield an intelligent and smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work proposes and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable to support a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and common industrial application scenarios and circuits, in particular including resource and sizing analysis for proper motivation of design decisions. The pursued intermediate granular level approach, called Field Programmable Medium-granular mixed signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities. The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and merges them with reconfigurable or programmable analog concepts as well as industrial electronics standards and needs for next generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in the 0.35 µm (C35B4) Austriamicrosystems technology, can be used as single-instance, off-the-shelf chips at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely the PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, ranging from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. More extensive design issues related to low power, e.g. sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in the state-of-the-art fine granular approaches were improved upon by the proposed intermediate granular approach. The designed sensor signal conditioning instrumentation amplifier was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250.
In an adaptive prototype, evolutionary approaches, in particular multi-objective particle swarm optimization, were deployed on all test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift. The variations observed in the performance of the test samples were compensated through reconfiguration to meet the desired specification.
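The following toy sketch illustrates the kind of optimization loop involved: a single-objective particle swarm optimizer trims the gain and offset of a simulated, process-deviated amplifier back to a target specification. The thesis itself applies multi-objective PSO to the real FPMA test chips in the loop; the amplifier model, trim ranges and deviations below are assumptions for illustration only.

# Toy hardware-in-the-loop style trimming with particle swarm optimization: an
# amplifier model with assumed manufacturing deviations is tuned so that its
# gain and offset hit a target specification.

import numpy as np

rng = np.random.default_rng(42)

TARGET_GAIN, TARGET_OFFSET = 10.0, 0.0          # desired specification (assumed)
DEV_GAIN, DEV_OFFSET = 0.85, 0.12               # assumed process deviation

def measured_error(trim):
    """Pretend measurement of the deviated amplifier with trim codes applied."""
    gain = TARGET_GAIN * DEV_GAIN * (1.0 + 0.05 * trim[0])   # trim[0]: gain trim code
    offset = DEV_OFFSET + 0.02 * trim[1]                     # trim[1]: offset trim code
    return (gain - TARGET_GAIN) ** 2 + (offset - TARGET_OFFSET) ** 2

def pso(cost, dim=2, n_particles=20, iters=100, lo=-10.0, hi=10.0):
    """Plain global-best PSO with inertia, cognitive and social terms."""
    x = rng.uniform(lo, hi, (n_particles, dim))              # particle positions (trim codes)
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, cost(gbest)

best_trim, best_cost = pso(measured_error)
print("trim codes found:", np.round(best_trim, 2), " residual error:", f"{best_cost:.2e}")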
Ethernet has become an established communication technology in industrial automation. This was possible thanks to the tremendous technological advances and enhancements of Ethernet, such as increased link speed, full-duplex transmission and the use of switches. However, these enhancements were still not sufficient for certain highly deterministic industrial applications such as motion control, which requires a cycle time below one millisecond and a jitter or delay deviation below one microsecond. To meet these high timing requirements, machine and plant manufacturers had to extend standard Ethernet with real-time capability. As a result, vendor-specific and non-IEEE-standard-compliant "Industrial Ethernet" (IE) solutions have emerged.
The IEEE Time-Sensitive Networking (TSN) Task Group specifies new IEEE-conformant functionalities and mechanisms to enable the determinism missing from Ethernet. Standard-compliant systems are very attractive to industry because they guarantee investment security and sustainable solutions. TSN is therefore considered an opportunity to increase the performance of established Industrial Ethernet systems and to move forward to Industry 4.0, which requires standard mechanisms.
The challenge remains, however, for the Industrial Ethernet organizations to combine their protocols with the TSN standards without running the risk of creating incompatible technologies. TSN specifies nine standards and enhancements that handle multiple communication aspects. In this thesis, the evaluation of the use of TSN in industrial real-time communication is restricted to four deterministic standards: IEEE802.1AS-Rev, IEEE802.1Qbu, IEEE802.3br and IEEE802.1Qbv. The specification of these TSN sub-standards was finished at an early research stage of the thesis, and hardware prototypes were available.
Integrating TSN into the Industrial Ethernet protocols is considered a substantial strategic challenge for the industry. The benefits, limits and risks are too complex to estimate without a thorough investigation. The large number of standard enhancements makes it hard to select the required and appropriate functionalities.
In order to cover all real-time classes in automation [9], four established Industrial Ethernet protocols have been selected for evaluation and combination with TSN, together with other performance-relevant communication features.
The objectives of this thesis are to
(1) Provide theoretical, simulation and experimental evaluation methodologies for the timing performance analysis of the deterministic TSN standards mentioned above. Multiple test plans are specified to evaluate the performance and compatibility of early-version TSN prototypes from different providers.
(2) Investigate multiple approaches and deduce migration strategies to integrate these features into the established Industrial Ethernet protocols: Sercos III, Profinet IRT, Profinet RT and Ethernet/IP. A scenario in which time-critical traffic coexists with other traffic in a TSN network shows that the timing performance for highly deterministic applications, e.g. motion control, can only be guaranteed by the TSN scheduling algorithm IEEE802.1Qbv.
Based on a requirements survey of highly deterministic industrial applications, multiple network scenarios and experiments are presented. The results are summarized in two case studies. The first case study shows that TSN alone is not enough to meet these requirements. The second case study investigates the benefits of additional mechanisms (Gigabit link speed, minimum cycle time modeling, frame forwarding mechanisms, frame structure, topology migration, etc.) in combination with the TSN features. An implementation prototype of the proposed system and a simulation case study are used for the evaluation of the approach. The prototype is used for the evaluation and validation of the simulation model. Due to scalability constraints of the prototype (no cut-through functionality, limited number of TSN prototypes, etc.), a realistic simulation model is developed using the network simulation tool OMNEST / OMNeT++.
The obtained evaluation results show that a minimum cycle time ≤1 ms and a maximum jitter ≤1 μs can be achieved with the presented approaches.
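For orientation, the following worked example shows the generic Ethernet frame timing arithmetic behind such cycle time considerations, using the standard preamble/SFD and inter-frame gap overheads and assuming no frame preemption; the figures are textbook values, not results from the thesis.

# Worked example of Ethernet frame wire times and a simple IEEE 802.1Qbv guard band.

PREAMBLE_SFD = 8        # bytes of preamble + start frame delimiter
IFG = 12                # bytes of inter-frame gap

def wire_time_us(frame_bytes: int, link_mbps: float) -> float:
    """Time a frame occupies the link, including preamble/SFD and IFG."""
    total_bits = (PREAMBLE_SFD + frame_bytes + IFG) * 8
    return total_bits / link_mbps            # bits / (Mbit/s) = microseconds

for link in (100.0, 1000.0):
    t_min = wire_time_us(64, link)           # minimum-size frame
    t_max = wire_time_us(1522, link)         # maximum-size VLAN-tagged frame
    print(f"{link:6.0f} Mbit/s: 64 B frame = {t_min:6.2f} us, 1522 B frame = {t_max:7.2f} us")
    # Without preemption, a Qbv window for critical traffic needs a guard band of
    # one maximum frame so that a late best-effort frame cannot overrun the window;
    # frame preemption (802.1Qbu / 802.3br) shrinks this blocking time.
    print(f"          guard band ~ {t_max:7.2f} us per scheduled window")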
Channel estimation is of great importance in many wireless communication systems, since it influences the overall performance of a system significantly. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals, or so-called pilots, that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems more and more attention is paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference and also the interference between the channel coefficients. The performance of JCE can be evaluated in noise limited systems by the SNR degradation and in interference limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, no matter whether the signals are represented in the time domain or in the frequency domain. Optimum pilots such as disjoint pilots, Walsh code based pilots or CAZAC code based pilots, whose constructions are described in this thesis, do not show any SNR degradation when applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transformation. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transformation. However, even for optimum pilots different variation coefficients are obtained in interference limited systems. Furthermore, especially for OFDM-based transmission schemes the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots. CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the part of the transmit signal that originates from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to a single service area, and considering the impact of interference from adjacent service areas that occurs due to a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes that originate from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is oriented towards the evaluated implementation complexity. According to the chosen parameterization of the regarded OFDM-based and FMT-based systems, it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT applying CAZAC codes as pilots. The results of this thesis can be used as a basis for further theoretical research and also for future JCE implementation in wireless systems.
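The CAZAC property referred to above can be checked numerically: a Zadoff-Chu sequence of prime length has constant amplitude, and so does its DFT, which is why pilots built from it contribute a PAPR of 0 dB. The following small sketch demonstrates this; the sequence length and root are chosen arbitrarily for illustration.

# Numerical check of the constant-amplitude zero-autocorrelation (CAZAC) property
# of a Zadoff-Chu sequence in both the time and the frequency domain.

import numpy as np

def zadoff_chu(n_zc: int, root: int) -> np.ndarray:
    """Zadoff-Chu sequence of odd length n_zc with the given root index."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * root * n * (n + 1) / n_zc)

def papr_db(x: np.ndarray) -> float:
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

zc = zadoff_chu(31, root=1)            # prime length, root coprime to the length
print(f"time-domain PAPR     : {papr_db(zc):.2f} dB")
print(f"frequency-domain PAPR: {papr_db(np.fft.fft(zc)):.2f} dB")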
The nondestructive testing of multilayered materials is increasingly applied in both scientific and industrial fields. In particular, developments in millimeter wave and terahertz technology open up novel measurement applications, which benefit from the nonionizing properties of this frequency range. One example is the noncontact inspection of layer thicknesses. Frequently used measuring and analysis methods lead to a resolution limit that is determined by the bandwidth of the setup. This thesis analyzes the reliable evaluation of thinner layer thicknesses using model-based signal processing.
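The following toy sketch conveys the idea behind model-based evaluation below the bandwidth-limited resolution: two overlapping echoes, spaced much closer than the probe pulse width, are separated by fitting a two-echo model to the measured trace. Pulse shape, noise level and the brute-force fit are assumptions for illustration only and do not reproduce the algorithms of this thesis.

# Toy illustration: model-based separation of two overlapping reflections whose
# spacing is well below the pulse width (i.e. below the classical resolution limit).

import numpy as np

t = np.linspace(0.0, 50.0, 2001)                     # time axis in picoseconds (assumed)
pulse = lambda t0: np.exp(-((t - t0) / 3.0) ** 2)    # assumed ~3 ps wide probe pulse

true_delay = 1.2                                     # ps, much smaller than the pulse width
measured = pulse(20.0) - 0.6 * pulse(20.0 + true_delay)
measured += 0.01 * np.random.default_rng(3).standard_normal(t.size)

# Model-based evaluation: least-squares fit of the two-echo model parameters,
# here by a simple brute-force search over the delay and amplitude of the second echo.
delays = np.arange(0.2, 3.0, 0.01)
amps = np.arange(-1.0, 0.0, 0.01)
best = min(((d, a) for d in delays for a in amps),
           key=lambda p: np.sum((measured - (pulse(20.0) + p[1] * pulse(20.0 + p[0]))) ** 2))
print(f"estimated echo spacing: {best[0]:.2f} ps (true {true_delay} ps)")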
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern, poses a technological challenge for MCP development, and will be a main performance bottleneck for MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior for both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged using 2D techniques, which means that there is only one layer of dies in the chip. The chip thermal behavior can be described by a three-dimensional (3D) heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D PDE heat conduction equation, is proposed to describe the thermal behavior of each core. Therefore, the thermal behavior of the MCP is described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced packaging technology in which at least two layers of dies are stacked in one chip. In contrast to 2D packages, a cooling system must be installed between the layers to reduce the internal temperature of the chip. In this thesis, micro-channel liquid cooling is considered; the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, so the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
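The block-level R-C modeling idea can be illustrated with a minimal example: each block is a thermal capacitance coupled to its neighbours and to the coolant through thermal resistances, yielding a linear ODE that is integrated here with forward Euler. All parameter values are assumed, and the model is far simpler than the switched system described above.

# Minimal sketch of a block-level thermal R-C network, integrated with forward Euler.

import numpy as np

# Two die blocks coupled to each other and to the coolant (assumed parameters).
C = np.array([2.0, 2.0])          # J/K, thermal capacitances
R_block = 1.5                     # K/W between the two blocks
R_cool = 0.8                      # K/W from each block to the coolant
T_cool = 30.0                     # coolant temperature in degC

def step(T, P, dt=0.01):
    """One forward-Euler step of the two-block R-C network."""
    q01 = (T[0] - T[1]) / R_block                 # inter-block heat flow
    q_cool = (T - T_cool) / R_cool                # heat flow from each block into the coolant
    dT = (P - q_cool - np.array([q01, -q01])) / C
    return T + dt * dT

T = np.array([30.0, 30.0])
for k in range(60_000):                           # 10 minutes of simulated time
    # workload (and thus power) alternates between the two blocks every 100 s
    P = np.array([12.0, 4.0]) if (k // 10_000) % 2 == 0 else np.array([4.0, 12.0])
    T = step(T, P)
print("final block temperatures [degC]:", np.round(T, 1))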
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell 8-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ are employed as the test cases.
Software defined radios can be implemented on general purpose processors (CPUs), e.g. based on a PC. A processor offers high flexibility: it can not only be used to process the data samples, but also to control receiver functions, display a waterfall or run demodulation software. However, processors can only handle signals of limited bandwidth due to their comparatively low processing speed. For signals of high bandwidth the SDR algorithms have to be implemented as custom designed digital circuits on an FPGA chip. An FPGA provides very high processing speed, but lacks flexibility and user interfaces. Recently the FPGA manufacturer Xilinx has introduced a hybrid system on chip called Zynq that combines both approaches. It features a dual ARM Cortex-A9 processor and an FPGA, offering the flexibility of a processor together with the processing speed of an FPGA on a single chip. The Zynq is therefore very interesting for use in SDRs. In this paper the application of the Zynq and its evaluation board (Zedboard) is discussed. As an example, a direct sampling receiver has been implemented on the Zedboard using a high-speed 16 bit ADC with 250 Msps.
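Conceptually, the FPGA part of such a direct sampling receiver performs a digital down-conversion: the 250 Msps ADC stream is mixed to baseband with a numerically controlled oscillator and then low-pass filtered and decimated to a rate the processor can handle. The following Python sketch mimics this chain on simulated samples; the carrier frequency and decimation factor are assumed, and this is not the actual FPGA implementation.

# Conceptual stand-in for the FPGA signal chain of a direct sampling receiver:
# NCO mixing to baseband followed by a crude averaging low-pass and decimation.

import numpy as np

fs = 250e6                      # ADC sample rate
f_rf = 7.1e6                    # assumed received carrier frequency
n = np.arange(200_000)

adc = np.cos(2 * np.pi * f_rf * n / fs + 0.3)          # stand-in for the ADC samples

nco = np.exp(-2j * np.pi * f_rf * n / fs)              # numerically controlled oscillator
baseband = adc * nco                                   # mix the carrier down to (near) 0 Hz

decim = 2500                                           # 250 Msps -> 100 ksps
lowpass = baseband.reshape(-1, decim).mean(axis=1)     # averaging low-pass + decimation

print("output rate:", fs / decim / 1e3, "ksps")
print("recovered carrier phase:", round(float(np.angle(lowpass.mean())), 2), "rad")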
In system theory, state is a key concept. Here, the word state refers to a condition, as in the sentence "Since he went into the hospital, his state of health worsened daily." This colloquial meaning was the starting point for defining the concept of state in system theory. System theory describes the relationship between input X and output Y, that is, between influence and reaction. In system theory, a system is something that shows an observable behavior that may be influenced. Therefore, apart from the system, there must be something else that influences the system and observes its reaction. This is called the environment of the system.
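A minimal example makes the point that the reaction of a system to the same input depends on its current state; the turnstile below is a standard textbook system, not taken from the source text.

# Tiny illustration of the state concept: the same input can produce different
# outputs depending on the condition (state) the system is currently in.

class Turnstile:
    """A system whose reaction to the input "push" depends on its state."""
    def __init__(self):
        self.state = "locked"

    def apply(self, x: str) -> str:
        if x == "coin":
            self.state = "unlocked"
            return "unlocks"
        if x == "push":
            if self.state == "unlocked":
                self.state = "locked"
                return "rotates"
            return "stays blocked"
        return "ignored"

t = Turnstile()
for x in ["push", "coin", "push"]:
    print(f"input {x!r:7} -> output {t.apply(x)!r} (new state: {t.state})")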
Code coverage analysis plays an important role in the software testing process. More recently, the remarkable effectiveness of coverage feedback has triggered a broad interest in feedback-guided fuzzing. In this work, we discuss static instrumentation techniques for binary-level coverage analysis without compiler support. We show that the proposed techniques are precise, efficient, and transparent, going significantly beyond the state of the art.
We implement these techniques into two tools, namely, Spedi and bcov. Both tools are open source and publicly available. Spedi shows that the disassembly and function identification of stripped binaries can be highly accurate without resort to any external information. We build on these important results in bcov where we statically instrument x86-64 ELF binaries to track code coverage. However, improving efficiency and scaling to large real-world software required an orchestrated effort combining several techniques.
First, we bring a well-known probe pruning technique, for the first time, to binary-level instrumentation and effectively leverage its notion of superblocks to reduce overhead. Second, we introduce sliced microexecution, a robust technique for jump table analysis which improves CFG precision and enables us to instrument jump table entries. Additionally, smaller instructions in x86-64 pose a challenge for inserting detours. To address this challenge, we aggressively exploit padding bytes. Also, we introduce a greedy scheme to systematically host detours in neighboring basic blocks.
We evaluate bcov on a corpus of 95 binaries compiled from eight popular and well-tested packages like FFmpeg and LLVM. Two instrumentation policies, with different edge-level precision, are used to patch all functions in this corpus - over 1.6 million functions. Our precise policy has average performance and memory overheads of 14% and 22%, respectively. Instrumented binaries do not introduce any test regressions. The reported coverage is highly accurate with an average F-score of 99.86%. Finally, our jump table analysis is comparable to that of IDA Pro on gcc binaries and outperforms it on clang binaries.
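For illustration, the following sketch shows how such an F-score can be computed from reported coverage versus a ground truth; the basic-block sets are hypothetical and not bcov's actual data.

# Sketch: coverage F-score of reported coverage against a ground truth.

def coverage_f_score(reported: set, ground_truth: set) -> float:
    true_pos = len(reported & ground_truth)
    precision = true_pos / len(reported) if reported else 0.0
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: blocks actually executed vs. blocks reported as covered.
actually_covered = {f"bb_{i}" for i in range(1000)}
reported_covered = (actually_covered - {"bb_3", "bb_97"}) | {"bb_1003"}
print(f"F-score: {coverage_f_score(reported_covered, actually_covered):.4f}")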
Our work demonstrates that static instrumentation can offer unique advantages in comparison to established methods like compiler instrumentation and dynamic binary instrumentation. It also opens the door for many interesting applications of static instrumentation, which can go well beyond coverage analysis.
Ambient intelligence (AmI) refers to the integration of various technologies into a (nearly) invisible whole that surrounds people. This intelligent environment is made possible by the miniaturization of highly integrated components (sensors, actuators, and computers), their increasing intelligence, and above all their increasingly wireless local and global networking. Under the title Man-u-Faktur 2012 (man and factoring in 2012), a scenario was developed at the Technische Universität Kaiserslautern within the Ambient Intelligence research focus that paints an impressive overall picture of a technology that puts people at its center. Man-u-Faktur 2012 stands for turning the wheel of industrialization further, away from today's variant-rich, technology-centered mass production toward customer-individual, employee-centered bespoke production. Specifically, this means building massively distributed production plants that are friendly to customers as well as employees and that can adapt to the respective conditions in a highly dynamic environment. People are present wherever flexible work or flexible decisions are in the foreground. In this report, the influence of ambient intelligence is applied, by way of example, to the vision of bicycle production in the Man-u-Faktur 2012. From this scenario, both the key technologies to be developed and the effects on the economy and society are then derived.
In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated in the Siemens company and has been in industrial use since the mid-1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper adoption of IPC. First, IPC may return false counterexamples, since its computationally bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable for different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, the work proposes a decomposition-based reachability analysis to solve the problem of identifying reachability information automatically. Second, this work develops a generic, reusable set of properties for protocol compliance verification.
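As a conceptual illustration of what an operation (interval) property expresses, the following brute-force check on a toy sequencer verifies that, starting from every state satisfying the abstract start condition, the ending condition holds a fixed number of cycles later for all possible inputs in between. Real IPC performs this check with a SAT solver on an unrolled, bounded circuit model; the toy design and the Python enumeration here are purely illustrative.

# Conceptual illustration of an operation property, checked by brute force on a
# toy sequencer (not the SAT-based IPC flow described above).

from itertools import product

def next_state(state: int, start: bool) -> int:
    """Toy sequencer: state 0 is idle; asserting 'start' launches an operation
    that runs through states 1 and 2 and is back in state 0 three cycles later."""
    if state == 0:
        return 1 if start else 0
    return (state + 1) % 3            # 1 -> 2 -> 0, ignoring the input

def check_operation(max_cycles: int = 3) -> bool:
    """Operation property: whenever the sequencer is idle and 'start' is asserted,
    it is back in the idle (ending) state exactly max_cycles cycles later, for
    every possible input sequence in between."""
    for inputs in product([False, True], repeat=max_cycles - 1):
        s = next_state(0, True)                   # cycle 1: operation launched
        for start in inputs:                      # cycles 2 .. max_cycles
            s = next_state(s, start)
        if s != 0:
            return False                          # a counterexample trace exists
    return True

print("operation property holds:", check_operation())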
Due to the steady increase in distributed generation, the upcoming smart meter rollout, and the expected electrification of the transport sector (e-mobility), the planning and operation of low-voltage (LV) grids in Germany face major challenges. In recent years, many studies, research projects, and demonstration projects on these topics have therefore been carried out, and the results as well as the developed methods have been published. However, the published methods usually cannot be reproduced or validated, because the underlying models or the assumed scenarios are not transparent to third parties. There is a lack of uniform grid models that represent German LV grids and can be used for comparative studies, similar to the North American distribution grid models of the IEEE.
In contrast to the transmission grid, whose structure is known with sufficient accuracy, suitable grid models for LV grids are difficult to derive because of the large number of LV grids and distribution system operators (DSOs). Furthermore, a detailed representation of real LV grids in scientific publications is usually not desired for data protection reasons. For investigations within a research project, synthetic LV grid models that are as characteristic as possible were therefore created, oriented towards common German settlement structures and usual grid planning principles. In this thesis, these LV grid models and their development are explained in detail. For the first time, this makes reproducible LV grid models for the German-speaking region publicly available. They can be used as a benchmark for scientific investigations and for method development.