### Refine

#### Year of publication

#### Document Type

- Doctoral Thesis (45)

#### Language

- English (45)

#### Keywords

- Mobilfunk (5)
- MIMO (3)
- Model checking (3)
- OFDM (3)
- System-on-Chip (2)
- Verifikation (2)
- air interface (2)
- beyond 3G (2)
- impedance spectroscopy (2)
- A/D conversion (1)
- ADAS (1)
- AFDX (1)
- Adaptive Antennen (1)
- Adaptive Entzerrung (1)
- Arithmetic data-path (1)
- Backlog (1)
- Basisband (1)
- Bitvektor (1)
- Bounded Model Checking (1)
- Buffer (1)
- CAD (1)
- CMOS (1)
- CMOS-Schaltung (1)
- Channel estimation (1)
- Clock and Data Recovery Circuits (1)
- Codierung (1)
- Computeralgebra (1)
- Data Spreading (1)
- Data path (1)
- Datenrückgewinnungsschaltungen (1)
- Datenspreizung (1)
- Digitalmodulation (1)
- Downlink (1)
- Dynamically reconfigurable analog circuits (1)
- Elektrohydraulik (1)
- Empfangssignalverarbeitung (1)
- Empfängerorientierung (1)
- Entscheidungsproblem (1)
- Entwurfsautomation (1)
- Erfüllbarkeit (1)
- Erreichbarkeit (1)
- Firmware (1)
- Formal Verification (1)
- Formale Beschreibungstechnik (1)
- Formale Methode (1)
- Funktionale Sicherheit (1)
- Gemeinsame Kanalschaetzung (1)
- Giga bit per second (1)
- Gröbner basis (1)
- Hardware/Software co-verification (1)
- Hardwareverifikation (1)
- IEC 61508 (1)
- Informationsübertragung (1)
- Jitter (1)
- Kanalschätzung (1)
- Large Synchronous Networks (1)
- Layout (1)
- Leistungseffizienz (1)
- Logiksynthese (1)
- Low Jitter (1)
- Luftschnittstellen (1)
- MIMO Systeme (1)
- MIMO-Antennen (1)
- Mehrtraegeruebertragungsverfahren (1)
- Mobile Telekommunikation (1)
- Mobilfunksysteme (1)
- Modellbasierte Fehlerdiagnose (1)
- Modulationsübertragungsfunktion (1)
- Multicore Resource Management (1)
- Multicore Scheduling (1)
- Network (1)
- Neural ADC (1)
- Noise Control, Feature Extraction, Speech Recognition (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- Optische Abbildung (1)
- PSPICE (1)
- Permutationsäquivalenz (1)
- Photonische Kristalle (1)
- Power Efficiency (1)
- Programmverifikation (1)
- Property checking (1)
- Protocol Compliance (1)
- QoS (1)
- RTL (1)
- Reachability (1)
- Real-Time (1)
- Real-Time Systems (1)
- Regularität (1)
- Schaltwerk (1)
- Scheduler (1)
- Self-X (1)
- Sendesignalverarbeitung (1)
- Simulation (1)
- Spiking Neural ADC (1)
- Symbolic execution (1)
- Symmetrie (1)
- Symmetriebrechung (1)
- Synchronnetze (1)
- TD-CDMA (1)
- TTEthernet (1)
- Taktrückgewinnungsschaltungen (1)
- Time-Triggered (1)
- ToF (1)
- Upper bound (1)
- Verification (1)
- Vorverarbeitung (1)
- WCET (1)
- Zugesicherte Eigenschaft (1)
- autonomous networking (1)
- beam refocusing (1)
- biosensors (1)
- bitvector (1)
- bounded model checking (1)
- carrier-grade point-to-point radio networks (1)
- context awareness (1)
- context management (1)
- context-aware topology control (1)
- coordinated backhaul networks in rural areas (1)
- crossphase modulation (1)
- depth sensing (1)
- design automation (1)
- driver status and intention prediction (1)
- drowsiness detection (1)
- dynamic calibration (1)
- electro-hydraulic systems (1)
- fault-tolerant control (1)
- fehlertolerante Regelung (1)
- functional safety (1)
- fuzzy Q-learning (1)
- fuzzy logic (1)
- generic self-x sensor systems (1)
- generic sensor interface (1)
- handover optimization (1)
- heterogeneous access management (1)
- imaging (1)
- ion-sensitive field-effect transistor (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- layout (1)
- logic synthesis (1)
- mehreren Uebertragungszweigen (1)
- mobile radio (1)
- mobile radio systems (1)
- mobility robustness optimization (1)
- model-based fault diagnosis (1)
- multi-carrier (1)
- multi-core processors (1)
- multi-domain modeling and evaluation methodology (1)
- multi-user (1)
- multiuser detection (1)
- multiuser transmission (1)
- negative refraction (1)
- non-conventional (1)
- optical code multiplex (1)
- photonic crystals (1)
- photonic crystals filter (1)
- preprocessing (1)
- preventive maintenance (1)
- probabilistic modeling (1)
- probability of dangerous failure on demand (1)
- property checking (1)
- readout system (1)
- real-time (1)
- real-time scheduling (1)
- real-time systems (1)
- receiver orientation (1)
- regularity (1)
- reinforcement learning (1)
- safety-related systems (1)
- satisfiability (1)
- self calibration (1)
- self-optimizing networks (1)
- sensor fusion (1)
- sequential circuit (1)
- service area (1)
- silicon nanowire (1)
- symmetry (1)
- target sensitivity (1)
- technology mapping (1)
- time utility functions (1)
- timeliness (1)
- trade-off (1)
- verification (1)
- wavelength multiplex (1)
- wireless communications system (1)
- wireless networks (1)
- wireless sensor network (1)

#### Faculty / Organisational entity

- Fachbereich Elektrotechnik und Informationstechnik (45)

The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two dimensional (2D) and three dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern; it is a technological challenge for MCP development and threatens to become a main performance bottleneck. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior of both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged with the 2D technique, which means that there is only one layer of dies in the chip. The chip's thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one dimensional (1D) PDEs, derived from the developed 3D PDE heat conduction equation, is proposed to describe the thermal behavior of each core. The thermal behavior of the MCP is thus described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced package technology in which at least two layers of dies are stacked in one chip. Different from the 2D package, a cooling system must be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered; the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance-capacitance (R-C) circuit. Thereafter, the micro-channels are discretized, and the thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. Under each velocity the system can be described by a linear ODE model, so the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell 8-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ serve as the experimental objects.
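The lumped-parameter view underlying the block-wise R-C model can be sketched in a few lines. The following is a minimal illustration with made-up parameters (thermal capacitance `C`, resistance to ambient `R`, inter-core coupling `Rc`): two cores integrated by forward Euler, showing that an unbalanced power allocation produces unbalanced steady-state temperatures, which a thermal manager would equalize by shifting load between cores.

```python
# Hypothetical two-core lumped thermal R-C model (all parameters invented):
#   C * dT_i/dt = P_i - (T_i - T_amb)/R - (T_i - T_j)/Rc
def simulate(powers, steps=20000, dt=0.001, C=0.5, R=2.0, Rc=4.0, T_amb=25.0):
    T = [T_amb, T_amb]
    for _ in range(steps):
        d0 = (powers[0] - (T[0] - T_amb) / R - (T[0] - T[1]) / Rc) / C
        d1 = (powers[1] - (T[1] - T_amb) / R - (T[1] - T[0]) / Rc) / C
        T = [T[0] + dt * d0, T[1] + dt * d1]  # forward Euler step
    return T

# Same total power, different splits: the unbalanced split leaves one core
# much hotter; balancing the power balances the temperatures.
hot, cool = simulate([10.0, 2.0])
balanced = simulate([6.0, 6.0])
```

With these toy parameters the unbalanced run settles near 41 °C and 33 °C, while the balanced run settles at 37 °C on both cores; a thermal/power manager closes exactly this gap.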

This work provides a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised, which includes principles for the modeling, analysis, design, and realization of low-cost but high-performance and intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized, taking into consideration the limited communication, computing, memory, and energy resources of the low-performance, low-power, and low-cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches are investigated that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity) and are based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs). The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. The results obtained show the advantages of the proposed cross-design approach over less flexible standard approaches.
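The energy/control trade-off at the heart of this cross-design can be illustrated with a deliberately simplified sketch. All numbers and the cost model below are hypothetical, and a brute-force search over candidate sampling periods stands in for the LMI/DP/GA machinery of the thesis: sending less often saves radio energy but degrades control performance, so the cross-design picks the most energy-efficient period that still meets the performance constraint.

```python
# Toy joint protocol/control parameter selection (all values invented).
E_TX = 2.0e-3   # energy per packet transmission (J), assumed
J_MAX = 1.5     # admissible control cost, assumed

def control_cost(h):
    # Assumed monotone model: control cost grows with sampling period h (s).
    return 1.0 + 10.0 * h * h

def energy_per_second(h):
    # One transmission per sampling period.
    return E_TX / h

def best_period(candidates):
    # Keep only periods meeting the control constraint; among those,
    # the largest period minimizes the radio energy.
    feasible = [h for h in candidates if control_cost(h) <= J_MAX]
    return max(feasible) if feasible else None

h_star = best_period([0.01, 0.02, 0.05, 0.1, 0.2, 0.5])
```

Here 0.5 s violates the cost bound while 0.2 s still satisfies it, so the search settles on 0.2 s, cutting the radio energy by a factor of 20 relative to the fastest candidate.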

The increasing complexity of modern SoC designs makes SoC formal verification considerably more complex and challenging. This motivates the research community to develop more robust approaches that enable efficient formal verification of such designs. It is common to apply a correctness-by-integration strategy when a SoC design is verified. This strategy implements formal verification in two major steps. First, each module of the SoC is considered and verified separately from the other blocks of the system. Second, once functional correctness has been proved for every individual module, the communication behavior between all the modules of the SoC has to be verified. In industrial applications, SAT/SMT-based interval property checking (IPC) has become widely adopted for SoC verification. Using IPC approaches, a verification engineer can solve a wide range of important verification problems and prove the functional correctness of diverse complex components in a modern SoC design. However, there are critical parts of a design where formal methods often lack robustness. State-of-the-art property checkers fail to prove correctness for the data path of an industrial central processing unit (CPU). In particular, arithmetic circuits of realistic size (32 or 64 bits), especially those implementing multiplication algorithms, are well-known examples where SAT/SMT-based formal verification can reach its capacity very fast. In such cases, formal verification is replaced with simulation-based approaches in practice. Simulation is a good methodology that may assure a high rate of discovered bugs hidden in a SoC design. However, in contrast to formal methods, a simulation-based technique cannot guarantee the absence of errors in a design. Thus, simulation may still miss so-called corner-case bugs in the design, which may lead to additional and very expensive costs in terms of time, effort, and investment spent on redesigns, refabrications, and reshipments of new chips.
The work of this thesis concentrates on studying and developing robust algorithms for solving hard arithmetic decision problems. Such decision problems often originate from RTL property checking of data-path designs. Proving properties of those designs can be performed efficiently by solving SMT decision problems formulated in the quantifier-free logic over fixed-size bit vectors (QF-BV). This thesis, first, proposes an effective algebraic approach based on Gröbner basis theory that allows arithmetic problems to be decided efficiently. Second, for the case of custom-designed components, it describes a sophisticated modeling technique required to restore the necessary arithmetic description from these components. Further, it explains how methods from computer algebra and the modeling techniques can be integrated into a common SMT solver. Finally, a new QF-BV SMT solver is introduced.
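To give a flavor of the underlying decision problem, the toy sketch below checks a QF-BV-style arithmetic identity by exhaustive enumeration at a small bit width. Enumeration explodes at realistic widths (32/64 bits), which is precisely why the thesis turns to algebraic normal forms (Gröbner bases) instead; the identity itself is chosen for illustration only.

```python
# Brute-force decision of a bit-vector identity over a toy width.
WIDTH = 4
MASK = (1 << WIDTH) - 1

def property_holds(a, b):
    # Claimed identity (mod 2^WIDTH):
    #   a*b == (a & b)*(a | b) + (a & ~b)*(b & ~a)
    lhs = (a * b) & MASK
    rhs = ((a & b) * (a | b)
           + (a & ~b & MASK) * (b & ~a & MASK)) & MASK
    return lhs == rhs

def decide_by_enumeration():
    # 2^(2*WIDTH) cases -- feasible only for toy widths; a solver must
    # decide this symbolically for 32/64-bit operands.
    return all(property_holds(a, b)
               for a in range(MASK + 1) for b in range(MASK + 1))

valid = decide_by_enumeration()
```

An algebraic engine would instead reduce both sides to the same polynomial normal form over Z/2^WIDTH, deciding the property without touching a single concrete value.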

For many years, real-time task models have focused the timing constraints on execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among the scenarios that yield correct behavior, and maximizing this utility improves resource utilization.
For example, target-sensitive applications have a target point at which execution results in maximized utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The intensity of the utility decay accounts for the importance of the application.
Examples of such applications include multimedia and control; multimedia applications are very popular nowadays, and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides easy abstractions to express the timing constraints of target-sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications.
We consider jobs as objects in a pendulum system, and the target points as the central point.
Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems help to bridge the gap between application requirements and the theoretical abstractions used in task models.
For instance, the so-called nature algorithms use key elements of physical processes as the basis of an optimization algorithm.
Examples include ant colony optimization and simulated annealing, applied to problems such as the knapsack and traveling salesman problems.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target sensitive RT applications must account for timing constraints, and the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs, and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs and dominates the complexity of the problem --- finding the optimum solution is NP-hard.
We show the efficacy of our approach through simulation results and three target-sensitive RT applications enhanced with the gravitational task model.
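The equilibrium notion can be made concrete with a small worked sketch using hypothetical job data: for jobs executed back-to-back in a fixed order, the shift of the whole block that minimizes the weighted squared deviation of each job's midpoint from its target point has a closed form, mirroring the pendulum's rest state.

```python
# Toy equilibrium computation for the gravitational task model (job data
# invented for illustration).
def equilibrium_start(jobs):
    """jobs: list of (exec_time, target, weight); returns block start time."""
    offsets, elapsed = [], 0.0
    for c, _, _ in jobs:
        offsets.append(elapsed + c / 2.0)  # midpoint offset within the block
        elapsed += c
    # Minimize sum_i w_i * (x + o_i - t_i)^2 over the shift x; setting the
    # derivative to zero gives the weighted-average closed form below.
    num = sum(w * (t - o) for (c, t, w), o in zip(jobs, offsets))
    den = sum(w for _, _, w in jobs)
    return num / den

# Two jobs with the same target point t = 4: the heavier-weighted second job
# pulls the block toward its own target, like the heavier bob of a pendulum.
start = equilibrium_start([(2.0, 4.0, 1.0), (2.0, 4.0, 3.0)])
```

With equal weights the block would center the two midpoints symmetrically around the target; tripling the second job's weight drags the block later, trading the first job's deviation for the second's.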

The goal of this thesis is to propose measures which allow an increase of the power efficiency of OFDM transmission systems. As compared to OFDM transmission over AWGN channels, OFDM transmission over frequency selective radio channels requires a significantly larger transmit power in order to achieve a certain transmission quality. It is well known that this detrimental impact of frequency selectivity can be combated by frequency diversity. We revisit and further investigate an approach to frequency diversity based on the spreading of subsets of the data elements over corresponding subsets of the OFDM subcarriers and term this approach Partial Data Spreading (PDS). The size of said subsets, which we designate as the spreading factor, is a design parameter of PDS, and by properly choosing this factor, depending on the system designer's requirements, an adequate compromise between good system performance and low complexity can be found. We show how PDS can be combined with ML, MMSE and ZF data detection, and it is recognized that MMSE data detection offers a good compromise between performance and complexity. After having presented the utilization of PDS in OFDM transmission without FEC encoding, we also show that PDS readily lends itself to FEC encoded OFDM transmission. We show that in this case the system performance can be significantly enhanced by specific schemes of interleaving and utilization of reliability information developed in the thesis. A severe problem of OFDM transmission is the large Peak-to-Average-Power Ratio (PAPR) of the OFDM symbols, which hampers the application of power efficient transmit amplifiers. Our investigations reveal that PDS inherently reduces the PAPR. Another approach to PAPR reduction is the well known scheme Selective Data Mapping (SDM). In the thesis it is shown that PDS can be beneficially combined with SDM to the scheme PDS-SDM, jointly exploiting the PAPR reduction potentials of both schemes.
However, even when such a PAPR reduction is achieved, the amplitude maximum of the resulting OFDM symbols is not constant, but depends on the data content. This entails the disadvantage that the power amplifier cannot be designed for a fixed amplitude maximum, which would be desirable with a view to achieving high power efficiency. In order to overcome this problem, we propose the scheme Optimum Clipping (OC), in which we obtain the desired fixed amplitude maximum by a specific combination of the measures clipping, filtering and rescaling. In OFDM transmission a certain number of OFDM subcarriers have to be sacrificed for pilot transmission in order to enable channel estimation in the receiver. For a given energy of the OFDM symbols, the question arises how this energy should be subdivided among the pilots and the data carrying OFDM subcarriers. If a large portion of the available transmit energy goes to the pilots, then the quality of channel estimation is good, but the data detection performs poorly. Data detection also performs poorly if the energy provided for the pilots is too small, because then the channel estimate indispensable for data detection is not accurate enough. We present a scheme for assigning the energy to pilot and data OFDM subcarriers in an optimum way that minimizes the symbol error probability as the ultimate quality measure of the transmission. The major part of the thesis is dedicated to point-to-point OFDM transmission systems. Towards the end of the thesis we show that PDS can also be applied to multipoint-to-point OFDM transmission systems encountered for instance in the uplinks of mobile radio systems.
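The PAPR notion and the effect of amplitude clipping can be sketched briefly. The toy example below (parameters invented; the filtering and rescaling steps of OC are omitted) computes the PAPR of a worst-case OFDM symbol via a direct IDFT and shows that clipping enforces a fixed amplitude maximum regardless of the data content.

```python
# PAPR of a toy OFDM symbol and hard amplitude clipping (illustrative only).
import cmath
import math

def idft(freq):
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * i / n)
                for k in range(n)) / n for i in range(n)]

def papr_db(x):
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(x, a_max):
    # Limit the amplitude of each sample while preserving its phase.
    return [v if abs(v) <= a_max else v / abs(v) * a_max for v in x]

N = 16
data = [1.0] * N                 # worst case: all subcarriers add in phase
x = idft(data)                   # time-domain OFDM symbol
papr_before = papr_db(x)         # 10*log10(N) ~ 12 dB for this symbol
clipped = clip(x, 0.25)
peak_after = max(abs(v) for v in clipped)
```

The all-in-phase symbol concentrates its energy in one sample, giving the maximum possible PAPR of 10·log10(N) ≈ 12 dB for N = 16; after clipping, the amplitude maximum is pinned at the chosen value, which is the property OC exploits when dimensioning the power amplifier.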

Wireless Sensor Networks (WSN) are dynamically-arranged networks typically composed of a large number of arbitrarily-distributed sensor nodes with computing capabilities contributing to at least one common application. The main characteristic of these networks is that of being functionally constrained due to a scarce availability of resources and strong dependence on uncontrollable environmental factors. These conditions introduce severe restrictions on the applicability of classic real-time methods aiming at guaranteeing time-bounded communications. Existing real-time solutions tend to apply concepts that were originally not conceived for sensor networks, idealizing realistic application scenarios and overlooking important design limitations. This results in a number of misleading practices contributing to approaches of restricted validity in real-world scenarios. Resolving the conflict between WSNs and real-time objectives starts with a review of the basic fundamentals of existing approaches. In doing so, this thesis presents an alternative approach based on a generalized timeliness notion suited to the particularities of WSNs. The new conceptual notion allows the definition of feasible real-time objectives, opening a new scope of possibilities not constrained to idealized systems. The core of this thesis is based on the definition and application of Quality of Service (QoS) trade-offs between timeliness and other significant QoS metrics. The analysis of local and global trade-offs provides a step-by-step methodology identifying the correlations between these quality metrics. This association enables the definition of alternative trade-off configurations (set points) influencing the quality performance of the network at selected instants of time. With the basic grounds established, the above concepts are embedded in a simple routing protocol constituting a proof of concept for the validity of the presented analysis. Extensive evaluations under realistic scenarios are conducted in simulation environments as well as on real testbeds, validating the consistency of this approach.

Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)

The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation for achieving the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis. Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by calculating the redundant information from the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is more compact and transparent than other model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution. It indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can then be constructed to generate the residual. This method cannot isolate sensor faults. In the thesis, a special decision making logic (DML) is designed, based on individual signal analysis of the residuals, to isolate the fault. The reconfigurable controller is designed with the backstepping technique.
The backstepping method is a recursive Lyapunov-based approach that can deal with nonlinear systems. Some system variables are considered as "virtual controls" during the design procedure. The feedback control laws and the associated Lyapunov function can then be constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as the fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modeled as an uncertainty of the system and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller. It can be modeled as a parameter uncertainty of the controller. All parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration the pre-specified control performance can be recovered. The FDI-integrated FTC based on the backstepping technique is implemented successfully on the electro-hydraulic testbed. On-line robust FDI and controller reconfiguration are achieved. The tracking performance of the controlled system is guaranteed and the considered faults can be tolerated. However, the problem of a theoretical robustness analysis for the time delay caused by the fault diagnosis remains open.
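The residual-generation principle behind the FDI scheme can be conveyed by a toy example. The sketch below uses a hypothetical first-order plant rather than the thesis' differential-geometric observer: a fault-free model runs in parallel with the (possibly faulty) plant, and a threshold on the output residual flags an actuator fault shortly after it occurs.

```python
# Toy model-based fault detection (plant, fault, and threshold all invented).
def simulate_fault_detection(steps=1000, dt=0.01, fault_step=500,
                             a=1.0, b=2.0, threshold=0.05):
    x_plant, x_model = 0.0, 0.0
    detected_at = None
    for k in range(steps):
        u = 1.0                                           # constant demand
        u_actual = u * (0.5 if k >= fault_step else 1.0)  # actuator loses 50%
        # Plant (with fault) and fault-free reference model, both Euler steps
        # of x' = -a*x + b*u.
        x_plant += dt * (-a * x_plant + b * u_actual)
        x_model += dt * (-a * x_model + b * u)
        residual = abs(x_plant - x_model)
        if detected_at is None and residual > threshold:
            detected_at = k
    return detected_at

k_detect = simulate_fault_detection()
```

With these numbers the residual, which is identically zero before the fault, crosses the threshold a handful of steps after the fault is injected, illustrating why the remaining detection delay matters for the robustness analysis mentioned above.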

Channel estimation is of great importance in many wireless communication systems, since it influences the overall performance of a system significantly. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals, or so-called pilots, that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems, more and more attention is being paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference and also the interference between the channel coefficients. The performance of JCE can be evaluated in noise limited systems by the SNR degradation and in interference limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, regardless of whether the signals are represented in the time domain or in the frequency domain. Optimum pilots like disjoint pilots, Walsh code based pilots or CAZAC code based pilots, whose constructions are described in this thesis, do not show any SNR degradation when applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transformation. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transformation. However, even for optimum pilots, different variation coefficients are obtained in interference limited systems. Furthermore, especially for OFDM-based transmission schemes, the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots.
CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the part of the transmit signal that originates from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to one single service area, and considering the interference from adjacent service areas that arises from a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes that originate from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is guided by the evaluated implementation complexity. According to the chosen parameterization of the regarded OFDM-based and FMT-based systems, it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT applying CAZAC codes as pilots. The results of this thesis can be used as a basis for further theoretical research and also for future JCE implementation in wireless systems.
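The CAZAC properties that motivate this pilot choice are easy to verify numerically. The sketch below (sequence length and root chosen arbitrarily) generates a Zadoff-Chu sequence, a standard CAZAC construction, and checks its Constant Amplitude (hence 0 dB PAPR for the pilot part of the signal) and Zero cyclic Autocorrelation at all non-zero lags.

```python
# Zadoff-Chu CAZAC sequence check (length N=13 and root u=1 are arbitrary).
import cmath

def zadoff_chu(u, n):
    # Standard ZC construction for odd length n; root u must be coprime to n.
    return [cmath.exp(-1j * cmath.pi * u * k * (k + 1) / n) for k in range(n)]

def cyclic_autocorr(seq, lag):
    n = len(seq)
    return sum(seq[k] * seq[(k + lag) % n].conjugate() for k in range(n))

zc = zadoff_chu(1, 13)
amplitudes = [abs(v) for v in zc]                      # constant amplitude
sidelobes = [abs(cyclic_autocorr(zc, lag))             # zero autocorrelation
             for lag in range(1, 13)]
```

Every sample has unit magnitude and every non-zero-lag cyclic autocorrelation vanishes (up to rounding), which is exactly the pair of properties the abstract attributes to CAZAC pilots.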

Photonic crystals are inhomogeneous dielectric media with a periodic variation of the refractive index. A photonic crystal provides new tools for the manipulation of photons and has therefore received great interest in a variety of fields. Photonic crystals are expected to be used in novel optical devices such as thresholdless laser diodes, single-mode light emitting diodes, small waveguides with low-loss sharp bends, small prisms, and small integrated optical circuits. In some respects they can operate as "left handed materials", capable of focusing transmitted waves into a sub-wavelength spot due to negative refraction. The thesis focuses on the applications of photonic crystals in communications and optical imaging:

- Photonic crystal structures for potential dispersion management in optical telecommunication systems
- 2D non-uniform photonic crystal waveguides with a square lattice for wide-angle beam refocusing using negative refraction
- 2D non-uniform photonic crystal slabs with a triangular lattice for all-angle beam refocusing
- A compact phase-shifted band-pass transmission filter based on photonic crystals

In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid-1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper adoption of IPC. First, IPC may return false counterexamples, since its computational bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable for different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, the work proposes a decomposition-based reachability analysis to solve the problem of identifying reachability information automatically.
Second, this work develops a generic, reusable set of properties for protocol compliance verification.
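The flavor of bounded safety checking, and of the false counterexamples that arise from unreachable starting states, can be shown on a toy design. The sketch below enumerates states explicitly instead of encoding the unrolled transition relation into SAT/SMT as IPC does; the wrap-at-5 counter and the bad state are invented for illustration.

```python
# Toy bounded safety check on a 3-bit counter that wraps after 5.
def step(state):
    return 0 if state == 5 else (state + 1) % 8

def bounded_check(initial_states, bad, k):
    """Return a trace reaching `bad` within k steps, or None if none exists."""
    frontier = {s: [s] for s in initial_states}
    for _ in range(k):
        nxt = {}
        for s, trace in frontier.items():
            t = step(s)
            if t == bad:
                return trace + [t]       # counterexample trace
            nxt.setdefault(t, trace + [t])
        frontier = nxt
    return None

# From the real reset state 0, the bad state 7 is never reached.
holds = bounded_check({0}, bad=7, k=20)
# Starting the check from the unreachable state 6 yields a counterexample --
# the analogue of IPC's false counterexamples, ruled out in practice by
# strengthening the property with a reachability invariant (state <= 5).
spurious = bounded_check({6}, bad=7, k=20)
```

The second call demonstrates why local, bounded models need invariants: state 6 can never occur in real operation, yet the bounded check happily extends it into a "counterexample".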

Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and the robustness required to maintain signal integrity. Field programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. The recent research field of evolutionary concepts focuses predominantly on the digital domain and is still advancing in the analog domain. Thus, the main research goal is to combine the ever increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies, to yield an intelligent and smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work suggests and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable to support a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and the common industrial application scenarios and circuits, in particular including resource and sizing analysis for proper motivation of design decisions. The pursued intermediate granular level approach, called Field Programmable Medium-granular mixed signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities.
The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and their merger with reconfigurable or programmable analog concepts as well as with industrial electronics standards and needs, for next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in the 0.35 µm (C35B4) Austriamicrosystems technology, can be used as a single-instance, off-the-shelf chip at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely the PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, ranging from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. More extensive low-power design issues, e.g., sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine-granular approaches were overcome by the proposed intermediate granular approach. The sensor signal conditioning instrumentation amplifier designed in this way was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250. In an adaptive prototype, evolutionary approaches, in particular based on particle swarm optimization with multiple objectives, were deployed to all the test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift.
The variations observed in the performance of the test samples were compensated through reconfiguration for the desired specification.
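A heavily simplified, single-objective sketch of such a particle swarm trimming loop might look as follows. The quadratic deviation model is a hypothetical stand-in for real FPMA measurements, and all parameter values are illustrative; the thesis itself deploys multi-objective variants on hardware in the loop:

```python
import random

# Minimal particle swarm optimization (PSO) sketch: trim two circuit
# parameters so that a simulated deviation measure approaches zero.
def deviation(p):
    gain_err, offset = p
    return (gain_err - 0.3) ** 2 + (offset + 0.1) ** 2   # optimum at (0.3, -0.1)

def pso(n_particles=20, iters=60, w=0.5, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    best = [p[:] for p in pos]                        # per-particle best position
    gbest = min(best, key=deviation)[:]               # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (best[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if deviation(pos[i]) < deviation(best[i]):
                best[i] = pos[i][:]
                if deviation(best[i]) < deviation(gbest):
                    gbest = best[i][:]
    return gbest

print(pso())  # converges near the optimum (0.3, -0.1)
```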

The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these data communication schemes, the receiver has to extract the corresponding clock from the serial data stream by a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, the data communication transceivers always work in a plesiochronous condition, an operation with slightly different reference frequencies. The difference of the data rates is covered by an elastic buffer. In a data readout system of a particle physics experiment, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the networks. The plesiochronous condition in these networks is undesirable because it complicates the time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication networks can support the clock distribution function, the system complexity can be largely reduced. The CDRs on all detector nodes have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to be used as the reference timing for their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It possesses a two-loop topology, consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible.
Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. The 1/4-rate time-interleaving architecture is used to reduce the operation frequency and optimize the power consumption. The clock-jitter-filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage controlled oscillator (VCO). The loop bandwidth of the clock-jitter-filter is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock-jitter-filter with LC-VCO were implemented in 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR can cover data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz. Its jitter tolerance complies with the SONET standard. The clock-jitter-filter has configurable input/output frequencies from 9.191 to 78.125 MHz. Its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock-jitter-filter can generate a clock with a jitter of 4.2 ps rms from an incoming serial data stream with inter-symbol-interference jitter of 150 ps peak-to-peak.
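The effect of the clock-jitter-filter loop bandwidth can be sketched with a first-order loop model: the recovered clock phase tracks the input phase through a loop whose gain is proportional to the loop bandwidth. The gain values and jitter waveform below are illustrative assumptions, not measured chip data:

```python
import math

# First-order model of a jitter-filter loop: a small loop gain `alpha`
# (i.e. a narrow loop bandwidth) low-pass filters, and hence suppresses,
# high-frequency jitter on the input phase.
def filtered_jitter(alpha, n=2000):
    phase_out = 0.0
    peaks = []
    for k in range(n):
        phase_in = 0.1 * math.sin(2 * math.pi * k / 20)  # fast sinusoidal jitter
        phase_out += alpha * (phase_in - phase_out)      # loop update
        if k > n // 2:                                   # ignore the transient
            peaks.append(abs(phase_out))
    return max(peaks)                                    # residual jitter amplitude

wide = filtered_jitter(alpha=0.8)     # wide loop bandwidth: jitter passes through
narrow = filtered_jitter(alpha=0.02)  # narrow loop bandwidth: jitter suppressed
print(wide, narrow)                   # narrow << wide
```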

Analog sensor electronics requires special care during design in order to increase the quality and precision of the signal and the lifetime of the product. Nevertheless, it can experience static deviations due to manufacturing tolerances and dynamic deviations due to operation in a non-ideal environment. Therefore, advanced applications such as MEMS technology employ a calibration loop to deal with the deviations; unfortunately, this loop is considered only in the digital domain, which cannot cope with all analog deviations, such as saturation of the analog signal. On the other hand, rapid prototyping is essential to decrease the development time and the cost of products in small quantities. Recently, evolvable hardware has been developed with the motivation to cope with the mentioned sensor electronics problems. However, industrial specifications and requirements are not considered in the hardware learning loop; it merely minimizes the error between the required output and the real output generated for a given test signal. The aim of this thesis is to synthesize generic organic-computing sensor electronics and to return hardware with predictable behavior for embedded system applications that gains industrial acceptance. Therefore, the hardware topology is constrained to standard hardware topologies, standard hardware specifications are included in the optimization, and a hierarchical optimization, abstracted from the synthesis tools, first evolves the building blocks and then evolves the abstract level that employs these optimized blocks. On the other hand, measuring some of the industrial specifications requires expensive equipment, and measuring others is time-consuming, which is unfavorable for embedded system applications.
Therefore, the novel approach of "mixtrinsic multi-objective optimization" is proposed, which simulates or estimates the specifications that are hard to measure due to cost or time requirements, while intrinsically measuring the specifications that are highly sensitive to deviations. These approaches succeed in optimizing the hardware to meet the industrial specifications with a low-cost measurement setup, which is essential for embedded system applications.

The present thesis deals with multi-user mobile radio systems, and more specifically, the downlinks (DL) of such systems. As a key demand on future mobile radio systems, they should enable the highest possible spectrum and energy efficiency. It is well known that, in principle, the utilization of multiple antennas in the form of MIMO systems offers considerable potential to meet this demand. Concerning the energy issue, the DL is more critical than the uplink. This is due to the growing importance of wireless Internet applications, in which the DL data rates and, consequently, the radiated DL energies tend to be substantially higher than the corresponding uplink quantities. In this thesis, precoding schemes for MIMO multi-user mobile radio DLs are considered, where, in order to keep the complexity of the mobile terminals as low as possible, the rationale receiver orientation (RO) is adopted, with the main focus on further reducing the required transmit energy in such systems. Unfortunately, besides the mentioned low receiver complexity, conventional RO schemes, such as Transmit Zero Forcing (TxZF), do not offer any transmit energy reductions as compared to conventional transmitter oriented schemes. Therefore, the main goal of this thesis is the design and analysis of precoding schemes in which such transmit energy reductions become feasible, while virtually maintaining the low receiver complexity, by replacing the conventional unique mappings with selectable representations of the data. Concerning the channel access scheme, Orthogonal Frequency Division Multiplex (OFDM) is presently favored as the most promising candidate in the standardization process of the enhanced 3G and forthcoming 4G systems, because it allows a very flexible resource allocation and low receiver complexity.
Receiver oriented MIMO OFDM multi-user downlink transmission, in which channel equalization is already performed in the transmitter of the access point, further contributes to low receiver complexity in the mobile terminals. For these reasons, OFDM is adopted in the target system of the considered receiver oriented precoding schemes. In the considered precoding schemes, knowledge of channel state information (CSI) in the access point, in the form of the channel matrix, is essential. Independently of the applied duplexing scheme, FDD or TDD, the provision of this information to the access point is always erroneous. However, it is shown that the impact of such deviations scales not only with the variance of the channel estimation errors, but also with the required transmit energies. Accordingly, the reduced transmit energies of the precoding schemes with selectable data representation also have the advantage of a reduced sensitivity to imperfect knowledge of CSI. In fact, these two advantages are coupled with each other.
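The idea of performing channel equalization in the transmitter can be sketched with a minimal zero-forcing example. The 2x2 real-valued channel matrix is a toy assumption (real systems work with complex-valued MIMO OFDM channels), and perfect CSI is assumed:

```python
# Transmit zero-forcing (TxZF) sketch: with the channel matrix H known at the
# access point, precoding with the inverse of H pre-equalizes the channel, so
# each terminal receives its own symbol free of intracell interference.
# Noiseless 2x2 real-valued toy channel.

def mat_inv2(H):
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

H = [[1.0, 0.4],     # rows: terminals, columns: transmit antennas
     [0.3, 0.9]]
data = [1.0, -1.0]   # data symbols intended for terminals 1 and 2

tx = mat_vec(mat_inv2(H), data)   # precoded transmit signal t = H^{-1} d
rx = mat_vec(H, tx)               # each terminal sees its own symbol directly
print(rx)  # ≈ [1.0, -1.0]: no receiver-side equalization needed
```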

In this thesis, the task of channel estimation in service area based mobile radio air interfaces beyond 3G is considered. A system concept named Joint Transmission and Detection Integrated Network (JOINT) forms the target platform for the investigations. A single service area of JOINT is considered, in which a number of mobile terminals are supported by a number of radio access points, which are connected to a central unit responsible for the signal processing. The modulation scheme of JOINT is OFDM. Pilot-aided channel estimation is considered, which has to be performed only in the uplink of JOINT, because the duplexing scheme TDD is applied. In this way, the complexity of the mobile terminals is reduced, because they do not need a channel estimator. Based on the signals received by the access points, the central unit estimates the channel transfer functions jointly for all mobile terminals. This is done by resorting to the a priori knowledge of the radiated pilot signals and by applying the technique of joint channel estimation, which is developed in the thesis. The quality of the gained estimates is judged by the degradation of their signal-to-noise ratio as compared to the signal-to-noise ratio of the respective estimates gained in the case of a single mobile terminal radiating its pilots. In the case of single-element receive antennas at the access points, this degradation depends solely on the structure of the applied pilots. The thesis shows how the SNR degradation can be minimized by a proper design of the pilots. Besides using appropriate pilots, the performance of joint channel estimation can be further improved by the inclusion of additional a priori information in the estimation process. An example of such additional information is the knowledge of the directional properties of the radio channels. This knowledge can be gained if multi-element antennas are applied at the access points.
Further, a priori channel state information in the form of the power delay profiles of the radio channels can be included in the estimation process by applying the minimum mean square error estimation principle to joint channel estimation. After the intensive study of the problem of joint channel estimation in JOINT, the thesis concludes by considering the impact of the unavoidable channel estimation errors on the performance of data estimation in JOINT. For the case of small channel estimation errors occurring due to the presence of noise at the access points, the performance of joint detection in the uplink and of joint transmission in the downlink of JOINT is investigated based on simulations. For the uplink, which utilizes joint detection, it is shown to which degree the bit error probability increases due to channel estimation errors. For the downlink, which utilizes joint transmission, channel estimation errors lead to an increase of the required transmit power, which is quantified by the simulation results.
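The principle of jointly estimating all channels from superimposed pilots can be sketched for a noiseless toy case. Single-tap real-valued channels and the particular (orthogonal) pilot choice are simplifying assumptions for illustration only:

```python
# Joint channel estimation sketch: two terminals radiate known pilots
# simultaneously; the central unit solves the resulting linear system for
# both channel coefficients at once (least squares, noiseless toy case).

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system A x = b
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(a22 * b[0] - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det]

h_true = [0.8, -0.5]            # channel coefficients of terminals 1 and 2
pilots = [[1.0,  1.0],          # pilot symbol of each terminal at time k=0
          [1.0, -1.0]]          # ...and at time k=1 (orthogonal pilots)

# Superimposed receive signal at the access point: r[k] = sum_m h_m * p_m[k]
r = [sum(h_true[m] * pilots[k][m] for m in range(2)) for k in range(2)]

h_est = solve2(pilots, r)       # joint estimate of both channels
print(h_est)  # ≈ [0.8, -0.5]
```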

The thesis focuses on the modelling and simulation of a Joint Transmission and Detection Integrated Network (JOINT), a novel air interface concept for B3G mobile radio systems. Besides the utilization of the OFDM transmission technique, a promising candidate for future mobile radio systems, and of the duplexing scheme time division duplexing (TDD), the subdivision of the geographical domain to be supported by mobile radio communications into service areas (SAs) is a highlighted concept of JOINT. An SA consists of neighboring sub-areas, which correspond to the cells of conventional cellular systems. The signals in an SA are jointly processed in a Central Unit (CU) in each SA. The CU performs joint channel estimation (JCE) and joint detection (JD) in the form of the receive zero-forcing (RxZF) filter for the uplink (UL) transmission, and joint transmission (JT) in the form of the transmit zero-forcing (TxZF) filter for the downlink (DL) transmission. By these algorithms, intra-SA multiple access interference (MAI) can be eliminated within the limits of the used model, so that unbiased data estimates are obtained, and most of the computational effort is moved from the mobile terminals (MTs) to the CU, so that the MTs can manage with low complexity. A simulation chain of JOINT has been established by the author in the software MLDesigner, based on time-discrete equivalent lowpass modelling. In this simulation chain, all key functionalities of JOINT are implemented. The simulation chain is designed for link-level investigations. A number of channel models are implemented for both the single-SA scenario and the multiple-SA scenario, so that the system performance of JOINT can be comprehensively studied. It is shown that in JOINT a duality, or symmetry, of the MAI elimination in the UL and in the DL exists. Therefore, the typical noise enhancement going along with the MAI elimination by JD and JT, respectively, is the same in both links.
In the simulations, the impact of channel estimation errors on the system performance is also studied. In the multiple-SA scenario, due to the existence of inter-SA MAI, which cannot be suppressed by the algorithms of JD and JT, the system performance in terms of the average bit error rate (BER) and the BER statistics degrades. A collection of simulation results shows the potential of JOINT with respect to the improvement of the system performance and the enhancement of the spectrum efficiency as compared to conventional cellular systems.

In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e., the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arriving at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. As opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimal, whereas the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems, the implementation complexity corresponds to the weight, volume, cost, etc. of the equipment. Therefore, complexity is an important aspect which should be taken into account when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS).
With the above mentioned complexity features of the rationales Tx orientation and Rx orientation in mind, in the uplink (UL), i.e., in the radio link from the MT to the BS, the quasi-natural choice would be Tx orientation, which leads to low-cost transmitters at the MTs, whereas in the downlink (DL), i.e., in the radio link from the BS to the MTs, the rationale Rx orientation would be the favored alternative, because it results in simple receivers at the MTs. Mobile radio downlinks following the rationale Rx orientation are considered in this thesis. Modern mobile radio communication systems are cellular systems, in which both intracell and intercell interference exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated, or at least reduced, by joint signal processing that considers all the signals in the considered cell. However, such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is also required with respect to the growing electro-phobia of the public. The transmit energy reduction for multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in this thesis. Among the questions still open in this research area, two questions of major importance are considered here. MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns linear Rx oriented transmission schemes combined with MIMO antenna structures; the benefit of MIMO for linear Rx oriented transmission schemes is studied in the thesis.
The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question concerns the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.

The present thesis deals with a novel approach to increasing the resource usage in digital communications. In digital communication systems, each information bearing data symbol is associated with a waveform which is transmitted over a physical medium. The time or frequency separations among the waveforms associated with the information data have always been chosen to avoid or limit the interference among them. By doing so, in the presence of a distortionless ideal channel, a single receive waveform is affected as little as possible by the presence of the other waveforms. The conditions necessary to guarantee the absence of any interference among the waveforms are well known and consist of a relationship between the minimum time separation among the waveforms and their bandwidth occupation or, equivalently, the minimum frequency separation and their time occupation. These conditions are referred to as the Nyquist assumptions. The key idea of this work is to relax the Nyquist assumptions and to transmit with a time and/or frequency separation between the waveforms smaller than the minimum required to avoid interference. The reduction of the time and/or frequency separation generates not only an increment of the resource usage, but also a degradation in the quality of the received data. Therefore, to maintain a certain quality in the received signal, we have to increase the amount of transmitted power. We investigate the trade-off between the increment of the resource usage and the corresponding performance degradation in three different cases. The first is the single carrier case, in which all waveforms have the same spectrum but different temporal locations. The second is the multi carrier case, in which each waveform has its distinct spectrum and occupies all the available time. The third is the hybrid case, in which each waveform has its unique time and frequency location.
These different cases are framed within the general system modelling developed in the thesis so that they can be easily compared. We evaluate the potential of the key idea of the thesis by choosing a set of four possible waveforms with different characteristics. By doing so, we study the influence of the waveform characteristics in the three system configurations. We propose an interpretation of the results by modifying the well-known Shannon capacity formula and by explicitly expressing its dependency on the increment of resource usage and on the performance degradation. The results are very promising. We show that both in the case of a single carrier system with a time-limited waveform and in the case of a multi-carrier system with a frequency-limited waveform, the reduction of the time or frequency separation, respectively, has a positive effect on the channel capacity. The latter, depending on the actual SNR, can double or increase even more significantly.
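The trade-off can be made concrete with the classical Shannon formula. The compression factor and SNR penalty below are purely hypothetical numbers chosen to illustrate the effect, not results from the thesis:

```python
import math

# Illustrative reading of the modified capacity argument: compressing the
# time separation by a factor `tau` < 1 increases the symbol rate (resource
# usage) by 1/tau, while the induced interference is modelled here simply
# as an SNR penalty `loss_db`.
def capacity(bandwidth_hz, snr_db):
    # Shannon capacity: C = B * log2(1 + SNR)
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

B, snr_db = 1e6, 20.0
nyquist = capacity(B, snr_db)              # orthogonal (Nyquist) signalling

tau, loss_db = 0.5, 6.0                    # 2x denser symbols, 6 dB SNR penalty
faster = (1 / tau) * capacity(B, snr_db - loss_db)

print(nyquist, faster)  # at high SNR, denser packing can still win
```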

The present thesis deals with a novel air interface concept for beyond 3G mobile radio systems. Signals received at a certain reference cell in a cellular system which originate in neighboring cells of the same cellular system are undesired and constitute the intercell interference. Due to intercell interference, the spectrum capacity of cellular systems is limited, and therefore the reduction of intercell interference is an important goal in the design of future mobile radio systems. In the present thesis, a novel service area based air interface concept is investigated in which interference is combated by joint detection and joint transmission, providing an increased spectrum capacity as compared to state-of-the-art cellular systems. Various algorithms are studied with the aid of which intra-service-area interference can be combated. In the uplink transmission, optimum joint detection minimizes the probability of an erroneous decision. Alternatively, suboptimum joint detection algorithms offering reduced complexity can be applied. By linear receive zero-forcing joint detection, interference in a service area is eliminated, while linear minimum mean square error joint detection performs a trade-off between interference elimination and noise enhancement. Moreover, iterative joint detection is investigated, and it is shown that convergence of the data estimates of iterative joint detection without data estimate refinement towards the data estimates of linear joint detection can be achieved. Iterative joint detection can be further enhanced by the refinement of the data estimates in each iteration. For the downlink transmission, the reciprocity of uplink and downlink channels is exploited by joint transmission, eliminating the need for channel estimation in the mobile terminals and therefore allowing for simple mobile terminals.
A novel algorithm for optimum joint transmission is presented, and it is shown how transmit signals can be designed which result in the minimum possible average bit error probability at the mobile terminals. By linear transmit zero-forcing joint transmission, interference in the downlink transmission is eliminated, whereas by iterative joint transmission, transmit signals are constructed in an iterative manner. In a next step, the performance of joint detection and joint transmission in service area based systems is investigated. It is shown that the price to be paid for the interference suppression in service area based systems is the suboptimum use of the receive energy in the uplink transmission and of the transmit energy in the downlink transmission, with respect to the single user reference system. In the case of receive zero-forcing joint detection in the uplink and transmit zero-forcing joint transmission in the downlink, i.e., in the case of linear unbiased data transmission, it is shown that the same price, quantified by the energy efficiency, has to be paid for interference elimination in both uplink and downlink. Finally, it is shown that if the system load is fixed, the number of active mobile terminals in an SA, and hence the spectrum capacity, can be increased without any significant reduction in the average energy efficiency of the data transmission.
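The trade-off between zero-forcing and minimum mean square error joint detection can be sketched on a small toy system. The system matrix, noise values, and regularization constant are illustrative assumptions, not from the thesis:

```python
# Receive zero-forcing (RxZF) vs. MMSE joint detection on a 2x2 real-valued
# toy system: RxZF inverts the system matrix and removes all interference at
# the price of noise enhancement; MMSE trades residual interference (bias)
# against noise and achieves the smaller total error in this example.

def mat_inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 0.9],               # nearly collinear columns: strong interference
     [0.9, 1.0]]
d = [1.0, -1.0]                # transmitted data
noise = [0.05, -0.02]
r = [mat_vec(A, d)[i] + noise[i] for i in range(2)]

# RxZF: d_zf = A^{-1} r  -> unbiased, but A^{-1} amplifies the noise
d_zf = mat_vec(mat_inv2(A), r)

# MMSE: d_mmse = (A^T A + sigma2 I)^{-1} A^T r
sigma2 = 0.01
At = [[A[j][i] for j in range(2)] for i in range(2)]
G = mat_mul(At, A)
G = [[G[i][j] + (sigma2 if i == j else 0.0) for j in range(2)] for i in range(2)]
d_mmse = mat_vec(mat_inv2(G), mat_vec(At, r))

print(d_zf, d_mmse)  # MMSE is biased towards zero but has lower total error here
```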

We present new algorithms and provide an overall framework for the interaction of the classically separate steps of logic synthesis and physical layout in the design of VLSI circuits. Due to the continuous development of smaller-sized fabrication processes and the subsequent domination of interconnect delays, the traditional separation of logical and physical design results in increasingly inaccurate cost functions and aggravates the design closure problem. Consequently, the interaction of the physical and logical domains has become one of the greatest challenges in the design of VLSI circuits. To address this challenge, we propose different solutions for the control and datapath logic of a design, and show how to combine them to reach design closure.