### Refine

#### Year of publication

- 2008 (33)

#### Document Type

- Doctoral Thesis (33)

#### Language

- English (33)

#### Keywords

- Finite-Elemente-Methode (3)
- Computergraphik (2)
- Level-Set-Methode (2)
- Raumakustik (2)
- Room acoustics (2)
- Visualisierung (2)
- computer graphics (2)
- domain decomposition (2)
- mesh generation (2)
- virtual acoustics (2)
- visualization (2)
- Ab-initio-Rechnung (1)
- Aerosol (1)
- Aerosol Particles (1)
- Aerosol Partikeln (1)
- Annulus (1)
- Anthropogener Einfluss (1)
- Bayes-Entscheidungstheorie (1)
- Bifurkation (1)
- Bildverarbeitung (1)
- Biot Poroelastizitätgleichung (1)
- Bioturbation (1)
- Blattschneiderameisen (1)
- Bounded Model Checking (1)
- CDS (1)
- CMOS (1)
- CMOS-Schaltung (1)
- CPDO (1)
- Cauchy-Born Regel (1)
- Cauchy-Born Rule (1)
- Center Location (1)
- Circle Location (1)
- Clock and Data Recovery Circuits (1)
- Cluster (1)
- Clusterion (1)
- Combinatorial Optimization (1)
- Computersimulation (1)
- Continuum-Atomistic Multiscale Algorithm (1)
- Credit Risk (1)
- Curvature (1)
- Datenrückgewinnungsschaltungen (1)
- Defaultable Options (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Dicarbonsäuren (1)
- Dreidimensionale Rekonstruktion (1)
- Dynamically reconfigurable analog circuits (1)
- EPDM (1)
- Elastomer (1)
- Entscheidungsbaum (1)
- Erreichbarkeit (1)
- FEM (1)
- Festkörper (1)
- Festkörpergrenzschichten (1)
- Fiber suspension flow (1)
- Filtration (1)
- Finanzmathematik (1)
- Finite Element Method (1)
- Finite Elementes (1)
- Fluid-Struktur-Wechselwirkung (1)
- Fragmentierung (1)
- Galerkin Verfahren (1)
- Galerkin methods (1)
- Gasphase (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Generalisierte Plastizität (1)
- Giga bit per second (1)
- Gittererzeugung (1)
- Gleitverschleiß (1)
- Homogeneous deformation (1)
- Hub Location Problem (1)
- Hysterese (1)
- IRMPD (1)
- Infrarotspektroskopie (1)
- Jitter (1)
- Kohäsive Grenzschichten (1)
- Kompression (1)
- Kontinuums-Atomistische Kopplung (1)
- Kopplungsproblem (1)
- Krümmung (1)
- Laminare Grenzschicht (1)
- Large Synchronous Networks (1)
- Layout (1)
- Leichtbau (1)
- Level set methods (1)
- Local continuum (1)
- Location (1)
- Low Jitter (1)
- Lärmbelastung (1)
- Lärmimmission (1)
- MBS (1)
- MKS (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Massenspektrometrie (1)
- Materialermüdung (1)
- Materielle Kräfte (1)
- Mathematical Finance (1)
- Mehrkörpersystem (1)
- Methode der finiten Elemente (1)
- Mikroklima (1)
- Mikromorphe Kontinua (1)
- Mixed integer programming (1)
- Model checking (1)
- Molecular Dynamics (1)
- Molekulardynamik (1)
- Multicriteria optimization (1)
- Multiperiod planning (1)
- Nichtlineare Approximation (1)
- Nichtlineare Dynamik (1)
- Nichtlineare Finite-Elemente-Methode (1)
- Nichtlineare Kontinuumsmechanik (1)
- Niederschlag (1)
- Non--local atomistic (1)
- Nonlinear Optimization (1)
- Numerische Homogenisierung (1)
- Numerische Integration (1)
- Optische Zeichenerkennung (1)
- Order of printed copy (1)
- Phase Transition Effect (1)
- Phase Transition Effekt (1)
- Piezokeramik (1)
- Plastizität (1)
- Protocol Compliance (1)
- Protonentransf (1)
- Punktprozess (1)
- Reachability (1)
- Rollreibung (1)
- Standortprobleme (1)
- Stochastic Control (1)
- Stochastische optimale Kontrolle (1)
- Stokes-Gleichung (1)
- Stoßdämpfer (1)
- Sublimation (1)
- Synchronnetze (1)
- System-on-Chip (1)
- Systemidentifikation (1)
- Taktrückgewinnungsschaltungen (1)
- Technische Mechanik (1)
- Teilchen (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Tribologie (1)
- Tropische Geometrie (1)
- Unstrukturiertes Gitter (1)
- Vegetationsentwicklung (1)
- Verdampfung (1)
- Verifikation (1)
- Verzweigung <Mathematik> (1)
- Viskoelastizität (1)
- Waldökosystem (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- ab initio (1)
- acoustic modeling (1)
- benders decomposition (1)
- bifurcation (1)
- closure approximation (1)
- cluster (1)
- cohesive interface (1)
- computational dynamics (1)
- computational homogenization (1)
- consistent integration (1)
- decision support systems (1)
- document analysis (1)
- dynamic calibration (1)
- elastomer (1)
- environmental noise (1)
- ferroelectric fatigue (1)
- ferroelektrische Ermüdung (1)
- filtration (1)
- finite Elasto-Plastizität (1)
- finite elasto-plasticity (1)
- finite volume method (1)
- fluid structure interaction (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- generalized plasticity (1)
- generic self-x sensor systems (1)
- generic sensor interface (1)
- gitter (1)
- heuristic (1)
- hybrid lightweight structures (1)
- hybride Leichtbaustrukturen (1)
- inelastic multibody systems (1)
- inelastische Mehrkörpersysteme (1)
- interface problem (1)
- konsistente Integration (1)
- layout analysis (1)
- level set method (1)
- material forces (1)
- mathematical modelling (1)
- micromorphic continua (1)
- netzgenerierung (1)
- nichtlineare Modellreduktion (1)
- nonlinear model reduction (1)
- nonwovens (1)
- numerische Dynamik (1)
- optical character recognition (1)
- porous media (1)
- poröse Medien (1)
- rheology (1)
- rolling friction (1)
- self calibration (1)
- sliding wear (1)
- solid interfaces (1)
- solvation (1)
- stochastic optimal control (1)
- transition metals (1)
- transmission conditions (1)
- tribology (1)
- viscoelasticity (1)
- well-posedness (1)
- Ökologie (1)
- Übergangsbedingungen (1)
- Übergangsmetall (1)

#### Abstracts

Nanotechnology is now recognized as one of the most promising areas for technological development in the 21st century. In materials research, the development of polymer nanocomposites is rapidly emerging as a multidisciplinary research activity whose results could widen the applications of polymers to the benefit of many different industries. Nanocomposites are a new class of composites: particle-filled polymers for which at least one dimension of the dispersed particles is in the nanometer range. In this area, polymer/clay nanocomposites have attracted considerable interest because they often exhibit remarkable property improvements compared to the virgin polymer or to conventional micro- and macrocomposites.
The present work addresses the toughening and reinforcement of thermoplastics via a novel method which allows micro- and nanocomposites to be achieved. Two matrices are used: amorphous polystyrene (PS) and semi-crystalline polyoxymethylene (POM). Polyurethane (PU) was selected as the toughening agent for POM and used in its latex form. It is noteworthy that the mean particle size of rubber latices closely matches that of conventional toughening agents (impact modifiers). Boehmite alumina and sodium fluorohectorite (FH) were used as reinforcements. One of the criteria for selecting these fillers was that they are water swellable/dispersible, so their nanoscale dispersion can also be achieved in an aqueous polymer latex. A systematic study was performed on how to adapt discontinuous and continuous manufacturing techniques for the related nanocomposites.
The dispersion of the nanofillers was characterized and discussed using transmission and scanning electron microscopy, atomic force microscopy (TEM, SEM and AFM, respectively) and X-ray diffraction (XRD). The crystallization of POM was studied by means of differential scanning calorimetry and polarized light optical microscopy (DSC and PLM, respectively). The mechanical and thermomechanical properties of the composites were determined by uniaxial tensile tests, dynamic-mechanical thermal analysis (DMTA), short-time creep tests, and thermogravimetric analysis (TGA).
PS composites were first produced by a discontinuous manufacturing technique, whereby FH or alumina was incorporated in the PS matrix by melt blending with and without precompounding of PS latex with the nanofiller. It was found that direct melt mixing (DM) of the nanofillers with PS resulted in microcomposites, whereas the latex-mediated pre-compounding (masterbatch technique, MB) yielded nanocomposites. FH was not intercalated by PS when prepared by DM. On the other hand, FH was well dispersed (mostly intercalated) in PS via the PS latex-mediated predispersion of FH following the MB route. Based on dynamic-mechanical and static tensile tests, the nanocomposites produced by MB outperformed the DM-compounded microcomposites with respect to properties such as stiffness, strength and ductility. The creep resistance (summarized in master curves) of the nanocomposites was improved compared to that of the microcomposites. Master curves (creep compliance vs. time), constructed from isothermal creep tests performed at different temperatures, showed that the nanofiller reinforcement mostly affects the initial creep compliance.
Next, ternary composites composed of POM, PU and boehmite alumina were produced by melt blending with and without latex precompounding. The latex precompounding served for the predispersion of the alumina particles; the related MB was produced by mixing the PU latex with water-dispersible boehmite alumina. The composites produced by the MB technique outperformed the DM-compounded composites with respect to most of the thermal and mechanical characteristics.
Toughened and/or reinforced PS- and POM-based composites were also successfully produced by a continuous extrusion technique. This technique resulted in good dispersion of both the nanofiller (boehmite) and the impact modifier (PU). Compared to the microcomposites obtained by conventional DM, the nanofiller dispersion became finer and more uniform when the water-mediated predispersion was used. The resulting structure markedly affected the mechanical properties (stiffness and creep resistance) of the corresponding composites. The impact resistance of POM was greatly enhanced by the addition of PU rubber when manufactured by the continuous extrusion technique. This was traced to the dispersed PU particle size being in the range required for conventional impact modifiers.

This thesis is devoted to the study of tropical curves with emphasis on their enumerative geometry. Major results include a conceptual proof of the fact that the number of rational tropical plane curves interpolating an appropriate number of general points is independent of the choice of points, the computation of intersection products of Psi-classes on the moduli space of rational tropical curves, a computation of the number of tropical elliptic plane curves of given degree and fixed tropical j-invariant, as well as a tropical analogue of the Riemann-Roch theorem for algebraic curves. The results were obtained in joint work with Hannah Markwig and/or Andreas Gathmann.
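
As background for the last-mentioned result: the tropical (graph-theoretic) Riemann-Roch theorem, in the form usually attributed to Baker-Norine and Gathmann-Kerber, states that for a divisor $D$ on a tropical curve $\Gamma$ of genus $g$, with rank function $r$ and canonical divisor $K_\Gamma$ (notation assumed here, not taken from the abstract):

```latex
r(D) - r(K_\Gamma - D) = \deg(D) + 1 - g,
\qquad
K_\Gamma = \sum_{P \in \Gamma} \bigl(\operatorname{val}(P) - 2\bigr)\, P,
```

which mirrors the classical Riemann-Roch formula for algebraic curves term by term.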

Today’s high-resolution digital images and videos require large amounts of storage space and transmission bandwidth. To cope with this, compression methods are necessary that reduce the required space while minimizing visual artifacts. We propose a compression method based on a piecewise linear color interpolation induced by a triangulation of the image domain. We present methods to significantly speed up the optimization process for finding the triangulation. Furthermore, we extend the method to digital videos.

Laser scanners that capture the surface of three-dimensional objects are widely used in industry nowadays, e.g., for reverse engineering or quality measurement. Hand-held scanning devices have the advantage that the laser device can be moved to any position, permitting scans of complex objects. But operating a hand-held laser scanner is challenging: the operator has to keep track of the scanned regions in his mind, and has no feedback on the sample density unless he starts the surface reconstruction after finishing the scan. We present a system that supports the operator by computing and rendering high-quality surface meshes of the captured data online, i.e., while he is still scanning, and in real time. Furthermore, it color-codes the rendered surface to reflect the surface quality. Thereby, instant feedback is provided, resulting in better scans in less time.
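
The piecewise linear color interpolation mentioned above can be illustrated with barycentric coordinates: inside each triangle, the color at a point is the weighted average of the vertex colors. This is a generic sketch (function names are my own, not from the thesis):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], dtype=float)
    u, v = np.linalg.solve(m, np.asarray(p, float) - np.asarray(a, float))
    return 1.0 - u - v, u, v

def interp_color(p, verts, colors):
    """Piecewise linear (barycentric) color interpolation at p."""
    w = barycentric(p, *verts)
    return sum(wi * np.asarray(ci, float) for wi, ci in zip(w, colors))
```

At the triangle's centroid all three weights equal 1/3, so the interpolated color is the mean of the vertex colors.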

Sublimation (evaporation) is widely used in different industrial applications; important examples are the sublimation (evaporation) of small solid particles and liquid droplets, e.g., in spray drying and fuel droplet evaporation. For several decades, sublimation technology has been used widely together with aerosol technology. This combination aims at obtaining various products with desired compositions and morphologies, and can be used in the fields of nanoparticle generation, particle coating through physical vapor deposition (PVD) and particle structuring. This doctoral thesis deals with experimental and theoretical investigations of the sublimation (evaporation) kinetics of fine aerosol particles (droplets). The experimental study was conducted in a test plant with on-line control of the most important parameters, such as heating temperature, gas flow and pressure. On-line and in-line particle measurements (optical sensor, APS) were employed. Relevant sublimation (evaporation) parameters such as heating temperature, particle concentration and aerosol residence time were investigated. Polydispersed particles (droplets) were introduced into the test plant as precursor aerosols. Two kinds of test materials were used: inorganic NH4Cl particles and organic DEHS droplets. NH4Cl particles with a smooth surface and with a porous structure were used in the experiments, and the influence of the particle morphology on the sublimation process was studied. Based on the experiments, different theoretical models were developed, and the simulation results under different parameters were compared with experimental results. The change of the particle concentration was discussed in particular, focusing on the relationship between the total particle concentration and the change of single particles with diverse initial diameters.

The sublimation kinetics of particles with different morphologies and different specific surface areas was also studied. The effect of the increased surface area on the sublimation process was included in the simulation, and the results were compared with experimental results. Based on material properties such as molecular weight, molecular size and vapor pressure, the sublimation (evaporation) kinetics was described, and optimum sublimation (evaporation) conditions with respect to the material properties were proposed. A Phase Transition Effect during sublimation (evaporation) was found, which describes the growth of large particles at the expense of small particles. A similar effect is observed in crystal suspensions (Ostwald ripening), but with a different physical background. In order to meet the need for in-line particle measurement, a hot-gas sensor (O.P.C.) was developed in this study for measuring the particle size and size distribution of an aerosol. With the newly developed measuring cell, the operating temperature of the aerosol could be increased up to 500°C.
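
The thesis develops its own sublimation models; as a point of reference, the classical textbook description of single-droplet evaporation/sublimation is the d²-law, in which the squared diameter shrinks linearly in time. A minimal sketch (rate constant K is a hypothetical input, not a value from the thesis):

```python
import numpy as np

def d2_law_diameter(d0, K, t):
    """Diameter under the classical d^2-law: d(t)^2 = d0^2 - K*t.
    d0: initial diameter [m], K: evaporation/sublimation rate [m^2/s]."""
    d2 = d0**2 - K * np.asarray(t, dtype=float)
    return np.sqrt(np.clip(d2, 0.0, None))   # diameter cannot go negative

def lifetime(d0, K):
    """Time until the particle has fully evaporated/sublimated."""
    return d0**2 / K
```

For example, a 1 µm particle with K = 1e-12 m²/s has shrunk to half its diameter after three quarters of its 1 s lifetime, reflecting the accelerating shrinkage of small particles that underlies the concentration effects discussed above.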

Dry Sliding and Rolling Tribotests of Carbon Black Filled EPDM Elastomers and Their FE Simulations
(2008)

Unlubricated sliding systems, being economic and environmentally benign, are already realized in bearings, where dry metal-plastic sliding pairs successfully replace lubricated metal-metal ones. Nowadays, a considerable part of tribological research concentrates on realizing unlubricated elastomer-metal sliding systems and on extending the application field of lubrication-free slider elements. In this thesis, the characteristics of dry sliding and friction are investigated for elastomer-metal sliding pairs. Ethylene-propylene-diene rubbers (EPDM) with and without carbon black (CB) filler were used; the filler content was varied, with EPDMs containing 0, 30, 45 and 60 parts per hundred rubber (phr) of CB being investigated. Quasistatic tension and compression tests and dynamic mechanical thermal analysis (DMTA) were carried out to analyze the static and viscoelastic behavior of the EPDMs. The tribological properties of the EPDMs were investigated using dry roller (metal)-on-plate (rubber) type tests (ROP), during which the normal load was varied. The coefficient of friction (COF) and the temperature were registered online during the tests, and the loss volumes were determined after certain test durations. The worn surfaces of the rubbers and of the steel counterparts were analyzed using scanning electron microscopy (SEM) to determine the wear mechanisms. Because chemical changes may take place during dry sliding due to the elevated contact temperature, the chemical composition of the surfaces was also analyzed before and after the tribotests; for these investigations X-ray photoelectron spectroscopy (XPS), sessile drop tests and Raman spectroscopy were used. In addition, the dry sliding tribotests were simulated using finite element (FE) codes for a better understanding of the related wear mechanisms.

Finally, as the internal damping of elastomers plays a great role in the sliding wear process, their viscoelasticity was taken into account. The effect of viscoelasticity was shown using the example of rolling friction: to study the rolling COF of the EPDM with 30 phr CB (EPDM 30), an FE model was created which considered the viscoelastic behavior of the rubber during rolling. The results showed that the incorporated CB enhanced the mechanical and tribological properties of the EPDMs (both COF and wear rate were reduced). Furthermore, the CB content of the EPDM fundamentally influences the observed wear mechanisms, and the wear characteristics also changed with the applied normal load. In the case of EPDM 30, a rubber tribofilm was found on the steel counterpart when tests were performed at high normal loads. Analysis of the chemical composition of the surfaces before and after the wear tests did not reveal notable changes. It was demonstrated that the FE method is a powerful tool to model both the dry sliding and the rolling performance of elastomers.

This thesis presents an approach to combine the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods, such as the frequently used Craig-Bampton approach, the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets: set 1 consists of deformation snapshots, while set 2 additionally contains velocities and accelerations. An error analysis shows that there is no convergence guarantee for deformation snapshots alone, whereas including the derivatives yields an error bound that diminishes for small time steps. The numerical results show better behaviour for the derivative snapshot method, as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems, especially when using commercial software, it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the computation step with the full model that is needed anyway to set up the POD basis (training step): the nonlinear term of inner forces and the stiffness matrix are output and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string in Matlab and an air spring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
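
The POD basis described above is commonly computed as the leading left singular vectors of a snapshot matrix; the discarded ("left-over") singular values quantify the truncation error. A minimal sketch of this standard construction (not the thesis's specific implementation):

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis of rank r from a snapshot matrix (n_dof x n_snapshots).
    Returns the basis U_r and the fraction of snapshot 'energy' it captures."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()
    return U[:, :r], energy

# Reduced-order approximation of a state x: x ≈ U_r @ (U_r.T @ x)
```

If the snapshots span only an r-dimensional subspace, the rank-r basis captures all the energy and reproduces any state in that subspace exactly; in practice one chooses r so that the captured energy is close to 1.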

In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originates in the Siemens company and has been in industrial use since the mid 1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper the adoption of IPC. First, IPC may return false counterexamples, since its computational bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples; identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified; this set, however, is not reusable across different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, it proposes a decomposition-based reachability analysis to identify reachability information automatically. Second, it develops a generic, reusable set of properties for protocol compliance verification.
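
The core idea behind Bounded Model Checking, on which IPC builds, is to unroll the transition relation a bounded number of steps and search for a path from an initial state to a bad state. Real BMC/IPC encodes this unrolling as a SAT instance; the explicit-state stand-in below (a toy example of my own, not the thesis's tool) illustrates the same bounded search:

```python
def bmc(init_states, step, bad, k):
    """Explicit-state stand-in for SAT-based BMC: look for a counterexample
    path of length <= k from an initial state to a state satisfying bad()."""
    frontier = {(s,) for s in init_states}
    for _ in range(k + 1):
        for path in frontier:
            if bad(path[-1]):
                return path                      # counterexample found
        frontier = {p + (t,) for p in frontier for t in step(p[-1])}
    return None                                  # property holds up to bound k

# Toy transition system: a 3-bit counter that wraps around, starting at 0.
step = lambda s: {(s + 1) % 8}
cex = bmc({0}, step, lambda s: s == 5, k=10)     # state 5 is reachable
```

Note the bounded nature of the check: a `None` result only certifies the property up to depth k, which is exactly why IPC needs reachability invariants to rule out spurious counterexamples arising from unreachable starting states.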

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization. In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update: a level set technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For the sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational cost of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
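
To illustrate the level set representation used above: the structure is the region where the level set function is negative, and a topological modification such as inserting a hole reduces to a pointwise combination of level set functions. A 2D toy sketch (my own simplification, not the paper's 3D algorithm):

```python
import numpy as np

def circle_level_set(X, Y, cx, cy, r):
    """Level set of a circle: positive inside, negative outside."""
    return r - np.hypot(X - cx, Y - cy)

def insert_hole(phi, X, Y, cx, cy, r):
    """Remove a circular hole from the structure {phi < 0} by a
    pointwise maximum with the hole's level set function."""
    return np.maximum(phi, circle_level_set(X, Y, cx, cy, r))

# Full-material design domain [0,1]^2, then a hole at the center.
xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
phi = np.full_like(X, -1.0)                  # phi < 0 everywhere: all material
phi = insert_hole(phi, X, Y, 0.5, 0.5, 0.2)  # topological-gradient-style hole
```

In an actual optimization loop, the topological gradient would indicate where such holes are inserted, and the mesh generator would then remesh the implicit geometry.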

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived, a proper algorithm is formulated and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium; this plays an essential role in the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium are derived in one and three dimensions. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the limiting cases, i.e. a fluid-fluid interface and a fluid-impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions; the matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced, and corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented.
For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
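
The iterative coupling across an interface can be illustrated by the classical Dirichlet-Neumann iteration from domain decomposition, here on a deliberately simple 1D Laplace model problem with analytic subdomain solves (my own toy example, far simpler than the Biot-Stokes system above):

```python
def dirichlet_neumann(theta=0.5, tol=1e-10, max_it=100):
    """Dirichlet-Neumann iteration for u'' = 0 on [0,1], u(0)=0, u(1)=1,
    split at the interface x = 0.5 (exact interface value: 0.5).
    theta is the relaxation parameter for the interface update."""
    lam = 0.0                         # initial guess for u at the interface
    for _ in range(max_it):
        # Subdomain 1 ([0, 0.5]): Dirichlet solve; u is linear from 0 to lam.
        flux = (lam - 0.0) / 0.5      # outgoing flux u'(0.5)
        # Subdomain 2 ([0.5, 1]): Neumann solve with u'(0.5)=flux, u(1)=1.
        u2_interface = 1.0 - 0.5 * flux
        new = theta * u2_interface + (1.0 - theta) * lam   # relaxed update
        if abs(new - lam) < tol:
            return new
        lam = new
    return lam
```

The stopping criterion on the interface value plays the role of the convergence checks mentioned in the thesis; without relaxation (theta = 1) this fixed-point iteration would oscillate for this symmetric problem.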

Rapid growth in sensors and sensor technology has introduced a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning; nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area and robustness to maintain signal integrity. Field programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. The recent research field of evolutionary concepts focuses predominantly on the digital domain and is still advancing in the analog domain. Thus, the main research goal is to combine the ever-increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies, to yield an intelligent and smart sensor system with acceptable fault tolerance and so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work proposes and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable for supporting a variety of sensors. The state of the art was investigated with regard to existing programmable/reconfigurable analog devices and common industrial application scenarios and circuits, in particular including resource and sizing analyses to properly motivate the design decisions. The pursued intermediate-granularity approach, called Field Programmable Medium-granular mixed-signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities.

The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts, merging them with reconfigurable or programmable analog concepts and with industrial electronics standards and the needs of next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, FPMA1 and FPMA2, designed in the 0.35 µm (C35B4) Austriamicrosystems technology, can be used as a single-instance, off-the-shelf chip at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, a PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits such as instrumentation amplifiers and filters. More extensive low-power design issues, e.g., sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine-granularity approaches were alleviated by the proposed intermediate-granularity approach. The sensor signal conditioning instrumentation amplifier designed in this way was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250. In an adaptive prototype, evolutionary approaches, in particular based on particle swarm optimization with multiple objectives, were deployed to all test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift. The variations observed in the performance of the test samples were compensated through reconfiguration to meet the desired specification.
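
The particle swarm optimization mentioned above can be sketched in its basic single-objective form (the thesis uses a multi-objective variant to tune chip configurations; this generic sphere-function example is mine, with standard textbook parameters):

```python
import random

def pso(f, dim, n=20, iters=150, seed=1):
    """Minimal particle swarm optimization: minimize f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=f)[:]                 # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the self-x setting, f would instead score a measured chip response against the target specification, and the swarm would search over reconfiguration parameters.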

Sound surrounds us at all times and in every place in our daily life, be it pleasant music in a concert hall or disturbing noise emanating from a busy street in front of our home. The basic physics is the same for both kinds of sound, namely sound waves propagating from a source, but we perceive them in different ways depending on our current mood and on whether the sound is wanted or not. In this thesis, both pleasant sound and disturbing noise are examined by simulating the sound and visualizing the results. Although the basic properties of music and traffic noise are the same, one is interested in different features: in a concert hall, for example, the reverberation time is an important quality measure, whereas for noise only the resulting sound level, for example on one's balcony, is of interest. Such differences are reflected in different simulation methods and required visualizations; therefore this thesis is divided into two parts. The first part, on room acoustics, deals with the simulation and novel visualizations of indoor sound and acoustic quality measures, such as definition (German "Deutlichkeit") and clarity index (German "Klarheitsmaß"). For the simulation, two different methods, a geometric (phonon tracing) and a wave-based (FEM) approach, are applied and compared. The visualization techniques give insight into the sound behaviour and the acoustic quality of a room from a global as well as a listener-based viewpoint. Furthermore, an acoustic rendering equation is presented, which is used to render interference effects for different frequencies. Last but not least, a novel visualization approach for low-frequency sound is presented, which enables the topological analysis of pressure fields based on room eigenfrequencies. The second part, on environmental noise, is concerned with the simulation and visualization of outdoor sound with a focus on traffic noise. The simulation instructions prescribed by national regulations are discussed in detail, and an approach for the computation of noise volumes, as well as an extension of the simulation allowing interactive noise calculation, are presented. Novel visualization and interaction techniques for the calculated noise data, incorporated in an interactive three-dimensional environment, enable easy comprehension of noise problems. Furthermore, additional information can be integrated into the framework to enhance the visualization of noise and the usability of the framework for different purposes.
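
The reverberation time cited above as a room-acoustic quality measure is classically estimated by Sabine's formula, RT60 = 0.161 · V / A, where V is the room volume and A the equivalent absorption area. This is the standard textbook estimate, not the thesis's phonon-tracing or FEM simulation:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time [s].
    surfaces: list of (area_m2, absorption_coefficient) pairs;
    A = sum of area * alpha is the equivalent absorption area [m^2 sabins]."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption
```

For instance, a 1000 m³ hall with 100 m² of perfectly absorbing surface (alpha = 1) yields RT60 = 1.61 s; real halls combine many surfaces with partial absorption coefficients.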

Grey-box modelling deals with models that integrate two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After presenting some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, namely the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, with a view to their applicability to grey-box modelling. These theories point directly towards a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the aforementioned multi-dimensional functions by wavelets; exploitation of the sparseness of the matrix of wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.
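Sequential Monte Carlo identification, the last step in this list, can be illustrated with a toy sketch. Everything below (the scalar model y = theta*x + noise, the particle count, the jitter step) is a hypothetical stand-in for the thesis's wavelet-coefficient setting, shown only to convey the particle-filtering idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: y_t = theta * x_t + Gaussian noise.
theta_true = 0.7
x = rng.normal(size=200)
y = theta_true * x + 0.1 * rng.normal(size=200)

# Minimal sequential importance resampling over the unknown coefficient.
particles = rng.uniform(-2.0, 2.0, size=5000)
for xt, yt in zip(x, y):
    w = np.exp(-0.5 * ((yt - particles * xt) / 0.1) ** 2)  # likelihood weights
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)  # resample
    particles = particles[idx] + 0.001 * rng.normal(size=particles.size)  # jitter

estimate = particles.mean()
```

The jitter step keeps particle diversity after resampling; in a static-parameter setting this is a common practical device rather than part of the exact model.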

Colorectal cancer is the second most prevalent cancer in both men and women in Europe. In 2002, alimentary cancer (oesophagus, stomach, intestines) made up 26% of the annual incident cases of cancer amongst males in Europe, whereby about half of those were cancers of the colon and rectum (Eurostat 2002). Epidemiological evidence accumulating over the last decades indicates that besides a genetic disposition, diet plays a strong epigenetic role in the genesis of cancer. It is generally assumed that diet is causal for up to 80% of colorectal cancer (Bingham 2000). With the prospect of an approximately 50% rise in global cancer incidence over the first two decades of the 21st century, the World Health Organisation (WHO) has emphasized the need for an improvement in nutrition. Indeed, there is increasing public health awareness with respect to nutrition. Today, living healthily is associated with less consumption of animal fats and red (processed) meat, moderate or no consumption of alcohol coupled with increased physical activity, and frequent intake of fruits, vegetables and whole grains (Bingham 1999; Johnson 2004). This ideology partly stems from scientific epidemiological evidence supporting an inverse correlation between the consumption of fruits and vegetables and the development of cancer. Besides fibre and essential micro-nutrients like ascorbate, folate, and tocopherols, the anti-carcinogenic properties of fruits and vegetables are generally thought to be rooted in the bioactivity of secondary plant components like flavonoids (Johnson 2004; Rice-Evans and Miller 1996; Rice-Evans 1995). Along with the increased public health awareness has also come a burgeoning and lucrative dietary supplement industry, which markets products based on polyphenols and other potentially healthy compounds, sometimes with questionable promises of better health and increased longevity.
These claims are based on accumulating in vitro and in vivo evidence indicating that flavonoids and polyphenols in fruits and vegetables can hinder proliferation, induce apoptosis of cancerous cells (Kern et al. 2005; Kumar et al. 2007; Thangapazham et al. 2007), act as antioxidants (Justino et al. 2006; Rice-Evans 1995) and influence cell signalling pathways (Marko et al. 2004; Joseph et al. 2007; Granado-Serrano et al. 2007), all of which are potential mechanisms proposed for their anti-carcinogenic activity. However, not only is the vast variety of supplements worrisome; also problematic are their easy accessibility (just a click away on the internet) and the amounts that can potentially be consumed. Such supplements are usually offered in pharmaceutical form (tablets, capsules, powder, concentrates) containing concentrations well beyond what is normally consumable from the diet. For example, quercetin's recommended intake is about 1 g daily, yet estimates portend a possible increase of up to 1000-fold of the daily intake of quercetin (Hertog et al. 1995). Mindful of the concept of dose coined in the words of the Swiss scientist Paracelsus, "What is it that is not poison? All things are poison and nothing is without poison. The right dose differentiates a poison and a remedy." ("Alle Dinge sind Gift und nichts ist ohn' Gift; allein die Dosis macht, dass ein Ding kein Gift ist"), it is thus conceivable that such high concentrations may not only reverse the acclaimed positive effects of flavonoids and polyphenols but also have negative effects, thereby representing a health risk. The fact that direct evidence of the beneficial effects of flavonoids and polyphenols remains wanting, if not entirely lacking, coupled with the afore-mentioned marketing trend, demands a thorough examination of the possible adverse effects that may arise from increased consumption of flavonoids and polyphenols.
The genesis and progression of cancer is usually accompanied by dysfunctional signalling of certain cell signalling pathways. Typical for colon carcinogenesis is the malfunctioning of the Wnt-signalling pathway, a pathway which is crucial for the growth and development of normal colonocytes. The dysfunction of the Wnt-signalling pathway occurs in a manner that culminates in a proliferation stimulus for colonocytes, while differentiation is increasingly minimized; hence, tumourigenesis is promoted. Interrupting the proliferation stimuli by intervening in the actions of components of the Wnt-signalling pathway is one potential mechanism for the anti-carcinogenic action of flavonoids and polyphenols (Pahlke et al. 2006; Dashwood et al. 2002; Park et al. 2005). However, as previously hinted, the indulgence in the consumption of flavonoid- and polyphenol-based supplements could instead lead to a proliferation stimulus and provoke or promote carcinogenesis in normal or pre-cancerous cells, respectively. The aim of this work was to

Fragmentation of habitats, especially of tropical rainforests, ranks globally among the most pervasive man-made disturbances of ecosystems. There is growing evidence for long-term effects of forest fragmentation and the accompanying creation of artificial edges on ecosystem functioning and forest structure, which are altered in a way that generally transforms these forests into early successional systems. Edge-induced disruption of species interactions can be among the driving mechanisms governing this transformation. These species interactions can be direct (trophic interactions, competition, etc.) or indirect (modification of the resource availability for other organisms). Such indirect interactions are called ecosystem engineering. Leaf-cutting ants of the genus Atta are dominant herbivores and keystone species in the Neotropics and have been called ecosystem engineers. In contrast to other prominent ecosystem engineers that have been substantially decimated by human activities, some species of leaf-cutting ants profit from anthropogenic landscape alterations. Thus, leaf-cutting ants are a highly suitable model to investigate the potentially cascading effects caused by herbivores and ecosystem engineers in modern anthropogenic landscapes following fragmentation. The present thesis aims to describe this interplay between the consequences of forest fragmentation for leaf-cutting ants and the resulting impacts of leaf-cutting ants in fragmented forests. The cumulative thesis starts out with a review of 55 published articles demonstrating that herbivores, especially generalists, profoundly benefit from forest edges, often due to (1) favourable microenvironmental conditions, (2) an edge-induced increase in food quantity/quality, and (3; less well documented) disrupted top-down regulation of herbivores (Wirth, Meyer et al. 2008; Progress in Botany 69:423-448).
Field investigations in the heavily fragmented Atlantic Forest of Northeast Brazil (Coimbra forest) were subsequently carried out to evaluate patterns and hypotheses emerging from this review, using leaf-cutting ants of the genus Atta as a model system. Colony densities of both Atta species occurring in the area changed similarly with distance to the edge, but the magnitude of the effect was species-specific. Colony density of A. cephalotes was low in the forest interior (0.33 ± 1.11 /ha, pooling all zones >50 m into the forest) and sharply increased, by a factor of about 8.5, towards the first 50 m (2.79 ± 3.3 /ha), while A. sexdens was more uniformly distributed (Wirth, Meyer et al. 2007; Journal of Tropical Ecology 23:501-505). The accumulation of Atta colonies persisted at physically stable forest edges over a four-year interval, with no significant difference in densities between years despite high rates of colony turnover (a little less than 50% in 4 years). Stable hyper-abundant populations of leaf-cutting ants accord with the constantly high availability of pioneer plants (their preferred food source), as previously demonstrated at old stabilised forest edges in the region (Meyer et al. submitted; Biotropica). In addition, plants at the forest edge might be more attractive to leaf-cutting ants because of their physiological responses to the edge environment. In bioassays with laboratory colonies I demonstrated that drought-stressed plants are more attractive to leaf-cutting ants because of an increase in leaf nutrient content induced by osmoregulation (Meyer et al. 2006; Functional Ecology 20:973-981). Since plants along forest edges are more prone to experience drought stress, this mechanism might contribute to the high resource availability for leaf-cutting ants at forest edges.
In light of the hyper-abundance of leaf-cutting ants within the forest edge zone (first 50 m), their potentially far-reaching ecological importance in anthropogenic landscapes is apparent. Based on previous colony-level estimates, we extrapolated that herbivory by A. cephalotes removes 36% of the available foliage at forest edges (compared to 6% in the forest interior). In addition, A. cephalotes acted as ecosystem engineers, constructing large nests (on average 55 m2; 95% CI: 22-136) that drastically altered forest structure. The ants opened gaps in the canopy and forest understory at nest sites, which allowed three times as much light to reach the nest surface as compared to the forest understory. This was accompanied by an increase in soil temperatures and a reduction in water availability. Modifications of microclimate and forest structure greatly surpassed previously published estimates. Since higher light levels were detectable up to about 4 m away from the nest edge, an area roughly four times as big as the actual nest (about 200 and 50 m2, respectively) was impacted by every colony, amounting to roughly 6% of the total area at the forest edge (Meyer et al. in preparation; Ecology). The hypothesized impacts of high cutting pressure and microclimatic alterations at nest sites on forest regeneration were directly tested using transplanted seedlings of six species of forest trees. Nests of A. cephalotes differentially impacted survival and growth of seedlings. Survival differed highly significantly between habitats and species and was generally high in the forest, yet low on nests, where it correlated strongly with seed size of the species.
These results indicate that the disturbance regime created by leaf-cutting ants differs from other disturbances, since nest conditions select for plant species that profit from additional light yet are large-seeded and have resprouting abilities, which are best suited to tolerate repeated defoliation on a nest (Meyer et al. in preparation; Journal of Tropical Ecology). On an ecosystem scale, leaf-cutting ants might amplify edge-driven microclimatic alterations through very high rates of herbivory and the maintenance of canopy gaps above frequent nests. By allowing for increased light penetration, Atta may ultimately contribute to dominating, self-replacing pioneer communities at forest edges, possibly creating a positive feedback loop. Based on the persisting hyper-abundance of leaf-cutting ants at old edges of Coimbra forest and the multifarious impacts documented, we conclude that the ecological importance of leaf-cutting ants in pristine forests, where they are commonly believed to be keystone species despite very low colony densities, is greatly surpassed in anthropogenic landscapes. In fragmented forests, Atta has been identified as an essential component of a disturbance regime that causes a post-fragmentation retrogressive succession. Apparently, these forests have reached a new self-replacing secondary state. I suggest additional human interference in the form of thoughtful management in order to break this cycle of self-enhancing disturbance and to enable forest regeneration along the edges of threatened forest remnants. Thereby the situation of the forest as a whole can be ameliorated and the chances for a long-term retention of biodiversity in these landscapes increased.

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
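A minimal sketch of such a Poisson-cluster model of rectangular cells follows. All parameter values are illustrative, and the lognormal draws only mirror the distributional assumption described above, not the thesis's multivariate specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rainfall(T=240.0, storm_rate=0.05, cells_per_storm=5, dt=1.0):
    """Toy cluster model: storms arrive as a Poisson process; each storm
    spawns rectangular cells whose delay, duration and depth are lognormal.
    Returns the intensity process sampled on a regular grid."""
    t = np.arange(0.0, T, dt)
    intensity = np.zeros_like(t)
    n_storms = rng.poisson(storm_rate * T)
    storm_times = rng.uniform(0.0, T, size=n_storms)
    for s in storm_times:
        for _ in range(rng.poisson(cells_per_storm)):
            delay = rng.lognormal(mean=1.0, sigma=0.5)
            duration = rng.lognormal(mean=1.0, sigma=0.5)
            depth = rng.lognormal(mean=0.0, sigma=0.5)
            start, end = s + delay, s + delay + duration
            intensity[(t >= start) & (t < end)] += depth
    return t, intensity

# Observations are rainfall amounts aggregated over fixed intervals,
# here daily totals from an hourly grid.
t, inten = simulate_rainfall()
obs = inten.reshape(-1, 24).sum(axis=1)
```

An MCMC fit would treat the cell configuration behind `inten` as latent and sample it jointly with the model parameters, as the abstract describes.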

In this work we study the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP), and the point center location or point location problem (PLP) in the Rectilinear and Chebyshev planes as well as in networks. The relations between these problems have served as a basis for finding elegant solution algorithms for both new and well-known problems. MWAP was formulated and investigated in the Rectilinear plane. In contrast to the Euclidean metric, MWAP and PLP have at least one common optimal point. Therefore, MWAP in the Rectilinear plane was solved in linear time with the help of PLP; hence the solution sequence was PLP-->MWAP. It was shown that MWAP and CLP are equivalent, so CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of circle, sphere and annulus in networks were introduced. It should be noted that the notion of a circle in a network is different from that of a cycle. An O(mn) time algorithm for the solution of MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn+n^2 log n), in the unweighted case, of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to MWAP. MWAP in unweighted undirected networks was extended to MWAP on subsets and to the restricted MWAP; the resulting problems were analysed and solved. The p-minimum width annulus problem was also formulated and explored. This problem is NP-hard; however, p-MWAP has been solved in polynomial O(m^2 n^3 p) time under the natural assumption that each minimum width annulus covers all vertices of the network whose distances to the central point of the annulus are less than or equal to the radius of its outer circle.
In contrast to the planar case, MWAP in undirected unweighted networks turns out to be the root problem among the considered problems. The investigation of the properties of circles in networks showed that the difference between planar and network circles is significant; this leads to the non-equivalence of CLP and MWAP in the general case. However, MWAP was effectively used in solution procedures for CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2 n^2). It is important to mention that CLP in networks is formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2 log n) algorithm for the well-known PLP. The complexity of this algorithm is not worse than that of the currently best algorithms, but the concept of the solution procedure is new: we use MWAP in order to solve PLP, building the solution sequence MWAP-->PLP, opposite to the planar case. This method has the following advantages. First, the lower bounds obtained in the solution procedure are proved to be in any case better than the strongest of Halpern's lower bounds. Second, the developed algorithm is so simple that it can easily be applied to complex networks manually. Third, the empirical complexity of the algorithm is O(mn). MWAP was extended to and explored in directed unweighted and weighted networks. The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in the network, because the problems can be solved in the order PLP-->MWAP. In the weighted case, the computational time is of order O(mn^2).
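The edge-midpoint property can be illustrated with a brute-force sketch on a small unweighted network. This is a toy enumeration for intuition, not the thesis's O(mn) algorithm: on a 4-cycle, every vertex center yields width 2, while an edge midpoint achieves width 1.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (unit edge lengths)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def min_width_annulus(adj):
    """Brute-force MWAP: try every vertex and every edge midpoint as the
    annulus center; width = outer radius - inner radius over all vertices."""
    verts = list(adj)
    d = {v: bfs_dist(adj, v) for v in verts}
    best_c, best_w = None, float("inf")
    for c in verts:                      # vertex candidates
        radii = [d[c][t] for t in verts]
        if max(radii) - min(radii) < best_w:
            best_c, best_w = c, max(radii) - min(radii)
    for u in verts:                      # edge-midpoint candidates
        for v in adj[u]:
            if u < v:
                radii = [min(d[u][t], d[v][t]) + 0.5 for t in verts]
                if max(radii) - min(radii) < best_w:
                    best_c, best_w = (u, v, 0.5), max(radii) - min(radii)
    return best_c, best_w
```

On the 4-cycle `{0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}` the optimum is an edge midpoint with width 1.0, illustrating why midpoints are the local optima on edges.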

In many medical, financial, industrial, etc. applications of statistics, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modeled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results of Franke and Neumann (2000). We use a neural-network-based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results of Gombay and Horvath (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of a one-dimensional parameter. Through simulation, we show that the sample size, the change point location and the size of the change influence change point detection. In this work, the maximum likelihood estimation method is used to estimate a change point once it has been detected; simulations show that change point estimation is influenced by the same factors. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio and the percentile bootstrap method. Through simulation, the percentile bootstrap method is shown to be superior to the profile log-likelihood ratio method.
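The likelihood-ratio scan underlying such a test can be sketched for the plain Bernoulli case without covariates; the thesis itself works with neural network regression functions, which this toy deliberately omits:

```python
import numpy as np

def bernoulli_loglik(y):
    """Maximized Bernoulli log-likelihood of a segment (0 if degenerate)."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return y.size * (p * np.log(p) + (1 - p) * np.log(1 - p))

def lr_change_point(y):
    """Scan all interior split points and return the one maximizing the
    log-likelihood ratio against the no-change model."""
    base = bernoulli_loglik(y)
    best_k, best_lr = None, -np.inf
    for k in range(1, y.size):
        lr = bernoulli_loglik(y[:k]) + bernoulli_loglik(y[k:]) - base
        if lr > best_lr:
            best_k, best_lr = k, lr
    return best_k, best_lr
```

For a sequence whose success probability jumps from 0.1 to 0.9 halfway through, the scan localizes the change near the true split and yields a strongly positive log-likelihood ratio.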

Within this thesis, a series of molecular species has been studied, with focus on hydrogen-bonded species and on (solvated) transition metal complexes. Experimental techniques such as FT-ICR-MS and IRMPD were combined with ab initio calculations for the determination of the structure and reactivity of the aforementioned types of systems. On the basis of high-level electronic structure calculations of neutral water clusters (H2O)n with n = 17-21, a transitional size regime has been determined where the structural stabilization alternates between all-surface and interior configurations with the addition or removal of a single water molecule. Electronic structure calculations suggested that for n = 17 and 19 the interior configuration would be energetically more stable than the all-surface one. The gas phase infrared spectrum of the singly hydrated ammonium ion, NH4+(H2O), had previously been recorded by photodissociation spectroscopy of mass-selected ions and interpreted by means of ab initio calculations. The present work provides additional information on the shape of the potential energy curves of NH4+(H2O) along the N-H distance at the MP2/aug-cc-pVDZ level of theory, yielding an anharmonic potential shape. Calculation of the potential energy curves of the O-H mode of the intramolecular hydrogen bond of various dicarboxylic acids (oxalic to adipic acid) revealed that the shapes of the potentials correlate directly with the size of the system and the resulting ring strain. The shape of the potential is also influenced by the charge of the system. Calculation of anharmonic frequencies based on the VPT2 approach leads to reasonable results in all systems with narrow potentials. IRMPD spectra of complexes in the gas phase have been recorded for a series of cationic vanadium oxide complexes reacted with acetonitrile, methanol and ethanol. The experimental spectra are compared to calculated absorption spectra.
The systematic DFT study identifies potential candidates for reductive nitrile coupling in cationic transition metal acetonitrile complexes. On the basis of the calculations, the formation of metallacyclic structures in group 3 through 7 complexes can be ruled out. Solvation of the transition metal cation by five acetonitrile ligands leads to a reductive nitrile coupling reaction in three types of complexes, namely those containing either niobium, tantalum or tungsten.

Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene is found to influence the adsorption capacity for thiophene and is more predominant when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acid zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is to be expected, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+, Ni2+, Ce3+, La3+ and Y3+ ions into zeolite Na+-Y by aqueous ion exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+ ions, the presence of Cu+ ions at the sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for the case of Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access for bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite.
IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bond (of thiophene) and metal ions (in the zeolite framework). A different mode of interaction of thiophene with Ce3+, La3+ and Y3+ metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y, Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very low in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly by a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents. Acid zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion, and regeneration by heating under an inert gas flow alone is not possible. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx.
The exothermic peaks appearing at different temperatures in the heat flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability of the existing catalytic hydrodesulphurization technique to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene. In the present study, Cu+-Y and Ce3+-Y were found to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification method downstream of the catalytic hydrodesulphurization unit.

The main motivation of this contribution is to introduce a computational laboratory for analysing defects and fractures at the sub-micro scale. To this end, we present a continuum-atomistic multiscale algorithm for the analysis of crystalline deformation, i.e. we combine the Cauchy-Born rule within a finite element approximation (FEM) on the continuum region with a molecular dynamics (MD) resolution on the atomistic domain. The aim is twofold: on the one hand, the stability, i.e. the validity of the Cauchy-Born rule and its transition to non-affine deformation at the micron scale, is studied with the help of a molecular dynamics approach to capture fine-scale features; on the other hand, a horizontal FEM/MD (continuum-atomistic) coupling is envisaged in order to study representative cases of crystalline defects. To cope with the latter, we introduce a horizontal coupling method for continuum-atomistic analysis.

The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these schemes, the receiver has to extract the corresponding clock from the serial data stream by a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, data communication transceivers always work in a plesiochronous condition, i.e. an operation with slightly different reference frequencies. The difference in data rates is covered by an elastic buffer. In a data readout system for an experiment in particle physics, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the networks. The plesiochronous condition in these networks is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication networks can support the clock distribution function, the system complexity can be largely reduced. The CDRs on all detector nodes have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It possesses a two-loop topology, consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible.
Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. The 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock-jitter-filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage-controlled oscillator (VCO). The loop bandwidth of the clock jitter filter is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock jitter filter with LC-VCO were implemented in a 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm2 and consume 170 mW from a 1.8 V supply. The CDR can cover data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz. Its jitter tolerance complies with the SONET standard. The clock jitter filter has configurable input/output frequencies from 9.191 to 78.125 MHz. Its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock jitter filter can generate a clock with jitter of 4.2 ps rms from an incoming serial data stream with inter-symbol-interference jitter of 150 ps peak-to-peak.

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of Call and Put options on the stock and/or the zero coupon bond issued by a firm with default risk, using the martingale approach for dynamic optimisation problems. Our findings show that the riskier the option gets, the smaller the proportion of his wealth the investor allocates to the risky asset. Further, we analyse Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign for building the term structure of the sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies based on historical experience. Our results highlight the market price of the sovereign credit risk depending on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, by transforming the related Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. Simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.
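For intuition, the simplest reduced-form relation between a CDS quote and a risk-neutral PD is the flat-hazard "credit triangle". The quote and recovery value below are hypothetical, and the thesis's bootstrap over a full term structure of quotes is considerably more elaborate than this sketch:

```python
import math

def implied_hazard(spread_bps, recovery=0.4):
    """Credit-triangle approximation: spread = hazard * (1 - recovery)."""
    return (spread_bps / 1e4) / (1.0 - recovery)

def default_prob(spread_bps, t, recovery=0.4):
    """Risk-neutral PD over a horizon of t years under a flat hazard rate."""
    lam = implied_hazard(spread_bps, recovery)
    return 1.0 - math.exp(-lam * t)

# Hypothetical quote: 5-year CDS at 300 bps with a 40% recovery assumption.
pd_5y = default_prob(300, 5.0)
```

Here a 300 bps spread with 40% recovery implies a flat hazard rate of 5% per year and a 5-year risk-neutral PD of roughly 22%; a full bootstrap would instead fit piecewise-constant hazards to quotes at several maturities.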

The present thesis is concerned with the simulation of the loading behaviour of hybrid lightweight structures and piezoelectric mesostructures, with a special focus on solid interfaces on the meso scale. Furthermore, an analytical review of bifurcation modes of continuum-interface problems is included. The inelastic interface behaviour is characterised by elastoplastic, viscous, damaging and fatigue-motivated models. The related numerical computations use the Finite Element Method, in which so-called interface elements play an important role. The simulation results are illustrated by numerous examples, some of which are correlated with experimental data.
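Inelastic interface behaviour of the kind described above is commonly characterised by a traction-separation law. The bilinear law below is a standard textbook example with made-up parameters, not necessarily the model used in the thesis.

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=10.0):
    """Bilinear cohesive traction-separation law: linear elastic loading
    up to the separation delta0, then linear softening (damage) down to
    zero traction at the failure separation delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                          # elastic branch
    if delta <= delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                                                 # fully damaged

# traction peaks at delta0 and vanishes at the failure separation
peak = bilinear_traction(0.01)
```

In an interface-element implementation, this scalar law (or a vector-valued analogue) supplies the constitutive response evaluated at each interface integration point.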

In the present contribution, a general framework for the completely consistent integration of nonlinear dissipative dynamics is proposed that essentially relies on Finite Element methods in both space and time. In this context, fully flexible structures as well as hybrid systems consisting of rigid bodies and inelastic flexible parts are considered. Special emphasis is placed on the resulting algorithmic fulfilment of the fundamental balance equations, and the excellent performance of the presented concepts is demonstrated by several representative numerical examples, involving in particular finite elasto-plastic deformations.
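The idea of algorithmic fulfilment of balance equations can be seen in miniature: for a linear oscillator, the implicit midpoint rule (the lowest-order member of the time finite element family) reproduces the energy balance exactly, up to round-off. This toy sketch is our illustration, not the thesis's formulation.

```python
def midpoint_step(q, p, dt, omega=1.0):
    """One implicit-midpoint step for q' = p, p' = -omega^2 * q,
    solved in closed form (the system is linear, so no iteration)."""
    a = 0.5 * dt
    b = 0.5 * dt * omega ** 2
    p_new = (p * (1.0 - a * b) - 2.0 * b * q) / (1.0 + a * b)
    q_new = q + a * (p + p_new)
    return q_new, p_new

def energy(q, p, omega=1.0):
    """Total mechanical energy of the oscillator."""
    return 0.5 * (p ** 2 + omega ** 2 * q ** 2)

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(1000):
    q, p = midpoint_step(q, p, 0.05)
# the energy after 1000 steps matches e0 to machine precision
```

Standard explicit or dissipative integrators would drift from e0; schemes designed for algorithmic consistency preserve such invariants by construction.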

Layout analysis--the division of page images into text blocks and lines and the determination of their reading order--is a major performance-limiting step in large-scale document digitization projects. This thesis addresses the problem in several ways: it presents new performance measures to identify important classes of layout errors, evaluates the performance of state-of-the-art layout analysis algorithms, presents a number of methods to reduce the error rate and the catastrophic failures occurring during layout analysis, and develops a statistically motivated, trainable layout analysis system that addresses the needs of large-scale document analysis applications. The key contributions of this thesis are as follows. First, it presents an efficient local adaptive thresholding algorithm that yields the same binarization quality as state-of-the-art local binarization methods but runs in time close to that of global thresholding methods, independent of the local window size. Tests on the UW-1 dataset demonstrate a 20-fold speedup over traditional local thresholding techniques. Second, the thesis presents a new perspective on document image cleanup: instead of trying to explicitly detect and remove marginal noise, the approach locates the page frame, i.e. the actual page contents area. A geometric matching algorithm is presented to extract the page frame of a structured document. Incorporating a page frame detection step into the document processing chain reduces OCR error rates from 4.3% to 1.7% (n=4,831,618 characters) on the UW-III dataset and layout-based retrieval error rates from 7.5% to 5.3% (n=815 documents) on the MARG dataset. The performance of six widely used page segmentation algorithms (x-y cut, smearing, whitespace analysis, constrained text-line finding, docstrum, and Voronoi) on the UW-III database is evaluated using a state-of-the-art evaluation methodology.
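Window-size independence in local thresholding typically comes from integral images: each local window sum costs four array lookups regardless of the window size w. The sketch below is a generic local-mean binarizer in that spirit, not the thesis's exact algorithm; the threshold factor k is arbitrary.

```python
import numpy as np

def local_mean_threshold(img, w=15, k=0.9):
    """Binarize img against the local mean over an odd window size w.
    The integral image makes the cost per pixel independent of w."""
    pad = w // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    ii = padded.cumsum(axis=0).cumsum(axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))   # zero row/column so window sums need 4 lookups
    s = ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]
    mean = s / (w * w)                   # local mean centered on each pixel
    return (img > k * mean).astype(np.uint8)
```

On a uniform image every pixel exceeds 0.9 times its local mean, so everything is marked foreground; a document binarizer would invert the comparison for dark ink on bright paper.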
It is shown that current evaluation scores are insufficient for diagnosing specific errors in page segmentation and fail to identify some classes of serious segmentation errors altogether. Therefore, a vectorial score is introduced that is sensitive to, and identifies, the most important classes of segmentation errors (over-, under-, and mis-segmentation) and which page components (lines, blocks, etc.) are affected. Unlike previous schemes, this evaluation method has a canonical representation of the ground-truth data and guarantees pixel-accurate evaluation results for arbitrary region shapes. Based on a detailed analysis of the errors made by different page segmentation algorithms, the thesis presents a novel combination of the line-based approach by Breuel with the area-based approach of Baird which solves the over-segmentation problem of area-based approaches. The new approach achieves a mean text-line extraction error rate of 4.4% (n=878 documents) on the UW-III dataset, the lowest among the analyzed algorithms. The thesis also describes a simple, fast, and accurate system for document image zone classification, resulting from a detailed comparative analysis of the performance of features widely used in document analysis and content-based image retrieval. Using a novel combination of known algorithms, an error rate of 1.46% (n=13,811 zones) is achieved on the UW-III dataset, compared to a state-of-the-art system that reports an error rate of 1.55% (n=24,177 zones) using more complicated techniques. In addition to the layout analysis of Roman-script documents, this work also presents the first high-performance layout analysis method for Urdu script, based on a geometric text-line model for Urdu. It is shown that the method accurately extracts Urdu text-lines from documents with layouts as different as prose books, poetry books, magazines, and newspapers.
Finally, this thesis presents a novel algorithm for probabilistic layout analysis that specifically addresses the needs of large-scale digitization projects. The approach models known page layouts as a structural mixture model. A probabilistic matching algorithm is presented that yields multiple interpretations of the input layout with associated probabilities, and an algorithm based on A* search finds the most likely layout of a page given its structural layout model. For training the layout models, an EM-like algorithm is presented that learns the geometric variability of layout structures from data, without requiring a page segmentation ground truth. Evaluation on documents from the MARG dataset shows an accuracy above 95% for geometric layout analysis.
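The A*-search step can be pictured on a toy problem: when each choice carries a cost (think negative log-probability), A* with an admissible heuristic returns the cheapest, i.e. most likely, solution. The grid below is only an analogy for the layout search; the thesis's state space is over layout interpretations, not grid cells.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid of per-cell step costs (all >= 1),
    with the admissible Manhattan-distance heuristic. Returns the
    cost of the cheapest path from start to goal."""
    H, W = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best.get(node, float('inf')):
            continue                      # stale heap entry
        y, x = node
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < H and 0 <= nx < W:
                ng = g + grid[ny][nx]     # cost of entering the neighbor
                if ng < best.get((ny, nx), float('inf')):
                    best[(ny, nx)] = ng
                    heapq.heappush(open_heap, (ng + h((ny, nx)), ng, (ny, nx)))
    return None

# the search routes around the expensive center cell
cost = astar([[1, 1, 1], [1, 9, 1], [1, 1, 1]], (0, 0), (2, 2))
```

The heuristic never overestimates the remaining cost (every step costs at least 1), which is what guarantees that the first goal expansion is optimal.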

In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for the fibre orientation dynamics are derived for two classes of suspensions: for concentrated suspensions of rigid fibres, the Folgar-Tucker model is generalized by incorporating the excluded-volume effect; for dilute semi-flexible fibre suspensions, a novel moments-based description of the fibre orientation state is introduced, and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first-order hyperbolic system for the fibre orientation variables, is analyzed for rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution is demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite-volume-based discretization of the suspension flow is given, and numerical results for several two- and three-dimensional domains with different parameter values are presented and discussed.
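The classical Folgar-Tucker model that the thesis generalizes evolves the second-order orientation tensor A under the velocity gradient; with the quadratic closure A₄:D ≈ A(A:D) it fits in a few lines. The parameter values and the simple-shear flow below are arbitrary illustrations.

```python
import numpy as np

def folgar_tucker_step(A, L, C_I=0.01, lam=1.0, dt=1e-3):
    """One explicit Euler step of the Folgar-Tucker equation for the
    second-order orientation tensor A, using the quadratic closure
    A4:D ~= A (A:D). L is the velocity gradient tensor."""
    D = 0.5 * (L + L.T)                       # rate-of-deformation tensor
    W = 0.5 * (L - L.T)                       # vorticity tensor
    gamma = np.sqrt(2.0 * np.sum(D * D))      # scalar shear rate
    AdD = np.sum(A * D)                       # A : D
    dA = (W @ A - A @ W
          + lam * (D @ A + A @ D - 2.0 * A * AdD)
          + 2.0 * C_I * gamma * (np.eye(3) / 3.0 - A))
    return A + dt * dA

# isotropic initial orientation in simple shear, u = (y, 0, 0)
A = np.eye(3) / 3.0
L = np.zeros((3, 3))
L[0, 1] = 1.0
for _ in range(2000):
    A = folgar_tucker_step(A, L)
```

The evolution preserves tr A = 1 and symmetry, and in simple shear the A₁₁ component grows above 1/3 as fibres align with the flow direction.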

The main goal of this work is to model size effects, as they occur in materials with an intrinsic microstructure when the considered specimens are not orders of magnitude larger than this microstructure. The micromorphic continuum theory, a generalized continuum theory, is well suited to account for the occurring size effects: additional degrees of freedom capture the independent deformations of the microstructure, and they come with additional balance equations. In this thesis, the deformational and configurational mechanics of the micromorphic continuum is exploited in a finite-deformation setting. A constitutive and numerical framework is developed in which the material-force method is also advanced. Furthermore, the multiscale modelling of thin material layers with a heterogeneous substructure is of interest. To this end, a computational homogenization framework is developed which allows the constitutive relation between traction and separation to be obtained numerically, in a nested solution scheme, from the properties of the underlying micromorphic mesostructure. Within the context of micromorphic continuum mechanics, concepts of both gradient and micromorphic plasticity are developed by systematically varying key ingredients of the respective formulations.

This dissertation deals with the optimization of web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes are optimized: those related to the quality of the fabric and those describing the stability of the production process. The problem thus falls into the multicriteria optimization and decision-making framework. The functions involved in the process model are nonlinear, nonconvex and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to approximate the Pareto frontier numerically, and alternative methods are proposed to navigate this set and support the decision-making process. The proposed strategy is applied to a particular production process, and numerical results are presented.
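The object being approximated can be made concrete with a non-dominance filter: among a finite set of candidate objective vectors, the Pareto frontier keeps exactly those points that no other point improves in both criteria. This filter (minimization in both objectives) is a generic building block one might use in the exploration step, not the thesis's method, and the sample values are invented.

```python
def pareto_filter(candidates):
    """Return the non-dominated points, minimizing both objectives.
    A point is dominated if some distinct point is <= in both criteria."""
    return [c for c in candidates
            if not any(d[0] <= c[0] and d[1] <= c[1] and d != c
                       for d in candidates)]

# a toy quality-vs-stability trade-off: (3, 4) is dominated by (2, 3)
front = pareto_filter([(1, 5), (2, 3), (3, 4), (4, 1)])
```

Decision making then amounts to navigating the surviving trade-off points rather than the full candidate set.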

This thesis deals with stochastic optimization problems in various situations with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing the expected utility of real terminal wealth in the presence of an index bond. Chapter 4, a modification of a research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it shows how to deal with an optimization problem in the presence of a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete; a technique for completing an incomplete market is presented there. The general theory supporting the discussion is summarized in the first chapter.
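The baseline these chapters build on is the classical Merton problem: under log utility with constant coefficients, the optimal constant risky fraction is (μ − r)/σ², the maximizer of the expected log growth rate. The sketch below verifies the closed form by grid search; it is our illustration of the standard result, not a computation from the thesis.

```python
def merton_fraction(mu, r, sigma):
    """Optimal risky fraction under log utility (Merton's closed form)."""
    return (mu - r) / sigma ** 2

def log_growth(pi, mu, r, sigma):
    """Expected log growth rate of wealth for a constant risky fraction pi."""
    return r + pi * (mu - r) - 0.5 * pi ** 2 * sigma ** 2

mu, r, sigma = 0.08, 0.02, 0.2
grid = [i / 100.0 for i in range(301)]
best = max(grid, key=lambda pi: log_growth(pi, mu, r, sigma))
# best is 1.5, matching merton_fraction(mu, r, sigma) up to rounding
```

The martingale method recovers the same answer by characterizing optimal terminal wealth directly and reading off the replicating strategy, which is what lets the thesis handle endowments, labor income and incomplete markets.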

Computer-based simulation and visualization of the acoustics of a virtual scene can aid the design of concert halls, lecture rooms, theaters, or living rooms, since not only the visual appearance of a room is important but also its acoustics. On factory floors, noise reduction is important because noise is hazardous to health. Despite the obvious dissimilarity between our aural and visual senses, many techniques required for the visualization of photo-realistic images and for the auralization of acoustic environments are quite similar: both applications can be served by geometric methods such as particle and ray tracing if a number of less important effects are neglected. By simulating room acoustics we want to predict the acoustic properties of a virtual model. For auralization, a pulse response filter needs to be assembled for each pair of source and listener positions; the convolution of this filter with an anechoic source signal yields the signal received at the listener position. The pulse response filter must therefore contain all reverberations (echoes) of a unit pulse, including their frequency decompositions due to absorption at different surface materials. For the room acoustic simulation a particle-based method named phonon tracing is developed. The approach computes the energy or pressure decomposition for each particle (phonon) sent out from a sound source and uses this in a second pass (phonon collection) to construct the response filters for different listeners; this step can be performed at different precision levels. During the tracing step, particle paths and additional information are stored in a so-called phonon map. Using this map, several sound visualization approaches were developed. From the visualization, the effect of different materials on the spectral energy/pressure distribution can be observed; the first few reflections already show whether certain frequency bands are rapidly absorbed.
The absorbing materials can then be identified and replaced in the virtual model, improving the overall acoustic quality of the simulated room. Furthermore, insight into the pressure/energy received at the listener position is possible. The phonon tracing algorithm as well as several sound visualization approaches are integrated into a common system utilizing Virtual Reality technologies in order to facilitate immersion into the virtual scene. The system is a prototype developed within a project at the University of Kaiserslautern and is still subject to further improvement. It consists of a stereoscopic back-projection system for visual rendering as well as professional audio equipment for auralization purposes.
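The auralization step described above reduces, for discrete signals, to a single convolution. The two-tap response below (direct sound plus one half-strength echo one sample later) is an invented toy filter standing in for a real pulse response.

```python
import numpy as np

def auralize(anechoic, pulse_response):
    """Signal at the listener: the anechoic source signal convolved
    with the pulse response filter for this source/listener pair."""
    return np.convolve(anechoic, pulse_response)

# a unit impulse through "direct sound + half-strength echo"
out = auralize(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.5]))
# out: [1.0, 0.5, 0.0, 0.0]
```

Real response filters are far longer and frequency-dependent per reflection, but the listener signal is assembled the same way.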

The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches to solve the resulting instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.

A modular level set algorithm is developed to study the interface and its movement in free moving boundary problems. The algorithm is divided into three basic modules: initialization, propagation and contouring. Initialization is the process of finding the signed distance function from closed objects. We discuss a methodology for computing an accurate signed distance function from a closed, simply connected surface discretized by a triangulation. The signed distance function is computed by the direct method and stored efficiently in the neighborhood of the interface by a narrow band level set method. A novel approach is employed to determine the correct sign of the distance function at convex-concave junctions of the surface. The accuracy and convergence of the method with respect to the surface resolution are studied, and it is shown that the efficient organization of the surface and narrow band data structures enables the solution of large industrial problems. We also compare the accuracy of the signed distance function from the direct approach with the Fast Marching Method (FMM) and find the direct approach to be more accurate. Contouring is performed through a variant of the marching cubes algorithm used for isosurface construction from volumetric data sets. The algorithm is designed to keep foreground and background information consistent, contrary to the neutrality principle followed for surface rendering in computer graphics, and it ensures that the isosurface triangulation is closed, non-degenerate and non-ambiguous. The constructed triangulation has the properties required for the generation of good volume meshes, which are used in the boundary element method for the study of linear electrostatics. For accurately estimating surface properties like interface position, normal and curvature from a discrete level set function, a method based on higher-order weighted least squares is developed.
The least-squares approach is found to be more accurate than finite difference approximations, and it requires a more compact stencil. The accuracy and convergence of the method depend on the surface resolution and the discrete mesh width. This approach is used in the propagation module for the study of mean curvature flow and bubble dynamics; its advantage is that the curvature is not discretized explicitly on the grid but estimated on the interface. The method of constant velocity extension is employed for the propagation of the interface. With the least-squares approach, mean curvature flow shows a considerable reduction in mass loss compared to finite difference techniques. In the bubble dynamics application, the modules are used to study a bubble under the influence of surface tension forces in order to validate the Young-Laplace law. The order of the curvature estimation is found to play a crucial role in calculating an accurate pressure difference between the inside and outside of the bubble. Further, we study the coalescence of two bubbles under surface tension. The application of these modules to various industrial problems is discussed.
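The role of curvature estimation can be seen on the simplest test case: for the signed distance function of a circle of radius R, the curvature κ = div(∇φ/|∇φ|) equals 1/R on the interface. The sketch below uses plain central differences, i.e. the finite-difference baseline that the least-squares estimator is compared against; grid size and radius are arbitrary.

```python
import numpy as np

# signed distance to a circle of radius R, sampled on a uniform grid
n, R = 201, 0.5
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
phi = np.sqrt(X ** 2 + Y ** 2) - R
h = xs[1] - xs[0]

# curvature = divergence of the normalized gradient, central differences
gx, gy = np.gradient(phi, h)
norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12     # avoid division by zero
kappa = (np.gradient(gx / norm, h, axis=0)
         + np.gradient(gy / norm, h, axis=1))

# at the interface point (R, 0) the estimate should be close to 1/R = 2
kappa_interface = kappa[150, 100]              # grid point (0.5, 0.0)
```

For the Young-Laplace check, the pressure jump across the interface is Δp = σκ (2D) or 2σ/R for a sphere, so any bias in κ shows up directly in the computed pressure difference.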

The desire to model geometrical and physical features in ever increasing detail has led to a steady increase in the number of points used in field solvers. While many solvers have been ported to parallel machines, grid generators have been left behind. Sequential generation of large meshes is extremely problematic in terms of both time and memory requirements, so the need to develop parallel mesh generation techniques is well justified. In this work a novel algorithm is presented for the automatic parallel generation of tetrahedral computational meshes based on geometrical domain decomposition; it has the potential to remove this bottleneck. Different domain decomposition approaches and criteria have been investigated, considering time and memory consumption, efficiency of the computations, and the quality of the generated surface and volume meshes. As a result of this work, the parTgen (partitioner and parallel tetrahedral mesh generator) software package, based on the developed algorithm, has been created. Several real-life examples of relatively complex structures involving large meshes (of the order of 10^7-10^8 elements) are given. It is shown that high mesh quality is achieved, memory and time consumption are reduced significantly, and the parallel algorithm is efficient.
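Geometrical domain decomposition of the kind parTgen builds on can be caricatured by recursive coordinate bisection: split the point set along alternating axes until each subdomain is small enough to mesh independently (and in parallel). This 2D toy is our sketch, not parTgen's actual decomposition.

```python
def bisect_points(points, depth=0, max_pts=4):
    """Recursive coordinate bisection of 2D points: alternate the split
    axis with depth and split at the median until every subdomain
    holds at most max_pts points."""
    if len(points) <= max_pts:
        return [points]
    axis = depth % 2                              # alternate x / y splits
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                           # median split keeps load balanced
    return (bisect_points(pts[:mid], depth + 1, max_pts)
            + bisect_points(pts[mid:], depth + 1, max_pts))

# a 4 x 4 grid of points decomposes into four balanced subdomains
parts = bisect_points([(x, y) for x in range(4) for y in range(4)])
```

In a real mesh generator each subdomain is then meshed concurrently, with the interface surfaces between subdomains meshed consistently so the pieces can be merged.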