### Refine

#### Year of publication

- 2008 (33)

#### Document Type

- Doctoral Thesis (33)

#### Language

- English (33)

#### Keywords

- Finite-Elemente-Methode (3)
- Computergraphik (2)
- Level-Set-Methode (2)
- Raumakustik (2)
- Room acoustics (2)
- Visualisierung (2)
- computer graphics (2)
- domain decomposition (2)
- mesh generation (2)
- virtual acoustics (2)

Nanotechnology is now recognized as one of the most promising areas for technological
development in the 21st century. In materials research, the development of
polymer nanocomposites is rapidly emerging as a multidisciplinary research activity
whose results could widen the applications of polymers to the benefit of many different
industries. Nanocomposites are a new class of composites that are particle-filled
polymers for which at least one dimension of the dispersed particle is in the nanometer
range. In this area, polymer/clay nanocomposites have attracted considerable interest because they often exhibit remarkable property improvements compared to the virgin polymer or to conventional micro- and macrocomposites.
The present work addresses the toughening and reinforcement of thermoplastics via
a novel method which allows us to achieve micro- and nanocomposites. In this work
two matrices are used: amorphous polystyrene (PS) and semi-crystalline polyoxymethylene
(POM). Polyurethane (PU) was selected as the toughening agent for POM
and used in its latex form. It is noteworthy that the mean size of rubber latices closely matches that of conventional toughening agents (impact modifiers).
Boehmite alumina and sodium fluorohectorite (FH) were used as reinforcements.
One of the criteria for selecting these fillers was that they are water-swellable/dispersible, so that their nanoscale dispersion can also be achieved in an aqueous polymer latex. A systematic study was performed on how to adapt discontinuous and continuous manufacturing techniques for the related nanocomposites.
The dispersion of the nanofillers was characterized by transmission electron, scanning electron and atomic force microscopy (TEM, SEM and AFM, respectively) and by X-ray diffraction (XRD), and the results are discussed. The crystallization of POM was studied by means
of differential scanning calorimetry and polarized light optical microscopy (DSC and
PLM, respectively). The mechanical and thermomechanical properties of the composites were determined by uniaxial tensile tests, dynamic-mechanical thermal analysis (DMTA), short-time creep tests, and thermogravimetric analysis (TGA).
PS composites were produced first by a discontinuous manufacturing technique,
whereby FH or alumina was incorporated in the PS matrix by melt blending with and
without latex precompounding of PS latex with the nanofiller. It was found that direct melt mixing (DM) of the nanofillers with PS resulted in microcomposites, whereas the latex-mediated pre-compounding (masterbatch technique, MB) yielded nanocomposites. FH was
not intercalated by PS when prepared by DM. On the other hand, FH was well dispersed
(mostly intercalated) in PS via the PS latex-mediated predispersion of FH following
the MB route. The nanocomposites produced by MB outperformed the DM compounded microcomposites with respect to properties like stiffness, strength and ductility, based on dynamic-mechanical and static tensile tests. It was found that the resistance to creep (summarized in master curves) of the nanocomposites was improved compared to that of the microcomposites. Master curves (creep compliance
vs. time), constructed based on isothermal creep tests performed at different temperatures,
showed that the nanofiller reinforcement affects mostly the initial creep
compliance.
Next, ternary composites composed of POM, PU and boehmite alumina were produced
by melt blending with and without latex precompounding. Latex precompounding
served for the predispersion of the alumina particles. The related MB was produced
by mixing the PU latex with water dispersible boehmite alumina. The composites
produced by the MB technique outperformed the DM compounded composites with respect to most of the thermal and mechanical characteristics.
Toughened and/or reinforced PS- and POM-based composites have been successfully
produced by a continuous extrusion technique, too. This technique resulted in
good dispersion of both the nanofiller (boehmite) and the impact modifier (PU). Compared to the microcomposites obtained by conventional DM, the nanofiller dispersion became finer and more uniform when the water-mediated predispersion was used. The resulting
structure markedly affected the mechanical properties (stiffness and creep resistance)
of the corresponding composites. The impact resistance of POM was highly
enhanced by the addition of PU rubber when the composites were manufactured by the continuous extrusion technique. This was traced to the dispersed PU particle size being in the range required for conventional impact modifiers.

Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and the robustness needed to maintain signal integrity. Field-programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. Research on evolutionary concepts has focused predominantly on the digital domain and is still at an early stage in the analog domain. Thus, the main research goal is to combine the ever-increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays, implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies, to yield an intelligent and smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work proposes and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable for supporting a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and common industrial application scenarios and circuits, in particular including resource and sizing analysis to properly motivate design decisions. The pursued intermediate-granularity approach, called Field Programmable Medium-granular mixed-signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities.
The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and merges them with reconfigurable or programmable analog concepts and with industrial electronics standards and needs for next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in the 0.35 µm (C35B4) Austriamicrosystems technology, can be used as single-instance, off-the-shelf chips at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely the PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. Low-power design issues, e.g., sub-threshold design, were investigated more extensively, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine-granularity approaches were alleviated by the proposed intermediate-granularity approach. The sensor signal conditioning instrumentation amplifier designed in this way was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250. In an adaptive prototype, evolutionary approaches, in particular based on particle swarm optimization with multiple objectives, were deployed to all test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift.
The variations observed in the performance of the test samples were compensated through reconfiguration to meet the desired specification.
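
The particle swarm optimization used for such self-trimming can be illustrated in simplified, single-objective form. The cost function below (deviation of a configurable amplifier's gain from a 40 dB target, with a toy linear gain model) is a hypothetical stand-in for the measured chip response, not the actual FPMA interface:

```python
import random

def cost(cfg):
    # Hypothetical stand-in for a measured chip response:
    # deviation of the configured gain from a 40 dB target.
    gain = 20.0 + 15.0 * cfg[0] + 5.0 * cfg[1]  # toy gain model
    return (gain - 40.0) ** 2

def pso(cost, dim=2, n_particles=20, iters=150, seed=1):
    """Plain global-best PSO on the unit box [0, 1]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each configuration parameter to its valid range
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best_cfg, best_err = pso(cost)
```

In the actual self-x setting, `cost` would be replaced by measurements taken from the chip in the loop, and several objectives (gain, offset, bandwidth) would be traded off instead of one.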

This thesis is devoted to the study of tropical curves with emphasis on their enumerative geometry. Major results include a conceptual proof of the fact that the number of rational tropical plane curves interpolating an appropriate number of general points is independent of the choice of points, the computation of intersection products of Psi-classes on the moduli space of rational tropical curves, a computation of the number of tropical elliptic plane curves of given degree and fixed tropical j-invariant, as well as a tropical analogue of the Riemann-Roch theorem for algebraic curves. The results were obtained in joint work with Hannah Markwig and/or Andreas Gathmann.
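
For orientation, the tropical Riemann-Roch theorem has the same shape as its classical counterpart. In the standard formulation from the tropical literature (sketched here in generic notation, not quoted from the thesis): for a divisor $D$ on a compact tropical curve $\Gamma$ of genus $g$ with canonical divisor $K_\Gamma$,

```latex
r(D) - r(K_\Gamma - D) = \deg D + 1 - g
```

where $r(D)$ denotes the rank of the divisor $D$, in formal analogy with the dimension statement of the classical Riemann-Roch theorem.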

In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid 1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper the adoption of IPC. First, IPC may return false counterexamples, since its bounded computational circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable across different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, it proposes a decomposition-based reachability analysis to identify reachability information automatically.
Second, this work develops a generic, reusable set of properties for protocol compliance verification.
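
The flavor of an interval property — an operation checked between an abstract starting and ending state, exhaustively over all inputs within the bounded interval — can be illustrated on a toy model. The finite-state machine and property below are illustrative only; real IPC encodes the unrolled circuit and property as a SAT instance rather than enumerating inputs:

```python
from itertools import product

# Toy sequential module: once started from idle, it counts 0..3 and
# asserts 'done' when the count reaches 3.  State = (running, count).
def step(state, start):
    running, count = state
    if not running:
        return (True, 0) if start else (False, 0)
    return (running, count + 1) if count < 3 else (False, 0)

def done(state):
    running, count = state
    return running and count == 3

# Interval property (operation between abstract start/end states):
# "if a start pulse occurs in idle, 'done' holds 3 steps later",
# checked exhaustively over all input sequences within the interval.
def check_interval_property():
    idle = (False, 0)
    for later_inputs in product([False, True], repeat=3):
        state = step(idle, True)        # start of the operation at t = 0
        for start in later_inputs:      # arbitrary inputs during the interval
            state = step(state, start)
        if not done(state):
            return False, later_inputs  # counterexample input sequence
    return True, None

holds, cex = check_interval_property()
```

Note that the check starts from the abstract `idle` state rather than a reset sequence; in IPC, allowing such arbitrary (possibly unreachable) starting states is exactly what can produce the false counterexamples the abstract mentions.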

Within this thesis, a series of molecular species has been studied, with focus on hydrogen bonded species and on (solvated) transition metal complexes. Experimental techniques such as FT-ICR-MS and IRMPD were combined with ab initio calculations for the determination of the structure and reactivity of the aforementioned types of systems. On the basis of high level electronic structure calculations of neutral water clusters (H2O)n with n = 17-21, a transitional size regime has been determined where the structural stabilization alternates between all-surface and interior configurations with the addition or removal of a single water molecule. Electronic structure calculations suggested that for n = 17 and 19 the interior configuration would be energetically more stable than the all-surface one. The gas phase infrared spectrum of the singly hydrated ammonium ion, NH4+(H2O), had previously been recorded by photodissociation spectroscopy of mass selected ions and interpreted by means of ab initio calculations. The present work provides additional information on the shape of the potential energy curves of NH4+(H2O) along the N-H distance at the MP2/aug-cc-pVDZ level of theory, yielding an anharmonic potential shape. Calculation of potential energy curves of the O-H mode of the intramolecular hydrogen bond of various dicarboxylic acids (oxalic to adipic acid) revealed that the shapes of the potentials directly correlate with the size of the system and the resulting ring strain. The shape of the potential is also influenced by the charge of the system. Calculation of anharmonic frequencies based on the VPT2 approach led to reasonable results in all systems with narrow potentials. IRMPD spectra have been recorded for a series of cationic vanadium oxide complexes in the gas phase reacted with acetonitrile, methanol and ethanol. The experimental spectra are compared to calculated absorption spectra.
The systematic DFT study identifies potential candidates for reductive nitrile coupling in cationic transition metal acetonitrile complexes. On the basis of the calculations, the formation of metallacyclic structures in group 3 through 7 complexes can be ruled out. Solvation of the transition metal cation by five acetonitrile ligands leads to a reductive nitrile coupling reaction in three types of complexes, namely those containing either niobium, tantalum or tungsten.

This thesis is devoted to stochastic optimization problems in various situations, approached with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing the expected utility of real terminal wealth in the presence of an index bond. Chapter 4, which is a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it presents a way to deal with the optimization problem when there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the income from returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete; a trick for completing an incomplete market is presented there. The general theory supporting the discussion is summarized in the first chapter.
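
In its basic form (generic notation, a standard textbook formulation rather than a quotation from the thesis), the martingale method of Chapter 2 replaces the dynamic portfolio problem by a static one over terminal wealth: with state-price density $H_T$ and initial capital $x_0$,

```latex
\max_{X_T}\; \mathbb{E}\!\left[U(X_T)\right]
\quad\text{subject to}\quad
\mathbb{E}\!\left[H_T X_T\right] \le x_0 ,
\qquad
X_T^{*} = I\!\left(y^{*} H_T\right), \quad I = (U')^{-1},
```

where $y^{*}$ is chosen so that the budget constraint binds; the replicating trading strategy is then recovered from the martingale representation of $\mathbb{E}[H_T X_T^{*} \mid \mathcal{F}_t]$.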

Sound surrounds us all the time and in every place in our daily life, be it pleasant music in a concert hall or disturbing noise emanating from a busy street in front of our home. The basic properties are the same for both kinds of sound, namely sound waves propagating from a source, but we perceive them in different ways depending on our current mood and on whether the sound is wanted or not. In this thesis, both pleasant sound and disturbing noise are examined by simulating the sound and visualizing the results. However, although the basic properties of music and traffic noise are the same, one is interested in different features. For example, in a concert hall the reverberation time is an important quality measure, but if noise is considered, only the resulting sound level, for example on one's balcony, is of interest. Such differences are reflected in different simulation methods and required visualizations; therefore this thesis is divided into two parts. The first part, on room acoustics, deals with the simulation of indoor sound and novel visualizations of acoustic quality measures, such as definition (original "Deutlichkeit") and clarity index (original "Klarheitsmaß"). For the simulation, two different methods, a geometric (phonon tracing) and a wave-based (FEM) approach, are applied and compared. The visualization techniques give insight into the sound behaviour and the acoustic quality of a room from a global as well as a listener-based viewpoint. Furthermore, an acoustic rendering equation is presented, which is used to render interference effects for different frequencies. Last but not least, a novel visualization approach for low frequency sound is presented, which enables the topological analysis of pressure fields based on room eigenfrequencies. The second part, on environmental noise, is concerned with the simulation and visualization of outdoor sound with a focus on traffic noise.
The simulation procedure prescribed by national regulations is discussed in detail, and an approach for the computation of noise volumes, as well as an extension of the simulation allowing interactive noise calculation, are presented. Novel visualization and interaction techniques for the calculated noise data, incorporated in an interactive three-dimensional environment, enable easy comprehension of noise problems. Furthermore, additional information can be integrated into the framework to enhance the visualization of noise and the usability of the framework for different use cases.
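
The room-acoustic quality measures mentioned above are simple energy ratios over the impulse response. A minimal sketch, using the standard ISO 3382-style definitions of definition (D50) and clarity index (C80) on a synthetic exponentially decaying impulse response (illustrative, not code from the thesis):

```python
import math

def definition_d50(h, fs):
    """Definition (Deutlichkeit): ratio of early (0-50 ms) to total energy."""
    n50 = int(0.050 * fs)
    early = sum(x * x for x in h[:n50])
    total = sum(x * x for x in h)
    return early / total

def clarity_c80(h, fs):
    """Clarity index (Klarheitsmass): 10*log10 of early/late energy, 80 ms split."""
    n80 = int(0.080 * fs)
    early = sum(x * x for x in h[:n80])
    late = sum(x * x for x in h[n80:])
    return 10.0 * math.log10(early / late)

# Synthetic impulse response: pure exponential decay, 60 dB down in 0.5 s.
fs = 8000
rt60 = 0.5
h = [math.exp(-6.91 * t / (rt60 * fs)) for t in range(int(0.6 * fs))]

d50 = definition_d50(h, fs)   # fraction in [0, 1]
c80 = clarity_c80(h, fs)      # in dB; positive when early energy dominates
```

A real measured impulse response would of course contain direct sound, discrete reflections and a noisy tail rather than a smooth exponential.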

Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene is found to influence the adsorption capacity for thiophene and is more pronounced when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acidic zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is obvious, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+, Ni2+, Ce3+, La3+ and Y3+ ions into zeolite Na+-Y by aqueous ion-exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+ ions, the presence of Cu+ ions at sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for the case of Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access of bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite.
IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bond (of thiophene) and the metal ions (in the zeolite framework). A different mode of interaction of thiophene with Ce3+, La3+ and Y3+ metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y, Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane as solvent is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very low in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly via a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents. Acidic zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion, and regeneration by heating under an inert gas flow alone is not possible. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx.
The exothermic peaks appearing at different temperatures in the heat flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability of the existing catalytic hydrodesulphurization technique to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene. Cu+-Y and Ce3+-Y were found in the present study to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification step downstream of the catalytic hydrodesulphurization unit.

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of call and put options on the stock and/or the zero coupon bond issued by a firm with default risk. We use the martingale approach for the dynamic optimisation problems. Our findings show that the riskier the option gets, the smaller the proportion of his wealth the investor allocates to the risky asset. Further, we analyse the Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign in order to build the term structure of the sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies based on historical experience. Our results highlight how the market price of the sovereign credit risk depends on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, transforming the related Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. The simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.
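
For a flat CDS curve, the intensity-based bootstrap collapses to the well-known credit-triangle approximation λ ≈ s/(1−R). The sketch below illustrates only this textbook simplification (flat hazard rate, continuous premium payment); it is not either of the two bootstrap methods developed in the thesis:

```python
import math

def implied_hazard_rate(spread, recovery):
    """Credit triangle: constant default intensity implied by a flat CDS spread."""
    return spread / (1.0 - recovery)

def default_probability(spread, recovery, horizon):
    """Risk-neutral PD over 'horizon' years under a constant intensity."""
    lam = implied_hazard_rate(spread, recovery)
    return 1.0 - math.exp(-lam * horizon)

# 100 bp flat spread, 40% recovery, 5-year horizon
pd5 = default_probability(0.01, 0.40, 5.0)
```

A full bootstrap would instead fit a piecewise-constant intensity term by term so that each quoted CDS maturity prices to par.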

The desire to model geometrical and physical features in ever increasing detail has led to a steady increase in the number of points used in field solvers. While many solvers have been ported to parallel machines, grid generators have been left behind. Sequential generation of meshes of large size is extremely problematic both in terms of time and memory requirements. Therefore, the need for developing parallel mesh generation techniques is well justified. In this work, a novel algorithm is presented for the automatic parallel generation of tetrahedral computational meshes based on geometrical domain decomposition; it has the potential to remove this bottleneck. Different domain decomposition approaches and criteria have been investigated. Questions regarding time and memory consumption, efficiency of computations and quality of the generated surface and volume meshes have been considered. As a result of this work, the parTgen (partitioner and parallel tetrahedral mesh generator) software package, based on the developed algorithm, has been created. Several real-life examples of relatively complex structures involving large meshes (of the order of 10^7-10^8 elements) are given. It has been shown that high mesh quality is achieved, memory and time consumption are reduced significantly, and the parallel algorithm is efficient.
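
The idea of geometric domain decomposition can be illustrated with recursive coordinate bisection of a point set: each subdomain can then be meshed independently on its own processor. This is a generic sketch of the principle, not the parTgen algorithm itself:

```python
def recursive_bisection(points, depth):
    """Split a point cloud into 2**depth balanced parts by recursively
    bisecting along the coordinate axis of largest extent."""
    if depth == 0:
        return [points]
    dim = len(points[0])
    # pick the axis with the largest spread
    axis = max(range(dim),
               key=lambda d: max(p[d] for p in points) - min(p[d] for p in points))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2  # median split keeps the parts balanced
    return (recursive_bisection(pts[:mid], depth - 1)
            + recursive_bisection(pts[mid:], depth - 1))

# 1000 points on a coarse 3D lattice, split into 8 subdomains
pts = [(x * 0.1, y * 0.2, z * 0.3)
       for x in range(10) for y in range(10) for z in range(10)]
parts = recursive_bisection(pts, 3)
```

A production decomposition additionally has to construct conforming interface surfaces between the subdomains so that the independently generated volume meshes match along their shared boundaries.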

A modular level set algorithm is developed to study the interface and its movement in free moving boundary problems. The algorithm is divided into three basic modules: initialization, propagation and contouring. Initialization is the process of finding the signed distance function from closed objects. We discuss here a methodology to find an accurate signed distance function from a closed, simply connected surface discretized by triangulation. We compute the signed distance function using the direct method and store it efficiently in the neighborhood of the interface by a narrow band level set method. A novel approach is employed to determine the correct sign of the distance function at convex-concave junctions of the surface. The accuracy and convergence of the method with respect to the surface resolution are studied. It is shown that the efficient organization of the surface and narrow band data structures enables the solution of large industrial problems. We also compare the accuracy of the signed distance function obtained by the direct approach with the Fast Marching Method (FMM); the direct approach is found to be more accurate. Contouring is performed through a variant of the marching cubes algorithm used for isosurface construction from volumetric data sets. The algorithm is designed to keep foreground and background information consistent, contrary to the neutrality principle followed for surface rendering in computer graphics. The algorithm ensures that the isosurface triangulation is closed, non-degenerate and non-ambiguous. The constructed triangulation has the desirable properties required for the generation of good volume meshes. These volume meshes are used in the boundary element method for the study of linear electrostatics. For estimating surface properties like interface position, normal and curvature accurately from a discrete level set function, a method based on higher order weighted least squares is developed.
It is found that the least squares approach is more accurate than the finite difference approximation. Furthermore, the method of least squares requires a more compact stencil than finite difference schemes. The accuracy and convergence of the method depend on the surface resolution and the discrete mesh width. This approach is used in the propagation module for the study of mean curvature flow and bubble dynamics. The advantage of this approach is that the curvature is not discretized explicitly on the grid but is estimated on the interface. The method of constant velocity extension is employed for the propagation of the interface. With the least squares approach, the mean curvature flow shows a considerable reduction in mass loss compared to finite difference techniques. In the bubble dynamics application, the modules are used to study a bubble under the influence of surface tension forces in order to validate the Young-Laplace law. It is found that the order of the curvature estimation plays a crucial role in calculating an accurate pressure difference between the inside and the outside of the bubble. Further, we study the coalescence of two bubbles under surface tension forces. The application of these modules to various industrial problems is discussed.
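
The role of the level set function in recovering interface geometry can be sketched with the standard curvature formula κ = ∇·(∇φ/|∇φ|). The example below evaluates it by plain central differences on the exact signed distance function of a circle; the thesis instead uses a higher order weighted least squares fit, which this sketch does not implement:

```python
import math

def phi(x, y, R=0.5):
    """Signed distance to a circle of radius R centred at the origin."""
    return math.hypot(x, y) - R

def curvature(f, x, y, h=1e-3):
    """kappa = (fxx*fy^2 - 2*fx*fy*fxy + fyy*fx^2) / (fx^2 + fy^2)^(3/2),
    with all derivatives approximated by central differences."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return (fxx * fy**2 - 2 * fx * fy * fxy + fyy * fx**2) / (fx**2 + fy**2) ** 1.5

# On the interface of a circle of radius 0.5 the curvature is 1/R = 2.
kappa = curvature(phi, 0.5, 0.0)
```

For a perfect signed distance function this works well; the point of the least squares approach is to remain accurate when φ is only available as noisy discrete grid data.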

In many applications of statistics, e.g. medical, financial or industrial ones, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modeled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results of Franke and Neumann (2000). We use a neural-network-based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results of Gombay and Horvath (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of a one-dimensional parameter. Through simulation, we show that the sample size, the change point location and the size of the change influence change point detection. In this work, the maximum likelihood estimation method is used to estimate a change point once it has been detected. Through simulation, we show that change point estimation is likewise influenced by the sample size, the change point location and the size of the change. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio and the percentile bootstrap method. Through simulation, the percentile bootstrap method is shown to be superior to the profile log-likelihood ratio method.
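
For a plain Bernoulli sequence (without the regression and neural network layer treated in the thesis), the likelihood ratio change point statistic reduces to comparing segment-wise success rates. A minimal sketch on synthetic 0-1 data, illustrative only:

```python
import math

def loglik(ones, n):
    """Bernoulli log-likelihood of a segment at its MLE success rate."""
    if ones == 0 or ones == n:
        return 0.0
    p = ones / n
    return ones * math.log(p) + (n - ones) * math.log(1 - p)

def lr_change_point(x):
    """Return the split k maximizing the likelihood ratio statistic
    2*(loglik(left) + loglik(right) - loglik(pooled))."""
    n = len(x)
    total = sum(x)
    best_k, best_stat = None, -1.0
    for k in range(1, n):
        left = sum(x[:k])
        stat = 2 * (loglik(left, k) + loglik(total - left, n - k)
                    - loglik(total, n))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# success rate jumps from 0.2 to 0.8 at k = 100 (deterministic pattern)
x = [1 if i % 5 == 0 else 0 for i in range(100)] + \
    [0 if i % 5 == 0 else 1 for i in range(100)]
k_hat, stat = lr_change_point(x)
```

In the thesis the per-segment likelihoods come from the fitted neural network regression model rather than from constant success rates, and the statistic is compared against the derived limit distribution.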

This dissertation deals with the optimization of the web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes to be optimized are considered: those related to the quality of the fabric and those describing the stability of the production process. The problem falls into the multicriteria optimization and decision making framework. The functions involved in the model of the process are nonlinear, nonconvex and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to approximate the Pareto frontier numerically, and alternative methods are proposed to navigate this set and support the decision making process. The proposed strategy is applied to a particular production process, and numerical results are presented.
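
The notion of approximating a Pareto frontier can be sketched with weighted-sum scalarization on a toy bi-objective problem. This is a generic illustration only; the thesis's exploration/continuation strategy is more elaborate and handles nonconvex fronts, which weighted sums alone cannot recover:

```python
def f1(x):
    return x ** 2            # toy "quality" objective

def f2(x):
    return (x - 2.0) ** 2    # toy "stability" objective

def weighted_sum_front(n_weights=21, n_grid=401):
    """Approximate the Pareto front by minimizing w*f1 + (1-w)*f2
    over a grid on [-1, 3] for a sweep of weights w."""
    xs = [4.0 * i / (n_grid - 1) - 1.0 for i in range(n_grid)]
    front = []
    for i in range(n_weights):
        w = i / (n_weights - 1)
        x_best = min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        front.append((f1(x_best), f2(x_best)))
    return sorted(set(front))

def dominated(p, q):
    """True if q dominates p (q at least as good in both objectives, q != p)."""
    return q[0] <= p[0] and q[1] <= p[1] and q != p

front = weighted_sum_front()
```

Navigation methods as in the thesis would then let a decision maker move along this set and trade fabric quality against process stability.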

This thesis shows an approach to combine the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods, such as the frequently used Craig-Bampton approach, the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets: set 1 consists of deformation snapshots, and set 2 additionally contains velocities and accelerations. An error analysis shows that deformation snapshots alone carry no convergence guarantee, whereas including the derivatives yields an error bound that diminishes for small time steps. The numerical results show better behaviour for the derivative snapshot method as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems, especially when using commercial software, it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the computation step with the full model that is needed anyway to set up the POD basis (training step). The nonlinear term of inner forces and the stiffness matrix are output and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string in Matlab and an air spring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
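
The core of POD is a truncated singular value decomposition of the snapshot matrix collected during the training step. A minimal sketch in generic notation (numpy, synthetic snapshots; not the thesis's implementation):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Columns of 'snapshots' are state vectors; return the r dominant
    left singular vectors as the POD projection basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Synthetic snapshots: superposition of two spatial modes with
# time-varying amplitudes, plus tiny noise.
n, m = 200, 50
x = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, m)
X = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t))
     + 1e-6 * np.random.default_rng(0).standard_normal((n, m)))

V, s = pod_basis(X, r=2)
X_red = V @ (V.T @ X)  # project onto the reduced space and reconstruct
rel_err = np.linalg.norm(X - X_red) / np.linalg.norm(X)
```

The truncation criterion corresponds to the "sum of the left-over eigenvalues" mentioned above: the discarded singular values squared bound the reconstruction error of the training data.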

Today’s high-resolution digital images and videos require large amounts of storage space and transmission bandwidth. To cope with this, compression methods are necessary that reduce the required space while minimizing visual artifacts. We propose a compression method based on a piecewise linear color interpolation induced by a triangulation of the image domain. We present methods to significantly speed up the optimization process for finding the triangulation, and we extend the method to digital videos. Laser scanners that capture the surface of three-dimensional objects are widely used in industry nowadays, e.g., for reverse engineering or quality measurement. Hand-held scanning devices have the advantage that the laser device can be moved to any position, permitting scans of complex objects. But operating a hand-held laser scanner is challenging: the operator has to keep track of the scanned regions in his mind and receives no feedback on the sample density unless he runs the surface reconstruction after finishing the scan. We present a system that supports the operator by computing and rendering high-quality surface meshes of the captured data online, i.e., while he is still scanning, and in real time. Furthermore, it color-codes the rendered surface to reflect the surface quality. Thereby, instant feedback is provided, resulting in better scans in less time.
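The piecewise linear color interpolation over a triangulation amounts to barycentric blending of the vertex colors inside each triangle; the sketch below, with made-up vertices and grey values, shows the idea for a single triangle.

```python
import numpy as np

def interpolate_color(tri, colors, p):
    """Piecewise linear interpolation: the barycentric weights of p in the
    triangle `tri` (3 vertices in 2D) blend the three vertex colors."""
    a, b, c = (np.asarray(v, float) for v in tri)
    T = np.column_stack([b - a, c - a])
    w1, w2 = np.linalg.solve(T, np.asarray(p, float) - a)
    w0 = 1.0 - w1 - w2
    return w0 * colors[0] + w1 * colors[1] + w2 * colors[2]

# Grey values at the corners of a unit triangle, evaluated at the centroid
val = interpolate_color([(0, 0), (1, 0), (0, 1)], [0.0, 1.0, 2.0],
                        (1 / 3, 1 / 3))
```

Optimizing the triangulation then means choosing vertices and connectivity so that this interpolant approximates the image well with few triangles.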

The high data throughput demanded for communication between units in a system can be provided by short-haul optical links and high-speed serial data communication. In these communication schemes, the receiver has to extract the corresponding clock from the serial data stream with a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, the transceivers always work in a plesiochronous condition, i.e. an operation with slightly different reference frequencies, and the difference in data rates is absorbed by an elastic buffer. In a data readout system of a particle physics experiment, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the network. A plesiochronous condition in such a network is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication network can also support the clock distribution function, the system complexity can be largely reduced. The CDRs on all detector nodes then have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for the local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It has a two-loop topology consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a rotational frequency detector increases the frequency capture range, so operation without a local reference clock becomes possible.
The loop bandwidth can be freely adjusted to meet the specified jitter tolerance. A 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock jitter filter loop, built around a low-jitter LC voltage controlled oscillator (VCO), improves the jitter of the recovered clock; its loop bandwidth is minimized to suppress that jitter. The 1/4-rate CDR with frequency detector and the clock jitter filter with LC-VCO were implemented in 0.18 µm CMOS technology. Together the circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR covers data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz, and its jitter tolerance complies with the SONET standard. The clock jitter filter has configurable input/output frequencies from 9.191 to 78.125 MHz, and its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock jitter filter generates a clock with 4.2 ps rms jitter from an incoming serial data stream with 150 ps peak-to-peak inter-symbol-interference jitter.
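Why minimizing the jitter filter's loop bandwidth suppresses input jitter can be illustrated with a generic first-order low-pass jitter transfer function; this is a textbook PLL approximation, not the implemented circuit.

```python
import numpy as np

def jitter_transfer(f, f_bw):
    """|H(jf)| of a first-order low-pass jitter transfer with -3 dB
    bandwidth f_bw: jitter well above the loop bandwidth is attenuated
    roughly proportionally to f_bw / f."""
    return 1.0 / np.sqrt(1.0 + (f / f_bw) ** 2)

# Input jitter at 10 MHz passes a 3 MHz loop far more easily than a 100 kHz loop
wide = jitter_transfer(10e6, 3e6)
narrow = jitter_transfer(10e6, 100e3)
```

The actual jitter transfer of the two-loop circuit is second order and depends on the loop filter, but the qualitative bandwidth trade-off is the same.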

The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches for solving the resulting instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.

The main motivation of this contribution is to introduce a computational laboratory for analysing defects and fractures at the sub-micro scale. To this end, we present a continuum-atomistic multiscale algorithm for the analysis of crystalline deformation, i.e. we combine the Cauchy-Born rule within a finite element approximation (FEM) on the continuum region with a molecular dynamics (MD) resolution on the atomistic domain. The aim is twofold: on the one hand, the stability, i.e. the validity of the Cauchy-Born rule and its transition to non-affine deformation at the micron scale, is studied with the help of a molecular dynamics approach to capture fine-scale features; on the other hand, a horizontal FEM/MD scheme, i.e. a continuum-atomistic coupling, is envisaged in order to study representative cases of crystalline defects. To cope with the latter, we introduce a horizontal coupling method for continuum-atomistic analysis.
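The Cauchy-Born rule maps every lattice bond vector affinely through the continuum deformation gradient before evaluating the atomistic energy; a minimal sketch with a hypothetical harmonic pair potential (not the thesis's actual interatomic model):

```python
import numpy as np

def cauchy_born_energy(F, bonds, phi):
    """Strain-energy density under the Cauchy-Born rule: each undeformed
    bond vector r is deformed affinely to F @ r, and the pair potential
    phi (a function of bond length) is summed over the bonds."""
    return sum(phi(np.linalg.norm(F @ np.asarray(r, float))) for r in bonds)

# Harmonic pair potential with rest length 1 on a square lattice:
# the undeformed lattice (F = identity) stores no energy, a stretched one does
phi = lambda r: (r - 1.0) ** 2
bonds = [(1, 0), (0, 1), (-1, 0), (0, -1)]
e0 = cauchy_born_energy(np.eye(2), bonds, phi)
e1 = cauchy_born_energy(np.diag([1.1, 1.0]), bonds, phi)
```

The stability question studied in the thesis is precisely when this affine assumption breaks down and the MD resolution must take over.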

Grey-box modelling deals with models that are able to integrate two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After presenting some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, namely the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, with a view to their applicability to grey-box modelling. These theories point the way directly to a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; exploitation of the sparsity of the matrix of wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.
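The sparsity that the new model class exploits is already visible in one level of the orthonormal Haar wavelet transform: for piecewise constant signals, almost all detail coefficients vanish. The example below is a generic illustration of that effect, not the thesis's identification algorithm.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: pairwise averages form
    the coarse part, pairwise differences the detail coefficients."""
    x = np.asarray(x, float)
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return coarse, detail

# A piecewise constant signal produces an all-zero detail band,
# so only the coarse coefficients need to be stored and identified
coarse, detail = haar_step([3.0, 3.0, 3.0, 3.0, 7.0, 7.0, 7.0, 7.0])
```

For smooth or piecewise smooth functions the detail coefficients decay rapidly instead of vanishing exactly, which is what makes the coefficient matrix sparse in practice.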

Layout analysis, i.e. the division of page images into text blocks and lines and the determination of their reading order, is a major performance-limiting step in large-scale document digitization projects. This thesis addresses this problem in several ways: it presents new performance measures to identify important classes of layout errors, evaluates the performance of state-of-the-art layout analysis algorithms, presents a number of methods to reduce the error rate and the catastrophic failures occurring during layout analysis, and develops a statistically motivated, trainable layout analysis system that addresses the needs of large-scale document analysis applications. An overview of the key contributions of this thesis is as follows. First, this thesis presents an efficient local adaptive thresholding algorithm that yields the same quality of binarization as state-of-the-art local binarization methods but runs in time close to that of global thresholding methods, independent of the local window size. Tests on the UW-1 dataset demonstrate a 20-fold speedup compared to traditional local thresholding techniques. Then, this thesis presents a new perspective on document image cleanup. Instead of trying to explicitly detect and remove marginal noise, the approach focuses on locating the page frame, i.e. the actual page contents area. A geometric matching algorithm is presented to extract the page frame of a structured document. It is demonstrated that incorporating a page frame detection step into the document processing chain reduces OCR error rates from 4.3% to 1.7% (n = 4,831,618 characters) on the UW-III dataset and layout-based retrieval error rates from 7.5% to 5.3% (n = 815 documents) on the MARG dataset. The performance of six widely used page segmentation algorithms (x-y cut, smearing, whitespace analysis, constrained text-line finding, docstrum, and Voronoi) on the UW-III database is evaluated in this work using a state-of-the-art evaluation methodology.
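A window-size-independent local threshold, as in the binarization contribution above, is typically obtained with integral images, which deliver any rectangular window sum in four lookups. The sketch below uses a plain local-mean threshold with hypothetical parameters; it illustrates the integral-image trick, not the thesis's exact binarization method.

```python
import numpy as np

def local_mean_binarize(img, w=15, k=0.7):
    """Binarize with a local-mean threshold; the integral image makes the
    cost per pixel independent of the window size w."""
    img = np.asarray(img, float)
    # integral image with a zero border: ii[y, x] = sum of img[:y, :x]
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, W = img.shape
    r = w // 2
    y0 = np.clip(np.arange(h) - r, 0, h)
    y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(W) - r, 0, W)
    x1 = np.clip(np.arange(W) + r + 1, 0, W)
    # window sum from four integral-image lookups per pixel
    S = (ii[np.ix_(y1, x1)] - ii[np.ix_(y0, x1)]
         - ii[np.ix_(y1, x0)] + ii[np.ix_(y0, x0)])
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return img > k * (S / area)

out = local_mean_binarize(np.ones((8, 8)), w=3)
```

Variance-based thresholds (e.g. Sauvola-style) can be computed the same way from a second integral image of squared pixel values.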
It is shown that current evaluation scores are insufficient for diagnosing specific errors in page segmentation and fail to identify some classes of serious segmentation errors altogether. Thus, a vectorial score is introduced that is sensitive to, and identifies, the most important classes of segmentation errors (over-, under-, and mis-segmentation) and the page components (lines, blocks, etc.) that are affected. Unlike previous schemes, this evaluation method has a canonical representation of ground-truth data and guarantees pixel-accurate evaluation results for arbitrary region shapes. Based on a detailed analysis of the errors made by different page segmentation algorithms, this thesis presents a novel combination of the line-based approach by Breuel with the area-based approach of Baird, which solves the over-segmentation problem of area-based approaches. This new approach achieves a mean text-line extraction error rate of 4.4% (n = 878 documents) on the UW-III dataset, the lowest among the analyzed algorithms. This thesis also describes a simple, fast, and accurate system for document image zone classification that results from a detailed comparative analysis of the performance of widely used features in document analysis and content-based image retrieval. Using a novel combination of known algorithms, an error rate of 1.46% (n = 13,811 zones) is achieved on the UW-III dataset, compared to a state-of-the-art system that reports an error rate of 1.55% (n = 24,177 zones) using more complicated techniques. In addition to the layout analysis of Roman-script documents, this work also presents the first high-performance layout analysis method for Urdu script. For that purpose, a geometric text-line model for Urdu script is presented. It is shown that the method can accurately extract Urdu text-lines from documents of different layouts such as prose books, poetry books, magazines, and newspapers.
Finally, this thesis presents a novel algorithm for probabilistic layout analysis that specifically addresses the needs of large-scale digitization projects. The presented approach models known page layouts as a structural mixture model. A probabilistic matching algorithm is presented that gives multiple interpretations of the input layout with associated probabilities. An algorithm based on A* search is presented for finding the most likely layout of a page given its structural layout model. For training the layout models, an EM-like algorithm is presented that is capable of learning the geometric variability of layout structures from data without the need for a page segmentation ground truth. Evaluation of the algorithm on documents from the MARG dataset shows an accuracy above 95% for geometric layout analysis.
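A best-first search of the kind used to pick the most likely layout can be sketched generically; the toy chain graph below is a made-up example, with step costs standing in for the negative log-probabilities of layout hypotheses.

```python
import heapq

def a_star(start, is_goal, neighbors, h):
    """Generic A*: expand the node minimizing cost-so-far plus heuristic.
    With costs as negative log-probabilities and an admissible (never
    overestimating) h, the first goal reached is the most likely one."""
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if is_goal(node):
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

# Toy chain 0 -> 1 -> 2 -> 3 with unit step costs and a trivial heuristic
cost = a_star(0, lambda n: n == 3,
              lambda n: [(n + 1, 1.0)] if n < 3 else [],
              lambda n: 0.0)
```

In the layout setting, nodes would be partial page interpretations and `neighbors` would extend them by one layout component under the structural model.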