Doctoral Thesis
This thesis is devoted to stochastic optimization problems in various situations, treated with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems that are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing expected utility of real terminal wealth in the presence of an index bond. Chapter 4, a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it shows how to deal with the optimization problem in the case where there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market under consideration is incomplete; a trick for completing an incomplete market is presented there. The general theory supporting the discussion that follows is summarized in the first chapter.
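To make the setting concrete, the following is a minimal sketch of the canonical terminal-wealth problem the martingale method solves (standard textbook form, as in the references cited above, not a formulation quoted from the thesis): in a complete market with state-price density $H_T$ and initial capital $x$, the dynamic problem
\[
\max_{\pi}\; \mathbb{E}\big[U(X_T^{x,\pi})\big]
\]
is replaced by the static problem
\[
\max_{X_T}\; \mathbb{E}\big[U(X_T)\big] \quad \text{subject to} \quad \mathbb{E}[H_T X_T] \le x,
\]
whose solution is $X_T^* = I(\lambda H_T)$ with $I = (U')^{-1}$ and the Lagrange multiplier $\lambda$ chosen so that the budget constraint holds with equality; the optimal strategy $\pi^*$ is then recovered by replicating $X_T^*$.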
The present thesis is concerned with the simulation of the loading behaviour of both hybrid lightweight structures and piezoelectric mesostructures, with a special focus on solid interfaces on the meso scale. Furthermore, an analytical review of bifurcation modes of continuum-interface problems is included. The inelastic interface behaviour is characterised by elastoplastic, viscous, damaging and fatigue-motivated models. For the related numerical computations, the Finite Element Method is applied; in this context, so-called interface elements play an important role. The simulation results are illustrated by numerous examples, some of which are correlated with experimental data.
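To illustrate the kind of damaging interface behaviour such interface elements evaluate at each integration point, here is a minimal sketch of a bilinear traction-separation law with irreversible scalar damage (a generic textbook model, not the thesis's formulation; all parameter values are assumed):

```python
import numpy as np

def bilinear_cohesive_traction(delta, K=1e5, delta0=1e-4, deltaf=1e-3, d_old=0.0):
    """Bilinear traction-separation law with irreversible scalar damage.

    delta  : current interface opening [mm]
    K      : initial (undamaged) interface stiffness [N/mm^3]
    delta0 : opening at damage onset
    deltaf : opening at complete decohesion
    d_old  : damage from the previous load step (irreversibility)
    Returns (traction, updated damage).
    """
    # Damage is driven by the maximum opening reached so far
    if delta <= delta0:
        d_new = d_old
    else:
        d_new = max(d_old, min(1.0, (deltaf * (delta - delta0)) /
                                     (delta * (deltaf - delta0))))
    traction = (1.0 - d_new) * K * delta
    return traction, d_new

# Loading-unloading cycle: damage accumulates; unloading returns to the origin
d = 0.0
for delta in [5e-5, 2e-4, 5e-4, 1e-4, 8e-4]:
    t, d = bilinear_cohesive_traction(delta, d_old=d)
    print(f"delta={delta:.1e}  traction={t:6.2f}  damage={d:.3f}")
```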
The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these communication schemes, the receiver has to extract the corresponding clock from the serial data stream with a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, the data communication transceivers always work in a plesiochronous condition, i.e., an operation with slightly different reference frequencies. The difference in data rates is absorbed by an elastic buffer. In the data readout system of a particle physics experiment, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the network. A plesiochronous condition in such networks is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication network can also support clock distribution, the system complexity can be greatly reduced. The CDRs on all detector nodes then have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It has a two-loop topology consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a CDR with a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible. Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. A 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock jitter filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage-controlled oscillator (VCO), and its loop bandwidth is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock jitter filter with LC-VCO were implemented in 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR covers data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz, and its jitter tolerance complies with the SONET standard. The clock jitter filter has configurable input/output frequencies from 9.191 to 78.125 MHz, and its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock jitter filter can generate a clock with 4.2 ps rms jitter from an incoming serial data stream with 150 ps peak-to-peak inter-symbol-interference jitter.
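To see why a narrow loop bandwidth in the clock-jitter-filter loop suppresses input jitter, consider the jitter transfer function of a generic second-order PLL. This is an illustrative model only, not the implemented circuit; the natural frequencies and damping factor below are assumed values:

```python
import numpy as np

def pll_jitter_transfer(f, f_n, zeta):
    """|H(j*2*pi*f)| of a second-order type-II PLL.

    H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2) is a
    low-pass for input jitter: jitter above the loop bandwidth is
    attenuated, which is the purpose of the jitter-filter loop.
    """
    wn = 2 * np.pi * f_n
    s = 1j * 2 * np.pi * f
    H = (2 * zeta * wn * s + wn**2) / (s**2 + 2 * zeta * wn * s + wn**2)
    return np.abs(H)

f = np.logspace(3, 8, 6)                        # 1 kHz .. 100 MHz
wide = pll_jitter_transfer(f, 4e6, 0.707)       # 4 MHz CDR loop: passes most jitter
narrow = pll_jitter_transfer(f, 100e3, 0.707)   # 100 kHz jitter-filter loop
for fi, w, n in zip(f, wide, narrow):
    print(f"f={fi:9.0f} Hz  |H|_wide={w:6.3f}  |H|_narrow={n:6.3f}")
```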
We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization. In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update: a level set technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function reduces to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique keeps the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
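The following is a minimal sketch (a 2D toy with assumed geometry and hole positions, not the thesis's 3D algorithm) of how an implicit boundary description can be updated by combining level set functions, in the spirit of reducing the structural update to operations on several level set functions:

```python
import numpy as np

# Level set of a rectangular design domain (phi < 0: material)
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
phi_body = np.maximum(np.abs(x) - 0.8, np.abs(y) - 0.5)

def hole(cx, cy, r):
    """Signed distance of a circular hole: negative inside the hole."""
    return np.sqrt((x - cx)**2 + (y - cy)**2) - r

# Removing material = taking the max with the complement of each hole.
# Holes would be placed where the topological gradient is most negative;
# the positions below are arbitrary, for illustration only.
phi = phi_body
for cx, cy, r in [(-0.3, 0.0, 0.15), (0.35, 0.1, 0.12)]:
    phi = np.maximum(phi, -hole(cx, cy, r))

print("material fraction:", np.mean(phi < 0))
# The zero isocontour of phi is the new boundary, which the mesh
# generator would remesh in the next optimization iteration.
```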
In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres, the Folgar-Tucker model is generalized by incorporating the excluded-volume effect. For dilute semi-flexible fibre suspensions, a novel moments-based description of the fibre orientation state is introduced, and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first-order hyperbolic system for the fibre orientation variables, is analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution is demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite-volume-based discretization of the suspension flow is given, and numerical results for several two- and three-dimensional domains with different parameter values are presented and discussed.
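For orientation, the standard Folgar-Tucker equation that the concentrated-suspension model generalizes reads, in the common Advani-Tucker orientation-tensor form (the notation here is the conventional one, assumed rather than quoted from the thesis):
\[
\frac{DA}{Dt} = W A - A W + \xi \left( D A + A D - 2\, \mathbb{A} : D \right) + 2 C_I \dot{\gamma} \,( I - 3 A ),
\]
where $A$ and $\mathbb{A}$ are the second- and fourth-order orientation tensors (the latter requiring a closure approximation), $D$ and $W$ are the symmetric and antisymmetric parts of the velocity gradient, $\xi$ is a fibre shape parameter, $C_I$ is the Folgar-Tucker interaction coefficient, and $\dot{\gamma}$ is the effective shear rate.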
Nanotechnology is now recognized as one of the most promising areas for technological development in the 21st century. In materials research, the development of polymer nanocomposites is rapidly emerging as a multidisciplinary research activity whose results could widen the applications of polymers to the benefit of many different industries. Nanocomposites are a new class of composites: particle-filled polymers for which at least one dimension of the dispersed particles is in the nanometer range. In this area, polymer/clay nanocomposites have attracted considerable interest because they often exhibit remarkable property improvements compared to the virgin polymer or to conventional micro- and macro-composites.
The present work addresses the toughening and reinforcement of thermoplastics via a novel method which yields both micro- and nanocomposites. Two matrices are used: amorphous polystyrene (PS) and semi-crystalline polyoxymethylene (POM). Polyurethane (PU) was selected as the toughening agent for POM and used in its latex form; notably, the mean size of the rubber latex particles closely matches that of conventional toughening agents (impact modifiers). Boehmite alumina and sodium fluorohectorite (FH) were used as reinforcements. One criterion for selecting these fillers was that they are water-swellable/dispersible, so their nanoscale dispersion can also be achieved in an aqueous polymer latex. A systematic study was performed on how to adapt discontinuous and continuous manufacturing techniques for the related nanocomposites.
The dispersion of the nanofillers was characterized and discussed using transmission and scanning electron microscopy and atomic force microscopy (TEM, SEM and AFM, respectively) as well as X-ray diffraction (XRD). The crystallization of POM was studied by means of differential scanning calorimetry and polarized-light optical microscopy (DSC and PLM, respectively). The mechanical and thermomechanical properties of the composites were determined by uniaxial tensile tests, dynamic-mechanical thermal analysis (DMTA), short-time creep tests, and thermogravimetric analysis (TGA).
PS composites were first produced by a discontinuous manufacturing technique, whereby FH or alumina was incorporated into the PS matrix by melt blending, with and without precompounding of the PS latex with the nanofiller. It was found that direct melt mixing (DM) of the nanofillers with PS resulted in microcomposites, whereas the latex-mediated pre-compounding (masterbatch technique, MB) yielded nanocomposites. FH was not intercalated by PS when prepared by DM. By contrast, FH was well dispersed (mostly intercalated) in PS via the PS latex-mediated predispersion of FH following the MB route. The nanocomposites produced by MB outperformed the DM-compounded microcomposites with respect to properties such as stiffness, strength and ductility, as shown by dynamic-mechanical and static tensile tests. The creep resistance of the nanocomposites (summarized in master curves) was improved compared to that of the microcomposites. Master curves (creep compliance vs. time), constructed from isothermal creep tests performed at different temperatures, showed that the nanofiller reinforcement mostly affects the initial creep compliance.
Next, ternary composites composed of POM, PU and boehmite alumina were produced by melt blending with and without latex precompounding, the latter serving for the predispersion of the alumina particles. The related MB was produced by mixing the PU latex with water-dispersible boehmite alumina. The composites produced by the MB technique outperformed the DM-compounded composites with respect to most of the thermal and mechanical characteristics.
Toughened and/or reinforced PS- and POM-based composites were also successfully produced by a continuous extrusion technique. This technique resulted in good dispersion of both the nanofiller (boehmite) and the impact modifier (PU). Compared to the microcomposites obtained by conventional DM, the nanofiller dispersion became finer and more uniform when the water-mediated predispersion was used. The resulting structure markedly affected the mechanical properties (stiffness and creep resistance) of the corresponding composites. The impact resistance of POM was greatly enhanced by the addition of PU rubber when manufactured by continuous extrusion. This was traced to the dispersed PU particle size being in the range required for conventional impact modifiers.
Layout analysis--the division of page images into text blocks and lines and the determination of their reading order--is a major performance-limiting step in large-scale document digitization projects. This thesis addresses this problem in several ways: it presents new performance measures to identify important classes of layout errors, evaluates the performance of state-of-the-art layout analysis algorithms, presents a number of methods to reduce the error rate and the catastrophic failures occurring during layout analysis, and develops a statistically motivated, trainable layout analysis system that addresses the needs of large-scale document analysis applications. An overview of the key contributions of this thesis is as follows. First, this thesis presents an efficient local adaptive thresholding algorithm that yields the same quality of binarization as state-of-the-art local binarization methods, but runs in time close to that of global thresholding methods, independent of the local window size. Tests on the UW-1 dataset demonstrate a 20-fold speedup compared to traditional local thresholding techniques. Then, this thesis presents a new perspective on document image cleanup. Instead of trying to explicitly detect and remove marginal noise, the approach focuses on locating the page frame, i.e. the actual page contents area. A geometric matching algorithm is presented to extract the page frame of a structured document. It is demonstrated that incorporating a page frame detection step into the document processing chain reduces OCR error rates from 4.3% to 1.7% (n=4,831,618 characters) on the UW-III dataset and layout-based retrieval error rates from 7.5% to 5.3% (n=815 documents) on the MARG dataset. The performance of six widely used page segmentation algorithms (x-y cut, smearing, whitespace analysis, constrained text-line finding, docstrum, and Voronoi) on the UW-III database is evaluated in this work using a state-of-the-art evaluation methodology. It is shown that current evaluation scores are insufficient for diagnosing specific errors in page segmentation and fail to identify some classes of serious segmentation errors altogether. Thus, a vectorial score is introduced that is sensitive to, and identifies, the most important classes of segmentation errors (over-, under-, and mis-segmentation) and the page components (lines, blocks, etc.) that are affected. Unlike previous schemes, this evaluation method has a canonical representation of ground-truth data and guarantees pixel-accurate evaluation results for arbitrary region shapes. Based on a detailed analysis of the errors made by different page segmentation algorithms, this thesis presents a novel combination of the line-based approach of Breuel with the area-based approach of Baird, which solves the over-segmentation problem of area-based approaches. This new approach achieves a mean text-line extraction error rate of 4.4% (n=878 documents) on the UW-III dataset, the lowest among the analyzed algorithms. This thesis also describes a simple, fast, and accurate system for document image zone classification that results from a detailed comparative analysis of the performance of features widely used in document analysis and content-based image retrieval. Using a novel combination of known algorithms, an error rate of 1.46% (n=13,811 zones) is achieved on the UW-III dataset, in comparison to a state-of-the-art system that reports an error rate of 1.55% (n=24,177 zones) using more complicated techniques.
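The thresholding idea can be sketched as follows: the window mean and standard deviation needed by Sauvola's formula are computed from integral images, so the per-pixel cost is independent of the window size. This is a simplified illustration with the commonly used parameter defaults, not the exact algorithm from the thesis:

```python
import numpy as np

def sauvola_binarize(img, w=15, k=0.5, R=128.0):
    """Local Sauvola thresholding via integral images.

    Window mean and standard deviation come from two integral images,
    so the per-pixel cost is O(1) regardless of window size w.
    """
    img = img.astype(np.float64)
    h, wd = img.shape
    # Integral images of the image and its square, zero-padded top/left
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    ii2 = np.pad((img**2).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

    r = w // 2
    ys, xs = np.mgrid[0:h, 0:wd]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, wd), np.clip(xs + r + 1, 0, wd)
    area = (y1 - y0) * (x1 - x0)

    s1 = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    s2 = ii2[y1, x1] - ii2[y0, x1] - ii2[y1, x0] + ii2[y0, x0]
    mean = s1 / area
    std = np.sqrt(np.maximum(s2 / area - mean**2, 0.0))

    # Sauvola threshold: T = m * (1 + k * (s / R - 1))
    T = mean * (1.0 + k * (std / R - 1.0))
    return (img > T).astype(np.uint8)  # 1 = background, 0 = ink

# Toy usage: dark text (low values) on a bright page
page = np.full((64, 64), 200.0)
page[20:30, 10:50] = 40.0
print(sauvola_binarize(page).sum(), "background pixels")
```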
In addition to layout analysis of Roman-script documents, this work also presents the first high-performance layout analysis method for Urdu script. For that purpose, a geometric text-line model for Urdu script is presented. It is shown that the method can accurately extract Urdu text lines from documents of different layouts, such as prose books, poetry books, magazines, and newspapers. Finally, this thesis presents a novel algorithm for probabilistic layout analysis that specifically addresses the needs of large-scale digitization projects. The presented approach models known page layouts as a structural mixture model. A probabilistic matching algorithm is presented that gives multiple interpretations of an input layout with associated probabilities. An algorithm based on A* search is presented for finding the most likely layout of a page, given its structural layout model. For training the layout models, an EM-like algorithm is presented that is capable of learning the geometric variability of layout structures from data, without the need for a page segmentation ground truth. Evaluation of the algorithm on documents from the MARG dataset shows an accuracy above 95% for geometric layout analysis.
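The probabilistic matching itself is not reproduced here, but the general pattern of using A* search with an admissible heuristic to find a most likely page labeling can be sketched on a toy instance (the block costs and ordering grammar below are invented for illustration):

```python
import heapq

# Hypothetical mini-instance: assign labels to a page's blocks (top to
# bottom) under a simple ordering grammar, minimizing total negative
# log-probability.
COSTS = [  # -log P(label | block features), one dict per block
    {"title": 0.2, "author": 2.0, "body": 1.5},
    {"title": 1.8, "author": 0.4, "body": 1.0},
    {"title": 2.5, "author": 1.9, "body": 0.1},
]
ORDER = {"title": 0, "author": 1, "body": 2}  # labels may not go backwards

def heuristic(i):
    """Admissible: best possible cost of the remaining blocks."""
    return sum(min(c.values()) for c in COSTS[i:])

def astar_labeling():
    # state = (f = g + h, g, block index, labels so far)
    heap = [(heuristic(0), 0.0, 0, ())]
    while heap:
        f, g, i, labels = heapq.heappop(heap)
        if i == len(COSTS):
            return labels, g  # first goal popped is optimal
        for lab, c in COSTS[i].items():
            if labels and ORDER[lab] < ORDER[labels[-1]]:
                continue  # violates the layout grammar
            g2 = g + c
            heapq.heappush(heap, (g2 + heuristic(i + 1), g2, i + 1, labels + (lab,)))

print(astar_labeling())  # -> (('title', 'author', 'body'), ~0.7)
```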
In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and V. Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
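A minimal sketch of the Metropolis-Hastings building block on which such an MCMC fitting algorithm rests, shown here for a single rate parameter of a toy exponential model rather than the full cell-cluster model (all distributions and tuning values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: amounts drawn from an exponential distribution
true_rate = 2.0
data = rng.exponential(1.0 / true_rate, size=200)

def log_posterior(rate):
    if rate <= 0:
        return -np.inf
    # Exponential likelihood plus a vague lognormal prior on the rate
    loglik = len(data) * np.log(rate) - rate * data.sum()
    logprior = -0.5 * np.log(rate) ** 2
    return loglik + logprior

# Random-walk Metropolis-Hastings on log(rate)
rate, lp = 1.0, log_posterior(1.0)
samples = []
for _ in range(5000):
    prop = rate * np.exp(0.1 * rng.standard_normal())
    lp_prop = log_posterior(prop)
    # Hastings correction for the multiplicative random walk
    if np.log(rng.uniform()) < lp_prop - lp + np.log(prop) - np.log(rate):
        rate, lp = prop, lp_prop
    samples.append(rate)

print("posterior mean rate:", np.mean(samples[1000:]))  # close to 2.0
```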
In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid-1990s. IPC handles a special type of safety property that specifies operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper the adoption of IPC. First, IPC may return false counterexamples, since its computational bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples; identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified; this set, however, is not reusable across designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, it proposes a decomposition-based reachability analysis to identify reachability information automatically. Second, it develops a generic, reusable set of properties for protocol compliance verification.
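To convey the flavour of an interval property, the following toy sketch unrolls a small handshake FSM over a bounded interval and enumerates all input sequences explicitly; a SAT solver would search the unrolled transition relation symbolically instead. The FSM and the property are invented for illustration, not taken from the thesis:

```python
from itertools import product

# Toy handshake FSM: state = (phase, pending). Interval property:
# starting in "idle" with a pending request, every run must reach
# "done" within k steps.
def step(state, stall):
    phase, pending = state
    if phase == "idle":
        return ("busy", pending) if pending else ("idle", False)
    if phase == "busy":
        return ("busy", pending) if stall else ("done", False)
    return ("idle", False)  # done -> idle

def reaches_done(inputs):
    s = ("idle", True)  # abstract starting state of the operation
    for stall in inputs:
        s = step(s, stall)
        if s[0] == "done":
            return True
    return False

def check_interval_property(k=3):
    """Explicit-state analogue of bounded unrolling: enumerate every
    input sequence of length k and collect counterexamples."""
    return [inp for inp in product([False, True], repeat=k)
            if not reaches_done(inp)]

# With an unconstrained stall input, the operation can be delayed past
# the interval, so runs keeping stall=True are reported as
# counterexamples; a real IPC property would constrain the stall.
print(check_interval_property())
```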
Colorectal cancer is the second most prevalent form of cancer in both men and women in Europe. In 2002, alimentary cancer (oesophagus, stomach, intestines) made up 26% of the annual incident cases of cancer amongst males in Europe, whereby about half of those were cancers of the colon and rectum (Eurostat 2002). Epidemiological evidence accumulating over the last decades indicates that besides a genetic disposition, diet plays a strong epigenetic role in the genesis of cancer. It is generally assumed that diet is causal for up to 80% of colorectal cancer (Bingham 2000). With the prospect of an approximated 50% rise in global cancer incidence over the first two decades of the 21st century, the World Health Organisation (WHO) has emphasized the need for an improvement in nutrition. Indeed, there is increasing public health awareness with respect to nutrition. Today, living healthily is associated with less consumption of animal fats and red (processed) meat, moderate or no consumption of alcohol coupled with increased physical activity, and frequent intake of fruits, vegetables and whole grains (Bingham 1999; Johnson 2004). This ideology partly stems from scientific epidemiological evidence supportive of an inverse correlation between the consumption of fruits and vegetables and the development of cancer. Besides fibre and essential micro-nutrients like ascorbate, folate, and tocopherols, the anti-carcinogenic properties of fruits and vegetables are generally thought to be rooted in the bioactivity of secondary plant components like flavonoids (Johnson 2004; Rice-Evans and Miller 1996; Rice-Evans 1995). Along with the increased public health awareness has also come a burgeoning and lucrative dietary supplement industry, which markets products based on polyphenols and other potentially healthy compounds, sometimes with questionable promises of better health and increased longevity. These claims are based on accumulating in vitro and in vivo evidence indicating that flavonoids and polyphenols in fruits and vegetables can hinder proliferation, induce apoptosis of cancerous cells (Kern et al. 2005; Kumar et al. 2007; Thangapazham et al. 2007), act as antioxidants (Justino et al. 2006; Rice-Evans 1995) and influence cell signalling pathways (Marko et al. 2004; Joseph et al. 2007; Granado-Serrano et al. 2007), all of which are potential mechanisms proposed for their anti-carcinogenic activity. However, not only is the vast variety of supplements worrisome; their easy accessibility (just a click away on the internet) and the amounts that can potentially be consumed are also problematic. Such supplements are usually offered in pharmaceutical form (tablets, capsules, powder, concentrates) containing concentrations well beyond what is normally consumable from the diet. For example, quercetin's recommended intake is about 1 g daily, yet estimates portend a possible increase of up to 1000-fold of the daily intake of quercetin (Hertog et al. 1995). Mindful of the concept of dose coined by the Swiss scientist Paracelsus, "What is it that is not poison? All things are poison and nothing is without poison. The right dose differentiates a poison and a remedy." ("Alle Dinge sind Gift und nichts ist ohn' Gift; allein die Dosis macht, dass ein Ding kein Gift ist"), it is thus conceivable that such high concentrations may not only reverse the acclaimed positive effects of flavonoids and polyphenols but also have negative effects, thereby representing a health risk.
The fact that direct evidence of the beneficial effects of flavonoids and polyphenols remains wanting, if not entirely lacking, coupled with the afore-mentioned marketing trend, demands a thorough examination of the possible adverse effects that may arise from increased consumption of flavonoids and polyphenols. The genesis and progression of cancer is usually accompanied by dysfunctional cell signalling pathways. Typical for colon carcinogenesis is the malfunctioning of the Wnt signalling pathway, a pathway crucial for the growth and development of normal colonocytes. The dysfunction of the Wnt signalling pathway occurs in a manner that culminates in a proliferation stimulus for colonocytes, while differentiation is increasingly minimized; hence, tumourigenesis is promoted. Interrupting the proliferation stimuli by intervening in the actions of components of the Wnt signalling pathway is one potential mechanism for the anti-carcinogenic action of flavonoids and polyphenols (Pahlke et al. 2006; Dashwood et al. 2002; Park et al. 2005). However, as previously hinted, indulgence in flavonoid- and polyphenol-based supplements could instead lead to a proliferation stimulus and provoke or promote carcinogenesis in normal cells or pre-cancerous cells, respectively. The aim of this work was to