### Refine

#### Year of publication

- 2008 (33)

#### Document Type

- Doctoral Thesis (33)

#### Language

- English (33)

#### Keywords

- Finite-Elemente-Methode (3)
- Computergraphik (2)
- Level-Set-Methode (2)
- Raumakustik (2)
- Room acoustics (2)
- Visualisierung (2)
- computer graphics (2)
- domain decomposition (2)
- mesh generation (2)
- virtual acoustics (2)

Nanotechnology is now recognized as one of the most promising areas for technological
development in the 21st century. In materials research, the development of
polymer nanocomposites is rapidly emerging as a multidisciplinary research activity
whose results could widen the applications of polymers to the benefit of many different
industries. Nanocomposites are a new class of composites that are particle-filled
polymers for which at least one dimension of the dispersed particle is in the nanometer
range. In this area, polymer/clay nanocomposites have attracted considerable
interest because they often exhibit remarkable property improvements when
compared to the virgin polymer or to conventional micro- and macrocomposites.
The present work addresses the toughening and reinforcement of thermoplastics via
a novel method which allows us to achieve micro- and nanocomposites. In this work
two matrices are used: amorphous polystyrene (PS) and semi-crystalline polyoxymethylene
(POM). Polyurethane (PU) was selected as the toughening agent for POM
and used in its latex form. It is noteworthy that the mean particle size of rubber latices
closely matches that of conventional toughening agents (impact modifiers).
Boehmite alumina and sodium fluorohectorite (FH) were used as reinforcements.
One of the criteria for selecting these fillers was that they are water-swellable/
dispersible, so that their nanoscale dispersion can also be achieved in an aqueous
polymer latex. A systematic study was performed on how to adapt discontinuous and
continuous manufacturing techniques for the related nanocomposites.
The dispersion of the nanofillers was characterized by transmission and scanning electron
microscopy, atomic force microscopy (TEM, SEM and AFM, respectively) and X-ray diffraction
(XRD), and the results were discussed. The crystallization of POM was studied by means
of differential scanning calorimetry and polarized-light optical microscopy (DSC and
PLM, respectively). The mechanical and thermomechanical properties of the composites
were determined by uniaxial tensile tests, dynamic-mechanical thermal analysis
(DMTA), short-time creep tests, and thermogravimetric analysis (TGA).
PS composites were produced first by a discontinuous manufacturing technique,
whereby FH or alumina was incorporated in the PS matrix by melt blending with and
without precompounding the PS latex with the nanofiller. It was found that direct melt mixing (DM) of the nanofillers with PS resulted in microcomposites, whereas the latex-mediated
pre-compounding (masterbatch technique, MB) yielded nanocomposites. FH was
not intercalated by PS when prepared by DM. On the other hand, FH was well dispersed
(mostly intercalated) in PS via the PS latex-mediated predispersion of FH following
the MB route. Based on dynamic-mechanical and static tensile tests, the nanocomposites
produced by MB outperformed the DM-compounded microcomposites with respect to
properties such as stiffness, strength and ductility. The creep resistance
(summarized in master curves) of the nanocomposites was also improved
compared to that of the microcomposites. Master curves (creep compliance
vs. time), constructed from isothermal creep tests performed at different temperatures,
showed that the nanofiller reinforcement mostly affects the initial creep
compliance.
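As an illustration of how such master curves are assembled, the sketch below shifts isothermal creep-compliance curves along the log-time axis via time-temperature superposition. The WLF-type shift constants and the synthetic test data are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def build_master_curve(tests, t_ref):
    """Assemble a creep-compliance master curve by time-temperature
    superposition: each isothermal test (T, times, compliance) is
    shifted along the log-time axis by a factor a_T relative to the
    reference temperature t_ref. The shift factors here use a WLF-type
    form with 'universal' constants -- an illustrative assumption."""
    c1, c2 = 17.44, 51.6                    # assumed WLF constants
    master = []
    for T, times, J in tests:
        log_aT = -c1 * (T - t_ref) / (c2 + (T - t_ref))
        shifted = np.log10(times) - log_aT  # reduced time log10(t / a_T)
        master.extend(zip(shifted, J))
    master.sort()                           # order by reduced time
    return np.array(master)

# Two synthetic isothermal creep tests (temperature, times, compliance).
t = np.logspace(0, 3, 4)
tests = [(23.0, t, 1e-9 * t**0.1), (50.0, t, 1.2e-9 * t**0.1)]
mc = build_master_curve(tests, t_ref=23.0)
print(mc.shape)  # one merged curve from both tests
```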
Next, ternary composites composed of POM, PU and boehmite alumina were produced
by melt blending with and without latex precompounding. Latex precompounding
served for the predispersion of the alumina particles. The related MB was produced
by mixing the PU latex with the water-dispersible boehmite alumina. The composites
produced by the MB technique outperformed the DM-compounded composites with
respect to most of the thermal and mechanical characteristics.
Toughened and/or reinforced PS- and POM-based composites have been successfully
produced by a continuous extrusion technique, too. This technique resulted in
good dispersion of both nanofillers (boehmite) and impact modifier (PU). Compared
to the microcomposites obtained by conventional DM, the nanofiller dispersion became
finer and more uniform when the water-mediated predispersion was used. The resulting
structure markedly affected the mechanical properties (stiffness and creep resistance)
of the corresponding composites. The impact resistance of POM was greatly
enhanced by the addition of PU rubber when the composites were manufactured by the
continuous extrusion technique. This was traced to the dispersed PU particle size being
in the range required for conventional impact modifiers.

This thesis is devoted to the study of tropical curves with emphasis on their enumerative geometry. Major results include a conceptual proof of the fact that the number of rational tropical plane curves interpolating an appropriate number of general points is independent of the choice of points, the computation of intersection products of Psi-classes on the moduli space of rational tropical curves, a computation of the number of tropical elliptic plane curves of given degree and fixed tropical j-invariant, as well as a tropical analogue of the Riemann-Roch theorem for algebraic curves. These results were obtained in joint work with Hannah Markwig and/or Andreas Gathmann.
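For reference, the tropical Riemann-Roch theorem alluded to above takes the same shape as its classical counterpart (in the graph-theoretic formulation of Baker and Norine, extended to tropical curves by Gathmann and Kerber): for a divisor $D$ on a compact tropical curve $\Gamma$ of genus $g$,

$$ r(D) - r(K_\Gamma - D) = \deg(D) + 1 - g, $$

where $K_\Gamma$ is the canonical divisor, assigning to each point $x$ the weight $\operatorname{val}(x) - 2$.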

Today’s high-resolution digital images and videos require large amounts of storage space and transmission bandwidth. To cope with this, compression methods are necessary that reduce the required space while at the same time minimizing visual artifacts. We propose a compression method based on a piecewise linear color interpolation induced by a triangulation of the image domain. We present methods to significantly speed up the optimization process for finding the triangulation. Furthermore, we extend the method to digital videos. Laser scanners for capturing the surface of three-dimensional objects are widely used in industry nowadays, e.g., for reverse engineering or quality measurement. Hand-held scanning devices have the advantage that the laser device can be moved to any position, permitting scans of complex objects. But operating a hand-held laser scanner is challenging. The operator has to keep track of the scanned regions in his mind, and has no feedback on the sample density unless he starts the surface reconstruction after finishing the scan. We present a system that supports the operator by computing and rendering high-quality surface meshes of the captured data online, i.e., while he is still scanning, and in real time. Furthermore, it color-codes the rendered surface to reflect the surface quality. Thereby, instant feedback is provided, resulting in better scans in less time.
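The core of a triangulation-induced piecewise linear color interpolation can be sketched in a few lines: the color at a point inside a triangle is the barycentric combination of the three vertex colors. This is a generic sketch of the interpolation step only, not the triangulation-optimization method of the thesis.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], float)
    l1, l2 = np.linalg.solve(m, np.asarray(p, float) - np.asarray(a, float))
    return 1.0 - l1 - l2, l1, l2

def interp_color(p, tri, colors):
    """Piecewise-linear color at p: weight the three vertex colors
    by the barycentric coordinates of p."""
    w = barycentric(p, *tri)
    return sum(wi * np.asarray(ci, float) for wi, ci in zip(w, colors))

tri = [(0, 0), (1, 0), (0, 1)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(interp_color((1 / 3, 1 / 3), tri, colors))  # centroid -> average color
```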

Sublimation (evaporation) is widely used in different industrial applications. Important applications are the sublimation (evaporation) of small solid particles and liquid droplets, e.g., in spray drying and fuel droplet evaporation. For a few decades, sublimation technology has been used widely together with aerosol technology. This combination aims at obtaining various products with desired compositions and morphologies. It can be used in the fields of nanoparticle generation, particle coating through physical vapor deposition (PVD) and particle structuring. This doctoral thesis deals with the experimental and theoretical investigation of the sublimation (evaporation) kinetics of fine aerosol particles (droplets). The experimental study was conducted in a test plant including on-line control of the most important parameters, such as heating temperature, gas flow and pressure. On-line and in-line particle measurements (optical sensor, APS) were employed. Relevant parameters in sublimation (evaporation), such as heating temperature, particle concentration and aerosol residence time, were investigated. Polydispersed particles (droplets) were introduced into the test plant as precursor aerosols. Two kinds of test materials were used: inorganic NH4Cl particles and organic DEHS droplets. NH4Cl particles with smooth surfaces and with porous structure were both used in the experiments, and the influence of the particle morphology on the sublimation process was studied. Based on the experiments, different theoretical models were developed. The simulation results under different parameters were compared with the experimental results. The change in particle concentration was discussed in particular, with a focus on the relationship between the total particle concentration and the change of single particles with diverse initial diameters.
The sublimation kinetics of particles with different morphologies and different specific surface areas was studied. The effect of the increased surface area on the sublimation process was included in the simulation and the results were compared with experimental results. The sublimation (evaporation) kinetics was described based on material properties such as molecular weight, molecular size and vapor pressure, and the optimum sublimation (evaporation) conditions with respect to the material properties were proposed. A phase transition effect during sublimation (evaporation) was found, which describes the growth of large particles at the expense of small ones. A similar effect is observed in crystal suspensions (Ostwald ripening), but with a different physical background. In order to meet the need for in-line particle measurement, a hot gas sensor (O.P.C.) was developed in this study for measuring the particle size and the size distribution of an aerosol. With the newly developed measuring cell, the operating temperature of the aerosol could be increased up to 500°C.
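A minimal quantitative baseline for the single-particle kinetics discussed here is the classical d²-law, under which the squared diameter of an evaporating or subliming sphere shrinks linearly in time. This is a textbook model, not the extended morphology-aware model developed in the thesis; the evaporation constant below is an assumed illustrative value.

```python
import math

def d2_law_diameter(d0, K, t):
    """Classical d^2-law for quasi-steady evaporation/sublimation of a
    single spherical particle: d(t)^2 = d0^2 - K*t, with evaporation
    constant K (m^2/s). Returns 0 once the particle has vanished."""
    d2 = d0**2 - K * t
    return math.sqrt(d2) if d2 > 0 else 0.0

def lifetime(d0, K):
    """Time for complete sublimation under the d^2-law."""
    return d0**2 / K

d0 = 2e-6   # 2 um initial diameter
K = 1e-12   # assumed evaporation constant, m^2/s
print(lifetime(d0, K))  # ~4 s for these illustrative values
```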

Dry Sliding and Rolling Tribotests of Carbon Black Filled EPDM Elastomers and Their FE Simulations
(2008)

Unlubricated sliding systems, being economic and environmentally benign, are already realized in bearings, where dry metal-plastic sliding pairs successfully replace lubricated metal-metal ones. Nowadays, a considerable part of tribological research concentrates on realizing unlubricated elastomer-metal sliding systems and on extending the application field of lubrication-free slider elements. In this thesis, the characteristics of dry sliding and friction are investigated for elastomer-metal sliding pairs. Ethylene-propylene-diene rubbers (EPDM) with and without carbon black (CB) filler were used. The filler content of the EPDMs was varied: EPDMs with 0, 30, 45 and 60 parts per hundred rubber (phr) CB were investigated. Quasistatic tension and compression tests and dynamic mechanical thermal analysis (DMTA) were carried out to analyze the static and viscoelastic behavior of the EPDMs. The tribological properties of the EPDMs were investigated using dry roller (metal)-on-plate (rubber) type tests (ROP). During the ROP tests the normal load was varied. The coefficient of friction (COF) and the temperature were registered online during the tests, and the loss volumes were determined after certain test durations. The worn surfaces of the rubbers and of the steel counterparts were analyzed using scanning electron microscopy (SEM) to determine the wear mechanisms. Because chemical changes may take place during dry sliding due to the elevated contact temperature, the chemical composition of the surfaces was also analyzed before and after the tribotests. For the latter investigations X-ray photoelectron spectroscopy (XPS), sessile drop tests and Raman spectroscopy were used. In addition, the dry sliding tribotests were simulated using finite element (FE) codes for a better understanding of the related wear mechanisms.
Finally, as the internal damping of elastomers plays a great role in the sliding wear process, their viscoelasticity has been taken into account. The effect of viscoelasticity was shown on the example of rolling friction. To study the rolling COF of the EPDM with 30 phr CB (EPDM 30), an FE model was created which considered the viscoelastic behavior of the rubber during rolling. The results showed that the incorporated CB enhanced the mechanical and tribological properties (both COF and wear rate were reduced) of the EPDMs. Furthermore, the CB content of the EPDM fundamentally influences the observed wear mechanisms. The wear characteristics also changed with the applied normal load. In the case of EPDM 30, a rubber tribofilm was found on the steel counterpart when tests were performed at high normal loads. Analysis of the chemical composition of the surfaces before and after the wear tests did not reveal notable changes. It was demonstrated that the FE method is a powerful tool to model both the dry sliding and the rolling performance of elastomers.

This thesis shows an approach to combine the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods - such as the frequently used Craig-Bampton approach - the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets. Set 1 consists of deformation snapshots, and set 2 additionally contains velocities and accelerations. An error analysis shows that no convergence guarantee can be given for deformation snapshots alone; when derivatives are included, it yields an error bound that diminishes for small time steps. The numerical results show a better behaviour for the derivative snapshot method, as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems - especially when using commercial software - it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the preceding full-model computation step that is necessary to set up the POD basis (training step). The nonlinear term of inner forces and the stiffness matrix are output and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string in Matlab and an air spring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
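The POD reduction step itself can be sketched compactly: collect snapshots from a training run, take the leading left singular vectors as the projection basis, and Galerkin-project the full-order operator. This is a generic linear-algebra sketch on synthetic data, not the thesis' lookup-table implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: columns are full-order states sampled during a
# training simulation (here a synthetic low-rank example).
n, m, r = 100, 20, 3                     # full dim, snapshots, reduced dim
modes = rng.standard_normal((n, r))      # hidden low-rank structure
S = modes @ rng.standard_normal((r, m))  # snapshots live in an r-dim subspace

# POD basis: leading left singular vectors of the snapshot matrix.
U, svals, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                           # projection basis, n x r

# Galerkin projection of a full-order operator A onto the POD subspace.
A = rng.standard_normal((n, n))
A_red = Phi.T @ A @ Phi                  # r x r reduced operator

# A state lying in the snapshot subspace is reproduced exactly.
x = S[:, 0]
x_rec = Phi @ (Phi.T @ x)
print(np.linalg.norm(x - x_rec))         # ~0: basis captures the subspace
```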

In recent years, formal property checking has been adopted successfully in industry and is used increasingly to solve industrial verification tasks. This success results from property checking formulations that are well adapted to specific methodologies. In particular, assertion checking and property checking methodologies based on Bounded Model Checking or related techniques have matured tremendously during the last decade and are well supported by industrial methodologies. This is particularly true for formal property checking of computational System-on-Chip (SoC) modules. This work is based on a SAT-based formulation of property checking called Interval Property Checking (IPC). IPC originated at Siemens and has been in industrial use since the mid-1990s. IPC handles a special type of safety properties, which specify operations in intervals between abstract starting and ending states. This paves the way for extremely efficient proving procedures. However, there are still two problems in the IPC-based verification methodology flow that reduce the productivity of the methodology and sometimes hamper the adoption of IPC. First, IPC may return false counterexamples, since its computational bounded circuit model only captures local reachability information, i.e., long-term dependencies may be missed. If this happens, the properties need to be strengthened with reachability invariants in order to rule out the spurious counterexamples. Identifying strong enough invariants is a laborious manual task. Second, a set of properties needs to be formulated manually for each individual design to be verified. This set, however, is not reusable for different designs. This work exploits special features of communication modules in SoCs to solve these problems and to improve the productivity of the IPC methodology flow. First, the work proposes a decomposition-based reachability analysis to identify reachability information automatically.
Second, this work develops a generic, reusable set of properties for protocol compliance verification.
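The unrolling idea behind Bounded Model Checking can be illustrated with an explicit-state toy version: explore all execution paths up to a bound and search for a safety violation. Real IPC/BMC tools encode this unrolling as a SAT instance instead of enumerating states; the transition system below is a made-up example.

```python
def bounded_check(init, trans, safe, states, k):
    """Explicit-state bounded model checking: explore all paths of
    length <= k from the initial states and return a counterexample
    path violating the safety predicate, or None if the property
    holds within the bound."""
    frontier = [(s,) for s in states if init(s)]
    for _ in range(k + 1):
        for path in frontier:
            if not safe(path[-1]):
                return path                      # counterexample path
        frontier = [p + (t,) for p in frontier
                    for t in states if trans(p[-1], t)]
    return None

# Toy 3-state counter; the safety property forbids reaching state 2.
states = [0, 1, 2]
cex = bounded_check(init=lambda s: s == 0,
                    trans=lambda s, t: t == s + 1,
                    safe=lambda s: s != 2,
                    states=states, k=1)
print(cex)   # None: state 2 is unreachable within bound 1

cex2 = bounded_check(lambda s: s == 0, lambda s, t: t == s + 1,
                     lambda s: s != 2, states, k=2)
print(cex2)  # (0, 1, 2): counterexample found at depth 2
```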

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization: In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update. A level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
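The level-set representation used here can be illustrated with a minimal sketch: the structure boundary is the zero level of a scalar function, and a topology change such as inserting a hole is a simple pointwise operation on level-set functions. The max-based set difference below is a simplified stand-in for the summation-based update described above.

```python
import numpy as np

def circle_sdf(cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return lambda x, y: np.hypot(x - cx, y - cy) - r

def remove_material(phi_structure, phi_hole):
    """Topology update in a level-set framework: cutting a hole out of
    the structure is the set difference, expressed on level-set
    functions as max(phi, -phi_hole)."""
    return lambda x, y: np.maximum(phi_structure(x, y), -phi_hole(x, y))

# Structure: unit disk; hole: small disk at the origin -> an annulus.
phi = remove_material(circle_sdf(0, 0, 1.0), circle_sdf(0, 0, 0.3))
print(phi(0.0, 0.0) > 0)   # origin removed (inside the hole)
print(phi(0.6, 0.0) < 0)   # (0.6, 0) is material
print(phi(1.5, 0.0) > 0)   # outside the structure
```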

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived. A proper algorithm is formulated and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium. This plays an essential role in the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case, the extreme cases, i.e. the fluid-fluid interface and the fluid-impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions. The matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced. Corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented.
For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
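The flavor of such an iterative interface procedure can be shown on a deliberately tiny model problem: a Dirichlet-Neumann iteration coupling two 1D Laplace subproblems at an interface. The Stokes and Biot subproblems of the thesis are replaced here by closed-form linear solves, so only the interface relaxation logic remains.

```python
def dirichlet_neumann(theta=0.3, tol=1e-10, max_iter=100):
    """Dirichlet-Neumann iteration for two 1D Laplace subproblems
    coupled at x = 0.5 (a generic model of an interface iteration).
    Left:  -u'' = 0 on [0, 0.5], u(0) = 0, u(0.5) = g   (Dirichlet).
    Right: -u'' = 0 on [0.5, 1], u'(0.5) = flux, u(1) = 1 (Neumann).
    The interface value g is under-relaxed until convergence."""
    g = 0.0                        # initial interface guess
    for it in range(max_iter):
        flux = (g - 0.0) / 0.5     # left solution is linear: u' = 2g
        g_right = 1.0 - 0.5 * flux # right solution evaluated at x = 0.5
        g_new = (1 - theta) * g + theta * g_right
        if abs(g_new - g) < tol:
            return g_new, it
        g = g_new
    return g, max_iter

g, iters = dirichlet_neumann()
print(round(g, 6))   # 0.5: the exact interface value of u(x) = x
```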

Rapid growth in sensors and sensor technology introduces a variety of products to the market. The increasing number of available sensor concepts and implementations demands more versatile sensor electronics and signal conditioning. Nowadays, signal conditioning for the available spectrum of sensors is becoming more and more challenging. Moreover, developing a sensor signal conditioning ASIC is a function of cost, area, and robustness to maintain signal integrity. Field-programmable analog approaches and the recent evolvable hardware approaches offer a partial solution for advanced compensation as well as for rapid prototyping. The recent research field of evolutionary concepts focuses predominantly on the digital domain and is still in an early stage in the analog domain. Thus, the main research goal is to combine the ever-increasing industrial demand for sensor signal conditioning with evolutionary concepts and dynamically reconfigurable matched analog arrays implemented in mainstream Complementary Metal Oxide Semiconductor (CMOS) technologies, to yield an intelligent and smart sensor system with acceptable fault tolerance and the so-called self-x features, such as self-monitoring, self-repairing and self-trimming. To this end, the work suggests and progresses towards a novel, time-continuous and dynamically reconfigurable signal conditioning hardware platform suitable to support a variety of sensors. The state of the art has been investigated with regard to existing programmable/reconfigurable analog devices and common industrial application scenarios and circuits, in particular including resource and sizing analysis for proper motivation of design decisions. The pursued intermediate-granularity approach, called Field Programmable Medium-granular mixed signal Array (FPMA), offers flexibility, trimming and rapid prototyping capabilities.
The proposed approach targets the investigation of the industrial applicability of evolvable hardware concepts and merges them with reconfigurable or programmable analog concepts as well as industrial electronics standards and needs, for next-generation robust and flexible sensor systems. The devised programmable sensor signal conditioning test chips, namely FPMA1/FPMA2, designed in the 0.35 µm (C35B4) Austriamicrosystems technology, can be used as a single-instance, off-the-shelf chip at the PCB level for conditioning, or in the loop with dedicated software to inherit the aspired self-x features. The use of such a self-x sensor system carries the promise of improved flexibility, better accuracy and reduced vulnerability to manufacturing deviations and drift. An embedded system, namely the PHYTEC miniMODUL-515C, was used to program and characterize the mixed-signal test chips in various feedback arrangements to answer some of the questions raised by the research goals. A wide range of established analog circuits, ranging from single-output to fully differential amplifiers, was investigated at different hierarchical levels to realize circuits like instrumentation amplifiers and filters. More extensive low-power design issues, e.g. sub-threshold design, were investigated, and a novel soft sleep mode idea was proposed. The bandwidth limitations observed in state-of-the-art fine-granular approaches were alleviated by the proposed intermediate-granular approach. The designed sensor signal conditioning instrumentation amplifier was then compared to commercially available products such as the LT 1167, INA 125 and AD 8250. In an adaptive prototype, evolutionary approaches, in particular based on particle swarm optimization with multiple objectives, were deployed to all the test samples of FPMA1/FPMA2 (15 each) to exhibit self-x properties and to recover from manufacturing variations and drift. The variations observed in the performance of the test samples were compensated for through reconfiguration towards the desired specification.
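The optimization loop behind such self-x trimming can be sketched with a minimal single-objective particle swarm optimizer (the thesis uses a multi-objective variant); the two-parameter "gain error" objective below is a made-up stand-in for a real circuit specification.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal single-objective particle swarm optimizer: particles
    track their personal best and the global best, with inertia and
    two attraction terms. Positions are clamped to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy "trimming": drive a 2-parameter gain error toward a target spec.
target = [2.0, -1.0]
best = pso(lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target)),
           dim=2, bounds=(-5.0, 5.0))
print(best)   # near [2.0, -1.0]
```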

Sound surrounds us at all times and in every place in our daily life, be it pleasant music in a concert hall or disturbing noise emanating from a busy street in front of our home. The basic properties are the same for both kinds of sound, namely sound waves propagating from a source, but we perceive them in different ways depending on our current mood or on whether the sound is wanted or not. In this thesis both pleasant sound and disturbing noise are examined by means of simulating the sound and visualizing the results thereof. However, although the basic properties of music and traffic noise are the same, one is interested in different features. For example, in a concert hall the reverberation time is an important quality measure, but if noise is considered, only the resulting sound level, for example on one's balcony, is of interest. Such differences are reflected in different methods of simulation and required visualizations; therefore this thesis is divided into two parts. The first part, about room acoustics, deals with the simulation and novel visualizations of indoor sound and acoustic quality measures, such as definition (original "Deutlichkeit") and clarity index (original "Klarheitsmaß"). For the simulation two different methods, a geometric (phonon tracing) and a wave-based (FEM) approach, are applied and compared. The visualization techniques give insight into the sound behaviour and the acoustic quality of a room from a global as well as a listener-based viewpoint. Furthermore, an acoustic rendering equation is presented, which is used to render interference effects for different frequencies. Last but not least, a novel visualization approach for low-frequency sound is presented, which enables the topological analysis of pressure fields based on room eigenfrequencies. The second part, about environmental noise, is concerned with the simulation and visualization of outdoor sound with a focus on traffic noise.
The simulation procedure prescribed by national regulations is discussed in detail, and an approach for the computation of noise volumes, as well as an extension to the simulation allowing interactive noise calculation, is presented. Novel visualization and interaction techniques for the calculated noise data, incorporated in an interactive three-dimensional environment, enable the easy comprehension of noise problems. Furthermore, additional information can be integrated into the framework to enhance the visualization of noise and the usability of the framework for different use cases.
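As an example of the kind of acoustic quality measure computed in the first part, Sabine's classical formula estimates the reverberation time from the room volume and the absorption of its surfaces. The room dimensions and absorption coefficient below are arbitrary illustrative values.

```python
def sabine_rt60(volume, surfaces):
    """Sabine's reverberation-time estimate RT60 = 0.161 * V / A,
    where A is the total equivalent absorption area: the sum over all
    surfaces of area times absorption coefficient (V in m^3, A in m^2,
    RT60 in seconds)."""
    A = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume / A

# Shoebox room 10 x 8 x 5 m with uniformly absorbing walls (alpha = 0.1).
V = 10 * 8 * 5
walls = [(2 * (10 * 8 + 10 * 5 + 8 * 5), 0.1)]  # total area, one alpha
print(round(sabine_rt60(V, walls), 2))          # ~1.9 s
```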

Grey-box modelling deals with models which are able to integrate two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After the presentation of some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, in view of their applicability to grey-box modelling. These theories point directly towards a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; usage of the sparseness of the matrix of wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.
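The role of wavelets in the identification algorithm rests on sparsity: for smooth or piecewise-constant signals, most detail coefficients vanish. A one-level Haar transform, the simplest wavelet, shows this directly (a generic illustration, not the thesis' multiresolution scheme).

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform:
    pairwise scaled sums (approximation) and differences (detail)."""
    x = np.asarray(x, float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return s, d

# A piecewise-constant signal yields sparse detail coefficients --
# the sparsity that the identification algorithm exploits.
signal = [3, 3, 3, 3, 7, 7, 7, 7]
s, d = haar_step(signal)
print(d)   # all zeros: no detail within the constant pieces
```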

Colorectal cancer is the second most prevalent cancer form in both men and women in Europe. In 2002, alimentary cancer (oesophagus, stomach, intestines) made up 26% of the annual incident cases of cancer amongst males in Europe, whereby about half of those were cancers of the colon and rectum (Eurostat 2002). Epidemiological evidence accumulated over the last decades indicates that, besides a genetic disposition, diet plays a strong epigenetic role in the genesis of cancer. It is generally assumed that diet is causal for up to 80% of colorectal cancer (Bingham 2000). With the prospect of an estimated 50% rise in global cancer incidence over the first two decades of the 21st century, the World Health Organisation (WHO) has emphasized the need for an improvement in nutrition. Indeed, there is increasing public health awareness with respect to nutrition. Today, living healthily is associated with less consumption of animal fats and red (processed) meat, moderate or no consumption of alcohol coupled with increased physical activity, and frequent intake of fruits, vegetables and whole grains (Bingham 1999; Johnson 2004). This ideology partly stems from scientific epidemiological evidence supporting an inverse correlation between the consumption of fruits and vegetables and the development of cancer. Besides fibre and essential micro-nutrients like ascorbate, folate, and tocopherols, the anti-carcinogenic properties of fruits and vegetables are generally thought to be rooted in the bioactivity of secondary plant components like flavonoids (Johnson 2004; Rice-Evans and Miller 1996; Rice-Evans 1995). Along with the increased public health awareness has also come a burgeoning and lucrative dietary supplement industry, which markets products based on polyphenols and other potentially healthy compounds, sometimes with questionable promises of better health and increased longevity.
These claims are based on accumulating in vitro and in vivo evidence indicating that flavonoids and polyphenols in fruits and vegetables can hinder proliferation, induce apoptosis of cancerous cells (Kern et al. 2005; Kumar et al. 2007; Thangapazham et al. 2007), act as antioxidants (Justino et al. 2006; Rice-Evans 1995) and influence cell signalling pathways (Marko et al. 2004; Joseph et al. 2007; Granado-Serrano et al. 2007), all of which are potential mechanisms proposed for their anti-carcinogenic activity. However, not only is the vast variety of supplements worrisome, but their easy accessibility (just a click away on the internet) and the amounts that can potentially be consumed are also problematic. Such supplements are usually offered in pharmaceutical form (tablets, capsules, powder, concentrates) containing concentrations well beyond what is normally consumable from the diet. For example, quercetin's recommended intake is about 1 g daily; however, estimates portend a possible increase of up to 1000-fold of the daily intake of quercetin (Hertog et al. 1995). Mindful of the concept of dose coined in the words of the Swiss scientist Paracelsus, "What is it that is not poison? All things are poison and nothing is without poison. The right dose differentiates a poison and a remedy." ("Alle Dinge sind Gift und nichts ist ohn' Gift; allein die Dosis macht, dass ein Ding kein Gift ist"), it is thus conceivable that such high concentrations may not only reverse the acclaimed positive effects of flavonoids and polyphenols but also have negative effects, thereby representing a health risk. The fact that direct evidence of the beneficial effects of flavonoids and polyphenols remains wanting, if not entirely lacking, coupled with the afore-mentioned marketing trend, demands a thorough examination of the possible adverse effects that may arise from increased consumption of flavonoids and polyphenols.
The genesis and progression of cancer is usually accompanied by dysfunctional signalling of certain cell signalling pathways. Typical for colon carcinogenesis is the malfunctioning of the Wnt-signalling pathway, a pathway which is crucial for the growth and development of normal colonocytes. The dysfunction of the Wnt-signalling pathway occurs in a manner that culminates in a proliferation stimulus of colonocytes, while differentiation is increasingly minimized. Hence, tumourigenesis is promoted. Interrupting the proliferation stimuli by intervening in the actions of components of the Wnt-signalling pathway is one potential mechanism for the anti-carcinogenic action of flavonoids and polyphenols (Pahlke et al. 2006; Dashwood et al. 2002; Park et al. 2005). However, as previously hinted, indulgence in the consumption of flavonoid- and polyphenol-based supplements could instead lead to a proliferation stimulus and provoke or promote carcinogenesis in normal cells or pre-cancerous cells, respectively. The aim of this work was to

Fragmentation of habitats, especially of tropical rainforests, ranks globally among the most pervasive man-made disturbances of ecosystems. There is growing evidence for long-term effects of forest fragmentation and the accompanying creation of artificial edges on ecosystem functioning and forest structure, which are altered in a way that generally transforms these forests into early successional systems. Edge-induced disruption of species interactions can be among the driving mechanisms governing this transformation. These species interactions can be direct (trophic interactions, competition, etc.) or indirect (modification of the resource availability for other organisms). Such indirect interactions are called ecosystem engineering. Leaf-cutting ants of the genus Atta are dominant herbivores and keystone species in the Neotropics and have been called ecosystem engineers. In contrast to other prominent ecosystem engineers that have been substantially decimated by human activities, some species of leaf-cutting ants profit from anthropogenic landscape alterations. Thus, leaf-cutting ants are a highly suitable model to investigate the potentially cascading effects caused by herbivores and ecosystem engineers in modern anthropogenic landscapes following fragmentation. The present thesis aims to describe this interplay between consequences of forest fragmentation for leaf-cutting ants and resulting impacts of leaf-cutting ants in fragmented forests. The cumulative thesis starts out with a review of 55 published articles demonstrating that herbivores, especially generalists, profoundly benefit from forest edges, often due to (1) favourable microenvironmental conditions, (2) an edge-induced increase in food quantity/quality, and (3; less well documented) disrupted top-down regulation of herbivores (Wirth, Meyer et al. 2008; Progress in Botany 69:423-448).
Field investigations in the heavily fragmented Atlantic Forest of Northeast Brazil (Coimbra forest) were subsequently carried out to evaluate patterns and hypotheses emerging from this review, using leaf-cutting ants of the genus Atta as a model system. Colony densities of both Atta species occurring in the area changed similarly with distance to the edge, but the magnitude of the effect was species-specific. Colony density of A. cephalotes was low in the forest interior (0.33 ± 1.11 /ha, pooling all zones >50 m into the forest) and sharply increased by a factor of about 8.5 towards the first 50 m (2.79 ± 3.3 /ha), while A. sexdens was more uniformly distributed (Wirth, Meyer et al. 2007; Journal of Tropical Ecology 23:501-505). The accumulation of Atta colonies persisted at physically stable forest edges over a four-year interval with no significant difference in densities between years, despite high rates of colony turnover (a little less than 50% in 4 years). Stable hyper-abundant populations of leaf-cutting ants accord with the constantly high availability of pioneer plants (their preferred food source), as previously demonstrated at old stabilised forest edges in the region (Meyer et al. submitted; Biotropica). In addition, plants at the forest edge might be more attractive to leaf-cutting ants because of their physiological responses to the edge environment. In bioassays with laboratory colonies I demonstrated that drought-stressed plants are more attractive to leaf-cutting ants because of an increase in leaf nutrient content induced by osmoregulation (Meyer et al. 2006; Functional Ecology 20:973-981). Since plants along forest edges are more prone to experience drought stress, this mechanism might contribute to the high resource availability for leaf-cutting ants at forest edges.
In light of the hyper-abundance of leaf-cutting ants within the forest edge zone (first 50 m), their potentially far-reaching ecological importance in anthropogenic landscapes is apparent. Based on previous colony-level estimates, we extrapolated that herbivory by A. cephalotes removes 36% of the available foliage at forest edges (compared to 6% in the forest interior). In addition, A. cephalotes acted as an ecosystem engineer, constructing large nests (on average 55 m²; 95% CI: 22-136) that drastically altered forest structure. The ants opened gaps in the canopy and forest understory at nest sites, which allowed three times as much light to reach the nest surface as compared to the forest understory. This was accompanied by an increase in soil temperatures and a reduction in water availability. Modifications of microclimate and forest structure greatly surpassed previously published estimates. Since higher light levels were detectable up to about 4 m away from the nest edge, an area roughly four times as big as the actual nest (about 200 and 50 m², respectively) was impacted by every colony, amounting to roughly 6% of the total area at the forest edge (Meyer et al. in preparation; Ecology). The hypothesized impacts of high cutting pressure and microclimatic alterations at nest sites on forest regeneration were directly tested using transplanted seedlings of six species of forest trees. Nests of A. cephalotes differentially impacted survival and growth of seedlings. Survival differed highly significantly between habitats and species and was generally high in the forest, yet low on nests, where it correlated strongly with seed size of the species.
These results indicate that the disturbance regime created by leaf-cutting ants differs from other disturbances, since nest conditions select for plant species that profit from additional light, yet are large-seeded and have resprouting abilities, which are best suited to tolerate repeated defoliation on a nest (Meyer et al. in preparation; Journal of Tropical Ecology). On an ecosystem scale, leaf-cutting ants might amplify edge-driven microclimatic alterations through very high rates of herbivory and the maintenance of canopy gaps above frequent nests. By allowing for increased light penetration, Atta may ultimately contribute to dominating, self-replacing pioneer communities at forest edges, possibly creating a positive feedback loop. Based on the persisting hyper-abundance of leaf-cutting ants at old edges of Coimbra forest and the multifarious impacts documented, we conclude that the ecological importance of leaf-cutting ants in pristine forests, where they are commonly believed to be keystone species despite very low colony densities, is greatly surpassed in anthropogenic landscapes. In fragmented forests, Atta has been identified as an essential component of a disturbance regime that causes a post-fragmentation retrogressive succession. Apparently, these forests have reached a new self-replacing secondary state. I suggest additional human interference in the form of thoughtful management in order to break this cycle of self-enhancing disturbance and to enable forest regeneration along the edges of threatened forest remnants. Thereby the situation of the forest as a whole can be ameliorated and the chances for a long-term retention of biodiversity in these landscapes increased.

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
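The cluster-of-rectangular-cells representation can be sketched as a simulation, which is also how artificial test data for such a fitting algorithm are typically generated. The following is a minimal illustrative sketch, not the thesis code: all parameter names and values are assumptions, and the multivariate lognormal dependence structure is simplified to independent lognormal draws for cell delay, duration and intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rainfall(T=240.0, dt=1.0,
                      storm_rate=0.02,     # storm origins per hour (Poisson)
                      cells_per_storm=5,   # mean number of cells per cluster
                      mu_delay=0.5, sigma_delay=0.7,  # lognormal cell start delay
                      mu_dur=0.0, sigma_dur=0.5,      # lognormal cell duration
                      mu_int=0.0, sigma_int=0.8):     # lognormal cell intensity
    """Simulate a Poisson-cluster rectangular-pulse intensity process on [0, T]
    and aggregate it into rainfall amounts over intervals of length dt."""
    edges = np.arange(0.0, T + dt, dt)
    amounts = np.zeros(len(edges) - 1)
    # storm origins: homogeneous Poisson process on [0, T]
    n_storms = rng.poisson(storm_rate * T)
    origins = rng.uniform(0.0, T, size=n_storms)
    for t0 in origins:
        n_cells = 1 + rng.poisson(cells_per_storm - 1)  # at least one cell
        starts = t0 + rng.lognormal(mu_delay, sigma_delay, n_cells)
        durs = rng.lognormal(mu_dur, sigma_dur, n_cells)
        intens = rng.lognormal(mu_int, sigma_int, n_cells)
        for s, d, i in zip(starts, durs, intens):
            # overlap of the cell's active interval [s, s+d] with each bin
            lo = np.clip(edges[:-1], s, s + d)
            hi = np.clip(edges[1:], s, s + d)
            amounts += i * (hi - lo)
    return amounts

obs = simulate_rainfall()  # 240 hourly rainfall amounts
```

An MCMC fitter would treat the hidden cell configuration (starts, durations, intensities) as latent variables to be sampled alongside the model parameters, exactly the difficulty the abstract mentions.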

In this work we study the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP) and the point center location or point location problem (PLP) on the Rectilinear and Chebyshev planes as well as in networks. The relations between the problems served as a basis for finding elegant solution algorithms for both new and well-known problems. MWAP was formulated and investigated in Rectilinear space. In contrast to the Euclidean metric, MWAP and PLP have at least one common optimal point. Therefore, MWAP on the Rectilinear plane was solved in linear time with the help of PLP; hence, the solution sequence was PLP-->MWAP. It was shown that MWAP and CLP are equivalent; thus, CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of circle, sphere and annulus in networks were introduced. It should be noted that the notion of a circle in a network is different from the notion of a cycle. An O(mn) time algorithm for the solution of MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn+n^2 log n) in the unweighted case of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to MWAP. MWAP in unweighted undirected networks was extended to the MWAP on subsets and to the restricted MWAP. The resulting problems were analysed and solved. Also, the p-minimum width annulus problem was formulated and explored. This problem is NP-hard. However, the p-MWAP has been solved in polynomial O(m^2 n^3 p) time under the natural assumption that each minimum width annulus covers all vertices of a network whose distances to the central point of the annulus are less than or equal to the radius of its outer circle.
In contrast to the planar case, MWAP in undirected unweighted networks turned out to be the root problem among the considered problems. During the investigation of the properties of circles in networks, it was shown that the difference between planar and network circles is significant. This leads to the non-equivalence of CLP and MWAP in the general case. However, MWAP was effectively used in solution procedures for CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2 n^2). It is important to mention that CLP in networks has been formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2 log n) algorithm for the well-known PLP. The complexity of this algorithm is not worse than the complexity of the currently best algorithms, but the concept of the solution procedure is new: we use MWAP in order to solve PLP, building the solution sequence MWAP-->PLP, the opposite of the planar case. This method has the following advantages: first, the lower bounds obtained in the solution procedure are proved to be in any case better than the strongest, Halpern's, lower bound; second, the developed algorithm is so simple that it can easily be applied to complex networks manually; third, the empirical complexity of the algorithm is O(mn). MWAP was also extended to and explored in directed unweighted and weighted networks. The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in the network, because the problems can be solved in the order PLP-->MWAP. In the weighted case the computational time is of order O(mn^2).
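The planar relation between PLP and MWAP under the Chebyshev metric can be illustrated with a small sketch (mine, not the thesis implementation): the unweighted 1-center under the L-infinity metric has a closed form, the centre of the bounding box of the points, and the width of the covering annulus at any candidate centre is simply the spread of the point distances.

```python
import numpy as np

def chebyshev_center(points):
    """Unweighted 1-center (PLP) under the Chebyshev (L-infinity) metric:
    the centre of the bounding box minimizes the maximum distance."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    c = (lo + hi) / 2.0
    r = np.abs(pts - c).max(axis=1).max()  # covering radius
    return c, r

def annulus_width(points, center):
    """Width of the Chebyshev annulus centred at `center` covering all points:
    largest minus smallest Chebyshev distance to the centre."""
    d = np.abs(np.asarray(points, dtype=float) - center).max(axis=1)
    return d.max() - d.min()

# Four corners of a square: the PLP centre also yields a zero-width annulus.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
c, r = chebyshev_center(pts)
w = annulus_width(pts, c)
```

This mirrors the PLP-->MWAP solution sequence claimed for the Rectilinear and Chebyshev planes: evaluating the annulus width at the PLP optimum is a linear-time operation.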

In many applications of statistics, e.g. medical, financial or industrial ones, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modelled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results in Franke and Neumann (2000). We use a neural network based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results in Gombay and Horvath (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of a one-dimensional parameter. Through simulation, we show that the sample size, the change point location and the size of the change influence change point detection. In this work, the maximum likelihood estimation method is used to estimate a change point once it has been detected. Through simulation, we show that change point estimation is likewise influenced by the sample size, the change point location and the size of the change. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio method and the percentile bootstrap method. Through simulation, the percentile bootstrap method is shown to be superior to the profile log-likelihood ratio method.
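For a Bernoulli sequence whose success probability is constant on each side of the change, the likelihood ratio scan underlying such a test can be sketched as follows. This is an illustrative simplification of the neural network setting to a single Bernoulli parameter; the function names and the minimum segment length are my assumptions.

```python
import numpy as np

def bernoulli_loglik(x):
    """Maximized Bernoulli log-likelihood of a 0-1 sample."""
    n, s = len(x), x.sum()
    p = s / n
    if p in (0.0, 1.0):
        return 0.0  # degenerate sample: likelihood is 1
    return s * np.log(p) + (n - s) * np.log(1 - p)

def change_point_lr(x, min_seg=5):
    """Likelihood-ratio scan for a single change point: the statistic at k
    compares a split into x[:k], x[k:] against the no-change model.
    Returns the argmax (change point estimate) and the max statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    base = bernoulli_loglik(x)
    best_k, best_stat = None, -np.inf
    for k in range(min_seg, n - min_seg):
        stat = 2 * (bernoulli_loglik(x[:k]) + bernoulli_loglik(x[k:]) - base)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# A sequence with a clear change at index 100 (p jumps from 0.1 to 0.9)
rng = np.random.default_rng(1)
x = np.concatenate([rng.random(100) < 0.1, rng.random(100) < 0.9]).astype(float)
k_hat, stat = change_point_lr(x)
```

In the thesis setting the two segment likelihoods come from neural network regression models rather than constant probabilities, but the scan-and-maximize structure is the same.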

Within this thesis a series of molecular species has been studied, with focus on hydrogen-bonded species and on (solvated) transition metal complexes. Experimental techniques such as FT-ICR-MS and IRMPD were combined with ab initio calculations for the determination of structure and reactivity of the aforementioned types of systems. On the basis of high-level electronic structure calculations of neutral water clusters (H2O)n with n = 17-21, a transitional size regime has been determined, where the structural stabilization alternates between all-surface and interior configurations with the addition or removal of a single water molecule. Electronic structure calculations suggested that for n = 17 and 19 the interior configuration would be energetically more stable than the all-surface one. The gas phase infrared spectrum of the singly hydrated ammonium ion, NH4+(H2O), had previously been recorded by photodissociation spectroscopy of mass-selected ions and interpreted by means of ab initio calculations. The present work provides additional information on the shape of the potential energy curves of NH4+(H2O) along the N-H distance at the MP2/aug-cc-pVDZ level of theory, yielding an anharmonic potential shape. Calculation of potential energy curves of the O-H mode of the intramolecular hydrogen bond of various dicarboxylic acids (oxalic to adipic acid) revealed that the shapes of the potentials correlate directly with the size of the system and the resulting ring strain. The shape of the potential is also influenced by the charge of the system. Calculation of anharmonic frequencies based on the VPT2 approach leads to reasonable results in all systems with narrow potentials. IRMPD spectra of complexes in the gas phase have been recorded for a series of cationic vanadium oxide complexes when reacted with acetonitrile, methanol and ethanol. The experimental spectra are compared to calculated absorption spectra.
The systematic DFT study identifies potential candidates for reductive nitrile coupling in cationic transition metal acetonitrile complexes. On the basis of the calculations, the formation of metallacyclic structures in group 3 through 7 complexes can be ruled out. Solvation of the transition metal cation by five acetonitrile ligands leads to a reductive nitrile coupling reaction in three types of complexes, namely those containing either niobium, tantalum or tungsten.

Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene is found to influence the adsorption capacity for thiophene and is more predominant when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acidic zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is obvious, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+-, Ni2+-, Ce3+-, La3+- and Y3+-ions into zeolite Na+-Y by aqueous ion exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+-ions, the presence of Cu+-ions at the sites exposed to supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for the case of Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access for bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite.
IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bond (of thiophene) and the metal ions (in the zeolite framework). A different mode of interaction of thiophene with Ce3+-, La3+- and Y3+-metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y, Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane as solvent is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very low in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly by a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents. Acidic zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion; it is therefore not possible to regenerate them by heating under inert gas flow alone. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx.
The exothermic peaks appearing at different temperatures in the heat flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability of the existing catalytic hydrodesulphurization technique to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene. Cu+-Y and Ce3+-Y were found in the present study to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification method downstream of the catalytic hydrodesulphurization unit.

The main motivation of this contribution is to introduce a computational laboratory to analyse defects and fractures at the sub-micro scale. To this end, we present a continuum-atomistic multiscale algorithm for the analysis of crystalline deformation, i.e. we combine the Cauchy-Born rule within a finite element approximation (FEM) on the continuum region with a molecular dynamics (MD) resolution on the atomistic domain. The aim is twofold: on the one hand, the stability, i.e. the validity of the Cauchy-Born rule and its transition to non-affine deformation at the micron scale, is studied with the help of a molecular dynamics approach to capture fine-scale features; on the other hand, a horizontal FEM/MD scheme, i.e. continuum-atomistic coupling, is envisaged in order to study representative cases of crystalline defects. To cope with the latter, we have introduced a horizontal coupling method for continuum-atomistic analysis.
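The Cauchy-Born rule named above can be illustrated with a toy sketch: under a homogeneous deformation gradient F, every lattice bond vector r is assumed to deform affinely to F r, so the continuum strain energy density is the atomistic energy per undeformed unit-cell volume. Everything below (the Lennard-Jones pair potential, the 2D square lattice with nearest neighbours only) is an illustrative assumption, not the material model of the thesis.

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential (illustrative choice of interatomic model)."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def cauchy_born_energy_density(F, neighbors, cell_volume):
    """Cauchy-Born strain energy density: each bond vector r deforms to F @ r;
    half of each pair bond's energy is attributed to the central atom."""
    W = 0.0
    for r in neighbors:
        W += 0.5 * lj(np.linalg.norm(F @ r))
    return W / cell_volume

# 2D square lattice at the LJ equilibrium spacing (a hypothetical toy crystal)
a = 2 ** (1 / 6)  # minimizer of the LJ potential for eps = sigma = 1
nbrs = [a * np.array(v, dtype=float) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
W0 = cauchy_born_energy_density(np.eye(2), nbrs, a ** 2)          # undeformed
W_stretch = cauchy_born_energy_density(np.diag([1.05, 1.0]), nbrs, a ** 2)
```

Stretching the lattice away from equilibrium raises the energy density, which is the quantity a Cauchy-Born-based finite element would evaluate at each quadrature point; the multiscale question studied in the thesis is when this affine assumption breaks down and MD resolution is needed instead.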