In this paper we develop a data-driven mixture of vector autoregressive models with exogenous components. The process is assumed to change regimes according to an underlying Markov process. In contrast to the hidden Markov setup, we allow the transition probabilities of the underlying Markov process to depend on past time series values and exogenous variables. Such processes have potential applications to modeling brain signals. For example, brain activity at time t (measured by electroencephalograms) can be modeled as a function of both its past values and exogenous variables (such as visual or somatosensory stimuli). Furthermore, we establish stationarity, geometric ergodicity and the existence of moments for these processes under suitable conditions on the parameters of the model. Such properties are important for understanding the stability of the model as well as for deriving the asymptotic behavior of various statistics and model parameter estimators.
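The switching mechanism can be sketched in a few lines. The following toy simulation is a minimal sketch only; the dimensions, coefficient matrices, logistic link, and all parameter values are made up for illustration and are not taken from the paper. It shows a two-regime VARX(1) whose probability of staying in regime 0 depends on the past observation and an exogenous stimulus:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 500, 2                                     # length and dimension of the series
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),          # regime-specific VAR(1) matrices
     np.array([[-0.3, 0.2], [0.1, -0.5]])]
B = [np.array([0.5, 0.0]), np.array([0.0, 0.8])]  # regime-specific exogenous loadings
gamma = np.array([1.0, -2.0, 0.5])                # illustrative transition parameters

def p_stay(y_prev, x_prev):
    """P(s_t = 0 | s_{t-1} = 0, past): a logistic function of past data."""
    z = gamma[0] + gamma[1] * y_prev[0] + gamma[2] * x_prev
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=T)                            # exogenous stimulus series
y = np.zeros((T, d))
s = np.zeros(T, dtype=int)
for t in range(1, T):
    p_to0 = p_stay(y[t - 1], x[t - 1]) if s[t - 1] == 0 else 0.3
    s[t] = 0 if rng.random() < p_to0 else 1
    y[t] = A[s[t]] @ y[t - 1] + B[s[t]] * x[t] + 0.1 * rng.normal(size=d)
```

In a hidden Markov setup the switching probability would be a constant; here it is itself a function of the observed past, which is precisely what the stationarity and ergodicity results of the paper have to accommodate.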

We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. This problem is to arrange a set of rectangles, each of which may be rotated by 90 degrees, into an enclosing rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [8] solve the packing problem without orientations with a branch-and-bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. Our approach is compared to a mixed-integer program, and our results show that it outperforms the latter.
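For concreteness, a standard disjunctive statement of the orientation-aware non-overlap constraints looks as follows. This is a hedged sketch in generic notation (the variable names and the fixed-height variant are illustrative and not the formulation used in the paper): with orientation variable $o_i$ and position $(x_i, y_i)$ for rectangle $i$ of size $w_i \times h_i$,

```latex
\tilde w_i = (1-o_i)\,w_i + o_i\,h_i, \qquad
\tilde h_i = (1-o_i)\,h_i + o_i\,w_i, \qquad o_i \in \{0,1\},
\\[4pt]
x_i + \tilde w_i \le x_j \;\vee\; x_j + \tilde w_j \le x_i \;\vee\;
y_i + \tilde h_i \le y_j \;\vee\; y_j + \tilde h_j \le y_i
\qquad \text{for all } i \ne j,
\\[4pt]
0 \le x_i, \quad x_i + \tilde w_i \le W, \qquad 0 \le y_i, \quad y_i + \tilde h_i \le H .
```

A mixed-integer program would linearise each four-way disjunction with big-M constraints and indicator binaries, whereas a constraint-based solver can branch over the disjuncts directly and propagate the implied bounds.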

An efficient approach to the numerical upscaling of thermal conductivities of fibrous media, e.g. insulation materials, is considered. First, standard cell problems for a second order elliptic equation are formulated for a proper piece of a random fibrous structure, following homogenization theory. Next, a graph formed by the fibers is considered, and a second order elliptic equation with suitable boundary conditions is solved on this graph only. Replacing the boundary value problem for the full cell with an auxiliary problem with special boundary conditions on a connected subdomain of highly conductive material is justified in a previous work of the authors. A discretization on the graph is presented here, and error estimates are provided. The efficient implementation of the algorithm is discussed. A number of numerical experiments are presented in order to illustrate the performance of the proposed method.
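In standard periodic homogenization the cell problems and the resulting effective (upscaled) conductivity take the following textbook form, stated here only to fix ideas:

```latex
\nabla_y \cdot \bigl( k(y)\,(\nabla_y w^j(y) + e_j) \bigr) = 0 \quad \text{in } Y,
\qquad w^j \ Y\text{-periodic}, \qquad j = 1,\dots,d,
\\[4pt]
k^{*}_{ij} \;=\; \frac{1}{|Y|} \int_Y k(y)\,\bigl(\nabla_y w^j(y) + e_j\bigr) \cdot e_i \, dy .
```

The approach of the paper replaces the full cell Y by the graph formed by the highly conductive fibres and solves the analogous boundary value problem on that graph only.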

A Lattice Boltzmann Method for immiscible multiphase flow simulations using the Level Set Method
(2008)

We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.
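The macroscopic jump conditions that the new boundary conditions have to realise at the interface Γ are, in standard sharp-interface form (notation chosen here for orientation, not taken from the paper):

```latex
[\![\, u \,]\!] = 0, \qquad
[\![\, -p\,I + \mu\,(\nabla u + \nabla u^{T}) \,]\!]\, n \;=\; \sigma\,\kappa\, n
\qquad \text{on } \Gamma,
```

where n is the interface normal, κ its curvature and σ the surface tension. For a static bubble this reduces to the Young-Laplace law Δp = σκ, i.e. σ/R for a circular bubble in 2D and 2σ/R for a sphere in 3D, which is one of the validation cases mentioned above.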

Finding a delivery plan for cancer radiation treatment using multileaf collimators operating in "step-and-shoot mode" can be formulated mathematically as a problem of decomposing an integer matrix into a weighted sum of binary matrices having the consecutive-ones property, and sometimes other properties related to the collimator technology. The efficiency of the delivery plan is measured both by the sum of weights in the decomposition, known as the total beam-on time, and by the number of different binary matrices appearing in it, referred to as the cardinality, the latter being closely related to the set-up time of the treatment. In practice, the total beam-on time is usually restricted to its minimum possible value (which is easy to find), and a decomposition that minimises cardinality subject to this restriction is sought.
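In the unconstrained step-and-shoot model, the "easy to find" minimum total beam-on time has a simple closed form: each row requires the sum of its positive increments, and the matrix requires the maximum of these row values. The sketch below is illustrative code computing that bound, not part of the paper:

```python
def row_beam_on_time(row):
    """Minimum beam-on time of one row: the sum of its positive increments."""
    total, prev = 0, 0
    for a in row:
        if a > prev:
            total += a - prev
        prev = a
    return total

def min_beam_on_time(matrix):
    """Minimum total beam-on time of the matrix: the largest row requirement."""
    return max(row_beam_on_time(r) for r in matrix)

print(min_beam_on_time([[0, 2, 3, 1],
                        [1, 1, 4, 2]]))   # -> 4 for this small example
```

The cardinality minimisation studied in the paper then searches among decompositions that attain exactly this beam-on time.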

The problem discussed in this paper is motivated by the new WEEE recycling directive of the EC. The core of this law is that each company which sells electrical or electronic equipment in a European country has the obligation to recollect and recycle an amount of returned items which is proportional to its market share. To assign collection stations to companies, a territory design approach is planned in Germany for one product type. However, in contrast to classical territory design, the territories should be geographically as dispersed as possible, to avoid that a company, or the logistics provider responsible for its recollection, gains a monopoly in some region. First, we identify an appropriate measure for the dispersion of a territory. Afterwards, we present a first mathematical programming model for this new problem as well as a solution method based on the GRASP methodology. Extensive computational results illustrate the suitability of the model and assess the effectiveness of the heuristic.

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.

The main goal of this work is to model size effects as they occur in materials with an intrinsic microstructure when the considered specimens are not orders of magnitude larger than this microstructure. The micromorphic continuum theory, as a generalized continuum theory, is well suited to account for the occurring size effects: additional degrees of freedom capture the independent deformations of the microstructure and give rise to additional balance equations. In this thesis, the deformational and configurational mechanics of the micromorphic continuum is exploited in a finite-deformation setting. A constitutive and numerical framework is developed, within which the material-force method is also advanced. Furthermore, the multiscale modelling of thin material layers with a heterogeneous substructure is of interest. To this end, a computational homogenization framework is developed which allows the constitutive relation between traction and separation to be obtained numerically, in a nested solution scheme, from the properties of the underlying micromorphic mesostructure. Within the context of micromorphic continuum mechanics, concepts of both gradient and micromorphic plasticity are developed by systematically varying key ingredients of the respective formulations.

Computer-based simulation and visualization of the acoustics of a virtual scene can aid the design process of concert halls, lecture rooms, theaters, or living rooms, because not only the visual appearance of a room is important but also its acoustics. On factory floors, noise reduction is important since noise is hazardous to health. Despite the obvious dissimilarity between our aural and visual senses, many techniques required for the visualization of photo-realistic images and for the auralization of acoustic environments are quite similar. Both applications can be served by geometric methods such as particle and ray tracing if we neglect a number of less important effects. By means of the simulation of room acoustics we want to predict the acoustic properties of a virtual model. For auralization, a pulse response filter needs to be assembled for each pair of source and listener positions. The convolution of this filter with an anechoic source signal provides the signal received at the listener position. Hence, the pulse response filter must contain all reverberations (echoes) of a unit pulse, including their frequency decompositions due to absorption at different surface materials. For the room acoustic simulation, a method named phonon tracing (since it is based on particles) is developed. The approach computes the energy or pressure decomposition for each particle (phonon) sent out from a sound source and uses this in a second pass (phonon collection) to construct the response filters for different listeners. This step can be performed at different precision levels. During the tracing step, particle paths and additional information are stored in a so-called phonon map. Using this map, several sound visualization approaches were developed. From the visualization, the effect of different materials on the spectral energy / pressure distribution can be observed. The first few reflections already show whether certain frequency bands are rapidly absorbed. The absorbing materials can be identified and replaced in the virtual model, improving the overall acoustic quality of the simulated room. Furthermore, an insight into the pressure / energy received at the listener position is possible. The phonon tracing algorithm as well as several sound visualization approaches are integrated into a common system utilizing Virtual Reality technologies in order to facilitate immersion into the virtual scene. The system is a prototype developed within a project at the University of Kaiserslautern and is still subject to further improvements. It consists of a stereoscopic back-projection system for visual rendering as well as professional audio equipment for auralization purposes.
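Computationally, the auralization step described above is just a convolution: the pulse response filter assembled by the phonon collection pass is convolved with a dry recording. A minimal sketch with toy data (not part of the actual system, which additionally handles the per-band frequency decomposition caused by absorption at different materials):

```python
import numpy as np

def auralize(anechoic_signal, impulse_response):
    """Convolve a dry (anechoic) source signal with the simulated room
    impulse response to obtain the signal heard at the listener position."""
    return np.convolve(anechoic_signal, impulse_response)

# toy example: a unit pulse filtered by a response containing two decaying echoes
dry = np.zeros(8)
dry[0] = 1.0
ir = np.array([1.0, 0.0, 0.5, 0.0, 0.25])
print(auralize(dry, ir))
```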

An easy numerical handling of time-dependent problems with complicated geometries, free moving boundaries and interfaces, or oscillating solutions is of great importance for many applications, e.g., in fluid dynamics (free surface and multiphase flows, fluid-structure interactions [22, 18, 24]), failure mechanics (crack growth and propagation [4]), magnetohydrodynamics (accretion disks, jets and cloud simulation [6]), biophysics and biochemistry. Appropriate discretizations, so-called mesh-less methods, have been developed during the last decades to meet these challenging demands and to relieve the burden of remeshing and successive mesh generation faced by conventional mesh-based methods [16, 10, 3]. The prearranged mesh is an artificial constraint to ensure compatibility of the mesh-based interpolation schemes, which often conflicts with the real physical conditions of the continuum model. Then remeshing becomes inevitable, which is not only extremely time- and storage-consuming but also a source of numerical errors and hence of a gradual loss of computational accuracy. Apart from avoiding remeshing, mesh-less methods also lead to fundamentally better approximations regarding aspects such as smoothness, nonlocal interpolation character, flexible connectivity, and refinement and enrichment procedures [16]. The common idea of mesh-less methods is the discretization of the domain of interest by a finite set of independent, randomly distributed particles moving with a characteristic velocity of the problem. Location and distribution of the particles then account for the time-dependent description of the geometry, data and solution. Thereby, the global solution is linearly superposed from the local information carried by the particles. In classical particle methods [20, 21], the respective weight functions are Dirac distributions, which yield solutions in a distributional sense.
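The linear superposition of local particle information mentioned above can be written compactly. The following is generic SPH-style notation, not the specific ansatz of this text:

```latex
u(x,t) \;\approx\; \sum_{i=1}^{N} u_i(t)\, W\bigl(x - x_i(t),\, h\bigr),
\qquad \text{classical particle methods: } W(\,\cdot\,, h) = \delta(\,\cdot\,),
```

where the x_i(t) are the particle positions moving with a characteristic velocity of the problem, u_i(t) is the information carried by particle i, and h is a smoothing length; choosing smooth, compactly supported weight functions W instead of Dirac distributions is what turns the distributional solution of classical particle methods into a pointwise approximation.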

In this paper, the analysis of one approach for the regularization of pure Neumann problems for second order elliptic equations, e.g., Poisson’s equation and the linear elasticity equations, is presented. The main topic under consideration is the behavior of the condition number of the regularized problem. A general framework for the analysis is presented. This allows one to determine a form of the regularization term which leads to the “natural” asymptotics of the condition number of the regularized problem with respect to the mesh parameter. Some numerical results which support the theoretical analysis are presented as well. The main motivation for the presented research is to develop the theoretical background for an efficient and robust implementation of solvers for pure Neumann problems for the linear elasticity equations. Such solvers are usually needed in a number of domain decomposition methods, e.g. FETI. The developed approaches are planned to be used in software developed at ITWM, e.g. the KneeMech simulation software.
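A typical regularization of this kind augments the singular Neumann bilinear form by a term penalising the kernel of the operator: constants for the Poisson equation, rigid body modes r_1, ..., r_6 for linear elasticity. The generic forms below are only a sketch to fix the idea; the concrete form and scaling of the term are exactly what the condition-number analysis in the paper is about:

```latex
a_{\varepsilon}(u,v) \;=\; a(u,v) \;+\; \varepsilon \Bigl(\int_{\Omega} u \, dx\Bigr)\Bigl(\int_{\Omega} v \, dx\Bigr)
\qquad \text{(pure Neumann Poisson problem)},
\\[4pt]
a_{\varepsilon}(u,v) \;=\; a(u,v) \;+\; \varepsilon \sum_{k=1}^{6} (u, r_k)\,(v, r_k)
\qquad \text{(linear elasticity)}.
```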

The present thesis is concerned with the simulation of the loading behaviour of both hybrid lightweight structures and piezoelectric mesostructures, with a special focus on solid interfaces on the meso scale. Furthermore, an analytical review of bifurcation modes of continuum-interface problems is included. The inelastic interface behaviour is characterised by elastoplastic, viscous, damaging and fatigue-motivated models. For the related numerical computations, the Finite Element Method is applied; in this context, so-called interface elements play an important role. The simulation results are illustrated by numerous examples, some of which are correlated with experimental data.

This paper introduces methods for the detection of anisotropies which are caused by compression of regular three-dimensional point patterns. Isotropy tests based on directional summary statistics and estimators for the compression factor are developed. These allow not only for the detection of anisotropies but also for the estimation of their strength. Using simulated data, the power of the methods and its dependence on the intensity, the degree of regularity, and the compression strength are studied. The motivation of this paper is the investigation of anisotropies in the structure of polar ice. Therefore, our methods are applied to the point patterns of centres of air pores extracted from tomographic images of ice cores. In this way, the presence of anisotropies in the ice caused by the compression of the ice sheet, as well as an increase of their strength with increasing depth, is shown.

Annual Report 2007
(2008)

Annual Report, Jahrbuch AG Magnetismus

In this work we study and investigate the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP), and the point center location or point location problem (PLP) on Rectilinear and Chebyshev planes as well as in networks. The relations between the problems have served as a basis for finding elegant solution algorithms for both new and well-known problems. MWAP was formulated and investigated in Rectilinear space. In contrast to the Euclidean metric, MWAP and PLP have at least one common optimal point. Therefore, MWAP on the Rectilinear plane was solved in linear time with the help of PLP, i.e. the solution sequence was PLP-->MWAP. It was shown that MWAP and CLP are equivalent; thus, CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of circle, sphere and annulus in networks were introduced. It should be noted that the notion of a circle in a network is different from the notion of a cycle. An O(mn) time algorithm for the solution of MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn+n^2 log n), in the unweighted case, of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to MWAP. MWAP in unweighted undirected networks was extended to MWAP on subsets and to the restricted MWAP. The resulting problems were analysed and solved. Also, the p-minimum width annulus problem was formulated and explored. This problem is NP-hard. However, the p-MWAP has been solved in polynomial O(m^2n^3p) time under the natural assumption that each minimum width annulus covers all vertices of a network having distances to the central point of the annulus less than or equal to the radius of its outer circle. In contrast to the planar case, MWAP in undirected unweighted networks turned out to be a root problem among the considered problems. During the investigation of the properties of circles in networks it was shown that the difference between planar and network circles is significant. This leads to the nonequivalence of CLP and MWAP in the general case. However, MWAP was effectively used in solution procedures for CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2n^2). It is important to mention that CLP in networks has been formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2 log n) algorithm for the well-known PLP. The complexity of this algorithm is not worse than the complexity of the currently best algorithms, but the concept of the solution procedure is new: we use MWAP in order to solve PLP, building the solution sequence MWAP-->PLP, which is the opposite of the planar case. This method has the following advantages. First, the lower bounds LB obtained in the solution procedure are proved to be in any case better than the strongest lower bound of Halpern. Second, the developed algorithm is so simple that it can be easily applied to complex networks manually. Third, the empirical complexity of the algorithm is equal to O(mn). MWAP was extended to and explored in directed unweighted and weighted networks. The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in the network, because the problems can be solved in the order PLP-->MWAP. In the weighted case the computational time is of order O(mn^2).
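For reference, the quantity minimised in MWAP can be stated in one line. For a candidate centre c (a point of the plane, or a vertex or interior edge point of the network) and points v in V,

```latex
r(c) = \min_{v \in V} d(c, v), \qquad
R(c) = \max_{v \in V} d(c, v), \qquad
\operatorname{width}(c) = R(c) - r(c), \qquad
\text{MWAP: } \min_{c} \ \operatorname{width}(c),
```

which also makes explicit the equivalence to minimising the range function mentioned above.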

The goal of a multicriteria program is to explore different possibilities and their respective compromises which adequately represent the nondominated set. An exact description will in most cases fail because the number of efficient solutions is either too large or even infinite. We approximate the nondominated set by computing a finite collection of nondominated points. Different ideas have been applied, including nonnegative weighted scalarization, weighted Tchebycheff scalarization, block norms and epsilon-constraints. Block norms are the building blocks of the inner and outer approximation algorithms proposed by Klamroth. We review these algorithms and propose three different variants. However, block-norm based algorithms require solving a sequence of subproblems, and the number of subproblems becomes relatively high for six criteria and even intractable for real applications with nine criteria. Thus, we use bilevel linear programming to derive an approximation algorithm. We finally analyze and compare the approximation quality, running time and numerical convergence of the proposed methods.
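Two of the scalarizations mentioned above, written out in standard textbook form for objectives f_1, ..., f_p, ideal point z*, and weights w_i > 0 (included only for orientation):

```latex
\text{weighted Tchebycheff:}\quad
\min_{x \in X} \; \max_{i=1,\dots,p} \; w_i \,\bigl( f_i(x) - z^{*}_i \bigr),
\\[4pt]
\varepsilon\text{-constraint:}\quad
\min_{x \in X} \; f_1(x) \quad \text{s.t. } f_i(x) \le \varepsilon_i, \;\; i = 2,\dots,p .
```

Each choice of weights or right-hand sides yields one subproblem and (at most) one nondominated point, which is why approximating the whole nondominated set requires solving a sequence of such subproblems.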

The purpose of this paper is the canonical connection between classical global gravity field determination following the concepts of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand and modern, locally oriented multiscale computation by use of adaptive locally supported wavelets on the other hand. Essential tools are regularization methods for the Green, Neumann, and Stokes integral representations. The multiscale approximation is obtained as a simple linear difference scheme by use of Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Iceland areas.

In the present contribution, a general framework for the completely consistent integration of nonlinear dissipative dynamics is proposed that essentially relies on Finite Element methods in space and time. In this context, fully flexible structures as well as hybrid systems which consist of rigid bodies and inelastic flexible parts are considered. Special emphasis is placed on the resulting algorithmic fulfilment of fundamental balance equations, and the excellent performance of the presented concepts is demonstrated by means of several representative numerical examples, involving in particular finite elasto-plastic deformations.

Determination of interaction between MCT1 and CAII via a mathematical and physiological approach
(2008)

The enzyme carbonic anhydrase isoform II (CAII), catalysing the hydration and dehydration of CO2, enhances transport activity of the monocarboxylate transporter isoform I (MCT1, SLC16A1) expressed in Xenopus oocytes by a mechanism that does not require CAII catalytic activity (Becker et al. (2005) J. Biol. Chem., 280). In the present study, we have investigated the mechanism of the CAII-induced increase in transport activity by using electrophysiological techniques and a mathematical model of the MCT1 transport cycle. The model consists of six states arranged in cyclic fashion and features an ordered, mirror-symmetric binding mechanism, where binding and unbinding of the proton to the transport protein is considered to be the rate limiting step under physiological conditions. An explicit rate expression for the substrate flux is derived using model reduction techniques. By treating the pools of intra- and extracellular MCT1 substrates as dynamic states, the time-dependent kinetics are obtained by integration using the derived expression for the substrate flux. The simulations were compared with experimental data obtained from MCT1-expressing oocytes injected with different amounts of CAII. The model suggests that CAII increases the effective rate constants of the proton reactions, possibly by working as a proton antenna.

Dry Sliding and Rolling Tribotests of Carbon Black Filled EPDM Elastomers and Their FE Simulations
(2008)

Unlubricated sliding systems, being economic and environmentally benign, are already realized in bearings, where dry metal-plastic sliding pairs successfully replace lubricated metal-metal ones. Nowadays, a considerable part of tribological research concentrates on realizing unlubricated elastomer-metal sliding systems and on extending the application field of lubrication-free slider elements. In this thesis, characteristics of dry sliding and friction are investigated for elastomer-metal sliding pairs. Ethylene-propylene-diene rubbers (EPDM) with and without carbon black (CB) filler were used. The filler content of the EPDMs was varied: EPDMs with 0, 30, 45 and 60 parts per hundred rubber (phr) of CB were investigated. Quasistatic tension and compression tests and dynamic mechanical thermal analysis (DMTA) were carried out to analyze the static and viscoelastic behavior of the EPDMs. The tribological properties of the EPDMs were investigated using dry roller (metal)-on-plate (rubber) type tests (ROP). During the ROP tests the normal load was varied. The coefficient of friction (COF) and the temperature were registered online during the tests, and the loss volumes were determined after certain test durations. The worn surfaces of the rubbers and of the steel counterparts were analyzed using a scanning electron microscope (SEM) to determine the wear mechanisms. Because chemical changes may take place during dry sliding due to the elevated contact temperature, the chemical composition of the surfaces was also analyzed before and after the tribotests. For the latter investigations, X-ray photoelectron spectroscopy (XPS), sessile drop tests and Raman spectroscopy were used. In addition, the dry sliding tribotests were simulated using finite element (FE) codes for a better understanding of the related wear mechanisms. Finally, as the internal damping of elastomers plays a great role in the sliding wear process, their viscoelasticity has been taken into account. The effect of viscoelasticity was shown on the example of rolling friction. To study the rolling COF of the EPDM with 30 phr CB (EPDM 30), an FE model was created which considered the viscoelastic behavior of the rubber during rolling. The results showed that the incorporated CB enhanced the mechanical and tribological properties (both COF and wear rate were reduced) of the EPDMs. Furthermore, the CB content of the EPDM fundamentally influences the observed wear mechanisms. The wear characteristics also changed with the applied normal load. In the case of EPDM 30, a rubber tribofilm was found on the steel counterpart when tests were performed at high normal loads. Analysis of the chemical composition of the surfaces before and after the wear tests did not reveal notable changes. It was demonstrated that the FE method is a powerful tool to model both the dry sliding and rolling performance of elastomers.

Fragmentation of habitats, especially of tropical rainforests, ranks globally among the most pervasive man-made disturbances of ecosystems. There is growing evidence for long-term effects of forest fragmentation and the accompanying creation of artificial edges on ecosystem functioning and forest structure, which are altered in a way that generally transforms these forests into early successional systems. Edge-induced disruption of species interactions can be among the driving mechanisms governing this transformation. These species interactions can be direct (trophic interactions, competition, etc.) or indirect (modification of the resource availability for other organisms). Such indirect interactions are called ecosystem engineering. Leaf-cutting ants of the genus Atta are dominant herbivores and keystone species in the Neotropics and have been called ecosystem engineers. In contrast to other prominent ecosystem engineers that have been substantially decimated by human activities, some species of leaf-cutting ants profit from anthropogenic landscape alterations. Thus, leaf-cutting ants are a highly suitable model to investigate the potentially cascading effects caused by herbivores and ecosystem engineers in modern anthropogenic landscapes following fragmentation. The present thesis aims to describe this interplay between consequences of forest fragmentation for leaf-cutting ants and the resulting impacts of leaf-cutting ants in fragmented forests. The cumulative thesis starts out with a review of 55 published articles demonstrating that herbivores, especially generalists, profoundly benefit from forest edges, often due to (1) favourable microenvironmental conditions, (2) an edge-induced increase in food quantity/quality, and (3; less well documented) disrupted top-down regulation of herbivores (Wirth, Meyer et al. 2008; Progress in Botany 69:423-448). Field investigations in the heavily fragmented Atlantic Forest of Northeast Brazil (Coimbra forest) were subsequently carried out to evaluate patterns and hypotheses emerging from this review using leaf-cutting ants of the genus Atta as a model system. Colony densities of both Atta species occurring in the area changed similarly with distance to the edge, but the magnitude of the effect was species-specific. Colony density of A. cephalotes was low in the forest interior (0.33 ± 1.11 /ha, pooling all zones >50 m into the forest) and sharply increased by a factor of about 8.5 towards the first 50 m (2.79 ± 3.3 /ha), while A. sexdens was more uniformly distributed (Wirth, Meyer et al. 2007; Journal of Tropical Ecology 23:501-505). The accumulation of Atta colonies persisted at physically stable forest edges over a four-year interval with no significant difference in densities between years despite high rates of colony turn-over (little less than 50% in 4 years). Stable hyper-abundant populations of leaf-cutting ants accord with the constantly high availability of pioneer plants (their preferred food source) as previously demonstrated at old stabilised forest edges in the region (Meyer et al. submitted; Biotropica). In addition, plants at the forest edge might be more attractive to leaf-cutting ants because of their physiological responses to the edge environment. In bioassays with laboratory colonies I demonstrated that drought-stressed plants are more attractive to leaf-cutting ants because of an increase in leaf nutrient content induced by osmoregulation (Meyer et al. 2006; Functional Ecology 20:973-981).
Since plants along forest edges are more prone to experience drought stress, this mechanism might contribute to the high resource availability for leaf-cutting ants at forest edges. In light of the hyper-abundance of leaf-cutting ants within the forest edge zone (first 50 m), their potentially far-reaching ecological importance in anthropogenic landscapes is apparent. Based on previous colony-level estimates, we extrapolated that herbivory by A. cephalotes removes 36% of the available foliage at forest edges (compared to 6% in the forest interior). In addition, A. cephalotes acted as ecosystem engineers constructing large nests (on average 55 m2; 95%-CI: 22-136) that drastically altered forest structure. The ants opened gaps in the canopy and forest understory at nest sites, which allowed three times as much light to reach the nest surface as compared to the forest understory. This was accompanied by an increase in soil temperatures and a reduction in water availability. Modifications of microclimate and forest structure greatly surpassed previously published estimates. Since higher light levels were detectable up to about 4 m away from the nest edge, an area roughly four times as big as the actual nest (about 200 and 50 m2, respectively) was impacted by every colony, amounting to roughly 6% of the total area at the forest edge (Meyer et al. in preparation; Ecology). The hypothesized impacts of high cutting pressure and microclimatic alterations at nest sites on forest regeneration were directly tested using transplanted seedlings of six species of forest trees. Nests of A. cephalotes differentially impacted survival and growth of seedlings. Survival differed highly significantly between habitats and species and was generally high in the forest, yet low on nests, where it correlated strongly with the seed size of the species. These results indicate that the disturbance regime created by leaf-cutting ants differs from other disturbances, since nest conditions select for plant species that profit from additional light, yet are large-seeded and have resprouting abilities, which are best suited to tolerate repeated defoliation on a nest (Meyer et al. in preparation; Journal of Tropical Ecology). On an ecosystem scale, leaf-cutting ants might amplify edge-driven microclimatic alterations by very high rates of herbivory and the maintenance of canopy gaps above frequent nests. By allowing for increased light penetration, Atta may ultimately contribute to dominating, self-replacing pioneer communities at forest edges, possibly creating a positive feedback loop. Based on the persisting hyper-abundance of leaf-cutting ants at old edges of Coimbra forest and the multifarious impacts documented, we conclude that the ecological importance of leaf-cutting ants in pristine forests, where they are commonly believed to be keystone species despite very low colony densities, is greatly surpassed in anthropogenic landscapes. In fragmented forests, Atta has been identified as an essential component of a disturbance regime that causes a post-fragmentation retrogressive succession. Apparently, these forests have reached a new self-replacing secondary state. I suggest additional human interference in the form of thoughtful management in order to break this cycle of self-enhancing disturbance and to enable forest regeneration along the edges of threatened forest remnants. Thereby the situation of the forest as a whole can be ameliorated and the chances for a long-term retention of biodiversity in these landscapes increased.

Colorectal cancer is the second most prevalent cancer form in both men and women in Europe. In 2002, alimentary cancer (oesophagus, stomach, intestines) made up 26% of the annual incident cases of cancer amongst males in Europe, whereby about half of those were cancers of the colon and rectum (Eurostat 2002). Epidemiological evidence accumulating over the last decades indicates that besides a genetic disposition, diet plays a strong epigenetic role in the genesis of cancer. It is generally assumed that diet is causal for up to 80% of colorectal cancer (Bingham 2000). With the prospect of an approximated 50% rise in global cancer incidence over the first two decades of the 21st century, the World Health Organisation (WHO) has emphasized the need for an improvement in nutrition. Indeed, there is increasing public health awareness with respect to nutrition. Today, living healthily is associated with less consumption of animal fats and red (processed) meat, moderate or no consumption of alcohol coupled with increased physical activity, and frequent intake of fruits, vegetables and whole grains (Bingham 1999; Johnson 2004). This ideology partly stems from scientific epidemiological evidence supportive of an inverse correlation between the consumption of fruits and vegetables and the development of cancer. Besides fibre and essential micro-nutrients like ascorbate, folate, and tocopherols, the anti-carcinogenic properties of fruits and vegetables are generally thought to be rooted in the bioactivity of secondary plant components like flavonoids (Johnson 2004; Rice-Evans and Miller 1996; Rice-Evans 1995). Along with the increased public health awareness has also come a burgeoning and lucrative dietary supplement industry, which markets products based on polyphenols and other potentially healthy compounds, sometimes with questionable promises of better health and increased longevity. These claims are based on accumulating in vitro and in vivo evidence indicating that flavonoids and polyphenols in fruits and vegetables can hinder proliferation, induce apoptosis of cancerous cells (Kern et al. 2005; Kumar et al. 2007; Thangapazham et al. 2007), act as antioxidants (Justino et al. 2006; Rice-Evans 1995) and influence cell signalling pathways (Marko et al. 2004; Joseph et al. 2007; Granado-Serrano et al. 2007), all of which are potential mechanisms proposed for their anti-carcinogenic activity. However, not only is the vast variety of supplements worrisome, but also problematic are their easy accessibility (just a click away on the internet) and the amount that can potentially be consumed. Such supplements are usually offered in pharmaceutical form (tablets, capsules, powder, concentrates) containing concentrations well beyond what is normally consumable from the diet. For example, quercetin’s recommended intake is about 1 g daily. However, estimates portend a possible increase of up to 1000-fold of the daily intake of quercetin (Hertog et al. 1995). Mindful of the concept of dose coined from the words of the Swiss scientist Paracelsus, “What is it that is not poison? All things are poison and nothing is without poison. The right dose differentiates a poison and a remedy.” (“Alle Dinge sind Gift und nichts ist ohn’ Gift; allein die Dosis macht, dass ein Ding kein Gift ist”), it is thus conceivable that such high concentrations may not only reverse the acclaimed positive effects of flavonoids and polyphenols but also have negative effects, thereby representing a health risk. The fact that direct evidence of the beneficial effects of flavonoids and polyphenols remains wanting, if not entirely lacking, coupled with the afore-mentioned marketing trend, demands a thorough examination of the possible adverse effects that may arise from increased consumption of flavonoids and polyphenols. The genesis and progression of cancer is usually accompanied by dysfunctional signalling of certain cell signalling pathways. Typical for colon carcinogenesis is the malfunctioning of the Wnt-signalling pathway, a pathway which is crucial for the growth and development of normal colonocytes. The dysfunction of the Wnt-signalling pathway occurs in a manner that culminates in a proliferation stimulus of colonocytes, while differentiation is increasingly minimized. Hence, tumourigenesis is promoted. Interrupting the proliferation stimuli by intervening in the actions of components of the Wnt-signalling pathway is one potential mechanism for the anti-carcinogenic action of flavonoids and polyphenols (Pahlke et al. 2006; Dashwood et al. 2002; Park et al. 2005). However, as previously hinted, indulgence in the consumption of flavonoid- and polyphenol-based supplements could instead lead to a proliferation stimulus and provoke or promote carcinogenesis in normal cells or pre-cancerous cells, respectively. The aim of this work was to

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization. In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update: a level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.

Sublimation (evaporation) is widely used in different industrial applications. Important applications are the sublimation (evaporation) of small particles (solid and liquid), e.g., spray drying and fuel droplet evaporation. For a few decades, sublimation technology has been widely used together with aerosol technology. This combination aims at obtaining various products with desired compositions and morphologies. It can be used in the fields of nanoparticle generation, particle coating through physical vapor deposition (PVD) and particle structuring. This doctoral thesis deals with the experimental and theoretical investigation of the sublimation (evaporation) kinetics of fine aerosol particles (droplets). The experimental study was conducted in a test plant including on-line control of the most important parameters, such as heating temperature, gas flow and pressure. On-line and in-line particle measurements (optical sensor, APS) were employed. Relevant parameters in sublimation (evaporation), such as heating temperature, particle concentration and aerosol residence time, were investigated. Polydisperse particles (droplets) were introduced into the test plant as precursor aerosols. Two kinds of materials were used as test materials: inorganic particles of NH4Cl and organic particles of DEHS. NH4Cl particles with smooth surfaces and with porous structure were used in the experiments, respectively. The influence of the particle morphology on the sublimation process was studied. Based on the experiments, different theoretical models were developed. The simulation results under different parameters were compared with the experimental results. The change of the particle concentration was discussed in particular, with a focus on the relationship between the total particle concentration and the change of single particles with diverse initial diameters. The sublimation kinetics of particles with different morphologies and different specific surface areas was studied. The effect of increased surface area on the sublimation process was included in the simulation and the results were compared with experimental results. Based on the properties of a material, such as molecular weight, molecular size and vapor pressure, the sublimation (evaporation) kinetics was described, and the optimum sublimation (evaporation) conditions with respect to the material properties were proposed. A phase transition effect during the sublimation (evaporation) was found, which describes the growth of large particles at the cost of small particles. A similar effect is observed in crystal suspensions (called Ostwald ripening) but with a different physical background. In order to meet the need for in-line particle measurement, a hot gas sensor (O.P.C.) was developed in this study for measuring the particle size and the size distribution of an aerosol. With the newly developed measuring cell, the operating temperature of the aerosol could be increased up to 500°C.

We present a model of flexible rods, based on Kirchhoff's geometrically exact theory, which is suitable for the fast simulation of quasistatic deformations within VR or functional DMU applications. Unlike simple models of "mass & spring" type typically used in VR applications, our model provides a proper coupling of bending and torsion. The computational approach comprises a variational formulation combined with a finite difference discretization of the continuum model. Approximate solutions of the equilibrium equations for sequentially varying boundary conditions are obtained by means of energy minimization using a nonlinear CG method. The computational performance of our model proves to be sufficient for the interactive manipulation of flexible cables in assembly simulation.
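As an illustration of the energy-minimisation approach, the following drastically simplified sketch discretises a planar rod (stretching and bending only, no torsion, with made-up stiffness values and boundary data) and finds an equilibrium with a nonlinear CG method. It is a stand-in for the idea, not the Kirchhoff rod model of the paper:

```python
import numpy as np
from scipy.optimize import minimize

N, L = 20, 1.0                        # number of nodes, rod length
h = L / (N - 1)                       # segment rest length
ks, kb = 1.0e3, 1.0                   # illustrative stretching / bending stiffnesses

# boundary conditions: clamp the first two nodes, prescribe the last node
fixed = {0: (0.0, 0.0), 1: (h, 0.0), N - 1: (0.6 * L, 0.3 * L)}
free = [i for i in range(N) if i not in fixed]

def assemble(q_free):
    x = np.zeros((N, 2))
    x[free] = q_free.reshape(-1, 2)
    for i, p in fixed.items():
        x[i] = p
    return x

def energy(q_free):
    x = assemble(q_free)
    seg = np.diff(x, axis=0)
    ln = np.linalg.norm(seg, axis=1)
    t = seg / ln[:, None]                                      # unit tangents
    e_stretch = 0.5 * ks * np.sum((ln - h) ** 2)               # axial strain energy
    e_bend = 0.5 * kb / h * np.sum(np.diff(t, axis=0) ** 2)    # discrete bending energy
    return e_stretch + e_bend

x0 = np.column_stack([np.linspace(0, L, N), np.zeros(N)])      # straight initial guess
res = minimize(energy, x0[free].ravel(), method="CG")
print("equilibrium energy:", res.fun)
```

Re-solving from the previous equilibrium whenever the prescribed boundary nodes move mimics the sequentially varying boundary conditions of the interactive setting.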

In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres, the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions, a novel moments-based description of the fibre orientation state is introduced and a model for the flow-driven evolution of the corresponding variables is derived, together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first order hyperbolic system for the fibre orientation variables, has been analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution has been demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite volume based discretization of the suspension flow is given, and numerical results for several two- and three-dimensional domains with different parameter values are presented and discussed.

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived, a proper algorithm is formulated, and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium; this plays an essential role for the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the extreme cases, i.e. the fluid-fluid interface and the fluid-impermeable solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions. The matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced. Corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the last part, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented. For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details such as stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, and software. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.

Layout analysis--the division of page images into text blocks and lines and the determination of their reading order--is a major performance-limiting step in large-scale document digitization projects. This thesis addresses this problem in several ways: it presents new performance measures to identify important classes of layout errors, evaluates the performance of state-of-the-art layout analysis algorithms, presents a number of methods to reduce the error rate and catastrophic failures occurring during layout analysis, and develops a statistically motivated, trainable layout analysis system that addresses the needs of large-scale document analysis applications. An overview of the key contributions of this thesis is as follows. First, this thesis presents an efficient local adaptive thresholding algorithm that yields the same quality of binarization as state-of-the-art local binarization methods, but runs in time close to that of global thresholding methods, independent of the local window size. Tests on the UW-1 dataset demonstrate a 20-fold speedup compared to traditional local thresholding techniques. Then, this thesis presents a new perspective on document image cleanup. Instead of trying to explicitly detect and remove marginal noise, the approach focuses on locating the page frame, i.e. the actual page contents area. A geometric matching algorithm is presented to extract the page frame of a structured document. It is demonstrated that incorporating a page frame detection step into the document processing chain results in a reduction in OCR error rates from 4.3% to 1.7% (n=4,831,618 characters) on the UW-III dataset and in layout-based retrieval error rates from 7.5% to 5.3% (n=815 documents) on the MARG dataset. The performance of six widely used page segmentation algorithms (x-y cut, smearing, whitespace analysis, constrained text-line finding, docstrum, and Voronoi) on the UW-III database is evaluated in this work using a state-of-the-art evaluation methodology. It is shown that current evaluation scores are insufficient for diagnosing specific errors in page segmentation and fail to identify some classes of serious segmentation errors altogether. Thus, a vectorial score is introduced that is sensitive to, and identifies, the most important classes of segmentation errors (over-, under-, and mis-segmentation) and what page components (lines, blocks, etc.) are affected. Unlike previous schemes, this evaluation method has a canonical representation of ground truth data and guarantees pixel-accurate evaluation results for arbitrary region shapes. Based on a detailed analysis of the errors made by different page segmentation algorithms, this thesis presents a novel combination of the line-based approach by Breuel with the area-based approach of Baird which solves the over-segmentation problem in area-based approaches. This new approach achieves a mean text-line extraction error rate of 4.4% (n=878 documents) on the UW-III dataset, which is the lowest among the analyzed algorithms. This thesis also describes a simple, fast, and accurate system for document image zone classification that results from a detailed comparative analysis of the performance of widely used features in document analysis and content-based image retrieval. Using a novel combination of known algorithms, an error rate of 1.46% (n=13,811 zones) is achieved on the UW-III dataset, in comparison to a state-of-the-art system that reports an error rate of 1.55% (n=24,177 zones) using more complicated techniques.
In addition to layout analysis of Roman script documents, this work also presents the first high-performance layout analysis method for Urdu script. For that purpose, a geometric text-line model for Urdu script is presented. It is shown that the method can accurately extract Urdu text-lines from documents of different layouts like prose books, poetry books, magazines, and newspapers. Finally, this thesis presents a novel algorithm for probabilistic layout analysis that specifically addresses the needs of large-scale digitization projects. The presented approach models known page layouts as a structural mixture model. A probabilistic matching algorithm is presented that gives multiple interpretations of the input layout with associated probabilities. An algorithm based on A* search is presented for finding the most likely layout of a page, given its structural layout model. For training layout models, an EM-like algorithm is presented that is capable of learning the geometric variability of layout structures from data, without the need for a page segmentation ground-truth. Evaluation of the algorithm on documents from the MARG dataset shows an accuracy of above 95% for geometric layout analysis.
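The binarization step at the start of this pipeline can be sketched generically: a local mean threshold whose per-pixel cost is independent of the window size because the window sums come from an integral image. The code below is only a generic mean-threshold sketch (window size and scaling factor are arbitrary choices), not the thesis's algorithm:

```python
import numpy as np

def local_mean_threshold(img, w=15, k=0.85):
    """Binarize a grayscale page image against the mean of a w-by-w window.
    The integral image makes every window sum O(1), independent of w."""
    img = img.astype(np.float64)
    H, W = img.shape
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)  # integral image
    r = w // 2
    y0 = np.clip(np.arange(H) - r, 0, H)[:, None]
    y1 = np.clip(np.arange(H) + r + 1, 0, H)[:, None]
    x0 = np.clip(np.arange(W) - r, 0, W)[None, :]
    x1 = np.clip(np.arange(W) + r + 1, 0, W)[None, :]
    window_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    window_mean = window_sum / ((y1 - y0) * (x1 - x0))
    return (img > k * window_mean).astype(np.uint8)   # 1 = background, 0 = ink
```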

Grey-box modelling deals with models which are able to integrate the following two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After the presentation of some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, in view of their applicability to grey-box modelling. These theories point the way directly to a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation, respectively, of the weight functions and of the mentioned multi-dimensional functions by wavelets; usage of the sparseness of the matrix of the wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.

The theory of two-scale convergence was applied to the homogenization of elasto-plastic composites with a periodic structure and an exponential hardening law. The theory is based on the fact that the elastic as well as the plastic part of the stress field two-scale converges to a limit which factorizes into parts depending only on macroscopic characteristics, represented in terms of the corresponding part of the homogenised stress tensor, and a stress concentration tensor related to the micro-geometry and the elastic or plastic micro-properties of the composite components. The theory was applied in two numerical examples to a metallic matrix material with Ludwik and Hockett-Sherby hardening laws and purely elastic inclusions. The results were compared with results of mechanical averaging based on self-consistent methods.

The main motivation of this contribution is to introduce a computational laboratory to analyse defects and fractures at the sub-micro scale. To this end, we present a continuum-atomistic multiscale algorithm for the analysis of crystalline deformation, i.e. we combine the above-mentioned Cauchy-Born rule within a finite element approximation (FEM) on the continuum region with a molecular dynamics (MD) resolution on the atomistic domain. The aim is twofold: on the one hand, the stability, i.e. the validity of the Cauchy-Born rule and its transition to non-affine deformation at the micron scale, is studied with the help of a molecular dynamics approach to capture fine-scale features; on the other hand, a horizontal FEM/MD coupling, i.e. a continuum-atomistic coupling, is envisaged in order to study representative cases of crystalline defects. To cope with the latter, we introduce a horizontal coupling method for continuum-atomistic analysis.
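The Cauchy-Born rule referred to above assumes that the underlying lattice deforms affinely with the local deformation gradient F, so that the continuum strain energy density is inherited directly from the atomistic potential. Written schematically for a simple pair potential φ and undeformed cell volume Ω₀ (a standard textbook form, not the specific potential of this contribution):

```latex
r_i = F\, R_i, \qquad
W(F) \;=\; \frac{1}{2\,\Omega_0} \sum_{i} \phi\bigl( |F\, R_i| \bigr),
```

where the R_i are the undeformed neighbour (lattice) vectors. The FEM region uses such an energy density, while the MD region resolves the non-affine, defect-dominated deformation for which this assumption breaks down.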

We study the complexity of finding extreme pure Nash equilibria in symmetric network congestion games and analyse how it depends on the graph topology and the number of users. In our context, best and worst equilibria are those with minimum and maximum total latency, respectively. We establish that both problems can be solved by a Greedy algorithm with a suitable tie-breaking rule on parallel links. On series-parallel graphs, finding a worst Nash equilibrium is NP-hard for two or more users, while finding a best one is solvable in polynomial time for two users and NP-hard for three or more. Additionally, we establish NP-hardness in the strong sense for the problem of finding a worst Nash equilibrium on a general acyclic graph.
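On parallel links the Greedy algorithm simply inserts the users one after another, each taking a link of minimal latency after its own insertion; with nondecreasing latencies the resulting assignment is a pure Nash equilibrium. The sketch below shows this skeleton with a fixed smallest-index tie break; the specific tie-breaking rules that steer Greedy towards the best or the worst equilibrium are the ones developed in the paper:

```python
def greedy_assignment(latency_fns, n_users):
    """Insert users one by one on parallel links; each picks a link with
    minimal latency after joining (ties broken by smallest link index)."""
    loads = [0] * len(latency_fns)
    for _ in range(n_users):
        best = min(range(len(latency_fns)),
                   key=lambda e: (latency_fns[e](loads[e] + 1), e))
        loads[best] += 1
    return loads

# two links with latency functions x and 2x, four identical users
print(greedy_assignment([lambda x: x, lambda x: 2 * x], 4))   # -> [3, 1]
```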

The dissertation deals with the application of Hub Location models in public transport planning. The author proposes new mathematical models along with different solution approaches to solve the instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.

Given a directed graph G = (N,A), a tension is a function from A to R which satisfies Kirchhoff's law for voltages. There are two well-known tension problems on graphs. In the minimum cost tension problem (MCT), a cost vector is given and a tension satisfying lower and upper bounds is sought such that the total cost is minimum. In the maximum tension problem (MaxT), the graph contains two special nodes and an arc between them, and the aim is to find the maximum tension on this arc. In this study we assume that both problems are feasible and have finite optimal solutions, and we analyze their inverse versions under rectilinear and Chebyshev distances. In the inverse minimum cost tension problem we adjust the cost parameter to make a given feasible solution optimal, whereas in the inverse maximum tension problem the bounds of the arcs are modified. We show, by extending the results of Ahuja and Orlin (2002), that these inverse tension problems are in a way "dual" to the inverse network flows. We prove that the inverse minimum cost tension problem under the rectilinear norm is equivalent to solving a minimum cost tension problem, while under the unit weight Chebyshev norm it can be solved by finding a minimum mean cost residual cut. Moreover, the inverse maximum tension problem under the rectilinear norm can be solved as a maximum tension problem on the same graph with new arc bounds. Finally, we provide a generalization of the inverse problems to monotropic programming problems with linear costs.
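
For readers unfamiliar with the tension terminology, a compact statement of the minimum cost tension problem in the notation above may help; this is the standard formulation (a tension is the coboundary of a node potential), not copied from the study:

```latex
% Minimum cost tension problem (MCT), standard form: a tension t assigns to
% every arc a = (i,j) the potential difference pi_j - pi_i, which is
% equivalent to Kirchhoff's voltage law around every cycle.
\[
  \min_{t,\pi}\; \sum_{a=(i,j)\in A} c_a\, t_a
  \quad\text{s.t.}\quad
  t_a = \pi_j - \pi_i \;\;\forall\, a=(i,j)\in A, \qquad
  l_a \le t_a \le u_a \;\;\forall\, a\in A .
\]
```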

The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these schemes, the receiver has to extract the corresponding clock from the serial data stream with a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, the transceivers always work in a plesiochronous condition, i.e. an operation with slightly different reference frequencies. The difference in data rates is covered by an elastic buffer. In a data readout system for a particle physics experiment, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the network. A plesiochronous condition in such a network is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication network can also support clock distribution, the system complexity can be reduced considerably. The CDRs on all detector nodes then have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for their local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It has a two-loop topology consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible. The loop bandwidth can be freely adjusted to meet the specified jitter tolerance. A 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock jitter filter loop improves the jitter of the recovered clock. It uses a low-jitter LC voltage controlled oscillator (VCO), and its loop bandwidth is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock jitter filter with LC-VCO were implemented in 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR covers data rates from 1 to 2 Gb/s; its loop bandwidth is configurable from 700 kHz to 4 MHz, and its jitter tolerance complies with the SONET standard. The clock jitter filter has configurable input/output frequencies from 9.191 to 78.125 MHz, and its loop bandwidth is adjustable from 100 kHz to 3 MHz. A high-frequency clock is also available for a serial data transmitter. The CDR with clock jitter filter generates a clock with a jitter of 4.2 ps rms from an incoming serial data stream with inter-symbol-interference jitter of 150 ps peak-to-peak.

Today's high-resolution digital images and videos require large amounts of storage space and transmission bandwidth. To cope with this, compression methods are necessary that reduce the required space while at the same time minimizing visual artifacts. We propose a compression method based on a piecewise linear color interpolation induced by a triangulation of the image domain. We present methods to significantly speed up the optimization process for finding the triangulation. Furthermore, we extend the method to digital videos. Laser scanners to capture the surface of three-dimensional objects are widely used in industry nowadays, e.g., for reverse engineering or quality measurement. Hand-held scanning devices have the advantage that the laser device can be moved to any position, permitting the scan of complex objects. But operating a hand-held laser scanner is challenging: the operator has to keep track of the scanned regions in his mind and gets no feedback on the sample density unless he starts the surface reconstruction after finishing the scan. We present a system to support the operator by computing and rendering high-quality surface meshes of the captured data online, i.e., while he is still scanning, and in real time. Furthermore, it color-codes the rendered surface to reflect the surface quality. Thereby, instant feedback is provided, resulting in better scans in less time.
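
The piecewise linear colour interpolation behind the compression method can be illustrated compactly: the sketch below evaluates the colour inside one triangle from its vertex colours via barycentric coordinates. It shows only the reconstruction step, not the triangulation optimisation, and all names and the example values are assumptions.

```python
import numpy as np

def barycentric_color(p, tri_xy, tri_rgb):
    """Piecewise linear colour interpolation inside one triangle.

    p       : (x, y) query point inside the triangle
    tri_xy  : 3x2 array of vertex coordinates
    tri_rgb : 3x3 array of vertex colours
    Returns the linearly interpolated colour at p."""
    a, b, c = np.asarray(tri_xy, dtype=float)
    # Solve for barycentric coordinates (w_b, w_c); w_a = 1 - w_b - w_c.
    m = np.column_stack((b - a, c - a))
    w_bc = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    w = np.array([1.0 - w_bc.sum(), w_bc[0], w_bc[1]])
    return w @ np.asarray(tri_rgb, dtype=float)

# Example: interpolate the colour at the triangle's centroid.
xy = [(0, 0), (1, 0), (0, 1)]
rgb = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(barycentric_color((1 / 3, 1 / 3), xy, rgb))  # -> [85. 85. 85.]
```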

Open cell foams are a promising and versatile class of porous materials. Open metal foams serve as crash absorbers and catalysts, metal and ceramic foams are used for filtering, and open polymer foams are hidden in everyday items like mattresses or chairs. Due to their high porosity, classical 2d quantitative analysis can give only very limited information about the microstructure of open foams. On the other hand, micro computed tomography (μCT) yields high quality 3d images of open foams. Thus 3d imaging is the method of choice for open cell foams. In this report we summarise a variety of methods for the analysis of the resulting volume images of open foam structures, developed or refined and applied at the Fraunhofer ITWM over the course of nearly ten years: the model-based determination of mean characteristics such as the mean cell volume or the mean strut thickness, which requires only a simple binarisation, as well as the image-analytic cell reconstruction, which yields empirical distributions of cell characteristics.

Minimum Cut Tree Games
(2008)

In this paper we introduce a cooperative game based on the minimum cut tree problem, which is also known as the multi-terminal maximum flow problem. Minimum cut tree games are shown to be totally balanced, and a solution in their core can be obtained in polynomial time. This special core allocation is closely related to the solution of the original graph-theoretical problem. We give an example showing that the game is not supermodular in general; it is, however, supermodular in special cases, and for some of those we give an explicit formula for the calculation of the Shapley value.
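
As a quick way to experiment with the underlying graph problem, a minimum cut (Gomory-Hu) tree of a capacitated graph can be computed with networkx, as sketched below; the game-theoretic core allocation from the paper is not reproduced here, and the small example graph is an arbitrary assumption.

```python
import networkx as nx

# Small undirected graph with edge capacities (a toy example).
G = nx.Graph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=1)
G.add_edge("a", "c", capacity=2)
G.add_edge("c", "d", capacity=4)

# The Gomory-Hu (minimum cut) tree encodes the minimum s-t cut value of
# every node pair as the smallest edge weight on the tree path between them.
T = nx.gomory_hu_tree(G)
for u, v, data in T.edges(data=True):
    print(u, v, data["weight"])
```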

This thesis presents an approach that combines the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The proposed procedure describes a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods, such as the frequently used Craig-Bampton approach, the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets: set 1 consists of deformation snapshots, and set 2 additionally contains velocities and accelerations. For deformation snapshots alone, the error analysis gives no convergence guarantee; when derivatives are included, it yields an error bound that diminishes for small time steps. The numerical results show better behaviour for the derivative snapshot method as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems, especially when using commercial software, it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the full-model computation step that is needed anyway to set up the POD basis (the training step). The nonlinear term of inner forces and the stiffness matrix are output and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string in Matlab and an air spring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
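
A minimal version of the POD basis computation discussed above can be sketched directly: stack the snapshots (optionally including velocity and acceleration snapshots, as in set 2) into a matrix and truncate its SVD. This is a generic sketch under the usual POD definitions, not the thesis's implementation; names and the energy criterion are assumptions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix.

    snapshots : (n_dof, n_snapshots) array; columns may mix deformation,
                velocity and acceleration snapshots (snapshot set 2).
    energy    : fraction of the squared singular value sum to retain.
    Returns (basis, retained_energy).  Generic POD via truncated SVD."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return u[:, :r], cumulative[r - 1]

# Toy usage: random rank-3 snapshots of a 1000-DOF model.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 50))
phi, kept = pod_basis(X)
print(phi.shape, kept)  # e.g. (1000, 3) and a value close to 1.0
```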

This dissertation deals with the optimization of the web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes to be optimized are considered: those related to the quality of the fabric and those describing the stability of the production process. The problem falls within the multicriteria optimization and decision making framework. The functions involved in the model of the process are nonlinear, nonconvex and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to approximate the Pareto frontier numerically, and alternative methods are proposed to navigate this set and support the decision making process. The proposed strategy is applied to a particular production process and numerical results are presented.
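
The exploration step can be mimicked in a few lines with the classical weighted-sum scalarization; this is only a crude stand-in for the exploration-continuation strategy of the thesis (and it cannot reach non-convex parts of the front). The two toy objectives, standing in for fabric quality and process stability, are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting toy objectives (assumptions made for this sketch).
def f_quality(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f_stability(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):
    # Weighted-sum scalarization: minimize a convex combination of objectives.
    res = minimize(lambda x: w * f_quality(x) + (1.0 - w) * f_stability(x),
                   x0=np.zeros(2), method="Nelder-Mead")
    pareto_points.append((f_quality(res.x), f_stability(res.x)))

for p in pareto_points:
    print(p)
```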

In this paper, we propose the first mathematical model for Multi-Period Hub Location Problems (MPHLP). We apply this mixed integer programming model to public transport planning and call it the Multi-Period Hub Location Problem for Public Transport (MPHLPPT). In fact, the HLPPT model proposed earlier by the authors is extended to include more facts and features of the real-life application. In order to solve instances of this problem where existing standard solvers fail, a solution approach based on a greedy neighborhood search is developed. The computational results substantiate the efficiency of our solution approach for solving instances of MPHLPPT.

Structuring global supply chain networks is a complex decision-making process. The typical inputs to such a process consist of a set of customer zones to serve, a set of products to be manufactured and distributed, demand projections for the different customer zones, and information about future conditions, costs (e.g. for production and transportation) and resources (e.g. capacities, available raw materials). Given the above inputs, companies have to decide where to locate new service facilities (e.g. plants, warehouses), how to allocate procurement and production activities to the various manufacturing facilities, and how to manage the transportation of products through the supply chain network in order to satisfy customer demands. We propose a mathematical modelling framework capturing many practical aspects of network design problems simultaneously. For problems of reasonable size we report on computational experience with standard mathematical programming software. The discussion is extended with other decisions required by many real-life applications in strategic supply chain planning. In particular, the multi-period nature of some decisions is addressed by a more comprehensive model, which is solved by a specially tailored heuristic approach. The numerical results suggest that the solution procedure can identify high quality solutions within reasonable computational time.

In many applications of statistics, e.g. in medicine, finance or industry, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modeled as Bernoulli or 0-1 variables. Applications are widespread, including credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated with a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results of Franke and Neumann (2000). We use a neural network based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results of Gombay and Horvath (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of a one-dimensional parameter. Through simulation, we show that the sample size, the change point location and the size of the change influence change point detection. Once a change point has been detected, it is estimated by maximum likelihood. Simulations show that change point estimation is likewise influenced by the sample size, the change point location and the size of the change. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio method and the percentile bootstrap method. In simulations, the percentile bootstrap method proves superior to the profile log-likelihood ratio method.
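
For intuition, the simplest version of the likelihood ratio idea, i.e. without covariates or the neural network regression, scans all candidate change points of a Bernoulli sequence and compares the two-segment fit against the single-segment fit. This is a textbook sketch under these simplifying assumptions, not the thesis's test statistic.

```python
import numpy as np

def bernoulli_loglik(y):
    """Maximized Bernoulli log-likelihood of a 0-1 sample."""
    p = np.clip(np.mean(y), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def lr_changepoint(y):
    """Scan all split points and return (max LR statistic, argmax index).

    The statistic is 2 * (two-segment log-likelihood minus one-segment
    log-likelihood); large values indicate a change in the success
    probability.  Plain Bernoulli means only, no covariates or networks."""
    base = bernoulli_loglik(y)
    best_stat, best_k = -np.inf, None
    for k in range(1, len(y)):
        stat = 2 * (bernoulli_loglik(y[:k]) + bernoulli_loglik(y[k:]) - base)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k

# Toy data: success probability jumps from 0.2 to 0.7 after observation 100.
rng = np.random.default_rng(2)
y = np.concatenate([rng.binomial(1, 0.2, 100), rng.binomial(1, 0.7, 80)])
print(lr_changepoint(y))
```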

Noughts and Crosses
(2008)

The Comparative Manifesto Project (CMP) dataset is the only dataset providing information about the positions of parties for comparative researchers across time and countries. This article evaluates its structure and finds a peculiarity: a high number of zeros and their unequal distribution across items, countries and time. They influence the results of any procedure to build a scale, but especially those using factor analyses. The article shows that zeros have different meanings. Firstly, there are substantial zeros in line with saliency theory. Secondly, zeros exist for non-substantial reasons: the length of a manifesto and the percentage of uncoded sentences, both strongly varying across time and country. We quantify the problem and propose a procedure to identify data points containing non-substantial zeros. For the future comparative use of the dataset we plead for a theoretical selection of items combined with information about the likelihood that zeros are substantially meaningful.

A modular level set algorithm is developed to study the interface and its movement for free moving boundary problems. The algorithm is divided into three basic modules: initialization, propagation and contouring. Initialization is the process of finding the signed distance function from closed objects. We discuss here a methodology to find an accurate signed distance function from a closed, simply connected surface discretized by triangulation. We compute the signed distance function using the direct method and store it efficiently in the neighborhood of the interface by a narrow band level set method. A novel approach is employed to determine the correct sign of the distance function at convex-concave junctions of the surface. The accuracy and convergence of the method with respect to the surface resolution are studied. It is shown that the efficient organization of the surface and narrow band data structures enables the solution of large industrial problems. We also compare the accuracy of the signed distance function obtained by the direct approach with the Fast Marching Method (FMM) and find that the direct approach is more accurate. Contouring is performed through a variant of the marching cube algorithm used for isosurface construction from volumetric data sets. The algorithm is designed to keep foreground and background information consistent, contrary to the neutrality principle followed for surface rendering in computer graphics. It ensures that the isosurface triangulation is closed, non-degenerate and non-ambiguous, and the constructed triangulation has the desirable properties required for the generation of good volume meshes. These volume meshes are used in the boundary element method for the study of linear electrostatics. For estimating surface properties like interface position, normal and curvature accurately from a discrete level set function, a method based on higher order weighted least squares is developed. The least squares approach is found to be more accurate than finite difference approximation and requires a more compact stencil than finite difference schemes. The accuracy and convergence of the method depend on the surface resolution and the discrete mesh width. This approach is used in the propagation module for the study of mean curvature flow and bubble dynamics. Its advantage is that the curvature is not discretized explicitly on the grid but is estimated on the interface. The method of constant velocity extension is employed for the propagation of the interface. With the least squares approach, the mean curvature flow shows a considerable reduction in mass loss compared to finite difference techniques. In the bubble dynamics application, the modules are used to study a bubble under the influence of surface tension forces in order to validate the Young-Laplace law. It is found that the order of the curvature estimation plays a crucial role in calculating an accurate pressure difference between the inside and the outside of the bubble. Furthermore, we study the coalescence of two bubbles under surface tension forces. The application of these modules to various industrial problems is discussed.
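
The least-squares estimation of surface properties from discrete level set values can be sketched in two dimensions: fit a local quadratic to the level set function around a grid node and evaluate the curvature from the fitted derivatives. This is a simplified, unweighted 2D illustration of the idea, not the thesis's higher order weighted method; stencil size and the test case are assumptions.

```python
import numpy as np

def ls_curvature_2d(phi, i, j, h, radius=2):
    """Estimate the level-set curvature at grid node (i, j) by a local
    least-squares quadratic fit to the level set function phi.

    A quadratic p(x, y) = a + b x + c y + d x^2 + e x y + f y^2 is fitted to
    the values of phi in a (2*radius+1)^2 stencil with grid spacing h, and
    kappa = (p_xx p_y^2 - 2 p_x p_y p_xy + p_yy p_x^2) / (p_x^2 + p_y^2)^1.5
    is evaluated at the stencil centre."""
    rows, rhs = [], []
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            x, y = di * h, dj * h
            rows.append([1.0, x, y, x * x, x * y, y * y])
            rhs.append(phi[i + di, j + dj])
    coeffs = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    _, b, c, d, e, f = coeffs
    px, py, pxx, pxy, pyy = b, c, 2 * d, e, 2 * f
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / (px**2 + py**2) ** 1.5

# Verification on a circle of radius 0.5: the exact curvature is 1/0.5 = 2.
h = 0.02
x = np.arange(-1, 1, h)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5            # signed distance to the circle
i, j = np.argmin(np.abs(x - 0.5)), np.argmin(np.abs(x - 0.0))
print(ls_curvature_2d(phi, i, j, h))        # close to 2.0
```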

On Abstract Shapes of RNA
(2008)

As any RNA sequence can be folded in many different ways, there are many different possible secondary structures for a given sequence. Most computational prediction methods based on free energy minimization compute a number of suboptimal foldings, and we have to identify the native structures among all these possible secondary structures. For this reason, much effort has been made to develop approaches for identifying good predictions of RNA secondary structure. Using the abstract shapes approach as introduced by Giegerich et al., each class of similar secondary structures is represented by one shape, and the native structures can be found among the top shape representatives. In this article, we derive some interesting results answering enumeration problems for abstract shapes and secondary structures of RNA. We start by computing asymptotic representations for the number of shape representations of length n. Our main goal is to find out how much the search space can be reduced by using the concept of abstract shapes. To reach this goal, we analytically analyze the number of secondary structures and shapes compatible with an RNA sequence of length n under the assumption that base pairing is allowed between arbitrary pairs of bases, and compare their exponential growths. Additionally, we analyze the number of secondary structures compatible with an RNA sequence of length n under the assumptions that base pairing is allowed only between certain pairs of bases and that the structures meet some appropriate conditions. The exponential growth factors of the resulting asymptotics are compared to the corresponding experimentally obtained value as given by Giegerich et al.
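
The growth of the search space of plain secondary structures (before any shape abstraction) can be reproduced with the standard Stein-Waterman style recursion: the last base is either unpaired or paired with some earlier base, splitting the sequence into an inner and an outer part. The sketch below counts structures under the assumption that any two bases may pair and that hairpin loops contain at least three unpaired bases; it illustrates the combinatorics only and does not implement the shape-counting analysis of the article.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_structures(n, min_loop=3):
    """Number of secondary structures on n bases when any two bases may pair
    (no crossing pairs) and every hairpin loop contains at least min_loop
    unpaired bases.  Standard recursion over the fate of the last base."""
    if n <= min_loop + 1:
        return 1
    # Last base unpaired, or paired with base k (1-indexed, k <= n-min_loop-1).
    total = count_structures(n - 1, min_loop)
    for k in range(1, n - min_loop):
        total += count_structures(k - 1, min_loop) * count_structures(n - k - 1, min_loop)
    return total

for n in (10, 20, 40, 80):
    print(n, count_structures(n))
```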

This paper provides a brief overview of two linear inverse problems concerned with the determination of the Earth’s interior: inverse gravimetry and normal mode tomography. Moreover, a vector spline method is proposed for a combined solution of both problems. This method uses localised basis functions, which are based on reproducing kernels, and is related to approaches which have been successfully applied to the inverse gravimetric problem and the seismic traveltime tomography separately.

This article focuses on the analytical analysis of the free energy in a realistic model for RNA secondary structures. In fact, the free energy in a stochastic model derived from a database of small and large subunit ribosomal RNA (SSU and LSU rRNA) data is studied. A common thermodynamic model for computing the free energy of a given RNA secondary structure, as well as stochastic context-free grammars and generating functions, are used to derive the desired results. These results include asymptotics for the expected free energy and for the corresponding variance of a random RNA secondary structure. The quality of our model is judged by comparing the derived results to the underlying database of SSU and LSU rRNA data. At the end of this article, we discuss how our results could be used to help identify good predictions of RNA secondary structure.

We present an optimal control approach for the isothermal film casting process with free surfaces described by averaged Navier-Stokes equations. We control the thickness of the film at the take-up point using the shape of the nozzle. The control goal consists in finding an even thickness profile. To achieve this goal, we minimize an appropriate cost functional. The resulting minimization problem is solved numerically by a steepest descent method. The gradient of the cost functional is approximated using the adjoint variables of the problem with fixed film width. Numerical simulations show the applicability of the proposed method.
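
The optimization loop described above follows the usual adjoint-based steepest descent pattern: a forward solve, a gradient obtained from the adjoint variables, and a descent step. The skeleton below shows that pattern on a purely synthetic linear stand-in (the "nozzle shape" is mapped by a random linear operator to a "thickness profile"); all operators, names and parameters are assumptions, not the paper's film casting equations.

```python
import numpy as np

# Toy stand-in for the control problem: control q ("nozzle shape") is mapped
# by a linear forward operator S to a "thickness profile" S @ q, and we
# minimize the squared mismatch to an even target profile by steepest descent
# with the adjoint-based gradient S.T @ residual.
rng = np.random.default_rng(3)
n_ctrl, n_obs = 20, 40
S = rng.standard_normal((n_obs, n_ctrl)) * 0.1 + np.ones((n_obs, n_ctrl)) / n_ctrl
target = np.ones(n_obs)                          # even thickness profile

def cost(q):
    return 0.5 * np.sum((S @ q - target) ** 2)

q = np.zeros(n_ctrl)
step = 1.0 / np.linalg.norm(S, 2) ** 2           # safe fixed step for this toy
for it in range(200):
    residual = S @ q - target                    # forward problem
    grad = S.T @ residual                        # adjoint-based gradient
    q -= step * grad                             # steepest descent update
    if np.linalg.norm(grad) < 1e-8:
        break

print(it, cost(q))
```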