### Refine

#### Year of publication

- 2005 (68)

#### Document Type

- Doctoral Thesis (31)
- Report (15)
- Preprint (14)
- Diploma Thesis (4)
- Conference Proceeding (1)
- Master's Thesis (1)
- Periodical Part (1)
- Working Paper (1)

#### Language

- English (68)

#### Has Fulltext

- yes (68)

#### Keywords

- Mehrskalenanalyse (3)
- Wavelet (3)
- Approximation (2)
- Computeralgebra (2)
- Elastoplastizität (2)
- Galerkin-Methode (2)
- Geometric Ergodicity (2)
- Jiang's model (2)
- Jiang-Modell (2)
- Mobilfunk (2)
- Modellierung (2)
- Navier-Stokes-Gleichung (2)
- Poisson-Gleichung (2)
- Randwertproblem / Schiefe Ableitung (2)
- Sobolev-Raum (2)
- air interface (2)
- Ableitung höherer Ordnung (1)
- Aggregation (1)
- Algebraic dependence of commuting elements (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraische Geometrie (1)
- Algorithmus (1)
- Apoptosis (1)
- Arc distance (1)
- Ascorbat (1)
- Ascorbinsäure (1)
- Ascorbylradikal (1)
- Audiodeskription (1)
- Ausfallrisiko (1)
- Automatische Differentiation (1)
- Automatische Klassifikation (1)
- Automatisches Beweisverfahren (1)
- Barriers (1)
- Basisband (1)
- Bernstejn-Polynom (1)
- Betriebsfestigkeit (1)
- Blattschneiderameisen (1)
- Bottom-up (1)
- Boundary Value Problem (1)
- Box Algorithms (1)
- CHAMP <Satellitenmission> (1)
- Channel estimation (1)
- Computer Algebra System (1)
- Computeralgebra System (1)
- Container (1)
- Crane (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Das Urbild von Ideal unter einen Morphismus der Algebren (1)
- Derivatives (1)
- Differentialinklusionen (1)
- Diffusionskoeffizient (1)
- Diffusionsmessung (1)
- Diffusionsmodell (1)
- Digitalmodulation (1)
- Discrete Bicriteria Optimization (1)
- Domänenumklappen (1)
- Dynamic Network Flow Problem (1)
- Dynamische Topographie (1)
- EM algorithm (1)
- EPR (1)
- ESR (1)
- Effizienter Algorithmus (1)
- Elastizität (1)
- Elastoplasticity (1)
- Elektronenspinresonanz (1)
- Eliminationsverfahren (1)
- Empfängerorientierung (1)
- Evacuation Planning (1)
- Extreme Events (1)
- FPM (1)
- Feedforward Neural Networks (1)
- Filippov theory (1)
- Filippov-Theorie (1)
- Filtergesetz (1)
- Finite Elemente Methode (1)
- Finite Pointset Method (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- Flooding (1)
- Folgar-Tucker model (1)
- GARCH (1)
- GARCH Modelle (1)
- GOCE <Satellitenmission> (1)
- GOCE <satellite mission> (1)
- GRACE (1)
- GRACE <Satellitenmission> (1)
- GRACE <satellite mission> (1)
- Gemeinsame Kanalschaetzung (1)
- Geodäsie (1)
- Geodätischer Satellit (1)
- Geographical Information Systems (1)
- Geometrische Ergodizität (1)
- Gravitational Field (1)
- Gravitationsfeld (1)
- Gröbner-Basis (1)
- Harmonische Spline-Funktion (1)
- Hidden Markov models for Financial Time Series (1)
- Higher Order Differentials as Boundary Data (1)
- Homogenisierung <Mathematik> (1)
- Hydrological Gravity Variations (1)
- Hydrologie (1)
- Implementierung (1)
- Intensität (1)
- Inverses Problem (1)
- Kanalschätzung (1)
- Knuth-Bendix completion (1)
- Knuth-Bendix-Vervollständigung (1)
- Kombinatorik (1)
- Kommutative Algebra (1)
- Konstruktive Approximation (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Kreditderivaten (1)
- Kugel (1)
- Kugelflächenfunktion (1)
- Kugelfunktion (1)
- Large-Scale Problems (1)
- Lattice-Boltzmann method (1)
- Lineare Elastizitätstheorie (1)
- Lokalkompakte Kerne (1)
- Luftschnittstellen (1)
- MIDI <Musikelektronik> (1)
- MIMO (1)
- MIMO-Antennen (1)
- MIR (1)
- MP3 (1)
- Maschinelles Lernen (1)
- Mathematical Physics (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Mehrtraegeruebertragungsverfahren (1)
- Mikroelektronik (1)
- Minimum Cost Network Flow Problem (1)
- Modulationsübertragungsfunktion (1)
- Morphismus (1)
- Multileaf collimator (1)
- Multiobjective programming (1)
- Multiple objective optimization (1)
- Music Information Retrieval (1)
- Musik / Artes liberales (1)
- Nanocomposites (1)
- Nekrose (1)
- Netzwerksynthese (1)
- New Towns (1)
- Nichtkommutative Algebra (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nonlinear multigrid (1)
- Numerische Mathematik (1)
- OFDM (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- Papiermaschine (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Phasengleichgewicht (1)
- Poisson line process (1)
- Polymere (1)
- Portfoliomanagement (1)
- Poröser Stoff (1)
- Preimage of an ideal under a morphism of algebras (1)
- ROS (1)
- Ratenabhängigkeit (1)
- Reaktive Sauerstoffspezies (1)
- Redundanzvermeidung (1)
- Regularisierung (1)
- Representation Theory (1)
- Restricted Regions (1)
- Rhabdomyolyse (1)
- Richtungsableitung (1)
- Scheduling (1)
- Sensitivitäten (1)
- Simulation (1)
- Skelettmuskel (1)
- Software-Architektur (1)
- Spannungs-Dehn (1)
- Spherical Harmonics (1)
- Spherical Location Problem (1)
- Spherical Wavelets (1)
- Sphäre (1)
- Sphärische Wavelets (1)
- Spline-Wavelets (1)
- Split Operator (1)
- Split-Operator (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Statine (1)
- Stochastisches Feld (1)
- Test for Changepoint (1)
- Thermodynamik (1)
- Time Series (1)
- Tonsignal (1)
- Transportation Problem (1)
- Tropenökologie (1)
- Unschärferelation (1)
- Upwind-Verfahren (1)
- Verbundwerkstoffe (1)
- Viskosität (1)
- Vitamin C (1)
- Vitamin C-Derivate (1)
- Wahrscheinlichkeitsfunktion (1)
- Waldfragmentierung (1)
- Wavelet-Analyse (1)
- Wavelets auf der Kugel und der Sphäre (1)
- Zeitliche Veränderungen (1)
- acoustic absorption (1)
- adaptive refinement (1)
- air drag (1)
- algebraic constraints (1)
- analoge Mikroelektronik (1)
- apoptosis (1)
- ascorbate (1)
- ascorbic acid (1)
- ascorbyl radical (1)
- automated theorem proving (1)
- automatic differentiation (1)
- ball (1)
- beyond 3G (1)
- bottom-up (1)
- combinatorics (1)
- composite materials (1)
- computeralgebra (1)
- constructive approximation (1)
- default time (1)
- derivative-free iterative method (1)
- differential inclusions (1)
- diffusion coefficient (1)
- diffusion measurement (1)
- diffusion model (1)
- domain switching (1)
- durability (1)
- dynamical topography (1)
- effective elastic moduli (1)
- efficient solution (1)
- elastoplasticity (1)
- epsilon-constraint method (1)
- explicit jump immersed interface method (1)
- extreme solutions (1)
- face value (1)
- facets (1)
- fiber orientation (1)
- fiber-turbulence interaction scales (1)
- finite element method (1)
- flexible fibers (1)
- float glass (1)
- flow resistivity (1)
- forest fragmentation (1)
- heat radiation (1)
- hub covering (1)
- hub location (1)
- implementation (1)
- initial temperature (1)
- initial temperature reconstruction (1)
- integer programming (1)
- intensity (1)
- invariants (1)
- inverse problem (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- large scale integer programming (1)
- leaf-cutting ants (1)
- level-set (1)
- linear kinetics theory (1)
- lineare kinetische Theorie (1)
- locally compact kernels (1)
- lokalisierende Kerne (1)
- mehreren Uebertragungszweigen (1)
- mixed convection (1)
- mobile radio (1)
- multi-carrier (1)
- multi-user (1)
- multicriteria optimization (1)
- necrosis (1)
- network flows (1)
- network synthesis (1)
- nichtlineare Netzwerke (1)
- non-Newtonian flow in porous media (1)
- non-conventional (1)
- non-woven (1)
- nonlinear circuits (1)
- nonlinear heat equation (1)
- nonlinear inverse problem (1)
- numerics (1)
- optimization (1)
- optimization algorithms (1)
- phase space (1)
- political districting (1)
- portfolio (1)
- probabilistic approach (1)
- properly efficient solution (1)
- radiative heat transfer (1)
- random -Gaussian aerodynamic force (1)
- random system of fibers (1)
- rate-dependency (1)
- real-time (1)
- receiver orientation (1)
- regular surface (1)
- regularization (1)
- reguläre Fläche (1)
- representative systems (1)
- rhabdomyolysis (1)
- sales territory alignment (1)
- scalarization (1)
- sensitivities (1)
- service area (1)
- shape optimization (1)
- simulation (1)
- skeletal muscle cells (1)
- spline-wavelets (1)
- statin (1)
- stochastic dif (1)
- superposed fluids (1)
- territory design (1)
- thermodynamic model (1)
- topological sensitivity (1)
- topology optimization (1)
- trace stability (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- tropical ecology (1)
- tropical rainforest (1)
- tropischer Regenwald (1)
- urban elevation (1)
- virtual material design (1)
- viscosity model (1)
- white noise (1)
- wireless communications system (1)


A method to correct the elastic stress tensor at a fixed point of an elastoplastic body, which is subject to exterior loads, is presented and analysed. In contrast to uniaxial corrections (Neuber or ESED), our method takes multiaxial phenomena like ratchetting or cyclic hardening/softening into account by use of Jiang's model. Our numerical algorithm is designed for the case that the scalar load functions are piecewise linear and can be used in connection with critical plane/multiaxial rainflow methods in high cycle fatigue analysis. In addition, a local existence and uniqueness result of Jiang's equations is given.

The level-set method has been recently introduced in the field of shape optimization, enabling a smooth representation of the boundaries on a fixed mesh and therefore leading to fast numerical algorithms. However, most of these algorithms use a Hamilton-Jacobi equation to connect the evolution of the level-set function with the deformation of the contours, and consequently they cannot create any new holes in the domain (at least in 2D). In this work, we propose an evolution equation for the level-set function based on a generalization of the concept of topological gradient. This results in a new algorithm allowing for all kinds of topology changes.
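
The nucleation mechanism described above can be illustrated with a small toy sketch (this is not the authors' algorithm; the grid, the source term standing in for the topological gradient, and all parameters are invented for illustration):

```python
import numpy as np

# Toy illustration: a pointwise update of the level-set function psi by a
# source term g playing the role of a topological gradient. A Hamilton-Jacobi
# transport step only moves the existing boundary, whereas psi <- psi - dt*g
# can flip the sign of psi in the interior and thus nucleate a new hole.
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
psi = 0.8 - np.sqrt(X**2 + Y**2)        # psi > 0 inside a disk of radius 0.8

g = np.exp(-(X**2 + Y**2) / 0.05)       # assumed "remove material here" term
dt = 0.5
for _ in range(10):
    psi = psi - dt * g

inside = psi > 0
has_hole = (not inside[n // 2, n // 2]) and inside[n // 2, 85]
print(bool(has_hole))                   # a hole has appeared at the center
```

The point of the sketch is only the sign flip away from the existing boundary, which a pure transport of the contour cannot produce.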

By means of the limit and jump relations of classical potential theory with respect to the vectorial Helmholtz equation, a wavelet approach is established on a regular surface. The multiscale procedure is constructed in such a way that the emerging scalar, vectorial and tensorial potential kernels act as scaling functions. Corresponding wavelets are defined via a canonical refinement equation. A tree algorithm for the fast decomposition of a complex-valued vector field given on a regular surface is developed based on numerical integration rules. By virtue of this tree algorithm, an efficient numerical method for the solution of vectorial Fredholm integral equations on regular surfaces is discussed in more detail. The resulting multiscale formulation is used to solve boundary-value problems for the time-harmonic Maxwell equations corresponding to regular surfaces.

This document introduces the extension of Katja to support position structures and explains the subtleties of their application as well as the design decisions made and problems solved with respect to their implementation. The Katja system was first introduced by Jan Schäfer in the context of his project work and is based on the MAX system developed by Arnd Poetzsch-Heffter.

Automated theorem proving is a search problem and, by its undecidability, a very difficult one. The challenge in the development of a practically successful prover is the mapping of the extensively developed theory into a program that runs efficiently on a computer. Starting from a level-based system model for automated theorem provers, in this work we present different techniques that are important for the development of powerful equational theorem provers. The contributions can be divided into three areas: Architecture. We present a novel prover architecture that is based on a set-based compression scheme. With moderate additional computational costs we achieve a substantial reduction of the memory requirements. Further wins are architectural clarity, the easy provision of proof objects, and a new way to parallelize a prover which shows respectable speed-ups in practice. The compact representation paves the way to new applications of automated equational provers in the area of verification systems. Algorithms. To improve the speed of a prover we need efficient solutions for the most time-consuming sub-tasks. We demonstrate improvements of several orders of magnitude for two of the most widely used term orderings, LPO and KBO. Other important contributions are a novel generic unsatisfiability test for ordering constraints and, based on that, a sufficient ground reducibility criterion with an excellent cost-benefit ratio. Redundancy avoidance. The notion of redundancy is of central importance to justify simplifying inferences which are used to prune the search space. In our experience with unfailing completion, the usual notion of redundancy is not strong enough. In the presence of associativity and commutativity, the provers often get stuck enumerating equations that are permutations of each other. By extending and refining the proof ordering, many more equations can be shown redundant. 
Furthermore, our refinement of the unfailing completion approach allows us to use redundant equations for simplification without the need to consider them for generating inferences. We describe the efficient implementation of several redundancy criteria and experimentally investigate their influence on the proof search. The combination of these techniques results in a considerable improvement of the practical performance of a prover, which we demonstrate with extensive experiments for the automated theorem prover Waldmeister. The progress achieved allows the prover to solve problems that were previously out of reach. This considerably enhances the potential of the prover and opens up the way for new applications.
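
As a minimal illustration of the term orderings whose implementation the thesis speeds up, here is a Knuth-Bendix ordering (KBO) restricted to ground terms; the weights and the precedence are arbitrary assumptions, and the variable condition of the full KBO is omitted:

```python
# Terms are nested tuples: ('f', t1, t2, ...). Weights and precedence below
# are illustrative choices, not taken from the thesis.
WEIGHT = {'f': 1, 'g': 1, 'a': 1, 'b': 1}
PREC = {'a': 0, 'b': 1, 'g': 2, 'f': 3}          # a < b < g < f

def weight(t):
    """Total symbol weight of a term."""
    return WEIGHT[t[0]] + sum(weight(s) for s in t[1:])

def kbo_greater(s, t):
    """True iff s > t in the (ground) Knuth-Bendix ordering."""
    ws, wt = weight(s), weight(t)
    if ws != wt:                                  # 1. compare weights
        return ws > wt
    if PREC[s[0]] != PREC[t[0]]:                  # 2. compare head symbols
        return PREC[s[0]] > PREC[t[0]]
    for si, ti in zip(s[1:], t[1:]):              # 3. lexicographic on args
        if si != ti:
            return kbo_greater(si, ti)
    return False

# f(g(a)) vs g(b): weights 3 vs 2, so the first term is larger.
print(kbo_greater(('f', ('g', ('a',))), ('g', ('b',))))
```

In a prover such comparisons are executed millions of times, which is why constant-factor improvements to them matter so much in practice.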

Aggregation of Large-Scale Network Flow Problems with Application to Evacuation Planning at SAP
(2005)

Our initial situation is as follows: the blueprint of the ground floor of SAP’s main building, the EVZ, is given, and the open question is how mathematics can support the evacuation planning process. To model evacuation processes, both in advance and for existing buildings, two kinds of models can be considered: macroscopic and microscopic models. Microscopic models emphasize the individual movement of evacuees. These models consider individual parameters such as walking speed, reaction time or physical abilities, as well as the interaction of evacuees during the evacuation process. Because microscopic models require a lot of data, they are usually implemented as simulations; most current approaches are based on cellular automata. In contrast, macroscopic models do not consider individual parameters such as the physical abilities of the evacuees. This means that the evacuees are treated as a homogeneous group for which only common characteristics are considered; an average human being is assumed. Since far less data is available than in the microscopic case, macroscopic models are mainly based on optimization approaches. In most cases, a building or any other evacuation object is represented by a static network. A time horizon T is added in order to describe the evolution of the evacuation process over time. Connecting these two components, we finally obtain a dynamic network. Based on this network, dynamic network flow problems can be formulated which map evacuation processes. We focus on the macroscopic model in this thesis. Our main concern in the transfer from the real-world problem (i.e. supporting evacuation planning) is the modeling of the blueprint as a dynamic network.
After modeling the blueprint as a dynamic network, it is straightforward to formulate a dynamic network flow problem, the so-called evacuation problem, which seeks an optimal evacuation time. However, we have to solve a static large-scale network flow problem to derive a solution for this formulation. In order to reduce the network size, we examine the possibility of applying aggregation to the evacuation problem. Aggregation (from Latin aggregare, "to pile up, attach", and aggregatio, "accumulation, union": the act of gathering something together) was originally used to reduce the size of general large-scale linear or integer programs. The results gained for the general problem definitions were then applied to the transportation problem and the minimum cost network flow problem. We review this theory in detail and examine how the results derived there can be used for the evacuation problem as well.
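
The macroscopic construction described above, expanding a static building graph over a time horizon T and maximizing the flow of evacuees into the exit, can be sketched in a few lines. The toy building graph, capacities and travel times are invented, and a plain Edmonds-Karp max-flow stands in for whatever solver the thesis actually uses:

```python
from collections import deque, defaultdict

static_arcs = {                     # (from, to): (capacity per step, travel time)
    ('room', 'hall'): (2, 1),
    ('hall', 'exit'): (1, 1),
}
T = 4                               # time horizon

cap = defaultdict(lambda: defaultdict(int))
for (u, v), (c, tau) in static_arcs.items():        # movement arcs, copied per step
    for t in range(T - tau + 1):
        cap[(u, t)][(v, t + tau)] += c
nodes = {u for (u, v) in static_arcs} | {v for (u, v) in static_arcs}
for node in nodes:                                  # holdover (waiting) arcs
    for t in range(T):
        cap[(node, t)][(node, t + 1)] += 10**9

source, sink = 'S', 'X'
cap[source][('room', 0)] += 10**9                   # evacuees start in 'room'
for t in range(T + 1):
    cap[('exit', t)][sink] += 10**9                 # reaching 'exit' at any time counts

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)         # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

total = max_flow(cap, source, sink)
print(total)    # evacuees that can reach the exit within T = 4 steps
```

The time-expanded graph is exactly where the size explosion comes from: every static node and arc is copied once per time step, which is what motivates the aggregation techniques the thesis reviews.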

This thesis contains the mathematical treatment of a special class of analog microelectronic circuits called translinear circuits. The goal is to provide foundations of a new coherent synthesis approach for this class of circuits. The mathematical methods of the suggested synthesis approach come from graph theory, combinatorics, and from algebraic geometry, in particular symbolic methods from computer algebra. Translinear circuits form a very special class of analog circuits, because they rely on nonlinear device models, but still allow a very structured approach to network analysis and synthesis. Thus, translinear circuits play the role of a bridge between the "unknown space" of nonlinear circuit theory and the very well exploited domain of linear circuit theory. The nonlinear equations describing the behavior of translinear circuits possess a strong algebraic structure that is nonetheless flexible enough for a wide range of nonlinear functionality. Furthermore, translinear circuits offer several technical advantages like high functional density, low supply voltage and insensitivity to temperature. This unique profile is the reason that several authors consider translinear networks as the key to systematic synthesis methods for nonlinear circuits. The thesis proposes the usage of a computer-generated catalog of translinear network topologies as a synthesis tool. The idea to compile such a catalog has grown from the observation that on the one hand, the topology of a translinear network must satisfy strong constraints which severely limit the number of "admissible" topologies, in particular for networks with few transistors, and on the other hand, the topology of a translinear network already fixes its essential behavior, at least for static networks, because the so-called translinear principle requires the continuous parameters of all transistors to be the same. 
Even though the admissible topologies are heavily restricted, it is a highly nontrivial task to compile such a catalog. Combinatorial techniques have been adapted to undertake this task. In a catalog of translinear network topologies, prototype network equations can be stored along with each topology. When a circuit with a specified behavior is to be designed, one can search the catalog for a network whose equations can be matched with the desired behavior. In this context, two algebraic problems arise: To set up a meaningful equation for a network in the catalog, an elimination of variables must be performed, and to test whether a prototype equation from the catalog and a specified equation of desired behavior can be "matched", a complex system of polynomial equations must be solved, where the solutions are restricted to a finite set of integers. Sophisticated algorithms from computer algebra are applied in both cases to perform the symbolic computations. All mentioned algorithms have been implemented using C++, Singular, and Mathematica, and are successfully applied to actual design problems of humidity sensor circuitry at Analog Microelectronics GmbH, Mainz. As result of the research conducted, an exhaustive catalog of all static formal translinear networks with at most eight transistors is available. The application for the humidity sensor system proves the applicability of the developed synthesis approach. The details and implementations of the algorithms are worked out only for static networks, but can easily be adopted for dynamic networks as well. While the implementation of the combinatorial algorithms is stand-alone software written "from scratch" in C++, the implementation of the algebraic algorithms, namely the symbolic treatment of the network equations and the match finding, heavily rely on the sophisticated Gröbner basis engine of Singular and thus on more than a decade of experience contained in a special-purpose computer algebra system. 
It should be pointed out that the thesis contains the new observation that the translinear loop equations of a translinear network are precisely represented by the toric ideal of the network's translinear digraph. Altogether, this thesis confirms and strengthens the key role of translinear circuits as systematically designable nonlinear circuits.

Annual Report 2004
(2005)

Annual Report, Jahrbuch AG Magnetismus

Competing Neural Networks as Models for Non Stationary Financial Time Series -Changepoint Analysis-
(2005)

The problem of structural changes (variations) plays a central role in many scientific fields. One of the most topical debates is the one about climatic change; politicians, environmentalists, scientists, etc. are involved in it, and almost everyone is concerned with the consequences of climatic changes. In this thesis, however, we will not move in that direction, i.e. the study of climatic changes. Instead, we consider models for analyzing changes in the dynamics of observed time series, assuming these changes are driven by a non-observable stochastic process. To this end, we consider a first-order stationary Markov chain as hidden process and define the Generalized Mixture of AR-ARCH model (GMAR-ARCH), an extension of the classical ARCH model that accommodates dynamical changes. For this model we provide sufficient conditions that ensure its geometric ergodicity. Further, we define a conditional likelihood given the hidden process and, in turn, a pseudo conditional likelihood. For the pseudo conditional likelihood we assume that at each time instant the autoregressive and volatility functions can be suitably approximated by given feedforward networks. Under this setting the consistency of the parameter estimates is derived, and versions of the well-known Expectation-Maximization algorithm and the Viterbi algorithm are designed to solve the problem numerically. Moreover, taking the volatility functions to be constant, we establish the consistency of the autoregressive function estimates for some parametric classes of functions in general and some classes of single-layer feedforward networks in particular. Besides this hidden-Markov-driven model, we define as an alternative a weighted least squares approach for estimating the time of change and the autoregressive functions.
For the latter formulation, we consider a mixture of independent nonlinear autoregressive processes and assume once more that the autoregressive functions can be approximated by given single-layer feedforward networks. We derive the consistency and asymptotic normality of the parameter estimates. Further, we prove the convergence of backpropagation for this setting under some regularity assumptions. Last but not least, we consider a mixture of nonlinear autoregressive processes with a single abrupt unknown changepoint and design a statistical test that can validate such changes.
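
The Expectation-Maximization step mentioned above can be illustrated on the simplest related case, a two-component Gaussian mixture, where the mixture label plays the role of the hidden regime; the data and all starting values below are synthetic, and the full AR-ARCH machinery of the thesis is deliberately left out:

```python
import math, random

# Synthetic two-regime data: 200 points around -2, 200 points around 3.
random.seed(0)
data = [random.gauss(-2.0, 0.5) for _ in range(200)] + \
       [random.gauss(3.0, 0.5) for _ in range(200)]

mu = [-1.0, 1.0]                 # initial guesses (assumed)
sigma, pi = [1.0, 1.0], [0.5, 0.5]

def phi(x, m, s):
    """Gaussian density."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(50):
    # E-step: posterior responsibility of each component for each observation
    resp = []
    for x in data:
        w = [pi[k] * phi(x, mu[k], sigma[k]) for k in range(2)]
        z = sum(w)
        resp.append([wk / z for wk in w])
    # M-step: re-estimate mixture weights, means and standard deviations
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk)

print(mu)    # the estimated means approach -2 and 3
```

In the thesis the E-step additionally has to respect the Markov dependence of the hidden states (via forward-backward recursions), and the M-step fits feedforward networks instead of scalar means.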

Within the last decades, a remarkable development in materials science has taken place: nowadays, materials are not merely constructed as inert structures but rather designed for certain predefined functions. This innovation was accompanied by the appearance of smart materials with reliable recognition, discrimination and capability of action as well as reaction. Even though ferroelectric materials serve smartly in real applications, they are also subject to several restrictions in high-performance usage. The behavior of these materials is almost linear under the action of low electric fields or low mechanical stresses, but exhibits a strongly nonlinear response under high electric fields or mechanical stresses. High electromechanical loading conditions result in a change of the spontaneous polarization direction within individual domains, which is commonly referred to as domain switching. The aim of the present work is to develop a three-dimensional coupled finite element model to study the rate-independent and rate-dependent behavior of piezoelectric materials, including domain switching, based on a micromechanical approach. The proposed model is first elaborated within a two-dimensional finite element setting for piezoelectric materials. Subsequently, the developed two-dimensional model is extended to the three-dimensional case. This work starts with developing a micromechanical model for ferroelectric materials. Ferroelectric materials exhibit ferroelectric domain switching, which refers to the reorientation of domains and occurs under purely electrical loading. For the simulation, a bulk piezoceramic material is considered and each grain is represented by one finite element. In reality, the grains in the bulk ceramic material are randomly oriented. This property is taken into account by applying random, uniformly distributed orientations to the individual elements.
Poly-crystalline ferroelectric materials at un-poled virgin state can consequently be characterized by randomly oriented polarization vectors. Energy reduction of individual domains is adopted as a criterion for the initiation of domain switching processes. The macroscopic response of the bulk material is predicted by classical volume-averaging techniques. In general, domain switching does not only depend on external loads but also on neighboring grains, which is commonly denoted as the grain boundary effect. These effects are incorporated into the developed framework via a phenomenologically motivated probabilistic approach by relating the actual energy level to a critical energy level. Subsequently, the order of the chosen polynomial function is optimized so that simulations nicely match measured data. A rate-dependent polarization framework is proposed, which is applied to cyclic electrical loading at various frequencies. The reduction in free energy of a grain is used as a criterion for the onset of the domain switching processes. Nucleation in new grains and propagation of the domain walls during domain switching is modeled by a linear kinetics theory. The simulated results show that for increasing loading frequency the macroscopic coercive field is also increasing and the remanent polarization increases at lower loading amplitudes. The second part of this work is focused on ferroelastic domain switching, which refers to the reorientation of domains under purely mechanical loading. Under sufficiently high mechanical loading, however, the strain directions within single domains reorient with respect to the applied loading direction. The reduction in free energy of a grain is used as a criterion for the domain switching process. The macroscopic response of the bulk material is computed for the hysteresis curve (stress vs strain) whereby uni-axial and quasi-static loading conditions are applied on the bulk material specimen. 
Grain boundary effects are addressed by incorporating the developed probabilistic approach into this framework, and the order of the polynomial function is optimized so that simulations match measured data. Rate-dependent domain switching effects are captured for various frequencies and mechanical loading amplitudes by means of the developed volume fraction concept, which relates the particular time interval to the switching portion. The final part of this work deals with combined ferroelectric and ferroelastic domain switching, i.e. the reorientation of domains under coupled electromechanical loading. If the free energy for combined electromechanical loading exceeds the critical energy barrier, elements are allowed to switch. Firstly, hysteresis and butterfly curves under purely electrical loading are discussed. Secondly, additional mechanical loads in axial and lateral directions are applied to the specimen. The simulated results show that an increasing compressive stress results in enlarged domain switching ranges and that the hysteresis and butterfly curves flatten at higher mechanical loading levels.

The following three papers present recent developments in nonlinear Galerkin schemes for solving the spherical Navier-Stokes equation, in wavelet theory based on the 3-dimensional ball, and in multiscale solutions of the Poisson equation inside the ball, presented at the 76th GAMM Annual Meeting in Luxembourg. Part A: A Nonlinear Galerkin Scheme Involving Vectorial and Tensorial Spherical Wavelets for Solving the Incompressible Navier-Stokes Equation on the Sphere. The spherical Navier-Stokes equation plays a fundamental role in meteorology by modelling meso-scale (stratified) atmospheric flows. This article introduces a wavelet-based nonlinear Galerkin method applied to the Navier-Stokes equation on the rotating sphere. In detail, this scheme is implemented using divergence-free vectorial spherical wavelets, and its convergence is proven. To improve numerical efficiency, an extension of the spherical panel clustering algorithm to vectorial and tensorial kernels is constructed. This method enables the rapid computation of the wavelet coefficients of the nonlinear advection term. We also indicate error estimates. Finally, extensive numerical simulations of the nonlinear interaction of three vortices are presented. Part B: Methods of Resolution for the Poisson Equation on the 3D Ball. In the article at hand, we investigate the Poisson equation solved by an integral operator originating from a Green's function ansatz. This connection between mass distributions and the gravitational force is essential to investigate, especially inside the Earth, where structures and phenomena are not sufficiently known or measurable. Since the operator stated above does not solve the equation for all square-integrable functions, the solution space is decomposed by a multiscale analysis in terms of scaling functions. Classical Euclidean wavelet theory appears not to be the appropriate choice.
The ansatz functions are chosen to reflect the rotational invariance of the ball. In these terms, the operator itself is finally decomposed and replaced by more manageable versions, revealing structural information about itself. Part C: Wavelets on the 3-dimensional Ball. In this article, wavelets on a ball in R^3 are introduced. Corresponding properties like an approximate identity and decomposition/reconstruction (scale step property) are proved. The advantage of this approach compared to a classical Fourier analysis in orthogonal polynomials is the better localization of the ansatz functions.
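
The integral operator of Part B, obtained from the Green's function ansatz, is the classical Newton potential over the ball B; for sufficiently smooth densities F it solves the Poisson equation:

```latex
% Newton potential over the ball B \subset \mathbb{R}^3, smooth density F:
U(x) = \frac{1}{4\pi} \int_{B} \frac{F(y)}{|x-y|}\, \mathrm{d}y ,
\qquad \Delta U = -F \quad \text{in } B .
```

The discussion in Part B concerns precisely the fact that this operator does not yield a solution for every square-integrable F, which is what motivates the multiscale decomposition of the solution space.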

This diploma thesis examines logistic problems occurring in a container terminal. The thesis focuses on the scheduling of cranes handling containers in a port. Two problems are discussed in detail: the yard crane scheduling of rubber-tired gantry cranes (RTGC), which move freely among the container blocks, and the scheduling of rail-mounted gantry cranes (RMGC), which can only move within a yard zone. The problems are formulated as integer programs. For each of the two problems discussed, two models are presented: in one model, the crane tasks are interpreted as jobs with release times and processing times, while in the other model it is assumed that the tasks can be modeled as generic workload measured in crane minutes. It is shown that the problems are NP-hard in the strong sense. Heuristic solution procedures are developed and evaluated by numerical results. Further ideas which could lead to other solution procedures are presented, and some interesting special cases are discussed.
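
The first model, crane tasks as jobs with release times and processing times, lends itself to a simple list-scheduling heuristic of the general flavor evaluated in the thesis; the job data and the greedy rule below are illustrative only, and real yard-crane constraints (travel times, crane interference) are ignored:

```python
# Each job is (release time, processing time); assign it to the crane that
# can start it earliest. Data are made up for illustration.
jobs = [(0, 4), (1, 3), (2, 2), (3, 4), (5, 1)]
cranes = 2

free_at = [0] * cranes                   # time at which each crane becomes idle
makespan = 0
for release, proc in sorted(jobs):       # consider jobs in order of release time
    k = min(range(cranes), key=lambda i: max(free_at[i], release))
    start = max(free_at[k], release)
    free_at[k] = start + proc
    makespan = max(makespan, start + proc)

print(makespan)  # 8 for this instance
```

Since the problem is NP-hard in the strong sense, such greedy rules give no optimality guarantee; they serve as fast baselines against the integer-programming formulations.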

In order to optimize the acoustic properties of a stacked fiber non-woven, the microstructure of the non-woven is modeled by a macroscopically homogeneous random system of straight cylinders (tubes). That is, the fibers are modeled by a spatially stationary random system of lines (Poisson line process), dilated by a sphere. Pressing the non-woven causes anisotropy. In our model, this anisotropy is described by a one-parametric distribution of the fiber directions. In the present application, the anisotropy parameter has to be estimated from 2D reflected-light microscopic images of microsections of the non-woven. After fitting the model, the flow is computed in digitized realizations of the stochastic geometric model using the lattice Boltzmann method. Based on the flow resistivity, the formulas of Delany and Bazley predict the frequency-dependent acoustic absorption of the non-woven in the impedance tube. Using the geometric model, the description of a non-woven with improved acoustic absorption properties is obtained in the following way: First, the fiber thicknesses, porosity, and anisotropy of the fiber system are modified. Then the flow and acoustics simulations are performed on the new sample. These two steps are repeated for various sets of parameters. Finally, the set of parameters for the geometric model leading to the best acoustic absorption is chosen.
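The Delany and Bazley step can be sketched as follows. The empirical coefficients below are the commonly cited ones for fibrous absorbers; the surface-impedance and absorption formulas assume a rigid-backed layer at normal incidence in the impedance tube, and the helper name and parameter values are illustrative.

```python
import cmath
import math

RHO0, C0 = 1.204, 343.0  # air density [kg/m^3] and speed of sound [m/s]

def delany_bazley_absorption(f, sigma, d):
    """Normal-incidence absorption of a rigid-backed porous layer.

    f: frequency [Hz], sigma: static flow resistivity [Pa*s/m^2],
    d: layer thickness [m].  Empirical model, valid roughly for
    0.01 < X < 1.
    """
    X = RHO0 * f / sigma
    # Characteristic impedance and wavenumber (Delany-Bazley fits).
    Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * math.pi * f / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    Zs = -1j * Zc / cmath.tan(k * d)          # surface impedance, rigid backing
    R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)   # reflection coefficient
    return 1 - abs(R)**2

# Illustrative values: 5 cm layer, flow resistivity 20000 Pa*s/m^2, 1 kHz.
alpha = delany_bazley_absorption(f=1000.0, sigma=20000.0, d=0.05)
```

In an optimization loop as described above, only the flow resistivity fed into this formula changes from sample to sample.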

This thesis aims at an overall improvement of diffusion coefficient predictions. For this reason, the theoretical determination of diffusion, viscosity, and thermodynamics in liquid systems is discussed. Furthermore, the experimental determination of diffusion coefficients is also part of this work. All investigations presented are carried out for organic binary liquid mixtures. Diffusion coefficient data of 9 highly nonideal binary mixtures are reported over the whole concentration range at various temperatures (25, 30, and 35 °C). All mixtures, investigated in a Taylor dispersion apparatus, consist of an alcohol (ethanol, 1-propanol, or 1-butanol) dissolved in hexane, cyclohexane, carbon tetrachloride, or toluene. The uncertainty of the reported data is estimated to be within 3·10^-11 m^2 s^-1. To compute the thermodynamic correction factor, an excess Gibbs energy model is required. Therefore, the applicability of COSMOSPACE to binary VLE predictions is thoroughly investigated. For this purpose, a new method is developed to determine the required molecular parameters, such as segment types, areas, volumes, and interaction parameters. This approach is based on so-called sigma profiles, which describe the screening charge densities appearing on a molecule's surface. To improve the prediction results, a constrained two-parameter fitting strategy is also developed. These approaches are crucial to guarantee the physical significance of the segment parameters. Finally, the prediction quality of this approach is compared to the findings of the Wilson model, UNIQUAC, and the a priori predictive method COSMO-RS for a broad range of thermodynamic situations. The results show that COSMOSPACE yields results of similar quality to the Wilson model, while both perform much better than UNIQUAC and COSMO-RS. Since viscosity also influences the diffusion process, a new mixture viscosity model has been developed on the basis of Eyring's absolute reaction rate theory.
The nonidealities of the mixture are accounted for with the thermodynamically consistent COSMOSPACE approach. The required model and component parameters are derived from sigma profiles, which form the basis of the a priori predictive method COSMO-RS. To improve the model performance, two segment parameters are determined from a least-squares fit to experimental viscosity data, whereby a constrained optimisation procedure is applied. In this way the parameters retain their physical meaning. Finally, the viscosity calculations of this approach are compared to the findings of the Eyring-UNIQUAC model for a broad range of chemical mixtures. These results show that the new Eyring-COSMOSPACE approach is superior to the frequently employed Eyring-UNIQUAC method. Finally, on the basis of Eyring's absolute reaction rate theory, a new model for the Maxwell-Stefan diffusivity has been developed. This model, an extension of the Vignes equation, describes the concentration dependence of the diffusion coefficient in terms of the diffusivities at infinite dilution and an additional excess Gibbs energy contribution. This energy part allows the explicit consideration of thermodynamic nonidealities within the modelling of this transport property. If the same set of interaction parameters, derived from VLE data, is applied both for this part and for the thermodynamic correction, a theoretically sound modelling of VLE and diffusion can be achieved. The influence of viscosity and thermodynamics on the model accuracy is thoroughly investigated. For this purpose, diffusivities of 85 binary mixtures consisting of alkanes, cycloalkanes, halogenated alkanes, aromatics, ketones, and alcohols are computed. The average relative deviation between experimental data and computed values is approximately 8%, depending on the choice of the g^E model. These results indicate that this model is superior to some widely used methods.
In summary, it can be said that the new approach facilitates the prediction of diffusion coefficients. The final equation is mathematically simple and universally applicable, and its prediction quality matches that of other recently developed models without requiring additional inputs such as pure-component physical property data, self-diffusion coefficients, or mixture viscosities. In contrast to many other models, the influence of the mixture viscosity can be neglected. Though a viscosity model is not required for predicting diffusion coefficients with the new equation, the models presented in this work allow a consistent modelling of diffusion, viscosity, and thermodynamics in liquid systems.
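The classical Vignes equation that the new model extends, combined with the thermodynamic correction factor, can be sketched as follows. The thesis's excess-Gibbs-energy extension is not reproduced here; a simple one-parameter Margules activity model with illustrative parameter values stands in for the g^E model, and all names are assumptions.

```python
def vignes_fick_diffusivity(x1, d12_inf, d21_inf, A):
    """Fick diffusion coefficient from the classical Vignes equation,
    multiplied by the thermodynamic correction factor of a
    one-parameter Margules model (ln gamma1 = A * x2**2).

    x1: mole fraction of component 1; d12_inf, d21_inf: Maxwell-Stefan
    diffusivities at infinite dilution [m^2/s]; A: Margules parameter.
    """
    x2 = 1.0 - x1
    d_ms = d12_inf**x2 * d21_inf**x1        # Vignes interpolation
    gamma_factor = 1.0 - 2.0 * A * x1 * x2  # Gamma = 1 + x1 * dln(gamma1)/dx1
    return d_ms * gamma_factor

# Illustrative magnitudes of the order reported for alcohol/alkane mixtures.
D = vignes_fick_diffusivity(x1=0.5, d12_inf=3.0e-9, d21_inf=1.0e-9, A=1.2)
```

At either infinite-dilution limit the correction factor reduces to one and the corresponding limiting diffusivity is recovered.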

We will give explicit differentiation and integration rules for homogeneous harmonic polynomials and spherical harmonics in R^3 with respect to the following differential operators: partial_1, partial_2, partial_3, x_3 partial_2 - x_2 partial_3, x_3 partial_1 - x_1 partial_3, x_2 partial_1 - x_1 partial_2, and x_1 partial_1 + x_2 partial_2 + x_3 partial_3. A numerical application to the problem of determining the geopotential field will be shown.
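The operator rules themselves are developed in the paper; as a small independent illustration (not the paper's formulas, all helper names illustrative), the following sketch represents polynomials as dictionaries mapping exponent triples to coefficients and checks on one example that the angular operator x_3 partial_2 - x_2 partial_3 maps a homogeneous harmonic polynomial to another one.

```python
def diff(p, axis):
    """Partial derivative of a polynomial {exponent-triple: coeff}."""
    out = {}
    for exp, c in p.items():
        if exp[axis] > 0:
            e = list(exp); e[axis] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exp[axis]
    return out

def times_x(p, axis):
    """Multiply a polynomial by the coordinate x_axis."""
    out = {}
    for exp, c in p.items():
        e = list(exp); e[axis] += 1
        out[tuple(e)] = out.get(tuple(e), 0) + c
    return out

def add(p, q):
    out = dict(p)
    for exp, c in q.items():
        out[exp] = out.get(exp, 0) + c
    return {e: c for e, c in out.items() if c != 0}

def scale(p, s):
    return {e: c * s for e, c in p.items()}

def laplacian(p):
    return add(add(diff(diff(p, 0), 0), diff(diff(p, 1), 1)),
               diff(diff(p, 2), 2))

def L23(p):
    """Angular operator x_3 partial_2 - x_2 partial_3."""
    return add(times_x(diff(p, 1), 2), scale(times_x(diff(p, 2), 1), -1))

# p = x1^2 - x3^2 is homogeneous and harmonic; L23 maps it to 2*x2*x3,
# which is again homogeneous of the same degree and harmonic.
p = {(2, 0, 0): 1, (0, 0, 2): -1}
q = L23(p)
```

The same dictionary machinery extends directly to the other operators listed in the abstract.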

Fragmentation of tropical rain forests is pervasive and results in various modifications of ecosystem functioning such as … It has long been noticed that the colony densities of a dominant herbivore in the neotropics - the leaf-cutting ant (LCA) - increase in fragmentation-related habitats like forest edges and small fragments; however, the reasons for this increase are not clear. The aim of the study was to test the hypothesis that bottom-up control of LCA populations is less effective in fragmented than in continuous forests and thus explains the increase in LCA colony densities in these habitats. In order to test for less effective bottom-up control, I proposed four working hypotheses. I hypothesized that LCA colonies in fragmented habitats (1) find more palatable vegetation due to low plant defences, (2) forage on few dominant species, resulting in a narrow diet breadth, (3) possess small foraging areas, and (4) increase the herbivory rate at the colony level. The study was conducted in the remnants of the Atlantic rainforest in NE Brazil. Two fragmentation-related forest habitats were included: the edge of a 3500-ha continuous forest and the interior of a 50-ha forest fragment. The interior of the continuous forest served as a control habitat for the study. All working hypotheses can be generally accepted. The results indicate that the abundance of LCA host plant species in the habitats created by forest fragmentation, along with the weaker chemical defences of those species (especially the lack of terpenoids), allows ants to forage predominantly on palatable species and thus reduce foraging costs. This is supported by the narrower ant diet breadth in these habitats. Similarly, the small foraging areas in edge habitats and in small forest fragments indicate that ants there do not have to travel far to find suitable host species and thus save foraging costs.
The increased LCA herbivory rates indicate that the damage (i.e., the amount of harvested foliage) caused by LCA is greater in fragmentation-related habitats, which are more vulnerable to LCA herbivory due to the high availability of palatable plants and a low total amount of foliage (LAI). (1) Few plant defences, (2) narrower ant diet breadth, (3) reduced colony foraging areas, and (4) increased herbivory rates clearly indicate a weaker bottom-up control of LCA in fragmented habitats. Weak bottom-up control in the fragmentation-related habitats decreases the foraging costs of an LCA colony in these habitats, and the colonies might use the surplus of energy resulting from reduced foraging costs to increase colony growth, reproduction, and turnover. If correct, this explains why fragmented habitats support more LCA colonies at a given time than continuous forest habitats. Further studies are urgently needed to estimate LCA colony growth and turnover rates. There are indications that the edge effects of forest fragmentation might play a larger role in regulating LCA populations than area or isolation effects. This emphasizes the need to conserve large forest fragments so that they do not fall below a critical size, and to retain their regular shape. Weak bottom-up control of LCA populations has various consequences for forested ecosystems. I suggest a feedback loop between forest fragmentation and LCA population dynamics: the increased LCA colony densities, along with lower bottom-up control, increase LCA herbivory pressure on the forest and thus inevitably amplify the deleterious effects of fragmentation. These effects include the direct consequences of leaf removal by ants and various indirect effects on ecosystem functioning. This study contributes to our understanding of how primary fragmentation effects, via the alteration of trophic interactions, may translate into higher-order effects on ecosystem functions.

Virtual material design is the microscopic variation of materials in the computer, followed by the numerical evaluation of the effect of this variation on the material's macroscopic properties. The goal of this procedure is a material improved in some desired sense. Here, we give examples regarding the dependence of the effective elastic moduli of a composite material on the geometry of the inclusion shape. A new approach to solving such interface problems avoids mesh generation and gives second-order accurate results even in the vicinity of the interface. The Explicit Jump Immersed Interface Method is a finite difference method for elliptic partial differential equations that works on an equidistant Cartesian grid in spite of non-grid-aligned discontinuities in equation parameters and solution. Near discontinuities, the standard finite difference approximations are modified by adding correction terms that involve jumps in the function and its derivatives. This work derives the correction terms for two-dimensional linear elasticity with piecewise constant coefficients, i.e. for composite materials. Convergence and approximation properties of the method are demonstrated numerically.
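The jump-correction idea can be illustrated in one dimension (a simplified analogue, not the work's two-dimensional elasticity scheme; all names are illustrative): solve u'' = 0 on [0,1] with a prescribed solution jump [u] at an interface, keeping the standard three-point stencil everywhere and adding correction terms only to the right-hand side of the two nodes whose stencil crosses the interface.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / d[i - 1]
        d[i] -= m * sup[i - 1]
        r[i] -= m * r[i - 1]
    u = [0.0] * n
    u[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (r[i] - sup[i] * u[i + 1]) / d[i]
    return u

def ejiim_1d(n, alpha, jump_u, u_left=0.0, u_right=1.0):
    """Solve u'' = 0 on (0,1) with a prescribed jump [u] = jump_u at
    x = alpha (jump in u' assumed zero), via jump-corrected stencils."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    rhs = [0.0] * n
    rhs[0] -= u_left / h**2        # Dirichlet data moved to the RHS
    rhs[-1] -= u_right / h**2
    for i in range(n - 1):
        if x[i] < alpha <= x[i + 1]:
            # node i uses u_{i+1} from the "+" side: subtract the jump
            rhs[i] += jump_u / h**2
            # node i+1 uses u_i from the "-" side: add the jump
            rhs[i + 1] -= jump_u / h**2
    sub = [1.0 / h**2] * n
    diag = [-2.0 / h**2] * n
    sup = [1.0 / h**2] * n
    return x, solve_tridiag(sub, diag, sup, rhs)

# Exact solution is piecewise linear: u = 0.5x for x < 0.35, 0.5x + 0.5 after.
x, u = ejiim_1d(n=9, alpha=0.35, jump_u=0.5)
```

Because the correction terms make the stencil exact for piecewise linear data, the computed solution matches the exact one up to roundoff, mirroring how EJIIM recovers accuracy near the interface.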

Music Information Retrieval (MIR) is an interdisciplinary research area that aims to improve the way music is made accessible through information systems. One important part of MIR is the search for algorithms to extract meaningful information (called feature data) from music audio signals. Feature data can, for example, be used for content-based genre classification of music pieces. This master's thesis contributes in three ways to the current state of the art: • First, an overview of many of the features used in MIR applications is given. These methods – called “descriptors” or “features” in this thesis – are discussed in depth, with a literature review and, for most of them, illustrations. • Second, a large part of the described features is implemented in a uniform framework, called T-Toolbox, which is implemented in the Matlab environment. It also supports classification experiments and descriptor visualisation. For classification, an interface to the machine-learning environment WEKA is provided. • Third, preliminary evaluations investigate how well these methods are suited for automatically classifying music according to categorizations such as genre, mood, and perceived complexity. This evaluation is done using the descriptors implemented in the T-Toolbox and several state-of-the-art machine learning algorithms. It turns out that – in the experimental setup of this thesis – the treated descriptors are not capable of reliably discriminating between the classes of most examined categorizations; but there is an indication that these results could be improved by developing more elaborate techniques.
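Two classic descriptors that such a survey typically covers, the zero-crossing rate and the spectral centroid, can be sketched with a minimal standard-library implementation (a naive O(n^2) DFT for brevity; this is an independent illustration, not the thesis's T-Toolbox code, and all names are illustrative).

```python
import math

def zero_crossing_rate(signal):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    return crossings / (len(signal) - 1)

def spectral_centroid(signal, sample_rate):
    """Magnitude-weighted mean frequency, via a naive DFT (O(n^2))."""
    n = len(signal)
    mags, freqs = [], []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 1 kHz tone sampled at 8 kHz: the centroid should sit near 1000 Hz
# and the zero-crossing rate near 2 crossings per 8-sample period.
tone = [math.sin(2 * math.pi * 1000 * t / 8000) for t in range(256)]
centroid = spectral_centroid(tone, 8000)
zcr = zero_crossing_rate(tone)
```

In a real descriptor pipeline these values would be computed per short frame and aggregated before being handed to a classifier such as those in WEKA.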

Fiber Dynamics in Turbulent Flows - Part I: General Modeling Framework - Part II: Specific Taylor Drag
(2005)

Part I: General Modeling Framework The paper at hand deals with the modeling of turbulence effects on the dynamics of a long slender elastic fiber. Independent of the choice of the drag model, a general aerodynamic force concept is derived on the basis of the velocity field for the randomly fluctuating component of the flow. Its construction as a centered differentiable Gaussian field thereby complies with the requirements of the stochastic k-epsilon turbulence model and Kolmogorov's universal equilibrium theory of local isotropy. Part II: Specific Taylor Drag In [12], an aerodynamic force concept for a general air drag model is derived on top of a stochastic k-epsilon description of a turbulent flow field. The turbulence effects on the dynamics of a long slender elastic fiber are modeled by a correlated random Gaussian force and, in its asymptotic limit on a macroscopic fiber scale, by Gaussian white noise with flow-dependent amplitude. The paper at hand now presents quantitative similarity estimates and numerical comparisons for the concrete choice of a Taylor drag model in a given application.
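The "correlated random Gaussian force" and its white-noise limit described above can be illustrated in the simplest scalar setting by an Ornstein-Uhlenbeck process, a centered Gaussian process with exponential correlation in time. This is a generic illustration of the modeling idea, not the papers' vector-field construction; all names and parameter values are illustrative.

```python
import math
import random

def ou_path(n_steps, dt, tau, sigma, seed=0):
    """Sample a centered Ornstein-Uhlenbeck process: a Gaussian process
    with correlation time tau and stationary variance sigma**2.

    As tau -> 0 (at fixed sigma**2 * tau) the process approaches white
    noise, mirroring the asymptotic macroscopic-scale limit described
    in the abstract.
    """
    rng = random.Random(seed)
    a = math.exp(-dt / tau)            # exact AR(1) update coefficient
    b = sigma * math.sqrt(1.0 - a * a)
    x, path = 0.0, []
    for _ in range(n_steps):
        x = a * x + b * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative parameters: correlation time 0.5, unit stationary variance.
path = ou_path(n_steps=20000, dt=0.01, tau=0.5, sigma=1.0)
mean = sum(path) / len(path)
var = sum((s - mean) ** 2 for s in path) / len(path)
```

The sample mean stays near zero and the sample variance near sigma**2, consistent with a centered stationary Gaussian field.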