
In this work, we analyze two important and simple short-rate models, the Vasicek and the CIR model. The models are described, and the sensitivity of each model with respect to changes in its parameters is studied. Finally, we give results for the estimation of the model parameters using two different approaches.
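For reference, the standard textbook dynamics of the two models are given below (not taken from the report, whose parameterization may differ); κ is the mean-reversion speed, θ the long-term mean, σ the volatility, and W a Brownian motion:

```latex
% Vasicek: Gaussian, mean-reverting short rate
dr_t = \kappa\,(\theta - r_t)\,dt + \sigma\,dW_t
% CIR: square-root diffusion, nonnegative under the Feller condition 2\kappa\theta \ge \sigma^2
dr_t = \kappa\,(\theta - r_t)\,dt + \sigma\sqrt{r_t}\,dW_t
```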

In this paper, mathematical models for liquid films generated by impinging jets are discussed. Particular attention is paid to the interaction of the liquid film with an obstacle. S. G. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model that makes it possible to simulate the film shape in cases where the angle between the jets differs from 180°. Numerical results obtained with the discussed models give two different shapes of the liquid film, similar to those in Taylor's experiments. These two shapes depend on the regime: either droplets are produced close to the obstacle or they are not. The difference between the two regimes becomes larger as the angle between the jets decreases. The existence of these two regimes can be essential for applications of impinging jets in which the generated liquid film may come into contact with obstacles.

Granular systems in a solid-like state exhibit properties such as stress-dependent stiffness, dilatancy, yield, and incremental non-linearity that can be described within the continuum mechanical framework. Different constitutive models have been proposed in the literature, based either on relations between some components of the stress tensor or on a quasi-elastic description. After a brief description of these models, the hyperelastic law recently proposed by Jiang and Liu [1] is investigated. In this framework, the stress-strain relation is derived from an elastic strain energy density whose stability properties are linked to a Drucker-Prager yield criterion. Further, a numerical method based on finite element discretization and Newton-Raphson iterations is presented to solve the force balance equation. The 2D numerical examples presented in this work show that the stress distributions can be computed not only for triangular domains, as previously done in the literature, but also for more complex geometries. If the slope of the heap is greater than a critical value, numerical instabilities appear and no elastic solution can be found, as predicted by the theory. As the main result, the dependence of the material parameter ξ on the maximum angle of repose is established.

Radiotherapy is one of the major forms of cancer treatment. The patient is irradiated with high-energy photons or charged particles with the primary goal of delivering sufficiently high doses to the tumor tissue while simultaneously sparing the surrounding healthy tissue. The inverse search for the treatment plan giving the desired dose distribution is done by means of numerical optimization [11, Chapters 3-5]. For this purpose, the aspects of dose quality in the tissue are modeled as criterion functions, whose mathematical properties also determine the type of the corresponding optimization problem. Clinical practice makes frequent use of criteria that incorporate volumetric and spatial information about the shape of the dose distribution. The resulting optimization problems are known empirically to be of global type and are typically solved with generic global solver concepts, see for example [16]. The development of good global solvers for radiotherapy optimization problems is an important research topic in this application; however, the structural properties of the underlying criterion functions are typically not taken into account in this context.

This report reviews selected image binarization and segmentation methods that have been proposed in the literature and are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
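As a minimal illustration of the thresholding family of methods reviewed here (not code from the report; scikit-image's Otsu implementation is used purely as an example, and the volume is a placeholder):

```python
# Minimal sketch: global Otsu thresholding of a 3d volume image.
import numpy as np
from skimage.filters import threshold_otsu

vol = np.random.rand(64, 64, 64)      # placeholder volume; replace with real CT data
t = threshold_otsu(vol)               # single global threshold from the gray-value histogram
binary = vol > t                      # foreground mask (True = object voxels)
print(f"threshold = {t:.3f}, foreground fraction = {binary.mean():.3f}")
```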

We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution that is also feasible for the original problem. Some numerical results based on real-world data are also presented.

The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and by explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the extra equations needed for these variables. A Schur complement formulation for the new variables is derived and solved using FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system using less than 6 GB of memory and in minutes to hours, depending on the contrast and the required accuracy. The approach may also be used to compute effective electric conductivities, because these are governed by the stationary heat equation.
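A minimal sketch of the harmonic-averaging idea (illustrative only, not the EJ-HEAT implementation; function and variable names are our own): the conductivity assigned to a face between two voxels is the harmonic mean of the two cell values, which reproduces the series resistance across a high-contrast interface.

```python
# Illustrative sketch of harmonic face averaging on a periodic voxel grid.
import numpy as np

def harmonic_face_conductivity(k, axis):
    """Harmonic mean of conductivity between each voxel and its periodic
    neighbor along `axis`; high-contrast interfaces are handled naturally."""
    k_neighbor = np.roll(k, -1, axis=axis)          # periodic wrap-around
    return 2.0 * k * k_neighbor / (k + k_neighbor)  # harmonic mean per face

k = np.where(np.random.rand(32, 32, 32) < 0.3, 100.0, 1.0)  # two-phase medium, contrast 100:1
k_fx = harmonic_face_conductivity(k, axis=0)
print(k_fx.min(), k_fx.max())   # faces inside one phase keep its value; mixed faces are ~1.98
```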

Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of interest. Regarding cost, there are production considerations such as spatial restrictions, material cost, and the cost of manufacturing the filter. The second type of cost is the operating cost, namely the pressure drop. Although simulations should and will ultimately deal with all four aspects, for the moment our work is focused on cost. The PleatGeo module generates three-dimensional computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that results for a particular design by direct numerical simulation. The evaluation of a new pleat design takes only a few hours on a standard PC, compared to the days or weeks needed for manufacturing and testing a new prototype of a hydraulic filter. The design parameters are the shape of the pleat, the permeabilities of one or several layers of filter media, and the geometry of a supporting netting structure that is used to keep the outflow area open. Besides the underlying structure generation and CFD technology, we present some trends regarding the dependence of the pressure drop on the design parameters that can serve as guidelines for the design of hydraulic filters. Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.

A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin's Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run times are achieved by eliminating the pressure and velocity variables using a fast inversion approach, based on the Fast Fourier Transform and four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. In reference to its being a fast method using fictitious or artificial forces, the implementation is called FFF-Stokes. Large-scale computations on 3d images are quickly and automatically performed to estimate the permeability of some sample materials. A Matlab implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
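To fix ideas, once a steady Stokes velocity field driven by a prescribed mean pressure gradient is available, the permeability follows from volume-averaging Darcy's law. The following is a hedged sketch with variable names of our own choosing, not the FFF-Stokes code:

```python
# Hedged sketch: extracting a permeability component from a computed Stokes
# velocity field via Darcy's law <u> = -(K/mu) * grad(p).
import numpy as np

def permeability_from_flow(ux, mu, dpdx):
    """Darcy estimate of K_xx.
    ux   : x-velocity on the full periodic cell (solid voxels contain 0)
    mu   : dynamic viscosity
    dpdx : imposed mean pressure gradient driving the flow
    """
    mean_ux = ux.mean()                  # superficial (volume-averaged) velocity
    return -mu * mean_ux / dpdx

# Example with made-up numbers: <u_x> = 1e-4 m/s, mu = 1e-3 Pa.s, dp/dx = -10 Pa/m
print(permeability_from_flow(np.full((8, 8, 8), 1e-4), 1e-3, -10.0))  # ~1e-8 m^2
```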

In the presented work, we make use of the strong reciprocity between kinematics and geometry to build a geometrically nonlinear, shearable, low-order discrete shell model of Cosserat type defined on triangular meshes, from which we deduce a rotation-free Kirchhoff-type model with the triangle vertex positions as degrees of freedom. Both models behave in a physically plausible way already on very coarse meshes and show good convergence properties on regular meshes. Moreover, from the theoretical side, this deduction provides a common geometric framework for several existing models.

The simulation of test rigs, and in particular of assemblies and full vehicles on test rigs, by coupling multibody simulation with models of the control system and actuators makes an essential contribution to shortening development times. This contribution presents a cooperation project in which a co-simulation model was created for the moving masses as well as the control system and hydraulics of a full-vehicle test rig. Both the validation of the vehicle model using road measurements and the identification and validation of the test rig model, including servo-hydraulics and control, are discussed.

Worldwide, the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive, and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies have to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and subtracted from total demand. The methodology separates the impact of weather and capacity. Efficiency is mapped to the real line using the logit transformation and modeled as a stochastic process. Installed capacity is assumed to be a deterministic function of time. In a case study, the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated and the results are compared to market future and option prices. The trajectories show typical features seen in market prices in recent years, and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices, it is found that renewable infeed increases the volatility of forward prices in times of low demand, but can reduce volatility in peak hours. Prices for different scenarios of installed wind and solar capacity are compared, and the merit-order effect of increased wind and solar capacity is calculated. It is found that wind has a stronger overall effect than solar, but the two effects are about even in peak hours.
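A minimal sketch of this bookkeeping (illustrative only; names and numbers are our own, and the stochastic model for the transformed efficiency is omitted): renewable efficiency is infeed divided by installed capacity, mapped to the real line with the logit transform, and residual demand is total demand minus renewable infeed.

```python
# Illustrative sketch of the residual-demand decomposition and the logit
# transform of efficiency; parameter values are made up.
import numpy as np

def logit(x):
    return np.log(x / (1.0 - x))        # maps (0,1) efficiency to the real line

def residual_demand(total_demand, cap_wind, eff_wind, cap_solar, eff_solar):
    infeed = cap_wind * eff_wind + cap_solar * eff_solar   # renewable production
    return total_demand - infeed                           # to be covered conventionally

eff_wind = 0.35                                            # 35 % of installed wind capacity producing
print(logit(eff_wind))                                     # transformed efficiency
print(residual_demand(60e3, 30e3, eff_wind, 25e3, 0.10))   # MW figures, purely illustrative
```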

Continuously improving imaging technologies make it possible to capture the complex spatial geometry of particles. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize size and shape descriptors used in 2d particle analysis to the spatial setting. While being defined and meaningful for arbitrary particles, the characteristics were selected with the application to technical cleanliness in mind. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography makes it possible to detect dangerous particles reliably and with high throughput. It thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips, and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.

We present the application of a meshfree method for simulations of the interaction between fluids and flexible structures. As a flexible structure we consider a sheet of paper. In a two-dimensional framework this sheet can be modeled as a curve using the dynamical Kirchhoff-Love theory. The external forces taken into account are gravitation and the pressure difference between the upper and lower surfaces of the sheet. This pressure difference is computed using the Finite Pointset Method (FPM) for the incompressible Navier-Stokes equations. FPM is a meshfree Lagrangian particle method. The dynamics of the sheet are computed by a finite difference method. We show the suitability of the meshfree method for simulations of fluid-structure interaction in several applications.

A Lattice Boltzmann Method for immiscible multiphase flow simulations using the Level Set Method
(2008)

We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.
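For orientation, the macroscopic jump conditions at the interface referred to above are, in standard sharp-interface form (with surface tension σ, curvature κ and interface normal n; the report's exact formulation may differ and Marangoni effects are neglected here):

```latex
[\mathbf{u}] = 0, \qquad
\left[\,-p\,\mathbf{I} + \mu\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T}\right)\right]\mathbf{n}
  = \sigma\,\kappa\,\mathbf{n}
```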

It is commonly believed that not all degrees of freedom are needed to produce good solutions for the treatment planning problem in intensity modulated radiotherapy treatment (IMRT). However, typical methods to exploit this fact have either increased the complexity of the optimization problem or were heuristic in nature. In this work we introduce a technique based on adaptively refining variable clusters to successively attain better treatment plans. The approach creates approximate solutions based on smaller models that may get arbitrarily close to the optimal solution. Although the method is illustrated using a specific treatment planning model, the components constituting the variable clustering and the adaptive refinement are independent of the particular optimization problem.

It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of sequences obtained by the sequencing algorithm by Bortfeld and Boyer using a probabilistic approach. The results of this work build a theoretical foundation for the up to now only empirically validated fact that if smoothness of intensity maps is considered during their calculation, the solutions can be expected to be more easily applied.

We present a parsimonious multi-asset Heston model. All single-asset submodels follow the well-known Heston dynamics and their parameters are typically calibrated on implied market volatilities. We focus on the calibration of the correlation structure between the single-asset marginals in the absence of sufficient liquid cross-asset option price data. The presented model is parsimonious in the sense that d(d−1)/2 asset-asset cross-correlations are required for a d-asset Heston model. In order to calibrate the model, we present two general setups corresponding to relevant practical situations: (1) when the empirical cross-asset correlations in the risk-neutral world are given by the user and we need to calibrate the correlations between the driving Brownian motions, or (2) when they have to be estimated from the historical time series. The theoretical background for the proposed estimators, including the ergodicity of the multidimensional CIR process, is also studied.
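As an illustration of the parameter count (a generic sketch, not the report's calibration procedure): for d assets only the d(d−1)/2 asset-asset correlations are free, and any candidate correlation matrix must be checked for positive semi-definiteness.

```python
# Generic sketch: assemble an asset-asset correlation matrix from its
# d(d-1)/2 free entries and verify positive semi-definiteness.
import numpy as np

def correlation_matrix(d, upper_entries):
    """upper_entries: the d*(d-1)/2 correlations, row-wise above the diagonal."""
    R = np.eye(d)
    iu = np.triu_indices(d, k=1)
    R[iu] = upper_entries
    R[(iu[1], iu[0])] = upper_entries        # symmetrize
    return R

d = 3
R = correlation_matrix(d, [0.5, 0.3, 0.4])   # exactly d(d-1)/2 = 3 numbers
print(R)
print("PSD:", np.linalg.eigvalsh(R).min() >= -1e-12)
```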

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing the beam orientations, and then a single-objective inverse treatment planning algorithm is used for the optimization of the beam intensity profiles. The overall time needed to solve the inverse planning problem for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between the beam orientations and their intensity profiles less important. We take advantage of this property to present a dynamic algorithm for beam orientation in IMRT which is based on multicriteria inverse planning. The algorithm approximates beam intensity profiles iteratively instead of doing so for every selection of beam orientations, saving a considerable amount of calculation time. Every iteration goes from an N-beam plan to a plan with N + 1 beams. The beam selection criteria are based on a score function that minimizes the deviation from the prescribed dose, in addition to a reject-accept criterion. To illustrate the efficiency of the algorithm, it has been applied to an artificial example where optimality is trivial and to three real clinical cases: a prostate carcinoma, a tumor in the head and neck region, and a paraspinal tumor. In comparison to the standard equally spaced beam plans, improvements are reported in all three clinical examples, in some cases even with fewer beams.

Demands for shorter development cycles combined with higher product quality are leading to an increased use of simulation software in all areas of commercial vehicle engineering, and in particular for construction machinery. To perform fatigue life calculations in this sense, however, precise knowledge of the operating loads and stresses occurring in customer use is required. To determine these, the construction machinery manufacturer VOLVO Construction Equipment equipped a mobile excavator extensively with measurement instrumentation that records not only the mechanical loads on the working equipment but also essential characteristics of the hydraulic system and the drive train. This instrumented excavator has already been deployed at various customers across Europe. The article describes the methodological approach to processing the recorded data and to generating representative usage profiles, illustrated by the mechanical loads on the working equipment; this methodology was developed mainly by the Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM).

In this article, a new model predictive control approach to nonlinear stochastic systems is presented. The new approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first gives an estimate of the current state based on the current output of the system; the second gives an estimate of a control input for the system. This is done essentially by adopting the basic model predictive control strategies for the second particle filter. Later in this paper, the new approach is applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
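For readers unfamiliar with particle filters, a generic bootstrap filter step (predict, weight by the likelihood of the observation, resample) looks as follows; this is a textbook sketch with invented toy models, not the combined estimation/control scheme of the paper.

```python
# Textbook bootstrap particle filter step (not the paper's combined scheme).
import numpy as np

def particle_filter_step(particles, y, propagate, likelihood, rng):
    """particles : (N, dim) state samples
    y          : current measurement
    propagate  : draws x' from the state transition model given x
    likelihood : p(y | x) evaluated per particle
    """
    particles = propagate(particles)                     # predict
    w = likelihood(y, particles)                         # weight
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                                # resample

rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, size=(500, 1))
step = particle_filter_step(parts, y=0.8,
                            propagate=lambda x: x + rng.normal(0, 0.1, x.shape),
                            likelihood=lambda y, x: np.exp(-0.5 * ((y - x[:, 0]) / 0.2) ** 2),
                            rng=rng)
print(step.mean())
```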

When testing safety-relevant components of commercial vehicles, one faces the task of estimating the very diverse customer loading and deriving from it a test program for the components that must satisfy several conflicting requirements: the program must be severe enough that, after a successful test, a field failure under intended use can be ruled out; it should, however, not lead to over-dimensioning of the components; and a sufficient level of confidence should be achieved with relatively few component tests. Because of the high safety requirements, the classical statistical approach (estimating the distribution of customer loads from measurement data, estimating the distribution of component strength from test results, and deriving a failure probability) requires knowledge of the distributions in their extreme tails. As a rule, however, the available data are far from sufficient for this. In the classical "empirical" approach, characteristic values of load and strength are compared and a sufficient safety margin is required. The procedure proposed here combines both methods: it uses the possibilities of statistical modeling as far as the data situation justifies, and supplements the results with empirically based safety factors. In defining the test loads, the possibilities available in the test are taken into account. The main advantages of this procedure are a) transparency with respect to the statements that can be achieved by statistical means and to the interplay between load determination and testing, and b) the possibility of reducing the empirical components in favor of the statistical ones by a corresponding effort in measurement and testing.

In the ground vehicle industry it is often an important task to simulate full vehicle models based on the wheel forces and moments, which have been measured during driving over certain roads with a prototype vehicle. The models are described by a system of differential algebraic equations (DAE) or ordinary differential equations (ODE). The goal of the simulation is to derive section forces at certain components for a durability assessment. In contrast to handling simulations, which are performed including more or less complex tyre models, a driver model, and a digital road profile, the models we use here usually do not contain the tyres or a driver model. Instead, the measured wheel forces are used for excitation of the unconstrained model. This can be difficult due to noise in the input data, which leads to an undesired drift of the vehicle model in the simulation.

Testing new vehicle axles or axle variants on the basis of load data from road operation is usually carried out with the help of complex multi-channel test rigs. In such tests, the wheel hub forces and moments measured during driving are generally to be reproduced by the rig. Because of the complex interactions between the specimen and the test machine, for every new concept the question arises whether the desired test can be carried out with a given test system setup, or which configuration of the test system appears suitable for the planned test. This work describes the modeling of a novel axle test system concept based on two hexapods. In addition to the geometric arrangement of the test system, the modeling also covers the hydraulics and the internal controller. The test system model was developed as a so-called template within the vehicle simulation program ADAMS/Car and can be coupled with different axle models to form an overall system. On this overall model, all work steps occurring on the real test system, such as controller tuning, drive file iteration, and simulation, can be carried out. Geometric or hydraulic parameters can easily be modified in order to achieve an optimal adaptation of the test system to the specimen and the given load data. The model developed in the course of the project supports and accompanies the introduction of the new axle test system concept on the one hand, and can be used for the virtual preparation of test runs on the other. The general procedure is explained using a front and a rear axle as examples, and the new possibilities arising from test system simulation are demonstrated.

Testing a new suspension based on real load data is performed on elaborate multi-channel test rigs. Usually, the wheel forces and moments measured during driving maneuvers are reproduced on the rig. Because of the complicated interaction between rig and suspension, each new rig configuration has to prove its efficiency with respect to the requirements, and the configuration might be subject to optimization. This paper deals with modeling a new rig concept based on two hexapods. The real physical rig has been designed and meanwhile built by MOOG-FCS for VOLKSWAGEN. The aim of the simulation project reported here was twofold: first, the simulation of the rig together with real VOLKSWAGEN suspension models, at a time when the design was not yet finalized, was used to verify and optimize the desired properties of the rig. Second, the simulation environment was set up in such a way that it can be used to prepare real tests on the rig. The model contains the geometric configuration as well as the hydraulics and the controller. It is implemented as an ADAMS/Car template and can be combined with different suspension models to get a complete assembly representing the entire test rig. Using this model, all steps required for a real test run, such as controller adaptation, drive file iteration, and simulation, can be performed. Geometric or hydraulic parameters can be modified easily to improve the setup and adapt the system to the suspension and the load data.

One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then to interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge in computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded, to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a very small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included in the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects.

A simple transformation of the Equation of Motion (EoM) allows us to directly integrate nonlinear structural models into the recursive Multibody System (MBS) formalism of SIMPACK. This contribution describes how the integration is performed for a discrete Cosserat rod model which has been developed at the ITWM. As a practical example, the run-up of a simplified three-bladed wind turbine is studied where the dynamic deformations of the three blades are calculated by the Cosserat rod model.

In this paper we address the improvement of transfer quality in public mass transit networks. Generally, several transit operators offer service, and our work is motivated by the question of how their timetables can be altered to yield optimized transfer possibilities in the overall network. To achieve this, only small changes to the timetables are allowed. This set-up makes it possible to use a quadratic semi-assignment model to solve the optimization problem. We apply this model, equipped with a new way to assess transfer quality, to the solution of four real-world examples. It turns out that improvements in overall transfer quality can be obtained by such optimization-based techniques. They can therefore serve as a first step towards a decision support tool for planners of regional transit networks.

Open cell foams are a promising and versatile class of porous materials. Open metal foams serve as crash absorbers and catalysts, metal and ceramic foams are used for filtering, and open polymer foams are hidden in everyday items like mattresses or chairs. Due to their high porosity, classical 2d quantitative analysis can give only very limited information about the microstructure of open foams. On the other hand, micro-computed tomography (μCT) yields high-quality 3d images of open foams. Thus 3d imaging is the method of choice for open cell foams. In this report we summarise a variety of methods for the analysis of the resulting volume images of open foam structures, developed or refined and applied at the Fraunhofer ITWM over the course of nearly ten years: the model-based determination of mean characteristics like the mean cell volume or the mean strut thickness, demanding only a simple binarisation, as well as the image-analytic cell reconstruction yielding empirical distributions of cell characteristics.

In order to optimize the acoustic properties of a stacked fiber non-woven, the microstructure of the non-woven is modeled by a macroscopically homogeneous random system of straight cylinders (tubes). That is, the fibers are modeled by a spatially stationary random system of lines (Poisson line process), dilated by a sphere. Pressing the non-woven causes anisotropy. In our model, this anisotropy is described by a one-parametric distribution of the direction of the fibers. In the present application, the anisotropy parameter has to be estimated from 2d reflected-light microscopic images of microsections of the non-woven. After fitting the model, the flow is computed in digitized realizations of the stochastic geometric model using the lattice Boltzmann method. Based on the flow resistivity, the formulas of Delany and Bazley predict the frequency-dependent acoustic absorption of the non-woven in the impedance tube. Using the geometric model, the description of a non-woven with improved acoustic absorption properties is obtained in the following way: first, the fiber thicknesses, porosity, and anisotropy of the fiber system are modified. Then the flow and acoustics simulations are performed on the new sample. These two steps are repeated for various sets of parameters. Finally, the set of parameters for the geometric model leading to the best acoustic absorption is chosen.
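As an illustration of the last modeling step, the following sketch evaluates the published Delany-Bazley correlation for a hard-backed layer; the coefficient values are the commonly cited ones and the variable names are our own, so this should be read as an assumption-laden sketch rather than the report's implementation.

```python
# Sketch of a Delany-Bazley absorption prediction for a rigid-backed layer.
import numpy as np

def delany_bazley_absorption(f, sigma, d, rho0=1.2, c0=343.0):
    X = rho0 * f / sigma                                    # dimensionless frequency
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)   # characteristic impedance
    k  = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)  # wave number
    Zs = -1j * Zc / np.tan(k * d)                           # surface impedance, rigid backing
    r  = (Zs - rho0 * c0) / (Zs + rho0 * c0)                # normal-incidence reflection factor
    return 1 - np.abs(r) ** 2                               # absorption coefficient

f = np.array([250.0, 500.0, 1000.0, 2000.0])
print(delany_bazley_absorption(f, sigma=20000.0, d=0.02))   # 2 cm layer, flow resistivity 20 kPa.s/m^2
```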

IMRT planning on adaptive volume structures – a significant advance of computational complexity
(2004)

In intensity-modulated radiotherapy (IMRT) planning, the oncologist faces the challenging task of finding a treatment plan that he considers to be an ideal compromise between the inherently contradictory goals of delivering a sufficiently high dose to the target while widely sparing critical structures. The search for this a priori unknown compromise typically requires the computation of several plans, i.e. the solution of several optimization problems. This accumulates to a high computational expense due to the large scale of these problems, a consequence of the discrete problem formulation. This paper presents the adaptive clustering method as a new algorithmic concept to overcome these difficulties. The computations are performed on an individually adapted structure of voxel clusters rather than on the original voxels, leading to a decisively reduced computational complexity, as numerical examples on real clinical data demonstrate. In contrast to many other similar concepts, the typical trade-off between a reduction in computational complexity and a loss in exactness can be avoided: the adaptive clustering method produces the optimum of the original problem. This flexible method can be applied to both single- and multi-criteria optimization methods based on most of the convex evaluation functions used in practice.

In this paper, the analysis of one approach for the regularization of pure Neumann problems for second-order elliptic equations, e.g., Poisson's equation and the linear elasticity equations, is presented. The main topic under consideration is the behavior of the condition number of the regularized problem. A general framework for the analysis is presented. This makes it possible to determine a form of the regularization term which leads to the "natural" asymptotic behavior of the condition number of the regularized problem with respect to the mesh parameter. Some numerical results that support the theoretical analysis are presented as well. The main motivation for the presented research is to develop the theoretical background for an efficient and robust implementation of a solver for pure Neumann problems for the linear elasticity equations. Such solvers are usually needed in a number of domain decomposition methods, e.g. FETI. The developed approaches are planned to be used in software under development at the ITWM, e.g. the KneeMech simulation software.
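One common instance of such a regularization, given here purely as an example (the report derives the appropriate form and its scaling from its general framework), augments the weak form of the pure Neumann Poisson problem by a rank-one mean-value term with weight ε:

```latex
\text{find } u \text{ such that } \quad
\int_\Omega \nabla u \cdot \nabla v \, dx
\;+\; \varepsilon \left(\int_\Omega u \, dx\right)\!\left(\int_\Omega v \, dx\right)
\;=\; \int_\Omega f\, v \, dx
\quad \text{for all } v .
```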

Virtual material design is the microscopic variation of materials in the computer, followed by the numerical evaluation of the effect of this variation on the material's macroscopic properties. The goal of this procedure is a material that is improved in some specified sense. Here, we give examples regarding the dependence of the effective elastic moduli of a composite material on the shape of an inclusion. A new approach to solving such interface problems avoids mesh generation and gives second-order accurate results even in the vicinity of the interface. The Explicit Jump Immersed Interface Method is a finite difference method for elliptic partial differential equations that works on an equidistant Cartesian grid in spite of non-grid-aligned discontinuities in equation parameters and solution. Near discontinuities, the standard finite difference approximations are modified by adding correction terms that involve jumps in the function and its derivatives. This work derives the correction terms for two-dimensional linear elasticity with piecewise constant coefficients, i.e. for composite materials. It demonstrates numerically the convergence and approximation properties of the method.

We introduce a refined tree method to compute option prices using the stochastic volatility model of Heston. In a first step, we model the stock and variance processes as two separate trees, with transition probabilities obtained by matching the tree moments up to order two against those of the Heston model. The correlation between the driving Brownian motions in the Heston model is then incorporated by a node-wise adjustment of the probabilities. This adjustment, leaving the marginals fixed, optimizes the match between tree and model correlation. In some nodes, we are even able to further match moments of higher order. Numerically this gives convergence orders faster than 1/N, where N is the number of discretization steps. The accuracy of our method is checked for European option prices against a semi-closed-form solution, and our prices for both European and American options are compared to alternative approaches.
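To make the moment matching concrete, here is a generic sketch for one node of a trinomial tree (not the authors' specific construction): the probabilities on moves +h, 0, -h are chosen so that the first two conditional moments of the model increment, m and v, are reproduced exactly.

```python
# Generic moment matching for one trinomial node: match mean m and variance v
# of the model increment with moves {+h, 0, -h}.
def trinomial_probabilities(m, v, h):
    s = (v + m * m) / (h * h)          # target (scaled) second moment
    pu = 0.5 * (s + m / h)
    pd = 0.5 * (s - m / h)
    pm = 1.0 - pu - pd
    assert min(pu, pm, pd) >= 0.0, "step size h too small for these moments"
    return pu, pm, pd

pu, pm, pd = trinomial_probabilities(m=0.001, v=0.0004, h=0.03)
mean = pu * 0.03 - pd * 0.03                   # check: reproduces m
var = pu * 0.03**2 + pd * 0.03**2 - mean**2    # check: reproduces v
print(pu, pm, pd, mean, var)
```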

We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, moment-based estimators, Cramér-von-Mises Minimum Distance estimators, and, as a special case of quantile-based estimators, Pickands Estimator as well as variants of the latter tuned for higher finite sample breakdown point (FSBP), and lower variance. We further consider an estimator matching population median and median of absolute deviations to the empirical ones (MedMad); again, in order to improve its FSBP, we propose a variant using a suitable asymmetric Mad as constituent, and which may be tuned to achieve an expected FSBP of 34%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking neighborhood setting, i.e., the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimator minimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the FSBP, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error—all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study.

We present some optimality results for robust Kalman filtering. To this end, we introduce the general setup of state space models, which is not limited to a Euclidean or time-discrete framework. We pose the problem of state reconstruction and recall the classical existing algorithms in this context. We then extend the ideal-model setup to allow for outliers, which in this context may be system-endogenous or -exogenous, inducing the somewhat conflicting goals of tracking and attenuation. In quite a general framework, we solve the corresponding minimax MSE problems for both types of outliers separately, resulting in saddle points consisting of an optimally robust procedure and a corresponding least favorable outlier situation. Still insisting on recursivity, we obtain an operational solution, the rLS filter, and variants of it. Exactly robust-optimal filters would need knowledge of certain hard-to-compute conditional means in the ideal model; things would be much easier if these conditional means were linear. Hence, it is important to quantify the deviation of the exact conditional mean from linearity. We obtain a somewhat surprising characterization of linearity for the conditional expectation in this setting. Combining both optimal filter types (for the system-endogenous and -exogenous situations), we come up with a delayed hybrid filter which is able to treat both types of outliers simultaneously. Keywords: robustness, Kalman filter, innovation outlier, additive outlier
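To convey the flavor of this robustification, a hedged sketch of the central idea follows: the classical Kalman correction K(y − Hx) is replaced by a norm-clipped version, which bounds the influence of an outlying observation. The choice of the clipping bound b and all further details are simplified here and follow from the theory in the report.

```python
# Sketch of a clipped (robustified) Kalman correction step; simplified.
import numpy as np

def robust_correction(x_pred, P_pred, y, H, R, b):
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # classical Kalman gain
    dx = K @ (y - H @ x_pred)                      # classical correction
    norm = np.linalg.norm(dx)
    if norm > b:                                   # clip: bound the influence of outliers
        dx = dx * (b / norm)
    return x_pred + dx

x = robust_correction(np.zeros(2), np.eye(2), np.array([5.0]),
                      np.array([[1.0, 0.0]]), np.array([[0.25]]), b=1.0)
print(x)
```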

We are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and a numerical investigation of the model with special focus on inertia effects.

This work deals with the optimal control of a free surface Stokes flow which responds to an applied outer pressure. Typical applications are fiber spinning or thin film manufacturing. We present and discuss two adjoint-based optimization approaches that differ in the treatment of the free boundary as either state or control variable. In both cases the free boundary is modeled as the graph of a function. The PDE-constrained optimization problems are numerically solved by the BFGS method, where the gradient of the reduced cost function is expressed in terms of adjoint variables. Numerical results for both strategies are finally compared with respect to accuracy and efficiency.

This work presents a proof of convergence of a discrete solution to a continuous one. First, the continuous problem is stated as a system of equations describing the filtration process in the pressing section of a paper machine. Two flow regimes appear in the modeling of this problem. The saturated flow is described by Darcy's law and mass conservation. The second regime is described by the Richards approach together with a dynamic capillary pressure model. The finite volume method is used to approximate the system of PDEs. The existence of a discrete solution to the proposed discretization scheme is then proven. Compactness of the set of all discrete solutions for different mesh sizes is proven. The main theorem shows that the discrete solution converges to the solution of the continuous problem. At the end we present numerical studies of the rate of convergence.

Numerical modeling of the electrochemical processes in Li-ion batteries is an emerging topic of great practical interest. In this work we present a Finite Volume discretization of the electrochemical diffusive processes occurring during the operation of Li-ion batteries. The system of equations is a nonlinear, time-dependent diffusive system coupling the Li concentration and the electric potential. The system is formulated at a length scale at which two different types of domains are distinguished, one for the electrolyte and one for the active solid particles in the electrode. The domains can be of highly irregular shape, with the electrolyte occupying the pore space of a porous electrode. The material parameters in each domain differ by several orders of magnitude and can be nonlinear functions of the Li-ion concentration and/or the electric potential. Moreover, special interface conditions are imposed at the boundary separating the electrolyte from the active solid particles. The field variables are discontinuous across such an interface and the coupling is highly nonlinear, rendering direct iteration methods ineffective for such problems. We formulate a Newton iteration for a purely implicit Finite Volume discretization of the coupled system. A series of numerical examples is presented for different types of electrolyte/electrode configurations and material parameters. The convergence of the Newton method is characterized as a function of the nonlinear material parameters as well as of the nonlinearity in the interface conditions.
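As a schematic of the kind of Newton iteration employed for such coupled nonlinear systems (a generic damped Newton sketch with a finite-difference Jacobian and a toy test system, not the Finite Volume battery code itself):

```python
# Generic damped Newton iteration for a nonlinear system F(u) = 0 (schematic only;
# in the battery model F would be assembled from the FV discretization).
import numpy as np

def newton(F, u0, tol=1e-10, max_iter=50):
    u = u0.copy()
    for it in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, it
        n, eps = len(u), 1e-7
        J = np.empty((n, n))
        for j in range(n):                         # finite-difference Jacobian (dense)
            du = np.zeros(n); du[j] = eps
            J[:, j] = (F(u + du) - r) / eps
        step = np.linalg.solve(J, -r)
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(u + lam * step)) >= np.linalg.norm(r):
            lam *= 0.5                             # simple damping / line search
        u = u + lam * step
    return u, max_iter

# toy nonlinear test: F(u) = [u0^2 + u1 - 3, u0 + u1^3 - 5]
sol, its = newton(lambda u: np.array([u[0]**2 + u[1] - 3, u[0] + u[1]**3 - 5]),
                  np.array([1.0, 1.0]))
print(sol, its)
```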

The paper at hand presents a slender body theory for the dynamics of a curved inertial viscous Newtonian fiber. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via the instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model thereby results from the introduction of the intrinsic velocity that characterizes the convective terms.

We consider the contact of two elastic bodies with rough surfaces at the interface. The size of the micro-peaks and valleys is very small compared with the macroscopic size of the bodies' domains. This makes the direct application of the FEM to the calculation of the contact problem prohibitively costly. A method is developed that allows deriving a macroscopic contact condition on the interface. The method involves the two-scale asymptotic homogenization procedure, which takes into account the microgeometry of the interface layer and the stiffnesses of the materials of both domains. The macroscopic contact condition can then be used in a FEM model for the contact problem on the macro level. The averaged contact stiffness obtained allows the replacement of the interface layer in the macro model by the macroscopic contact condition.

The theory of two-scale convergence is applied to the homogenization of elasto-plastic composites with a periodic structure and an exponential hardening law. The theory is based on the fact that the elastic as well as the plastic part of the stress field two-scale converges to a limit which factorizes into parts depending only on macroscopic characteristics, represented in terms of the corresponding part of the homogenised stress tensor, and on a stress concentration tensor related to the micro-geometry and the elastic or plastic micro-properties of the composite components. The theory is applied to a metallic matrix material with Ludwik and Hockett-Sherby hardening laws and purely elastic inclusions in two numerical examples. The results are compared with the results of mechanical averaging based on self-consistent methods.

A spectral theory for the constituents of macroscopically homogeneous random microstructures, modeled as homogeneous random closed sets, is developed and provided with a sound mathematical basis, where the spectrum obtained by Fourier methods corresponds to the angular intensity distribution of x-rays scattered by this constituent. It is shown that the fast Fourier transform applied to three-dimensional images of microstructures obtained by micro-tomography is a powerful tool of image processing. The applicability of this technique is demonstrated in the analysis of images of porous media.
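A minimal sketch of the Fourier step described above (illustrative only; the actual analysis involves further processing such as radial and angular binning, and the microstructure here is a random placeholder): the power spectrum of the indicator image of one constituent is obtained directly with the FFT.

```python
# Minimal sketch: power spectrum of the indicator image of one constituent
# of a 3d microstructure via the FFT.
import numpy as np

img = (np.random.rand(64, 64, 64) < 0.3).astype(float)   # placeholder binary microstructure
fluct = img - img.mean()                                  # remove the mean (zero-frequency peak)
spectrum = np.abs(np.fft.fftn(fluct)) ** 2 / fluct.size   # power spectrum
print(spectrum.shape, spectrum.max())
```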

Two approaches for determining the Euler-Poincaré characteristic of a set observed on lattice points are considered in the context of image analysis: the integral-geometric and the polyhedral approach. Information about the set is assumed to be available on lattice points only. In order to retain properties of the Euler number and to provide a good approximation of the true Euler number of the original set in Euclidean space, the appropriate choice of adjacency in the lattice for the set and its background is crucial. Adjacencies are defined using tessellations of the whole space into polyhedra. In R³, two new 14-adjacencies are introduced in addition to the well-known 6- and 26-adjacencies. For the Euler number of a set and its complement, a consistency relation holds. Each of the pairs of adjacencies (14.1, 14.1), (14.2, 14.2), (6, 26), and (26, 6) is shown to be a pair of complementary adjacencies with respect to this relation. That is, the approximations of the Euler numbers are consistent if the set and its background (complement) are equipped with such a pair of adjacencies. Furthermore, sufficient conditions for the correctness of the approximations of the Euler number are given. The analysis of selected microstructures and a simulation study illustrate how the estimated Euler number depends on the chosen adjacency. They also show that there is no uniquely best pair of adjacencies with respect to the estimation of the Euler number of a set in Euclidean space.

Home Health Care (HHC) services are becoming increasingly important in Europe’s aging societies. Elderly people have varying degrees of need for assistance and medical treatment. It is advantageous to allow them to live in their own homes as long as possible, since a long-term stay in a nursing home can be much more costly for the social insurance system than a treatment at home providing assistance to the required level. Therefore, HHC services are a cost-effective and flexible instrument in the social system. In Germany, organizations providing HHC services are generally either larger charities with countrywide operations or small private companies offering services only in a city or a rural area. While the former have a hierarchical organizational structure and a large number of employees, the latter typically only have some ten to twenty nurses under contract. The relationship to the patients (“customers”) is often long-term and can last for several years. Therefore acquiring and keeping satisfied customers is crucial for HHC service providers and intensive competition among them is observed.

In this paper, a multi-period supply chain network design problem is addressed. Several aspects of practical relevance are considered, such as those related to the financial decisions that must be accounted for by a company managing a supply chain. The decisions to be made comprise the location of the facilities, the flow of commodities, and the investments to make in activities alternative to those directly related to the supply chain design. Uncertainty is assumed for demand and interest rates and is described by a set of scenarios. Therefore, for the entire planning horizon, a tree of scenarios is built. A target is set for the return on investment, and the risk of falling below it is measured and accounted for. The service level is also measured and included in the objective function. The problem is formulated as a multi-stage stochastic mixed-integer linear programming problem. The goal is to maximize the total financial benefit. An alternative formulation based upon the paths in the scenario tree is also proposed. A methodology for measuring the value of the stochastic solution in this problem is discussed. Computational tests using randomly generated data are presented, showing that the stochastic approach is worth considering for this type of problem.

»For nothing is of value to man as man that he cannot do with passion«
(2001)

Lecture given on the occasion of the award of the Academy Prize of the State of Rhineland-Palatinate on 21 November 2001. What makes a good university teacher? There are certainly many different, subject-specific answers to this question, but also a few general points: it requires "passion" for research (Max Weber), out of which the enthusiasm for teaching grows. Research and teaching belong together in order to convey science as a living activity. The lecture gives examples of how, in applied mathematics, research problems grow out of practical everyday questions and then flow into teaching at various levels (from secondary school to graduate college); it thereby also leads over to a current research area, multiscale analysis with its manifold applications in image processing, materials development, and fluid mechanics, which is only touched upon briefly. Mathematics appears here as a modern key technology that also has close ties to the humanities and social sciences.

No doubt: mathematics has become a technology in its own right, maybe even a key technology. Technology may be defined as the application of science to the problems of commerce and industry. And science? Science may be defined as developing, testing, and improving models for the prediction of system behavior; the language used to describe these models is mathematics, and mathematics provides methods to evaluate these models. Here we are! Why has mathematics become a technology only recently? Because it got a tool to evaluate complex, "near to reality" models: the computer! The model may be quite old - the Navier-Stokes equations describe flow behavior rather well, but to solve these equations for realistic geometries and higher Reynolds numbers with sufficient precision is a real challenge even for powerful parallel computers. Make the models as simple as possible, as complex as necessary - and then evaluate them with the help of efficient and reliable algorithms: these are genuine mathematical tasks.