Lithium-ion batteries are nowadays broadly used in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, and digital cameras. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight, high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes, an anode and a cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases: solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which the complex porous structure of the electrodes can be distinguished the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom, which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques: the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions.
We rigorously determine the asymptotic order of the interface exchange current densities and perform a comprehensive numerical study to validate the derived homogenized Li-ion battery model. To upscale the microscale battery problem in the case of random electrode microstructure, we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and show numerical convergence of the method we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.

Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life: they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects: heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as the individual heat sources, in order to assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches, using small-strain models and finite-strain models. In the small-strain models, the initial geometry and the current geometry coincide. The model consists of a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. In the finite-strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material dependent on the lithium-ion concentration, using data for silicon electrodes. We recover the general behavior of the stress known from physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that regard, we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities, and we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method, and the nonlinear systems are linearized using Newton's method. All of the discretized models are implemented in a C++ framework developed during the thesis.
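As an illustration of the time-stepping strategy named above, implicit Backward Euler with Newton linearization, here is a minimal one-dimensional sketch. The model problem du/dt = -u^3 and all names are ours, not taken from the thesis code:

```python
import numpy as np

def backward_euler_newton(f, dfdu, u0, dt, n_steps, tol=1e-12, max_iter=20):
    """Integrate du/dt = f(u) with the implicit Backward Euler method.

    Each time step solves the nonlinear equation
        g(u) = u - u_prev - dt * f(u) = 0
    with Newton's method, using the analytic derivative dfdu.
    """
    u = float(u0)
    history = [u]
    for _ in range(n_steps):
        u_prev = u
        u_new = u_prev  # Newton initial guess: previous time step
        for _ in range(max_iter):
            g = u_new - u_prev - dt * f(u_new)
            dg = 1.0 - dt * dfdu(u_new)
            step = g / dg
            u_new -= step
            if abs(step) < tol:
                break
        u = u_new
        history.append(u)
    return np.array(history)

# Model problem: du/dt = -u^3, u(0) = 1; exact solution u(t) = 1/sqrt(1 + 2t).
sol = backward_euler_newton(lambda u: -u**3, lambda u: -3.0 * u**2,
                            u0=1.0, dt=0.01, n_steps=100)
```

In a PDE setting the scalar Newton update becomes a linear solve with the Jacobian matrix, but the structure of the loop is the same.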

It is well known that the microscopic structure of a material strongly influences its macroscopic properties. Moreover, advances in imaging technologies make it possible to capture the complexity of structures at ever smaller scales. Therefore, more sophisticated image analysis techniques are needed.
This thesis provides tools to geometrically characterize different types of three-dimensional structures, with applications to industrial production and to materials science. Our goal is to enhance methods that allow the extraction of geometric features from images and the automatic processing of this information.
In particular, we investigate which characteristics are sufficient and necessary to infer the desired information, such as particle classification for technical cleanliness and the fitting of stochastic models in materials science.
In the production lines of the automotive industry, dirt particles collect on the surface of mechanical components. Residual dirt might reduce the performance and durability of assembled products. Geometric characterization of these particles allows us to identify their potential danger. While the current standards are based on 2d microscopic images, we extend the characterization to 3d.
In particular, we provide a collection of parameters that exhaustively describe the size and shape of three-dimensional objects and can be efficiently estimated from binary images. Furthermore, we show that only a few features are sufficient to classify particles according to the standards of technical cleanliness.
In the context of materials science, we consider two types of microstructures: fiber systems and foams.
Stochastic geometry provides the foundation for versatile models able to encompass the geometry observed in the samples. To allow automatic model fitting, we need rules stating which parameters of the model yield the best-fitting characteristics. However, the validity of such rules strongly depends on the properties of the structures and on the choice of the model. For instance, an isotropic orientation distribution yields the best theoretical results for Boolean models and Poisson processes of cylinders with circular cross sections. Nevertheless, fiber systems in composites are often anisotropic.
Starting from analytical results in the literature, we derive formulae for anisotropic Poisson processes of cylinders with polygonal cross sections that can be used directly in applications. We apply this procedure to a sample of medium-density fiberboard. Even though the image resolution does not allow reliable estimation of the characteristics of the single fibers, we can fit Boolean models and Poisson cylinder processes. In particular, we show the complete model fitting and validation procedure for cylinders with circular and square cross sections.
Different problems arise when modeling cellular materials. Motivated by the physics of foams, random Laguerre tessellations are a good choice to model the pore system of foams. Considering tessellations generated by systems of non-overlapping spheres allows us to control the cell size distribution, but comes at the loss of an analytical description of the model. Nevertheless, automatic model fitting can still be achieved by approximating the characteristics of the tessellation as functions of the model parameters. We investigate how to improve the choice of the model parameters. Angles between facets and between edges have not been considered so far. We show that the distributions of angles in Laguerre tessellations depend on the model parameters; thus, including the moments of the angles still allows automatic model fitting. Moreover, we propose an algorithm to estimate angles from images of real foams. We observe that angles are matched well in random Laguerre tessellations even when they are not employed to choose the model parameters. Then we concentrate on the edge length distribution. Laguerre tessellations contain many more short edges than real foams. To deal with this problem, we consider relaxed models. Relaxation refers to topological and structural modifications of a tessellation that make it comply with Plateau's laws of mechanical equilibrium. We inspect samples of different types of foams: closed- and open-cell, polymeric and metallic. By comparing the geometric characteristics of the model and of the relaxed tessellations, we conclude that whether the relaxation improves the edge length distribution strongly depends on the type of foam.

Test rig optimization
(2014)

Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete, and continuous aspects. At the same time, we manage to avoid dealing with the rainflow-counting (RFC) method. RFC is an algorithm that extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and hence makes optimization of the test rig intractable.
The block structure of the load signals is assumed from the beginning. This greatly reduces the complexity of the problem without shrinking the feasible set. We also optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extend the feasible set. As a result, the new model gives significantly better results compared with other approaches to test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and an interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, a sum of functions has to be maximized. We propose a new branch-and-bound technique that uses local maxima of every summand.

In the presented work, we make use of the strong reciprocity between kinematics and geometry to build a geometrically nonlinear, shearable, low-order discrete shell model of Cosserat type defined on triangular meshes, from which we deduce a rotation-free Kirchhoff-type model with the triangle vertex positions as degrees of freedom. Both models behave physically plausibly already on very coarse meshes and show good convergence properties on regular meshes. Moreover, from the theoretical side, this deduction provides a common geometric framework for several existing models.

A simple transformation of the Equation of Motion (EoM) allows us to directly integrate nonlinear structural models into the recursive Multibody System (MBS) formalism of SIMPACK. This contribution describes how the integration is performed for a discrete Cosserat rod model which has been developed at the ITWM. As a practical example, the run-up of a simplified three-bladed wind turbine is studied where the dynamic deformations of the three blades are calculated by the Cosserat rod model.

We present the derivation of a simple viscous damping model of Kelvin-Voigt type for geometrically exact Cosserat rods from three-dimensional continuum theory. Assuming a homogeneous and isotropic material, we obtain explicit formulas for the damping parameters of the model in terms of the well-known stiffness parameters of the rod and the retardation time constants, defined as the ratios of bulk and shear viscosities to the respective elastic moduli. We briefly discuss the range of validity of our damping model and illustrate its behaviour with a numerical example.
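For orientation only, a sketch of the generic Kelvin-Voigt structure behind such formulas, in our own notation rather than the paper's: for a one-dimensional Kelvin-Voigt element the stress combines an elastic and a viscous part,

```latex
% One-dimensional Kelvin-Voigt element (generic notation, not the paper's):
\sigma = E\,\varepsilon + \eta\,\dot{\varepsilon}
       = E\bigl(\varepsilon + \tau\,\dot{\varepsilon}\bigr),
\qquad \tau = \frac{\eta}{E},
```

so each damping coefficient is the corresponding stiffness parameter scaled by a retardation time, the ratio of a viscosity to its elastic modulus.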

In this paper, we propose multi-level Monte Carlo (MLMC) methods that use ensemble-level mixed multiscale methods in the simulation of multi-phase flow and transport. The main idea of ensemble-level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. We consider two types of ensemble-level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble-level method (NLSO) and (2) the local-solve-online ensemble-level method (LSO). Both mixed multiscale methods use a number of snapshots of the permeability media to generate a multiscale basis. As a result, in the offline stage we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the no-local-solve-online ensemble-level method, one uses the whole set of pre-computed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble-level method, one uses the pre-computed functions to construct a multiscale basis for a particular realization; with this basis, the solution corresponding to this particular realization is approximated in the LSO mixed MsFEM. In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that one uses to pre-compute the multiscale basis. We note that LSO approaches share similarities with reduced basis methods [11, 21, 22].
In multi-level Monte Carlo methods ([14, 13]), more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. By selecting the number of expensive and inexpensive simulations carefully, one can show that MLMC methods provide better accuracy at the same cost as MC methods. In our simulations, our goal is twofold. First, we compare the NLSO and LSO mixed MsFEMs; in particular, we show that the NLSO mixed MsFEM is more accurate than the LSO mixed MsFEM. Further, we use both approaches in the context of MLMC to speed up MC calculations. We present basic aspects of the algorithm and numerical results for coupled flow and transport in heterogeneous porous media.
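The MLMC idea of combining many cheap and few expensive simulations can be sketched on a toy problem. The following is our own minimal illustration with a geometric Brownian motion (not the paper's flow-and-transport setting): level l uses 2^l Euler steps, and fine and coarse paths on a level share Brownian increments so the correction terms have small variance:

```python
import numpy as np

def euler_gbm(S0, r, sigma, dW):
    """Euler-Maruyama endpoint at t=1 for dS = r*S*dt + sigma*S*dW."""
    dt = 1.0 / len(dW)
    S = S0
    for w in dW:
        S += r * S * dt + sigma * S * w
    return S

def mlmc_estimate(L, N, S0=1.0, r=0.05, sigma=0.2, rng=None):
    """Multi-level Monte Carlo estimate of E[S(1)] via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}], with N[l] samples
    on level l (2**l Euler steps on the fine grid of that level)."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for l in range(L + 1):
        n_fine = 2 ** l
        acc = 0.0
        for _ in range(N[l]):
            dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), n_fine)
            Pf = euler_gbm(S0, r, sigma, dW)
            if l == 0:
                acc += Pf
            else:
                # coarse path reuses the fine increments, summed pairwise
                dWc = dW.reshape(-1, 2).sum(axis=1)
                acc += Pf - euler_gbm(S0, r, sigma, dWc)
        est += acc / N[l]
    return est

# Many samples on cheap levels, few on expensive ones.
est = mlmc_estimate(L=4, N=[4000, 2000, 1000, 500, 250], rng=0)
# Exact reference for GBM: E[S(1)] = S0 * exp(r).
```

The sample counts N[l] decrease with the level, which is exactly the cost-accuracy trade-off the abstract describes.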

The direction splitting approach proposed earlier in [6], aiming at the efficient solution of the Navier-Stokes equations, is extended and adapted here to solve the Navier-Stokes-Brinkman equations describing incompressible flows in plain and in porous media. The resulting pressure equation is a perturbation of the incompressibility constraint, using a direction-wise factorized operator as proposed in [6]. We prove that this approach is unconditionally stable for the unsteady Navier-Stokes-Brinkman problem. We also provide numerical illustrations of the method's accuracy and efficiency.

Granular systems in a solid-like state exhibit properties like stiffness dependence on stress, dilatancy, yield, or incremental non-linearity that can be described within the continuum mechanical framework. Different constitutive models have been proposed in the literature, based either on relations between some components of the stress tensor or on a quasi-elastic description. After a brief description of these models, the hyperelastic law recently proposed by Jiang and Liu [1] is investigated. In this framework, the stress-strain relation is derived from an elastic strain energy density where the stability properties are linked to a Drucker-Prager yield criterion. Further, a numerical method based on finite element discretization and Newton-Raphson iterations is presented to solve the force balance equation. The 2D numerical examples presented in this work show that the stress distributions can be computed not only for triangular domains, as previously done in the literature, but also for more complex geometries. If the slope of the heap is greater than a critical value, numerical instabilities appear and no elastic solution can be found, as predicted by the theory. As the main result, the dependence of the material parameter Xi on the maximum angle of repose is established.

In this work, some model reduction approaches for performing simulations with a pseudo-2D model of a Li-ion battery are presented. A full pseudo-2D model of the processes in Li-ion batteries is presented following [3], and three methods to reduce the order of the full model are considered: i) directly reducing the model order using proper orthogonal decomposition; ii) using fractional time step discretization in order to solve the equations in a decoupled way; and iii) reformulation approaches for the diffusion in the solid phase. Combinations of the above methods are also considered. Results from numerical simulations are presented, and the efficiency and accuracy of the model reduction approaches are discussed.
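The first reduction step mentioned above, proper orthogonal decomposition, amounts to an SVD of a snapshot matrix and truncation by an energy criterion. A minimal sketch, with our own toy snapshot data rather than battery states:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix.

    Columns of `snapshots` are solution states; keep the smallest number of
    left singular vectors capturing `energy` of the squared singular values.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# Toy snapshots spanned by two spatial modes with time-varying amplitudes.
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack(
    [np.sin(np.pi * x) * np.cos(t) + 0.5 * np.sin(2 * np.pi * x) * np.sin(t)
     for t in np.linspace(0.0, 2.0, 30)]
)
basis, s = pod_basis(snaps)
# Relative error of projecting the snapshots onto the reduced basis.
err = np.linalg.norm(snaps - basis @ (basis.T @ snaps)) / np.linalg.norm(snaps)
```

In a reduced-order model, the PDE is then Galerkin-projected onto `basis`, so only `r` amplitude equations remain to be solved.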

Worldwide, the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive, and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies have to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and subtracted from total demand. The methodology separates the impact of weather and capacity. Efficiency is transformed to the real line using the logit transformation and modeled as a stochastic process. Installed capacity is assumed to be a deterministic function of time. In a case study, the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated and the results are compared to market futures and option prices. The trajectories show typical features seen in market prices in recent years, and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices, it is found that renewable infeed increases the volatility of forward prices in times of low demand, but can reduce volatility in peak hours. Prices for different scenarios of installed wind and solar capacity are compared, and the merit-order effect of increased wind and solar capacity is calculated. It is found that wind has a stronger overall effect than solar, but both are even in peak hours.
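The logit device described above can be sketched in a few lines. This is our own minimal illustration, with an AR(1) (discretized mean-reverting) process standing in for whatever stochastic model the thesis actually uses; the parameter values are arbitrary:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(y):
    return 1.0 / (1.0 + np.exp(-y))

def simulate_efficiency(n_steps, e0=0.3, kappa=0.5, sigma=0.2, rng=None):
    """Simulate an infeed 'efficiency' process constrained to (0, 1).

    The efficiency is logit-transformed to the real line and modeled there
    as a mean-reverting AR(1) process; transforming back with the inverse
    logit guarantees that all simulated values stay strictly in (0, 1).
    """
    rng = np.random.default_rng(rng)
    y = mu = logit(e0)          # start at, and revert to, the initial level
    path = [e0]
    for _ in range(n_steps):
        y += kappa * (mu - y) + sigma * rng.normal()
        path.append(inv_logit(y))
    return np.array(path)

eff = simulate_efficiency(1000, rng=42)
```

Residual demand would then be total demand minus `eff` times the (deterministic) installed capacity.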

In this work we extend the multiscale finite element method (MsFEM), as formulated by Hou and Wu in [14], to the PDE system of linear elasticity. The application, motivated by the multiscale analysis of highly heterogeneous composite materials, is twofold. Resolving the heterogeneities on the finest scale, we utilize the linear MsFEM basis for the construction of robust coarse spaces in the context of two-level overlapping Domain Decomposition preconditioners. We motivate and explain the construction and present numerical results validating the approach. Under the assumption that the material jumps are isolated, that is, they occur only in the interior of the coarse grid elements, our experiments show uniform convergence rates independent of the contrast in the Young's modulus within the heterogeneous material. Otherwise, if no restrictions on the position of the high-coefficient inclusions are imposed, robustness cannot be guaranteed any more. These results justify the expectation of obtaining coefficient-explicit condition number bounds for the PDE system of linear elasticity similar to the existing ones for scalar elliptic PDEs given in the work of Graham, Lechner, and Scheichl [12]. Furthermore, we numerically observe the properties of the MsFEM coarse space for linear elasticity in an upscaling framework. To this end, we present experimental results showing the approximation errors of the multiscale coarse space with respect to the fine-scale solution.

Paper production is of significant importance for society and a challenging topic for scientific investigation. This study is concerned with simulations of the pressing section of a paper machine. A two-dimensional model is developed to account for the water flow within the pressing zone. A Richards-type equation is used to describe the flow in the unsaturated zone. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and co-workers (Hassanizadeh et al., 2002; Hassanizadeh, Gray, 1990, 1993a) is adopted for the paper production process.
The mathematical model accounts for the co-existence of saturated and unsaturated zones in a multilayer computational domain. The discretization is performed by the MPFA-O method. The numerical experiments are carried out for parameters which are typical of the production process. The static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
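For orientation, the dynamic capillary pressure-saturation relation of Hassanizadeh and Gray has the following generic form (notation ours, not necessarily the paper's):

```latex
% Dynamic capillary pressure-saturation relation (Hassanizadeh, Gray):
% the phase pressure difference deviates from the static (equilibrium)
% curve by a term proportional to the rate of saturation change.
p_n - p_w \;=\; p_c^{\mathrm{stat}}(S) \;-\; \tau\,\frac{\partial S}{\partial t},
\qquad \tau \ge 0,
```

where setting the dynamic coefficient to zero recovers the static relation tested for comparison in the experiments.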

In this paper, we present a viscoelastic rod model that is suitable for fast and accurate dynamic simulations. It is based on Cosserat's geometrically exact theory of rods and is able to represent extension, shearing ('stiff' dof), bending and torsion ('soft' dof). For inner dissipation, a consistent damping potential proposed by Antman is chosen. We parametrise the rotational dof by unit quaternions and directly use the quaternionic evolution differential equation for the discretisation of the Cosserat rod curvature. The discrete version of our rod model is obtained via a finite difference discretisation on a staggered grid. After an index reduction from three to zero, the right-hand side function f and the Jacobian \(\partial f/\partial(q, v, t)\) of the dynamical system \(\dot{q} = v, \dot{v} = f(q, v, t)\) are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. A comparison with Abaqus finite element results demonstrates the correct mechanical behavior of our discrete rod model. For the time integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computational times within milliseconds, it is suitable for interactive applications in 'virtual reality' as well as for multibody dynamics simulation.

This work presents a proof of convergence of a discrete solution to a continuous one. First, the continuous problem is stated as a system of equations which describe the filtration process in the pressing section of a paper machine. Two flow regimes appear in the modeling of this problem. The saturated flow is modeled by Darcy's law and mass conservation. The second regime is described by the Richards approach together with a dynamic capillary pressure model. The finite volume method is used to approximate the system of PDEs. Then the existence of a discrete solution to the proposed scheme is proven, as well as the compactness of the set of all discrete solutions for different mesh sizes. The main theorem shows that the discrete solution converges to the solution of the continuous problem. At the end, we present numerical studies of the rate of convergence.

An efficient mathematical model to virtually generate woven metal wire meshes is presented. The accuracy of this model is verified by comparing virtual structures with three-dimensional images of real meshes produced via computed tomography. Virtual structures are generated for three types of metal wire meshes using only easy-to-measure parameters. For these geometries the velocity-dependent pressure drop is simulated and compared with measurements performed by GKD - Gebr. Kufferath AG. The simulation results lie within the tolerances of the measurements. The generation of the structures and the numerical simulations were done at GKD using the Fraunhofer GeoDict software.

Continuously improving imaging technologies make it possible to capture the complex spatial geometry of particles. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize size and shape descriptors used in 2d particle analysis to the spatial setting.
While defined and meaningful for arbitrary particles, the characteristics were selected with the application to technical cleanliness in mind. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography allows dangerous particles to be detected reliably and with high throughput. It thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips, and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.

Input loads are essential for the numerical simulation of vehicle multibody system (MBS) models. Such load data is called invariant if it is independent of the specific system under consideration. A digital road profile, e.g., can be used to excite MBS models of different vehicle variants. However, quantities efficiently obtained by measurement, such as wheel forces, are typically not invariant in this sense. This leads to the general task of deriving invariant loads on the basis of measurable, but system-dependent, quantities. We present an approach to derive input data for full-vehicle simulation that can be used to simulate different variants of a vehicle MBS model. An important ingredient of this input data is a virtual road profile computed by optimal control methods.

This work presents the dynamic capillary pressure model (Hassanizadeh, Gray, 1990, 1993a) adapted for the needs of paper manufacturing process simulations. The dynamic capillary pressure-saturation relation is included in a one-dimensional simulation model for the pressing section of a paper machine. The one-dimensional model is derived from a two-dimensional model by averaging with respect to the vertical direction. Then, the model is discretized by the finite volume method and solved by Newton’s method. The numerical experiments are carried out for parameters typical for the paper layer. The dynamic capillary pressure-saturation relation shows significant influence on the distribution of water pressure. The behaviour of the solution agrees with laboratory experiments (Beck, 1983).

In this paper we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP by using a Set Covering formulation, because profits are potentially higher than with the usual Set Partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process which is known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: Budget balance, meaning that no more money is distributed than profit was generated, individual rationality, which guarantees to each player that participation does not lead to a loss, and the core property, which provides every subcoalition with enough money to keep them from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19], and the Shapley Value [17].

We introduce a refined tree method to compute option prices using the stochastic volatility model of Heston. In a first step, we model the stock and variance processes as two separate trees, with transition probabilities obtained by matching tree moments up to order two against those of the Heston model. The correlation between the driving Brownian motions in the Heston model is then incorporated by a node-wise adjustment of the probabilities. This adjustment, leaving the marginals fixed, optimizes the match between tree and model correlation. In some nodes, we are even able to match moments of higher order. Numerically, this gives convergence orders faster than 1/N, where N is the number of discretization steps. The accuracy of our method is checked for European option prices against a semi-closed-form solution, and our prices for both European and American options are compared to alternative approaches.
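The core device, matching tree moments up to order two, can be illustrated in isolation. The following is our own toy sketch (Euler conditional moments of the Heston variance process, arbitrary parameter values), not the paper's actual tree construction:

```python
import numpy as np

def trinomial_probs(nodes, mean, var):
    """Transition probabilities onto three tree nodes that sum to one and
    match the first two moments of the target distribution.

    Solves the 3x3 linear system
        p_d + p_m + p_u                 = 1
        p_d*x_d + p_m*x_m + p_u*x_u     = mean
        sum_i p_i * (x_i - mean)**2     = var
    """
    x = np.asarray(nodes, dtype=float)
    A = np.vstack([np.ones(3), x, (x - mean) ** 2])
    b = np.array([1.0, mean, var])
    return np.linalg.solve(A, b)

# One step of the Heston variance process dv = kappa*(theta - v)dt + xi*sqrt(v)dW,
# using Euler conditional moments (illustrative parameter values).
kappa, theta, xi, dt, v = 2.0, 0.04, 0.3, 0.01, 0.04
mean = v + kappa * (theta - v) * dt   # conditional mean
var = xi ** 2 * v * dt                # conditional variance
p = trinomial_probs([v - 0.01, v, v + 0.01], mean, var)
```

In a full tree method one must additionally ensure the resulting probabilities are nonnegative, e.g. by adapting the node spacing.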

In this paper we deal with different statistical models of real-world accident data in order to quantify the effectiveness of a safety function or a safety configuration (meaning a specific combination of safety functions) in vehicles. It is shown that the effectiveness can be estimated via the so-called relative risk, even if the effectiveness depends on a confounding variable, which may be categorical or continuous. No concrete statistical model is necessary for this; that is, the resulting estimate is of nonparametric nature. In a second step, the common and, from a statistical point of view, classical logistic regression model is investigated. The main emphasis is laid on the understanding of the model and the interpretation of the occurring parameters. It is shown that the effectiveness of the safety function can also be detected via such a logistic approach and that relevant confounding variables can and should be taken into account. The interpretation of the parameters related to the confounder and the quantification of the influence of the confounder is shown to be rather problematic. All the theoretical results are illustrated by numerical data examples.

This report describes the calibration and completion of the volatility cube in the SABR model. The description is based on a project done for Assenagon GmbH in Munich. However, we use fictitious market data which resembles realistic market data. The problem posed by our client is formulated in section 1. Here we also motivate why this is a relevant problem. The SABR model is briefly reviewed in section 2. Section 3 discusses the calibration and completion of the volatility cube. An example is presented in section 4. We conclude by suggesting possible future research in section 5.

In this article, a new model predictive control approach for nonlinear stochastic systems is presented. The approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first one gives an estimate for the current state based on the current output of the system; the second one gives an estimate of a control input for the system. This is done by adapting the basic model predictive control strategies for the second particle filter. The new approach is then applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
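The building block shared by both filters is the propagate-weight-resample cycle of a bootstrap particle filter. A minimal generic sketch follows; the scalar toy model and all noise levels are invented for illustration, and the paper's MPC construction layers on top of this cycle.

```python
import math
import random

# Minimal bootstrap particle filter step: propagate the particles through the
# state transition, weight them by the likelihood of the new measurement, and
# resample. This sketches only the generic building block; the paper's scheme
# runs a second filter of this kind over candidate control inputs.
def pf_step(particles, transition, likelihood, y, rng):
    proposed = [transition(p) for p in particles]
    weights = [likelihood(y, p) for p in proposed]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(proposed, weights=weights, k=len(proposed))  # resample

rng = random.Random(0)
particles = [rng.gauss(0.0, 1.0) for _ in range(2000)]
transition = lambda x: 0.5 * x + rng.gauss(0.0, 0.1)             # toy dynamics
likelihood = lambda y, x: math.exp(-0.5 * ((y - x) / 0.1) ** 2)  # toy sensor
particles = pf_step(particles, transition, likelihood, 1.0, rng)
estimate = sum(particles) / len(particles)                       # state estimate
```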

The modelling of hedge funds poses a difficult problem, since the available reported data sets are often small and incomplete. We propose a switching regression model for hedge funds, in which the coefficients are able to switch between different regimes. The coefficients are governed by a Markov chain in discrete time. The different states of the Markov chain represent different states of the economy, which influence the performance of the independent variables. Hedge fund indices are chosen as regressors. The parameter estimation for the switching parameter as well as for the switching error term is done through a filtering technique for hidden Markov models developed by Elliott (1994). Recursive parameter estimates are calculated through a filter-based EM-algorithm, which uses the hidden information of the underlying Markov chain. Our switching regression model is applied to hedge fund series and hedge fund indices from the HFR database.

In this paper, a three-dimensional stochastic model for the lay-down of fibers on a moving conveyor belt in the production of nonwoven materials is derived. The model is based on stochastic differential equations describing the resulting position of the fiber on the belt under the influence of turbulent air flows. The model presented here is an extension of an existing surrogate model, see [6, 3].

The optimal design of rotational production processes for glass wool manufacturing poses severe computational challenges to mathematicians, natural scientists and engineers. In this paper we focus exclusively on the spinning regime, where thousands of viscous thermal glass jets are formed by fast air streams. Homogeneity and slenderness of the spun fibers are the quality features of the final fabric. Their prediction requires the computation of the fluid-fiber interactions, which involves solving a complex three-dimensional multiphase problem with appropriate interface conditions. This is practically impossible due to the required high resolution and adaptive grid refinement. Therefore, we propose an asymptotic coupling concept. Treating the glass jets as viscous thermal Cosserat rods, we tackle the multiscale problem with the help of momentum (drag) and heat exchange models that are derived on the basis of slender-body theory and homogenization. A weak iterative coupling algorithm, based on the combination of commercial software and self-implemented code for the flow and rod solvers, respectively, then makes the simulation of the industrial process possible. For the boundary value problem of the rod we suggest in particular an adapted collocation-continuation method. Consequently, this work establishes a promising basis for future optimization strategies.

The scope of this paper is to enhance the model of the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stocks in his own company, whose value he can directly influence via spending work effort. The executive is thereby characterized by a parameter of risk aversion and the two work effectiveness parameters, inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort an executive will apply, and we derive the optimal investment strategies of the executive. Furthermore, we determine an up-front fair cash compensation applying an indifference utility rationale. Our study shows to a large extent that the results previously obtained are robust under the choice of the utility/disutility set-up.

We will present a rigorous derivation of the equations and interface conditions for ion, charge and heat transport in Li-ion insertion batteries. The derivation is based exclusively on universally accepted principles of nonequilibrium thermodynamics and the assumption of a one-step intercalation reaction at the interface of electrolyte and active particles. Without loss of generality, the transport in the active particle is assumed to be isotropic. The electrolyte is described as a fully dissociated salt in a neutral solvent. The presented theory is valid for transport on a spatial scale for which local charge neutrality holds, i.e., beyond the scale of the diffuse double layer. Charge neutrality is explicitly used to determine the correct set of thermodynamically independent variables. The theory guarantees strictly positive entropy production. The various contributions to the Peltier coefficients for the interface between the active particles and the electrolyte, as well as the contributions to the heat of mixing, are obtained as a result of the theory.

Simulation of multibody systems (MBS) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained in importance in recent years, e.g. for HIL, MIL or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms: one microcontroller and two real-time processor boards. In addition to numerical C code, the latter platforms provide the possibility to build the model with a commercial MBS tool, which is also investigated. A survey of different ways of generating code and equations of MBS models is given and discussed concerning handling, possible limitations as well as performance. The presented benchmarks are processed under the conditions of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.
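The simplest member of the Rosenbrock family is the linearly implicit Euler method: instead of solving a nonlinear system per step, one linear solve with the Jacobian suffices, which is why fixed-step Rosenbrock schemes suit real-time integration. A scalar sketch, not the production code from the paper:

```python
# Linearly implicit (Rosenbrock-type) Euler step for a scalar ODE y' = f(y):
# solve (1 - h * f'(y)) * k = h * f(y), then set y_new = y + k.
# One linear solve per step and no Newton iteration: attractive under
# fixed-step real-time constraints.
def rosenbrock_euler_step(f, dfdy, y, h):
    k = h * f(y) / (1.0 - h * dfdy(y))
    return y + k

# Stiff test problem y' = -50 y: an explicit Euler step with h = 0.1 would be
# unstable (growth factor -4), while the linearly implicit step contracts.
f = lambda y: -50.0 * y
dfdy = lambda y: -50.0
y = 1.0
for _ in range(10):
    y = rosenbrock_euler_step(f, dfdy, y, 0.1)  # each step multiplies y by 1/6
```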

This work deals with the modeling and simulation of slender viscous jets exposed to gravity and rotation, as they occur in rotational spinning processes. In terms of slender-body theory we show the asymptotic reduction of a viscous Cosserat rod to a string system for vanishing slenderness parameter. We propose two string models, i.e. inertial and viscous-inertial string models, that differ in the closure conditions and hence yield a boundary value problem and an interface problem, respectively. We investigate the existence regimes of the string models in the four-parameter space of Froude number, Rossby number, Reynolds number and jet length. The convergence regimes where the respective string solution is the asymptotic limit to the rod turn out to be disjoint and to cover nearly the whole parameter space. We explore the transition hyperplane and analytically derive the low and high Reynolds number limits. Numerical studies of the stationary jet behavior for different parameter ranges complete the work.

Numerical modeling of electrochemical processes in Li-ion batteries is an emerging topic of great practical interest. In this work we present a Finite Volume discretization of the electrochemical diffusive processes occurring during the operation of Li-ion batteries. The system of equations is a nonlinear, time-dependent diffusive system coupling the Li concentration and the electric potential. The system is formulated at a length scale at which two different types of domains are distinguished, one for the electrolyte and one for the active solid particles in the electrode. The domains can be of highly irregular shape, with the electrolyte occupying the pore space of a porous electrode. The material parameters in each domain differ by several orders of magnitude and can be nonlinear functions of the Li ion concentration and/or the electric potential. Moreover, special interface conditions are imposed at the boundary separating the electrolyte from the active solid particles. The field variables are discontinuous across such an interface and the coupling is highly nonlinear, rendering direct iteration methods ineffective for such problems. We formulate a Newton iteration for a purely implicit Finite Volume discretization of the coupled system. A series of numerical examples is presented for different types of electrolyte/electrode configurations and material parameters. The convergence of the Newton method is characterized as a function of the nonlinear material parameters as well as of the nonlinearity in the interface conditions.
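An implicit treatment of a nonlinearity can be shown in miniature: an implicit Euler update with a cubic reaction term leads to a scalar root-finding problem solved by Newton's method. This is illustrative only; the paper applies Newton to the full coupled Finite Volume system.

```python
# Miniature version of an implicit nonlinear update: implicit Euler with a
# cubic term gives the scalar problem F(c) = c - c_old + dt * c**3 = 0,
# which Newton's method solves in a handful of iterations.
# (Illustrative only; the paper treats the full coupled FV system this way.)
def newton(F, dF, c0, tol=1e-12, maxit=50):
    c = c0
    for _ in range(maxit):
        step = F(c) / dF(c)   # Newton correction
        c -= step
        if abs(step) < tol:
            return c
    raise RuntimeError("Newton iteration did not converge")

c_old, dt = 1.0, 0.5
F = lambda c: c - c_old + dt * c ** 3
dF = lambda c: 1.0 + 3.0 * dt * c ** 2   # exact Jacobian of F
c_new = newton(F, dF, c_old)             # quadratic convergence from c_old
```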

Modeling of species and charge transport in Li-Ion Batteries based on non-equilibrium thermodynamics
(2010)

In order to improve the design of Li-ion batteries, the complex interplay of various physical phenomena in the active particles of the electrodes and in the electrolyte has to be balanced. The separate transport phenomena in the electrolyte and in the active particles, as well as their coupling due to the electrochemical reactions at the interfaces between the electrode particles and the electrolyte, influence the performance and the lifetime of a battery. Any modeling of the complex phenomena during the usage of a battery therefore has to be based on sound physical and chemical principles in order to allow for reliable predictions of the response of the battery to changing load conditions. We present a modeling approach for the transport processes in the electrolyte and the electrodes based on non-equilibrium thermodynamics and transport theory. The assumption of local charge neutrality, which is known to be valid in concentrated electrolytes, is explicitly used to identify the independent thermodynamic variables and fluxes. The theory guarantees strictly positive entropy production. Differences to other theories are discussed.

This paper discusses a numerical subgrid resolution approach for solving the Stokes-Brinkman system of equations, which describes coupled flow in plain and in highly porous media. Various scientific and industrial problems are described by this system, and often the geometry and/or the permeability vary on several scales. A particular target is the process of oil filtration. In many complicated filters, the filter medium or the filter element geometry is too fine to be resolved by a feasible computational grid. The subgrid approach presented in this paper describes how these fine details are accounted for by solving auxiliary problems in appropriately chosen grid cells on a relatively coarse computational grid. This is done via a systematic and careful procedure of modifying and updating the coefficients of the Stokes-Brinkman system in the chosen cells. This numerical subgrid approach is motivated on the one hand by homogenization theory, from which we borrow the formulation of the so-called cell problem, and on the other hand by numerical upscaling approaches, such as Multiscale Finite Volume, Multiscale Finite Element, etc. Results on the algorithm's efficiency, both in terms of computational time and memory usage, are presented. Comparisons with solutions on the full fine grid (when possible) are presented in order to evaluate the accuracy. Advantages and limitations of the considered subgrid approach are discussed.

We consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary. The salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industry, and the IT sector.

We present a two-scale finite element method for solving Brinkman’s and Darcy’s equations. These systems of equations model fluid flows in highly porous and porous media, respectively. The method uses a recently proposed discontinuous Galerkin FEM for Stokes’ equations by Wang and Ye and the concept of subgrid approximation developed by Arbogast for Darcy’s equations. In order to reduce the “resonance error” and to ensure convergence to the global fine solution, the algorithm is put in the framework of alternating Schwarz iterations using subdomains around the coarse-grid boundaries. The discussed algorithms are implemented using the deal.II finite element library and are tested on a number of model problems.

This work deals with the optimal control of a free surface Stokes flow which responds to an applied outer pressure. Typical applications are fiber spinning or thin film manufacturing. We present and discuss two adjoint-based optimization approaches that differ in the treatment of the free boundary as either state or control variable. In both cases the free boundary is modeled as the graph of a function. The PDE-constrained optimization problems are numerically solved by the BFGS method, where the gradient of the reduced cost function is expressed in terms of adjoint variables. Numerical results for both strategies are finally compared with respect to accuracy and efficiency.

We present some optimality results for robust Kalman filtering. To this end, we introduce the general setup of state space models, which will not be limited to a Euclidean or time-discrete framework. We pose the problem of state reconstruction and recall the existing classical algorithms in this context. We then extend the ideal-model setup, allowing for outliers which in this context may be system-endogenous or -exogenous, inducing the somewhat conflicting goals of tracking and attenuation. In quite a general framework, we solve the corresponding minimax MSE problems for both types of outliers separately, resulting in saddle-points consisting of an optimally robust procedure and a corresponding least favorable outlier situation. Still insisting on recursivity, we obtain an operational solution, the rLS filter, and variants of it. Exactly robust-optimal filters would need knowledge of certain hard-to-compute conditional means in the ideal model; things would be much easier if these conditional means were linear. Hence, it is important to quantify the deviation of the exact conditional mean from linearity. We obtain a somewhat surprising characterization of linearity for the conditional expectation in this setting. Combining both optimal filter types (for the system-endogenous and the system-exogenous situation), we come up with a delayed hybrid filter which is able to treat both types of outliers simultaneously. Keywords: robustness, Kalman filter, innovation outlier, additive outlier
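The core of the rLS idea can be stated in one scalar formula: perform the classical Kalman correction, then bound its magnitude. A sketch with illustrative numbers; the optimal choice of the clipping bound is the subject of the paper and is not reproduced here.

```python
# Scalar sketch of the rLS idea: do the classical Kalman correction, then
# clip its magnitude at a bound b, so that a single gross outlier cannot
# drag the state estimate arbitrarily far. All numbers are illustrative;
# tuning b optimally is what the paper is about.
def rls_correction(x_pred, P_pred, y, H, R, b):
    S = H * P_pred * H + R        # innovation variance
    K = P_pred * H / S            # classical Kalman gain
    dx = K * (y - H * x_pred)     # classical correction
    if abs(dx) > b:               # Huber-type clipping of the correction
        dx = b if dx > 0 else -b
    return x_pred + dx

x_ok = rls_correction(0.0, 1.0, 0.5, 1.0, 1.0, b=1.0)     # moderate innovation
x_out = rls_correction(0.0, 1.0, 100.0, 1.0, 1.0, b=1.0)  # gross outlier
```

For moderate innovations the filter behaves exactly like the classical Kalman filter; only large corrections are attenuated.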

A number of water flow problems in porous media are modelled by Richards’ equation [1]. There exist many different applications of this model. We are concerned with the simulation of the pressing section of a paper machine. This part of the industrial process provides the dewatering of the paper layer by the use of clothings, i.e. press felts, which absorb the water during pressing [2]. A system of nips is formed in the simplest case by rolls, which increase sheet dryness by pressing against each other (see Figure 1). Many theoretical studies have been done for Richards’ equation (see [3], [4] and references therein). Most articles consider the case of x-independent coefficients. This simplifies the system considerably since, after the Kirchhoff transformation of the problem, the elliptic operator becomes linear. In our case this condition is not satisfied and we have to consider a nonlinear operator of second order. Moreover, all these articles are concerned with the nonstationary problem, while we are interested in the stationary case. Due to the complexity of the physical process, our problem has a specific feature. An additional convective term appears in our model because the porous medium moves with constant velocity through the pressing rolls. This term is zero in immobile porous media. We are not aware of papers that deal with this kind of modified steady Richards’ problem. The goal of this paper is to obtain stability results, to show the existence of a solution to the discrete problem, and to prove the convergence of the approximate solution to the weak solution of the modified steady Richards’ equation, which describes the transport processes in the pressing section. In Section 2 we present the model under consideration. In Section 3 a numerical scheme obtained by the finite volume method is given. The main part of this paper is the theoretical studies, which are given in Section 4. Section 5 presents a numerical experiment.
The conclusion of this work is given in Section 6.
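The linearizing role of the Kirchhoff transformation invoked above can be made explicit in one line (a standard computation, stated here only to show why x-dependent coefficients break it):

```latex
% Kirchhoff transformation for a conductivity K = K(u) only:
\beta(u) := \int_0^u K(s)\,\mathrm{d}s
\quad\Longrightarrow\quad
\nabla\beta(u) = K(u)\,\nabla u
\quad\Longrightarrow\quad
\nabla\cdot\bigl(K(u)\,\nabla u\bigr) = \Delta\beta(u).
```

If K also depends on x, the chain rule produces an extra term involving the spatial derivative of K, and the elliptic operator remains nonlinear, which is exactly the situation treated in this paper.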

A theory of discrete Cosserat rods is formulated in the language of discrete Lagrangian mechanics. By exploiting Kirchhoff's kinetic analogy, the potential energy density of a rod is a function on the tangent bundle of the configuration manifold and thus formally corresponds to the Lagrangian function of a dynamical system. The equilibrium equations are derived from a variational principle using a formulation that involves null-space matrices. In this formulation, no Lagrange multipliers are necessary to enforce the orthonormality of the directors. Noether's theorem relates first integrals of the equilibrium equations to Lie group actions on the configuration bundle, so-called symmetries. The symmetries relevant for rod mechanics are frame-indifference, isotropy and uniformity. We show that a completely analogous and self-contained theory of discrete rods can be formulated in which the arc-length is a discrete variable ab initio. In this formulation, the potential energy density is defined directly on pairs of points along the arc-length of the rod, in analogy to Veselov's discrete reformulation of Lagrangian mechanics. A discrete version of Noether's theorem then identifies exact first integrals of the discrete equilibrium equations. These exact conservation properties confer accuracy and robustness on the discrete solutions, as demonstrated by selected examples of application. Copyright © 2010 John Wiley & Sons, Ltd.

We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, moment-based estimators, Cramér-von-Mises minimum distance estimators, and, as a special case of quantile-based estimators, the Pickands estimator as well as variants of the latter tuned for a higher finite-sample breakdown point (FSBP) and lower variance. We further consider an estimator matching the population median and median of absolute deviations to the empirical ones (MedMad); again, in order to improve its FSBP, we propose a variant using a suitable asymmetric Mad as constituent, which may be tuned to achieve an expected FSBP of 34%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking-neighborhood setting, i.e., the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimator minimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the FSBP, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error, all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study.
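Of the estimators compared above, the Pickands estimator is the simplest to state. A sketch using one common indexing convention for the order statistics follows; conventions differ across texts, and the tuned variants from the paper are not reproduced.

```python
import math

# Pickands' estimator of the generalized Pareto shape parameter xi, written
# for an ascending sample x[0] <= ... <= x[n-1] under one common indexing
# convention (conventions differ across texts):
#   xi_hat = log((X_(n-k+1) - X_(n-2k+1)) / (X_(n-2k+1) - X_(n-4k+1))) / log 2
def pickands_xi(x, k):
    n = len(x)
    upper = x[n - k] - x[n - 2 * k]      # X_(n-k+1) - X_(n-2k+1)
    lower = x[n - 2 * k] - x[n - 4 * k]  # X_(n-2k+1) - X_(n-4k+1)
    return math.log(upper / lower) / math.log(2.0)

# deterministic sanity check: here the upper spacing is half the lower one,
# so the estimate is xi_hat = log(1/2)/log(2) = -1 (short-tailed regime)
xi = pickands_xi([0, 1, 2, 3, 4, 5, 6, 7], 2)
```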

In this paper, a multi-period supply chain network design problem is addressed. Several aspects of practical relevance are considered, such as those related to the financial decisions that must be accounted for by a company managing a supply chain. The decisions to be made comprise the location of the facilities, the flow of commodities, and the investments to make in activities alternative to those directly related to the supply chain design. Uncertainty is assumed for demand and interest rates and is described by a set of scenarios. Therefore, for the entire planning horizon, a tree of scenarios is built. A target is set for the return on investment, and the risk of falling below it is measured and accounted for. The service level is also measured and included in the objective function. The problem is formulated as a multi-stage stochastic mixed-integer linear programming problem. The goal is to maximize the total financial benefit. An alternative formulation based upon the paths in the scenario tree is also proposed. A methodology for measuring the value of the stochastic solution in this problem is discussed. Computational tests using randomly generated data are presented, showing that the stochastic approach is worth considering in this type of problem.

In this article, we summarise the rotation-free and quaternionic parametrisation of a rigid body. We derive and explain the close interrelations between both parametrisations. The internal constraints due to the redundancies in the parametrisations, which lead to DAEs, are handled with the null space technique. We treat both single rigid bodies and general multibody systems with joints, which lead to external joint constraints. Several numerical examples compare both formalisms to the index reduced versions of the corresponding standard formulations.

Classical geometrically exact Kirchhoff and Cosserat models are used to study the nonlinear deformation of rods. Extension, bending and torsion of the rod may be represented by the Kirchhoff model. The Cosserat model additionally takes into account shearing effects. Second order finite differences on a staggered grid define discrete viscoelastic versions of these classical models. Since the rotations are parametrised by unit quaternions, the space discretisation results in differential-algebraic equations that are solved numerically by standard techniques like index reduction and projection methods. Using absolute coordinates, the mass and constraint matrices are sparse and this sparsity may be exploited to speed-up time integration. Further improvements are possible in the Cosserat model, because the constraints are just the normalisation conditions for unit quaternions such that the null space of the constraint matrix can be given analytically. The results of the theoretical investigations are illustrated by numerical tests.

We present a parsimonious multi-asset Heston model. All single-asset submodels follow the well-known Heston dynamics and their parameters are typically calibrated on implied market volatilities. We focus on the calibration of the correlation structure between the single-asset marginals in the absence of sufficiently liquid cross-asset option price data. The presented model is parsimonious in the sense that only d(d-1)/2 asset-asset cross-correlations are required for a d-asset Heston model. In order to calibrate the model, we present two general setups corresponding to relevant practical situations: (1) the empirical cross-asset correlations in the risk-neutral world are given by the user and we need to calibrate the correlations between the driving Brownian motions, or (2) they have to be estimated from the historical time series. The theoretical background, including the ergodicity of the multidimensional CIR process, is also studied for the proposed estimators.

The understanding of the motion of long slender elastic fibers in turbulent flows is of great interest to research, development and production in technical textiles manufacturing. The fiber dynamics depend on the drag forces that are imposed on the fiber by the fluid. Their computation requires in principle a coupling of fiber and flow with no-slip interface conditions. However, the needed high resolution and adaptive grid refinement make the direct numerical simulation of the three-dimensional fluid-solid problem for slender fibers and turbulent flows not only extremely costly and complex, but also still impossible for practically relevant applications. Embedded in a slender-body theory, an aerodynamic force concept for a general drag model was therefore derived on the basis of a stochastic k-ω description for a turbulent flow field in [23]. The turbulence effects on the fiber dynamics were modeled by a correlated random Gaussian force and its asymptotic limit on a macroscopic fiber scale by Gaussian white noise with flow-dependent amplitude. The concept was numerically studied under the conditions of a melt-spinning process for nonwoven materials in [24], for the specific choice of a nonlinear Taylor drag model. Taylor [35] suggested the heuristic model for high Reynolds number flows, Re ∈ [20, 3·10^5], around inclined slender objects under an angle of attack α ∈ (π/36, π/2] between flow and object tangent. Since the Reynolds number is considered with respect to the relative velocity between flow and fiber, the numerical results evidently lack accuracy for small Re, which occur in cases of flexible light fibers moving occasionally with the flow velocity. In such a regime (Re ≪ 1), linear Stokes drag forces were successfully applied for the prediction of small particles immersed in turbulent flows, see e.g. [25, 26, 32, 39]; a modified Stokes force that also takes into account the particle oscillations was presented in [14]. The linear drag relation was also carried over to longer filaments by imposing free-draining assumptions [29, 8]. Apart from this, the Taylor drag suffers from its non-applicability to tangential incident flow situations (α = 0) that often occur in fiber and nonwoven production processes.

In this work we use the Parsimonious Multi–Asset Heston model recently developed in [Dimitroff et al., 2009] at Fraunhofer ITWM, Department Financial Mathematics, Kaiserslautern (Germany) and apply it to Quanto options. We give a summary of the model and its calibration scheme. A suitable transformation of the Quanto option payoff is explained and used to price Quantos within the new framework. Simulated prices are given and compared to market prices and Black–Scholes prices. We find that the new approach underprices the chosen options, but gives better results than the Black–Scholes approach, which is prevailing in the literature on Quanto options.

Home Health Care (HHC) services are becoming increasingly important in Europe’s aging societies. Elderly people have varying degrees of need for assistance and medical treatment. It is advantageous to allow them to live in their own homes as long as possible, since a long-term stay in a nursing home can be much more costly for the social insurance system than a treatment at home providing assistance to the required level. Therefore, HHC services are a cost-effective and flexible instrument in the social system. In Germany, organizations providing HHC services are generally either larger charities with countrywide operations or small private companies offering services only in a city or a rural area. While the former have a hierarchical organizational structure and a large number of employees, the latter typically only have some ten to twenty nurses under contract. The relationship to the patients (“customers”) is often long-term and can last for several years. Therefore acquiring and keeping satisfied customers is crucial for HHC service providers and intensive competition among them is observed.

The capacitated single-allocation hub location problem revisited: A note on a classical formulation
(2009)

Denote by G = (N, A) a complete graph where N is the set of nodes and A is the set of edges. Assume that a flow w_ij should be sent from each node i to each node j (i, j ∈ N). One possibility is to send these flows directly between the corresponding pairs of nodes. However, in practice this is often neither efficient nor cost-attractive, because it would imply that a link be built between each pair of nodes. An alternative is to select some nodes to become hubs and use them as consolidation and redistribution points that altogether process the flow in the network more efficiently. Accordingly, hubs are nodes in the graph that receive traffic (mail, phone calls, passengers, etc.) from different origins (nodes) and redirect this traffic directly to the destination nodes (when a link exists) or else to other hubs. The concentration of traffic in the hubs and its shipment to other hubs lead to a natural decrease in the overall cost due to economies of scale.

Radiotherapy is one of the major forms of cancer treatment. The patient is irradiated with high-energy photons or charged particles with the primary goal of delivering sufficiently high doses to the tumor tissue while simultaneously sparing the surrounding healthy tissue. The inverse search for the treatment plan giving the desired dose distribution is done by means of numerical optimization [11, Chapters 3-5]. For this purpose, the aspects of dose quality in the tissue are modeled as criterion functions, whose mathematical properties also affect the type of the corresponding optimization problem. Clinical practice makes frequent use of criteria that incorporate volumetric and spatial information about the shape of the dose distribution. The resulting optimization problems are of global type by empirical knowledge and are typically computed with generic global solver concepts, see for example [16]. The development of good global solvers for radiotherapy optimization problems is an important topic of research in this application; however, the structural properties of the underlying criterion functions are typically not taken into account in this context.

One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide for the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to assure that all clinically meaningful plans are covered and that as many as possible clinically irrelevant plans are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a very small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included in the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects.
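As an illustration of the idea (a hypothetical filtering sketch, not the paper's approximation scheme; the thresholds eps and delta are invented), a plan can be discarded when another plan is significantly better in some criterion while being at most insignificantly worse in every criterion:

```python
def clinically_dominated(p, q, eps=0.01, delta=0.10):
    """True if plan q renders p irrelevant: q is at least delta better
    in some criterion and at most eps worse in every criterion.
    Criteria are normalized; smaller values are better.
    (eps and delta are invented illustration values.)"""
    big_gain = any(p[k] - q[k] >= delta for k in range(len(p)))
    small_loss = all(q[k] - p[k] <= eps for k in range(len(p)))
    return big_gain and small_loss

def filter_plans(plans, eps=0.01, delta=0.10):
    """Keep only plans that no other plan renders clinically irrelevant."""
    return [p for p in plans
            if not any(clinically_dominated(p, q, eps, delta)
                       for q in plans if q is not p)]

plans = [(0.50, 0.30), (0.505, 0.15), (0.20, 0.60)]
kept = filter_plans(plans)
```

Here the first plan is dropped: the second is markedly better in the second criterion and only negligibly worse in the first.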

The rotational spinning of viscous jets is of interest in many industrial applications, including pellet manufacturing [4, 14, 19, 20] and drawing, tapering and spinning of glass and polymer fibers [8, 12, 13], see also [15, 21] and references therein. In [12] an asymptotic model for the dynamics of curved viscous inertial fiber jets emerging from a rotating orifice under surface tension and gravity was deduced from the three-dimensional free boundary value problem given by the incompressible Navier-Stokes equations for a Newtonian fluid. In the terminology of [1], it is a string model consisting of balance equations for mass and linear momentum. Accounting for inner viscous transport, surface tension and placing no restrictions on either the motion or the shape of the jet’s center-line, it generalizes the previously developed string models for straight [3, 5, 6] and curved center-lines [4, 13, 19]. Moreover, the numerical results investigating the effects of viscosity, surface tension, gravity and rotation on the jet behavior coincide well with the experiments of Wong et al. [20].

A general multi-period network redesign problem arising in the context of strategic supply chain planning (SCP) is studied. Several aspects of practical relevance in SCP are captured, namely multiple facility layers with different types of facilities, flows between facilities in the same layer, direct shipments to customers, and facility relocation. An efficient two-phase heuristic approach is proposed for obtaining feasible solutions to the problem, which is initially modeled as a large-scale mixed-integer linear program. In the first phase of the heuristic, a linear programming rounding strategy is applied to set initial values for the binary location variables in the model. The second phase of the heuristic uses local search to correct the initial solution when feasibility is not reached or to improve the solution when its quality does not meet given criteria. The results of an extensive computational study performed on randomly generated instances are reported.
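A minimal sketch of such a two-phase idea, under the assumption that the LP relaxation has already been solved and only the fractional location variables are given (the rounding threshold, the greedy repair rule and all data are illustrative, not the paper's algorithm):

```python
def round_locations(y_frac, capacity, demand, threshold=0.5):
    """Phase-1 sketch: round fractional LP location variables, then
    greedily repair feasibility by opening extra facilities in
    decreasing order of their fractional value."""
    opened = {i for i, y in y_frac.items() if y >= threshold}

    def total(s):
        return sum(capacity[i] for i in s)

    for i in sorted(y_frac, key=y_frac.get, reverse=True):
        if total(opened) >= demand:
            break
        opened.add(i)
    return opened

# Fractional values as they might come from an LP relaxation (made up).
y = {'A': 0.8, 'B': 0.4, 'C': 0.1, 'D': 0.55}
caps = {'A': 30, 'B': 25, 'C': 40, 'D': 20}
opened = round_locations(y, caps, demand=70)
```

In the paper's second phase, a local search would further correct or improve such a rounded solution; here the greedy repair only restores capacity feasibility.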

In this paper, an extension to the classical capacitated single-allocation hub location problem is studied in which the size of the hubs is part of the decision making process. For each potential hub a set of capacities is assumed to be available among which one can be chosen. Several formulations are proposed for the problem, which are compared in terms of the bound provided by the linear programming relaxation. Different sets of inequalities are proposed to enhance the models. Several preprocessing tests are also presented with the goal of reducing the size of the models for each particular instance. The results of the computational experiments performed using the proposed models are reported.

In the literature, there are at least two equivalent two-factor Gaussian models for the instantaneous short rate. These are the original two-factor Hull-White model (see [3]) and the G2++ one by Brigo and Mercurio (see [1]). Both models first specify a time-homogeneous two-factor short rate dynamics and then, by adding a deterministic shift function φ(·), fit exactly the initial term structure of interest rates. However, the obtained results are rather clumsy and not intuitive, which means that special care has to be taken for their correct numerical implementation.

In the ground vehicle industry it is often an important task to simulate full vehicle models based on the wheel forces and moments, which have been measured during driving over certain roads with a prototype vehicle. The models are described by a system of differential algebraic equations (DAE) or ordinary differential equations (ODE). The goal of the simulation is to derive section forces at certain components for a durability assessment. In contrast to handling simulations, which are performed including more or less complex tyre models, a driver model, and a digital road profile, the models we use here usually do not contain the tyres or a driver model. Instead, the measured wheel forces are used for excitation of the unconstrained model. This can be difficult due to noise in the input data, which leads to an undesired drift of the vehicle model in the simulation.

For the numerical simulation of a mechanical multibody system (MBS), dynamical loads are needed as input data, such as a road profile. With given input quantities, the equations of motion of the system can be integrated. Output quantities for further investigations are calculated from the integration results. In this paper, we consider the corresponding inverse problem: We assume that a dynamical system and some reference output signals are given. The general task is to derive an input signal, such that the system simulation produces the desired reference output. We present the state-of-the-art method in industrial applications, the iterative learning control method (ILC), and give an application example from the automotive industry. Then, we discuss three alternative methods based on optimal control theory for differential algebraic equations (DAEs) and give an overview of their general scheme.
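The basic ILC update corrects the input trial by trial with the scaled output error, u_{k+1} = u_k + L (y_ref - y_k). A toy sketch with a static scalar plant (a real MBS plant is dynamic; the gain and the plant here are invented for illustration only):

```python
def ilc(plant, y_ref, n_iter=50, gain=0.5):
    """Iterative learning control: repeat the trial and correct the
    input signal with the scaled output error of the previous run."""
    u = [0.0] * len(y_ref)
    for _ in range(n_iter):
        y = plant(u)
        u = [ui + gain * (r - yi) for ui, r, yi in zip(u, y_ref, y)]
    return u

# Toy static plant y = 2u (hypothetical; an MBS plant would be dynamic).
def plant(u):
    return [2.0 * x for x in u]

y_ref = [1.0, 0.5, -0.25]
u_opt = ilc(plant, y_ref)
y = plant(u_opt)
```

With plant gain 2 and learning gain 0.5 the iteration contracts the error to zero; for dynamic plants the learning operator L must be designed with more care.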

Inspired by Kirchhoff’s kinetic analogy, the special Cosserat theory of rods is formulated in the language of Lagrangian mechanics. A static rod corresponds to an abstract Lagrangian system where the energy density takes the role of the Lagrangian function. The equilibrium equations are derived from a variational principle. Noether’s theorem relates their first integrals to frame-indifference, isotropy and uniformity. These properties can be formulated in terms of Lie group symmetries. The rotational degrees of freedom, present in the geometrically exact beam theory, are represented in terms of orthonormal director triads. To reduce the number of unknowns, Lagrange multipliers associated with the orthonormality constraints are eliminated using null-space matrices. This is done both in the continuous and in the discrete setting. The discrete equilibrium equations are used to compute discrete rod configurations, where different types of boundary conditions can be handled.

In this paper, we present a viscoelastic rod model that is suitable for fast and sufficiently accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing ('stiff' dof), bending and torsion ('soft' dof). For inner dissipation, a consistent damping potential from Antman is chosen. Our discrete model is based on a finite difference discretisation on a staggered grid. The right-hand side function f and the Jacobian ∂f/∂(q, v, t) of the dynamical system q˙ = v, v˙ = f(q, v, t) – after index reduction from three to zero – are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. For the time integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computation times within milliseconds, it is suitable for interactive manipulation in 'virtual reality' applications. In contrast to common fast VR rod models, our model reflects the structural mechanics solutions sufficiently correctly, as a comparison with ABAQUS finite element results shows.

In financial mathematics stock prices are usually modelled directly as a result of supply and demand and under the assumption that dividends are paid continuously. In contrast, economic theory gives us the dividend discount model, which assumes that the stock price equals the present value of its future dividends. These two models need not contradict each other: in their paper, Korn and Rogers (2005) introduce a general dividend model in which the stock price follows a stochastic process and equals the sum of all its discounted dividends. In this paper we specify the model of Korn and Rogers in a Black-Scholes framework in order to derive a closed-form solution for the pricing of American call options under the assumption of a known next dividend followed by several stochastic dividend payments during the option's time to maturity.
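The standard building block for such closed-form results is the Black-Scholes European call price (this is the textbook formula, not the paper's American-option solution with dividends):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money call, 1 year to maturity, 5% rate, 20% volatility.
price = bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```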

In this work we establish a hierarchy of mathematical models for the numerical simulation of the production process of technical textiles. The models range from highly complex three-dimensional fluid-solid interactions to one-dimensional fiber dynamics with stochastic aerodynamic drag and further to efficiently tractable stochastic surrogate models for fiber lay-down. They are theoretically and numerically analyzed and coupled via asymptotic analysis, similarity estimates and parameter identification. The model hierarchy is applicable to a wide range of industrially relevant production processes and enables the optimization, control and design of technical textiles.

Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of interest. Regarding cost, there are production considerations such as spatial restrictions, material cost and the cost of manufacturing the filter. The second type of cost is the operating cost, namely the pressure drop. Although simulations should and will ultimately deal with all four aspects, for the moment our work is focused on cost. The PleatGeo module generates three-dimensional computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that will result for the particular design by direct numerical simulation. The evaluation of a new pleat design takes only a few hours on a standard PC, compared to the days or weeks needed to manufacture and test a new prototype of a hydraulic filter. The design parameters are the shape of the pleat, the permeabilities of one or several layers of filter media and the geometry of a supporting netting structure that is used to keep the outflow area open. Besides the underlying structure generation and CFD technology, we present some trends regarding the dependence of pressure drop on design parameters that can serve as guidelines for the design of hydraulic filters. Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.

Territory design and districting may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable according to relevant planning criteria. The availability of GIS on computers and the growing interest in Geo-Marketing lead to an increasing importance of this area. Despite the wide range of applications for territory design problems, when taking a closer look at the models proposed in the literature, a lot of similarities can be noticed. Indeed, the models are frequently very similar and can often be carried over, more or less directly, to other applications. Therefore, our aim is to provide a generic application-independent model and present efficient solution techniques. We introduce a basic model that covers aspects common to most applications. Moreover, we present a method for solving the general model which is based on ideas from the field of computational geometry. Theoretical as well as computational results underlining the efficiency of the new approach will be given. Finally, we show how to extend the model and solution algorithm to make it applicable for a broader range of applications and how to integrate the presented techniques into a GIS.

In this paper, the model of Köttgen, Barkey and Socie, which corrects the elastic stress and strain tensor histories at notches of a metallic specimen under non-proportional loading, is improved. It can be used in connection with any multiaxial σ-ε law of incremental plasticity. For the correction model, we introduce a constraint for the strain components that goes back to the work of Hoffmann and Seeger. Parameter identification for the improved model is performed by Automatic Differentiation and an established least squares algorithm. The results agree well both with transient FE computations and with notch strain measurements.

Safety and reliability requirements on the one side and short development cycles, low costs and lightweight design on the other side are two competing aspects of truck engineering. For safety critical components essentially no failures can be tolerated within the target mileage of a truck. For other components the goals are to stay below certain predefined failure rates. Reducing weight or cost of structures often also reduces strength and reliability. The requirements on the strength, however, strongly depend on the loads in actual customer usage. Without sufficient knowledge of these loads one needs large safety factors, limiting possible weight or cost reduction potentials. There are a lot of different quantities influencing the loads acting on the vehicle in actual usage. These ‘influencing quantities’ are, for example, the road quality, the driver, traffic conditions, the mission (long haulage, distribution or construction site), and the geographic region. Thus there is a need for statistical methods to model the load distribution with all its variability, which in turn can be used for the derivation of testing specifications.

We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. This problem is to arrange a set of rectangles that can be rotated by 90 degrees into a rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [8] solve the packing without orientations with a branch and bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. Our approach is compared to a mixed-integer program, and our computational results outperform it.
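For toy instance sizes, the problem statement itself can be checked by brute force: enumerate the two orientations of each rectangle and all grid positions, reject overlaps, and keep the smallest bounding area (an illustrative baseline only, far from the constraint-propagation approach of the paper):

```python
from itertools import product

def overlap(a, b):
    """Axis-aligned boxes (x1, y1, x2, y2) share interior area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def pack_min_area(rects, max_side):
    """Brute-force minimal-area packing of rectangles that may be
    rotated by 90 degrees (feasible for toy instance sizes only)."""
    best = None
    n = len(rects)
    for orient in product((0, 1), repeat=n):        # choose orientations
        dims = [(w, h) if o == 0 else (h, w)
                for (w, h), o in zip(rects, orient)]
        for pos in product(product(range(max_side), repeat=2), repeat=n):
            boxes = [(x, y, x + w, y + h)
                     for (x, y), (w, h) in zip(pos, dims)]
            if any(b[2] > max_side or b[3] > max_side for b in boxes):
                continue
            if any(overlap(boxes[i], boxes[j])
                   for i in range(n) for j in range(i + 1, n)):
                continue
            area = max(b[2] for b in boxes) * max(b[3] for b in boxes)
            if best is None or area < best:
                best = area
    return best

# Three small rectangles with total area 8; a 2 x 4 packing is optimal.
area = pack_min_area([(2, 1), (1, 2), (2, 2)], max_side=4)
```

The exponential enumeration makes the combinatorial explosion tangible and motivates branch and bound with constraint propagation for realistic instances.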

Open cell foams are a promising and versatile class of porous materials. Open metal foams serve as crash absorbers and catalysts, metal and ceramic foams are used for filtering, and open polymer foams are hidden in everyday items like mattresses or chairs. Due to their high porosity, classical 2d quantitative analysis can give only very limited information about the microstructure of open foams. On the other hand, micro computed tomography (μCT) yields high quality 3d images of open foams. Thus 3d imaging is the method of choice for open cell foams. In this report we summarise a variety of methods for the analysis of the resulting volume images of open foam structures, developed or refined and applied at the Fraunhofer ITWM over the course of nearly ten years: the model-based determination of mean characteristics like the mean cell volume or the mean strut thickness, demanding only a simple binarisation, as well as the image-analytic cell reconstruction yielding empirical distributions of cell characteristics.

The problem discussed in this paper is motivated by the new recycling directive WEEE of the EC. The core of this law is that each company which sells electrical or electronic equipment in a European country has the obligation to recollect and recycle an amount of returned items which is proportional to its market share. To assign collection stations to companies, a territory design approach is planned in Germany for one product type. However, in contrast to classical territory design, the territories should be geographically as dispersed as possible to avoid that a company, or its logistics provider responsible for the recollection, gains a monopoly in some region. First, we identify an appropriate measure for the dispersion of a territory. Afterwards, we present a first mathematical programming model for this new problem as well as a solution method based on the GRASP methodology. Extensive computational results illustrate the suitability of the model and assess the effectiveness of the heuristic.

We develop a framework for analyzing an executive’s own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company whose value he can directly influence with work effort. The executive’s utility-maximizing personal investment and work effort strategy is derived in closed-form, and an indifference utility rationale is demonstrated to determine his required compensation. Our results have implications for the practical and theoretical assessment of executive quality and the benefits of performance contracting. Assuming knowledge of the company’s non-systematic risk, our executive’s unconstrained own-company investment identifies his work effectiveness (i.e. quality), and also reflects work effort that establishes a base-level that performance contracting should seek to exceed.

An easy numerical handling of time-dependent problems with complicated geometries, free moving boundaries and interfaces, or oscillating solutions is of great importance for many applications, e.g., in fluid dynamics (free surface and multiphase flows, fluid-structure interactions [22, 18, 24]), failure mechanics (crack growth and propagation [4]), magnetohydrodynamics (accretion disks, jets and cloud simulation [6]), biophysics and biochemistry. Appropriate discretizations, so-called mesh-less methods, have been developed during the last decades to meet these challenging demands and to relieve the burden of remeshing and successive mesh generation faced by the conventional mesh-based methods [16, 10, 3]. The prearranged mesh is an artificial constraint to ensure compatibility of the mesh-based interpolation schemes, which often conflicts with the real physical conditions of the continuum model. Then, remeshing becomes inevitable, which is not only extremely time- and storage-consuming but also a source of numerical errors and hence of the gradual loss of computational accuracy. Besides avoiding remeshing, mesh-less methods also lead to fundamentally better approximations regarding aspects such as smoothness, nonlocal interpolation character, flexible connectivity, and refinement and enrichment procedures [16]. The common idea of mesh-less methods is the discretization of the domain of interest by a finite set of independent, randomly distributed particles moving with a characteristic velocity of the problem. Location and distribution of the particles then account for the time-dependent description of the geometry, data and solution. Thereby, the global solution is linearly superposed from the local information carried by the particles. In classical particle methods [20, 21], the respective weight functions are Dirac distributions which yield solutions in a distributional sense.
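The superposition idea can be sketched with a Shepard-type interpolation, where smooth weight functions take the place of the Dirac distributions of classical particle methods (a one-dimensional toy example; the Gaussian kernel and bandwidth are arbitrary illustrative choices):

```python
import math

def shepard_interpolate(x, particles, values, h=0.5):
    """Global field at x as a normalized superposition of smooth
    Gaussian weights carried by the particles."""
    w = [math.exp(-((x - p) / h) ** 2) for p in particles]
    return sum(wi * vi for wi, vi in zip(w, values)) / sum(w)

particles = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [p ** 2 for p in particles]     # sample the field f(x) = x^2
approx = shepard_interpolate(0.5, particles, values)
```

The zeroth-order Shepard weights reproduce only constants exactly; higher-order consistency (as in moving least squares) requires correcting the weights against polynomial basis functions.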

Recently we developed a discrete model of elastic rods with symmetric cross section suitable for a fast simulation of quasistatic deformations [33]. The model is based on Kirchhoff’s geometrically exact theory of rods. Unlike simple models of “mass & spring” type typically used in VR applications, our model provides a proper coupling of bending and torsion. The computational approach comprises a variational formulation combined with a finite difference discretization of the continuum model. Approximate solutions of the equilibrium equations for sequentially varying boundary conditions are obtained by means of energy minimization using a nonlinear CG method. As the computational performance of our model yields solution times within the range of milliseconds, our approach proves to be sufficient to simulate an interactive manipulation of such flexible rods in virtual reality applications in real time.

We present a model of flexible rods, based on Kirchhoff's geometrically exact theory, which is suitable for the fast simulation of quasistatic deformations within VR or functional DMU applications. Unlike simple models of "mass & spring" type typically used in VR applications, our model provides a proper coupling of bending and torsion. The computational approach comprises a variational formulation combined with a finite difference discretization of the continuum model. Approximate solutions of the equilibrium equations for sequentially varying boundary conditions are obtained by means of energy minimization using a nonlinear CG method. The computational performance of our model proves to be sufficient for the interactive manipulation of flexible cables in assembly simulation.

Abstract. An efficient approach to the numerical upscaling of thermal conductivities of fibrous media, e.g. insulation materials, is considered. First, standard cell problems for a second order elliptic equation are formulated for a proper piece of random fibrous structure, following homogenization theory. Next, a graph formed by the fibers is considered, and a second order elliptic equation with suitable boundary conditions is solved on this graph only. Replacing the boundary value problem for the full cell with an auxiliary problem with special boundary conditions on a connected subdomain of highly conductive material is justified in a previous work of the authors. A discretization on the graph is presented here, and error estimates are provided. The efficient implementation of the algorithm is discussed. A number of numerical experiments is presented in order to illustrate the performance of the proposed method.

This paper introduces methods for the detection of anisotropies which are caused by compression of regular three-dimensional point patterns. Isotropy tests based on directional summary statistics and estimators for the compression factor are developed. These allow not only for the detection of anisotropies but also for the estimation of their strength. Using simulated data the power of the methods and the dependence of the power on the intensity, the degree of regularity, and the compression strength are studied. The motivation of this paper is the investigation of anisotropies in the structure of polar ice. Therefore, our methods are applied to the point patterns of centres of air pores extracted from tomographic images of ice cores. This way the presence of anisotropies in the ice caused by the compression of the ice sheet as well as an increase of their strength with increasing depth are shown.

In this work, we analyze two important and simple models of short rates, namely the Vasicek and CIR models. The models are described, and the sensitivity of the models with respect to changes in the parameters is studied. Finally, we give the results for the estimation of the model parameters using two different approaches.
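Both models share the mean-reverting drift κ(θ − r) and differ only in the diffusion term (σ for Vasicek, σ√r for CIR). A simple Euler-Maruyama path simulation, with made-up parameter values, illustrates this (not the estimation procedures of the work itself):

```python
import math
import random

def simulate_short_rate(model, r0, kappa, theta, sigma, T=1.0, n=250, seed=0):
    """Euler-Maruyama path of the Vasicek or CIR short-rate model.

    Vasicek: dr = kappa*(theta - r) dt + sigma dW
    CIR:     dr = kappa*(theta - r) dt + sigma*sqrt(r) dW
    """
    rng = random.Random(seed)
    dt = T / n
    r, path = r0, [r0]
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        # CIR diffusion vanishes as r -> 0; clip to keep sqrt defined.
        vol = sigma * math.sqrt(max(r, 0.0)) if model == "CIR" else sigma
        r += kappa * (theta - r) * dt + vol * dW
        path.append(r)
    return path

vasicek = simulate_short_rate("Vasicek", r0=0.02, kappa=0.8, theta=0.05, sigma=0.01)
cir = simulate_short_rate("CIR", r0=0.02, kappa=0.8, theta=0.05, sigma=0.10)
```

The clipping in the CIR step is a crude device of the Euler scheme; exact simulation of CIR uses its known non-central chi-squared transition density.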

With the ever-increasing significance of software in our everyday lives, it is vital to afford reliable software quality estimates. Typically, quantitative software quality analyses rely on either statistical fault prediction methods (FPMs) or stochastic software reliability growth models (SRGMs). Adopting solely FPMs or SRGMs, though, may result in biased predictions that do not account for uncertainty in the distinct prediction methods, thus rendering the prediction less reliable. This paper identifies flaws of the individual prediction methods and suggests a hybrid prediction approach that combines FPMs and SRGMs. We adopt FPMs for initially estimating the expected number of failures for finite failure SRGMs. Initial parameter estimates yield more accurate reliability predictions until sufficient failures are observed that enable stable parameter estimates in SRGMs. At the equilibrium level of FPM and SRGM predictions, we suggest combining the competing prediction methods with respect to the principle of heterogeneous redundancy. That is, we propose using the individual methods separately and combining their predictions. In this paper we suggest Bayesian model averaging (BMA) for combining the different methods. The hybrid approach allows early reliability estimates and encourages higher confidence in software quality predictions.
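Bayesian model averaging weighs each model's prediction by its posterior model probability. A minimal sketch with two hypothetical models (the likelihood values, priors and predicted failure counts are invented, not taken from the paper):

```python
import math

def bma_weights(log_likelihoods, priors):
    """Posterior model probabilities, computed stably via log-sum-exp."""
    logs = [ll + math.log(p) for ll, p in zip(log_likelihoods, priors)]
    m = max(logs)
    w = [math.exp(v - m) for v in logs]
    s = sum(w)
    return [x / s for x in w]

def bma_predict(predictions, weights):
    """Combined prediction as the posterior-weighted model average."""
    return sum(p * wt for p, wt in zip(predictions, weights))

# Hypothetical predicted failure counts from an FPM and an SRGM,
# the SRGM fitting the observed failure data slightly better.
w = bma_weights(log_likelihoods=[-10.0, -9.0], priors=[0.5, 0.5])
combined = bma_predict([40.0, 30.0], w)
```

The combined estimate leans toward the better-fitting model without discarding the other, which is exactly the heterogeneous-redundancy rationale.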

This paper discusses the minimal-area rectangular packing problem: how to pack a set of specified, non-overlapping rectangles into a rectangular container of minimal area. We investigate different mathematical programming approaches to this problem and introduce a novel approach based on non-linear optimization and the "tunneling effect" achieved by a relaxation of the non-overlapping constraints.

Background and purpose Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Material and methods Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms “selection” and “restriction”. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle – a set constructed from precomputed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions.

Modeling and formulation of optimization problems in IMRT planning comprises the choice of various values such as function-specific parameters or constraint bounds. These values also affect the characteristics of the optimization problem and thus the form of the resulting optimal plans. This publication utilizes concepts of sensitivity analysis and elasticity in convex optimization to analyze the dependence of optimal plans on the modeling parameters. It also derives general rules of thumb on how to choose and modify the parameters in order to obtain the desired IMRT plan. These rules are numerically validated for an exemplary IMRT planning problem.

A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin’s Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run-times are achieved by eliminating the pressure and velocity variables using a fast inversion approach, based on the Fast Fourier Transform and the solution of four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. In reference to its being a fast method using fictitious or artificial forces, the implementation is called FFF-Stokes. Large-scale computations on 3d images are quickly and automatically performed to estimate the permeability of some sample materials. A MATLAB implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
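The core of such FFT-based elimination is that constant-coefficient operators with periodic boundary conditions diagonalize in Fourier space. A sketch for the simpler periodic Poisson problem (in Python/NumPy rather than MATLAB, and not the FFF-Stokes code itself):

```python
import numpy as np

def solve_poisson_periodic(f, h=1.0):
    """Spectral solve of -laplace(u) = f with periodic BCs on a uniform
    3d grid; the mean of u is fixed to zero (f must have zero mean)."""
    k = [2 * np.pi * np.fft.fftfreq(m, d=h) for m in f.shape]
    KX, KY, KZ = np.meshgrid(*k, indexing="ij")
    lap = KX ** 2 + KY ** 2 + KZ ** 2
    lap[0, 0, 0] = 1.0                  # avoid division by zero
    u_hat = np.fft.fftn(f) / lap
    u_hat[0, 0, 0] = 0.0                # fix the free constant (mean)
    return np.real(np.fft.ifftn(u_hat))

# Verify on a single Fourier mode, for which the solve is exact.
n = 16
x = np.arange(n)
u_exact = np.sin(2 * np.pi * x / n)[:, None, None] * np.ones((n, n, n))
f = (2 * np.pi / n) ** 2 * u_exact
u = solve_poisson_periodic(f)
err = np.max(np.abs(u - u_exact))
```

Each FFT-based inversion costs O(N log N) in the number of grid points, which is what makes the repeated Poisson solves inside the Stokes iteration affordable on large 3d images.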

Facility location decisions play a critical role in the strategic design of supply chain networks. In this paper, an extensive literature review of facility location models in the context of supply chain management is given. Following a brief review of core models in facility location, we identify basic features that such models must capture to support decision-making involved in strategic supply chain planning. In particular, the integration of location decisions with other decisions relevant to the design of a supply chain network is discussed. Furthermore, aspects related to the structure of the supply chain network, including those specific to reverse logistics, are also addressed. Significant contributions to the current state-of-the-art are surveyed taking into account numerous factors. Supply chain performance measures and optimization techniques are also reviewed. Applications of facility location models to supply chain network design ranging across various industries are discussed. Finally, a list of issues requiring further research is highlighted.

Bringing robustness to patient flow management through optimized patient transports in hospitals
(2007)

Intra-hospital transports are often required for diagnostic or therapeutic reasons. Depending on the hospital layout, transportation between nursing wards and service units is either provided by ambulances or by trained personnel who accompany patients on foot. In many large German hospitals, the patient transport service is poorly managed and lacks workflow coordination. This contributes to higher hospital costs (e.g. when a patient is not delivered to the operating room on time) and to patient inconvenience due to longer waiting times. We have designed a computer-based planning system - Opti-TRANS© - that supports all phases of the transportation flow, ranging from travel booking and the dispatching of transport requests to the monitoring and reporting of trips in real time. The methodology developed to solve the underlying optimization problem - a dynamic dial-a-ride problem with hospital-specific constraints - draws on fast heuristic methods to ensure the efficient and timely provision of transports. We illustrate the strong impact of Opti-TRANS© on the daily performance of the patient transportation service of a large German hospital. The major benefits obtained with the new tool include streamlined transportation processes and workflow, significant savings and improved patient satisfaction. Moreover, the new planning system has contributed to increased awareness among hospital staff about the importance of implementing efficient logistics practices.

An efficient approach for calculating the effective heat conductivity for a class of industrial composite materials, such as metal foams, fibrous glass materials, and the like, is discussed. These materials, used in insulation or in advanced heat exchangers, are characterized by a low volume fraction of the highly conductive material (glass or metal) having a complex, network-like structure and by a large volume fraction of the insulator (air). We assume that the composite materials have constant macroscopic thermal conductivity tensors, which in principle can be obtained by standard up-scaling techniques that use the concept of representative elementary volumes (REV), i.e. the effective heat conductivities of composite media can be computed by post-processing the solutions of some special cell problems for REVs. We propose, theoretically justify, and numerically study an efficient approach for calculating the effective conductivity for media for which the ratio ε of the low and high conductivities satisfies ε ≪ 1. In this case one essentially only needs to solve the heat equation in the region occupied by the highly conductive media. For a class of problems we show that, under certain conditions on the microscale geometry, the proposed approach produces an upscaled conductivity that is O(ε) close to the exact upscaled conductivity. A number of numerical experiments are presented in order to illustrate the accuracy and the limitations of the proposed method. Applicability of the presented approach to upscaling other similar problems, e.g. flow in fractured porous media, is also discussed.
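The O(ε) effect can be seen already in the simplest textbook case, a laminate aligned with the flux, whose effective conductivity is the arithmetic mean: dropping the insulating phase entirely perturbs the result only at relative order ε (this toy case is an illustration, not the paper's REV computation):

```python
def laminate_parallel(k_high, k_low, phi_high):
    """Arithmetic-mean effective conductivity of a two-phase laminate
    aligned with the flux direction (textbook special case)."""
    return phi_high * k_high + (1 - phi_high) * k_low

eps = 1e-3                                   # conductivity ratio k_low/k_high
exact = laminate_parallel(1.0, eps, 0.1)     # both phases retained
approx = laminate_parallel(1.0, 0.0, 0.1)    # insulating phase dropped
rel_err = (exact - approx) / exact
```

For general microstructures the same ε-order error bound requires the geometric conditions stated in the paper, but the mechanism is the same: the insulator contributes to the effective tensor only at order ε.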

In this paper, a new mixed-integer mathematical programme is proposed for the application of Hub Location Problems (HLP) to public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem on which existing standard solvers fail, two approaches are proposed. The first is an exact accelerated Benders decomposition algorithm, and the second a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches over existing standard MIP solvers like CPLEX, both in terms of computational time and in the size of problem instances that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.
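
A greedy neighborhood search of this kind can be sketched on a deliberately reduced cost model. The code below is not the HLP model of the paper: it scores a hub set by assigning every node to its nearest hub (a p-median-style stand-in) and repeatedly applies the first improving single-hub swap.

```python
import itertools

# Simplified sketch of a greedy neighborhood (swap) search for a hub
# selection problem. The cost model is deliberately reduced; the real
# HLP objective also prices inter-hub transfers.

def cost(hubs, dist):
    return sum(min(dist[i][h] for h in hubs) for i in range(len(dist)))

def greedy_swap_search(dist, p):
    n = len(dist)
    hubs = set(range(p))                  # arbitrary initial hub set
    improved = True
    while improved:
        improved = False
        for h, v in itertools.product(list(hubs), range(n)):
            if v in hubs:
                continue
            cand = (hubs - {h}) | {v}     # swap hub h for non-hub v
            if cost(cand, dist) < cost(hubs, dist):
                hubs, improved = cand, True
                break                      # restart scan from the new set
    return hubs, cost(hubs, dist)
```

Taking the first improving swap (rather than the best one) keeps each iteration cheap, which is typically why such heuristics scale to instances on which exact solvers fail.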

A Lattice Boltzmann Method for immiscible multiphase flow simulations using the Level Set Method
(2008)

We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.

The theory of two-scale convergence was applied to the homogenization of elasto-plastic composites with a periodic structure and an exponential hardening law. The theory rests on the fact that both the elastic and the plastic part of the stress field two-scale converge to a limit that factorizes into a part depending only on macroscopic characteristics, expressed through the corresponding part of the homogenised stress tensor, and a stress concentration tensor related to the micro-geometry and the elastic or plastic micro-properties of the composite components. The theory was applied in two numerical examples to a metallic matrix material with Ludwik and Hockett-Sherby hardening laws and purely elastic inclusions. The results were compared with those of mechanical averaging based on self-consistent methods.
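
The two hardening laws named above have standard closed forms; the sketch below evaluates them in one common parametrization (the parameter values are illustrative, not taken from the report).

```python
import math

# Standard forms of the two hardening laws named in the abstract.
# sigma0: initial yield stress; the remaining parameters are material
# constants (values used in the test are illustrative only).

def ludwik(eps, sigma0, K, n):
    """Ludwik law: flow stress grows as a power of plastic strain."""
    return sigma0 + K * eps ** n

def hockett_sherby(eps, sigma0, sigma_inf, m, p):
    """Hockett-Sherby law: flow stress saturates at sigma_inf."""
    return sigma_inf - (sigma_inf - sigma0) * math.exp(-m * eps ** p)
```

The qualitative difference matters for homogenization: the Ludwik stress grows without bound in the plastic strain, while the Hockett-Sherby stress saturates, which changes the effective response of the composite at large strains.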

Determination of interaction between MCT1 and CAII via a mathematical and physiological approach
(2008)

The enzyme carbonic anhydrase isoform II (CAII), catalysing the hydration and dehydration of CO2, enhances the transport activity of the monocarboxylate transporter isoform I (MCT1, SLC16A1) expressed in Xenopus oocytes by a mechanism that does not require CAII catalytic activity (Becker et al. (2005) J. Biol. Chem., 280). In the present study, we have investigated the mechanism of the CAII-induced increase in transport activity by using electrophysiological techniques and a mathematical model of the MCT1 transport cycle. The model consists of six states arranged in cyclic fashion and features an ordered, mirror-symmetric binding mechanism, where the binding and unbinding of the proton to the transport protein is considered to be the rate-limiting step under physiological conditions. An explicit rate expression for the substrate flux is derived using model reduction techniques. By treating the pools of intra- and extracellular MCT1 substrates as dynamic states, the time-dependent kinetics are obtained by integration using the derived expression for the substrate flux. The simulations were compared with experimental data obtained from MCT1-expressing oocytes injected with different amounts of CAII. The model suggests that CAII increases the effective rate constants of the proton reactions, possibly by working as a proton antenna.

In this paper, the analysis of one approach to the regularization of pure Neumann problems for second-order elliptic equations, e.g., Poisson’s equation and the linear elasticity equations, is presented. The main topic under consideration is the behavior of the condition number of the regularized problem. A general framework for the analysis is presented. It allows us to determine a form of the regularization term which leads to the “natural” asymptotics of the condition number of the regularized problem with respect to the mesh parameter. Some numerical results supporting the theoretical analysis are presented as well. The main motivation for the presented research is to develop the theoretical background for an efficient and robust implementation of a solver for pure Neumann problems for the linear elasticity equations. Such solvers are usually needed in a number of domain decomposition methods, e.g. FETI. The developed approaches are planned to be used in software developed at ITWM, e.g. the KneeMech simulation software.
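
Why pure Neumann problems need regularization at all can be seen on a tiny finite-difference example: the 1D Neumann Laplacian is singular, since constants lie in its nullspace, and adding a rank-one term α·11ᵀ restores invertibility. The sketch below (with an illustrative α; the paper's topic is precisely how such a choice affects the condition number) checks this via determinants.

```python
# The 1D Neumann finite-difference Laplacian K is singular (every row
# sums to zero, so K applied to a constant vector vanishes). Adding the
# rank-one regularization alpha * 1 1^T makes the matrix invertible.

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[piv][c]) < 1e-14:
            return 0.0                     # numerically singular
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return d

n = 4
K = [[(2 if i == j else 0) - (1 if abs(i - j) == 1 else 0) for j in range(n)]
     for i in range(n)]
K[0][0] = K[n - 1][n - 1] = 1              # Neumann ends: zero row sums
alpha = 1.0                                # illustrative regularization weight
K_reg = [[K[i][j] + alpha for j in range(n)] for i in range(n)]
```

How large this determinant (and, more to the point, the condition number) is as the mesh is refined depends on how α is scaled, which is exactly the question the paper analyzes.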

In this paper we develop a network location model that combines the characteristics of ordered median and gradual cover models, resulting in the Ordered Gradual Covering Location Problem (OGCLP). The Gradual Cover Location Problem (GCLP) was specifically designed to extend the basic cover objective to capture sensitivity with respect to absolute travel distance. Ordered median location problems are a generalization of most of the classical location problems, such as the p-median or p-center problem. They can be modeled using so-called ordered median functions. These functions weight the cost of fulfilling the demand of a customer by a factor that depends on the position of that cost relative to the costs of fulfilling the demand of the other customers. We derive Finite Dominating Sets (FDS) for the one-facility case of the OGCLP. Moreover, we present efficient algorithms for determining the FDS and also discuss the conditional case, where a certain number of facilities are already assumed to exist and one new facility is to be added. For the multi-facility case we are able to identify a finite set of potential facility locations a priori, which essentially converts the network location model into its discrete counterpart. For the multi-facility discrete OGCLP we discuss several integer programming formulations and give computational results.
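
The ordered median function itself is simple to state: sort the customers' service costs in non-decreasing order and weight each cost by its rank. A short sketch, showing how special weight vectors recover the classical objectives mentioned above:

```python
# The ordered median function: sort service costs in non-decreasing
# order and weight each cost by its rank. Classical objectives are
# recovered by special weight vectors.

def ordered_median(costs, weights):
    return sum(w * c for w, c in zip(weights, sorted(costs)))

costs = [4.0, 1.0, 3.0, 2.0]
median_obj = ordered_median(costs, [1, 1, 1, 1])   # median: plain sum
center_obj = ordered_median(costs, [0, 0, 0, 1])   # center: max cost
centrum_obj = ordered_median(costs, [0, 0, 1, 1])  # 2-centrum: two worst
```

The rank dependence is what makes these objectives non-separable across customers, and it is the source of the combinatorial structure exploited by the finite dominating sets derived in the paper.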

In this paper, we propose the first mathematical model for Multi-Period Hub Location Problems (MPHLP). We apply this mixed integer programming model to public transport planning and call it the Multi-Period Hub Location Problem for Public Transport (MPHLPPT). In fact, the HLPPT model proposed earlier by the authors is extended to include further facts and features of the real-life application. In order to solve instances of this problem on which existing standard solvers fail, a solution approach based on a greedy neighborhood search is developed. The computational results substantiate the efficiency of our solution approach for solving instances of MPHLPPT.

Structuring global supply chain networks is a complex decision-making process. The typical inputs to such a process consist of a set of customer zones to serve, a set of products to be manufactured and distributed, demand projections for the different customer zones, and information about future conditions, costs (e.g. for production and transportation) and resources (e.g. capacities, available raw materials). Given the above inputs, companies have to decide where to locate new service facilities (e.g. plants, warehouses), how to allocate procurement and production activities to the various manufacturing facilities, and how to manage the transportation of products through the supply chain network in order to satisfy customer demands. We propose a mathematical modelling framework capturing many practical aspects of network design problems simultaneously. For problems of reasonable size we report on computational experience with standard mathematical programming software. The discussion is extended with other decisions required by many real-life applications in strategic supply chain planning. In particular, the multi-period nature of some decisions is addressed by a more comprehensive model, which is solved by a specially tailored heuristic approach. The numerical results suggest that the solution procedure can identify high quality solutions within reasonable computational time.
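
The core location/allocation trade-off can be illustrated on a toy uncapacitated facility location instance. The sketch below solves it by enumerating facility subsets, a stand-in for the MIP approach of the paper that is feasible only at this miniature size; all data are illustrative.

```python
import itertools

# Toy uncapacitated facility location: pay a fixed cost per opened
# facility plus, for each customer, the cheapest service cost from an
# opened facility. Solved by brute-force subset enumeration.

def total_cost(open_facilities, fixed_cost, serve_cost):
    fixed = sum(fixed_cost[f] for f in open_facilities)
    serve = sum(min(serve_cost[f][c] for f in open_facilities)
                for c in range(len(serve_cost[0])))
    return fixed + serve

def solve_by_enumeration(fixed_cost, serve_cost):
    facilities = range(len(fixed_cost))
    best = None
    for r in range(1, len(fixed_cost) + 1):
        for subset in itertools.combinations(facilities, r):
            c = total_cost(subset, fixed_cost, serve_cost)
            if best is None or c < best[0]:
                best = (c, subset)
    return best
```

Enumeration grows as 2^n in the number of candidate facilities, which is why realistic network design instances require MIP formulations or tailored heuristics like those discussed above.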

This report reviews selected image binarization and segmentation methods that have been proposed in the literature and are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
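
A standard representative of the thresholding family is Otsu's method: choose the gray value that maximizes the between-class variance of the resulting foreground/background split. A sketch working directly on a gray-value histogram:

```python
# Sketch of Otsu's thresholding, a classic histogram-based binarization
# method: pick the threshold maximizing the between-class variance.

def otsu_threshold(histogram):
    total = sum(histogram)
    weighted_sum = sum(g * h for g, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(histogram) - 1):
        w0 += histogram[t]                # pixels in the lower class
        cum += t * histogram[t]           # their weighted gray-value sum
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (weighted_sum - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t   # gray values <= best_t form the lower class
```

For a clearly bimodal histogram the method lands between the two modes; it degrades when the classes have very unequal sizes or strongly overlapping gray-value distributions, which is one reason the report also covers region growing and shape-based alternatives.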

This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both the coefficients and the degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x² = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers – besides from the complexity of the problems – from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are better suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
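
The arithmetic of this quotient ring is easy to model naively: a Boolean polynomial is a set of monomials, each monomial a set of variables, with addition as symmetric difference (coefficients in GF(2)) and multiplication made idempotent by the field equations x² = x. The sketch below implements this naive set-based version; ZDDs, as in the report, store the same objects far more compactly.

```python
# Naive set-based model of Boolean polynomials: a polynomial is a set
# of monomials, each monomial a frozenset of variables. Addition is
# symmetric difference (GF(2) coefficients); by the field equations
# x^2 = x, the product of monomials is the union of their variable sets.

def add(p, q):
    return p ^ q                      # equal monomials cancel over GF(2)

def mul(p, q):
    result = set()
    for m in p:
        for n in q:
            result ^= {m | n}         # x * x = x: union, not degree sum
    return result

ONE = {frozenset()}                   # the constant polynomial 1
x = {frozenset({'x'})}
y = {frozenset({'y'})}
```

For instance, (x + 1)² reduces to x + 1 in this ring, since x² = x and 2x = 0; the identity falls out of the set operations automatically.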

A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on the multipoint flux approximation (MPFA) for the 3D pressure equation is employed to account for the jump discontinuities of the full permeability tensors. A DD/MG-type preconditioner for the above-mentioned problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap, as well as the influence of the smoother and of the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.

The calculation of the effective heat conductivity for a class of industrial problems is discussed. The considered composite materials are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, a low volume fraction of the higher conductive material (glass or metal), and a large volume fraction of air. Homogenization theory (when applicable) allows one to calculate the effective heat conductivity of composite media by post-processing the solutions of special cell problems for representative elementary volumes (REV). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.

The approximation property of the multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full-tensor coefficients is discussed here. A finite volume discretization of the above problem is presented for the case of jump discontinuities of the permeability tensor. First-order approximation of the fluxes is proved. Results from numerical experiments are presented and discussed.

A numerical upscaling approach, NU, for solving multiscale elliptic problems is discussed. The main components of this NU approach are: i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse-scale equation; ii) global solution of the upscaled coarse-scale equation; and iii) reconstruction of a fine-scale solution by solving local block problems on a dual coarse grid. In its structure, NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method, the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference from those methods lies in the way the coarse-scale equation is built and solved, and in the way the fine-scale solution is reconstructed. Essential components of the NU approach presented here are the formal homogenization in the coarse blocks and the usage of the so-called multipoint flux approximation method, MPFA. Unlike the usual usage of MPFA as a discretization method for single-scale elliptic problems with discontinuous tensor coefficients, we consider its usage as part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the numerical upscaling approach presented here is applied.
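
Step i), building a coarse-scale coefficient from local fine-scale information, has a classical 1D analogue: inside each coarse block the effective coefficient of a layered fine-scale coefficient is its harmonic average (flux continuity across the layers). The sketch below illustrates only this 1D special case; the block cell problems of the report generalize it to full tensors in 3D.

```python
# 1D analogue of the "local solve + formal upscaling" step: for a
# layered coefficient, the exact effective value per coarse block is
# the harmonic average of the fine-scale values inside the block.

def harmonic_block_average(fine_coeff, block_size):
    """Upscale a fine-scale coefficient array to one value per coarse block."""
    assert len(fine_coeff) % block_size == 0
    coarse = []
    for b in range(0, len(fine_coeff), block_size):
        block = fine_coeff[b:b + block_size]
        coarse.append(len(block) / sum(1.0 / k for k in block))
    return coarse
```

For a two-layer block [1, 3], the harmonic mean 2 / (1 + 1/3) = 1.5 is the exact effective coefficient of the series arrangement; note that it is smaller than the arithmetic mean 2, which would overestimate the conductance.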

In this article, the application of kernel functions – the so-called »kernel trick« – in the context of Fisher’s approach to linear discriminant analysis is described for data sets subdivided into two groups and having real-valued attributes. The relevant facts about functional Hilbert spaces and kernel functions, including their proofs, are presented. The approximative algorithm published in [Mik3] for computing a discriminant function given the data and a kernel function is briefly reviewed. As an illustration of the technique, an artificial data set is analysed using the algorithm just mentioned.
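
The essence of the kernel trick is that similarities in a high-dimensional feature space can be computed from kernel evaluations alone, without ever forming the feature map. The sketch below uses a Gaussian (RBF) kernel and a nearest-class-mean rule in feature space; this is a much-simplified stand-in for the kernel Fisher discriminant of the article, not the algorithm of [Mik3].

```python
import math

# Minimal illustration of the kernel trick: the mean similarity of a
# point to each class's training examples is computed purely through
# kernel evaluations. A simplified stand-in for kernel discriminant
# analysis; gamma is an illustrative bandwidth parameter.

def rbf(u, v, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def class_similarity(x, examples, gamma=1.0):
    """Mean kernel value between x and one class's training examples."""
    return sum(rbf(x, e, gamma) for e in examples) / len(examples)

def classify(x, class_a, class_b, gamma=1.0):
    sim_a = class_similarity(x, class_a, gamma)
    sim_b = class_similarity(x, class_b, gamma)
    return 'A' if sim_a >= sim_b else 'B'
```

The full kernel Fisher discriminant additionally accounts for the within-class scatter when choosing the projection direction, which this mean-similarity rule ignores.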