### Refine

#### Document Type

- Report (399)

#### Language

- English (399)

#### Keywords

- numerical upscaling (7)
- hub location (5)
- Elastoplastizität (4)
- Integer programming (4)
- modelling (4)
- poroelasticity (4)
- Darcy’s law (3)
- Dienstgüte (3)
- Elastic BVP (3)
- Elastoplasticity (3)

In the literature, there are at least two equivalent two-factor Gaussian models for the instantaneous short rate. These are the original two-factor Hull-White model (see [3]) and the G2++ model by Brigo and Mercurio (see [1]). Both models first specify a time-homogeneous two-factor short rate dynamics and then, by adding a deterministic shift function \(\varphi(\cdot)\), fit exactly the initial term structure of interest rates. However, the obtained results are rather clumsy and unintuitive, which means that special care has to be taken for their correct numerical implementation.
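
For orientation, the G2++ dynamics referred to above take the following standard form (notation as in Brigo and Mercurio; stated here from the literature, not reproduced from this report):

\[
r(t) = x(t) + y(t) + \varphi(t),
\]
\[
dx(t) = -a\,x(t)\,dt + \sigma\,dW_1(t), \quad x(0) = 0,
\]
\[
dy(t) = -b\,y(t)\,dt + \eta\,dW_2(t), \quad y(0) = 0,
\]

with \(dW_1\,dW_2 = \rho\,dt\), and \(\varphi(\cdot)\) chosen so that the model reproduces the initial discount curve exactly.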

In this paper we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP by using a Set Covering formulation, because profits are potentially higher than with the usual Set Partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process which is known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: Budget balance, meaning that no more money is distributed than profit was generated, individual rationality, which guarantees to each player that participation does not lead to a loss, and the core property, which provides every subcoalition with enough money to keep them from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19], and the Shapley Value [17].

Determination of interaction between MCT1 and CAII via a mathematical and physiological approach
(2008)

The enzyme carbonic anhydrase isoform II (CAII), catalysing the hydration and dehydration of CO2, enhances the transport activity of the monocarboxylate transporter isoform I (MCT1, SLC16A1) expressed in Xenopus oocytes by a mechanism that does not require CAII catalytic activity (Becker et al. (2005) J. Biol. Chem., 280). In the present study, we have investigated the mechanism of the CAII-induced increase in transport activity by using electrophysiological techniques and a mathematical model of the MCT1 transport cycle. The model consists of six states arranged in cyclic fashion and features an ordered, mirror-symmetric binding mechanism where binding and unbinding of the proton to the transport protein is considered to be the rate-limiting step under physiological conditions. An explicit rate expression for the substrate flux is derived using model reduction techniques. By treating the pools of intra- and extracellular MCT1 substrates as dynamic states, the time-dependent kinetics are obtained by integration using the derived expression for the substrate flux. The simulations were compared with experimental data obtained from MCT1-expressing oocytes injected with different amounts of CAII. The model suggests that CAII increases the effective rate constants of the proton reactions, possibly by working as a proton antenna.

The level-set method has been recently introduced in the field of shape optimization, enabling a smooth representation of the boundaries on a fixed mesh and therefore leading to fast numerical algorithms. However, most of these algorithms use a Hamilton-Jacobi equation to connect the evolution of the level-set function with the deformation of the contours, and consequently they cannot create any new holes in the domain (at least in 2D). In this work, we propose an evolution equation for the level-set function based on a generalization of the concept of topological gradient. This results in a new algorithm allowing for all kinds of topology changes.
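
As a point of reference, the classical level-set approach transports the level-set function \(\phi\) with a Hamilton-Jacobi equation driven by the normal velocity \(V_n\) of the boundary, which is why no new holes can nucleate; the proposed method instead evolves \(\phi\) through a source term built from the topological gradient. The schematic comparison below is an illustration, not the paper's exact equation:

\[
\frac{\partial \phi}{\partial t} + V_n\,|\nabla \phi| = 0
\qquad\text{versus}\qquad
\frac{\partial \phi}{\partial t} = -\,g_T(x,\phi),
\]

where \(g_T\) denotes a generalized topological gradient that can become active inside the domain and thereby nucleate new holes.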

This report discusses two approaches for a posteriori error indication in the linear elasticity solver DDFEM: an indicator based on Richardson extrapolation and a Zienkiewicz-Zhu-type indicator. The solver handles 3D linear elasticity steady-state problems. It uses its own input language to describe the mesh and the boundary conditions. Finite element discretization over tetrahedral meshes with first or second order shape functions (hierarchical basis) has been used to resolve the model. The parallelization of the numerical method is based on the domain decomposition approach. DDFEM is highly portable over a set of parallel computer architectures supporting the MPI-standard.

The rotational spinning of viscous jets is of interest in many industrial applications, including pellet manufacturing [4, 14, 19, 20] and drawing, tapering and spinning of glass and polymer fibers [8, 12, 13], see also [15, 21] and references therein. In [12] an asymptotic model for the dynamics of curved viscous inertial fiber jets emerging from a rotating orifice under surface tension and gravity was deduced from the three-dimensional free boundary value problem given by the incompressible Navier-Stokes equations for a Newtonian fluid. In the terminology of [1], it is a string model consisting of balance equations for mass and linear momentum. Accounting for inner viscous transport, surface tension and placing no restrictions on either the motion or the shape of the jet's center-line, it generalizes the previously developed string models for straight [3, 5, 6] and curved center-lines [4, 13, 19]. Moreover, the numerical results investigating the effects of viscosity, surface tension, gravity and rotation on the jet behavior coincide well with the experiments of Wong et al. [20].

The optimal design of rotational production processes for glass wool manufacturing poses severe computational challenges to mathematicians, natural scientists and engineers. In this paper we focus exclusively on the spinning regime where thousands of viscous thermal glass jets are formed by fast air streams. Homogeneity and slenderness of the spun fibers are the quality features of the final fabric. Their prediction requires the computation of the fluid-fiber interactions, which involves solving a complex three-dimensional multiphase problem with appropriate interface conditions. But this is practically impossible due to the required high resolution and adaptive grid refinement. Therefore, we propose an asymptotic coupling concept. Treating the glass jets as viscous thermal Cosserat rods, we tackle the multiscale problem with the help of momentum (drag) and heat exchange models that are derived on the basis of slender-body theory and homogenization. A weak iterative coupling algorithm, based on the combination of commercial software and self-implemented code for flow and rod solvers, respectively, then makes the simulation of the industrial process possible. For the boundary value problem of the rod we particularly suggest an adapted collocation-continuation method. Consequently, this work establishes a promising basis for future optimization strategies.

This work deals with the modeling and simulation of slender viscous jets exposed to gravity and rotation, as they occur in rotational spinning processes. In terms of slender-body theory we show the asymptotic reduction of a viscous Cosserat rod to a string system for vanishing slenderness parameter. We propose two string models, namely inertial and viscous-inertial string models, that differ in the closure conditions and hence yield a boundary value problem and an interface problem, respectively. We investigate the existence regimes of the string models in the four-parameter space of Froude, Rossby and Reynolds numbers and jet length. The convergence regimes where the respective string solution is the asymptotic limit to the rod turn out to be disjoint and to cover nearly the whole parameter space. We explore the transition hyperplane and derive analytically low and high Reynolds number limits. Numerical studies of the stationary jet behavior for different parameter ranges complete the work.

This paper introduces a new high-level programming language for a novel class of computational devices, namely data-procedural machines. These machines are up to several orders of magnitude more efficient than von Neumann computers, while remaining just as flexible and universal. Their efficiency and flexibility is achieved by using field-programmable logic as the essential technology platform. The paper briefly summarizes and illustrates the essential new features of this language by means of two example programs.

This report gives an insight into the basics of stress field simulations for geothermal reservoirs. The quasistatic equations of poroelasticity are deduced from constitutive equations, balance of mass and balance of momentum. Existence and uniqueness of a weak solution is shown. In order to find an approximate solution numerically, the so-called method of fundamental solutions is a promising approach. The idea of this method as well as a sketch of how convergence may be proven are given.
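
For orientation, the quasistatic equations of poroelasticity are commonly stated as the Biot system (a standard form from the literature; the report's exact constitutive choices and notation may differ):

\[
-\,\nabla\cdot\bigl(\sigma(u) - \alpha\,p\,I\bigr) = f,
\qquad
\partial_t\bigl(c_0\,p + \alpha\,\nabla\cdot u\bigr) - \nabla\cdot(\kappa\,\nabla p) = q,
\]

with displacement \(u\), pore pressure \(p\), effective stress \(\sigma(u)\), Biot coefficient \(\alpha\), storage coefficient \(c_0\) and permeability \(\kappa\).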

Piezoelectric filters are used in telecommunication to filter electrical signals. This report deals with the problem of calculating passing and damped frequency intervals for a filter with given geometrical configurations and materials. Only periodic filters, which are widely used in practice, were considered. These filters consist of periodically arranged cells. For a small number of cells a numerical procedure to visualise the wave propagation in the filter was developed. For a large number of cells another model of the filter was obtained. In this model it is assumed that the filter occupies an infinite domain. This leads to a differential equation, with periodic coefficients, that describes propagation of the wave with a given frequency in the filter. To analyse this equation the Spectral Theory for Periodic Operators had to be employed. Different ways -- analytical and numerical -- to apply the theory were proposed and analysed.
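
For the infinite periodic model, the one-dimensional analogue of such a band computation can be sketched with a transfer-matrix (Floquet) criterion: a frequency lies in a passing band iff the trace of the transfer matrix of one periodic cell lies in [-2, 2]. This is a generic textbook illustration, not the report's method; the piecewise-constant wave speeds and layer widths are hypothetical values.

```python
import math

def layer_matrix(omega, c, d):
    """Fundamental matrix of u'' + (omega/c)^2 u = 0 over a layer of width d."""
    k = omega / c
    return [[math.cos(k * d), math.sin(k * d) / k],
            [-k * math.sin(k * d), math.cos(k * d)]]

def mat_mul(a, b):
    """2x2 matrix product a*b."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def is_passing(omega, layers):
    """Floquet criterion: omega is in a passing band iff |trace(M)| <= 2,
    where M is the transfer matrix of one cell made of (speed, width) layers."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for c, d in layers:
        m = mat_mul(layer_matrix(omega, c, d), m)   # apply layers left to right
    return abs(m[0][0] + m[1][1]) <= 2.0
```

A homogeneous cell passes every frequency, while contrasting layers open stop bands at certain frequencies.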

This paper analyzes and solves a patient transportation problem arising in several large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations on a hospital campus. Transportation requests arrive in a dynamic fashion and the solution methodology must therefore be capable of quickly inserting new requests in the current vehicle routes. Contrary to standard dial-a-ride problems, the problem under study contains several complicating constraints which are specific to a hospital context. The paper provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles.

Within this paper we review image distortion measures. A distortion measure is a criterion that assigns a "quality number" to an image. We distinguish between mathematical distortion measures and those distortion measures incorporating a priori knowledge about the imaging devices (e.g. satellite images), image processing algorithms or the human physiology. We consider and discuss representative examples of different kinds of distortion measures.
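
As a concrete instance of a purely mathematical distortion measure, here is a minimal sketch of mean squared error and PSNR over flattened pixel sequences (the function names and the 8-bit peak value are illustrative assumptions, not taken from the paper):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10.0 * math.log10(peak * peak / e)
```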

The provision of quality-of-service (QoS) on the network layer is a major challenge in communication networks. This applies particularly to mobile ad-hoc networks (MANETs) in the area of Ambient Intelligence (AmI), especially with the increasing use of delay and bandwidth sensitive applications. The focus of this survey lies on the classification and analysis of selected QoS routing protocols in the domain of mobile ad-hoc networks. Each protocol is briefly described and assessed, and the results are summarized in multiple tables.

The objective of this paper is to bridge the gap between location theory and practice. To meet this objective, focus is given to the development of software capable of addressing the different needs of a wide group of users. There is a very active community on location theory encompassing many research fields such as operations research, computer science, mathematics, engineering, geography, economics and marketing. As a result, people working on facility location problems have a very diverse background and also different needs regarding the software to solve these problems. For those interested in non-commercial applications (e.g. students and researchers), the library of location algorithms (LoLA) can be of considerable assistance. LoLA contains a collection of efficient algorithms for solving planar, network and discrete facility location problems. In this paper, a detailed description of the functionality of LoLA is presented. In the fields of geography and marketing, for instance, solving facility location problems requires using large amounts of demographic data. Hence, members of these groups (e.g. urban planners and sales managers) often work with geographical information tools. To address the specific needs of these users, LoLA was linked to a geographical information system (GIS) and the details of the combined functionality are described in the paper. Finally, there is a wide group of practitioners who need to solve large problems and require special purpose software with a good data interface. Many of such users can be found, for example, in the area of supply chain management (SCM). Logistics activities involved in strategic SCM include, among others, facility location planning. In this paper, the development of a commercial location software tool is also described. The tool is embedded in the Advanced Planner and Optimizer SCM software developed by SAP AG, Walldorf, Germany. The paper ends with some conclusions and an outlook to future activities.

We propose a constraint-based approach for the two-dimensional rectangular packing problem with orthogonal orientations. This problem is to arrange a set of rectangles that can be rotated by 90 degrees into a rectangle of minimal size such that no two rectangles overlap. It arises in the placement of electronic devices during the layout of 2.5D System-in-Package integrated electronic systems. Moffitt et al. [8] solve the packing without orientations with a branch and bound approach and use constraint propagation. We generalize their propagation techniques to allow orientations. We compare our approach to a mixed-integer programming formulation and provide computational results showing that our approach outperforms it.

In this paper we develop a network location model that combines the characteristics of ordered median and gradual cover models, resulting in the Ordered Gradual Covering Location Problem (OGCLP). The Gradual Cover Location Problem (GCLP) was specifically designed to extend the basic cover objective to capture sensitivity with respect to absolute travel distance. Ordered Median Location problems are a generalization of most of the classical location problems like p-median or p-center problems. They can be modeled by using so-called ordered median functions. These functions apply a weight to the cost of fulfilling the demand of a customer which depends on the position of that cost relative to the costs of fulfilling the demand of the other customers. We derive Finite Dominating Sets (FDS) for the one-facility case of the OGCLP. Moreover, we present efficient algorithms for determining the FDS and also discuss the conditional case where a certain number of facilities are already assumed to exist and one new facility is to be added. For the multi-facility case we are able to identify a finite set of potential facility locations a priori, which essentially converts the network location model into its discrete counterpart. For the multi-facility discrete OGCLP we discuss several Integer Programming formulations and give computational results.
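
The ordered median mechanism described above can be sketched in a few lines: sort the customer costs and take the weighted sum with a lambda vector, whose choice recovers the classical objectives. A minimal illustration, not the paper's algorithms:

```python
def ordered_median(costs, lam):
    """Ordered median objective: sort the customer costs in nondecreasing
    order, then weight the i-th smallest cost by lam[i]."""
    return sum(l * c for l, c in zip(lam, sorted(costs)))
```

With all weights equal to one this is the p-median (sum) objective; with a single one in the last position it is the p-center (max) objective.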

The Train Marshalling Problem consists of rearranging an incoming train in a marshalling yard in such a way that cars with the same destinations appear consecutively in the final train and the number of needed sorting tracks is minimized. Besides an initial roll-in operation, just one pull-out operation is allowed. This problem was introduced by Dahlhaus et al. who also showed that the problem is NP-complete. In this paper, we provide a new lower bound on the optimal objective value by partitioning an appropriate interval graph. Furthermore, we consider the corresponding online problem, for which we provide upper and lower bounds on the competitiveness and a corresponding optimal deterministic online algorithm. We provide an experimental evaluation of our lower bound and algorithm which shows the practical tightness of the results.

In the generalized max flow problem, the aim is to find a maximum flow in a generalized network, i.e., a network with multipliers on the arcs that specify which portion of the flow entering an arc at its tail node reaches its head node. We consider this problem for the class of series-parallel graphs. First, we study the continuous case of the problem and prove that it can be solved using a greedy approach. Based on this result, we present a combinatorial algorithm that runs in O(m^2) time and a dynamic programming algorithm with running time O(m log m) that only computes the maximum flow value but not the flow itself. For the integral version of the problem, which is known to be NP-complete, we present a pseudo-polynomial algorithm.
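
The effect of arc multipliers is easiest to see on a single path, where the maximum deliverable amount follows by pushing flow through one arc at a time. This toy sketch only illustrates gain arcs; the paper's algorithms handle general series-parallel graphs:

```python
def generalized_path_flow(arcs, supply=float('inf')):
    """Maximum amount reaching the sink along a path of generalized arcs.
    Each arc is (capacity, gain): at most `capacity` units may enter the
    arc, and each entering unit delivers `gain` units at its head node."""
    f = supply
    for capacity, gain in arcs:
        f = min(f, capacity) * gain
    return f
```

For example, a lossy arc (capacity 10, gain 0.5) followed by an amplifying arc (capacity 4, gain 2.0) delivers 8 units at the sink.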

Guaranteeing correctness of compilation is a major precondition for correct software. Code generation can be one of the most error-prone tasks in a compiler. One way to achieve trusted compilation is certifying compilation. A certifying compiler generates for each run a proof that it has performed the compilation run correctly. The proof is checked in a separate theorem prover. If the theorem prover accepts the proof, one can be sure that the compiler produced correct code. This paper presents a certifying code generation phase for a compiler translating an intermediate language into assembler code. The time spent for checking the proofs is the bottleneck of certifying compilation. We exhibit an improved framework for certifying compilation and considerable advances to overcome this bottleneck. We compare our implementation featuring the Coq theorem prover to an older implementation. Our current implementation is feasible for medium to large sized programs.

Abstraction is intensively used in the verification of large, complex or infinite-state systems. With abstractions getting more complex it is often difficult to see whether they are valid. However, for using abstraction in model checking it has to be ensured that properties are preserved. In this paper, we use a translation validation approach to verify property preservation of system abstractions. We formulate a correctness criterion based on simulation between concrete and abstract system for a property to be verified. For each distinct run of the abstraction procedure the correctness is verified in the theorem prover Isabelle/HOL. This technique is applied in the verification of embedded adaptive systems. This paper is an extended version of a previously published work.

Many applications dealing with geometry acquisition and processing produce polygonal meshes that carry artifacts like discretization noise. While there are many approaches to remove the artifacts by smoothing or filtering the mesh, they are not tailored to any specific application subject to certain restrictive objectives. We show how to incorporate smoothing schemes based on the general Laplacian approximation to satisfy all those objectives at the same time for the results of flow simulation in the application field of car manufacturing. In the presented application setting the major restrictions come from the bounding volume of the flow simulation, the so-called installation space. In particular, clean mesh regions (without noise) should not be smoothed while at the same time the installation space must not be violated by the smoothing of the noisy mesh regions. Additionally, aliasing effects at the boundary between clean and noisy mesh regions must be prevented. To address the fact that the meshes come from flow simulation, the presented method is versatile enough to preserve their exact volume and to apply anisotropic filters using the flow information. Although the paper focuses on the results of a specific application, most of its findings can be transferred to different settings as well.

The Discrete Ordered Median Problem (DOMP) generalizes classical discrete location problems, such as the N-median, N-center and Uncapacitated Facility Location problems. It was introduced by Nickel [16], who formulated it as both a nonlinear and a linear integer program. We propose an alternative integer linear programming formulation for the DOMP, discuss relationships between both integer linear programming formulations, and show how properties of optimal solutions can be used to strengthen these formulations. Moreover, we present a specific branch and bound procedure to solve the DOMP more efficiently. We test the integer linear programming formulations and this branch and bound method computationally on randomly generated test problems.

In this paper, a stochastic model [5] for the turbulent fiber laydown in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte-Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.

Radiation therapy planning is always a tightrope walk between dangerous insufficient dose in the target volume and life threatening overdosing of organs at risk. Finding ideal balances between these inherently contradictory goals challenges dosimetrists and physicians in their daily practice. Today's planning systems are typically based on a single evaluation function that measures the quality of a radiation treatment plan. Unfortunately, such a one-dimensional approach cannot satisfactorily map the different backgrounds of physicians and the patient-dependent necessities. So, too often a time consuming iteration process between evaluation of dose distribution and redefinition of the evaluation function is needed. In this paper we propose a generic multi-criteria approach based on Pareto's solution concept. For each entity of interest - target volume or organ at risk - a structure-dependent evaluation function is defined, measuring deviations from ideal doses that are calculated from statistical functions. A reasonable set of clinically meaningful Pareto optimal solutions is stored in a database, which can be interactively searched by physicians. The system guarantees dynamical planning as well as the discussion of tradeoffs between different entities. Mathematically, we model the upcoming inverse problem as a multi-criteria linear programming problem. Because of the large scale nature of the problem it is not possible to solve the problem in a 3D-setting without adaptive reduction by appropriate approximation schemes. Our approach is twofold: First, the discretization of the continuous problem is based on an adaptive hierarchical clustering process which is used for a local refinement of constraints during the optimization procedure. Second, the set of Pareto optimal solutions is approximated by an adaptive grid of representatives that are found by a hybrid process of calculating extreme compromises and interpolation methods.
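
The Pareto solution concept used above can be made concrete with a brute-force filter over finitely many candidate plans, each scored by a vector of criteria to be minimized. This is only a toy sketch of Pareto optimality itself, unrelated to the adaptive approximation schemes of the paper:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every criterion and strictly
    better in at least one (all criteria are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the Pareto-optimal subset of a finite set of criterion vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A plan like (3, 3) is discarded because (2, 2) is better in both criteria, while the mutually incomparable trade-offs survive.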

The quality of freeform surfaces is one of the major topics of CAD/CAM. Aesthetic and technical demands require the construction of high quality surfaces with strong shape conditions. Quality diminishing properties like dents or flat points have to be eliminated while approximation conditions must hold at the same time. Our approach combines quality and approximation criteria to a nonlinear multicriteria optimization problem and achieves an automatic approximation and fitting process.

This report gives an overview of the separate translation of synchronous imperative programs to synchronous guarded actions. In particular, we consider the problems for separate compilation that stem from preemption statements and local variable declarations. We explain how we solved these problems and sketch the solutions implemented in our Averest framework, which yield a compiler that allows separate compilation of imperative synchronous programs with local variables and unrestricted preemption statements. The focus of the report is the big picture of our entire design flow.

SHIM is a concurrent deterministic programming language for embedded systems built on rendezvous communication. It abstracts away many details to give the developer a high-level view that includes virtual shared variables, threads as orthogonal statements, and deterministic concurrent exceptions.
In this paper, we present a new way to compile a SHIM-like language into a set of asynchronous guarded actions, a well-established intermediate representation for concurrent systems. By doing so, we build a bridge to many other tools, including hardware synthesis and formal verification. We present our translation in detail, illustrate it through examples, and show how the result can be used by various other tools.

This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both coefficients and degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x^2 = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffer – besides the complexity of the problems – from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are more suited and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
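
The arithmetic modulo the field equations can be illustrated with a plain set-of-monomials representation, a didactic stand-in for the far more compact ZDD data structure: a polynomial is a set of monomials, a monomial a frozenset of variables, so x*x = x becomes a set union and GF(2) addition a symmetric difference.

```python
def bool_mul(p, q):
    """Multiply two Boolean polynomials given as sets of monomials
    (frozensets of variables). Coefficients live in GF(2), so equal
    monomials cancel pairwise; x*x = x makes monomial products unions."""
    result = set()
    for m in p:
        for n in q:
            result ^= {m | n}   # symmetric difference = addition in GF(2)
    return result
```

For instance, the polynomial x + 1 squares to itself, reflecting the fact that p^2 = p holds for every element of a Boolean ring.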

We introduce the concept of streamballs for fluid flow visualization. Streamballs are based upon implicit surface generation techniques adopted from the well-known metaballs. Their property to split or merge automatically in areas of significant divergence or convergence makes them an ideal tool for the visualization of arbitrary complex flow fields. Using convolution surfaces generated by continuous skeletons for streamball construction offers the possibility to visualize even tensor fields.
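
A minimal sketch of the scalar field underlying such implicit-surface techniques, with an inverse-square kernel chosen here purely for brevity (the original metaballs and the convolution surfaces mentioned above use smoother kernels):

```python
def metaball_field(p, balls):
    """Metaball-style scalar field: sum of r_i^2 / |p - c_i|^2 over all
    balls (center c_i, radius r_i). The implicit surface is the level set
    field(p) == threshold; nearby balls blend and merge smoothly."""
    total = 0.0
    for center, r in balls:
        d2 = sum((x - c) ** 2 for x, c in zip(p, center))
        total += float('inf') if d2 == 0 else r * r / d2
    return total
```

Summing the contributions of several balls is what makes the surfaces split or merge automatically as the centers move apart or together.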

Optimal degree reductions, i.e. best approximations of \(n\)-th degree Bezier curves by Bezier curves of degree \(n\) - 1, with respect to different norms are studied. It is shown that for any \(L_p\)-norm the Euclidean degree reduction, where the norm is applied to the Euclidean distance function of two curves, is identical to componentwise degree reduction. The Bezier points of the degree reductions are found to lie on parallel lines through the Bezier points of any Taylor expansion of degree \(n\) - 1 of the original curve. This geometric situation is shown to hold also in the case of constrained degree reduction. The Bezier points of the degree reduction are explicitly given in the unconstrained case for \(p\) = 1 and \(p\) = 2 and in the constrained case for \(p\) = 2.
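
Degree elevation, the exact inverse direction of degree reduction, can be sketched as follows; evaluating both curves with de Casteljau's algorithm confirms they coincide. This is the standard textbook construction with hypothetical helper names, not the paper's reduction formulas:

```python
def elevate(points):
    """Degree-elevate a Bezier curve: the same curve of degree n-1
    rewritten with n+1 control points. Optimal degree reduction is a
    (least-squares) inverse of this map."""
    n = len(points)              # new degree is n (old degree is n-1)
    out = [points[0]]
    for i in range(1, n):
        t = i / n
        out.append(tuple(t * a + (1 - t) * b
                         for a, b in zip(points[i - 1], points[i])))
    out.append(points[-1])
    return out

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```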

The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of the curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data. In this way, the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.

The paper deals with parallel-machine and open-shop scheduling problems with preemptions and an arbitrary nondecreasing objective function. An approach to describe the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems where preemption is allowed at arbitrary times. A polynomial algorithm is presented that transforms a schedule of one type into a schedule of the other type.

Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e., non-MVP-L specialists) review models much more easily by using graphical representations. Although several graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic symbols into a graphical representation of a model.

Intellectual control over software development projects requires the existence of an integrated set of explicit models of the products to be developed, the processes used to develop them, the resources needed, and the productivity and quality aspects involved. In recent years the development of languages, methods and tools for modeling software processes, analyzing and enacting them has become a major emphasis of software engineering research. The majority of current process research concentrates on prescriptive modeling of small, completely formalizable processes and their execution entirely on computers. This research direction has produced process modeling languages suitable for machine rather than human consumption. The MVP project, launched at the University of Maryland and continued at Universität Kaiserslautern, emphasizes building descriptive models of large, real-world processes and their use by humans and computers for the purpose of understanding, analyzing, guiding and improving software development projects. The language MVP-L has been developed with these purposes in mind. In this paper, we
motivate the need for MVP-L, introduce the prototype language, and demonstrate its uses. We assume that further improvements to our language will be triggered by lessons learned from applications and experiments.

In this work we extend the multiscale finite element method (MsFEM)
as formulated by Hou and Wu in [14] to the PDE system of linear elasticity.
The application, motivated from the multiscale analysis of highly heterogeneous
composite materials, is twofold. Resolving the heterogeneities on
the finest scale, we utilize the linear MsFEM basis for the construction of
robust coarse spaces in the context of two-level overlapping Domain Decomposition
preconditioners. We motivate and explain the construction
and present numerical results validating the approach. Under the assumption
that the material jumps are isolated, that is they occur only in the
interior of the coarse grid elements, our experiments show uniform convergence
rates independent of the contrast in the Young's modulus within the
heterogeneous material. Otherwise, if no restrictions on the position of the high-coefficient inclusions are imposed, robustness can no longer be guaranteed. These results justify the expectation of obtaining coefficient-explicit condition number bounds for the PDE system of linear elasticity, similar to existing ones for scalar elliptic PDEs as given in the work of Graham, Lechner and Scheichl [12]. Furthermore, we numerically observe the properties of the MsFEM coarse space for linear elasticity in an upscaling framework. To this end, we present experimental results showing the approximation errors of the multiscale coarse space with respect to the fine-scale solution.

This publication develops mathematical subject matter for schools from realistic problems. At the center of this report are business planning and decision problems which occur in almost all companies. The main topics are: calculation of raw material demand for given orders, consumption of existing stock, and lot sizing.

For the numerical simulation of a mechanical multibody system (MBS), dynamical loads are needed as input data, such as a road profile. With given input quantities, the equations of motion of the system can be integrated. Output quantities for further investigations are calculated from the integration results. In this paper, we consider the corresponding inverse problem: we assume that a dynamical system and some reference output signals are given. The general task is to derive an input signal such that the system simulation produces the desired reference output. We present the state-of-the-art method in industrial applications, the iterative learning control (ILC) method, and give an application example from the automotive industry. Then, we discuss three alternative methods based on optimal control theory for differential algebraic equations (DAEs) and give an overview of their general scheme.
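The classical ILC iteration can be illustrated on a toy linear system. The sketch below is not the industrial implementation referred to above; the impulse response, learning gain, and reference signal are hypothetical. It applies the textbook update u_{k+1} = u_k + L·e_k, where e_k is the tracking error of trial k:

```python
import numpy as np

def impulse_matrix(h, n):
    """Lower-triangular convolution matrix of an impulse response h (toy system y = G u)."""
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(h):
                G[i, j] = h[i - j]
    return G

def ilc(G, y_ref, gain=1.0, trials=25):
    """Iterative learning control: repeat a trial, then correct the input with the error."""
    u = np.zeros(len(y_ref))
    for _ in range(trials):
        e = y_ref - G @ u   # tracking error of the current trial
        u = u + gain * e    # learning update u_{k+1} = u_k + L e_k with L = gain * I
    return u

G = impulse_matrix(np.array([1.0, 0.5, 0.25]), 20)
y_ref = np.sin(np.linspace(0.0, np.pi, 20))
u = ilc(G, y_ref)
print(float(np.max(np.abs(y_ref - G @ u))))  # residual tracking error, near zero
```

For this lower-triangular toy system with unit leading impulse-response coefficient and gain 1, the error operator I - G is nilpotent, so the reference is tracked exactly after at most n trials.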

Input loads are essential for the numerical simulation of vehicle multibody system (MBS) models. Such load data is called invariant if it is independent of the specific system under consideration. A digital road profile, e.g., can be used to excite MBS models of different vehicle variants. However, quantities efficiently obtained by measurement, such as wheel forces, are typically not invariant in this sense. This leads to the general task of deriving invariant loads on the basis of measurable, but system-dependent, quantities. We present an approach to derive input data for full-vehicle simulation that can be used to simulate different variants of a vehicle MBS model. An important ingredient of this input data is a virtual road profile computed by optimal control methods.

The provision of network Quality-of-Service (network QoS) in wireless (ad-hoc) networks is a major challenge in the development of future communication systems. Before designing and implementing these systems, the network QoS requirements are to be specified. Existing approaches to the specification of network QoS requirements are mainly focused on specific domains or individual system layers. In this paper, we present a holistic, comprehensive formalization of network QoS requirements, across layers. QoS requirements are specified on each layer by defining a QoS domain, consisting of QoS performance, reliability, and guarantee, and QoS scalability, with utility and cost functions. Furthermore, we derive preorders on multi-dimensional QoS domains, and present criteria to reduce these domains, leading to a manageable subset of QoS values that is sufficient for system design and implementation. We illustrate our approach by examples from the case study Wireless Video Transmission.

The provision of network Quality-of-Service (network QoS) in wireless (ad-hoc) networks is a major challenge in the development of future communication systems. Before designing and implementing these systems, the network QoS requirements are to be specified. Since QoS functionalities are integrated across layers and hence QoS specifications exist on different system layers, a QoS mapping technique is needed to translate the specifications into each other. In this paper, we formalize the relationship between layers. Based on a comprehensive and holistic formalization of network QoS requirements, we define two kinds of QoS mappings. QoS domain mappings associate QoS domains of two abstraction levels. QoS scalability mappings associate utility and cost functions of two abstraction levels. We illustrate our approach by examples from the case study Wireless Video Transmission.

The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, filtering out solid particles from liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite-volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous media. The time discretization used here is a proper modification of the fractional time step discretization (cf. Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors by using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for the estimation of the complexity of the given parallel algorithm and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.

In this paper we consider numerical algorithms for solving a system of nonlinear PDEs arising in modeling of liquid polymer injection. We investigate the particular case when a porous preform is located within the mould, so that the liquid polymer flows through a porous medium during the filling stage. The nonlinearity of the governing system of PDEs is due to the non-Newtonian behavior of the polymer, as well as due to the moving free boundary. The latter is related to the penetration front, and a Stefan type problem is formulated to account for it. A finite-volume method is used to approximate the given differential problem. Results of numerical experiments are presented. We also solve an inverse problem and present algorithms for the determination of the absolute preform permeability coefficient in the case when the velocity of the penetration front is known from measurements. In both cases (direct and inverse problems) we emphasize the specifics related to the non-Newtonian behavior of the polymer. For completeness, we also discuss the Newtonian case. Results of some experimental measurements are presented and discussed.

The capacitated single-allocation hub location problem revisited: A note on a classical formulation
(2009)

Denote by G = (N, A) a complete graph where N is the set of nodes and A is the set of edges. Assume that a flow wij should be sent from each node i to each node j (i, j ∈ N). One possibility is to send these flows directly between the corresponding pairs of nodes. However, in practice this is often neither efficient nor cost-attractive, because it would imply that a link was built between each pair of nodes. An alternative is to select some nodes to become hubs and use them as consolidation and redistribution points that altogether process the flow in the network more efficiently. Accordingly, hubs are nodes in the graph that receive traffic (mail, phone calls, passengers, etc.) from different origins (nodes) and redirect this traffic directly to the destination nodes (when a link exists) or else to other hubs. The concentration of traffic in the hubs and its shipment to other hubs lead to a natural decrease in the overall cost due to economies of scale.

In this paper, an extension to the classical capacitated single-allocation hub location problem is studied in which the size of the hubs is part of the decision making process. For each potential hub a set of capacities is assumed to be available among which one can be chosen. Several formulations are proposed for the problem, which are compared in terms of the bound provided by the linear programming relaxation. Different sets of inequalities are proposed to enhance the models. Several preprocessing tests are also presented with the goal of reducing the size of the models for each particular instance. The results of the computational experiments performed using the proposed models are reported.

User interfaces for large distributed applications have to handle specific problems: the complexity of the application itself and the integration of online data into the user interface. A main task of the user interface architecture is to provide powerful tools to design and augment the end-user system easily, hence giving the designer more time to focus on user requirements. Our experience developing a user interface system for a process control room showed that a lot of time during the development process is wasted on the integration of online data residing anywhere but in the user interface itself. Furthermore, external data may be kept by different kinds of programs, e.g. C programs running a numerical process model or PROLOG programs running a diagnosis system, both in parallel to the process and in parallel to the user interface. Facing these specific requirements, we developed a user interface architecture following two main goals: 1. integration of external information into high-level graphical objects, and 2. openness of the system to any program running as a separate process using its own problem-oriented language. The architecture is based on two approaches: an asynchronous, distributed and language-independent communication model, and an object model describing the problem domain and the interface using object-oriented techniques. Other areas, like rule-based programming, are involved, too. In this paper, we present the XAVIA user interface architecture, the (as far as we know) first user interface architecture that is consistently based on a distributed object model.

The scope of this paper is to enhance the model for the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stocks in the own company whose value he can directly influence by spending work effort. The executive is thereby characterized by a parameter of risk aversion and the two work effectiveness parameters, inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort an executive will apply, and we derive the optimal investment strategies of the executive. Furthermore, we determine an up-front fair cash compensation applying an indifference utility rationale. Our study shows to a large extent that the results previously obtained are robust under the choice of the utility/disutility set-up.

We develop a framework for analyzing an executive’s own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company whose value he can directly influence with work effort. The executive’s utility-maximizing personal investment and work effort strategy is derived in closed-form, and an indifference utility rationale is demonstrated to determine his required compensation. Our results have implications for the practical and theoretical assessment of executive quality and the benefits of performance contracting. Assuming knowledge of the company’s non-systematic risk, our executive’s unconstrained own-company investment identifies his work effectiveness (i.e. quality), and also reflects work effort that establishes a base-level that performance contracting should seek to exceed.

We consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary. The salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industry, and the IT sector.

This document offers a concise introduction to the Goal Question Metric Paradigm (GQM Paradigm), and surveys research on applying and extending the GQM Paradigm. We describe the GQM Paradigm in terms of its basic principles, techniques for structuring GQM-related documents, and methods for performing tasks of planning and implementing a measurement program based on GQM. We also survey prototype software tools that support applying the GQM Paradigm in various ways. An annotated bibliography lists sources that document experience gained while using the GQM Paradigm and offer in-depth information about the GQM Paradigm.

This report describes the calibration and completion of the volatility cube in the SABR model. The description is based on a project done for Assenagon GmbH in Munich. However, we use fictitious market data which resembles realistic market data. The problem posed by our client is formulated in section 1. Here we also motivate why this is a relevant problem. The SABR model is briefly reviewed in section 2. Section 3 discusses the calibration and completion of the volatility cube. An example is presented in section 4. We conclude by suggesting possible future research in section 5.

In this work we use the Parsimonious Multi–Asset Heston model recently developed in [Dimitroff et al., 2009] at Fraunhofer ITWM, Department Financial Mathematics, Kaiserslautern (Germany) and apply it to Quanto options. We give a summary of the model and its calibration scheme. A suitable transformation of the Quanto option payoff is explained and used to price Quantos within the new framework. Simulated prices are given and compared to market prices and Black–Scholes prices. We find that the new approach underprices the chosen options, but gives better results than the Black–Scholes approach, which prevails in the literature on Quanto options.

The World Wide Web is a medium through which a manufacturer may allow Internet visitors to customize or compose his products. Due to missing or rapidly changing standards, these applications are often restricted to relatively simple CGI or JAVA based scripts. Usually, results like images or movies are stored in a database and are transferred on demand to the web user. Viper (Visualisierung parametrisch editierbarer Raumkomponenten) is a Toolkit [VIP96] written in C++ and JAVA which provides 3D modeling and visualization methods for developing complex web-based applications. The Toolkit has been designed to build a prototype which can be used to construct and visualize prefabricated homes on the Internet. Alternative applications are outlined in this paper. Within Viper, all objects are stored in a scene graph (VSSG), which is the basic data structure of the Toolkit. To show the concept and structure of the Toolkit, the functionality and implementation of the prototype are described.

We present two heuristic methods for solving the Discrete Ordered Median Problem (DOMP), for which no such approaches have been developed so far. The DOMP generalizes classical discrete facility location problems, such as the p-median, p-center and Uncapacitated Facility Location problems. The first procedure proposed in this paper is based on a genetic algorithm developed by Moreno Vega [MV96] for p-median and p-center problems. Additionally, a second heuristic approach based on the Variable Neighborhood Search metaheuristic (VNS) proposed by Hansen & Mladenovic [HM97] for the p-median problem is described. An extensive numerical study is presented to show the efficiency of both heuristics and compare them.
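As an illustration of the second heuristic, a minimal Variable Neighborhood Search for the p-median problem might look as follows. This is a sketch in textbook form, not the procedure of [HM97]; the instance at the end (four clients on a line) is hypothetical:

```python
import itertools
import random

def pmedian_cost(dist, medians):
    """Total distance of each client to its nearest open median."""
    return sum(min(dist[i][j] for j in medians) for i in range(len(dist)))

def vns_pmedian(dist, p, k_max=3, iters=100, seed=0):
    """Basic VNS: shake by exchanging k medians, then improve with single swaps."""
    rng = random.Random(seed)
    n = len(dist)
    best = set(rng.sample(range(n), p))
    best_cost = pmedian_cost(dist, best)
    k = 1
    for _ in range(iters):
        # Shaking: jump to a random solution in the k-th neighborhood.
        cur = set(best)
        out = rng.sample(sorted(cur), min(k, p, n - p))
        inn = rng.sample(sorted(set(range(n)) - cur), len(out))
        cur = (cur - set(out)) | set(inn)
        # Local search: first-improvement single-swap (interchange) heuristic.
        cost = pmedian_cost(dist, cur)
        improved = True
        while improved:
            improved = False
            for i, j in itertools.product(sorted(cur), sorted(set(range(n)) - cur)):
                cand = (cur - {i}) | {j}
                c = pmedian_cost(dist, cand)
                if c < cost:
                    cur, cost, improved = cand, c, True
                    break  # restart the scan from the improved solution
        if cost < best_cost:
            best, best_cost, k = set(cur), cost, 1  # success: recentre, reset k
        else:
            k = k % k_max + 1                       # failure: try next neighborhood
    return best, best_cost

# Four clients at positions 0, 1, 10, 11; with p = 2, one median per
# cluster is optimal, giving total cost 2.
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
medians, cost = vns_pmedian(dist, p=2)
print(cost)  # → 2
```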

In this paper we propose a general solution method for the single facility ordered median problem in the plane. All types of weights (non-negative, non-positive, and mixed) are considered. The big triangle small triangle approach is used for the solution. Rigorous and heuristic algorithms are proposed and extensively tested on eight different problems with excellent results.

Safety and reliability requirements on the one side and short development cycles, low costs and lightweight design on the other side are two competing aspects of truck engineering. For safety critical components essentially no failures can be tolerated within the target mileage of a truck. For other components the goals are to stay below certain predefined failure rates. Reducing weight or cost of structures often also reduces strength and reliability. The requirements on the strength, however, strongly depend on the loads in actual customer usage. Without sufficient knowledge of these loads one needs large safety factors, limiting possible weight or cost reduction potentials. There are a lot of different quantities influencing the loads acting on the vehicle in actual usage. These ‘influencing quantities’ are, for example, the road quality, the driver, traffic conditions, the mission (long haulage, distribution or construction site), and the geographic region. Thus there is a need for statistical methods to model the load distribution with all its variability, which in turn can be used for the derivation of testing specifications.

We present a unified approach to several boundary conditions for lattice Boltzmann models. Its general framework is a generalization of previously introduced schemes such as the bounce-back rule, linear or quadratic interpolations, etc. The objectives are twofold: first, to give theoretical tools to study the existing boundary conditions and their corresponding accuracy; secondly, to design formally third-order accurate boundary conditions for general flows. Using these boundary conditions, Couette and Poiseuille flows are exact solutions of the lattice Boltzmann models for a Reynolds number Re = 0 (Stokes limit). Numerical comparisons are given for Stokes flows in periodic arrays of spheres and cylinders, linear periodic arrays of cylinders between moving plates, and for Navier-Stokes flows in periodic arrays of cylinders for Re < 200. These results show a significant improvement of the overall accuracy when using the linear interpolations instead of the bounce-back reflection (up to an order of magnitude on the hydrodynamic fields). Further improvement is achieved with the new multi-reflection boundary conditions, reaching a level of accuracy close to the quasi-analytical reference solutions, even for rather modest grid resolutions and few points in the narrowest channels. More importantly, the pressure and velocity fields in the vicinity of the obstacles are much smoother with multi-reflection than with the other boundary conditions. Finally, the good stability of these schemes is highlighted by some simulations of moving obstacles: a cylinder between flat walls and a sphere in a cylinder.

On a Multigrid Adaptive Refinement Solver for Saturated Non-Newtonian Flow in Porous Media

A multigrid adaptive refinement algorithm for non-Newtonian flow in porous media is presented. The saturated flow of a non-Newtonian fluid is described by the continuity equation and the generalized Darcy law. The resulting second order nonlinear elliptic equation is discretized by a finite volume method on a cell-centered grid. A nonlinear full-multigrid, full-approximation-storage algorithm is implemented. As a smoother, a single grid solver based on Picard linearization and Gauss-Seidel relaxation is used. Further, a local refinement multigrid algorithm on a composite grid is developed. A residual based error indicator is used in the adaptive refinement criterion. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Several results from numerical experiments are presented in order to examine the performance of the solver.

Flow of non-Newtonian fluid in saturated porous media can be described by the continuity equation and the generalized Darcy law. Efficient solution of the resulting second order nonlinear elliptic equation is discussed here. The equation is discretized by a finite volume method on a cell-centered grid. Local adaptive refinement of the grid is introduced in order to reduce the number of unknowns. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Two residual based error indicators are exploited in the adaptive refinement criterion. Second order accurate discretization of the fluxes on the interfaces between refined and non-refined subdomains, as well as on the boundaries with Dirichlet boundary condition, are presented here, as an essential part of the accurate and efficient algorithm. A nonlinear full approximation storage multigrid algorithm is developed especially for the above described composite (coarse plus locally refined) grid approach. In particular, second order approximation of the fluxes around interfaces is a result of a quadratic approximation of slave nodes in the multigrid - adaptive refinement (MG-AR) algorithm. Results from numerical solution of various academic and practice-induced problems are presented and the performance of the solver is discussed.

In this paper, we propose multi-level Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multi-phase flow and transport. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. We consider two types of ensemble level mixed multiscale finite element methods, (1) the no-local-solve-online ensemble level method (NLSO) and (2) the local-solve-online ensemble level method (LSO). Both mixed multiscale methods use a number of snapshots of the permeability media to generate a multiscale basis.
As a result, in the offline stage, we construct multiple basis functions for
each coarse region where basis functions correspond to different realizations.
In the no-local-solve-online ensemble level method one uses the whole set of pre-computed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method one uses the pre-computed functions to construct a multiscale basis for a particular realization. With this basis the solution corresponding to this
particular realization is approximated in LSO mixed MsFEM. In both approaches
the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to pre-compute a
multiscale basis. We note that LSO approaches share similarities with reduced basis methods [11, 21, 22].
In multi-level Monte Carlo methods ([14, 13]), more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. Selecting the number of expensive and inexpensive simulations carefully, one can show that MLMC methods can provide better accuracy at the same cost as MC methods. In our simulations, our goal is twofold. First, we would like to compare NLSO and LSO mixed MsFEMs. In particular, we show that the NLSO mixed MsFEM is more accurate than the LSO mixed MsFEM. Further, we use both approaches in the context of MLMC to speed up MC calculations. We present basic aspects of the algorithm and numerical results for coupled flow and transport in heterogeneous porous media.
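The MLMC estimator itself is independent of the multiscale forward solver and can be sketched generically. In the toy example below (all numbers are hypothetical), level 0 stands for a cheap, biased coarse solver and level 1 for the fine-minus-coarse correction; the estimator sums the per-level sample means, using many cheap draws and few expensive ones:

```python
import random

def mlmc(sample_level, n_samples, seed=42):
    """Multi-level Monte Carlo estimator: sum of per-level sample means.

    sample_level(l, rng) returns a draw of P_0 for l = 0 and of the
    correction P_l - P_{l-1} for l > 0; n_samples[l] is the number of
    draws on level l (typically decreasing with l).
    """
    rng = random.Random(seed)
    estimate = 0.0
    for level, n in enumerate(n_samples):
        estimate += sum(sample_level(level, rng) for _ in range(n)) / n
    return estimate

def sample_level(level, rng):
    if level == 0:
        return rng.gauss(0.5, 0.1)   # coarse solver: cheap, biased, noisy
    return rng.gauss(0.5, 0.01)      # correction: small mean, small variance

# 10000 cheap coarse samples, only 100 expensive corrections;
# the estimate approaches the true value 1.0.
est = mlmc(sample_level, [10000, 100])
print(round(est, 2))
```

The point of the construction is that the correction terms have small variance, so few expensive samples suffice for them, while the noisy coarse level is averaged over many cheap samples.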

In this paper we will introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we will show that lexicographic max-ordering solutions are Pareto optimal as well as max-ordering optimal. Furthermore, lexicographic max-ordering solutions can be used to characterize the set of Pareto solutions. Further properties of lexicographic max-ordering solutions are given. Section 3 is devoted to algorithms. We give a polynomial time algorithm for the two-criteria case where one criterion is a sum and one is a bottleneck objective function, provided that the one-criterion sum problem is solvable in polynomial time. For bottleneck functions, an algorithm for the general case of Q criteria is presented.
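Comparing two solutions in the lexicographic max-ordering sense amounts to sorting their objective vectors non-increasingly and comparing the sorted vectors lexicographically, as the following sketch shows (the three candidate solutions are hypothetical):

```python
def lex_max_order_key(costs):
    """Sort the objective vector non-increasingly; lexicographic comparison of
    these keys first minimizes the largest component, then the second largest, ..."""
    return tuple(sorted(costs, reverse=True))

# Three feasible solutions with three objective values each.
solutions = {"a": (4, 1, 3), "b": (4, 2, 2), "c": (3, 5, 1)}
best = min(solutions, key=lambda s: lex_max_order_key(solutions[s]))
print(best, lex_max_order_key(solutions[best]))  # → b (4, 2, 2)
```

Here "a" and "b" tie on the largest value 4, but "b" wins on the second largest (2 versus 3), while "c" is ruled out by its largest value 5.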

In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the Pareto set and the max-ordering problem, which consists in finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented and a result on the relation of the solution sets of the two problems is given. The main results are a characterization of Pareto bases by a basis exchange property and, finally, a connectivity result for proper Pareto solutions.
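For intuition, the max-ordering objective can be evaluated by brute force on a tiny graphic matroid, where the bases are the spanning trees. The graph and weights below are hypothetical, and enumeration is exponential, which is exactly why a dedicated solution procedure is needed; this sketch only illustrates the objective:

```python
from itertools import combinations

def is_spanning_tree(n, tree):
    """Union-find check that an (n-1)-edge set is acyclic, hence a spanning tree."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def max_ordering_basis(n, edges, q):
    """Among all bases (spanning trees), return one whose largest of the
    q objective sums is minimal."""
    best, best_val = None, float("inf")
    for cand in combinations(edges, n - 1):
        if not is_spanning_tree(n, cand):
            continue
        val = max(sum(e[2][k] for e in cand) for k in range(q))
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Triangle with two objectives per edge: the tree {01, 12} balances the
# objective sums at (5, 5), so its max-ordering value 5 is optimal.
edges = [(0, 1, (1, 4)), (1, 2, (4, 1)), (0, 2, (2, 2))]
tree, value = max_ordering_basis(3, edges, q=2)
print(value)  # → 5
```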

Dealing with problems from locational planning in schools can enrich mathematical education. In this report we describe planar location problems which can be used in mathematics lessons. The problems (production of a semiconductor plate, design of a fire brigade building, and the warehouse problem) are taken from the real world. The problems are worked out in detail so that they can be used in school lessons.

In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.

Simulation of multibody systems (MBS) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained in importance in recent years, e.g. for HIL, MIL, or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms, one microcontroller and two real-time processor boards. In addition to numerical C code, the latter platforms provide the possibility to build the model with a commercial MBS tool, which is also investigated. A survey of different ways of generating code and equations of MBS models is given and discussed concerning handling, possible limitations, and performance. The presented benchmarks are processed under the terms of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly-implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.

Best-Fit Pattern Matching
(1994)

This report shows that dispatching of methods in object-oriented languages is in principle the same as best-fit pattern matching. A general conceptual description of best-fit pattern matching is presented. Many object-oriented features are modelled by means of the general concept. This shows that simple methods, multi-methods, overloading of functions, pattern matching, dynamic and union types, and extendable records can be combined in a single comprehensive concept.
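The correspondence can be sketched in a few lines: register each method with a pattern of parameter classes, and dispatch a call by picking the registered pattern that fits the arguments best, i.e. is closest in the class hierarchy. The classes and methods below are hypothetical illustrations, not taken from the report:

```python
class Shape: pass
class Circle(Shape): pass

methods = []  # registered (pattern, implementation) pairs

def defmethod(pattern, impl):
    methods.append((pattern, impl))

def fit(pattern, args):
    """How well a pattern fits: sum of distances in each argument's MRO,
    or None if some argument is not an instance of its pattern class."""
    total = 0
    for cls, arg in zip(pattern, args):
        mro = type(arg).__mro__
        if cls not in mro:
            return None
        total += mro.index(cls)  # 0 = exact class, larger = more generic
    return total

def dispatch(*args):
    """Call the most specific (best-fitting) applicable method."""
    fits = [(f, impl) for pattern, impl in methods
            if (f := fit(pattern, args)) is not None]
    if not fits:
        raise TypeError("no applicable method")
    return min(fits, key=lambda t: t[0])[1](*args)

defmethod((Shape, Shape), lambda a, b: "shape/shape")
defmethod((Circle, Shape), lambda a, b: "circle/shape")

print(dispatch(Circle(), Shape()))  # → circle/shape  (most specific pattern wins)
print(dispatch(Shape(), Circle()))  # → shape/shape   (Circle pattern does not match)
```

Single dispatch, multi-methods, and overloading then differ only in which positions of the pattern participate in the fit.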

Selfish Bin Coloring
(2009)

We introduce a new game, the so-called bin coloring game, in which selfish players control colored items and each player aims at packing its item into a bin with as few different colors as possible. We establish the existence of Nash and strong as well as weakly and strictly Pareto optimal equilibria in these games in the cases of capacitated and uncapacitated bins. For both kinds of games we determine the prices of anarchy and stability concerning those four equilibrium concepts. Furthermore, we show that extreme Nash equilibria, those with minimal or maximal number of colors in a bin, can be found in time polynomial in the number of items for the uncapacitated case.

The modelling of hedge funds poses a difficult problem, since the available reported data sets are often small and incomplete. We propose a switching regression model for hedge funds in which the coefficients can switch between different regimes. The coefficients are governed by a Markov chain in discrete time, whose states represent different states of the economy that influence the performance of the independent variables. Hedge fund indices are chosen as regressors. Parameter estimation, for the switching coefficients as well as for the switching error term, is done through a filtering technique for hidden Markov models developed by Elliott (1994). Recursive parameter estimates are calculated through a filter-based EM algorithm, which uses the hidden information of the underlying Markov chain. Our switching regression model is applied to hedge fund series and hedge fund indices from the HFR database.
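The filtering step for such a regime-switching regression can be sketched as a simple forward (Hamilton-type) filter. Elliott's filter and the full EM recursions are more involved, so this is only an illustrative fragment with hypothetical parameter values.

```python
import numpy as np

def filter_regimes(y, x, beta, sigma, P, pi0):
    """Filtered state probabilities P(s_t | y_1..y_t) for the switching
    regression y_t = beta[s_t] * x_t + eps_t, eps_t ~ N(0, sigma[s_t]^2),
    with transition matrix P and initial distribution pi0."""
    beta, sigma = np.asarray(beta, float), np.asarray(sigma, float)
    p = np.asarray(pi0, float)
    out = np.zeros((len(y), len(p)))
    for t in range(len(y)):
        p = P.T @ p                                   # one-step prediction
        resid = y[t] - beta * x[t]                    # residual per regime
        lik = np.exp(-0.5 * (resid / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        p = p * lik
        p = p / p.sum()                               # Bayes update
        out[t] = p
    return out

# Two regimes with slopes 1 and 5; the data switch regimes halfway through.
P = np.array([[0.95, 0.05], [0.05, 0.95]])
y = np.array([1.0] * 10 + [5.0] * 10)
x = np.ones(20)
probs = filter_regimes(y, x, beta=[1.0, 5.0], sigma=[0.1, 0.1], P=P, pi0=[0.5, 0.5])
```

On this noise-free example the filter identifies the active regime almost immediately after each switch.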

Traditional methods fail for the purpose of simulating the complete flow process in urban areas as a consequence of heavy rainfall and as required by the European Standard EN-752 since the bi-directional coupling between sewer and surface is not properly handled. The methodology, developed in the BMBF/ EUREKA-project RisUrSim, solves this problem by carrying out the runoff on the basis of shallow water equations solved on high-resolution surface grids. Exchange nodes between the sewer and the surface, like inlets and manholes, are located in the computational grid and water leaving the sewer in case of surcharge is further distributed on the surface. So far, it has been a problem to get the dense topographical information needed to build models suitable for hydrodynamic runoff calculation in urban areas. Recent airborne data collection methods like laser scanning, however, offer a great chance to economically gather densely sampled input data. This paper studies the potential of such laser-scan data sets for urban water hydrodynamics.

Finite difference discretizations of the 1D poroelasticity equations with discontinuous coefficients are analyzed. A recently proposed FD discretization of the poroelasticity equations with constant coefficients on a staggered grid [5] is used as a basis. A careful treatment of the interfaces leads to harmonic averaging of the discontinuous coefficients. Convergence of the pressure and the displacement is proven in certain norms for the scheme with harmonic averaging (HA): order 1.5 for an arbitrarily located interface, and second order when the interface coincides with a grid node. Furthermore, following the ideas of [3], modified HA discretizations are suggested for particular cases; the velocity and the stress are then approximated with second order on the interface. It is shown that for a wide class of problems the modified discretization provides better accuracy, and second-order convergence of the modified scheme is proven for the case when the interface coincides with a displacement grid node. Numerical experiments are presented to illustrate these considerations.
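A minimal illustration of interface-aware finite differences, for a scalar diffusion problem rather than the full poroelastic system: when the coefficient is constant per cell and the interface sits on a grid node, the flux-balance scheme reproduces the exact piecewise-linear solution at the nodes. (When an interface cuts a cell, the face coefficient would instead be the harmonic average of the two material values.)

```python
import numpy as np

def solve_two_material_1d(kl, kr, n):
    """Solve -(k u')' = 0 on [0,1] with u(0)=0, u(1)=1 and k = kl on
    [0, 1/2], kr on [1/2, 1], on n uniform cells (n even, so the
    interface lies exactly on a grid node).  Flux balance at node i:
    k_{i+1/2}(u_{i+1} - u_i) = k_{i-1/2}(u_i - u_{i-1})."""
    assert n % 2 == 0
    kf = np.where(np.arange(n) < n // 2, kl, kr)   # coefficient per cell/face
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = kf[i] + kf[i + 1]
        if i > 0:
            A[i, i - 1] = -kf[i]
        if i < n - 2:
            A[i, i + 1] = -kf[i + 1]
    b[-1] = kf[-1]                                  # from boundary value u(1) = 1
    return np.concatenate(([0.0], np.linalg.solve(A, b), [1.0]))

u = solve_two_material_1d(1.0, 10.0, 8)
# exact flux q; the exact interface value is u(1/2) = q * 0.5 / kl
q = 1.0 / (0.5 / 1.0 + 0.5 / 10.0)
```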

A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on multipoint flux approximation (MPFA) of the 3D pressure equation is employed to account for the jump discontinuities of the full permeability tensors. A DD/MG-type preconditioner for the above problem is developed, with the coarse-scale operator obtained from a homogenization-type procedure. The influence of the overlap as well as of the smoother and the cell problem formulation is studied. Results from numerical experiments are presented and discussed.

An efficient approach for calculating the effective heat conductivity for a class of industrial composite materials, such as metal foams, fibrous glass materials, and the like, is discussed. These materials, used in insulation or in advanced heat exchangers, are characterized by a low volume fraction of the highly conductive material (glass or metal) having a complex, network-like structure and by a large volume fraction of the insulator (air). We assume that the composite materials have constant macroscopic thermal conductivity tensors, which in principle can be obtained by standard upscaling techniques that use the concept of representative elementary volumes (REV), i.e. the effective heat conductivities of composite media can be computed by post-processing the solutions of special cell problems for REVs. We propose, theoretically justify, and numerically study an efficient approach for calculating the effective conductivity for media in which the ratio \(\varepsilon\) of the low to the high conductivity satisfies \(\varepsilon \ll 1\). In this case one essentially only needs to solve the heat equation in the region occupied by the highly conductive medium. For a class of problems we show that, under certain conditions on the microscale geometry, the proposed approach produces an upscaled conductivity that is \(O(\varepsilon)\) close to the exact upscaled conductivity. A number of numerical experiments are presented in order to illustrate the accuracy and the limitations of the proposed method. The applicability of the presented approach to upscaling other similar problems, e.g. flow in fractured porous media, is also discussed.
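As a toy illustration of the regime exploited here (a sketch, not the paper's method): for a simple layered composite the upscaled conductivity has closed forms, and in the limit of a small conductivity ratio the highly conductive phase alone determines the effective value parallel to the layers.

```python
def effective_layered(k_high, k_low, phi):
    """Effective conductivity of a layered composite with volume fraction
    phi of the highly conductive phase: arithmetic mean for heat flow
    parallel to the layers, harmonic mean perpendicular to them."""
    k_parallel = phi * k_high + (1.0 - phi) * k_low
    k_perp = 1.0 / (phi / k_high + (1.0 - phi) / k_low)
    return k_parallel, k_perp

# tiny ratio k_low/k_high: the insulator is negligible for the parallel
# direction (k_parallel ~ phi * k_high) but dominant perpendicular to it
k_par, k_perp = effective_layered(100.0, 1e-6, 0.1)
```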

A number of water flow problems in porous media are modelled by Richards' equation [1], which has many different applications. We are concerned with the simulation of the pressing section of a paper machine. This part of the industrial process provides the dewatering of the paper layer by the use of clothings, i.e. press felts, which absorb the water during pressing [2]. A system of nips is formed in the simplest case by rolls, which increase sheet dryness by pressing against each other (see Figure 1). Many theoretical studies have been devoted to Richards' equation (see [3], [4] and references therein). Most articles consider the case of x-independent coefficients. This simplifies the system considerably since, after Kirchhoff's transformation, the elliptic operator becomes linear. In our case this condition is not satisfied and we have to deal with a nonlinear second-order operator. Moreover, all these articles are concerned with the nonstationary problem, while we are interested in the stationary case. Due to the complexity of the physical process, our problem has a specific feature: an additional convective term appears in the model because the porous medium moves with constant velocity through the pressing rolls; this term vanishes for an immobile porous medium. We are not aware of papers dealing with this kind of modified steady Richards' problem. The goal of this paper is to obtain stability results, to show the existence of a solution to the discrete problem, and to prove the convergence of the approximate solution to the weak solution of the modified steady Richards' equation, which describes the transport processes in the pressing section. In Section 2 we present the model under consideration. In Section 3 a numerical scheme obtained by the finite volume method is given. The main part of this paper consists of the theoretical studies given in Section 4. Section 5 presents a numerical experiment, and Section 6 concludes.
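The Kirchhoff transformation mentioned above can be sketched as follows (a standard construction, not specific to this paper). For x-independent conductivity \(K = K(\psi)\), set

\[
u(\psi) = \int_0^{\psi} K(s)\,ds, \qquad \text{so that} \qquad \nabla u = K(\psi)\,\nabla \psi,
\]

and the nonlinear diffusion term \(\nabla \cdot (K(\psi)\nabla\psi)\) becomes the linear operator \(\Delta u\). If \(K = K(x,\psi)\) depends on \(x\), as in the pressing-section model, \(\nabla u\) acquires additional \(\partial K/\partial x\) terms and the operator remains nonlinear.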

Gauss Frame Offsets
(1992)

Objective: In some surgical specialties, e.g. orthopedics, robots are already used in the operating room for bony milling work. Oto- and otoneurosurgery may also greatly benefit from robotically enhanced precision. Study Design: Experimental study on robotic milling of oak wood and human temporal bone specimens. Methods: A standard industrial robot with 6-degrees-of-freedom serial kinematics was used, with force feedback proportionally controlling the robot speed. Different milling modes and characteristic path parameters were evaluated to generate milling paths based on CAD geometry data of a cochlear implant and an implantable hearing system. Results: The best-suited strategy proved to be the spiral horizontal milling mode with the burr held perpendicularly to the temporal bone surface. To avoid high grooves, the distance between paths should equal half the radius of the cutting burr head. Due to the vibration of the robot's own motors, a rather high oscillation of the standard deviation of forces was encountered. This oscillation dropped drastically to nearly 0 N when the burr head came into contact with the dura mater, owing to its damping characteristics. The cutting burr could be moved on the dura for a long time without damaging it, because of its rather blunt head. The robot moved the burr very smoothly according to the encountered resistances. Conclusion: This is the first development of a functioning robotic milling procedure for otoneurosurgery with force-based speed control. It is planned to implement ultrasound-based local navigation and to perform robotic mastoidectomy.

This paper deals with the problem of determining the sea surface topography from the geostrophic flow of ocean currents on local domains of the spherical Earth. In mathematical terms, the problem amounts to the solution of a spherical differential equation relating the surface curl gradient of a scalar field (sea surface topography) to a surface divergence-free vector field (geostrophic ocean flow). First, a continuous solution theory is presented in the framework of an integral formula involving Green's function of the spherical Beltrami operator, and different criteria derived from spherical vector analysis are given to investigate uniqueness. Second, for practical applications, Green's function is replaced by a regularized counterpart, and the solution is obtained by convolving the flow field with a scaled version of the regularized Green function. Calculating locally without boundary correction would lead to errors near the boundary. To avoid these Gibbs phenomena, we additionally consider the boundary integral of the corresponding region on the sphere which occurs in the integral formula of the solution. For simplicity we first discuss a spherical cap, i.e. a continuously differentiable (regular) boundary curve. In a second step we concentrate on a more complicated domain with a non-continuously-differentiable boundary curve, namely a rectangular region. It turns out that the boundary integral contributes a major part to stabilizing and reconstructing the approximation of the solution in our multiscale procedure.

A new stability-preserving model reduction algorithm for discrete linear SISO systems based on their impulse response is proposed. Similar to Padé approximation, an equation system for the Markov parameters involving the Hankel matrix is considered, which here, however, is chosen to be of very high dimension. Although this equation system therefore in general cannot be solved exactly, it is proved that the approximate solution, computed via the Moore-Penrose inverse, gives rise to a stability-preserving reduction scheme, a property that cannot be guaranteed for the Padé approach. Furthermore, the proposed algorithm is compared to another stability-preserving reduction approach, the balanced truncation method, showing comparable performance of the reduced systems. The balanced truncation method, however, starts from a state space description of the system and is in general expected to be more computationally demanding.
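The basic ingredients can be sketched as follows: fit a linear recursion to the Markov parameters through a tall Hankel system solved by least squares (i.e. the Moore-Penrose pseudoinverse). In this toy example the data come from an exactly second-order system, so the recursion is recovered exactly; the paper's stability result concerns the genuinely overdetermined, inconsistent case, which this fragment does not exercise.

```python
import numpy as np

def poles_from_markov(h, order, rows):
    """Least-squares fit (via the pseudoinverse) of an order-`order`
    recursion to the Markov parameters h[0], h[1], ...:
        h[k+order] ~= a[order-1]*h[k+order-1] + ... + a[0]*h[k],
    using a tall Hankel system with `rows` rows.  Returns the recursion
    coefficients and the poles of the corresponding reduced model."""
    H = np.column_stack([h[j:j + rows] for j in range(order)])
    a, *_ = np.linalg.lstsq(H, h[order:order + rows], rcond=None)
    # poles = roots of z^order - a[order-1] z^{order-1} - ... - a[0]
    poles = np.roots(np.concatenate(([1.0], -a[::-1])))
    return a, poles

# impulse response of a stable second-order system with poles 0.5 and 0.2
h = np.array([0.5 ** k + 0.2 ** k for k in range(20)])
a, poles = poles_from_markov(h, order=2, rows=12)
```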

To a network \(N(q)\) with determinant \(D(s;q)\) depending on a parameter vector \(q \in \mathbb{R}^r\), a network \(\hat{N}(q)\) is assigned via identification of some of its vertices. The paper deals with procedures to find \(\hat{N}(q)\) such that its determinant \(\hat{D}(s;q)\) admits a factorization into the determinants of appropriate subnetworks, and with the estimation of the deviation of the zeros of \(\hat{D}\) from the zeros of \(D\). To solve the estimation problem, state space methods are applied.

The problem discussed in this paper is motivated by the new recycling directive WEEE of the EC. The core of this law is that each company which sells electrical or electronic equipment in a European country is obliged to recollect and recycle an amount of returned items proportional to its market share. To assign collection stations to companies, a territory design approach is planned in Germany for one product type. However, in contrast to classical territory design, the territories should be geographically as dispersed as possible, to avoid that a company, or its logistics provider responsible for the recollection, gains a monopoly in some region. First, we identify an appropriate measure for the dispersion of a territory. Afterwards, we present a first mathematical programming model for this new problem as well as a solution method based on the GRASP methodology. Extensive computational results illustrate the suitability of the model and assess the effectiveness of the heuristic.

The Chained Lin-Kernighan algorithm (CLK) is one of the best heuristics for solving Traveling Salesman Problems (TSP). In this paper a distributed algorithm is proposed, where nodes in a network locally optimize TSP instances using the CLK algorithm. Within an Evolutionary Algorithm (EA) network-based framework, the resulting tours are modified and exchanged with neighboring nodes. We show that the distributed variant finds better tours than the original CLK given the same amount of computation time. For instance fl3795, the original CLK got stuck in local optima in each of 10 runs, whereas the distributed algorithm found optimal tours in every run, requiring less than 10 CPU minutes per node on average in an 8-node setup. For instance sw24978, the distributed algorithm had an average solution quality of 0.050% above the optimum, compared to CLK's average of 0.119% above the optimum given the same total CPU time (\(10^4\) seconds). Considering the best tours of both variants for this instance, the distributed algorithm is 0.033% above the optimum and the CLK algorithm 0.099%.
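CLK itself is an intricate heuristic; as a much simpler stand-in for the local optimization each node performs, here is a plain 2-opt improvement step (our own illustrative code, unrelated to the benchmark instances above).

```python
import math
from itertools import combinations

def tour_length(tour, pts):
    """Total length of a closed tour over points pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a tour segment while doing so shortens the tour
    (a basic variant that ignores wrap-around segments)."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(tour) + 1), 2):
            if j - i < 2 or j - i >= len(tour):
                continue
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                tour, improved = cand, True
    return tour

# four corners of the unit square; the initial tour crosses itself,
# 2-opt uncrosses it to the optimal perimeter tour of length 4
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
best = two_opt([0, 1, 2, 3], pts)
```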

Weighted k-cardinality trees
(1992)

We consider the k-CARD TREE problem, i.e., the problem of finding in a given undirected graph G a subtree with k edges having minimum weight. Applications of this problem arise in oil-field leasing and facility layout. While the general problem is shown to be strongly NP-hard, it can be solved in polynomial time if G is itself a tree. We give an integer programming formulation of k-CARD TREE and an efficient exact separation routine for a set of generalized subtour elimination constraints. The polyhedral structure of the convex hull of the integer solutions is studied.
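For intuition only (the paper's contribution is the polyhedral approach, not enumeration), the problem can be stated as exponential brute force over edge subsets:

```python
from itertools import combinations

def min_weight_k_tree(edges, k):
    """Brute-force k-CARD TREE: minimum total weight over all k-subsets of
    weighted edges (u, v, w) forming a tree.  A subset is a tree iff it is
    acyclic and touches exactly k+1 vertices (acyclicity + k edges +
    k+1 vertices forces connectivity)."""
    best = None
    for subset in combinations(edges, k):
        parent = {}

        def find(v):                      # union-find with path compression
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        acyclic = True
        for u, v, _ in subset:
            parent.setdefault(u, u)
            parent.setdefault(v, v)
            ru, rv = find(u), find(v)
            if ru == rv:                  # adding this edge closes a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic and len(parent) == k + 1:
            w = sum(e[2] for e in subset)
            if best is None or w < best:
                best = w
    return best

edges = [(0, 1, 1), (1, 2, 1), (2, 3, 5), (0, 2, 2), (1, 3, 2)]
```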

The local solution problem of multivariate Fredholm integral equations is studied. Recent research proved that for several function classes the complexity of this problem is closely related to the Gelfand numbers of some characterizing operators. The generalization of this approach to the situation of arbitrary Banach spaces is the subject of the present paper.
Furthermore, an iterative algorithm is described which - under some additional conditions - realizes the optimal error rate. The way these general theorems work is demonstrated by applying them to integral equations in a Sobolev space of periodic functions with dominating mixed derivative of various order.

In this paper the complexity of the local solution of Fredholm integral equations
is studied. For certain Sobolev classes of multivariate periodic functions with dominating mixed derivative we prove matching lower and upper bounds. The lower bound is shown using relations to s-numbers. The upper bound is proved in a constructive way providing an implementable algorithm of optimal order based on Fourier coefficients and a hyperbolic cross approximation.

We study the complexity of local solution of Fredholm integral equations. This means that we want to compute not the full solution, but rather a functional (weighted mean, value in a point) of it. For certain Sobolev classes of multivariate periodic functions we prove matching upper and lower bounds and construct an algorithm of the optimal order, based on Fourier coefficients and a hyperbolic cross approximation.
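The hyperbolic cross index sets underlying these algorithms can be written down directly. One common convention is used here (definitions in the literature vary by a shift): keep the frequencies \(k \in \mathbb{Z}^d\) with \(\prod_j \max(1, |k_j|) \le N\).

```python
from itertools import product

def hyperbolic_cross(N, d):
    """Frequencies k in Z^d with prod_j max(1, |k_j|) <= N."""
    cross = []
    for k in product(range(-N, N + 1), repeat=d):
        w = 1
        for kj in k:
            w *= max(1, abs(kj))
        if w <= N:
            cross.append(k)
    return cross
```

The point of the construction is that the set grows like \(N \log^{d-1} N\), far slower than the full grid's \((2N+1)^d\).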

In recent years, Smolyak quadrature rules (also called hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number theoretic quadratures for high dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas
consists in computing their \(L_2\)-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the \(L_2\)-discrepancy (and related quality measures) of general Smolyak quadratures. We carry out numerical comparisons between the discrepancies of certain Smolyak rules, Hammersley and Monte Carlo sequences.
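For the classical (non-periodic) \(L_2\) star discrepancy, Warnock's closed-form formula reduces the computation to \(O(n^2 d)\) pairwise products; the recursive Smolyak-specific algorithm of the paper is a different, faster route, so the following shows only the baseline formula. (For the single point 0.5 in one dimension the squared discrepancy is 1/12, which can be checked by direct integration.)

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's formula for points x of shape (n, d) in [0,1)^d:
    D^2 = 3^{-d} - (2/n) sum_i prod_j (1 - x_ij^2)/2
          + (1/n^2) sum_{i,l} prod_j (1 - max(x_ij, x_lj))."""
    x = np.atleast_2d(np.asarray(x, float))
    n, d = x.shape
    t1 = 3.0 ** (-d)
    t2 = (2.0 / n) * np.sum(np.prod((1.0 - x ** 2) / 2.0, axis=1))
    t3 = np.sum(np.prod(1.0 - np.maximum(x[:, None, :], x[None, :, :]),
                        axis=2)) / n ** 2
    return np.sqrt(t1 - t2 + t3)
```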

A notion of discrepancy is introduced which represents the integration error on spaces of \(r\)-smooth periodic functions. It generalizes the diaphony and constitutes a periodic counterpart to the classical \(L_2\)-discrepancy as well as to the \(r\)-smooth versions of it introduced recently by Paskov [Pas93]. Based on previous work [FH96], we develop an efficient algorithm for computing periodic discrepancies for quadrature formulas possessing certain tensor product structures, in particular for Smolyak quadrature rules (also called sparse grid methods). Furthermore, fast algorithms for computing periodic discrepancies for lattice rules can easily be derived from well-known properties of lattices. On this basis we carry out numerical comparisons of discrepancies between Smolyak and lattice rules.

In this paper, the complexity of full solution of Fredholm integral equations of the second kind with data from the Sobolev class \(W^r_2\) is studied. The exact order of information complexity is derived. The lower bound is proved using a Gelfand number technique. The upper bound is shown by providing a concrete algorithm of optimal order, based on a specific hyperbolic cross approximation of the kernel function. Numerical experiments are included, comparing the optimal algorithm with the standard Galerkin method.

In this expository article, we give an introduction into the basics of bootstrap tests in general. We discuss the residual-based and the wild bootstrap for regression models suitable for applications in signal and image analysis. As an illustration of the general idea, we consider a particular test for detecting differences between two noisy signals or images which also works for noise with variable variance. The test statistic is essentially the integrated squared difference between the signals after denoising them by local smoothing. Determining its quantile, which marks the boundary between accepting and rejecting the hypothesis of equal signals, is hardly possible by standard asymptotic methods whereas the bootstrap works well. Applied to the rows and columns of images, the resulting algorithm not only allows for the detection of defects but also for the characterization of their location and shape in surface inspection problems.
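A condensed sketch of the residual-based bootstrap test described above. The window size, resampling scheme, and all names are our own simplifications; in particular, the paper's test also handles noise with variable variance, which this fragment does not.

```python
import numpy as np

def smooth(y, w):
    """Local smoothing by a moving average of window w."""
    return np.convolve(y, np.ones(w) / w, mode="same")

def equal_signals_pvalue(y1, y2, w=5, B=500, seed=0):
    """Bootstrap p-value for H0: both noisy signals share one underlying
    curve.  Statistic: integrated squared difference of the smoothed
    signals; under H0, residuals around the pooled smooth are resampled."""
    rng = np.random.default_rng(seed)
    stat = np.sum((smooth(y1, w) - smooth(y2, w)) ** 2)
    pooled = smooth((y1 + y2) / 2.0, w)
    resid = np.concatenate([y1 - pooled, y2 - pooled])
    hits = 0
    for _ in range(B):
        b1 = pooled + rng.choice(resid, size=len(y1), replace=True)
        b2 = pooled + rng.choice(resid, size=len(y2), replace=True)
        hits += np.sum((smooth(b1, w) - smooth(b2, w)) ** 2) >= stat
    return (hits + 1) / (B + 1)

# two signals differing by a clear offset: the hypothesis of equal
# signals should be rejected
rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 100)
signal = np.sin(2 * np.pi * t)
p_diff = equal_signals_pvalue(signal + 0.05 * rng.standard_normal(100),
                              signal + 1.0 + 0.05 * rng.standard_normal(100))
```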

The purpose of this paper is the canonical connection of classical global gravity field determination following the concepts of Stokes (1849), Bruns (1878), and Neumann (1887) on the one hand with modern, locally oriented multiscale computation by means of adaptive, locally supported wavelets on the other. The essential tools are regularization methods for the Green, Neumann, and Stokes integral representations. The multiscale approximation is realized simply as a linear difference scheme by use of Green, Neumann, and Stokes wavelets, respectively. As an application, gravity anomalies caused by plumes are investigated for the Hawaiian and Iceland areas.

We introduce two novel techniques for speeding up the generation of digital \((t,s)\)-sequences. Based on these results a new algorithm for the construction of Owen's randomly permuted \((t,s)\)-sequences is developed and analyzed. An implementation of the new techniques is available at http://www.cs.caltech.edu/~ilja/libseq/index.html
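For orientation, the \(s = 1\), base-2 building block of such digital sequences is the radical-inverse (van der Corput) sequence; the paper's techniques concern speeding up and scrambling the general base-\(b\), \(s\)-dimensional construction, which this sketch does not show.

```python
def van_der_corput(n, base=2):
    """Radical inverse of n in the given base: reflect the base-b digits
    of n about the radix point, e.g. 6 = 110_2 -> 0.011_2 = 0.375."""
    q, scale = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        q += digit * scale
        scale /= base
    return q
```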

In soil mechanics, the assumption of only vertical subsidence is often invoked, and this leads to the one-dimensional model of poroelasticity. The classical model of linear poroelasticity was obtained by Biot [1]; a detailed derivation can be found, e.g., in [2]. This model also applies to the modelling of certain processes in geomechanics, hydrogeology, and petroleum engineering (see, e.g., [3, 8]), in biomechanics (e.g., [9, 10]), in filtration (e.g., filter cake formation, see [15, 16, 17]), in paper manufacturing (e.g., [11, 12]), in printing (e.g., [13]), etc. Finite element and finite difference methods have been applied by many authors to the numerical solution of the Biot system of PDEs, see e.g. [3, 4, 5] and references therein. However, as is well known, the standard FEM and FDM methods are subject to numerical instabilities at the first time steps. To avoid this, a discretization on a staggered grid was suggested in [4, 5], where a single-layer deformable porous medium was considered. This paper can be viewed as an extension of [4, 5] to the case of multilayered deformable porous media. A finite volume discretization of the interface problem for the classical one-dimensional Biot model of the consolidation process is applied here. The following assumptions are made: each porous layer is composed of an incompressible solid matrix and is homogeneous and isotropic. Furthermore, one of the two following assumptions holds: the porous medium is not completely saturated and the fluid is incompressible, or the porous medium is completely saturated and the fluid is slightly compressible. The remainder of the paper is organised as follows. The next section presents the mathematical model. The third section is devoted to the discretization of the continuous problem. The fourth section contains the results of the numerical experiments.
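For orientation, the one-dimensional Biot consolidation model referred to above is commonly written in the following form (the coefficients vary between references; this is a sketch, not the paper's precise formulation):

\[
-\frac{\partial}{\partial x}\left((\lambda + 2\mu)\,\frac{\partial u}{\partial x}\right) + \frac{\partial p}{\partial x} = 0,
\qquad
\frac{\partial}{\partial t}\left(\frac{\partial u}{\partial x} + c\,p\right) - \frac{\partial}{\partial x}\left(\frac{K}{\eta}\,\frac{\partial p}{\partial x}\right) = f,
\]

where \(u\) is the solid displacement, \(p\) the fluid pressure, \(\lambda + 2\mu\) the constrained elastic modulus, \(K\) the permeability, \(\eta\) the fluid viscosity, and \(c\) a storativity coefficient (zero for an incompressible fluid in a fully saturated medium). At layer interfaces, continuity of displacement, pressure, stress, and fluid flux is imposed.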