### Refine

#### Year of publication

- 2007 (27)

#### Document Type

- Report (27)

#### Language

- English (27)

#### Keywords

- numerical upscaling (4)
- Darcy’s law (2)
- effective heat conductivity (2)
- single phase flow (2)
- 3D (1)
- Asymptotic Expansion (1)
- Bayesian Model Averaging (1)
- Boolean polynomials (1)
- Boundary Value Problem (1)
- CIR model (1)

#### Faculty / Organisational entity

- Fraunhofer (ITWM) (27)

In this work, we analyze two important and simple short-rate models, namely the Vasicek and CIR models. The models are described, and the sensitivity of each model with respect to changes in its parameters is studied. Finally, we give results for the estimation of the model parameters using two different approaches.
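As an illustration of the two model dynamics, a minimal Euler-Maruyama simulation can be sketched as follows; all parameter values below are hypothetical placeholders, not estimates from the report:

```python
import numpy as np

def simulate_short_rate(model, r0, kappa, theta, sigma, T=1.0, n=1000, seed=0):
    """Euler-Maruyama paths for the Vasicek model dr = kappa(theta - r)dt + sigma dW
    or the CIR model dr = kappa(theta - r)dt + sigma sqrt(r) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = kappa * (theta - r[i]) * dt
        if model == "vasicek":
            diffusion = sigma * dW
        else:  # CIR: diffusion scales with sqrt(r); clip inside the root
            diffusion = sigma * np.sqrt(max(r[i], 0.0)) * dW
        r[i + 1] = r[i] + drift + diffusion
    return r

path = simulate_short_rate("cir", r0=0.03, kappa=0.5, theta=0.04, sigma=0.1)
```

Both models share the mean-reverting drift; only the diffusion term differs, which is also what drives the parameter sensitivities discussed in the abstract.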

With the ever-increasing significance of software in our everyday lives, it is vital to provide reliable software quality estimates. Typically, quantitative software quality analyses rely on either statistical fault prediction methods (FPMs) or stochastic software reliability growth models (SRGMs). Adopting solely FPMs or SRGMs, though, may result in biased predictions that do not account for uncertainty in the distinct prediction methods, thus rendering the prediction less reliable. This paper identifies flaws of the individual prediction methods and suggests a hybrid prediction approach that combines FPMs and SRGMs. We adopt FPMs for initially estimating the expected number of failures for finite failure SRGMs. Initial parameter estimates yield more accurate reliability predictions until sufficient failures are observed to enable stable parameter estimates in SRGMs. Once the FPM and SRGM predictions are at an equilibrium level, we suggest combining the competing prediction methods according to the principle of heterogeneous redundancy; that is, we propose using the individual methods separately and combining their predictions. In this paper we suggest Bayesian model averaging (BMA) for combining the different methods. The hybrid approach allows early reliability estimates and encourages higher confidence in software quality predictions.
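The final combination step can be illustrated with a minimal sketch of Bayesian model averaging under a uniform model prior; the failure-count predictions and log-likelihoods below are hypothetical stand-ins for FPM and SRGM outputs, not values from the paper:

```python
import math

def bma_combine(predictions, log_likelihoods):
    """Combine competing model predictions by Bayesian model averaging:
    each model is weighted by its posterior probability, here taken
    proportional to exp(log-likelihood) under a uniform model prior."""
    m = max(log_likelihoods)                       # stabilize the exponentials
    weights = [math.exp(ll - m) for ll in log_likelihoods]
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = sum(w * p for w, p in zip(weights, predictions))
    return combined, weights

# Hypothetical expected-failure predictions from an FPM and an SRGM:
pred, w = bma_combine([12.0, 18.0], [-3.2, -4.1])
```

The averaged prediction always lies between the individual ones, with the better-fitting model pulling it toward its own estimate.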

This paper discusses the minimal area rectangular packing problem: how to pack a set of specified, non-overlapping rectangles into a rectangular container of minimal area. We investigate different mathematical programming approaches to this problem and introduce a novel approach based on non-linear optimization and the "tunneling effect" achieved by a relaxation of the non-overlapping constraints.

Background and purpose Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromise and thus makes it more systematic. Usually, several plans are computed, from which the planner selects the most promising one according to a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Material and methods Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms “selection” and “restriction”. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle – a set constructed from precomputed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions.

The modeling and formulation of optimization problems in IMRT planning comprises the choice of various values such as function-specific parameters or constraint bounds. These values also affect the characteristics of the optimization problem and thus the form of the resulting optimal plans. This publication utilizes concepts of sensitivity analysis and elasticity in convex optimization to analyze the dependence of optimal plans on the modeling parameters. It also derives general rules of thumb for how to choose and modify the parameters in order to obtain the desired IMRT plan. These rules are numerically validated for an exemplary IMRT planning problem.

A fully automatic procedure is proposed to rapidly compute the permeability of porous materials from their binarized microstructure. The discretization is a simplified version of Peskin’s Immersed Boundary Method, where the forces are applied at the no-slip grid points. As needed for the computation of permeability, steady flows at zero Reynolds number are considered. Short run times are achieved by eliminating the pressure and velocity variables using a fast inversion approach, based on the Fast Fourier Transform and four Poisson problems, on rectangular parallelepipeds with periodic boundary conditions. Because it is a fast method using fictitious (artificial) forces, the implementation is called FFF-Stokes. Large-scale computations on 3D images are quickly and automatically performed to estimate the permeability of some sample materials. A MATLAB implementation is provided to allow readers to experience the automation and speed of the method for realistic three-dimensional models.
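The FFT-based elimination exploits the fact that constant-coefficient operators with periodic boundary conditions are diagonal in Fourier space. A minimal sketch of this building block, a spectral periodic Poisson solve in NumPy rather than the authors' FFF-Stokes code, might look like:

```python
import numpy as np

def solve_poisson_periodic(f, L=1.0):
    """Solve -Laplacian(u) = f on a periodic cube of side L via FFT:
    in Fourier space the Laplacian is diagonal with symbol |k|^2."""
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0          # avoid division by zero for the mean mode
    u_hat = np.fft.fftn(f) / k2
    u_hat[0, 0, 0] = 0.0       # fix the free constant: zero-mean solution
    return np.real(np.fft.ifftn(u_hat))

# Manufactured solution u = sin(2*pi*x), so f = (2*pi)^2 * sin(2*pi*x)
n = 16
x = np.arange(n) / n
X = np.meshgrid(x, x, x, indexing="ij")[0]
f = (2 * np.pi) ** 2 * np.sin(2 * np.pi * X)
u = solve_poisson_periodic(f)
```

Because the solve is exact in Fourier space, the manufactured solution is recovered to machine precision, which is what makes FFT-based inversion attractive for large 3D images.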

Facility location decisions play a critical role in the strategic design of supply chain networks. In this paper, an extensive literature review of facility location models in the context of supply chain management is given. Following a brief review of core models in facility location, we identify basic features that such models must capture to support decision-making involved in strategic supply chain planning. In particular, the integration of location decisions with other decisions relevant to the design of a supply chain network is discussed. Furthermore, aspects related to the structure of the supply chain network, including those specific to reverse logistics, are also addressed. Significant contributions to the current state of the art are surveyed, taking into account numerous factors. Supply chain performance measures and optimization techniques are also reviewed. Applications of facility location models to supply chain network design ranging across various industries are discussed. Finally, a list of issues requiring further research is highlighted.

Bringing robustness to patient flow management through optimized patient transports in hospitals
(2007)

Intra-hospital transports are often required for diagnostic or therapeutic reasons. Depending on the hospital layout, transportation between nursing wards and service units is provided either by ambulances or by trained personnel who accompany patients on foot. In many large German hospitals, the patient transport service is poorly managed and lacks workflow coordination. This contributes to higher hospital costs (e.g. when a patient is not delivered to the operating room on time) and to patient inconvenience due to longer waiting times. We have designed a computer-based planning system, Opti-TRANS©, that supports all phases of the transportation flow, ranging from travel booking and the dispatching of transport requests to the monitoring and reporting of trips in real time. The methodology developed to solve the underlying optimization problem, a dynamic dial-a-ride problem with hospital-specific constraints, draws on fast heuristic methods to ensure the efficient and timely provision of transports. We illustrate the strong impact of Opti-TRANS© on the daily performance of the patient transportation service of a large German hospital. The major benefits obtained with the new tool include streamlined transportation processes and workflow, significant savings, and improved patient satisfaction. Moreover, the new planning system has contributed to increased awareness among hospital staff of the importance of implementing efficient logistics practices.
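The flavor of a fast dispatching heuristic for such transport requests can be sketched in a few lines. This toy version (the earliest-completion rule, the data layout, and all names are illustrative assumptions, not the Opti-TRANS algorithm) assigns each request to the transporter that can finish it first:

```python
from dataclasses import dataclass, field

@dataclass
class Transporter:
    free_at: float = 0.0                  # time the transporter becomes available
    trips: list = field(default_factory=list)

def assign_requests(requests, transporters, travel_time):
    """Greedy dispatching sketch: each request (release time, origin,
    destination) goes to the transporter completing it earliest."""
    for release, origin, dest in sorted(requests):
        best = min(transporters,
                   key=lambda t: max(t.free_at, release) + travel_time(origin, dest))
        start = max(best.free_at, release)
        best.free_at = start + travel_time(origin, dest)
        best.trips.append((start, origin, dest))
    return transporters

# Hypothetical instance: unit travel time between any two locations
tt = lambda a, b: 1.0
done = assign_requests([(0.0, "ward A", "OR"), (0.0, "ward B", "X-ray")],
                       [Transporter(), Transporter()], tt)
```

A production dial-a-ride solver would additionally handle time windows, ride-sharing, and dynamic request arrival, which is where the hospital-specific constraints mentioned above come in.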

An efficient approach for calculating the effective heat conductivity for a class of industrial composite materials, such as metal foams, fibrous glass materials, and the like, is discussed. These materials, used in insulation or in advanced heat exchangers, are characterized by a low volume fraction of the highly conductive material (glass or metal), which has a complex, network-like structure, and by a large volume fraction of the insulator (air). We assume that the composite materials have constant macroscopic thermal conductivity tensors, which in principle can be obtained by standard upscaling techniques that use the concept of representative elementary volumes (REV); i.e., the effective heat conductivities of composite media can be computed by post-processing the solutions of special cell problems for REVs. We propose, theoretically justify, and numerically study an efficient approach for calculating the effective conductivity for media for which the ratio δ of the low conductivity to the high conductivity satisfies δ ≪ 1. In this case one essentially only needs to solve the heat equation in the region occupied by the highly conductive medium. For a class of problems we show that, under certain conditions on the microscale geometry, the proposed approach produces an upscaled conductivity that is O(δ) close to the exact upscaled conductivity. A number of numerical experiments are presented in order to illustrate the accuracy and the limitations of the proposed method. The applicability of the presented approach to upscaling other similar problems, e.g. flow in fractured porous media, is also discussed.

In this paper, a new mixed integer mathematical programme is proposed for the application of Hub Location Problems (HLP) in public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem where existing standard solvers fail, two approaches are proposed: the first is an exact accelerated Benders decomposition algorithm, and the second a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches over existing standard MIP solvers like CPLEX, both in terms of computational time and in terms of the problem instance size that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.

This report reviews selected image binarization and segmentation methods that have been proposed and that are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.

This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both the coefficients and the degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x^2 = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers, besides from the complexity of the problems, from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are better suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
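The underlying arithmetic of Boolean polynomials is easy to illustrate without ZDDs: representing a polynomial as a set of monomials, each monomial a frozenset of variables, makes both the field equations x^2 = x and the characteristic-2 cancellation explicit. This is only a didactic sketch, not the specialized data structure proposed in the report:

```python
def bool_mul(p, q):
    """Multiply two Boolean polynomials. A polynomial is a set of monomials,
    a monomial a frozenset of variable names. Since x*x = x, the monomial
    product is set union; since coefficients live in F2, equal terms cancel."""
    result = set()
    for m1 in p:
        for m2 in q:
            m = m1 | m2                # x^2 = x: union, no exponents needed
            result ^= {m}              # XOR toggles the {0, 1} coefficient
    return result

def bool_add(p, q):
    """Addition over F2 is the symmetric difference of the monomial sets."""
    return p ^ q

x = {frozenset({"x"})}
one = {frozenset()}
# Field equation in action: x*(x + 1) = x^2 + x = x + x = 0
assert bool_mul(x, bool_add(x, one)) == set()
```

ZDDs serve exactly this role at scale: they store such monomial sets with massive sharing, which is what makes the Gröbner basis computations in the report feasible on industrial-size problems.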

A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on the multipoint flux approximation (MPFA) is employed for the 3D pressure equation in order to account for the jump discontinuities of the full permeability tensors. A DD/MG-type preconditioner for the above-mentioned problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap, as well as that of the smoother and of the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.

Calculating the effective heat conductivity for a class of industrial problems is discussed. The composite materials considered are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, by a low volume fraction of the more highly conductive material (glass or metal), and by a large volume fraction of air. Homogenization theory (when applicable) allows one to calculate the effective heat conductivity of composite media by post-processing the solutions of special cell problems for representative elementary volumes (REV). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.

The approximation property of the multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full tensor coefficients is discussed here. A finite volume discretization of the above problem is presented for the case of jump discontinuities of the permeability tensor. First-order approximation of the fluxes is proved. Results from numerical experiments are presented and discussed.

A numerical upscaling approach, NU, for solving multiscale elliptic problems is discussed. The main components of this NU are: i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse scale equation; ii) global solution of the upscaled coarse scale equation; and iii) reconstruction of a fine scale solution by solving local block problems on a dual coarse grid. By its structure, NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method, the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference with those methods lies in the way the coarse scale equation is built and solved, and in the way the fine scale solution is reconstructed. Essential components of the NU approach presented here are the formal homogenization in the coarse blocks and the use of the so-called multipoint flux approximation method, MPFA. Unlike the usual use of MPFA as a discretization method for single scale elliptic problems with discontinuous tensor coefficients, we consider its use as a part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the numerical upscaling approach presented here is applied.

In the article the application of kernel functions – the so-called »kernel trick« – in the context of Fisher’s approach to linear discriminant analysis is described for data sets subdivided into two groups and having real attributes. The relevant facts about functional Hilbert spaces and kernel functions including their proofs are presented. The approximative algorithm published in [Mik3] to compute a discriminant function given the data and a kernel function is briefly reviewed. As an illustration of the technique an artificial data set is analysed using the algorithm just mentioned.

In this paper we extend the slender body theory for the dynamics of a curved inertial viscous Newtonian fiber [23] by including surface tension in the systematic asymptotic framework and by deducing boundary conditions for the free fiber end, as it occurs in rotational spinning processes of glass fibers. The fiber flow is described by a three-dimensional free boundary value problem in terms of the instationary incompressible Navier-Stokes equations, neglecting temperature dependence. From standard regular expansion techniques in powers of the slenderness parameter we derive asymptotically leading-order balance laws for mass and momentum, combining the inner viscous transport with unrestricted motion and shape of the fiber center-line, which becomes important in the practical application. For the numerical investigation of the effects of surface tension, viscosity, gravity, and rotation on the fiber behavior we apply a finite volume method with implicit flux discretization.

The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, i.e. of filtering solid particles out of liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite-volume method. Special care is taken to approximate velocity and pressure accurately on the interface between the fluid and the porous media. The time discretization used here is a proper modification of the fractional time step discretization (cf. the Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors by using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for estimating the complexity of the given parallel algorithm, and a scalability analysis is performed on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.

We are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and a numerical investigation of the model with special focus on inertia effects.

In this paper, a stochastic model [5] for the turbulent fiber laydown in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.

Various advanced two-level iterative methods are studied numerically and compared with each other in conjunction with finite volume discretizations of symmetric 1-D elliptic problems with highly oscillatory discontinuous coefficients. Some of the methods considered rely on the homogenization approach for deriving the coarse grid operator. This approach is considered here as an alternative to the well-known Galerkin approach for deriving coarse grid operators. Different intergrid transfer operators are studied, primary consideration being given to the use of so-called problem-dependent prolongation. The two-grid methods considered are used both as solvers and as preconditioners for the Conjugate Gradient method. Recent approaches, such as the hybrid domain decomposition method introduced by Vassilevski and the global-local iterative procedure proposed by Durlofsky et al., are also discussed. A two-level method converging in one iteration in the case where the right-hand side is only a function of the coarse variable is introduced and discussed. Such fast convergence for problems with discontinuous coefficients arbitrarily varying on the fine scale is achieved by a problem-dependent selection of the coarse grid combined with problem-dependent prolongation on a dual grid. The results of the numerical experiments are presented to illustrate the performance of the studied approaches.

An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometrical domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described. Several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm enables us to reduce mesh generation time significantly (by dozens of times). Moreover, it easily produces meshes with a number of elements of the order of 5 · 10^7, whose construction on a single CPU is problematic. Questions of time consumption, efficiency of the computations, and quality of the generated meshes are also considered.

It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of the sequences obtained by the sequencing algorithm of Bortfeld and Boyer using a probabilistic approach. The results provide a theoretical foundation for the hitherto only empirically validated observation that if the smoothness of intensity maps is considered during their calculation, the resulting solutions can be expected to be more easily applied.

The stationary, isothermal rotational spinning process of fibers is considered. The investigations are concerned with the case of large Reynolds numbers (δ = 3/Re ≪ 1) and small Rossby numbers (ε ≪ 1). Modelling the fibers as a Newtonian fluid and applying slender body approximations, the process is described by a two-point boundary value problem of ODEs. The involved quantities are the coordinates of the fiber’s centerline, the fluid velocity, and the viscous stress. The inviscid case δ = 0 is discussed as a reference case. For the viscous case δ > 0, numerical simulations are carried out. Transferring some properties of the inviscid limit to the viscous case, analytical bounds for the initial viscous stress of the fiber are obtained. A good agreement with the numerical results is found. These bounds give strong evidence that for δ > 3ε² no physically relevant solution can exist. A possible interpretation of the above coupling of δ and ε, related to the die-swell phenomenon, is given.

In this paper we propose a general solution method for the single facility ordered median problem in the plane. All types of weights (non-negative, non-positive, and mixed) are considered. The big triangle small triangle approach is used for the solution. Rigorous and heuristic algorithms are proposed and extensively tested on eight different problems, with excellent results.

In this expository article, we give an introduction into the basics of bootstrap tests in general. We discuss the residual-based and the wild bootstrap for regression models suitable for applications in signal and image analysis. As an illustration of the general idea, we consider a particular test for detecting differences between two noisy signals or images which also works for noise with variable variance. The test statistic is essentially the integrated squared difference between the signals after denoising them by local smoothing. Determining its quantile, which marks the boundary between accepting and rejecting the hypothesis of equal signals, is hardly possible by standard asymptotic methods whereas the bootstrap works well. Applied to the rows and columns of images, the resulting algorithm not only allows for the detection of defects but also for the characterization of their location and shape in surface inspection problems.
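The bootstrap idea can be sketched with a deliberately simplified statistic (a squared mean difference instead of the integrated squared difference of smoothed signals); the part carried over from the text is the resampling of centered residuals under the null hypothesis of a common signal:

```python
import numpy as np

def residual_bootstrap_quantile(y1, y2, stat, n_boot=2000, level=0.95, seed=0):
    """Approximate the null quantile of stat(y1, y2) by the residual bootstrap:
    resample centered residuals around the pooled estimate so that both
    bootstrap samples obey the null hypothesis of one common signal."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([y1, y2])
    common = pooled.mean()                       # null: a single common signal
    resid = pooled - common
    stats = np.empty(n_boot)
    for b in range(n_boot):
        b1 = common + rng.choice(resid, size=len(y1), replace=True)
        b2 = common + rng.choice(resid, size=len(y2), replace=True)
        stats[b] = stat(b1, b2)
    return np.quantile(stats, level)

stat = lambda a, b: (a.mean() - b.mean()) ** 2
rng = np.random.default_rng(1)
y1 = rng.normal(0.0, 1.0, 100)
y2 = rng.normal(0.0, 1.0, 100)
q = residual_bootstrap_quantile(y1, y2, stat)
reject = stat(y1, y2) > q     # bootstrap test decision at the 5% level
```

The article's actual test replaces the mean by a local smoother and integrates the squared difference over the signal domain, but the quantile-by-resampling mechanism is the same.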