### Refine

#### Year of publication

- 2007 (27)

#### Language

- English (27)

#### Keywords

- numerical upscaling (4)
- Darcy’s law (2)
- effective heat conductivity (2)
- single phase flow (2)
- 3D (1)
- Asymptotic Expansion (1)
- Bayesian Model Averaging (1)
- Boolean polynomials (1)
- Boundary Value Problem (1)
- CIR model (1)
- Delaunay mesh generation (1)
- Existence of Solutions (1)
- Facility location (1)
- Fault Prediction (1)
- Fokker-Planck Equation (1)
- Gröbner basis (1)
- IMRT planning (1)
- Integer programming (1)
- Multipoint flux approximation (1)
- Multiscale problem (1)
- Multiscale problems (1)
- Navier-Stokes-Brinkmann system of equations (1)
- Network design (1)
- Non-homogeneous Poisson Process (1)
- Ornstein-Uhlenbeck Process (1)
- Reliability Prediction (1)
- Rotational Fiber Spinning (1)
- Slender body theory (1)
- Stochastic Differential Equations (1)
- Supply Chain Management (1)
- Vasicek model (1)
- Viscous Fibers (1)
- a-priori domain decomposition (1)
- algebraic cryptanalysis (1)
- algorithm by Bortfeld and Boyer (1)
- anisotropy (1)
- big triangle small triangle method (1)
- binarization (1)
- bounds (1)
- computational fluid dynamics (1)
- convex (1)
- convex optimization (1)
- curved viscous fibers with surface tension (1)
- decomposition (1)
- defect detection (1)
- discontinuous coefficients (1)
- discriminant analysis (1)
- domain decomposition (1)
- elliptic equation (1)
- fibrous insulation materials (1)
- filtration (1)
- finite volume method (1)
- finite-volume method (1)
- formal verification (1)
- free boundary value problem (1)
- functional Hilbert space (1)
- global optimization (1)
- heterogeneous porous media (1)
- heuristic (1)
- hub location (1)
- image processing (1)
- image segmentation (1)
- intensity maps (1)
- intensity modulated radiotherapy planning (1)
- interactive multi-objective optimization (1)
- kernel estimate (1)
- kernel function (1)
- metal foams (1)
- multigrid (1)
- multiscale problem (1)
- non-linear optimization (1)
- non-overlapping constraints (1)
- ordered median (1)
- oscillating coefficients (1)
- paper machine (1)
- permeability of fractured porous media (1)
- planar location (1)
- porous media (1)
- preconditioner (1)
- rectangular packing (1)
- regularization (1)
- reproducing kernel (1)
- satisfiability (1)
- sequences (1)
- smoothness (1)
- textile quality control (1)
- texture classification (1)
- transportation (1)
- two-grid algorithm (1)
- unstructured grid (1)
- wild bootstrap test (1)

#### Faculty / Organisational entity

- Fraunhofer (ITWM) (27)

In this expository article, we give an introduction into the basics of bootstrap tests in general. We discuss the residual-based and the wild bootstrap for regression models suitable for applications in signal and image analysis. As an illustration of the general idea, we consider a particular test for detecting differences between two noisy signals or images which also works for noise with variable variance. The test statistic is essentially the integrated squared difference between the signals after denoising them by local smoothing. Determining its quantile, which marks the boundary between accepting and rejecting the hypothesis of equal signals, is hardly possible by standard asymptotic methods whereas the bootstrap works well. Applied to the rows and columns of images, the resulting algorithm not only allows for the detection of defects but also for the characterization of their location and shape in surface inspection problems.

This report reviews selected image binarization and segmentation methods that have been proposed for, and are suitable for, the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of the selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
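A minimal sketch of the thresholding family reviewed here, taking Otsu's between-class-variance criterion as a representative method (the report covers several; the choice of Otsu here is illustrative):

```python
def otsu_threshold(voxels, levels=256):
    """Pick the global threshold that maximizes the between-class variance
    of the two resulting gray-value classes (Otsu's criterion)."""
    hist = [0] * levels
    for v in voxels:
        hist[v] += 1
    total = len(voxels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]            # voxels at or below t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(voxels, threshold):
    """Foreground = 1 for gray values above the threshold."""
    return [1 if v > threshold else 0 for v in voxels]
```

For volume images the same histogram-based threshold applies unchanged; only the voxel traversal differs.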

In this paper we propose a general solution method for the single-facility ordered median problem in the plane. All types of weights (non-negative, non-positive, and mixed) are considered. The big triangle small triangle approach is used for the solution. Rigorous and heuristic algorithms are proposed and extensively tested on eight different problems with excellent results.
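To make the objective concrete, here is a minimal sketch of evaluating the ordered median function at a candidate site, with a naive grid search as a stand-in heuristic (the big triangle small triangle method itself is a branch-and-bound scheme and is not reproduced here; all names are illustrative):

```python
import math

def ordered_median(site, demand_points, weights, lambdas):
    """Ordered median objective at a candidate site: sort the weighted
    distances to the demand points, then weight the sorted vector with the
    lambda coefficients.  lambdas = (1,...,1) gives the classical median
    (sum) problem, lambdas = (0,...,0,1) the center (max) problem."""
    x, y = site
    dists = sorted(w * math.hypot(x - px, y - py)
                   for (px, py), w in zip(demand_points, weights))
    return sum(l * d for l, d in zip(lambdas, dists))

def grid_search(demand_points, weights, lambdas, step=0.1):
    """Naive heuristic: evaluate the objective on a coarse grid over the
    unit square (a toy stand-in for a proper solution method)."""
    pts = [(i * step, j * step) for i in range(11) for j in range(11)]
    return min(pts, key=lambda s: ordered_median(s, demand_points,
                                                 weights, lambdas))
```

The single lambda vector thus unifies median, center, and centdian-type objectives, which is why one solution method covers them all.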

It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of the sequences obtained by the sequencing algorithm of Bortfeld and Boyer using a probabilistic approach. The results build a theoretical foundation for the hitherto only empirically validated fact that if the smoothness of intensity maps is taken into account during their calculation, the resulting solutions can be expected to be more easily applied.
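The link between smoothness and sequence length can be illustrated for a single row: under the classical sweep technique, the minimum total beam-on time equals the sum of the positive jumps of the intensity profile, so smoother rows are cheaper to deliver. A sketch of that single-row result (whether it matches the exact algorithm variant analyzed in the paper is an assumption):

```python
def min_beam_on_time(row):
    """Minimum total beam-on time for delivering one row of an intensity
    map with a sweeping leaf pair: the sum of the positive jumps of the
    profile (including the jump up from zero at the start)."""
    total, prev = 0, 0
    for v in row:
        if v > prev:
            total += v - prev
        prev = v
    return total
```

A monotone or gently varying row costs little more than its peak value, while a jagged row with the same values costs far more, which is the intuition behind favoring smooth intensity maps during optimization.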

This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both the coefficients and the degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring but the quotient ring of the polynomial ring over the field with two elements modulo the field equations x^2 = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers not only from the complexity of the problems but also from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are better suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
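The quotient-ring arithmetic described above can be sketched without ZDDs by representing a Boolean polynomial as a set of monomials, each monomial being a set of variable names. This toy representation (a hypothetical stand-in for the ZDD-backed structure of the paper) already realizes addition over GF(2) and multiplication modulo the field equations:

```python
def bp(*monomials):
    """A Boolean polynomial as a set of monomials, each monomial a set of
    variable names; the empty monomial is the constant 1."""
    return frozenset(frozenset(m) for m in monomials)

def bp_add(p, q):
    """Addition over GF(2): equal monomials cancel in pairs, i.e. the
    symmetric difference of the monomial sets."""
    return p ^ q

def bp_mul(p, q):
    """Multiplication modulo the field equations x^2 = x: the product of
    two monomials is simply the union of their variable sets, and the
    GF(2) coefficients are accumulated with XOR."""
    result = set()
    for m1 in p:
        for m2 in q:
            result ^= {m1 | m2}
    return frozenset(result)
```

For example, with p = x + 1 one gets p·p == p: every Boolean polynomial is idempotent in this quotient ring, which is exactly the structural property the specialized data structures exploit.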

Background and purpose: Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromise and thus makes it more systematic. Usually, several plans are computed, from which the planner selects the most promising following a certain procedure. Applying Pareto navigation to this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Material and methods: Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms “selection” and “restriction”. The former allows the formulation of wishes, whereas the latter allows the exclusion of unwanted plans. Both are realized as optimization problems on the so-called plan bundle, a set constructed from precomputed plans, and can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions.

An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometrical domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described. Several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm reduces the mesh generation time significantly (by dozens of times). Moreover, it easily produces meshes with a number of elements of the order of 5 · 10⁷, whose construction on a single CPU is problematic. Questions of time consumption, efficiency of computations, and quality of the generated meshes are also considered.

Calculating the effective heat conductivity for a class of industrial problems is discussed. The composite materials considered are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, a low volume fraction of the more conductive material (glass or metal), and a large volume fraction of air. Homogenization theory, when applicable, allows the effective heat conductivity of composite media to be calculated by postprocessing the solutions of special cell problems on representative elementary volumes (REVs). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.
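The cell problems themselves are not reproduced here, but the classical Wiener bounds already bracket the effective conductivity of any such composite and illustrate how strongly the arrangement of a small fraction of conductive material matters. A sketch with illustrative values (conductivities are ballpark figures, not data from the report):

```python
def parallel_bound(fractions, conductivities):
    """Heat flow along the layers of a laminate: volume-weighted arithmetic
    mean, the upper Wiener bound on the effective conductivity."""
    return sum(f * k for f, k in zip(fractions, conductivities))

def series_bound(fractions, conductivities):
    """Heat flow across the layers: volume-weighted harmonic mean, the
    lower Wiener bound."""
    return 1.0 / sum(f / k for f, k in zip(fractions, conductivities))
```

For 10% glass in air the two bounds differ by a factor of roughly four, so the effective conductivity cannot be read off from the volume fractions alone; resolving the internal structure via cell problems is what closes this gap.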

A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on the multipoint flux approximation (MPFA) of the 3D pressure equation is employed to account for the jump discontinuities of the full permeability tensors. A DD/MG-type preconditioner for the above problem is developed, with the coarse-scale operator obtained from a homogenization-type procedure. The influence of the overlap, as well as of the smoother and of the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.

The modeling and formulation of optimization problems in IMRT planning comprises the choice of various values such as function-specific parameters or constraint bounds. These values also affect the characteristics of the optimization problem and thus the form of the resulting optimal plans. This publication utilizes concepts of sensitivity analysis and elasticity in convex optimization to analyze the dependence of optimal plans on the modeling parameters. It also derives general rules of thumb for how to choose and modify the parameters in order to obtain the desired IMRT plan. These rules are numerically validated for an exemplary IMRT planning problem.

The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, i.e., filtering solid particles out of liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite-volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous medium. The time discretization used here is a proper modification of the fractional time step discretization (cf. the Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for estimating the complexity of the given parallel algorithm, and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.

A numerical upscaling approach, NU, for solving multiscale elliptic problems is discussed. The main components of this NU approach are: i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse-scale equation; ii) global solution of the upscaled coarse-scale equation; and iii) reconstruction of a fine-scale solution by solving local block problems on a dual coarse grid. In its structure, NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method, the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference from those methods lies in the way the coarse-scale equation is built and solved, and in the way the fine-scale solution is reconstructed. Essential components of the NU approach presented here are the formal homogenization in the coarse blocks and the use of the so-called multipoint flux approximation method, MPFA. Unlike the usual use of MPFA as a discretization method for single-scale elliptic problems with discontinuous tensor coefficients, we consider its use as a part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the numerical upscaling approach presented here is applied.

The approximation property of the multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full tensor coefficients is discussed here. A finite volume discretization of the above problem is presented for the case of jump discontinuities of the permeability tensor. First-order approximation of the fluxes is proved. Results from numerical experiments are presented and discussed.

Various advanced two-level iterative methods are studied numerically and compared with each other in conjunction with finite volume discretizations of symmetric 1-D elliptic problems with highly oscillatory discontinuous coefficients. Some of the methods considered rely on the homogenization approach for deriving the coarse-grid operator. This approach is considered here as an alternative to the well-known Galerkin approach for deriving coarse-grid operators. Different intergrid transfer operators are studied, primary consideration being given to the use of so-called problem-dependent prolongation. The two-grid methods considered are used both as solvers and as preconditioners for the conjugate gradient method. Recent approaches, such as the hybrid domain decomposition method introduced by Vassilevski and the global-local iterative procedure proposed by Durlofsky et al., are also discussed. A two-level method converging in one iteration in the case where the right-hand side is a function of the coarse variable only is introduced and discussed. Such fast convergence for problems with discontinuous coefficients arbitrarily varying on the fine scale is achieved by a problem-dependent selection of the coarse grid combined with a problem-dependent prolongation on a dual grid. The results of numerical experiments are presented to illustrate the performance of the studied approaches.
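As a minimal, self-contained illustration of the two-grid skeleton these methods share, here is a sketch with entirely standard components (damped Jacobi smoothing, full weighting, linear interpolation, and a rediscretized coarse operator) for a constant-coefficient 1-D Poisson problem; the problem-dependent prolongations and homogenized coarse operators studied in the paper would replace these pieces:

```python
import math

def apply_poisson(u, h):
    """Fine-grid operator: 3-point stencil for -u'' with zero Dirichlet
    boundary values."""
    n = len(u)
    return [(2 * u[i]
             - (u[i - 1] if i > 0 else 0.0)
             - (u[i + 1] if i < n - 1 else 0.0)) / h ** 2
            for i in range(n)]

def weighted_jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Smoother: damped Jacobi damps the oscillatory error components."""
    for _ in range(sweeps):
        r = [fi - ai for fi, ai in zip(f, apply_poisson(u, h))]
        u = [ui + w * (h ** 2 / 2.0) * ri for ui, ri in zip(u, r)]
    return u

def restrict(r):
    """Full weighting to the coarse grid (fine size n = 2*nc + 1)."""
    return [(r[2 * i] + 2.0 * r[2 * i + 1] + r[2 * i + 2]) / 4.0
            for i in range(len(r) // 2)]

def prolong(ec, n):
    """Linear interpolation of the coarse correction to the fine grid."""
    e = [0.0] * n
    for i, v in enumerate(ec):
        e[2 * i + 1] = v
    for j in range(0, n, 2):
        left = e[j - 1] if j > 0 else 0.0
        right = e[j + 1] if j < n - 1 else 0.0
        e[j] = 0.5 * (left + right)
    return e

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for the (small) coarse-grid system."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def two_grid(f, h, cycles=10):
    """Pre-smooth, solve the restricted residual equation exactly on the
    coarse grid, interpolate the correction back, post-smooth."""
    n = len(f)
    nc, H = (n - 1) // 2, 2.0 * h
    u = [0.0] * n
    for _ in range(cycles):
        u = weighted_jacobi(u, f, h)
        r = [fi - ai for fi, ai in zip(f, apply_poisson(u, h))]
        rc = restrict(r)
        ec = solve_tridiag([-1.0 / H ** 2] * nc, [2.0 / H ** 2] * nc,
                           [-1.0 / H ** 2] * nc, rc)
        u = [ui + ei for ui, ei in zip(u, prolong(ec, n))]
        u = weighted_jacobi(u, f, h)
    return u
```

For oscillatory discontinuous coefficients this naive coarse operator degrades, which is precisely the motivation for the homogenization-based coarse operators and problem-dependent prolongations compared in the paper.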

The stationary, isothermal rotational spinning process of fibers is considered. The investigations are concerned with the case of large Reynolds numbers (δ = 3/Re ≪ 1) and small Rossby numbers (ε ≪ 1). Modelling the fibers as a Newtonian fluid and applying slender body approximations, the process is described by a two-point boundary value problem of ODEs. The quantities involved are the coordinates of the fiber’s centerline, the fluid velocity, and the viscous stress. The inviscid case δ = 0 is discussed as a reference case. For the viscous case δ > 0, numerical simulations are carried out. Transferring some properties of the inviscid limit to the viscous case, analytical bounds for the initial viscous stress of the fiber are obtained, in good agreement with the numerical results. These bounds give strong evidence that for δ > 3ε² no physically relevant solution can exist. A possible interpretation of this coupling of δ and ε, related to the die-swell phenomenon, is given.

In this paper, a new mixed-integer mathematical programme is proposed for the application of hub location problems (HLP) in public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem on which existing standard solvers fail, two approaches are proposed: an exact accelerated Benders decomposition algorithm and a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches over existing standard MIP solvers such as CPLEX, both in terms of computational time and in the size of problem instances that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.

We are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and a numerical investigation of the model with special focus on inertia effects.

In this article, the application of kernel functions – the so-called “kernel trick” – in the context of Fisher’s approach to linear discriminant analysis is described for data sets subdivided into two groups and having real-valued attributes. The relevant facts about functional Hilbert spaces and kernel functions, including their proofs, are presented. The approximative algorithm published in [Mik3] for computing a discriminant function given the data and a kernel function is briefly reviewed. As an illustration of the technique, an artificial data set is analysed using the algorithm just mentioned.
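A minimal sketch of the kernel trick's entry point: all a kernelized discriminant analysis needs from the data is the Gram matrix of pairwise kernel evaluations. The Gaussian kernel is chosen here purely for illustration; the article treats kernel functions in general:

```python
import math

def gaussian_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: an inner product in an implicit feature
    space, evaluated without ever constructing that space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def gram_matrix(points, kernel=gaussian_kernel):
    """Gram (kernel) matrix K[i][j] = k(x_i, x_j): symmetric, with unit
    diagonal for the Gaussian kernel, and the only data access a
    kernelized Fisher discriminant requires."""
    return [[kernel(p, q) for q in points] for p in points]
```

Replacing every inner product in Fisher's linear discriminant by such kernel evaluations yields a nonlinear discriminant in the original attribute space while the algebra stays linear.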

In this paper, a stochastic model [5] for the turbulent fiber laydown in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte-Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.
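An Ornstein-Uhlenbeck limit process of this kind is straightforward to simulate; a minimal Euler-Maruyama sketch (parameter names and values are illustrative, not the industrial parameters of the paper):

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
    dX = theta * (mu - X) dt + sigma dW, returning the sampled path."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

The stationary distribution is Gaussian with mean mu and variance sigma²/(2·theta); Monte-Carlo estimates of distributions of path functionals, as used in the paper for quality assessment, follow by aggregating many such simulated paths.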

With the ever-increasing significance of software in our everyday lives, it is vital to provide reliable software quality estimates. Typically, quantitative software quality analyses rely on either statistical fault prediction methods (FPMs) or stochastic software reliability growth models (SRGMs). Adopting solely FPMs or SRGMs, though, may result in biased predictions that do not account for the uncertainty in the distinct prediction methods, thus rendering the prediction less reliable. This paper identifies flaws of the individual prediction methods and suggests a hybrid prediction approach that combines FPMs and SRGMs. We adopt FPMs for initially estimating the expected number of failures for finite-failure SRGMs. The initial parameter estimates yield more accurate reliability predictions until sufficient failures are observed to enable stable parameter estimates in the SRGMs. Once FPM and SRGM predictions reach equilibrium, we suggest combining the competing prediction methods according to the principle of heterogeneous redundancy; that is, we propose using the individual methods separately and combining their predictions. In this paper we suggest Bayesian model averaging (BMA) for combining the different methods. The hybrid approach allows early reliability estimates and encourages higher confidence in software quality predictions.
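The BMA combination step can be sketched as weighting each method's prediction by its posterior model probability (likelihood times prior, renormalized); the numbers below are purely illustrative:

```python
def bayesian_model_average(predictions, likelihoods, priors=None):
    """Combine competing model predictions with weights proportional to the
    posterior model probabilities: weight_i ∝ likelihood_i * prior_i.
    With no priors given, models are weighted a priori equally."""
    if priors is None:
        priors = [1.0 / len(predictions)] * len(predictions)
    posts = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(posts)
    weights = [w / z for w in posts]
    combined = sum(w * pred for w, pred in zip(weights, predictions))
    return combined, weights
```

For instance, if a (hypothetical) FPM predicts 40 expected failures and an SRGM predicts 60, and the observed failure data are three times as likely under the SRGM, the averaged prediction leans toward the SRGM while still reflecting the FPM, which is exactly the uncertainty-aware behavior the hybrid approach aims for.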