### Refine

#### Year of publication

- 2007 (28)

#### Keywords

- numerical upscaling (4)
- Darcy’s law (2)
- effective heat conductivity (2)
- single phase flow (2)
- 3D (1)
- Asymptotic Expansion (1)
- Bayesian Model Averaging (1)
- Betriebsfestigkeit (1)
- Boolean polynomials (1)
- Bootstrap (1)

#### Faculty / Organisational entity

- Fraunhofer (ITWM) (28)

In this paper, a stochastic model [5] for the turbulent fiber laydown in the industrial production of nonwoven materials is extended by including a moving conveyor belt. In the hydrodynamic limit corresponding to large noise values, the transient and stationary joint probability distributions are determined using the method of multiple scales and the Chapman-Enskog method. Moreover, exponential convergence towards the stationary solution is proven for the reduced problem. For special choices of the industrial parameters, the stochastic limit process is an Ornstein-Uhlenbeck process. It is a good approximation of the fiber motion even for moderate noise values. Moreover, as shown by Monte Carlo simulations, the limiting process can be used to assess the quality of nonwoven materials in the industrial application by determining distributions of functionals of the process.
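The limiting process can be sketched numerically. Below is a minimal Euler-Maruyama simulation of an Ornstein-Uhlenbeck process together with a Monte Carlo estimate of a path functional (here the running maximum); the parameters `theta`, `sigma`, and the functional are illustrative choices, not values from the paper.

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
# dX = -theta * X dt + sigma dW. Parameters are illustrative only.
def simulate_ou(theta=1.0, sigma=0.5, x0=0.0, T=10.0, n_steps=1000,
                n_paths=2000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    paths = np.empty((n_steps + 1, n_paths))
    paths[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        paths[k + 1] = paths[k] - theta * paths[k] * dt + sigma * dw
    return paths

paths = simulate_ou()
# Monte Carlo estimate of a path functional (here the running maximum),
# standing in for the quality functionals of the laydown process.
max_dist = paths.max(axis=0)
print(max_dist.mean())
```

For these parameters the process is essentially stationary at the final time, so the sample mean and variance of the end points approach 0 and sigma^2/(2*theta).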

This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both coefficients and degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x^2 = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and also computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers – besides the complexity of the problems – from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are better suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures are capable of handling problems of industrial size.
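The quotient-ring arithmetic can be illustrated in a few lines. The sketch below models a Boolean polynomial as a set of monomials (each a frozenset of variable names), so that addition over GF(2) is symmetric difference and multiplication applies the field equations x^2 = x via set union; it mimics the arithmetic only, not the ZDD data structure of the paper.

```python
# A Boolean polynomial as a frozenset of monomials; a monomial as a frozenset
# of variable names. Coefficients and exponents lie in {0, 1}, so addition is
# symmetric difference over GF(2) and multiplication uses x * x = x.
def add(p, q):
    return p ^ q  # equal monomials cancel over GF(2)

def mul(p, q):
    result = set()
    for m1 in p:
        for m2 in q:
            m = m1 | m2        # union of variables encodes x^2 = x
            result ^= {m}      # cancel duplicate monomials over GF(2)
    return frozenset(result)

x = frozenset({frozenset({'x'})})
one = frozenset({frozenset()})
# (x + 1) * (x + 1) = x^2 + 2x + 1 = x + 1 in the Boolean ring
p = add(x, one)
print(mul(p, p) == p)  # True
```

This makes the idempotency visible: squaring any Boolean polynomial returns the polynomial itself.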

The performance of oil filters used in the automotive industry can be significantly improved when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, filtering out solid particles from liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite-volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous media. The time discretization used here is a proper modification of the fractional time step discretization (cf. Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among processors by using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for the estimation of the complexity of the given parallel algorithm, and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.
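The scalability analysis rests on a complexity model of the data-decomposition algorithm. A toy version of such a model, with purely illustrative constants for the per-cell compute cost and the halo-exchange cost (not values from the paper), might look as follows:

```python
# Toy complexity model for a data-decomposition parallel solver: per time step,
# computation is divided among p processors while each processor exchanges
# halo data with two neighbors. All constants are illustrative, not measured.
def step_time(n_cells, p):
    comp = 200e-9 * n_cells / p                      # compute cost per step
    comm = 0.0 if p == 1 else 2 * (5e-6 + 8e-9 * n_cells ** (2 / 3))
    return comp + comm                               # latency + bandwidth term

n = 1_000_000
for p in (1, 2, 4, 8, 16):
    print(p, round(step_time(n, 1) / step_time(n, p), 2))
```

Because the communication term does not shrink with p, the predicted speedup stays below the ideal linear one, which is the qualitative behavior such models capture.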

In this paper we propose a general solution method for the single facility ordered median problem in the plane. All types of weights (non-negative, non-positive, and mixed) are considered. The big triangle small triangle approach is used for the solution. Rigorous and heuristic algorithms are proposed and extensively tested on eight different problems with excellent results.
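The ordered median objective itself is easy to evaluate: the weighted distances to the demand points are sorted and then combined with a lambda-vector. The sketch below uses made-up points, mixed-sign weights, and a centrum-type lambda choice, and replaces the big triangle small triangle method by a crude grid search.

```python
import math

# Single-facility ordered median objective: sort the weighted distances to the
# demand points, then take the inner product with the lambda-vector.
# Points, weights, and lambdas are made-up illustration data.
points = [(0, 0), (4, 0), (0, 3), (5, 5)]
weights = [1.0, 2.0, 1.5, -0.5]          # mixed signs are allowed
lams = [0.0, 0.0, 1.0, 1.0]              # centrum-type choice

def ordered_median(x, y):
    d = sorted(w * math.hypot(x - px, y - py)
               for (px, py), w in zip(points, weights))
    return sum(l * di for l, di in zip(lams, d))

# Crude grid search as a stand-in for the big triangle small triangle method.
best = min((ordered_median(x / 10, y / 10), x / 10, y / 10)
           for x in range(0, 51) for y in range(0, 51))
print(best)
```

Different lambda choices recover the classical median, center, and centrum problems as special cases of the same objective.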

A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on multipoint flux approximation (MPFA) for the 3D pressure equation is employed to account for the jump discontinuities of full permeability tensors. A DD/MG-type preconditioner for the above-mentioned problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap, as well as of the smoother and the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.

An efficient approach for calculating the effective heat conductivity for a class of industrial composite materials, such as metal foams, fibrous glass materials, and the like, is discussed. These materials, used in insulation or in advanced heat exchangers, are characterized by a low volume fraction of the highly conductive material (glass or metal) having a complex, network-like structure and by a large volume fraction of the insulator (air). We assume that the composite materials have constant macroscopic thermal conductivity tensors, which in principle can be obtained by standard upscaling techniques that use the concept of representative elementary volumes (REV), i.e. the effective heat conductivities of composite media can be computed by post-processing the solutions of some special cell problems for REVs. We propose, theoretically justify, and numerically study an efficient approach for calculating the effective conductivity for media for which the ratio ε of the low and high conductivities satisfies ε ≪ 1. In this case one essentially only needs to solve the heat equation in the region occupied by the highly conductive medium. For a class of problems we show that, under certain conditions on the microscale geometry, the proposed approach produces an upscaled conductivity that is O(ε) close to the exact upscaled conductivity. A number of numerical experiments are presented in order to illustrate the accuracy and the limitations of the proposed method. Applicability of the presented approach to upscaling other similar problems, e.g. flow in fractured porous media, is also discussed.
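The idea of the approximation can be seen already in a parallel-slab toy model, where conduction along the layers gives the arithmetic mean and neglecting the insulator leaves only the conductive-phase contribution. The numbers below are illustrative only; the relative error shrinks linearly with the conductivity ratio ε.

```python
# Parallel-slab (laminate) toy model: for conduction along the layers the
# exact effective conductivity is the arithmetic mean f*k_high + (1-f)*k_low,
# while solving the heat equation in the conductive phase only gives f*k_high.
def k_eff_exact(f, k_high, k_low):
    return f * k_high + (1 - f) * k_low

def k_eff_approx(f, k_high):
    return f * k_high

f, k_high = 0.05, 100.0           # low volume fraction of conductive material
for eps in (1e-1, 1e-2, 1e-3):    # eps = k_low / k_high
    exact = k_eff_exact(f, k_high, eps * k_high)
    rel_err = abs(exact - k_eff_approx(f, k_high)) / exact
    print(eps, rel_err)           # relative error shrinks like O(eps)
```

The real cell problems live on complicated 3D network geometries, but the O(ε) behavior of the error is already visible in this one-dimensional caricature.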

Safety-relevant components in the automotive industry are required to show, in the field, at most a failure fraction p0 up to a time/mileage q0. This quantile is verified in a series of tests in which the components are cyclically loaded with a typical force until a certain, predefined damage pattern occurs, and the number Ti of load cycles ("Schwingspiele") is recorded as the lifetime. Typically the sample size N is very small (N < 10), while at the same time an extreme quantile 0 < p0 ≤ 0.1 is to be verified. If a Weibull or lognormal distribution is used as the lifetime distribution, the quantile estimators exhibit a pronounced bias, which is to be removed. Since this bias is usually positive, components could be classified as fit for series production even though they possibly fall well short of the requirements. Confidence intervals for quantiles are computed via delta methods, which likewise yield poor results (in the form of a too low empirical significance of left-sided intervals). In the following, generalizations of the bootstrap and jackknife bias corrections are presented, which not only attempt to remove the bias but aim to directly reduce the mean squared error of the estimator as far as possible. Simulation studies show that this succeeds for small sample sizes. In addition, it is investigated to what extent the method, in combination with the bootstrap quantile method, yields an improved interval estimator for quantiles. For this purpose, simulated data are considered whose parameters are representative of lifetime distributions of safety-relevant components.
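The classical bootstrap bias correction underlying these generalizations can be sketched for a lognormal lifetime quantile with a very small sample. The estimator, the sample size, and all parameters below are illustrative; the MSE-oriented generalization of the paper is not reproduced here.

```python
import numpy as np
from statistics import NormalDist

# Bootstrap bias correction of a lognormal lifetime quantile estimator for a
# very small sample (N < 10). Illustrative sketch only.
def quantile_mle(log_t, p):
    mu, sigma = log_t.mean(), log_t.std()  # ML estimates (sigma is biased)
    return np.exp(mu + sigma * NormalDist().inv_cdf(p))

rng = np.random.default_rng(1)
p0 = 0.1
log_t = rng.normal(10.0, 0.5, size=8)     # simulated log-lifetimes, N = 8

q_hat = quantile_mle(log_t, p0)
boot = np.array([quantile_mle(rng.choice(log_t, size=log_t.size), p0)
                 for _ in range(2000)])
q_corrected = 2 * q_hat - boot.mean()     # classical bootstrap bias correction
print(q_hat, q_corrected)
```

The correction subtracts the bootstrap estimate of the bias, boot.mean() - q_hat, from the original estimator; the paper's generalizations instead target the mean squared error directly.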

In this expository article, we give an introduction to the basics of bootstrap tests in general. We discuss the residual-based and the wild bootstrap for regression models suitable for applications in signal and image analysis. As an illustration of the general idea, we consider a particular test for detecting differences between two noisy signals or images which also works for noise with variable variance. The test statistic is essentially the integrated squared difference between the signals after denoising them by local smoothing. Determining its quantile, which marks the boundary between accepting and rejecting the hypothesis of equal signals, is hardly possible by standard asymptotic methods, whereas the bootstrap works well. Applied to the rows and columns of images, the resulting algorithm not only allows for the detection of defects but also for the characterization of their location and shape in surface inspection problems.

In this paper, a new mixed integer mathematical programme is proposed for the application of Hub Location Problems (HLP) in public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem where existing standard solvers fail, two approaches are proposed. The first is an exact accelerated Benders decomposition algorithm and the second a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches to existing standard MIP solvers like CPLEX, both in terms of computational time and the problem instance size that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.
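As a generic stand-in for the heuristic, the sketch below runs a greedy swap-based neighborhood search for a toy p-hub selection problem with random points; the real HLP model, costs, and neighborhood moves differ.

```python
import numpy as np

# Greedy swap-based neighborhood search for a toy p-hub selection problem:
# every node is served by its nearest hub. Generic stand-in only.
rng = np.random.default_rng(0)
n, p = 12, 3
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def cost(hubs):
    return dist[:, sorted(hubs)].min(axis=1).sum()

hubs = set(range(p))                      # arbitrary starting hub set
improved = True
while improved:                           # first-improvement local search
    improved = False
    for h in list(hubs):
        for k in range(n):
            if k in hubs:
                continue
            cand = (hubs - {h}) | {k}     # swap one hub for a non-hub
            if cost(cand) < cost(hubs):
                hubs, improved = cand, True
                break
        if improved:
            break
print(sorted(hubs), round(cost(hubs), 3))
```

The search terminates because every accepted swap strictly decreases the cost; it stops at a local optimum of the swap neighborhood.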

Abstract. The stationary, isothermal rotational spinning process of fibers is considered. The investigations are concerned with the case of large Reynolds numbers (δ = 3/Re ≪ 1) and small Rossby numbers (ε ≪ 1). Modelling the fibers as a Newtonian fluid and applying slender body approximations, the process is described by a two-point boundary value problem of ODEs. The involved quantities are the coordinates of the fiber’s centerline, the fluid velocity, and the viscous stress. The inviscid case δ = 0 is discussed as a reference case. For the viscous case δ > 0, numerical simulations are carried out. Transferring some properties of the inviscid limit to the viscous case, analytical bounds for the initial viscous stress of the fiber are obtained. A good agreement with the numerical results is found. These bounds give strong evidence that for δ > 3ε² no physically relevant solution can exist. A possible interpretation of the above coupling of δ and ε related to the die-swell phenomenon is given.