In this paper we extend the slender body theory for the dynamics of a curved inertial viscous Newtonian fiber [23] by including surface tension in the systematic asymptotic framework and deducing boundary conditions for the free fiber end, as it occurs in rotational spinning processes of glass fibers. The fiber flow is described by a three-dimensional free boundary value problem in terms of instationary incompressible Navier-Stokes equations, neglecting temperature dependence. From standard regular expansion techniques in powers of the slenderness parameter we derive asymptotically leading-order balance laws for mass and momentum, combining the inner viscous transport with unrestricted motion and shape of the fiber center-line, which becomes important in the practical application. For the numerical investigation of the effects of surface tension, viscosity, gravity and rotation on the fiber behavior we apply a finite volume method with implicit flux discretization.
In the article the application of kernel functions – the so-called »kernel trick« – in the context of Fisher’s approach to linear discriminant analysis is described for data sets subdivided into two groups and having real attributes. The relevant facts about functional Hilbert spaces and kernel functions including their proofs are presented. The approximative algorithm published in [Mik3] to compute a discriminant function given the data and a kernel function is briefly reviewed. As an illustration of the technique an artificial data set is analysed using the algorithm just mentioned.
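The kernelized Fisher discriminant described above can be sketched in a few lines. The following is a common textbook formulation, not the algorithm of [Mik3]: the Gaussian kernel, the ridge regularization and all parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fisher(X1, X2, gamma=1.0, reg=1e-3):
    """Kernel Fisher discriminant for two groups of real-valued data.
    Returns a scoring function; higher scores indicate group 1."""
    X = np.vstack([X1, X2])
    K1, K2 = rbf_kernel(X, X1, gamma), rbf_kernel(X, X2, gamma)
    m1, m2 = K1.mean(axis=1), K2.mean(axis=1)
    n1, n2 = X1.shape[0], X2.shape[0]
    # within-group scatter in feature space, expressed via kernel matrices
    N = (K1 @ (np.eye(n1) - np.full((n1, n1), 1 / n1)) @ K1.T
         + K2 @ (np.eye(n2) - np.full((n2, n2), 1 / n2)) @ K2.T)
    alpha = np.linalg.solve(N + reg * np.eye(len(X)), m1 - m2)

    def score(Y):
        return rbf_kernel(Y, X, gamma) @ alpha
    return score
```

The regularization term `reg` plays the role of the approximation discussed in the article: it makes the (singular) within-group scatter matrix invertible.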
A numerical upscaling approach, NU, for solving multiscale elliptic problems is discussed. The main components of this NU are: i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse-scale equation; ii) global solution of the upscaled coarse-scale equation; and iii) reconstruction of a fine-scale solution by solving local block problems on a dual coarse grid. By its structure NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method, the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference from those methods lies in the way the coarse-scale equation is built and solved, and in the way the fine-scale solution is reconstructed. Essential components of the NU approach presented here are the formal homogenization in the coarse blocks and the use of the so-called multipoint flux approximation method, MPFA. Unlike the usual use of MPFA as a discretization method for single-scale elliptic problems with discontinuous tensor coefficients, we consider its use as a part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the numerical upscaling approach presented here is applied.
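The formal upscaling step has a simple 1D analogue: for the elliptic equation -(a u')' = f, cells in series average harmonically. The sketch below illustrates only this idea; the paper's 2D/3D MPFA-based procedure is considerably more involved.

```python
import numpy as np

def harmonic_upscale(a_fine, block):
    """Coarse-block effective coefficient for a 1D elliptic problem:
    each coarse block gets the harmonic mean of its fine-cell
    coefficients (flow through cells in series)."""
    a = np.asarray(a_fine, dtype=float).reshape(-1, block)
    return block / (1.0 / a).sum(axis=1)
```

For a constant coefficient the upscaled value is unchanged, while strong fine-scale contrast is damped toward the low-conductivity cells, which is the qualitative behavior any upscaling of this type must reproduce.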
Approximation property of multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full tensor coefficients is discussed here. Finite volume discretization of the above problem is presented in the case of jump discontinuities for the permeability tensor. First order approximation for the fluxes is proved. Results from numerical experiments are presented and discussed.
Calculating the effective heat conductivity for a class of industrial problems is discussed. The composite materials considered are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, a low volume fraction of the more conductive material (glass or metal), and a large volume fraction of air. Homogenization theory (when applicable) allows one to calculate the effective heat conductivity of composite media by postprocessing the solution of special cell problems on representative elementary volumes (REVs). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.
A two-level domain decomposition preconditioner for 3D flows in anisotropic highly heterogeneous porous media is presented. An accurate finite volume discretization based on multipoint flux approximation (MPFA) for the 3D pressure equation is employed to account for the jump discontinuities of full permeability tensors. A DD/MG-type preconditioner for the above-mentioned problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap, as well as the influence of the smoother and of the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.
This report discusses two approaches for a posteriori error indication in the linear elasticity solver DDFEM: an indicator based on Richardson extrapolation and a Zienkiewicz-Zhu-type indicator. The solver handles 3D linear elasticity steady-state problems. It uses its own input language to describe the mesh and the boundary conditions. Finite element discretization over tetrahedral meshes with first- or second-order shape functions (hierarchical basis) is used to resolve the model. The parallelization of the numerical method is based on the domain decomposition approach. DDFEM is highly portable across a set of parallel computer architectures supporting the MPI standard.
In this paper we address the improvement of transfer quality in public mass transit networks. Generally there are several transit operators offering service, and our work is motivated by the question of how their timetables can be altered to yield optimized transfer possibilities in the overall network. To achieve this, only small changes to the timetables are allowed. This set-up makes it possible to use a quadratic semi-assignment model to solve the optimization problem. We apply this model, equipped with a new way to assess transfer quality, to the solution of four real-world examples. It turns out that improvements in overall transfer quality can be achieved by such optimization-based techniques. They can therefore serve as a first step towards a decision support tool for planners of regional transit networks.
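A quadratic semi-assignment model of this kind picks one (small) timetable shift per line so that a pairwise transfer penalty is minimized. The toy solver below enumerates all combinations; it is a stand-in for illustration only, since the paper's instances require proper optimization techniques, and the penalty function is a hypothetical example.

```python
from itertools import product

def qsap_bruteforce(shifts, cost):
    """Exhaustive solve of a tiny quadratic semi-assignment problem:
    shifts[i] is the list of admissible timetable shifts for line i,
    cost(i, si, j, sj) the transfer penalty between lines i and j
    under the chosen shifts.  Returns (best choice, best value)."""
    best, best_val = None, float("inf")
    for choice in product(*shifts):
        val = sum(cost(i, si, j, sj)
                  for i, si in enumerate(choice)
                  for j, sj in enumerate(choice) if i < j)
        if val < best_val:
            best, best_val = choice, val
    return best, best_val
```

The quadratic structure is visible in the objective: the penalty couples the decisions of every pair of lines, which is what makes the real problem hard.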
On a multigrid solver for the three-dimensional Biot poroelasticity system in multilayered domains
(2006)
In this paper, we present problem-dependent prolongation and problem-dependent restriction for a multigrid solver for the three-dimensional Biot poroelasticity system, which is solved in a multilayered domain. The system is discretized on a staggered grid using the finite volume method. During the discretization, special care is taken of the discontinuous coefficients. For an efficient multigrid solver, a need for operator-dependent restriction and/or prolongation arises. We derive these operators so that they are consistent with the discretization. They account for the discontinuities of the coefficients, as well as for the coupling of the unknowns within the Biot system. A set of numerical experiments shows the necessity of using operator-dependent restriction and prolongation in the multigrid solver for the considered class of problems.
In this paper we propose a finite volume discretization for the three-dimensional Biot poroelasticity system in multilayered domains. For stability reasons, staggered grids are used. The discretization accounts for the discontinuity of the coefficients across the interfaces between layers with different physical properties. Numerical experiments based on the proposed discretization showed second-order convergence in the maximum norm for the primary as well as the flux unknowns of the system. An application example is presented as well.
A unified approach to Credit Default Swaption and Constant Maturity Credit Default Swap valuation
(2006)
In this paper we examine the pricing of arbitrary credit derivatives with the Libor Market Model with Default Risk. We show how to set up the Monte Carlo simulation efficiently and investigate the accuracy of closed-form solutions for Credit Default Swaps, Credit Default Swaptions and Constant Maturity Credit Default Swaps. In addition, we derive a new closed-form solution for Credit Default Swaptions which allows for time-dependent volatility and arbitrary correlation structure of default intensities.
We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution which is also feasible for the original problem. Some numerical results based on real-world data are also presented.
The stationary heat equation is solved with periodic boundary conditions in geometrically complex composite materials with high contrast in the thermal conductivities of the individual phases. This is achieved by harmonic averaging and explicitly introducing the jumps across the material interfaces as additional variables. The continuity of the heat flux yields the needed extra equations for these variables. A Schur-complement formulation for the new variables is derived and solved using the FFT and BiCGStab methods. The EJ-HEAT solver is given as a 3-page Matlab program in the Appendix. The C++ implementation is used for material design studies. It solves 3-dimensional problems with around 190 million variables on a 64-bit AMD Opteron desktop system in less than 6 GB of memory and in minutes to hours, depending on the contrast and required accuracy. The approach may also be used to compute effective electric conductivities, because they are governed by the stationary heat equation.
In this article, we consider the quasistatic boundary value problems of linear elasticity and nonlinear elastoplasticity, with linear Hooke’s law in the elastic regime for both problems and with the linear kinematic hardening law for the plastic regime in the latter problem. We derive expressions and estimates for the difference of the solutions of both models, i.e. for the stresses, the strains and the displacements. To this end, we use the stop and play operators of nonlinear functional analysis. Further, we give an explicit example of a homotopy between the solutions of both problems.
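The stop and play operators used above admit a simple time-discrete form. The sketch below uses the standard discrete update with threshold r; it illustrates the operators themselves, not any construction specific to this article.

```python
def play(input_seq, r, w0=0.0):
    """Discrete play (backlash) operator with half-width r: the output
    follows the input but only moves once the input has drifted more
    than r away from it."""
    w, out = w0, []
    for v in input_seq:
        w = max(v - r, min(v + r, w))
        out.append(w)
    return out

def stop(input_seq, r, w0=0.0):
    """Discrete stop operator: the complementary part v - play(v),
    always confined to the interval [-r, r]."""
    return [v - p for v, p in zip(input_seq, play(input_seq, r, w0))]
```

In the elastoplastic setting the stop operator models the stress (clipped at the yield threshold) while the play operator carries the plastic part, which is why the pair appears naturally in the difference estimates of the article.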
Testing a new suspension based on real load data is performed on elaborate multi channel test rigs. Usually wheel forces and moments measured during driving maneuvers are reproduced on the rig. Because of the complicated interaction between rig and suspension each new rig configuration has to prove its efficiency with respect to the requirements and the configuration might be subject to optimization. This paper deals with modeling a new rig concept based on two hexapods. The real physical rig has been designed and meanwhile built by MOOG-FCS for VOLKSWAGEN. The aim of the simulation project reported here was twofold: First the simulation of the rig together with real VOLKSWAGEN suspension models at a time where the design was not yet finalized was used to verify and optimize the desired properties of the rig. Second the simulation environment was set up in a way that it can be used to prepare real tests on the rig. The model contains the geometric configuration as well as the hydraulics and the controller. It is implemented as an ADAMS/Car template and can be combined with different suspension models to get a complete assembly representing the entire test rig. Using this model, all steps required for a real test run such as controller adaptation, drive file iteration and simulation can be performed. Geometric or hydraulic parameters can be modified easily to improve the setup and adapt the system to the suspension and the load data.
For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientation, and then a single-objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. We take advantage of this property to present a dynamic algorithm for beam orientation in IMRT which is based on multicriteria inverse planning. The algorithm approximates beam intensity profiles iteratively instead of doing so for every selection of beam orientations, saving a considerable amount of calculation time. Every iteration goes from an N-beam plan to a plan with N + 1 beams. Beam selection criteria are based on a score function that minimizes the deviation from the prescribed dose, in addition to a reject-accept criterion. To illustrate the efficiency of the algorithm it has been applied to an artificial example where optimality is trivial and to three real clinical cases: a prostate carcinoma, a tumor in the head and neck region, and a paraspinal tumor. In comparison to the standard equally spaced beam plans, improvements are reported in all three clinical examples, in some cases even with fewer beams.
In this paper we present and investigate a stochastic model for the lay-down of fibers on a conveyor belt in the production process of nonwovens. The model is based on a stochastic differential equation taking into account the motion of the fiber under the influence of turbulence. A reformulation as a stochastic Hamiltonian system and an application of the stochastic averaging theorem lead to further simplifications of the model. Finally, the model is used to compute the distribution of functionals of the process that might be helpful for the quality assessment of industrial fabrics.
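A lay-down SDE of this type can be integrated with the Euler-Maruyama scheme. The sketch below is schematic: the fiber is laid with unit speed, its direction angle drifts back toward the deposition point at the origin and is perturbed by white noise. The restoring force and all coefficients are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def laydown_path(T=50.0, dt=1e-2, A=0.5, lam=1.0, seed=0):
    """Euler-Maruyama integration of a planar fiber lay-down-type SDE:
    dx = tau(a) dt,  da = -lam (x . nu(a)) dt + A dW,
    where tau is the unit tangent and nu the unit normal."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros((n + 1, 2))
    a = 0.0
    for k in range(n):
        tau = np.array([np.cos(a), np.sin(a)])
        nu = np.array([-np.sin(a), np.cos(a)])
        # drift steers the angle toward the origin; noise is a scaled
        # Wiener increment
        a += -lam * (x[k] @ nu) * dt + A * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + tau * dt
    return x
```

Functionals of the resulting path (e.g. the distribution of its distance from the deposition point) are the kind of quantities the paper evaluates for quality assessment.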
It is commonly believed that not all degrees of freedom are needed to produce good solutions for the treatment planning problem in intensity modulated radiotherapy treatment (IMRT). However, typical methods to exploit this fact have either increased the complexity of the optimization problem or were heuristic in nature. In this work we introduce a technique based on adaptively refining variable clusters to successively attain better treatment plans. The approach creates approximate solutions based on smaller models that may get arbitrarily close to the optimal solution. Although the method is illustrated using a specific treatment planning model, the components constituting the variable clustering and the adaptive refinement are independent of the particular optimization problem.
The desire to simulate more and more geometrical and physical features of technical structures, and the availability of parallel computers and parallel numerical solvers which can exploit the power of these machines, have led to a steady increase in the number of grid elements used. Memory requirements and computational time are too large for usual serial PCs. An a priori partitioning algorithm for the parallel generation of 3D non-overlapping compatible unstructured meshes based on a CAD surface description is presented in this paper. Emphasis is given to practical issues and implementation rather than to theoretical complexity. To achieve robustness of the algorithm with respect to the geometrical shape of the structure, the authors propose to have several or many, but relatively simple, algorithmic steps. A geometrical domain decomposition approach has been applied. It allows us to use classic 2D and 3D high-quality Delaunay mesh generators for independent and simultaneous volume meshing. Different aspects of load balancing methods are also explored in the paper. The MPI library and the SPMD model are used for the parallel grid generator implementation. Several 3D examples are shown.
We present the application of a meshfree method for simulations of the interaction between fluids and flexible structures. As a flexible structure we consider a sheet of paper. In a two-dimensional framework this sheet can be modeled as a curve by the dynamical Kirchhoff-Love theory. The external forces taken into account are gravitation and the pressure difference between the upper and lower surface of the sheet. This pressure difference is computed using the Finite Pointset Method (FPM) for the incompressible Navier-Stokes equations. FPM is a meshfree, Lagrangian particle method. The dynamics of the sheet are computed by a finite difference method. We show the suitability of the meshfree method for simulations of fluid-structure interaction in several applications.
This paper analyzes and solves a patient transportation problem arising in several large hospitals. The aim is to provide an efficient and timely transport service to patients between several locations on a hospital campus. Transportation requests arrive in a dynamic fashion and the solution methodology must therefore be capable of quickly inserting new requests in the current vehicle routes. Contrary to standard dial-a-ride problems, the problem under study contains several complicating constraints which are specific to a hospital context. The paper provides a detailed description of the problem and proposes a two-phase heuristic procedure capable of handling its many features. In the first phase a simple insertion scheme is used to generate a feasible solution, which is improved in the second phase with a tabu search algorithm. The heuristic procedure was extensively tested on real data provided by a German hospital. Results show that the algorithm is capable of handling the dynamic aspect of the problem and of providing high quality solutions. In particular, it succeeded in reducing waiting times for patients while using fewer vehicles.
During recent years, multiobjective evolutionary algorithms have matured as a flexible optimization tool which can be used in various areas of real-life applications. Practical experience has shown that the algorithms typically need an essential adaptation to the specific problem for a successful application. Considering these requirements, we discuss various issues of the design and application of multiobjective evolutionary algorithms to real-life optimization problems. In particular, questions on problem-specific data structures and evolutionary operators and the determination of method parameters are treated. As a major issue, the handling of infeasible intermediate solutions is pointed out. Three application examples in the areas of constrained global optimization (electronic circuit design), semi-infinite programming (design centering problems), and discrete optimization (project scheduling) are discussed.
Flow of non-Newtonian fluid in saturated porous media can be described by the continuity equation and the generalized Darcy law. Efficient solution of the resulting second order nonlinear elliptic equation is discussed here. The equation is discretized by a finite volume method on a cell-centered grid. Local adaptive refinement of the grid is introduced in order to reduce the number of unknowns. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Two residual based error indicators are exploited in the adaptive refinement criterion. Second order accurate discretization of the fluxes on the interfaces between refined and non-refined subdomains, as well as on the boundaries with Dirichlet boundary condition, are presented here, as an essential part of the accurate and efficient algorithm. A nonlinear full approximation storage multigrid algorithm is developed especially for the above described composite (coarse plus locally refined) grid approach. In particular, second order approximation of the fluxes around interfaces is a result of a quadratic approximation of slave nodes in the multigrid - adaptive refinement (MG-AR) algorithm. Results from numerical solution of various academic and practice-induced problems are presented and the performance of the solver is discussed.
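The nonlinear elliptic equation above has a simple 1D analogue that can be solved by a Picard (frozen-mobility) iteration on a cell-centered finite volume grid. The sketch below illustrates only this core iteration; the report's adaptive refinement, composite grids and nonlinear multigrid are omitted, and the power-law mobility is an illustrative choice.

```python
import numpy as np

def power_law_darcy_1d(n=32, r=1.8, iters=60):
    """Picard iteration for the 1D power-law Darcy problem
    -( |p'|^(r-2) p' )' = 0 with p(0)=1, p(1)=0.
    Returns (cell centers, cell-centered pressures)."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    pl, pr = 1.0, 0.0                        # Dirichlet boundary values
    p = (1.0 - x) ** 2                       # deliberately nonlinear start
    dist = np.concatenate(([h / 2], np.full(n - 1, h), [h / 2]))
    for _ in range(iters):
        grad = np.diff(np.concatenate(([pl], p, [pr]))) / dist
        k = np.abs(grad) ** (r - 2) + 1e-12  # frozen nonlinear mobility
        t = k / dist                         # face transmissibilities
        A = (np.diag(t[:-1] + t[1:])
             - np.diag(t[1:-1], 1) - np.diag(t[1:-1], -1))
        b = np.zeros(n)
        b[0], b[-1] = t[0] * pl, t[-1] * pr
        p = np.linalg.solve(A, b)            # linear solve per iteration
    return x, p
```

With these boundary conditions the exact solution is linear for any power-law exponent (the flux, and hence the pressure gradient, is constant), which makes the sketch easy to verify.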
The level-set method has been recently introduced in the field of shape optimization, enabling a smooth representation of the boundaries on a fixed mesh and therefore leading to fast numerical algorithms. However, most of these algorithms use a Hamilton-Jacobi equation to connect the evolution of the level-set function with the deformation of the contours, and consequently they cannot create any new holes in the domain (at least in 2D). In this work, we propose an evolution equation for the level-set function based on a generalization of the concept of topological gradient. This results in a new algorithm allowing for all kinds of topology changes.
Traditional methods fail to simulate the complete flow process in urban areas caused by heavy rainfall, as required by the European Standard EN-752, since the bi-directional coupling between sewer and surface is not properly handled. The methodology developed in the BMBF/EUREKA project RisUrSim solves this problem by computing the runoff on the basis of shallow water equations solved on high-resolution surface grids. Exchange nodes between the sewer and the surface, like inlets and manholes, are located in the computational grid, and water leaving the sewer in case of surcharge is further distributed on the surface. So far, it has been a problem to obtain the dense topographical information needed to build models suitable for hydrodynamic runoff calculation in urban areas. Recent airborne data collection methods like laser scanning, however, offer a great chance to economically gather densely sampled input data. This paper studies the potential of such laser-scan data sets for urban water hydrodynamics.
Fiber Dynamics in Turbulent Flows -Part I: General Modeling Framework -Part II: Specific Taylor Drag
(2005)
Part I: General Modeling Framework. The paper at hand deals with the modeling of turbulence effects on the dynamics of a long slender elastic fiber. Independent of the choice of the drag model, a general aerodynamic force concept is derived on the basis of the velocity field for the randomly fluctuating component of the flow. Its construction as a centered differentiable Gaussian field thereby complies with the requirements of the stochastic k-epsilon turbulence model and Kolmogorov's universal equilibrium theory on local isotropy. Part II: Specific Taylor Drag. In [12], an aerodynamic force concept for a general air drag model is derived on top of a stochastic k-epsilon description of a turbulent flow field. The turbulence effects on the dynamics of a long slender elastic fiber are modeled in particular by a correlated random Gaussian force and, in its asymptotic limit on a macroscopic fiber scale, by Gaussian white noise with flow-dependent amplitude. The paper at hand now presents quantitative similarity estimates and numerical comparisons for the concrete choice of a Taylor drag model in a given application.
Virtual material design is the microscopic variation of materials in the computer, followed by the numerical evaluation of the effect of this variation on the material's macroscopic properties. The goal of this procedure is a material that is improved in some sense. Here, we give examples of the dependence of the effective elastic moduli of a composite material on the geometry of the shape of an inclusion. A new approach to solving such interface problems avoids mesh generation and gives second-order accurate results even in the vicinity of the interface. The Explicit Jump Immersed Interface Method is a finite difference method for elliptic partial differential equations that works on an equidistant Cartesian grid in spite of non-grid-aligned discontinuities in equation parameters and solution. Near discontinuities, the standard finite difference approximations are modified by adding correction terms that involve jumps in the function and its derivatives. This work derives the correction terms for two-dimensional linear elasticity with piecewise constant coefficients, i.e. for composite materials. It demonstrates numerically the convergence and approximation properties of the method.
The Folgar-Tucker equation (FTE) is the model most frequently used for the prediction of fiber orientation (FO) in simulations of the injection molding process for short-fiber reinforced thermoplastics. In contrast to its widespread use in injection molding simulations, little is known about the mathematical properties of the FTE: an investigation of e.g. its phase space M_FT has been presented only recently. The restriction of the dependent variable of the FTE to the set M_FT turns the FTE into a differential algebraic system (DAS), a fact which is commonly neglected when devising numerical schemes for the integration of the FTE. In this article we present some recent results on the problem of trace stability as well as some introductory material which complements our recent paper.
Inverse treatment planning for intensity-modulated radiotherapy is a multicriteria optimization problem: planners have to find optimal compromises between a sufficiently high dose in tumor tissue, which guarantees high tumor control, and dangerous overdosing of critical structures, which must be avoided to prevent normal-tissue complications. The approach presented in this work demonstrates how to state a flexible generic multicriteria model of the IMRT planning problem and how to produce clinically highly relevant Pareto solutions. The model is embedded in a principal concept of Reverse Engineering, a general optimization paradigm for design problems. Relevant parts of the Pareto set are approximated by using extreme compromises as cornerstone solutions, a concept that is always feasible if box constraints for the objective functions are available. A major practical drawback of generic multicriteria concepts that try to compute or approximate parts of the Pareto set is the high computational effort. This problem can be overcome by exploiting an inherent asymmetry of the IMRT planning problem and by an adaptive approximation scheme for optimal solutions based on an adaptive clustering preprocessing technique. Finally, a coherent approach for calculating and selecting solutions in a real-time interactive decision-making process is presented. The paper concludes with clinical examples and a discussion of ongoing research topics.
Territory design may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable according to relevant planning criteria. In this paper we review the existing literature for applications of territory design problems and solution approaches for solving these types of problems. After identifying features common to all applications we introduce a basic territory design model and present in detail two approaches for solving this model: a classical location–allocation approach combined with optimal split resolution techniques and a newly developed computational geometry based method. We present computational results indicating the efficiency and suitability of the latter method for solving large–scale practical problems in an interactive environment. Furthermore, we discuss extensions to the basic model and its integration into Geographic Information Systems.
In order to optimize the acoustic properties of a stacked fiber non-woven, the microstructure of the non-woven is modeled by a macroscopically homogeneous random system of straight cylinders (tubes). That is, the fibers are modeled by a spatially stationary random system of lines (Poisson line process), dilated by a sphere. Pressing the non-woven causes anisotropy. In our model, this anisotropy is described by a one-parametric distribution of the direction of the fibers. In the present application, the anisotropy parameter has to be estimated from 2d reflected light microscopic images of microsections of the non-woven. After fitting the model, the flow is computed in digitized realizations of the stochastic geometric model using the lattice Boltzmann method. Based on the flow resistivity, the formulas of Delany and Bazley predict the frequency-dependent acoustic absorption of the non-woven in the impedance tube. Using the geometric model, the description of a non-woven with improved acoustic absorption properties is obtained in the following way: first, the fiber thicknesses, porosity and anisotropy of the fiber system are modified. Then the flow and acoustics simulations are performed on the new sample. These two steps are repeated for various sets of parameters. Finally, the set of parameters for the geometric model leading to the best acoustic absorption is chosen.
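The last step of the chain, from flow resistivity to absorption, can be sketched directly. The following uses the standard textbook form of the empirical Delany-Bazley model for a porous layer on a rigid wall at normal incidence; the resistivity and thickness values in the test are arbitrary, not fitted to any non-woven from the paper.

```python
import numpy as np

def delany_bazley_alpha(f, sigma, d, rho0=1.204, c0=343.0):
    """Normal-incidence absorption coefficient of a porous layer of
    thickness d [m] with flow resistivity sigma [Pa s/m^2], backed by
    a rigid wall, via the Delany-Bazley characteristic impedance and
    wavenumber."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700
                                - 1j * 0.189 * X**-0.595)
    Zs = -1j * Zc / np.tan(k * d)          # surface impedance, rigid backing
    R = (Zs - rho0 * c0) / (Zs + rho0 * c0)
    return 1 - np.abs(R) ** 2
```

The empirical fit is only valid for roughly 0.01 < X < 1, which covers the audible range for typical non-woven resistivities.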
We consider the contact of two elastic bodies with rough surfaces at the interface. The size of the micropeaks and valleys is very small compared with the macrosize of the bodies’ domains. This makes the direct application of the FEM for the calculation of the contact problem prohibitively costly. A method is developed that allows deriving a macrocontact condition on the interface. The method involves a two-scale asymptotic homogenization procedure that takes into account the microgeometry of the interface layer and the stiffnesses of the materials of both domains. The macrocontact condition can then be used in a FEM model for the contact problem on the macrolevel. The averaged contact stiffness obtained allows the replacement of the interface layer in the macromodel by the macrocontact condition.
IMRT planning on adaptive volume structures – a significant advance of computational complexity
(2004)
In intensity-modulated radiotherapy (IMRT) planning the oncologist faces the challenging task of finding a treatment plan that he considers to be an ideal compromise of the inherently contradictory goals of delivering a sufficiently high dose to the target while widely sparing critical structures. The search for this a priori unknown compromise typically requires the computation of several plans, i.e. the solution of several optimization problems. This accumulates to a high computational expense due to the large scale of these problems, a consequence of the discrete problem formulation. This paper presents the adaptive clustering method as a new algorithmic concept to overcome these difficulties. The computations are performed on an individually adapted structure of voxel clusters rather than on the original voxels, leading to a decisively reduced computational complexity, as numerical examples on real clinical data demonstrate. In contrast to many other similar concepts, the typical trade-off between a reduction in computational complexity and a loss in exactness can be avoided: the adaptive clustering method produces the optimum of the original problem. This flexible method can be applied to both single- and multi-criteria optimization methods based on most of the convex evaluation functions used in practice.
After a short introduction to the basic ideas of lattice Boltzmann methods and a brief description of a modern parallel computer, it is shown how lattice Boltzmann schemes are successfully applied for simulating fluid flow in microstructures and calculating material properties of porous media. It is explained how lattice Boltzmann schemes compute the gradient of the velocity field without numerical differentiation. This feature is then utilised for the simulation of pseudo-plastic fluids, and numerical results are presented for a simple benchmark problem as well as for the simulation of liquid composite moulding.
Iterative solution of large scale systems arising after discretization and linearization of the unsteady non-Newtonian Navier–Stokes equations is studied. The Cross-WLF model is used to account for the non-Newtonian behavior of the fluid. A finite volume method is used to discretize the governing system of PDEs. Viscosity is treated explicitly (i.e., it is taken from the previous time step), while other terms are treated implicitly. Different preconditioners (block-diagonal, block-triangular, relaxed incomplete LU factorization, etc.) are used in conjunction with advanced iterative methods, namely BiCGStab, CGS, and GMRES. The action of the preconditioner in fact requires inverting different blocks. For this purpose, in addition to preconditioned BiCGStab, CGS, and GMRES, we also use an algebraic multigrid method (AMG). The performance of the iterative solvers is studied with respect to the number of unknowns, characteristic velocity in the basic flow, time step, deviation from Newtonian behavior, etc. Results from numerical experiments are presented and discussed.
In this paper we consider numerical algorithms for solving a system of nonlinear PDEs arising in modeling of liquid polymer injection. We investigate the particular case when a porous preform is located within the mould, so that the liquid polymer flows through a porous medium during the filling stage. The nonlinearity of the governing system of PDEs is due to the non-Newtonian behavior of the polymer, as well as due to the moving free boundary. The latter is related to the penetration front and a Stefan type problem is formulated to account for it. A finite-volume method is used to approximate the given differential problem. Results of numerical experiments are presented. We also solve an inverse problem and present algorithms for the determination of the absolute preform permeability coefficient in the case when the velocity of the penetration front is known from measurements. In both cases (direct and inverse problems) we emphasize the specifics related to the non-Newtonian behavior of the polymer. For completeness, we also discuss the Newtonian case. Results of some experimental measurements are presented and discussed.
Finite difference discretizations of 1D poroelasticity equations with discontinuous coefficients are analyzed. A recently suggested FD discretization of poroelasticity equations with constant coefficients on a staggered grid, [5], is used as a basis. A careful treatment of the interfaces leads to harmonic averaging of the discontinuous coefficients. Here, convergence for the pressure and for the displacement is proven in certain norms for the scheme with harmonic averaging (HA). Order of convergence 1.5 is proven for an arbitrarily located interface, and second order convergence is proven for the case when the interface coincides with a grid node. Furthermore, following the ideas from [3], modified HA discretizations are suggested for particular cases. The velocity and the stress are approximated with second order on the interface in this case. It is shown that for a wide class of problems the modified discretization provides better accuracy. Second order convergence for the modified scheme is proven for the case when the interface coincides with a displacement grid node. Numerical experiments are presented in order to illustrate our considerations.
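The averaging step at the heart of the HA scheme can be illustrated in a few lines. This is a minimal sketch, not the discretization from the report: it only contrasts the harmonic mean used at a cell interface with the arithmetic mean, which over-weights the stiffer material when the coefficient jumps.

```python
def harmonic_average(k_left, k_right):
    """Harmonic mean of two cell coefficients; the natural average at an
    interface with a discontinuous coefficient (it preserves flux continuity)."""
    return 2.0 * k_left * k_right / (k_left + k_right)

def arithmetic_average(k_left, k_right):
    """Arithmetic mean, shown only for comparison."""
    return 0.5 * (k_left + k_right)
```

For a strong contrast such as k = 1 and k = 100, the harmonic mean stays close to 2 while the arithmetic mean gives 50.5; the harmonic value is the one consistent with a continuous flux across the interface.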
In this paper, we discuss approaches related to the explicit modeling of human beings in software development processes. While in most older simulation models of software development processes, esp. those of the system dynamics type, humans are only represented as a labor pool, more recent models of the discrete-event simulation type require representations of individual humans. In that case, particularities regarding the person become more relevant. These individual effects are either considered as stochastic variations of productivity, or an explanation is sought based on individual characteristics, such as skills for instance. In this paper, we explore such possibilities by recurring to some basic results in psychology, sociology, and labor science. Various specific models for representing human effects in software process simulation are discussed.
In this work the problem of fluid flow in deformable porous media is studied. First, the stationary fluid-structure interaction (FSI) problem is formulated in terms of an incompressible Newtonian fluid and a linearized elastic solid. The flow is assumed to be characterized by a very low Reynolds number and is described by the Stokes equations. The strains in the solid are small, allowing the solid to be described by the Lamé equations, but no restrictions are applied on the magnitude of the displacements, leading to a strongly coupled, nonlinear fluid-structure problem. The FSI problem is then solved numerically by an iterative procedure which solves sequentially fluid and solid subproblems. Each of the two subproblems is discretized by finite elements and the fluid-structure coupling is reduced to an interface boundary condition. Several numerical examples are presented and the results from the numerical computations are used to perform permeability computations for different geometries.
In soil mechanics, the assumption of only vertical subsidence is often invoked and this leads to the one-dimensional model of poroelasticity. The classical model of linear poroelasticity was obtained by Biot [1]; a detailed derivation can be found, e.g., in [2]. This model is applicable also to modelling certain processes in geomechanics, hydrogeology and petroleum engineering (see, e.g., [3, 8]), in biomechanics (e.g., [9, 10]), in filtration (e.g., filter cake formation, see [15, 16, 17]), in paper manufacturing (e.g., [11, 12]), in printing (e.g., [13]), etc. Finite element and finite difference methods were applied by many authors for the numerical solution of the Biot system of PDEs, see e.g. [3, 4, 5] and references therein. However, as is well known, the standard FEM and FDM methods are subject to numerical instabilities at the first time steps. To avoid this, discretization on a staggered grid was suggested in [4, 5]. A single layer deformable porous medium was considered there. This paper can be viewed as an extension of [4, 5] to the case of multilayered deformable porous media. A finite volume discretization of the interface problem for the classical one-dimensional Biot model of the consolidation process is applied here. The following assumptions are supposed to be valid: each of the porous layers is composed of an incompressible solid matrix and is homogeneous and isotropic. Furthermore, one of the two following assumptions holds: the porous medium is not completely saturated and the fluid is incompressible, or the porous medium is completely saturated and the fluid is slightly compressible. The remainder of the paper is organised as follows. The next section presents the mathematical model. The third section is devoted to the discretization of the continuous problem. The fourth section contains the results from the numerical experiments.
A spectral theory for constituents of macroscopically homogeneous random microstructures modeled as homogeneous random closed sets is developed and provided with a sound mathematical basis, where the spectrum obtained by Fourier methods corresponds to the angular intensity distribution of x-rays scattered by this constituent. It is shown that the fast Fourier transform applied to three-dimensional images of microstructures obtained by micro-tomography is a powerful tool of image processing. The applicability of this technique is demonstrated in the analysis of images of porous media.
No doubt: Mathematics has become a technology in its own right, maybe even a key technology. Technology may be defined as the application of science to the problems of commerce and industry. And science? Science may be defined as developing, testing and improving models for the prediction of system behavior; the language used to describe these models is mathematics and mathematics provides methods to evaluate these models. Here we are! Why has mathematics become a technology only recently? Since it got a tool, a tool to evaluate complex, "near to reality" models: the computer! The model may be quite old - Navier-Stokes equations describe flow behavior rather well, but to solve these equations for realistic geometry and higher Reynolds numbers with sufficient precision is even for powerful parallel computing a real challenge. Make the models as simple as possible, as complex as necessary - and then evaluate them with the help of efficient and reliable algorithms: These are genuine mathematical tasks.
A non-linear multigrid solver for incompressible Navier-Stokes equations, exploiting finite volume discretization of the equations, is extended by adaptive local refinement. The multigrid is the outer iterative cycle, while the SIMPLE algorithm is used as a smoothing procedure. Error indicators are used to define the refinement subdomain. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. The multigrid adaptive local refinement algorithm is tested on the 2D Poisson equation and further applied to lid-driven flows in a cavity (2D and 3D cases), comparing the results with benchmark data. The software design principles of the solver are also discussed.
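As an illustration of the kind of smoother used inside such a multigrid cycle (the report uses SIMPLE for Navier-Stokes; the sketch below shows plain Gauss-Seidel on the 2D Poisson test problem mentioned above, a deliberate simplification):

```python
def gauss_seidel_sweep(u, f, h):
    """One lexicographic Gauss-Seidel sweep for -Laplace(u) = f on a
    uniform grid with mesh size h; boundary entries of u stay fixed
    (homogeneous Dirichlet data live on the boundary)."""
    n = len(u)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            # 5-point stencil update using the most recent neighbor values
            u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                              + u[i][j-1] + u[i][j+1] + h * h * f[i][j])
    return u
```

Within a multigrid cycle only a few such sweeps are performed per level; the smoother damps high-frequency error components, while the coarse grids handle the smooth ones.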
One of the main goals of an organization developing software is to increase the quality of the software while at the same time decreasing the costs and the duration of the development process. To achieve this, various decisions affecting this goal before and during the development process have to be made by the managers. Simulation models of the software life cycle are one appropriate tool for decision support; they also help to understand the dynamics of the software development process. Building up a simulation model requires a mathematical description of the interactions between different objects involved in the development process. Based on experimental data, techniques from the field of knowledge discovery can be used to quantify these interactions and to generate new process knowledge based on the analysis of the determined relationships. In this paper blocked neuronal networks and related relevance measures will be presented as an appropriate tool for quantification and validation of qualitatively known dependencies in the software development process.
The objective of the present article is to give an overview of an application of Fuzzy Logic in Regulation Thermography, a method of medical diagnosis support. An introduction to this method of the complementary medical science based on temperature measurements – so-called thermograms – is provided. The process of modelling the physician’s thermogram evaluation rules using the calculus of Fuzzy Logic is explained.
In this paper we focus on the strategic design of supply chain networks. We propose a mathematical modeling framework that simultaneously captures many practical aspects of network design problems which have not received adequate attention in the literature. The aspects considered include: dynamic planning horizon, generic supply chain network structure, external supply of materials, inventory opportunities for goods, distribution of commodities, facility configuration, availability of capital for investments, and storage limitations. Moreover, network configuration decisions concerning the gradual relocation of facilities over the planning horizon are considered. To cope with fluctuating demands, capacity expansion and reduction scenarios are also analyzed as well as modular capacity shifts. The relation of the proposed modeling framework with existing models is discussed. For problems of reasonable size we report on our computational experience with standard mathematical programming software. In particular, useful insights on the impact of various factors on network design decisions are provided.
In this article, we consider the problem of planning inspections and other tasks within a software development (SD) project with respect to the objectives quality (no. of defects), project duration, and costs. Based on a discrete-event simulation model of SD processes comprising the phases coding, inspection, test, and rework, we present a simplified formulation of the problem as a multiobjective optimization problem. For solving the problem (i.e. finding an approximation of the efficient set) we develop a multiobjective evolutionary algorithm. Details of the algorithm are discussed as well as results of its application to sample problems.
Radiation therapy planning is always a tight rope walk between dangerous insufficient dose in the target volume and life threatening overdosing of organs at risk. Finding ideal balances between these inherently contradictory goals challenges dosimetrists and physicians in their daily practice. Today's planning systems are typically based on a single evaluation function that measures the quality of a radiation treatment plan. Unfortunately, such a one dimensional approach cannot satisfactorily map the different backgrounds of physicians and the patient dependent necessities. So, too often a time consuming iteration process between evaluation of dose distribution and redefinition of the evaluation function is needed. In this paper we propose a generic multi-criteria approach based on Pareto's solution concept. For each entity of interest (target volume or organ at risk), a structure dependent evaluation function is defined measuring deviations from ideal doses that are calculated from statistical functions. A reasonable set of clinically meaningful Pareto optimal solutions is stored in a database, which can be interactively searched by physicians. The system guarantees dynamic planning as well as the discussion of trade-offs between different entities. Mathematically, we model the upcoming inverse problem as a multi-criteria linear programming problem. Because of the large scale nature of the problem it is not possible to solve the problem in a 3D-setting without adaptive reduction by appropriate approximation schemes. Our approach is twofold: First, the discretization of the continuous problem is based on an adaptive hierarchical clustering process which is used for a local refinement of constraints during the optimization procedure. Second, the set of Pareto optimal solutions is approximated by an adaptive grid of representatives that are found by a hybrid process of calculating extreme compromises and interpolation methods.
Industrial analog circuits are usually designed using numerical simulation tools. To obtain a deeper circuit understanding, symbolic analysis techniques can additionally be applied. Approximation methods which reduce the complexity of symbolic expressions are needed in order to handle industrial-sized problems. This paper will give an overview to the field of symbolic analog circuit analysis. Starting with a motivation, the state-of-the-art simplification algorithms for linear as well as for nonlinear circuits are presented. The basic ideas behind the different techniques are described, whereas the technical details can be found in the cited references. Finally, the application of linear and nonlinear symbolic analysis will be shown on two example circuits.
The asymptotic homogenisation technique and two-scale convergence are used for the analysis of macro-strength and fatigue durability of composites with a periodic structure under cyclic loading. The linear damage accumulation rule is employed in the phenomenological micro-durability conditions (for each component of the composite) under varying cyclic loading. Both local and non-local strength and durability conditions are analysed. The strong convergence of the strength and fatigue damage measure as the structure period tends to zero is proved and their limiting values are estimated.
We present two heuristic methods for solving the Discrete Ordered Median Problem (DOMP), for which no such approaches have been developed so far. The DOMP generalizes classical discrete facility location problems, such as the p-median, p-center and Uncapacitated Facility Location problems. The first procedure proposed in this paper is based on a genetic algorithm developed by Moreno Vega [MV96] for p-median and p-center problems. Additionally, a second heuristic approach based on the Variable Neighborhood Search metaheuristic (VNS) proposed by Hansen & Mladenovic [HM97] for the p-median problem is described. An extensive numerical study is presented to show the efficiency of both heuristics and compare them.
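The interchange (1-swap) neighborhood underlying such heuristics can be sketched for the plain p-median case. This is an illustrative best-improvement local search, not the genetic or VNS procedure of the report; the instance layout and starting solution are assumptions made here for the example.

```python
def pmedian_cost(dist, medians):
    """Total cost: each client is served by its nearest open median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def swap_local_search(dist, p):
    """Best-improvement 1-swap (interchange) local search for p-median,
    started from the first p sites."""
    n = len(dist)
    current = list(range(p))
    best = pmedian_cost(dist, current)
    while True:
        best_move = None
        for out in current:                 # close one open median ...
            for cand in range(n):           # ... and open a closed site
                if cand in current:
                    continue
                trial = [m for m in current if m != out] + [cand]
                c = pmedian_cost(dist, trial)
                if best_move is None or c < best_move[0]:
                    best_move = (c, trial)
        if best_move is not None and best_move[0] < best:
            best, current = best_move
        else:
            return sorted(current), best
```

On two well-separated clusters of points on a line, the search moves one median into each cluster and stops at the optimum; metaheuristics such as VNS wrap perturbation steps around exactly this kind of local descent.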
The Discrete Ordered Median Problem (DOMP) generalizes classical discrete location problems, such as the N-median, N-center and Uncapacitated Facility Location problems. It was introduced by Nickel [16], who formulated it as both a nonlinear and a linear integer program. We propose an alternative integer linear programming formulation for the DOMP, discuss relationships between both integer linear programming formulations, and show how properties of optimal solutions can be used to strengthen these formulations. Moreover, we present a specific branch and bound procedure to solve the DOMP more efficiently. We test the integer linear programming formulations and this branch and bound method computationally on randomly generated test problems.
A new stability preserving model reduction algorithm for discrete linear SISO-systems based on their impulse response is proposed. Similar to the Padé approximation, an equation system for the Markov parameters involving the Hankel matrix is considered, that here however is chosen to be of very high dimension. Although this equation system therefore in general cannot be solved exactly, it is proved that the approximate solution, computed via the Moore-Penrose inverse, gives rise to a stability preserving reduction scheme, a property that cannot be guaranteed for the Padé approach. Furthermore, the proposed algorithm is compared to another stability preserving reduction approach, namely the balanced truncation method, showing comparable performance of the reduced systems. The balanced truncation method however starts from a state space description of the systems and in general is expected to be more computationally demanding.
In this paper, we present a novel multicriteria decision support system (MCDSS), called knowCube, consisting of components for knowledge organization, generation, and navigation. Knowledge organization rests upon a database for managing qualitative and quantitative criteria, together with add-on information. Knowledge generation serves to fill the database via, e.g., identification, optimization, classification or simulation. For "finding needles in haycocks", the knowledge navigation component supports graphical database retrieval and interactive, goal-oriented problem solving. Navigation "helpers" are, for instance, cascading criteria aggregations, modifiable metrics, ergonomic interfaces, and customizable visualizations. Examples from real-life projects, e.g. in industrial engineering and in the life sciences, illustrate the application of our MCDSS.
This paper concerns numerical simulation of flow through oil filters. Oil filters consist of a filter housing (filter box) and a porous filtering medium, which completely separates the inlet from the outlet. We discuss mathematical models describing coupled flows in the pure liquid subregions and in the porous filter media, as well as interface conditions between them. Further, we reformulate the problem in the manner of the fictitious regions method, and discuss peculiarities of the numerical algorithm in solving the coupled system. Next, we show numerical results validating the model and the algorithm. Finally, we present results from simulation of 3-D oil flow through a real car filter.
On a Multigrid Adaptive Refinement Solver for Saturated Non-Newtonian Flow in Porous Media

A multigrid adaptive refinement algorithm for non-Newtonian flow in porous media is presented. The saturated flow of a non-Newtonian fluid is described by the continuity equation and the generalized Darcy law. The resulting second order nonlinear elliptic equation is discretized by a finite volume method on a cell-centered grid. A nonlinear full-multigrid, full-approximation-storage algorithm is implemented. As a smoother, a single grid solver based on Picard linearization and Gauss-Seidel relaxation is used. Further, a local refinement multigrid algorithm on a composite grid is developed. A residual based error indicator is used in the adaptive refinement criterion. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Several results from numerical experiments are presented in order to examine the performance of the solver.
We consider the problem of pricing European forward starting options in the presence of stochastic volatility. By performing a change of measure using the asset price at the time of strike determination as a numeraire, we derive a closed-form solution based on Heston’s model of stochastic volatility.
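The closed form derived in the report is for Heston's stochastic volatility model. As a rough illustrative check only, a forward-start call (strike set equal to the asset price at the strike-determination time t1) can be priced by Monte Carlo under constant-volatility geometric Brownian motion, a deliberate simplification that is not the report's model; all parameter values below are assumptions for the example.

```python
import math
import random

def forward_start_call_mc(s0, r, sigma, t1, t2, n_paths=40000, seed=7):
    """Monte Carlo price of a forward-start call paying max(S(t2) - S(t1), 0),
    under geometric Brownian motion with constant volatility sigma."""
    rng = random.Random(seed)
    disc = math.exp(-r * t2)
    tau = t2 - t1
    total = 0.0
    for _ in range(n_paths):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        # simulate the asset at strike determination (t1) and maturity (t2)
        s1 = s0 * math.exp((r - 0.5 * sigma**2) * t1 + sigma * math.sqrt(t1) * z1)
        s2 = s1 * math.exp((r - 0.5 * sigma**2) * tau + sigma * math.sqrt(tau) * z2)
        total += max(s2 - s1, 0.0)
    return disc * total / n_paths
```

The change-of-numeraire idea of the report is visible even here: by homogeneity the price depends on S(t2)/S(t1), so the forward-start value equals the spot times an at-the-money call price over the period [t1, t2].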
We present a unified approach of several boundary conditions for lattice Boltzmann models. Its general framework is a generalization of previously introduced schemes such as the bounce-back rule, linear or quadratic interpolations, etc. The objectives are twofold: first, to give theoretical tools to study the existing boundary conditions and their corresponding accuracy; second, to design formally third-order accurate boundary conditions for general flows. Using these boundary conditions, Couette and Poiseuille flows are exact solutions of the lattice Boltzmann models for a Reynolds number Re = 0 (Stokes limit). Numerical comparisons are given for Stokes flows in periodic arrays of spheres and cylinders, linear periodic array of cylinders between moving plates and for Navier-Stokes flows in periodic arrays of cylinders for Re < 200. These results show a significant improvement of the overall accuracy when using the linear interpolations instead of the bounce-back reflection (up to an order of magnitude on the hydrodynamics fields). Further improvement is achieved with the new multi-reflection boundary conditions, reaching a level of accuracy close to the quasi-analytical reference solutions, even for rather modest grid resolutions and few points in the narrowest channels. More importantly, the pressure and velocity fields in the vicinity of the obstacles are much smoother with multi-reflection than with the other boundary conditions. Finally the good stability of these schemes is highlighted by some simulations of moving obstacles: a cylinder between flat walls and a sphere in a cylinder.
In this paper we consider short term storage systems. We analyze presorting strategies to improve the efficiency of these storage systems. The presorting task is called the Batch PreSorting Problem (BPSP). The BPSP is a variation of an assignment problem, i.e., it has an assignment problem kernel and some additional constraints. We present different types of these presorting problems, introduce mathematical programming formulations and prove the NP-completeness for one type of the BPSP. Experiments are carried out in order to compare the different model formulations and to investigate the behavior of these models.
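The assignment-problem kernel mentioned above can be illustrated by exhaustive enumeration. This is practical only for tiny instances and is not the report's formulation; the BPSP adds side constraints on top of this kernel, and the cost matrix below is an assumption for the example.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exact minimum-cost assignment of n items to n slots by enumerating
    all permutations (O(n!); illustration of the assignment kernel only)."""
    n = len(cost)
    best_perm, best_val = None, float("inf")
    for perm in permutations(range(n)):
        val = sum(cost[i][perm[i]] for i in range(n))
        if val < best_val:
            best_perm, best_val = perm, val
    return list(best_perm), best_val
```

For realistic sizes one would of course use a polynomial algorithm (e.g. the Hungarian method) or an integer programming solver, as the report does via its mathematical programming formulations.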
We consider some portfolio optimisation problems where either the investor has a desire for an a priori specified consumption stream and/or follows a deterministic pay-in scheme while also trying to maximize expected utility from final wealth. We derive explicit closed form solutions for continuous and discrete monetary streams. The mathematical method used is classical stochastic control theory.
If an investor borrows money, he generally has to pay higher interest rates than he would have received if he had put his funds in a savings account. The classical model of continuous time portfolio optimisation ignores this effect. Since there is obviously a connection between the default probability and the total percentage of wealth which the investor is in debt, we study portfolio optimisation with a control dependent interest rate. Assuming a logarithmic and a power utility function, respectively, we prove explicit formulae of the optimal control.
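For logarithmic utility the optimal stock fraction has a well-known three-regime structure (lend, stay fully invested, or borrow at the higher rate). The sketch below reproduces that structure as an illustration; it is not the report's derivation, which also covers power utility, and the parameter names are assumptions for the example.

```python
def optimal_fraction(mu, sigma, r_lend, r_borrow):
    """Optimal stock fraction for log utility when the borrowing rate
    r_borrow exceeds the lending rate r_lend (Merton fraction with a
    control-dependent interest rate; illustrative three-regime rule)."""
    pi_lend = (mu - r_lend) / sigma**2
    if pi_lend <= 1.0:
        return pi_lend          # no borrowing needed: plain Merton fraction
    pi_borrow = (mu - r_borrow) / sigma**2
    if pi_borrow >= 1.0:
        return pi_borrow        # the stock is attractive even at r_borrow
    return 1.0                  # stay fully invested, neither lend nor borrow
```

The middle regime is the new feature compared with the classical model: for a whole band of drifts the investor holds exactly 100% stock because borrowing is too expensive and lending too unattractive.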
Two approaches for determining the Euler-Poincaré characteristic of a set observed on lattice points are considered in the context of image analysis: the integral geometric and the polyhedral approach. Information about the set is assumed to be available on lattice points only. In order to retain properties of the Euler number and to provide a good approximation of the true Euler number of the original set in the Euclidean space, the appropriate choice of adjacency in the lattice for the set and its background is crucial. Adjacencies are defined using tessellations of the whole space into polyhedrons. In R^3, two new 14-adjacencies are introduced in addition to the well known 6- and 26-adjacencies. For the Euler number of a set and its complement, a consistency relation holds. Each of the pairs of adjacencies (14.1, 14.1), (14.2, 14.2), (6, 26), and (26, 6) is shown to be a pair of complementary adjacencies with respect to this relation. That is, the approximations of the Euler numbers are consistent if the set and its background (complement) are equipped with this pair of adjacencies. Furthermore, sufficient conditions for the correctness of the approximations of the Euler number are given. The analysis of selected microstructures and a simulation study illustrate how the estimated Euler number depends on the chosen adjacency. It also shows that there is not a uniquely best pair of adjacencies with respect to the estimation of the Euler number of a set in Euclidean space.
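In 2D the polyhedral approach amounts to counting vertices, edges and faces of the union of pixel squares and forming the alternating sum. A minimal sketch for 4-adjacency of the foreground (the 2D analogue of the lower adjacencies discussed above; the coordinate convention is an assumption made here):

```python
def euler_number_2d(pixels):
    """Euler characteristic chi = V - E + F of a union of closed unit
    squares; each pixel is given by the integer coordinates (x, y) of its
    lower-left corner. Shared vertices/edges are counted once via sets."""
    pixels = set(pixels)
    vertices, edges = set(), set()
    for (x, y) in pixels:
        for v in ((x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)):
            vertices.add(v)
        edges.update({((x, y), (x + 1, y)),          # bottom edge
                      ((x, y + 1), (x + 1, y + 1)),  # top edge
                      ((x, y), (x, y + 1)),          # left edge
                      ((x + 1, y), (x + 1, y + 1))}) # right edge
    return len(vertices) - len(edges) + len(pixels)
```

A single pixel and a filled 2x2 block both give chi = 1, while a 3x3 block with the center removed gives chi = 0, reflecting the hole.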
Lattice Boltzmann Model for Free-Surface flow and Its Application to Filling Process in Casting
(2002)
A generalized lattice Boltzmann model to simulate free-surface flow is constructed in both two and three dimensions. The proposed model satisfies the interfacial boundary conditions accurately. A distinctive feature of the model is that the collision process is carried out only on the points occupied partially or fully by the fluid. To maintain a sharp interfacial front, the method includes an anti-diffusion algorithm. The unknown distribution functions at the interfacial region are constructed according to the first order Chapman-Enskog analysis. The interfacial boundary conditions are satisfied exactly by the coefficients in the Chapman-Enskog expansion. The distribution functions are naturally expressed in the local interfacial coordinates. The macroscopic quantities at the interface are extracted from the least-square solutions of a locally linearized system obtained from the known distribution functions. The proposed method does not require any geometric front construction and is robust for any interfacial topology. Simulation results of realistic filling process are presented: rectangular cavity in two dimensions and Hammer box, Campbell box, Sheffield box, and Motorblock in three dimensions. To enhance the stability at high Reynolds numbers, various upwind-type schemes are developed. Free-slip and no-slip boundary conditions are also discussed.
In the present paper a kinetic model for vehicular traffic leading to multivalued fundamental diagrams is developed and investigated in detail. For this model phase transitions can appear depending on the local density and velocity of the flow. A derivation of associated macroscopic traffic equations from the kinetic equation is given. Moreover, numerical experiments show the appearance of stop and go waves for highway traffic with a bottleneck.
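The associated macroscopic equations are of scalar conservation-law type. As a generic illustration only (not the multivalued kinetic model of the report), a first-order Godunov step for the classical LWR traffic equation with the Greenshields flux f(ρ) = ρ(1 - ρ) can be sketched as follows:

```python
def godunov_flux(rl, rr):
    """Godunov numerical flux for the concave flux f(r) = r(1 - r),
    whose maximum sits at r = 1/2."""
    f = lambda r: r * (1.0 - r)
    if rl <= rr:
        return min(f(rl), f(rr))          # minimize over [rl, rr]
    # rl > rr: maximize over [rr, rl]
    return f(0.5) if rr <= 0.5 <= rl else max(f(rl), f(rr))

def lwr_step(rho, dt_over_dx):
    """One explicit Godunov step for rho_t + f(rho)_x = 0 on a periodic
    road, given the cell averages rho and the ratio dt/dx (CFL-limited)."""
    n = len(rho)
    flux = [godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)]
    return [rho[i] - dt_over_dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Starting from a dense platoon behind light traffic, this monotone scheme conserves the total number of cars exactly and keeps densities in [0, 1]; the multivalued fundamental diagrams of the report require the richer kinetic description rather than a single flux function.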
To a network N(q) with determinant D(s;q) depending on a parameter vector q ∈ R^r, a network N^(q) is assigned via identification of some of its vertices. The paper deals with procedures to find N^(q) such that its determinant D^(s;q) admits a factorization into the determinants of appropriate subnetworks, and with the estimation of the deviation of the zeros of D^ from the zeros of D. To solve the estimation problem, state space methods are applied.
A spectral theory for stationary random closed sets is developed and provided with a sound mathematical basis. The definition and proof of existence of the Bartlett spectrum of a stationary random closed set, as well as the proof of a Wiener-Khintchine theorem for the power spectrum, are used to two ends: first, well known second order characteristics like the covariance can be estimated faster than usual via frequency space. Second, the Bartlett spectrum and the power spectrum can be used as second order characteristics in frequency space. Examples show that in some cases information about the random closed set is easier to obtain from these characteristics in frequency space than from their real world counterparts.
Free Surface Lattice-Boltzmann Method To Model The Filling Of Expanding Cavities By Bingham Fluids
(2001)
The filling process of viscoplastic metal alloys and plastics in expanding cavities is modelled using the lattice Boltzmann method in two and three dimensions. These models combine the regularized Bingham model for viscoplastic flow with a free-interface algorithm. The latter is based on a modified immiscible lattice Boltzmann model in which one species is the fluid and the other one is considered as vacuum. The boundary conditions at the curved liquid-vacuum interface are met without any geometrical front reconstruction from a first-order Chapman-Enskog expansion. The numerical results obtained with these models are found to be in good agreement with available theoretical and numerical analysis.
A Lagrangian particle scheme is applied to the projection method for the incompressible Navier-Stokes equations. The approximation of spatial derivatives is obtained by the weighted least squares method. The pressure Poisson equation is solved by a local iterative procedure with the help of the least squares method. Numerical tests are performed for two dimensional cases. The Couette flow, Poiseuille flow, decaying shear flow and the driven cavity flow are presented. The numerical solutions are obtained for stationary as well as instationary cases and are compared with the analytical solutions for channel flows. Finally, the driven cavity in a unit square is considered and the stationary solution obtained from this scheme is compared with that from the finite element method.
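The weighted least squares approximation of derivatives from scattered particle data can be sketched in 1D: fit a linear polynomial a + b(x - x0) to the neighbor values and read off the slope. The Gaussian weight and the scaling are illustrative assumptions, not the report's exact stencil (which works in 2D with higher-order polynomials).

```python
import math

def wls_derivative(x0, xs, fs):
    """Weighted-least-squares estimate of f'(x0) from neighbor positions xs
    and values fs, using a linear fit a + b(x - x0) with Gaussian weights."""
    h = max(abs(x - x0) for x in xs) or 1.0          # support radius
    w = [math.exp(-((x - x0) / h) ** 2) for x in xs]
    # weighted normal equations for the coefficients [a, b]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * f for wi, f in zip(w, fs))
    t1 = sum(wi * (x - x0) * f for wi, x, f in zip(w, xs, fs))
    det = s0 * s2 - s1 * s1
    return (s0 * t1 - s1 * t0) / det                 # slope b = f'(x0)
```

Because the fit reproduces linear functions exactly for any positive weights, the estimate is exact for linear data regardless of how the particles are scattered, which is the consistency property such particle schemes rely on.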
In the Finite-Volume-Particle Method (FVPM), the weak formulation of a hyperbolic conservation law is discretized by restricting it to a discrete set of test functions. In contrast to the usual Finite-Volume approach, the test functions are not taken as characteristic functions of the control volumes in a spatial grid, but are chosen from a partition of unity with smooth and overlapping partition functions (the particles), which can even move along prescribed velocity fields. The information exchange between particles is based on standard numerical flux functions. Geometrical information, similar to the surface area of the cell faces in the Finite-Volume Method and the corresponding normal directions are given as integral quantities of the partition functions. After a brief derivation of the Finite-Volume-Particle Method, this work focuses on the role of the geometric coefficients in the scheme.
The objective of this paper is to bridge the gap between location theory and practice. To meet this objective, focus is given to the development of software capable of addressing the different needs of a wide group of users. There is a very active community on location theory encompassing many research fields such as operations research, computer science, mathematics, engineering, geography, economics and marketing. As a result, people working on facility location problems have a very diverse background and also different needs regarding the software to solve these problems. For those interested in non-commercial applications (e.g. students and researchers), the library of location algorithms (LoLA) can be of considerable assistance. LoLA contains a collection of efficient algorithms for solving planar, network and discrete facility location problems. In this paper, a detailed description of the functionality of LoLA is presented. In the fields of geography and marketing, for instance, solving facility location problems requires using large amounts of demographic data. Hence, members of these groups (e.g. urban planners and sales managers) often work with geographical information tools. To address the specific needs of these users, LoLA was linked to a geographical information system (GIS) and the details of the combined functionality are described in the paper. Finally, there is a wide group of practitioners who need to solve large problems and require special purpose software with a good data interface. Many such users can be found, for example, in the area of supply chain management (SCM). Logistics activities involved in strategic SCM include, among others, facility location planning. In this paper, the development of a commercial location software tool is also described. The tool is embedded in the Advanced Planner and Optimizer SCM software developed by SAP AG, Walldorf, Germany. The paper ends with some conclusions and an outlook to future activities.
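As an example of the type of planar location algorithm such a library contains (this is the classical Weiszfeld iteration for the 1-median, not LoLA's actual code), the point minimizing the weighted sum of Euclidean distances to demand points can be found by a fixed-point iteration:

```python
import math

def weiszfeld(points, weights=None, iters=200, tol=1e-9):
    """Planar 1-median (Fermat-Weber point) by Weiszfeld iteration.

    points  : list of (x, y) demand locations
    weights : optional demand weights (defaults to 1)
    Returns a location minimizing sum_i w_i * dist((x, y), p_i).
    """
    if weights is None:
        weights = [1.0] * len(points)
    # start at the weighted centroid
    W = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, points)) / W
    y = sum(w * p[1] for w, p in zip(weights, points)) / W
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for w, (px, py) in zip(weights, points):
            d = math.hypot(x - px, y - py)
            if d < tol:          # iterate coincides with a demand point
                return px, py
            num_x += w * px / d
            num_y += w * py / d
            denom += w / d
        nx, ny = num_x / denom, num_y / denom
        if math.hypot(nx - x, ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    return x, y
```

For the four corners of a square the optimum is the center, which makes a simple sanity check.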
This paper details models and algorithms which can be applied to evacuation problems. While it concentrates on building evacuation, many of the results are also applicable to regional evacuation. All models consider time as the main parameter, where the travel time between components of the building is part of the input and the overall evacuation time is the output. The paper distinguishes between macroscopic and microscopic evacuation models, both of which are able to capture the evacuees' movement over time. Macroscopic models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building. Macroscopic approaches which are based on dynamic network flow models (minimum cost dynamic flow, maximum dynamic flow, universal maximum flow, quickest path and quickest flow) are described. A special feature of the presented approach is the fact that travel times of evacuees are not restricted to be constant, but may be density dependent. Using multicriteria optimization, priority regions and blockage due to fire or smoke may be considered. It is shown how the modelling can be done using the time either as a discrete or a continuous parameter. Microscopic models are able to model the individual evacuee's characteristics and the interaction among evacuees which influence their movement. Due to the corresponding huge amount of data, one uses simulation approaches. Some probabilistic laws for the individual evacuee's movement are presented. Moreover, ideas to model the evacuees' movement using cellular automata (CA) and the resulting software are presented. In this paper we focus on macroscopic models and only summarize some of the results of the microscopic approach. While most of the results are applicable to general evacuation situations, we concentrate on building evacuation.
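Dynamic network flow models of the kind listed above are often solved on a time-expanded network: each node is copied once per time step, travel arcs connect copies shifted by the travel time, and holdover arcs let evacuees wait. A minimal sketch (constant travel times only; the network and names are illustrative, and the paper's density-dependent variant is not captured here):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        parent = {s: None}            # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t               # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:             # update residual capacities
            cap[u][v] -= aug
            rev = cap.setdefault(v, {})
            rev[u] = rev.get(u, 0) + aug
        flow += aug

def max_flow_over_time(arcs, source, sink, T):
    """Maximum number of evacuees reaching the sink within T time steps.

    arcs : list of (u, v, capacity_per_step, travel_time).
    Builds the time-expanded network with holdover (waiting) arcs and
    runs a static max flow on it.
    """
    INF = float("inf")
    cap = {}
    def add(u, v, c):
        d = cap.setdefault(u, {})
        d[v] = d.get(v, 0) + c
    nodes = {a for a, _, _, _ in arcs} | {b for _, b, _, _ in arcs}
    for th in range(T + 1):
        add("S", (source, th), INF)       # evacuees may start at any time
        add((sink, th), "T", INF)         # and are safe once at the sink
        if th < T:
            for node in nodes:
                add((node, th), (node, th + 1), INF)  # waiting is allowed
        for u, v, c, tau in arcs:
            if th + tau <= T:
                add((u, th), (v, th + tau), c)
    return max_flow(cap, "S", "T")
```

With one corridor of capacity 2 per step and travel time 1, departures at times 0, 1, 2 reach the exit within a horizon of 3, giving 6 evacuees in total; a downstream bottleneck of capacity 1 reduces this accordingly.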
To simulate the influence of process parameters on the melt spinning process, a fiber model is used and coupled with CFD calculations of the quench air flow. In the fiber model, the energy, momentum and mass balances are solved for the polymer mass flow. To calculate the quench air flow, the Lattice Boltzmann method is used. Simulations and experiments for different process parameters and hole configurations are compared and show good agreement.
In this paper mathematical models for liquid films generated by impinging jets are discussed. Attention is focused on the interaction of the liquid film with an obstacle. S. G. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire which was used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model which makes it possible to simulate the film shape in cases where the angle between the jets differs from 180°. Numerical results obtained with the discussed models yield two different shapes of the liquid film, similar to those in Taylor's experiments. These two shapes depend on the regime: either droplets are produced close to the obstacle or they are not. The difference between the two regimes becomes larger as the angle between the jets decreases. The existence of these two regimes can be very important for some applications of impinging jets in which the generated liquid film can come into contact with obstacles.
In this paper we review image distortion measures. A distortion measure is a criterion that assigns a "quality number" to an image. We distinguish between mathematical distortion measures and distortion measures incorporating a priori knowledge about the imaging devices (e.g. satellite images), image processing algorithms or the human physiology. We consider representative examples of different kinds of distortion measures and discuss them.
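Two of the most common purely mathematical distortion measures are the mean squared error and the peak signal-to-noise ratio derived from it. A minimal sketch for 8-bit grayscale images given as flat pixel lists (illustrative helper names, not from the report):

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images,
    each given as a flat list of pixel intensities."""
    n = len(img_a)
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n

def psnr(img_a, img_b, max_val=255):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)
```

Measures of this kind treat all pixels alike; the a priori-knowledge measures discussed in the report additionally weight errors by their visibility or relevance.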
We examine the feasibility polyhedron of the uncapacitated hub location problem (UHL) with multiple allocation, which has applications in the fields of air passenger and cargo transportation, telecommunication and postal delivery services. In particular, we determine the dimension and derive some classes of facets of this polyhedron. We develop some general rules about lifting facets from the uncapacitated facility location (UFL) problem to UHL and projecting facets from UHL to UFL. By applying these rules we get a new class of facets for UHL which dominates the inequalities in the original formulation. Thus we get a new formulation of UHL whose constraints are all facet-defining. We show its superior computational performance by benchmarking it on a well-known data set.
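To make the underlying optimization problem concrete: multiple-allocation UHL chooses which hubs to open so that fixed opening costs plus the cheapest hub-to-hub routing costs are minimized, with inter-hub links discounted by a factor alpha. The following brute-force enumeration for tiny instances only illustrates the problem definition, not the polyhedral method of the paper:

```python
from itertools import combinations

def uhl_brute_force(n, fixed, dist, demand, alpha=0.75):
    """Uncapacitated multiple-allocation hub location by enumeration.

    n      : number of nodes (all are hub candidates)
    fixed  : fixed[k] = cost of opening a hub at node k
    dist   : dist[i][j] = unit transport cost between nodes i and j
    demand : demand[(i, j)] = flow from origin i to destination j
    alpha  : discount factor on inter-hub links

    Each origin-destination pair is routed i -> k -> l -> j over the
    cheapest open hub pair (k may equal l); multiple allocation means
    each node may use different hubs for different destinations.
    """
    best_cost, best_hubs = float("inf"), None
    for r in range(1, n + 1):
        for hubs in combinations(range(n), r):
            cost = sum(fixed[k] for k in hubs)
            for (i, j), w in demand.items():
                cost += w * min(dist[i][k] + alpha * dist[k][l] + dist[l][j]
                                for k in hubs for l in hubs)
            if cost < best_cost:
                best_cost, best_hubs = cost, hubs
    return best_cost, best_hubs
```

Exact methods like the facet-based formulation of the paper replace this exponential enumeration by an integer program with strong (facet-defining) inequalities.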
Given a public transportation system represented by its stops and direct connections between stops, we consider two problems dealing with the prices for the customers: the fare problem, in which subsets of stops are already aggregated to zones and "good" tariffs have to be found in the existing zone system, and the zone problem, in which the design of the zones is part of the problem. Closed-form solutions for the fare problem are presented for three objective functions. The zone problem is NP-hard, and we therefore propose three heuristics which prove to be very successful in the redesign of one of Germany's transportation systems.
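In a counting-zones tariff of the kind considered here, the fare of a trip typically depends only on the number of distinct zones it touches, and a natural objective compares the zone fares with given reference prices. A minimal sketch under these assumptions (the paper's precise objective functions may differ):

```python
def zone_fare(path_stops, zone_of, price_per_count):
    """Fare of a trip in a counting-zones tariff.

    path_stops      : sequence of stops visited on the trip
    zone_of         : dict mapping stop -> zone id
    price_per_count : price_per_count[z] = fare when z distinct
                      zones are touched
    """
    zones = {zone_of[s] for s in path_stops}
    return price_per_count[len(zones)]

def total_deviation(trips, zone_of, price_per_count):
    """Sum of absolute deviations between zone fares and reference
    prices -- one natural objective when choosing the tariff values."""
    return sum(abs(zone_fare(stops, zone_of, price_per_count) - ref)
               for stops, ref in trips)
```

Minimizing such a deviation over the tariff values `price_per_count` is the fare problem; letting the zone map `zone_of` vary as well yields the NP-hard zone problem.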