»For nothing is worth anything to man as man which he cannot do with passion« (Max Weber)
(2001)
Lecture given on the occasion of the award of the Academy Prize of the state of Rhineland-Palatinate on 21 November 2001. What makes a good university teacher? There are surely many different, subject-specific answers to this question, but also a few general points: research requires "passion" (Max Weber), out of which enthusiasm for teaching then also grows. Research and teaching belong together if science is to be conveyed as a living activity. The lecture gives examples of how, in applied mathematics, research problems arise from practical everyday questions and feed into teaching at various levels (from secondary school to graduate research training group); it thereby also leads over to a current research field, multiscale analysis, with its many applications in image processing, materials design, and fluid mechanics, which is only touched on briefly. Mathematics appears here as a modern key technology, one that also has close ties to the humanities and social sciences.
We consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company, with the possibility to directly affect the company's share price. She invests in the financial market, including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility; the power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions: the smaller listed company can offer less salary, because the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries and the IT sector.
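As background for the logarithmic-utility setting above, the classical Merton result (under log utility, the optimal fraction of wealth in the risky asset is (mu - r)/sigma^2) can be stated in a few lines. The numbers below are purely illustrative and not taken from the paper, which additionally optimizes consumption and work effort:

```python
def merton_fraction(mu, r, sigma):
    """Classical Merton result: with logarithmic utility the optimal
    fraction of wealth held in the risky asset is (mu - r) / sigma**2.
    Illustrative background only; the paper's model also includes
    consumption and work-effort controls."""
    return (mu - r) / sigma**2

# hypothetical market parameters: 8% drift, 2% riskless rate, 20% volatility
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.2)  # -> 1.5 (leveraged position)
```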
In this expository article, we give an introduction to the basics of bootstrap tests in general. We discuss the residual-based and the wild bootstrap for regression models suitable for applications in signal and image analysis. As an illustration of the general idea, we consider a particular test for detecting differences between two noisy signals or images which also works for noise with variable variance. The test statistic is essentially the integrated squared difference between the signals after denoising them by local smoothing. Determining its quantile, which marks the boundary between accepting and rejecting the hypothesis of equal signals, is hardly possible by standard asymptotic methods, whereas the bootstrap works well. Applied to the rows and columns of images, the resulting algorithm not only allows for the detection of defects but also for the characterization of their location and shape in surface inspection problems.
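The wild bootstrap idea described above can be sketched compactly. The following is a minimal illustration, assuming a simple moving-average smoother and Rademacher-weighted residuals; the function names and smoothing choices are mine, not the authors' implementation:

```python
import numpy as np

def smooth(y, h=5):
    # simple moving-average smoother (stand-in for a local kernel smoother)
    k = np.ones(2 * h + 1) / (2 * h + 1)
    return np.convolve(y, k, mode="same")

def wild_bootstrap_test(y1, y2, n_boot=500, h=5, rng=None):
    """Test H0: both noisy signals share the same underlying mean curve.

    Returns (statistic, p_value). The statistic is the integrated squared
    difference of the smoothed signals; under H0 the residuals from the
    pooled smooth are perturbed by random Rademacher signs (wild
    bootstrap), which preserves locally varying noise levels."""
    rng = np.random.default_rng(rng)
    stat = np.mean((smooth(y1, h) - smooth(y2, h)) ** 2)
    pooled = smooth(0.5 * (y1 + y2), h)          # estimate of the common signal
    r1, r2 = y1 - pooled, y2 - pooled            # residuals keep local variance
    count = 0
    for _ in range(n_boot):
        v1 = rng.choice([-1.0, 1.0], size=r1.size)   # Rademacher weights
        v2 = rng.choice([-1.0, 1.0], size=r2.size)
        b1, b2 = pooled + v1 * r1, pooled + v2 * r2  # resampled signals under H0
        stat_b = np.mean((smooth(b1, h) - smooth(b2, h)) ** 2)
        count += stat_b >= stat
    return stat, (count + 1) / (n_boot + 1)
```

For two realizations of the same signal the p-value is large; for signals differing by a visible offset the observed statistic exceeds essentially all bootstrap replicates and the p-value is small.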
This paper discusses the possibility of using and applying the ideas of the Wave Based Method, which has been developed especially for steady-state acoustic problems, i.e. for solving Helmholtz-type boundary value problems in a bounded domain, in non-acoustic areas such as steady-state temperature propagation, calculation of the velocity potential function of a liquid flux, calculation of the light irradiance in liver tissue/tumor, etc.
We present a two-scale finite element method for solving Brinkman's and Darcy's equations. These systems of equations model fluid flows in highly porous and porous media, respectively. The method uses a recently proposed discontinuous Galerkin FEM for Stokes' equations by Wang and Ye and the concept of subgrid approximation developed by Arbogast for Darcy's equations. In order to reduce the "resonance error" and to ensure convergence to the global fine solution, the algorithm is put in the framework of alternating Schwarz iterations using subdomains around the coarse-grid boundaries. The discussed algorithms are implemented using the deal.II finite element library and are tested on a number of model problems.
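The alternating Schwarz iteration used to couple the subdomain solves can be illustrated on a much simpler model than the Brinkman/Darcy system: a 1D Poisson problem with two overlapping subdomains. This is only a sketch of the Schwarz idea, not the paper's discontinuous Galerkin/subgrid scheme:

```python
import numpy as np

def solve_poisson_dirichlet(f, a, b, ua, ub, n):
    """Solve -u'' = f on (a, b) with u(a)=ua, u(b)=ub by central finite differences."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    main = 2.0 * np.ones(n - 2)
    off = -1.0 * np.ones(n - 3)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = h * h * f(x[1:-1])
    rhs[0] += ua          # fold boundary values into the right-hand side
    rhs[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

def alternating_schwarz(f, n=101, overlap=0.2, iters=20):
    """Alternating Schwarz for -u'' = f on (0,1), u(0)=u(1)=0, with
    overlapping subdomains (0, 0.5+overlap/2) and (0.5-overlap/2, 1).
    Each local solve uses the latest global iterate as interface data."""
    x = np.linspace(0.0, 1.0, n)
    u = np.zeros(n)
    i1 = np.where(x <= 0.5 + overlap / 2)[0]      # left subdomain indices
    i2 = np.where(x >= 0.5 - overlap / 2)[0]      # right subdomain indices
    for _ in range(iters):
        # left solve reads its right interface value from the current iterate
        _, u1 = solve_poisson_dirichlet(f, x[i1[0]], x[i1[-1]], 0.0, u[i1[-1]], i1.size)
        u[i1] = u1
        # right solve reads the freshly updated left solution at its interface
        _, u2 = solve_poisson_dirichlet(f, x[i2[0]], x[i2[-1]], u[i2[0]], 0.0, i2.size)
        u[i2] = u2
    return x, u
```

With f(x) = pi^2 sin(pi x) the iteration converges to the exact solution sin(pi x) up to discretization error; larger overlap gives faster contraction, which is the same mechanism that damps the "resonance error" in the two-scale setting.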
In this paper we investigate the use of the sharp function, known from functional analysis, in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function to measure anisotropic behaviour and give a fast anisotropic edge-detection variant inspired by the sharp function. We show that these edge-detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
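A discrete analogue of the sharp function as an edge detector can be sketched as follows; the window radii and the brute-force implementation are illustrative choices of mine, not the paper's fast variant:

```python
import numpy as np

def sharp_function(img, radii=(1, 2, 3)):
    """Discrete analogue of the sharp maximal function: for each pixel,
    the largest mean oscillation |f - mean| over a few centered square
    windows. Large values flag edges; constant regions give 0."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            best = 0.0
            for r in radii:
                win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                best = max(best, np.mean(np.abs(win - win.mean())))
            out[i, j] = best
    return out
```

On a step image (left half 0, right half 1) the response is zero in the flat regions and large along the jump, which is exactly the edge-indicator behaviour used to steer the diffusion filters.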
The simulation of test rigs, and in particular of assemblies and full vehicles on test rigs, by coupling multibody simulation with models for control and actuation makes a substantial contribution to shortening development times. This contribution presents a cooperation project in which a co-simulation model was built for the moving masses as well as the control and hydraulics of a full-vehicle test rig. Both the validation of the vehicle model against road measurements and the identification and validation of the test rig model, including servo-hydraulics and control, are addressed.
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, and digital cameras. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight, high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and more cheaply. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes, an anode and a cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases, solid and electrolyte. We call the macroscale the length scale of the whole electrode, and the microscale the length scale at which we can distinguish the complex porous structure of the electrodes. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom, which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the length scale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques: the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of a periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions.
We rigorously determine the asymptotic order of the interface exchange current densities, and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of a random electrode microstructure, we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
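The simplest instance of the asymptotic homogenization idea, far removed from the full battery system above, is 1D diffusion with a periodic coefficient, where the effective coefficient is the harmonic mean over the periodicity cell. A minimal numerical check, assuming this textbook 1D setting:

```python
import numpy as np

def effective_coefficient_1d(coef, n_cells=200):
    """For -(a(x/eps) u')' = f in 1D with a periodic coefficient a,
    asymptotic homogenization gives the effective coefficient
    a* = (mean(1/a))^-1, the harmonic mean over the unit cell.
    `coef` maps the periodicity cell y in [0, 1) to a(y) > 0."""
    y = (np.arange(n_cells) + 0.5) / n_cells     # midpoints of the unit cell
    return 1.0 / np.mean(1.0 / coef(y))

# two-phase laminate: a = 1 on half the cell, a = 10 on the other half;
# the harmonic mean 2 / (1 + 1/10) = 20/11 is well below the arithmetic mean 5.5
a_star = effective_coefficient_1d(lambda y: np.where(y < 0.5, 1.0, 10.0))
```

The gap between harmonic and arithmetic mean is why naive coefficient averaging overestimates transport; the cell-problem machinery in higher dimensions, and the Butler-Volmer interface upscaling in the battery model, generalize this computation.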
In the ground vehicle industry, it is often an important task to simulate full vehicle models based on the wheel forces and moments that have been measured while driving a prototype vehicle over certain roads. The models are described by a system of differential algebraic equations (DAE) or ordinary differential equations (ODE). The goal of the simulation is to derive section forces at certain components for a durability assessment. In contrast to handling simulations, which are performed with more or less complex tyre models, a driver model, and a digital road profile, the models we use here usually contain neither the tyres nor a driver model. Instead, the measured wheel forces are used to excite the unconstrained model. This can be difficult due to noise in the input data, which leads to an undesired drift of the vehicle model in the simulation.
One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then to interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded, to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a very small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included in the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects.
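One way such a trade-off bound could act on a plan data set is sketched below: a plan is discarded when another plan is significantly better in some criterion while being at most insignificantly worse in every criterion. The thresholds and the pairwise filter are illustrative assumptions of mine, not the paper's exact integration into the approximation scheme:

```python
import numpy as np

def prune_by_tradeoff(plans, insignificant=0.01, significant=0.1):
    """Prune a set of plans (rows = plans, columns = criteria, all to be
    minimized). A plan is dropped when some other plan is clearly
    superior in the trade-off-bound sense: at least `significant`
    better in one criterion while at most `insignificant` worse in
    every criterion. Threshold values are illustrative, not clinical."""
    plans = np.asarray(plans, dtype=float)
    keep = []
    for i, p in enumerate(plans):
        dominated = False
        for j, q in enumerate(plans):
            if i == j:
                continue
            # q never meaningfully worse than p, and clearly better somewhere
            if np.all(q - p <= insignificant) and np.any(p - q >= significant):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return plans[keep]
```

For example, of the Pareto-optimal plans (0.0, 1.0) and (0.005, 0.8), the first is dropped: it is only insignificantly better in the first criterion but significantly worse in the second.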
Territory design may be viewed as the problem of grouping small geographic areas into larger geographic clusters, called territories, in such a way that the latter are acceptable according to relevant planning criteria. In this paper we review the existing literature on applications of territory design problems and on solution approaches for these types of problems. After identifying features common to all applications, we introduce a basic territory design model and present in detail two approaches for solving this model: a classical location-allocation approach combined with optimal split-resolution techniques, and a newly developed method based on computational geometry. We present computational results indicating the efficiency and suitability of the latter method for solving large-scale practical problems in an interactive environment. Furthermore, we discuss extensions to the basic model and its integration into Geographic Information Systems.
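The location-allocation approach mentioned above alternates an allocation step and a location step. A bare-bones sketch of that loop, omitting the balancing, contiguity, and split-resolution components that real territory design requires:

```python
import numpy as np

def location_allocation(points, k=3, iters=20):
    """Bare-bones location-allocation loop for territory design:
    alternate (1) allocating each basic area to its nearest territory
    center and (2) relocating each center to the centroid of its
    territory. Balancing and contiguity constraints are omitted."""
    points = np.asarray(points, dtype=float)
    # deterministic farthest-point initialization of the centers
    centers = [points[0]]
    for _ in range(k - 1):
        dist = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)                  # allocation step
        for j in range(k):
            members = points[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)  # location step
    return assign, centers
```

On well-separated groups of basic areas this recovers one territory per group; the split-resolution techniques of the paper then repair areas whose optimal fractional allocation is split between territories.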
For the numerical simulation of 3D radiative heat transfer in glasses and glass melts, practically applicable mathematical methods are needed to handle such problems optimally on workstation-class computers. Since the exact solution would require supercomputer capabilities, we concentrate on approximate solutions with a high degree of accuracy. The following approaches are studied: 3D diffusion approximations and 3D ray-tracing methods.
We present a rigorous derivation of the equations and interface conditions for ion, charge, and heat transport in Li-ion insertion batteries. The derivation is based exclusively on universally accepted principles of nonequilibrium thermodynamics and the assumption of a one-step intercalation reaction at the interface of electrolyte and active particles. Without loss of generality, the transport in the active particle is assumed to be isotropic. The electrolyte is described as a fully dissociated salt in a neutral solvent. The presented theory is valid for transport on a spatial scale for which local charge neutrality holds, i.e. beyond the scale of the diffuse double layer. Charge neutrality is explicitly used to determine the correct set of thermodynamically independent variables. The theory guarantees strictly positive entropy production. The various contributions to the Peltier coefficients for the interface between the active particles and the electrolyte, as well as the contributions to the heat of mixing, are obtained as a result of the theory.
In this paper we develop a network location model that combines the characteristics of ordered median and gradual cover models, resulting in the Ordered Gradual Covering Location Problem (OGCLP). The Gradual Cover Location Problem (GCLP) was specifically designed to extend the basic cover objective to capture sensitivity with respect to absolute travel distance. Ordered median location problems are a generalization of most of the classical location problems, such as the p-median and p-center problems. They can be modeled using so-called ordered median functions. These functions apply to the cost of serving a customer a weight that depends on the position of that cost relative to the costs of serving the other customers. We derive Finite Dominating Sets (FDS) for the one-facility case of the OGCLP. Moreover, we present efficient algorithms for determining the FDS and also discuss the conditional case where a certain number of facilities already exist and one new facility is to be added. For the multi-facility case we are able to identify a finite set of potential facility locations a priori, which essentially converts the network location model into its discrete counterpart. For the multi-facility discrete OGCLP we discuss several integer programming formulations and give computational results.
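As an illustration of the ordered median construction described above (a generic sketch, not code from the paper), the objective sorts the customers' service costs and weights each cost by its rank:

```python
# Hypothetical sketch of an ordered median objective: the weight applied to a
# customer's service cost depends on the rank of that cost, not on the customer.
def ordered_median(costs, lam):
    """costs: service costs c_i; lam: rank weights, lam[k] multiplies the
    (k+1)-th smallest cost."""
    assert len(costs) == len(lam)
    return sum(l * c for l, c in zip(lam, sorted(costs)))

costs = [4.0, 1.0, 3.0]
print(ordered_median(costs, [1, 1, 1]))  # all-ones weights: median-type, 8.0
print(ordered_median(costs, [0, 0, 1]))  # weight only the largest cost: center-type, 4.0
```

With suitable rank weights one recovers the classical objectives as special cases: all-ones weights give a median-type objective, a single weight on the largest cost a center-type objective.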
In the first part of this work, summaries of the traditional Multiphase Flow Model and the more recent Multiphase Mixture Model (MMM) are presented. Attention is paid to attempts to include various heterogeneous aspects into the models. In the second part, an MMM-based differential model for two-phase immiscible flow in porous media is considered. A numerical scheme based on a sequential solution procedure and control-volume-based finite difference schemes for the pressure and saturation conservation equations is developed. A computer simulator is built which exploits object-oriented programming techniques. Numerical results for several test problems are reported.
The Folgar-Tucker equation (FTE) is the model most frequently used for the prediction of fiber orientation (FO) in simulations of the injection molding process for short-fiber reinforced thermoplastics. In contrast to its widespread use in injection molding simulations, little is known about the mathematical properties of the FTE: an investigation of, e.g., its phase space M_FT has been presented only recently. The restriction of the dependent variable of the FTE to the set M_FT turns the FTE into a differential algebraic system (DAS), a fact which is commonly neglected when devising numerical schemes for the integration of the FTE. In this article we present some recent results on the problem of trace stability as well as some introductory material which complements our recent paper.
In the Finite-Volume-Particle Method (FVPM), the weak formulation of a hyperbolic conservation law is discretized by restricting it to a discrete set of test functions. In contrast to the usual Finite-Volume approach, the test functions are not taken as characteristic functions of the control volumes in a spatial grid, but are chosen from a partition of unity with smooth and overlapping partition functions (the particles), which can even move along prescribed velocity fields. The information exchange between particles is based on standard numerical flux functions. Geometrical information, similar to the surface area of the cell faces in the Finite-Volume Method and the corresponding normal directions, is given by integral quantities of the partition functions. After a brief derivation of the Finite-Volume-Particle Method, this work focuses on the role of the geometric coefficients in the scheme.
Two approaches for determining the Euler-Poincaré characteristic of a set observed on lattice points are considered in the context of image analysis: the integral geometric and the polyhedral approach. Information about the set is assumed to be available on lattice points only. In order to retain properties of the Euler number and to provide a good approximation of the true Euler number of the original set in Euclidean space, the appropriate choice of adjacency in the lattice for the set and its background is crucial. Adjacencies are defined using tessellations of the whole space into polyhedra. In R³, two new 14-adjacencies are introduced in addition to the well-known 6- and 26-adjacencies. For the Euler number of a set and its complement, a consistency relation holds. Each of the pairs of adjacencies (14.1, 14.1), (14.2, 14.2), (6, 26), and (26, 6) is shown to be a pair of complementary adjacencies with respect to this relation. That is, the approximations of the Euler numbers are consistent if the set and its background (complement) are equipped with this pair of adjacencies. Furthermore, sufficient conditions for the correctness of the approximations of the Euler number are given. The analysis of selected microstructures and a simulation study illustrate how the estimated Euler number depends on the chosen adjacency. They also show that there is no uniquely best pair of adjacencies with respect to the estimation of the Euler number of a set in Euclidean space.
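To make the adjacency dependence concrete, here is a small 2D analogue (an illustration assumed by us, not the paper's 3D algorithm): the Euler number of a binary image when the foreground uses 4-adjacency and the background the complementary 8-adjacency (the 2D counterpart of the (6, 26) pair).

```python
import numpy as np

# Illustrative 2D analogue (not from the paper): Euler characteristic
# chi = P - E_h - E_v + Q of a binary image under foreground 4-adjacency,
# counting pixels, adjacent pixel pairs, and fully set 2x2 blocks.
def euler_2d(img):
    img = np.asarray(img, dtype=bool)
    P = img.sum()                              # pixels
    Eh = (img[:, :-1] & img[:, 1:]).sum()      # horizontally adjacent pairs
    Ev = (img[:-1, :] & img[1:, :]).sum()      # vertically adjacent pairs
    Q = (img[:-1, :-1] & img[:-1, 1:] &
         img[1:, :-1] & img[1:, 1:]).sum()     # fully set 2x2 blocks
    return int(P - Eh - Ev + Q)

ring = np.ones((3, 3), dtype=bool)
ring[1, 1] = False                             # one component with one hole
print(euler_2d(ring))                          # 0 = 1 component - 1 hole
```

Swapping the roles of foreground and background (i.e. using the inconsistent pairing) would count the two diagonal neighbors of a checkerboard as disconnected, which is exactly the kind of adjacency-dependence the abstract discusses.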
The capacitated single-allocation hub location problem revisited: A note on a classical formulation
(2009)
Denote by G = (N, A) a complete graph where N is the set of nodes and A is the set of edges. Assume that a flow w_ij should be sent from each node i to each node j (i, j ∈ N). One possibility is to send these flows directly between the corresponding pairs of nodes. However, in practice this is often neither efficient nor attractive in terms of cost, because it would imply that a link is built between each pair of nodes. An alternative is to select some nodes to become hubs and use them as consolidation and redistribution points that altogether process the flow in the network more efficiently. Accordingly, hubs are nodes in the graph that receive traffic (mail, phone calls, passengers, etc.) from different origins (nodes) and redirect this traffic directly to the destination nodes (when a link exists) or else to other hubs. The concentration of traffic in the hubs and its shipment to other hubs leads to a natural decrease in the overall cost due to economies of scale.
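A common cost convention in hub location models (an assumption added here for illustration, not stated in the text above) discounts the inter-hub leg by a factor alpha < 1 to capture these economies of scale:

```python
# Hypothetical sketch: routing i -> hub k -> hub l -> j is commonly costed as
#   chi*d(i,k) + alpha*d(k,l) + delta*d(l,j),
# where alpha < 1 models economies of scale on the consolidated inter-hub link.
def route_cost(d, i, k, l, j, chi=1.0, alpha=0.75, delta=1.0):
    return chi * d[i][k] + alpha * d[k][l] + delta * d[l][j]

d = {  # toy symmetric distances between origin i, hubs k and l, destination j
    'i': {'i': 0, 'k': 2, 'l': 6, 'j': 9},
    'k': {'i': 2, 'k': 0, 'l': 4, 'j': 5},
    'l': {'i': 6, 'k': 4, 'l': 0, 'j': 1},
    'j': {'i': 9, 'k': 5, 'l': 1, 'j': 0},
}
# The discounted hub route (2 + 0.75*4 + 1 = 6.0) beats the direct link (9):
print(route_cost(d, 'i', 'k', 'l', 'j'))  # 6.0
```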
Test rig optimization
(2014)
Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete and continuous aspects. At the same time we manage to avoid the necessity to deal with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and hence makes optimization of the test rig intractable.
The block structure of the load signals is assumed from the beginning. This greatly reduces the complexity of the problem without shrinking the feasible set. We also optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extends the feasible set. As a result, the new model gives significantly better results than other approaches to test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and an interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, a sum of functions has to be maximized. We propose a new branch-and-bound technique that uses local maxima of every summand.
This report reviews selected image binarization and segmentation methods that have been proposed and which are suitable for the processing of volume images. The focus is on thresholding, region growing, and shape-based methods. Rather than trying to give a complete overview of the field, we review the original ideas and concepts of selected methods, because we believe this information to be important for judging when and under what circumstances a segmentation algorithm can be expected to work properly.
An efficient mathematical model to virtually generate woven metal wire meshes is presented. The accuracy of this model is verified by comparing the virtual structures with three-dimensional images of real meshes produced via computed tomography. Virtual structures are generated for three types of metal wire meshes using only easy-to-measure parameters. For these geometries the velocity-dependent pressure drop is simulated and compared with measurements performed by GKD - Gebr. Kufferath AG. The simulation results lie within the tolerances of the measurements. The generation of the structures and the numerical simulations were done at GKD using the Fraunhofer GeoDict software.
In this paper, a multi-period supply chain network design problem is addressed. Several aspects of practical relevance are considered, such as those related to the financial decisions that must be accounted for by a company managing a supply chain. The decisions to be made comprise the location of the facilities, the flow of commodities and the investments to make in activities alternative to those directly related to the supply chain design. Uncertainty is assumed for demand and interest rates, which is described by a set of scenarios. Therefore, for the entire planning horizon, a tree of scenarios is built. A target is set for the return on investment and the risk of falling below it is measured and accounted for. The service level is also measured and included in the objective function. The problem is formulated as a multi-stage stochastic mixed-integer linear programming problem. The goal is to maximize the total financial benefit. An alternative formulation based upon the paths in the scenario tree is also proposed. A methodology for measuring the value of the stochastic solution in this problem is discussed. Computational tests using randomly generated data are presented, showing that the stochastic approach is worth considering in this type of problem.
A spectral theory for stationary random closed sets is developed and provided with a sound mathematical basis. The definition and proof of existence of the Bartlett spectrum of a stationary random closed set, as well as the proof of a Wiener-Khintchine theorem for the power spectrum, serve two purposes: first, well-known second-order characteristics like the covariance can be estimated faster than usual via frequency space. Second, the Bartlett spectrum and the power spectrum can be used as second-order characteristics in frequency space. Examples show that in some cases information about the random closed set is easier to obtain from these characteristics in frequency space than from their real-world counterparts.
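The frequency-space shortcut for second-order characteristics can be sketched as follows (a generic illustration, not the paper's estimator): the empirical circular autocovariance of an image is the inverse FFT of its periodogram, a discrete form of the Wiener-Khintchine relation.

```python
import numpy as np

# Sketch: empirical (circular) autocovariance of a stationary binary image
# computed via the periodogram, i.e. a discrete Wiener-Khintchine relation.
def circular_autocovariance(img):
    f = np.asarray(img, dtype=float)
    f = f - f.mean()                                 # center the indicator field
    spec = np.abs(np.fft.fftn(f)) ** 2 / f.size      # periodogram (power spectrum)
    return np.real(np.fft.ifftn(spec))               # covariance at each lag

rng = np.random.default_rng(0)
img = rng.random((64, 64)) < 0.3                     # sample of a Boolean "random set"
cov = circular_autocovariance(img)
print(np.isclose(cov[0, 0], img.astype(float).var()))  # True: lag 0 is the variance
```

Two FFTs replace the direct O(n²) summation over all lag pairs, which is the speed-up the abstract refers to.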
In this paper we propose a general solution method for the single facility ordered median problem in the plane. All types of weights (non-negative, non-positive, and mixed) are considered. The big triangle small triangle approach is used for the solution. Rigorous and heuristic algorithms are proposed and extensively tested on eight different problems with excellent results.
It is well-known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e., a finite set of candidates guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature. The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems. Next, it is shown that the approximation of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead, for fixed epsilon, to polynomial-time approximation algorithms with accuracy epsilon for solving the general model considered in this paper.
It has been empirically verified that smoother intensity maps can be expected to produce shorter sequences when step-and-shoot collimation is the method of choice. This work studies the length of the sequences produced by the sequencing algorithm of Bortfeld and Boyer using a probabilistic approach. The results provide a theoretical foundation for the so far only empirically validated observation that if the smoothness of intensity maps is taken into account during their calculation, the resulting solutions can be expected to be more easily applied.
The paper at hand presents a slender body theory for the dynamics of a curved inertial viscous Newtonian fiber. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model results thereby from the introduction of the intrinsic velocity that characterizes the convective terms.
In this paper, an extension to the classical capacitated single-allocation hub location problem is studied in which the size of the hubs is part of the decision making process. For each potential hub a set of capacities is assumed to be available, among which one can be chosen. Several formulations are proposed for the problem, which are compared in terms of the bound provided by the linear programming relaxation. Different sets of inequalities are proposed to enhance the models. Several preprocessing tests are also presented with the goal of reducing the size of the models for each particular instance. The results of the computational experiments performed using the proposed models are reported.
To simulate the influence of process parameters on the melt spinning process, a fiber model is used and coupled with CFD calculations of the quench air flow. In the fiber model, energy, momentum and mass balances are solved for the polymer mass flow. The quench air flow is computed with the Lattice Boltzmann method. Simulations and experiments for different process parameters and hole configurations are compared and show good agreement.
Recently we developed a discrete model of elastic rods with symmetric cross section suitable for a fast simulation of quasistatic deformations [33]. The model is based on Kirchhoff’s geometrically exact theory of rods. Unlike simple models of “mass & spring” type typically used in VR applications, our model provides a proper coupling of bending and torsion. The computational approach comprises a variational formulation combined with a finite difference discretization of the continuum model. Approximate solutions of the equilibrium equations for sequentially varying boundary conditions are obtained by means of energy minimization using a nonlinear CG method. As the computational performance of our model yields solution times within the range of milliseconds, our approach proves to be sufficient to simulate an interactive manipulation of such flexible rods in virtual reality applications in real time.
Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life: they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of lithium-ion batteries as porous structures. We mainly study two aspects: heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as the individual heat sources, in order to assess their contributions to the total heat generation. From our experiments we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches: small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and with Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into account the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material depending on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress from known physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities where we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method. The nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
The testing of new vehicle axles or axle variants on the basis of load data measured in driving operation is usually carried out with complex multi-channel test rigs. In such tests, the wheel hub forces and moments measured on the road are generally to be reproduced by the test rig. Due to the complex interactions between specimen and test machine, every new concept raises the question of whether the intended test can be carried out with a given test system setup, or which configuration of the test system appears suitable for the planned test. This paper describes the modeling of a novel axle test system concept based on two hexapods. Besides the geometric arrangement of the test system, the model covers the hydraulics as well as the internal controller. The test system model was developed as a so-called template within the vehicle simulation program ADAMS/Car and can be coupled with various axle models into an overall system. On this overall model, all work steps occurring on the real test system, such as controller tuning, drive file iteration and simulation, can be carried out. Geometric or hydraulic parameters can easily be changed to allow an optimal adaptation of the test system to the specimen and the given load data. The model developed within the project supports and accompanies the introduction of the new axle test system concept and can also be used for the virtual preparation of test runs. Using a front and a rear axle as examples, the general procedure is explained and the new possibilities opened up by test system simulation are demonstrated.
In this paper, we discuss approaches related to the explicit modeling of human beings in software development processes. While in most older simulation models of software development processes, especially those of the system dynamics type, humans are only represented as a labor pool, more recent models of the discrete-event simulation type require representations of individual humans. In that case, particularities of the individual person become more relevant. These individual effects are either considered as stochastic variations of productivity, or an explanation is sought based on individual characteristics, such as skills for instance. In this paper, we explore such possibilities by drawing on some basic results in psychology, sociology, and labor science. Various specific models for representing human effects in software process simulation are discussed.
We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, moment-based estimators, Cramér-von-Mises minimum distance estimators, and, as a special case of quantile-based estimators, the Pickands estimator as well as variants of the latter tuned for a higher finite sample breakdown point (FSBP) and lower variance. We further consider an estimator matching the population median and median of absolute deviations to the empirical ones (MedMad); again, in order to improve its FSBP, we propose a variant using a suitable asymmetric Mad as constituent, which may be tuned to achieve an expected FSBP of 34%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking neighborhood setting, i.e., the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimator minimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the FSBP, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error, all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study.
Robust Reliability of Diagnostic Multi-Hypothesis Algorithms: Application to Rotating Machinery
(1998)
Damage diagnosis based on a bank of Kalman filters, each one conditioned on a specific hypothesized system condition, is a well recognized and powerful diagnostic tool. This multi-hypothesis approach can be applied to a wide range of damage conditions. In this paper, we will focus on the diagnosis of cracks in rotating machinery. The question we address is: how to optimize the multi-hypothesis algorithm with respect to the uncertainty of the spatial form and location of cracks and their resulting dynamic effects. First, we formulate a measure of the reliability of the diagnostic algorithm, and then we discuss modifications of the diagnostic algorithm for the maximization of the reliability. The reliability of a diagnostic algorithm is measured by the amount of uncertainty consistent with no-failure of the diagnosis. Uncertainty is quantitatively represented with convex models.
Worldwide, the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive, and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies have to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and withdrawn from total demand. The methodology separates the impact of weather and capacity: efficiency is transformed onto the real line using the logit transformation and modeled as a stochastic process, while installed capacity is assumed to be a deterministic function of time. In a case study the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated and the results are compared to market future and option prices. The trajectories show typical features seen in market prices in recent years, and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices, it is found that renewable infeed increases the volatility of forward prices in times of low demand but can reduce volatility in peak hours. Prices for different scenarios of installed wind and solar capacity are compared, and the merit-order effect of increased wind and solar capacity is calculated. It is found that wind has a stronger overall effect than solar, but the two effects are comparable in peak hours.
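The logit-transformation step described above can be sketched as follows (parameter values are illustrative assumptions, not calibrated ones): efficiency, which lives in (0, 1), is mapped to the real line, modeled there as a mean-reverting process, and transformed back.

```python
import numpy as np

# Hedged sketch: "efficiency" (infeed divided by installed capacity) is
# logit-transformed and modeled as a discrete mean-reverting (OU-type) process.
# kappa, mu, sigma are illustrative, not calibrated to any market data.
def simulate_efficiency(n, x0=0.25, kappa=0.5, mu=-1.1, sigma=0.3, seed=1):
    rng = np.random.default_rng(seed)
    x = np.log(x0 / (1 - x0))                   # logit transform onto the real line
    out = np.empty(n)
    for t in range(n):
        x += kappa * (mu - x) + sigma * rng.standard_normal()  # Euler OU step
        out[t] = 1 / (1 + np.exp(-x))           # back-transform, always in (0, 1)
    return out

eff = simulate_efficiency(1000)
print(eff.min() > 0 and eff.max() < 1)          # True: efficiency stays in (0, 1)
```

The back-transform guarantees by construction that simulated efficiency never leaves the physically meaningful interval, which is the point of working on the logit scale.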
Safety-relevant components in the automotive industry are required to show, in customer use, a failure fraction of at most p_0 up to time/mileage q_0. This quantile is verified in a series of tests in which the components are loaded cyclically with a typical force until a certain, previously specified damage pattern occurs, and the number T_i of cycles ("load cycles") is recorded as the lifetime. Typically, the sample size N is very small (N < 10), while at the same time an extreme quantile 0 < p_0 ≤ 0.1 is to be verified. If a Weibull or lognormal distribution is used as the lifetime distribution, the quantile estimators exhibit a pronounced bias that needs to be removed. Since this bias is usually positive, components would be classified as fit for series production even though they may lie well below the requirements. Confidence intervals for quantiles are computed via delta methods, which likewise yield poor results (in the form of a too low empirical significance of left-sided intervals). In the following, generalizations of the bootstrap and jackknife bias corrections are presented which not only attempt to remove the bias but aim to reduce the mean squared error of the estimator as far as possible. Simulation studies show that this succeeds for small sample sizes. In addition, it is investigated to what extent the method, combined with the bootstrap quantile method, yields an improved interval estimator for quantiles. For this, simulated data are considered whose parameters are representative of lifetime distributions of safety-relevant components.
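The plain bootstrap bias correction that the report generalizes can be sketched as follows (a minimal version for a lognormal ML quantile estimator; the report's MSE-oriented corrections go beyond this):

```python
import numpy as np
from statistics import NormalDist

# Minimal sketch (assumed setup, not the report's generalized correction) of the
# classical bootstrap bias correction for a small-sample quantile estimator.
def ln_quantile(sample, p):
    logs = np.log(sample)
    z = NormalDist().inv_cdf(p)                          # standard normal quantile
    return np.exp(logs.mean() + z * logs.std(ddof=0))    # ML plug-in lognormal quantile

def bootstrap_corrected(sample, p, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = ln_quantile(sample, p)
    boot = [ln_quantile(rng.choice(sample, size=len(sample)), p) for _ in range(B)]
    bias = np.mean(boot) - theta                         # bootstrap bias estimate
    return theta - bias                                  # = 2*theta - mean(boot)

rng = np.random.default_rng(42)
lifetimes = rng.lognormal(mean=10.0, sigma=0.8, size=8)  # N < 10, as in the text
q_raw = ln_quantile(lifetimes, 0.1)
q_cor = bootstrap_corrected(lifetimes, 0.1)
```

Since the raw plug-in estimator tends to be biased for such small N, the corrected value typically lies below it for the positively biased case the report describes.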
Simulation of multibody systems (MBS) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained in importance in recent years, e.g. for HIL, MIL or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms: one microcontroller and two real-time processor boards. In addition to numerical C code, the latter platforms provide the possibility to build the model with a commercial MBS tool, which is also investigated. A survey of different ways of generating code and equations of MBS models is given and discussed concerning handling, possible limitations as well as performance. The presented benchmarks are processed under the terms of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly-implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.
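The simplest member of the Rosenbrock family mentioned above, the linearly implicit Euler method, can be sketched as follows (a generic illustration, not the paper's benchmark code). It needs exactly one linear solve per fixed-size step instead of a Newton iteration, which is what makes such schemes attractive under hard real-time constraints:

```python
import numpy as np

# Linearly implicit Euler (simplest Rosenbrock-type scheme) with fixed step h:
# solve (I - h*J(y)) k = h*f(y), then y <- y + k. One linear solve per step.
def lin_implicit_euler(f, jac, y0, h, steps):
    y = np.array(y0, dtype=float)
    I = np.eye(len(y))
    for _ in range(steps):
        k = np.linalg.solve(I - h * jac(y), h * f(y))
        y = y + k
    return y

# Stiff scalar test problem y' = -50*y with a step size far beyond the
# stability limit h < 2/50 of explicit Euler:
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
y = lin_implicit_euler(f, jac, [1.0], h=0.1, steps=100)
print(abs(y[0]) < 1e-6)  # True: the scheme stays stable despite h*50 >> 1
```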
In this work we use the Parsimonious Multi-Asset Heston model recently developed in [Dimitroff et al., 2009] at Fraunhofer ITWM, Department Financial Mathematics, Kaiserslautern (Germany) and apply it to Quanto options. We give a summary of the model and its calibration scheme. A suitable transformation of the Quanto option payoff is explained and used to price Quantos within the new framework. Simulated prices are given and compared to market prices and Black-Scholes prices. We find that the new approach underprices the chosen options, but gives better results than the Black-Scholes approach, which prevails in the literature on Quanto options.
In this paper we deal with different statistical models of real-world accident data in order to quantify the effectiveness of a safety function or a safety configuration (meaning a specific combination of safety functions) in vehicles. It is shown that the effectiveness can be estimated via the so-called relative risk, even if the effectiveness depends on a confounding variable, which may be categorical or continuous. For doing so, a concrete statistical model is not necessary, that is, the resulting estimate is of nonparametric nature. In a second step the quite common and, from a statistical point of view, classical logistic regression model is investigated. The main emphasis is laid on the understanding of the model and the interpretation of the occurring parameters. It is shown that the effectiveness of the safety function can also be detected via such a logistic approach and that relevant confounding variables can and should be taken into account. The interpretation of the parameters related to the confounder and the quantification of the influence of the confounder is shown to be rather problematic. All the theoretical results are illustrated by numerical data examples.
We introduce a refined tree method to compute option prices using the stochastic volatility model of Heston. In a first step, we model the stock and variance process as two separate trees, with transition probabilities obtained by matching tree moments up to order two against the Heston model ones. The correlation between the driving Brownian motions in the Heston model is then incorporated by a node-wise adjustment of the probabilities. This adjustment, leaving the marginals fixed, optimizes the match between tree and model correlation. In some nodes, we are even able to further match moments of higher order. Numerically this gives convergence orders faster than 1/N, where N is the number of discretization steps. The accuracy of our method is checked for European option prices against a semi-closed-form solution, and our prices for both European and American options are compared to alternative approaches.
In financial mathematics, stock prices are usually modeled directly as a result of supply and demand and under the assumption that dividends are paid continuously. In contrast, economic theory gives us the dividend discount model, which assumes that the stock price equals the present value of its future dividends. These two models need not contradict each other: in their paper, Korn and Rogers (2005) introduce a general dividend model in which the stock price follows a stochastic process and equals the sum of all its discounted dividends. In this paper we specify the model of Korn and Rogers in a Black-Scholes framework in order to derive a closed-form solution for the pricing of American call options under the assumption of a known next dividend followed by several stochastic dividend payments during the option's time to maturity.
The smart grid ("intelligent power grid") is one of the topics repeatedly pushed to the fore by politics and, of course, by the electricity industry. The potential of renewable energies is sufficient to supply Germany and Europe reliably with electricity. Rebuilding the power grids is of central importance here and requires an effort by society as a whole. Unfortunately, the electricity customer falls by the wayside: the needs of customers are largely ignored and data protection is often neglected. Smaller municipal utilities also have problems with this development: due to political requirements they must, for example, introduce smart meters, even though this causes costs that they cannot pass on directly to the customer. Customers are hardly willing to pay more for a smart grid. At the same time, however, it is necessary to make the existing power grids more flexible and to prepare them for a further increasing share of renewable energy sources.
We examine the feasibility polyhedron of the uncapacitated hub location problem (UHL) with multiple allocation, which has applications in the fields of air passenger and cargo transportation, telecommunication and postal delivery services. In particular, we determine the dimension of this polyhedron and derive some classes of its facets. We develop general rules for lifting facets from the uncapacitated facility location problem (UFL) to UHL and for projecting facets from UHL to UFL. By applying these rules we obtain a new class of facets for UHL which dominates the inequalities of the original formulation. Thus we arrive at a new formulation of UHL whose constraints are all facet-defining. We show its superior computational performance by benchmarking it on a well-known data set.
This work presents a new framework for Gröbner basis computations with Boolean polynomials. Boolean polynomials can be modeled in a rather simple way, with both the coefficients and the degree per variable lying in {0, 1}. The ring of Boolean polynomials is, however, not a polynomial ring, but rather the quotient ring of the polynomial ring over the field with two elements modulo the field equations x² = x for each variable x. Therefore, the usual polynomial data structures seem not to be appropriate for fast Gröbner basis computations. We introduce a specialized data structure for Boolean polynomials based on zero-suppressed binary decision diagrams (ZDDs), which is capable of handling these polynomials more efficiently with respect to memory consumption and computational speed. Furthermore, we concentrate on high-level algorithmic aspects, taking into account the new data structures as well as structural properties of Boolean polynomials. For example, a new useless-pair criterion for Gröbner basis computations in Boolean rings is introduced. One of the motivations for our work is the growing importance of formal hardware and software verification based on Boolean expressions, which suffers, besides from the complexity of the problems, from the lack of an adequate treatment of arithmetic components. We are convinced that algebraic methods are more suited, and we believe that our preliminary implementation shows that Gröbner bases on specific data structures can be capable of handling problems of industrial size.
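The set-of-monomials view that makes ZDDs attractive can be sketched in plain Python: a Boolean polynomial is a set of monomials, each monomial a set of variables; addition is symmetric difference because coefficients live in GF(2), and multiplication of monomials is set union because x·x = x. This sketch (names and representation ours) shows only the arithmetic; the point of the paper's ZDD data structure is to store such sets of subsets with maximal node sharing, which plain frozensets do not do.

```python
def bool_add(p, q):
    """Sum over GF(2): symmetric difference of the monomial sets."""
    return p ^ q

def bool_mul(p, q):
    """Product modulo the field equations x*x = x.

    Monomials are variable sets, so a product of two monomials is
    their union; equal monomials cancel in pairs since 1 + 1 = 0.
    """
    r = set()
    for m1 in p:
        for m2 in q:
            m = m1 | m2
            if m in r:
                r.discard(m)  # coefficient flips back to 0 in GF(2)
            else:
                r.add(m)
    return frozenset(r)

# polynomials: frozensets of monomials; monomials: frozensets of variables
x = frozenset({frozenset({'x'})})      # the polynomial x
one = frozenset({frozenset()})         # the constant 1
```

For instance, (x + 1)·(x + 1) = x² + 1 = x + 1 in this ring: the cross terms cancel over GF(2) and x² collapses to x.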
Home Health Care (HHC) services are becoming increasingly important in Europe’s aging societies. Elderly people have varying degrees of need for assistance and medical treatment. It is advantageous to allow them to live in their own homes as long as possible, since a long-term stay in a nursing home can be much more costly for the social insurance system than a treatment at home providing assistance to the required level. Therefore, HHC services are a cost-effective and flexible instrument in the social system. In Germany, organizations providing HHC services are generally either larger charities with countrywide operations or small private companies offering services only in a city or a rural area. While the former have a hierarchical organizational structure and a large number of employees, the latter typically only have some ten to twenty nurses under contract. The relationship to the patients (“customers”) is often long-term and can last for several years. Therefore acquiring and keeping satisfied customers is crucial for HHC service providers and intensive competition among them is observed.
In this article, a new model predictive control approach to nonlinear stochastic systems is presented. The approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first gives an estimate of the current state based on the current output of the system; the second gives an estimate of a control input for the system. This is done by adapting the basic model predictive control strategies to the second particle filter. Later in this paper, this new approach is applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
Background and purpose: Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromise and thus makes it more systematic. Usually, several plans are computed, from which the planner selects the most promising one following a certain procedure. Applying Pareto navigation to this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Material and methods: Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms "selection" and "restriction". The former allows the formulation of wishes, whereas the latter allows the exclusion of unwanted plans. Both are realized as optimization problems on the so-called plan bundle, a set constructed from precomputed plans, and can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user is provided with immediate feedback regarding his or her decisions.
To a network N(q) with determinant D(s;q) depending on a parameter vector q ∈ R^r, a network N̂(q) is assigned via identification of some of its vertices. The paper deals with procedures to find N̂(q) such that its determinant D̂(s;q) admits a factorization into the determinants of appropriate subnetworks, and with the estimation of the deviation of the zeros of D̂ from the zeros of D. To solve the estimation problem, state space methods are applied.
An algorithm for the automatic parallel generation of three-dimensional unstructured computational meshes based on geometric domain decomposition is proposed in this paper. A software package built upon the proposed algorithm is described. Several practical examples of mesh generation on multiprocessor computational systems are given. It is shown that the developed parallel algorithm reduces mesh generation time significantly (by dozens of times). Moreover, it easily produces meshes with on the order of 5·10^7 elements, whose construction on a single CPU is problematic. Questions of time consumption, efficiency of computation and quality of the generated meshes are also considered.
After a short introduction to the basic ideas of lattice Boltzmann methods and a brief description of a modern parallel computer, it is shown how lattice Boltzmann schemes are successfully applied for simulating fluid flow in microstructures and calculating material properties of porous media. It is explained how lattice Boltzmann schemes compute the gradient of the velocity field without numerical differentiation. This feature is then utilised for the simulation of pseudo-plastic fluids, and numerical results are presented for a simple benchmark problem as well as for the simulation of liquid composite moulding.
A new stability-preserving model reduction algorithm for discrete linear SISO systems based on their impulse response is proposed. Similar to the Padé approximation, an equation system for the Markov parameters involving the Hankel matrix is considered, which here, however, is chosen to be of very high dimension. Although this equation system therefore in general cannot be solved exactly, it is proved that the approximate solution, computed via the Moore-Penrose inverse, gives rise to a stability-preserving reduction scheme, a property that cannot be guaranteed for the Padé approach. Furthermore, the proposed algorithm is compared to another stability-preserving reduction approach, namely the balanced truncation method, showing comparable performance of the reduced systems. The balanced truncation method, however, starts from a state space description of the system and is in general expected to be more computationally demanding.
We develop a framework for analyzing an executive’s own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company whose value he can directly influence with work effort. The executive’s utility-maximizing personal investment and work effort strategy is derived in closed-form, and an indifference utility rationale is demonstrated to determine his required compensation. Our results have implications for the practical and theoretical assessment of executive quality and the benefits of performance contracting. Assuming knowledge of the company’s non-systematic risk, our executive’s unconstrained own-company investment identifies his work effectiveness (i.e. quality), and also reflects work effort that establishes a base-level that performance contracting should seek to exceed.
Industrial analog circuits are usually designed using numerical simulation tools. To obtain a deeper circuit understanding, symbolic analysis techniques can additionally be applied. Approximation methods which reduce the complexity of symbolic expressions are needed in order to handle industrial-sized problems. This paper gives an overview of the field of symbolic analog circuit analysis. Starting with a motivation, the state-of-the-art simplification algorithms for linear as well as nonlinear circuits are presented. The basic ideas behind the different techniques are described, whereas the technical details can be found in the cited references. Finally, the application of linear and nonlinear symbolic analysis is shown on two example circuits.
In this paper we address the improvement of transfer quality in public mass transit networks. Generally, several transit operators offer service, and our work is motivated by the question of how their timetables can be altered to yield optimized transfer possibilities in the overall network. To achieve this, only small changes to the timetables are allowed. This set-up makes it possible to use a quadratic semi-assignment model for the optimization problem. We apply this model, equipped with a new way to assess transfer quality, to four real-world examples. It turns out that improvements in overall transfer quality can be obtained by such optimization-based techniques. Therefore they can serve as a first step towards a decision support tool for planners of regional transit networks.
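The quadratic semi-assignment idea can be illustrated with a deliberately tiny model (all names and the cost model here are hypothetical, and brute force stands in for the optimization techniques one would use in practice): each line may be shifted by a small offset, and the objective sums the transfer waiting times, which depend on pairs of shift decisions, hence the quadratic objective.

```python
from itertools import product

def optimize_shifts(departures, transfers, shifts, period=60):
    """Toy quadratic semi-assignment: pick one timetable shift per line
    to minimize total transfer waiting time.

    departures: {line: scheduled time at the common transfer stop}
    transfers:  [(from_line, to_line, walk_time), ...]
    shifts:     iterable of allowed shifts (minutes) per line
    """
    lines = sorted(departures)
    best_cost, best_shift = float('inf'), None
    for combo in product(shifts, repeat=len(lines)):
        s = dict(zip(lines, combo))
        cost = 0
        for a, b, walk in transfers:
            arrive = departures[a] + s[a] + walk
            depart = departures[b] + s[b]
            cost += (depart - arrive) % period  # wait for next departure
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_cost, best_shift
```

With two lines, a single transfer with a 5-minute walk, and shifts of 0 to 5 minutes, the optimizer finds a combination with zero waiting time; real instances are far too large for enumeration and require the semi-assignment formulation of the paper.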
We present some optimality results for robust Kalman filtering. To this end, we introduce the general setup of state space models, which is not limited to a Euclidean or time-discrete framework. We pose the problem of state reconstruction and review the existing classical algorithms in this context. We then extend the ideal-model setup to allow for outliers, which in this context may be system-endogenous or -exogenous, inducing the somewhat conflicting goals of tracking and attenuation. In a quite general framework, we solve the corresponding minimax MSE problems for both types of outliers separately, resulting in saddle-points consisting of an optimally robust procedure and a corresponding least favorable outlier situation. Still insisting on recursivity, we obtain an operational solution, the rLS filter, and variants of it. Exactly robust-optimal filters would need knowledge of certain hard-to-compute conditional means in the ideal model; things would be much easier if these conditional means were linear. Hence, it is important to quantify the deviation of the exact conditional mean from linearity, and we obtain a somewhat surprising characterization of linearity of the conditional expectation in this setting. Combining both optimal filter types (for the system-endogenous and the system-exogenous situation), we arrive at a delayed hybrid filter which is able to treat both types of outliers simultaneously. Keywords: robustness, Kalman filter, innovation outlier, additive outlier
We consider portfolio optimisation problems where the investor either desires an a priori specified consumption stream, follows a deterministic pay-in scheme, or both, while also trying to maximize expected utility from final wealth. We derive explicit closed-form solutions for continuous and discrete monetary streams. The mathematical method used is classical stochastic control theory.
An investor who borrows money generally has to pay higher interest rates than he would receive on a savings account. The classical model of continuous-time portfolio optimisation ignores this effect. Since there is obviously a connection between the default probability and the percentage of total wealth the investor is in debt, we study portfolio optimisation with a control-dependent interest rate. Assuming a logarithmic and a power utility function, respectively, we prove explicit formulae for the optimal control.
The scope of this paper is to enhance the model of the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stock in his own company, whose value he can directly influence by spending work effort. The executive is characterized by a risk-aversion parameter and the two work-effectiveness parameters inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort the executive will apply, and we derive his optimal investment strategies. Furthermore, we determine an up-front fair cash compensation by applying an indifference utility rationale. Our study shows to a large extent that the results previously obtained are robust under the choice of the utility/disutility set-up.
Optimal control methods for the calculation of invariant excitation signals for multibody systems
(2010)
Input signals are needed for the numerical simulation of vehicle multibody systems. With these input data, the equations of motion can be integrated numerically and output quantities can be calculated from the simulation results. In this work we consider the corresponding inverse problem: we assume that some reference output signals are available, typically gained by measurement, and focus on the task of deriving the input signals that produce the desired reference output in a suitable sense. If the input data is invariant, i.e., independent of the specific system, it can be transferred and used to excite other system variants. This problem can be formulated as an optimal control problem. We discuss solution approaches from optimal control theory and their applicability to this special problem class, and give some simulation results.
This report compiles the experience and results from the OptCast project. The goal of this project was (a) to adapt the methodology of automatic structural optimization to cast parts and (b) to develop and provide foundry-specific optimization tools for foundries and engineering offices. Casting-related restrictions cannot be fully reduced to geometric restrictions, since the local properties depend not only on the geometric shape of the cast part but also on the material used. They can, however, be adequately captured by a casting simulation (solidification simulation and residual stress analysis). Based on this insight, a novel topology optimization method using the level-set technique was developed, in which no variable material density is introduced. In each iteration a sharp boundary of the component is computed. The casting simulation can thus be integrated into the iterative optimization process.
The calculation of the effective heat conductivity for a class of industrial problems is discussed. The composite materials considered are glass and metal foams, fibrous materials, and the like, used in insulation or in advanced heat exchangers. These materials are characterized by a very complex internal structure, a low volume fraction of the more conductive material (glass or metal), and a large volume fraction of air. Homogenization theory, when applicable, allows one to calculate the effective heat conductivity of composite media by postprocessing the solution of special cell problems on representative elementary volumes (REVs). Different formulations of such cell problems are considered and compared here. Furthermore, the size of the REV is studied numerically for some typical materials. Fast algorithms for solving the cell problems for this class of problems are presented and discussed.
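For a simple layered microstructure the cell problems can be solved by hand, which makes a useful sanity check for any numerical upscaling code: heat flux across the layers yields the harmonic mean of the conductivities, and flux along the layers the arithmetic mean. These are the classical Wiener bounds on the effective conductivity. A minimal sketch (function name ours):

```python
def effective_conductivity_layers(fractions, conductivities):
    """Effective conductivity of a layered composite.

    Returns (series, parallel): the harmonic mean (flux across the
    layers) and the arithmetic mean (flux along the layers), which
    bound the effective conductivity of any microstructure with the
    same volume fractions.
    """
    series = 1.0 / sum(f / k for f, k in zip(fractions, conductivities))
    parallel = sum(f * k for f, k in zip(fractions, conductivities))
    return series, parallel
```

For equal volume fractions of materials with conductivities 1 and 4, the bounds are 1.6 (series) and 2.5 (parallel); the large gap at low volume fractions of the conductive phase is exactly why the cell problems of the paper must be solved numerically for foams and fibrous media.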
A two-level domain decomposition preconditioner for 3D flows in anisotropic, highly heterogeneous porous media is presented. An accurate finite volume discretization based on the multipoint flux approximation (MPFA) for the 3D pressure equation is employed to account for the jump discontinuities of the full permeability tensors. A DD/MG-type preconditioner for the above problem is developed. The coarse-scale operator is obtained from a homogenization-type procedure. The influence of the overlap, as well as of the smoother and the cell problem formulation, is studied. Results from numerical experiments are presented and discussed.
The modeling and formulation of optimization problems in IMRT planning comprises the choice of various values such as function-specific parameters or constraint bounds. These values also affect the characteristics of the optimization problem and thus the form of the resulting optimal plans. This publication uses concepts of sensitivity analysis and elasticity in convex optimization to analyze the dependence of optimal plans on the modeling parameters. It also derives general rules of thumb for choosing and modifying the parameters in order to obtain the desired IMRT plan. These rules are numerically validated for an exemplary IMRT planning problem.
We consider the problem of pricing European forward starting options in the presence of stochastic volatility. By performing a change of measure using the asset price at the time of strike determination as a numeraire, we derive a closed-form solution based on Heston’s model of stochastic volatility.
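The effect of the numeraire change is easiest to see in the constant-volatility limit, where the same homogeneity argument reduces an at-the-money forward-start call to a scaled vanilla price: C(S, S, tau) = S * C(1, 1, tau), and taking the discounted expectation of S(t1) gives V0 = S0 * C(1, 1, t2 - t1). The sketch below is this Black-Scholes analogue, offered only as intuition for the measure change, not Heston's formula; names are ours.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, tau):
    """Plain Black-Scholes call price."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s * norm_cdf(d1) - k * exp(-r * tau) * norm_cdf(d2)

def forward_start_call(s0, r, sigma, t1, t2):
    """ATM forward-start call: strike is fixed at S(t1).

    Using S(t1) as numeraire, homogeneity of the Black-Scholes price
    collapses the problem to a vanilla option of maturity t2 - t1 on
    one unit of the asset, scaled by today's price s0.
    """
    return s0 * bs_call(1.0, 1.0, r, sigma, t2 - t1)
```

As a consistency check, letting the strike-determination date t1 tend to 0 recovers the ordinary at-the-money call.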
The iterative solution of the large-scale systems arising after discretization and linearization of the unsteady non-Newtonian Navier-Stokes equations is studied. The Cross-WLF model is used to account for the non-Newtonian behavior of the fluid. A finite volume method is used to discretize the governing system of PDEs. The viscosity is treated explicitly (i.e., it is taken from the previous time step), while the other terms are treated implicitly. Different preconditioners (block-diagonal, block-triangular, relaxed incomplete LU factorization, etc.) are used in conjunction with advanced iterative methods, namely BiCGStab, CGS and GMRES. The action of the preconditioner in fact requires inverting different blocks. For this purpose, in addition to preconditioned BiCGStab, CGS and GMRES, we also use an algebraic multigrid method (AMG). The performance of the iterative solvers is studied with respect to the number of unknowns, the characteristic velocity of the basic flow, the time step, the deviation from Newtonian behavior, etc. Results from numerical experiments are presented and discussed.
A general approach to the construction of discrete equilibrium distributions is presented. Such distribution functions can be used to set up Kinetic Schemes as well as Lattice Boltzmann methods. The general principles are also applied to the construction of Chapman-Enskog distributions which are used in Kinetic Schemes for the compressible Navier-Stokes equations.
This paper deals with the characterization of microscopically heterogeneous but macroscopically homogeneous spatial structures. A new method is presented which is strictly based on integral-geometric formulae such as Crofton's intersection formulae and Hadwiger's recursive definition of the Euler number. The corresponding algorithms have clear advantages over other techniques. As an example of application we consider the analysis of spatial digital images produced by means of Computer Assisted Tomography.
The performance of oil filters used in the automotive industry can be significantly improved, especially when computer simulation is an essential component of the design process. In this paper, we consider parallel numerical algorithms for solving mathematical models describing the process of filtration, filtering solid particles out of liquid oil. The Navier-Stokes-Brinkmann system of equations is used to describe the laminar flow of incompressible isothermal oil. The space discretization in the complicated filter geometry is based on the finite volume method. Special care is taken for an accurate approximation of velocity and pressure on the interface between the fluid and the porous medium. The time discretization used here is a proper modification of the fractional time step discretization (cf. Chorin scheme) of the Navier-Stokes equations, where the Brinkmann term is considered at both the prediction and the correction substeps. A data decomposition method is used to develop a parallel algorithm, where the domain is distributed among the processors using a structured reference grid. The MPI library is used to implement the data communication part of the algorithm. A theoretical model is proposed for estimating the complexity of the parallel algorithm, and a scalability analysis is done on the basis of this model. Results of computational experiments are presented, and the accuracy and efficiency of the parallel algorithm are tested on real industrial geometries.
A numerical upscaling approach (NU) for solving multiscale elliptic problems is discussed. The main components of this NU are: (i) local solution of auxiliary problems in grid blocks and formal upscaling of the obtained results to build a coarse-scale equation; (ii) global solution of the upscaled coarse-scale equation; and (iii) reconstruction of a fine-scale solution by solving local block problems on a dual coarse grid. In its structure, NU is similar to other methods for solving multiscale elliptic problems, such as the multiscale finite element method (MsFEM), the multiscale mixed finite element method, the numerical subgrid upscaling method, the heterogeneous multiscale method, and the multiscale finite volume method. The difference from those methods lies in the way the coarse-scale equation is built and solved, and in the way the fine-scale solution is reconstructed. Essential components of the presented NU approach are the formal homogenization in the coarse blocks and the use of the so-called multipoint flux approximation (MPFA) method. Unlike its usual use as a discretization method for single-scale elliptic problems with discontinuous tensor coefficients, we consider its use as part of a numerical upscaling approach. The main aim of this paper is to compare NU with the MsFEM. In particular, it is shown that the resonance effect, which limits the application of the multiscale FEM, does not appear, or is significantly relaxed, when the presented numerical upscaling approach is applied.
In soil mechanics, the assumption of only vertical subsidence is often invoked, and this leads to the one-dimensional model of poroelasticity. The classical model of linear poroelasticity was obtained by Biot [1]; a detailed derivation can be found, e.g., in [2]. This model is also applicable to modelling certain processes in geomechanics, hydrogeology and petroleum engineering (see, e.g., [3, 8]), in biomechanics (e.g., [9, 10]), in filtration (e.g., filter cake formation, see [15, 16, 17]), in paper manufacturing (e.g., [11, 12]), in printing (e.g., [13]), etc. Finite element and finite difference methods were applied by many authors to the numerical solution of the Biot system of PDEs, see e.g. [3, 4, 5] and the references therein. However, as is well known, the standard FEM and FDM methods are subject to numerical instabilities at the first time steps. To avoid this, a discretization on a staggered grid was suggested in [4, 5], where a single-layer deformable porous medium was considered. This paper can be viewed as an extension of [4, 5] to the case of multilayered deformable porous media. A finite volume discretization of the interface problem for the classical one-dimensional Biot model of the consolidation process is applied here. The following assumptions are supposed to be valid: each of the porous layers is composed of an incompressible solid matrix and is homogeneous and isotropic. Furthermore, one of the two following assumptions holds: the porous medium is not completely saturated and the fluid is incompressible, or the porous medium is completely saturated and the fluid is slightly compressible. The remainder of the paper is organised as follows. The next section presents the mathematical model. The third section is devoted to the discretization of the continuous problem. The fourth section contains the results of the numerical experiments.
This paper concerns the numerical simulation of flow through oil filters. Oil filters consist of a filter housing (filter box) and a porous filtering medium, which completely separates the inlet from the outlet. We discuss mathematical models describing the coupled flows in the pure liquid subregions and in the porous filter media, as well as the interface conditions between them. Further, we reformulate the problem in the manner of the fictitious regions method and discuss peculiarities of the numerical algorithm for solving the coupled system. Next, we show numerical results validating the model and the algorithm. Finally, we present results from a simulation of 3-D oil flow through a real car filter.
In this paper we consider numerical algorithms for solving a system of nonlinear PDEs arising in the modeling of liquid polymer injection. We investigate the particular case when a porous preform is located within the mould, so that the liquid polymer flows through a porous medium during the filling stage. The nonlinearity of the governing system of PDEs is due to the non-Newtonian behavior of the polymer as well as to the moving free boundary. The latter is related to the penetration front, and a Stefan-type problem is formulated to account for it. A finite volume method is used to approximate the given differential problem. Results of numerical experiments are presented. We also solve an inverse problem and present algorithms for determining the absolute preform permeability coefficient in the case when the velocity of the penetration front is known from measurements. For both the direct and the inverse problem we emphasize the specifics related to the non-Newtonian behavior of the polymer. For completeness, we also discuss the Newtonian case. Results of some experimental measurements are presented and discussed.
In this paper, mathematical models for liquid films generated by impinging jets are discussed. Attention is focused on the interaction of the liquid film with an obstacle. S. G. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model which allows simulating the film shape in cases when the angle between the jets differs from 180°. Numerical results obtained with the discussed models give two different shapes of the liquid film, similar to those in Taylor's experiments. The two shapes depend on the regime: either droplets are produced close to the obstacle or not. The difference between the two regimes grows as the angle between the jets decreases. The existence of these two regimes can be essential for some applications of impinging jets in which the generated liquid film can come into contact with obstacles.
The flow of a non-Newtonian fluid in saturated porous media can be described by the continuity equation and the generalized Darcy law. The efficient solution of the resulting second-order nonlinear elliptic equation is discussed here. The equation is discretized by a finite volume method on a cell-centered grid. Local adaptive refinement of the grid is introduced in order to reduce the number of unknowns. A special implementation approach is used which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Two residual-based error indicators are exploited in the adaptive refinement criterion. A second-order accurate discretization of the fluxes on the interfaces between refined and non-refined subdomains, as well as on boundaries with Dirichlet boundary conditions, is presented here as an essential part of the accurate and efficient algorithm. A nonlinear full approximation storage multigrid algorithm is developed especially for the above composite (coarse plus locally refined) grid approach. In particular, the second-order approximation of the fluxes around the interfaces results from a quadratic approximation of the slave nodes in the multigrid adaptive refinement (MG-AR) algorithm. Results from the numerical solution of various academic and practice-induced problems are presented, and the performance of the solver is discussed.
Finite difference discretizations of the 1D poroelasticity equations with discontinuous coefficients are analyzed. A recently suggested FD discretization of the poroelasticity equations with constant coefficients on a staggered grid [5] is used as a basis. A careful treatment of the interfaces leads to harmonic averaging of the discontinuous coefficients. Convergence of the pressure and the displacement in certain norms is proven here for the scheme with harmonic averaging (HA). Convergence of order 1.5 is proven for an arbitrarily located interface, and second-order convergence is proven for the case when the interface coincides with a grid node. Furthermore, following the ideas from [3], modified HA discretizations are suggested for particular cases; the velocity and the stress are then approximated with second order on the interface. It is shown that for a wide class of problems the modified discretization provides better accuracy. Second-order convergence of the modified scheme is proven for the case when the interface coincides with a displacement grid node. Numerical experiments are presented to illustrate our considerations.
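The effect of harmonic averaging is easy to demonstrate on the stationary 1D model problem -(k u')' = 0: with cell-centered unknowns and harmonic-mean face coefficients, the scheme reproduces the piecewise-linear exact solution exactly whenever the coefficient jump falls on a cell face. A minimal sketch (assumptions ours: unit interval, uniform grid, Dirichlet data):

```python
def harmonic(a, b):
    """Harmonic average of two cell coefficients."""
    return 2.0 * a * b / (a + b)

def solve_interface(k_cells, u_left, u_right):
    """Cell-centered solve of -(k u')' = 0 on (0, 1), Dirichlet BCs,
    harmonic averaging of k at faces, Thomas algorithm."""
    n = len(k_cells)
    h = 1.0 / n
    # transmissibilities at the n+1 faces (half-cell at the boundaries)
    T = [2.0 * k_cells[0] / h]
    T += [harmonic(k_cells[i], k_cells[i + 1]) / h for i in range(n - 1)]
    T += [2.0 * k_cells[-1] / h]
    # assemble tridiagonal system: sub (a), diag (d), super (c), rhs (b)
    a = [0.0] * n
    d = [0.0] * n
    c = [0.0] * n
    b = [0.0] * n
    for i in range(n):
        d[i] = T[i] + T[i + 1]
        if i > 0:
            a[i] = -T[i]
        else:
            b[i] += T[0] * u_left
        if i < n - 1:
            c[i] = -T[i + 1]
        else:
            b[i] += T[n] * u_right
    # Thomas algorithm: forward elimination, back substitution
    for i in range(1, n):
        w = a[i] / d[i - 1]
        d[i] -= w * c[i - 1]
        b[i] -= w * b[i - 1]
    u = [0.0] * n
    u[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (b[i] - c[i] * u[i + 1]) / d[i]
    return u
```

With k = 1 on (0, 0.5) and k = 4 on (0.5, 1), u(0) = 0 and u(1) = 1, the exact flux is 1.6 and the exact solution is piecewise linear; four cells already reproduce it to machine precision because the jump sits on a face, in line with the second-order result quoted above for aligned interfaces.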
This work presents a proof of convergence of a discrete solution to a continuous one. First, the continuous problem is stated as a system of equations which describe the filtration process in the pressing section of a paper machine. Two flow regimes appear in the modeling of this problem. The saturated flow is modeled by Darcy's law together with mass conservation; the second regime is described by the Richards approach together with a dynamic capillary pressure model. The finite volume method is used to approximate the system of PDEs. Then the existence of a discrete solution to the proposed finite difference scheme is proven, and compactness of the set of all discrete solutions for different mesh sizes is established. The main theorem shows that the discrete solution converges to the solution of the continuous problem. At the end we present numerical studies of the rate of convergence.
Finding "good" cycles in graphs is a problem of great interest in graph theory as well as in locational analysis. We show that the center and median problems are NP-hard in general graphs. This result holds both for the variable cardinality case (i.e. all cycles of the graph are considered) and the fixed cardinality case (i.e. only cycles with a given cardinality p are feasible). Hence it is of interest to investigate special cases where the problem is solvable in polynomial time. In grid graphs, the variable cardinality case is, for instance, trivially solvable if the shape of the cycle can be chosen freely. If the shape is fixed to be a rectangle, one can analyse rectangles in grid graphs with, in sequence, fixed dimension, fixed cardinality, and variable cardinality. In all cases a complete characterization of the optimal cycles and closed-form expressions for the optimal objective values are given, yielding polynomial time algorithms for all cases of center rectangle problems. Finally, it is shown that center cycles can be chosen as rectangles for small cardinalities, such that the center cycle problem in grid graphs is completely solved in these cases.
The approximation property of the multipoint flux approximation (MPFA) approach for elliptic equations with discontinuous full tensor coefficients is discussed here. A finite volume discretization of the above problem is presented in the case of jump discontinuities of the permeability tensor. First order approximation of the fluxes is proved. Results from numerical experiments are presented and discussed.
This work deals with the optimal control of a free surface Stokes flow which responds to an applied outer pressure. Typical applications are fiber spinning or thin film manufacturing. We present and discuss two adjoint-based optimization approaches that differ in the treatment of the free boundary as either state or control variable. In both cases the free boundary is modeled as the graph of a function. The PDE-constrained optimization problems are numerically solved by the BFGS method, where the gradient of the reduced cost function is expressed in terms of adjoint variables. Numerical results for both strategies are finally compared with respect to accuracy and efficiency.
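The adjoint-based reduced-gradient structure described above can be illustrated on a deliberately tiny problem. The sketch below uses an assumed scalar model (not the free surface Stokes setup) and a plain gradient step standing in for the BFGS update; it shows how an adjoint variable yields the gradient of the reduced cost:

```python
# Toy sketch of adjoint-based reduced-gradient optimization:
# state equation  a*u = c  (c is the control),
# cost  J(c) = 0.5*(u - u_d)**2 + 0.5*alpha*c**2.
a, u_d, alpha = 2.0, 1.0, 0.1

def reduced_cost_and_grad(c):
    u = c / a                       # solve the state equation a*u = c
    p = (u - u_d) / a               # solve the adjoint equation a*p = u - u_d
    J = 0.5 * (u - u_d) ** 2 + 0.5 * alpha * c ** 2
    dJ = alpha * c + p              # reduced gradient expressed via adjoint p
    return J, dJ

# Plain gradient descent stands in for the BFGS update of the paper.
c = 0.0
for _ in range(200):
    J, g = reduced_cost_and_grad(c)
    c -= 0.5 * g

# Optimality: alpha*c + (c/a - u_d)/a = 0  =>  c = (u_d/a) / (alpha + 1/a**2)
```

The point of the adjoint construction is that one extra (linear) solve per iteration gives the exact gradient of the reduced cost, regardless of how expensive the state equation is.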
This paper discusses a numerical subgrid resolution approach for solving the Stokes-Brinkman system of equations, which describes coupled flow in plain and in highly porous media. Various scientific and industrial problems are described by this system, and often the geometry and/or the permeability vary on several scales. A particular target is the process of oil filtration. In many complicated filters, the filter medium or the filter element geometry is too fine to be resolved by a feasible computational grid. The subgrid approach presented in the paper describes how these fine details are accounted for by solving auxiliary problems in appropriately chosen grid cells on a relatively coarse computational grid. This is done via a systematic and careful procedure of modifying and updating the coefficients of the Stokes-Brinkman system in the chosen cells. This numerical subgrid approach is motivated on one side by homogenization theory, from which we borrow the formulation of the so-called cell problem, and on the other side by numerical upscaling approaches such as Multiscale Finite Volume, Multiscale Finite Element, etc. Results on the algorithm's efficiency, both in terms of computational time and memory usage, are presented. Comparisons with solutions on the full fine grid (when possible) are presented in order to evaluate the accuracy. Advantages and limitations of the considered subgrid approach are discussed.
On a multigrid solver for the three-dimensional Biot poroelasticity system in multilayered domains
(2006)
In this paper, we present problem-dependent prolongation and problem-dependent restriction for a multigrid solver for the three-dimensional Biot poroelasticity system, which is solved in a multilayered domain. The system is discretized on a staggered grid using the finite volume method. During the discretization, special care is taken of the discontinuous coefficients. For an efficient multigrid solver, a need for operator-dependent restriction and/or prolongation arises. We derive these operators so that they are consistent with the discretization. They account for the discontinuities of the coefficients, as well as for the coupling of the unknowns within the Biot system. A set of numerical experiments shows the necessity of using the operator-dependent restriction and prolongation in the multigrid solver for the considered class of problems.
On a Multigrid Adaptive Refinement Solver for Saturated Non-Newtonian Flow in Porous Media A multigrid adaptive refinement algorithm for non-Newtonian flow in porous media is presented. The saturated flow of a non-Newtonian fluid is described by the continuity equation and the generalized Darcy law. The resulting second order nonlinear elliptic equation is discretized by a finite volume method on a cell-centered grid. A nonlinear full-multigrid, full-approximation-storage algorithm is implemented. As a smoother, a single grid solver based on Picard linearization and Gauss-Seidel relaxation is used. Further, a local refinement multigrid algorithm on a composite grid is developed. A residual based error indicator is used in the adaptive refinement criterion. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Several results from numerical experiments are presented in order to examine the performance of the solver.
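A minimal sketch of the full-approximation-storage (FAS) idea on a 1-D nonlinear model problem (an illustration only; the paper treats the generalized Darcy equation on composite, locally refined grids):

```python
# Two-grid FAS sketch for  -u'' + u**3 = f,  u(0) = u(1) = 0,
# with nonlinear Gauss-Seidel smoothing (a stand-in for the paper's
# Picard/Gauss-Seidel smoother).

def residual(u, f, h):
    n = len(u) - 1
    return [0.0] + [f[i] - ((2*u[i] - u[i-1] - u[i+1]) / h**2 + u[i]**3)
                    for i in range(1, n)] + [0.0]

def gauss_seidel(u, f, h, sweeps):
    n = len(u) - 1
    for _ in range(sweeps):
        for i in range(1, n):
            for _newton in range(2):   # local Newton for the nodal equation
                g = (2*u[i] - u[i-1] - u[i+1]) / h**2 + u[i]**3 - f[i]
                u[i] -= g / (2 / h**2 + 3 * u[i]**2)
    return u

def restrict(v):    # full weighting, fine (2m cells) -> coarse (m cells)
    m = (len(v) - 1) // 2
    return [0.0] + [0.25*v[2*i-1] + 0.5*v[2*i] + 0.25*v[2*i+1]
                    for i in range(1, m)] + [0.0]

def prolong(vc):    # linear interpolation, coarse -> fine
    m = len(vc) - 1
    vf = [0.0] * (2*m + 1)
    for i in range(m + 1):
        vf[2*i] = vc[i]
    for i in range(m):
        vf[2*i + 1] = 0.5 * (vc[i] + vc[i+1])
    return vf

def fas_cycle(u, f, h):
    u = gauss_seidel(u, f, h, 3)                 # pre-smoothing
    r_c = restrict(residual(u, f, h))
    u_c0 = restrict(u)
    # FAS coarse right-hand side:  A_H(R u) + R (f - A_h u)
    a_c = [(2*u_c0[i] - u_c0[i-1] - u_c0[i+1]) / (2*h)**2 + u_c0[i]**3
           if 0 < i < len(u_c0) - 1 else 0.0 for i in range(len(u_c0))]
    f_c = [x + y for x, y in zip(a_c, r_c)]
    u_c = gauss_seidel(list(u_c0), f_c, 2*h, 200)  # "solve" coarse problem
    corr = prolong([x - y for x, y in zip(u_c, u_c0)])
    u = [x + y for x, y in zip(u, corr)]
    return gauss_seidel(u, f, h, 3)              # post-smoothing

n = 16; h = 1.0 / n
f = [1.0] * (n + 1)
u = [0.0] * (n + 1)
for _ in range(6):
    u = fas_cycle(u, f, h)
```

Unlike correction-scheme multigrid, FAS transfers the full approximation to the coarse grid, so the coarse problem is again nonlinear and no global linearization is needed.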
In this paper we propose a finite volume discretization for the three-dimensional Biot poroelasticity system in multilayered domains. For stability reasons, staggered grids are used. The discretization accounts for the discontinuity of the coefficients across the interfaces between layers with different physical properties. Numerical experiments based on the proposed discretization showed second order convergence in the maximum norm for the primary as well as the flux unknowns of the system. An application example is presented as well.
Abstract — Various advanced two-level iterative methods are studied numerically and compared with each other in conjunction with finite volume discretizations of symmetric 1-D elliptic problems with highly oscillatory discontinuous coefficients. Some of the methods considered rely on the homogenization approach for deriving the coarse grid operator. This approach is considered here as an alternative to the well-known Galerkin approach for deriving coarse grid operators. Different intergrid transfer operators are studied, primary consideration being given to the use of the so-called problem-dependent prolongation. The two-grid methods considered are used both as solvers and as preconditioners for the Conjugate Gradient method. Recent approaches, such as the hybrid domain decomposition method introduced by Vassilevski and the global-local iterative procedure proposed by Durlofsky et al., are also discussed. A two-level method converging in one iteration in the case where the right-hand side is only a function of the coarse variable is introduced and discussed. Such fast convergence for problems with discontinuous coefficients arbitrarily varying on the fine scale is achieved by a problem-dependent selection of the coarse grid combined with problem-dependent prolongation on a dual grid. The results of the numerical experiments are presented to illustrate the performance of the studied approaches.
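The 1-D flavor of problem-dependent prolongation can be sketched as follows (a toy illustration, not the paper's exact operators): the interpolated value at a new fine node is chosen so that the flux is continuous across it, rather than by geometric linear interpolation.

```python
# Problem-dependent interpolation between coarse nodes uL and uR,
# with piecewise-constant coefficients kL (left interval) and kR
# (right interval). The new midpoint value um is fixed by flux
# continuity:  kL*(um - uL) = kR*(uR - um).

def flux_continuous_value(uL, uR, kL, kR):
    """Coefficient-weighted interpolation; reduces to the linear
    midpoint value 0.5*(uL + uR) when kL == kR."""
    return (kL * uL + kR * uR) / (kL + kR)

# With a strong jump the new node follows the low-permeability side:
print(flux_continuous_value(0.0, 1.0, 1.0, 100.0))  # ~0.990
# Plain linear interpolation would give 0.5 regardless of k.
```

This is the simplest instance of the operator-dependent transfer operators the abstract compares: the interpolation weights are read off from the discretized flux rather than from the grid geometry.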
Abstract. The stationary, isothermal rotational spinning process of fibers is considered. The investigations are concerned with the case of large Reynolds numbers (δ = 3/Re ≪ 1) and small Rossby numbers (ε ≪ 1). Modelling the fibers as a Newtonian fluid and applying slender body approximations, the process is described by a two-point boundary value problem of ODEs. The involved quantities are the coordinates of the fiber's centerline, the fluid velocity and the viscous stress. The inviscid case δ = 0 is discussed as a reference case. For the viscous case δ > 0 numerical simulations are carried out. Transferring some properties of the inviscid limit to the viscous case, analytical bounds for the initial viscous stress of the fiber are obtained. A good agreement with the numerical results is found. These bounds give strong evidence that for δ > 3ε² no physically relevant solution can exist. A possible interpretation of the above coupling of δ and ε related to the die-swell phenomenon is given.
Classical geometrically exact Kirchhoff and Cosserat models are used to study the nonlinear deformation of rods. Extension, bending and torsion of the rod may be represented by the Kirchhoff model. The Cosserat model additionally takes into account shearing effects. Second order finite differences on a staggered grid define discrete viscoelastic versions of these classical models. Since the rotations are parametrised by unit quaternions, the space discretisation results in differential-algebraic equations that are solved numerically by standard techniques like index reduction and projection methods. Using absolute coordinates, the mass and constraint matrices are sparse and this sparsity may be exploited to speed-up time integration. Further improvements are possible in the Cosserat model, because the constraints are just the normalisation conditions for unit quaternions such that the null space of the constraint matrix can be given analytically. The results of the theoretical investigations are illustrated by numerical tests.
The rotational spinning of viscous jets is of interest in many industrial applications, including pellet manufacturing [4, 14, 19, 20] and drawing, tapering and spinning of glass and polymer fibers [8, 12, 13], see also [15, 21] and references therein. In [12] an asymptotic model for the dynamics of curved viscous inertial fiber jets emerging from a rotating orifice under surface tension and gravity was deduced from the three-dimensional free boundary value problem given by the incompressible Navier-Stokes equations for a Newtonian fluid. In the terminology of [1], it is a string model consisting of balance equations for mass and linear momentum. Accounting for inner viscous transport, surface tension and placing no restrictions on either the motion or the shape of the jet's center-line, it generalizes the previously developed string models for straight [3, 5, 6] and curved center-lines [4, 13, 19]. Moreover, the numerical results investigating the effects of viscosity, surface tension, gravity and rotation on the jet behavior coincide well with the experiments of Wong et al. [20].
In this paper, a new mixed integer mathematical programme is proposed for the application of Hub Location Problems (HLP) in public transport planning. This model is among the few existing ones for this application. Some classes of valid inequalities are proposed, yielding a very tight model. To solve instances of this problem where existing standard solvers fail, two approaches are proposed: the first is an exact accelerated Benders decomposition algorithm and the second a greedy neighborhood search. The computational results substantiate the superiority of our solution approaches over existing standard MIP solvers like CPLEX, both in terms of computational time and the problem instance size that can be solved. The greedy neighborhood search heuristic is shown to be extremely efficient.
Structuring global supply chain networks is a complex decision-making process. The typical inputs to such a process consist of a set of customer zones to serve, a set of products to be manufactured and distributed, demand projections for the different customer zones, and information about future conditions, costs (e.g. for production and transportation) and resources (e.g. capacities, available raw materials). Given the above inputs, companies have to decide where to locate new service facilities (e.g. plants, warehouses), how to allocate procurement and production activities to the various manufacturing facilities, and how to manage the transportation of products through the supply chain network in order to satisfy customer demands. We propose a mathematical modelling framework capturing many practical aspects of network design problems simultaneously. For problems of reasonable size we report on computational experience with standard mathematical programming software. The discussion is extended with other decisions required by many real-life applications in strategic supply chain planning. In particular, the multi-period nature of some decisions is addressed by a more comprehensive model, which is solved by a specially tailored heuristic approach. The numerical results suggest that the solution procedure can identify high quality solutions within reasonable computational time.
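The core location/allocation trade-off behind such network-design models can be sketched on a toy single-period instance (hypothetical data and names; the paper's MIP captures far more practical aspects, including capacities and multiple periods):

```python
# Toy uncapacitated plant-location problem: choose which plants to
# open so that fixed opening cost plus transportation cost to serve
# all customer zones is minimal. Solved by brute-force enumeration,
# which is fine at this size (a real model would use a MIP solver).
from itertools import combinations

fixed_cost = {"P1": 50.0, "P2": 70.0, "P3": 60.0}
# transport[p][z]: cost of serving customer zone z entirely from plant p
transport = {
    "P1": {"Z1": 10, "Z2": 40, "Z3": 55},
    "P2": {"Z1": 35, "Z2": 12, "Z3": 30},
    "P3": {"Z1": 50, "Z2": 25, "Z3": 15},
}
zones = ["Z1", "Z2", "Z3"]

def total_cost(open_plants):
    if not open_plants:
        return float("inf")
    # each zone is served from its cheapest open plant
    serve = sum(min(transport[p][z] for p in open_plants) for z in zones)
    return sum(fixed_cost[p] for p in open_plants) + serve

best = min((frozenset(s) for r in range(1, 4)
            for s in combinations(fixed_cost, r)), key=total_cost)
```

Even this tiny instance shows the tension the abstract describes: opening more plants lowers transportation cost but raises fixed cost, and the optimum balances the two.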
In the present paper a kinetic model for vehicular traffic leading to multivalued fundamental diagrams is developed and investigated in detail. For this model phase transitions can appear depending on the local density and velocity of the flow. A derivation of associated macroscopic traffic equations from the kinetic equation is given. Moreover, numerical experiments show the appearance of stop and go waves for highway traffic with a bottleneck.
In this work we extend the multiscale finite element method (MsFEM) as formulated by Hou and Wu in [14] to the PDE system of linear elasticity. The application, motivated by the multiscale analysis of highly heterogeneous composite materials, is twofold. Resolving the heterogeneities on the finest scale, we utilize the linear MsFEM basis for the construction of robust coarse spaces in the context of two-level overlapping domain decomposition preconditioners. We motivate and explain the construction and present numerical results validating the approach. Under the assumption that the material jumps are isolated, that is, they occur only in the interior of the coarse grid elements, our experiments show uniform convergence rates independent of the contrast in the Young's modulus within the heterogeneous material. Otherwise, if no restrictions on the position of the high-coefficient inclusions are imposed, robustness cannot be guaranteed any more. These results justify expectations of obtaining coefficient-explicit condition number bounds for the PDE system of linear elasticity similar to existing ones for scalar elliptic PDEs, as given in the work of Graham, Lechner and Scheichl [12]. Furthermore, we numerically observe the properties of the MsFEM coarse space for linear elasticity in an upscaling framework. To this end, we present experimental results showing the approximation errors of the multiscale coarse space w.r.t. the fine-scale solution.
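In one dimension the MsFEM basis construction admits a closed form, which makes the idea easy to sketch (an illustration only; the abstract works with the 2-D/3-D elasticity system): on a coarse element the multiscale basis solves the local problem (k u')' = 0 with nodal values 0 and 1, so it bends at the jumps of k instead of being linear.

```python
# 1-D MsFEM basis on one coarse element split into equal fine cells
# with piecewise-constant coefficient k. The local solution of
# (k u')' = 0, u(0)=0, u(1)=1 is u(x) = int_0^x 1/k  /  int_0^1 1/k.

def msfem_basis(k_cells):
    """Nodal values of the multiscale basis at the fine nodes of the
    coarse element (len(k_cells) cells -> len(k_cells)+1 nodes)."""
    inv = [1.0 / k for k in k_cells]
    total = sum(inv)
    vals, acc = [0.0], 0.0
    for w in inv:
        acc += w
        vals.append(acc / total)
    return vals

# Strong inclusion in the middle of the element: the basis is nearly
# flat across the stiff (high-k) region and steep in the soft region.
basis = msfem_basis([1.0, 1.0, 100.0, 100.0, 1.0, 1.0])
```

Because the basis already resolves the fine-scale coefficient, a coarse space spanned by such functions can remain accurate (and, as preconditioner ingredient, robust) where standard linear finite element functions fail.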
A non-linear multigrid solver for the incompressible Navier-Stokes equations, exploiting a finite volume discretization of the equations, is extended by adaptive local refinement. The multigrid is the outer iterative cycle, while the SIMPLE algorithm is used as a smoothing procedure. Error indicators are used to define the refinement subdomain. A special implementation approach is used, which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. The multigrid-adaptive local refinement algorithm is tested on the 2D Poisson equation and is then applied to lid-driven flows in a cavity (2D and 3D cases), comparing the results with benchmark data. The software design principles of the solver are also discussed.
Inverse treatment planning of intensity modulated radiotherapy is a multicriteria optimization problem: planners have to find optimal compromises between a sufficiently high dose in tumor tissue that guarantees high tumor control and the avoidance of dangerous overdosing of critical structures, in order to keep normal tissue complication problems low. The approach presented in this work demonstrates how to state a flexible generic multicriteria model of the IMRT planning problem and how to produce clinically highly relevant Pareto solutions. The model is embedded in a principal concept of Reverse Engineering, a general optimization paradigm for design problems. Relevant parts of the Pareto set are approximated by using extreme compromises as cornerstone solutions, a concept that is always feasible if box constraints for the objective functions are available. A major practical drawback of generic multicriteria concepts trying to compute or approximate parts of the Pareto set is the high computational effort. This problem can be overcome by exploiting an inherent asymmetry of the IMRT planning problem and an adaptive approximation scheme for optimal solutions based on an adaptive clustering preprocessing technique. Finally, a coherent approach for calculating and selecting solutions in a real-time interactive decision-making process is presented. The paper concludes with clinical examples and a discussion of ongoing research topics.
In this paper, we present a viscoelastic rod model that is suitable for fast and accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing (‘stiff’ dof), bending and torsion (‘soft’ dof). For inner dissipation, a consistent damping potential proposed by Antman is chosen. We parametrise the rotational dof by unit quaternions and directly use the quaternionic evolution differential equation for the discretisation of the Cosserat rod curvature. The discrete version of our rod model is obtained via a finite difference discretisation on a staggered grid. After an index reduction from three to zero, the right-hand side function f and the Jacobian \(\partial f/\partial(q, v, t)\) of the dynamical system \(\dot{q} = v, \dot{v} = f(q, v, t)\) are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. A comparison with Abaqus finite element results demonstrates the correct mechanical behavior of our discrete rod model. For the time integration of the system, we use well established stiff solvers like RADAU5 or DASPK. As our model yields computational times within milliseconds, it is suitable for interactive applications in ‘virtual reality’ as well as for multibody dynamics simulation.
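The quaternion handling described above can be sketched on a toy rigid rotation (an assumed constant angular velocity, not the full rod dynamics): integrate the quaternionic evolution equation and project back onto the unit-norm constraint, which is exactly the normalisation condition of the Cosserat parametrisation.

```python
# Sketch: explicit Euler for  q' = 0.5 * q (x) (0, w)  with projection
# back onto the unit quaternions after every step.
import math

def quat_mult(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def step(q, omega, dt):
    dq = quat_mult(q, (0.0,) + omega)            # q' = 0.5 * q (x) (0, w)
    q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(c * c for c in q))         # projection: renormalise
    return tuple(c / n for c in q)

q = (1.0, 0.0, 0.0, 0.0)
omega = (0.0, 0.0, 1.0)        # constant spin about the z-axis
dt, steps = 1e-3, 1000
for _ in range(steps):
    q = step(q, omega, dt)
# After t = 1 the quaternion should be close to (cos 0.5, 0, 0, sin 0.5),
# i.e. a rotation by ~1 rad about z, and exactly unit norm.
```

The projection step is the simplest stabilisation of the unit-norm constraint; the paper's index-reduced formulation achieves the same consistency within the DAE framework.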
In this paper, we present a viscoelastic rod model that is suitable for fast and sufficiently accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing (’stiff’ dof), bending and torsion (’soft’ dof). For inner dissipation, a consistent damping potential from Antman is chosen. Our discrete model is based on a finite difference discretisation on a staggered grid. The right-hand side function f and the Jacobian ∂f/∂(q, v, t) of the dynamical system q˙ = v, v˙ = f(q, v, t) – after index reduction from three to zero – are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. For the time integration of the system, we use well established stiff solvers like RADAU5 or DASPK. As our model yields computation times within milliseconds, it is suitable for interactive manipulation in ’virtual reality’ applications. In contrast to common fast VR rod models, our model reflects the structural mechanics solutions sufficiently correctly, as a comparison with ABAQUS finite element results shows.