By aiming to investigate, in a holistic approach, the factors that lead immigrants to stay in rural areas and to develop suitable approaches for their integration, the project addresses a research gap that has so far received little attention in this form. This niche is characterized by the fact that the project combines interrelated questions of integration research, urban planning, and sustainable municipal development — questions that have so far mostly been treated within separate disciplines — with particular attention to demographic challenges. This thematic interlinkage is also reflected in the project's interdisciplinary procedure: the project's research questions are addressed from the perspective of, and with methodological approaches from, the social, economic, and planning sciences.
The strong and direct involvement of the municipal partners and other practice-oriented actors from the outset ensures that different scientific and non-scientific perspectives as well as practical knowledge are integrated into the research process from the very beginning, in order to establish a shared understanding of the problem and a high relevance of the results for municipal practice.
Immigration as such is not a new phenomenon in the history of the Federal Republic of Germany, as is reflected in a wide range of studies and publications on the factors influencing the integration of various migrant groups (e.g., "guest workers" and their families, ethnic German resettlers and late resettlers from Eastern Europe, humanitarian migrants and refugees, and first- and second-generation migrants). In addition, there are studies on individual aspects of integration, such as participation in the labor market, the school system, and integration into the housing market, as well as location-specific case studies. These investigations, however, examine the integration of immigrants in general without addressing the particular characteristics of small-town and rural municipalities. This topic is taken up in a study by the Schader Stiftung which, alongside the challenges and framework conditions in the municipalities, outlines several aspects of and options for action on integration.
The particular challenges that demographic change poses for municipalities are likewise the subject of numerous publications. Small-town and rural municipalities are especially strongly affected by this megatrend, so that in many cases their long-term viability may be at risk. The integration of immigrants in rural areas can open up potentials for municipalities to partially offset the negative effects of this trend.
From an urban planning perspective, municipalities facing demographic challenges in the sense of a (sharply) shrinking population have significant occasion for the structural reuse of brownfield sites, the closing of gaps between buildings, and densification within the existing building stock: potential macroeconomic consequences are to be expected when socio-spatially homogeneous housing stocks are affected by the loss of attractiveness and image of neighboring sub-areas with high housing vacancy rates. In addition, the business-related (cost factors) and urban-design effects must be counteracted in order to prevent potential structural decay as well as urban-structural, functional, and social deficits.
Efficient use of land resources within settlements, both through the reuse of brownfield sites and through the reactivation of vacant housing, enables municipalities to reduce the new consumption of land for settlement and transport. In this way, the targets for reducing additional land consumption formulated in the National Sustainability Strategy can be met. The economical use of land and the limitation of soil sealing are defined as urban development tasks by the soil protection clause pursuant to §1a Abs. 2 BauGB. In adaptation to local and urban-design conditions, opportunities for inner-urban development are to be used instead of designating new building land. Where undeveloped land is used, a land-saving construction method is moreover to be preferred. This can be achieved through corresponding designations and stipulations in land-use plans, for example by refraining from designating (new) building land in preparatory land-use plans (Flächennutzungspläne) or by setting maximum limits on the building use of residential plots in binding land-use plans (Bebauungspläne; § 9 Abs. 1 Nr. 3 BauGB).
Instead of designating new residential areas on the periphery of the settlement structure, inner-urban development makes an important contribution to sustainable urban development within the meaning of §1 Abs. 5 BauGB, both for maintaining vibrant centers and for limiting new land consumption. Prerequisites for the successful reuse of land and building potentials within settlements are knowledge of the existing inner-development potentials and of their availability.
According to the current state of science and technology, brownfield sites, building gaps, and vacancies can be recorded centrally in a geographic information system (GIS). To record and manage housing vacancies in a municipal vacancy register, essentially the following methods and data sources can be used: the analysis of utility data (electricity/water), surveys by on-site inspection (visual assessment from the outside by trained personnel), owner surveys, statistical estimation procedures (housing-stock updates and population registers), and surveys of municipal officials (local council chairpersons, mayors). To verify the data, these methods are combined: vacancies identified via the electricity-meter method are supplemented and checked for plausibility through additional surveys of housing companies/owners or local officials, so that quantitative data (meter-based methods) are complemented by qualitative surveys (interviews). Since access to these municipal data holdings proved difficult, the research group of the senior research professorship in urban planning instead relied on accessible public data (the 2011 census surveys on housing vacancies) as well as commercially available market research data (microm Geo-Milieus®), since these are empirically validated and widely used in the context of municipal development (e.g., participation procedures, residential land development).
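The plausibility check described above — quantitative, meter-based vacancy flags complemented and cross-checked by qualitative surveys — could be sketched as follows. This is a minimal illustration only: the addresses, the status labels, and the `plausibilize` helper are hypothetical and not part of the project's actual tooling.

```python
def plausibilize(meter_flags, survey_flags):
    """Cross-check meter-based vacancy flags against survey responses.

    meter_flags:  dict mapping address -> True if utility data suggest vacancy
    survey_flags: dict mapping address -> True/False from owner/official surveys
                  (addresses missing here were not covered by any survey)
    Returns a dict mapping address -> combined status label.
    """
    combined = {}
    for address, metered in meter_flags.items():
        surveyed = survey_flags.get(address)  # None if no survey coverage
        if metered and surveyed is True:
            combined[address] = "confirmed vacant"       # both sources agree
        elif metered and surveyed is False:
            combined[address] = "contradicted"           # needs on-site inspection
        elif metered:
            combined[address] = "unverified"             # quantitative signal only
        else:
            combined[address] = "presumed occupied"
    return combined


# Hypothetical example: three addresses flagged by meter data, two surveyed.
result = plausibilize(
    {"Hauptstr. 1": True, "Hauptstr. 2": True, "Hauptstr. 3": False},
    {"Hauptstr. 1": True, "Hauptstr. 2": False},
)
```

The point of the combination is visible in the example: only agreement between both data sources yields a confirmed vacancy, while disagreement marks the record for follow-up rather than silently overwriting either source.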
Leben in Kaiserslautern 2019
(2019)
The project "Leben in Kaiserslautern 2019" (LiK) examines the quality of life in Kaiserslautern, satisfaction with democracy and the political institutions, and the political and social participation of the city's residents. The report presents the methodological design of the LiK survey along with descriptive results from it. Reference is also made to results from nationwide population surveys that allow a comparison between Kaiserslautern and Germany as a whole.
The project "Integration findet Stadt – Im Dialog zum Erfolg" is being carried out from 2017 to 2019 as one of ten projects on the topic of integration within the framework of the National Urban Development Policy (funded by the Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety). In this context, the City of Kaiserslautern's existing integration concept is to be further developed and adapted to the changed composition of the city's migrant population. The project also aims to initiate participation and activation processes at the neighborhood level and to identify integration needs and the willingness to engage. The goal of the overall project in Kaiserslautern is to strengthen networks within the neighborhoods in order to make living together easier and to activate the support potentials of both German and migrant residents. Within this project, the Department of Urban Sociology at TU Kaiserslautern prepared a sub-study on the coexistence of migrants and non-migrants in Kaiserslautern.
The first part of the present study provides a statistical inventory of the population in the various city districts by demographic and socio-structural characteristics. The second part uses interviews to show how the social networks in Kaiserslautern's different districts are perceived by newcomers and long-established residents. The third part presents results of a quantitative survey on coexistence in the district, assessments and expectations, as well as potentials for engagement among Kaiserslautern residents with and without a migration background. This mix of quantitative and qualitative methods serves to capture differences between population groups, to identify networks of coexistence, and to highlight the distinct strengths and weaknesses of the districts. The different approaches are intended to reveal integration needs and potentials and thus to map the city's diverse life.
In view of the refugee movements from 2014 to 2016 and their consequences for the Federal Republic of Germany as a receiving country, questions of integration have gained high priority in the current socio-political debate. In the German discourse, the concept of integration has been shaped largely by Hartmut Esser's approach (Esser 1980, 2001). He distinguishes four dimensions of integration: 1. culturation (knowledge, language, participation in society); 2. placement (rights, economic potential, access to the education system and to the labor and housing markets); 3. interaction: cultural and social capital (participation in social and cultural life); and 4. identification (civic spirit). The concept of integration is contested, however, because it places the task of integration one-sidedly on the immigrants and pays too little attention to the role of the receiving society in this process (Gestring 2014: 82). It also neglects the fact that diverse cultural backgrounds and identities can well be combined and lived together (West 2014: 92 ff.; Gans et al. 2014). For this reason, migration studies avoid the term integration in favor of more neutral concepts such as transnationalism, transmigration, trans-, inter- and multiculturality (ARL 2016: 2), diversity, dual-home belonging („Zweiheimischkeit"), or, more generally, socialization (ARL 2016: 12). With regard to social differences, Vertovec's notion of (super-)diversity draws attention to the social inequalities arising from migrants' different residence statuses, which entail future rights or, conversely, exclusion (Vertovec 2007).
Nevertheless, the term "integration" is well established and readily manageable for practical purposes on the ground, especially where concrete anchoring in the spheres of work, housing, leisure, and culture is concerned. At the same time, it should be stressed that the concept of integration cannot focus on the immigrants alone, but always also requires integration efforts from the rest of the population and from other actors.
At the district level, where people spend their everyday lives, volunteers and organizations work together to facilitate integration. Volunteers and organizations need to incorporate cultural diversity into their work, to shape outreach and processes accordingly, and not to lose sight of the socio-structural conditions in the respective neighborhoods (language skills, education levels, employment, family obligations, residence status of the various migrant groups). The changes in the composition of the immigrant population may therefore not be immediately apparent to long-serving staff on the ground.
Various studies on integration at the neighborhood level show that Rhineland-Palatinate has achieved a high level of voluntary engagement (Gesemann/Roth 2015: 28). As elsewhere, however, migrants are underrepresented among the volunteers. The aim of expanding opportunities for participation in society faces very different preconditions in the various urban areas, depending on residents' residence status, qualifications, age, and family situation. Besides language and contact difficulties, residence status has played a particular role since the refugee movement, as it entails great uncertainty about life prospects and further burdens for the refugees (Vertovec 2007; Robert Bosch Stiftung 2016; Brücker et al. 2016).
In what follows, the urban-sociological surveys are presented separately. The first part presents the central indicators provided by the statistical office at the smallest possible spatial scale; these indicators concern the demography and social situation of migrants and non-migrants. The second part deals with coexistence in selected neighborhoods with a high share of foreign nationals or refugees. The third part is based on a quantitative survey conducted during the Intercultural Week in September 2017, which addresses commonalities and differences in how immigrants and the majority society perceive integration.
In Rhineland-Palatinate, demographic change is leaving its mark, especially in rural regions, and society is becoming "older, more diverse, and smaller". In her 2013 government policy statement, Minister President Malu Dreyer already made clear that the state's funding policy must be realigned in order to meet the challenges of demographic change at an early stage. This requires closer cooperation between municipalities and the joint development of supra-local development concepts so that shared needs can be addressed. The development concepts are to emerge from moderated participation processes, since citizens know best how needs in a region are changing as a result of demographic change.
Against this background, the state of Rhineland-Palatinate launched the future initiative „Starke Kommunen – Starkes Land" in 2013, a 30-month statewide advisory and support project. The 2013 competition was aimed at the Verbandsgemeinden and the municipalities in Rhineland-Palatinate not belonging to one, and at its end six model regions were selected. In these regions, options for citizen participation and long-term inter-municipal cooperation at the Verbandsgemeinde level were tested and evaluated.
Prof. Steinebach and his team provided scientific support for the initiative. Their remit comprised evaluating the organizational structure and project design, analyzing the thematic fields, and examining and assessing the inter-municipal cooperation. At the end of the scientific support phase, the results were processed and recommendations for action were given, from which conclusions for the state's funding policy are to be drawn.
An evaluation of the initiative was carried out from March to May 2017 and is included in the download.
Due to the increasing number of natural and man-made disasters, the application of operations research methods to evacuation planning has attracted growing interest in the research community. From the beginning, evacuation planning has focused largely on car-based evacuation; recently, the evacuation of transit-dependent evacuees by bus has also been considered.
In this case study, we apply two such models and solution algorithms to evacuate a core part of Kathmandu, the metropolitan capital of Nepal, as a hypothetical endangered region in which a large part of the population is transit dependent. We discuss the computational results for evacuation times under a broad range of possible scenarios and derive planning suggestions for practitioners.
Investigate the hardware description language Chisel - A case study implementing the Heston model
(2013)
This paper presents a case study comparing the hardware description language "Constructing Hardware in a Scala Embedded Language" (Chisel) to VHDL. For a thorough comparison, the Heston model, a stochastic model used in financial mathematics to calculate option prices, was implemented in both languages. Metrics such as hardware utilization and maximum clock rate were extracted from the two resulting designs and compared. The results showed a 30% reduction in code size compared to VHDL, while the resulting circuits had about the same hardware utilization. Using Chisel, however, proved difficult because a few features were not available for this case study.
Chisel (Constructing Hardware in a Scala Embedded Language) is a new programming language, embedded in Scala, used for hardware synthesis. It aims to increase productivity when creating hardware by enabling designers to use features of higher-level programming languages to build complex hardware blocks. In this paper, the most advertised features of Chisel are investigated and compared to their VHDL counterparts, where such counterparts exist. The author's opinion on whether a switch to Chisel is worth considering is then presented, and results from a related case study on Chisel are briefly summarized. The author concludes that, while Chisel has promising features, it is not yet ready for industrial use.
Sport and physical activity have always been essential parts of public life. The demographic and social change that has become apparent in recent years, and which continues to intensify, is altering patterns of sport and physical activity and hence the demand for sports facilities and spaces for exercise. So far, this changing situation has not been sufficiently taken into account either at the political level or at the level of municipal planning. Against the background of steadily growing demands on municipal services of general interest, however, solutions must soon be found that will do justice to the changed framework conditions in the future. Starting from this, the research and development project „Gesunde Kommune – Sport und Bewegung als Faktor der Stadt- und Raumentwicklung", carried out in 2011 and 2012, addresses the significance of sport and physical activity for the municipalities of Rhineland-Palatinate. It aims to identify links between spatial and sport-related fields of development and to show how sport and physical activity can be used purposefully for sustainable spatial development. The spatially relevant contributions of sport and physical activity are considered under the aspects of health, economy, ecology, and social affairs. A further key project goal was to raise awareness among all relevant actors at state and municipal level. The project was carried out by the Chair of Urban Planning at TU Kaiserslautern in cooperation with the Department of Sports Science at TU Kaiserslautern on behalf of Entwicklungsagentur Rheinland-Pfalz e.V. The results were documented and published in a 2011 project report and a 2012 final report.
Gesunde Kommune - Sport und Bewegung als Faktor der Stadt- und Raumentwicklung - Projektbericht 2011
(2012)
Sport and physical activity have always been essential parts of public life. The demographic and social change that has become apparent in recent years, and which continues to intensify, is altering patterns of sport and physical activity and hence the demand for sports facilities and spaces for exercise. So far, this changing situation has not been sufficiently taken into account either at the political level or at the level of municipal planning. Against the background of steadily growing demands on municipal services of general interest, however, solutions must soon be found that will do justice to the changed framework conditions in the future. Starting from this, the research and development project „Gesunde Kommune – Sport und Bewegung als Faktor der Stadt- und Raumentwicklung", carried out in 2011 and 2012, addresses the significance of sport and physical activity for the municipalities of Rhineland-Palatinate. It aims to identify links between spatial and sport-related fields of development and to show how sport and physical activity can be used purposefully for sustainable spatial development. The spatially relevant contributions of sport and physical activity are considered under the aspects of health, economy, ecology, and social affairs. A further key project goal was to raise awareness among all relevant actors at state and municipal level. The project was carried out by the Chair of Urban Planning at TU Kaiserslautern in cooperation with the Department of Sports Science at TU Kaiserslautern on behalf of Entwicklungsagentur Rheinland-Pfalz e.V.
The direction splitting approach proposed earlier in [6], aiming at the efficient solution of the Navier-Stokes equations, is extended and adapted here to solve the Navier-Stokes-Brinkman equations describing incompressible flows in plain and in porous media. The resulting pressure equation is a perturbation of the incompressibility constraint, using a direction-wise factorized operator as proposed in [6]. We prove that this approach is unconditionally stable for the unsteady Navier-Stokes-Brinkman problem, and we provide numerical illustrations of the method's accuracy and efficiency.
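Schematically (a sketch of the general direction-splitting idea in two dimensions, not the exact operator of [6]), the pressure operator of a standard projection scheme is replaced by a direction-wise factorized perturbation:

```latex
-\Delta \;\longrightarrow\; A = \Bigl(I - \frac{\partial^2}{\partial x^2}\Bigr)\Bigl(I - \frac{\partial^2}{\partial y^2}\Bigr)
```

Each factor is a one-dimensional operator, so each pressure update reduces to sequences of tridiagonal solves along grid lines, which is what makes the splitting cheap compared to a full Poisson solve.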
In this paper, we propose multi-level Monte Carlo (MLMC) methods that use ensemble-level mixed multiscale methods in the simulation of multi-phase flow and transport. The main idea of ensemble-level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. We consider two types of ensemble-level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble-level method (NLSO), and (2) the local-solve-online ensemble-level method (LSO). Both mixed multiscale methods use a number of snapshots of the permeability media to generate a multiscale basis. As a result, in the offline stage, we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the no-local-solve-online method, the whole set of precomputed basis functions is used to approximate the solution for an arbitrary realization. In the local-solve-online method, the precomputed functions are used to construct a multiscale basis for a particular realization, with which the solution corresponding to this realization is approximated in the LSO mixed MsFEM. In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that one uses to precompute the multiscale basis. We note that LSO approaches share similarities with reduced basis methods [11, 21, 22].
In multi-level Monte Carlo methods ([14, 13]), more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. By selecting the numbers of expensive and inexpensive simulations carefully, one can show that MLMC methods provide better accuracy at the same cost as MC methods. Our goal in the simulations is twofold. First, we compare NLSO and LSO mixed MsFEMs; in particular, we show that the NLSO mixed MsFEM is more accurate than the LSO mixed MsFEM. Second, we use both approaches in the context of MLMC to speed up MC calculations. We present basic aspects of the algorithm and numerical results for coupled flow and transport in heterogeneous porous media.
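The MLMC idea of pairing many cheap coarse solves with few expensive correction solves can be sketched on a toy scalar model (`fine` and `coarse` are hypothetical stand-ins, not the multiscale flow solver):

```python
import numpy as np

rng = np.random.default_rng(0)

def fine(w):    # expensive, accurate forward model (toy stand-in)
    return np.sin(w) + 0.01 * w**2

def coarse(w):  # cheap, less accurate forward model
    return np.sin(w)

# Level 0: many cheap samples of the coarse model.
w0 = rng.normal(size=100_000)
level0 = coarse(w0).mean()

# Level 1: few expensive samples of the *correction* fine - coarse,
# evaluated on the same realizations so the difference has small variance.
w1 = rng.normal(size=1_000)
correction = (fine(w1) - coarse(w1)).mean()

mlmc_estimate = level0 + correction
```

Because the correction has much smaller variance than the fine model itself, few expensive samples suffice, which is the source of the cost savings.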
We present the derivation of a simple viscous damping model of Kelvin–Voigt type for geometrically exact Cosserat rods from three-dimensional continuum theory. Assuming a homogeneous and isotropic material, we obtain explicit formulas for the damping parameters of the model in terms of the well-known stiffness parameters of the rod and the retardation time constants, defined as the ratios of the bulk and shear viscosities to the respective elastic moduli. We briefly discuss the range of validity of our damping model and illustrate its behaviour with a numerical example.
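Schematically, and with hypothetical notation (not the paper's exact formulas): writing $K, G$ for the bulk and shear moduli and $\zeta, \eta$ for the corresponding viscosities, the retardation times and the resulting Kelvin–Voigt damping parameters have the structure

```latex
\tau_V = \frac{\zeta}{K}, \qquad \tau_S = \frac{\eta}{G}, \qquad
d \;\sim\; (\text{stiffness parameter}) \times (\text{retardation time}),
```

i.e. each damping coefficient is an elastic stiffness of the rod scaled by the appropriate retardation time.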
A simple transformation of the Equation of Motion (EoM) allows us to directly integrate nonlinear structural models into the recursive Multibody System (MBS) formalism of SIMPACK. This contribution describes how the integration is performed for a discrete Cosserat rod model which has been developed at the ITWM. As a practical example, the run-up of a simplified three-bladed wind turbine is studied where the dynamic deformations of the three blades are calculated by the Cosserat rod model.
In the presented work, we make use of the strong reciprocity between kinematics and geometry to build a geometrically nonlinear, shearable, low-order discrete shell model of Cosserat type defined on triangular meshes, from which we deduce a rotation-free Kirchhoff-type model with the triangle vertex positions as degrees of freedom. Both models behave physically plausibly already on very coarse meshes and show good convergence properties on regular meshes. Moreover, from the theoretical side, this deduction provides a common geometric framework for several existing models.
One of the fundamental problems in computational structural biology is the prediction of RNA secondary structures from a single sequence. To solve this problem, mainly two different approaches have been used over the past decades: the free energy minimization (MFE) approach which is still considered the most popular and successful method and the competing stochastic context-free grammar (SCFG) approach. While the accuracy of the MFE based algorithms is limited by the quality of underlying thermodynamic models, the SCFG method abstracts from free energies and instead tries to learn about the structural behavior of the molecules by training the grammars on known real RNA structures, making it highly dependent on the availability of a rich high quality training set. However, due to the respective problems associated with both methods, new statistics based approaches towards RNA structure prediction have become increasingly appreciated. For instance, over the last years, several statistical sampling methods and clustering techniques have been invented that are based on the computation of partition functions (PFs) and base pair probabilities according to thermodynamic models. A corresponding SCFG based statistical sampling algorithm for RNA secondary structures has been studied just recently. Notably, this probabilistic method is capable of producing accurate (prediction) results, where its worst-case time and space requirements are equal to those of common RNA folding algorithms for single sequences.
The aim of this work is to present a comprehensive study on how enriching the underlying SCFG by additional information on the lengths of generated substructures (i.e. by incorporating length-dependencies into the SCFG based sampling algorithm, which is actually possible without significant losses in performance) affects the reliability of the induced RNA model and the accuracy of sampled secondary structures. As we will see, significant differences with respect to the overall quality of generated sample sets and the resulting predictive accuracy are typically implied. In principle, when considering the more specialized length-dependent SCFG model as basis for statistical sampling, a higher accuracy of predicted foldings can be reached at the price of a lower diversity of generated candidate structures (compared to the more general traditional SCFG variant or sampling based on PFs that rely on free energies).
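As a minimal illustration of SCFG-based sampling (a toy grammar over dot-bracket strings with made-up probabilities, not the enriched length-dependent grammar studied here), one can draw secondary structures by expanding rules according to their probabilities:

```python
import random

random.seed(3)

# Toy SCFG in dot-bracket notation (probabilities are illustrative):
#   S -> ( S ) S   with prob 0.25   (a base pair enclosing a substructure)
#   S -> . S       with prob 0.40   (an unpaired base)
#   S -> empty     with prob 0.35
def sample_structure():
    r = random.random()
    if r < 0.25:
        return "(" + sample_structure() + ")" + sample_structure()
    if r < 0.65:
        return "." + sample_structure()
    return ""

samples = [sample_structure() for _ in range(100)]
```

Since the grammar only produces balanced bracket strings, every sample is a syntactically valid secondary structure; statistical sampling algorithms draw from the conditional distribution given the input sequence instead of this unconditional one.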
In this work we extend the multiscale finite element method (MsFEM), as formulated by Hou and Wu in [14], to the PDE system of linear elasticity. The application, motivated by the multiscale analysis of highly heterogeneous composite materials, is twofold. Resolving the heterogeneities on the finest scale, we utilize the linear MsFEM basis for the construction of robust coarse spaces in the context of two-level overlapping domain decomposition preconditioners. We motivate and explain the construction and present numerical results validating the approach. Under the assumption that the material jumps are isolated, i.e., that they occur only in the interior of the coarse grid elements, our experiments show uniform convergence rates independent of the contrast in the Young's modulus within the heterogeneous material. Otherwise, if no restrictions on the position of the high-coefficient inclusions are imposed, robustness cannot be guaranteed any more. These results justify the expectation of obtaining coefficient-explicit condition number bounds for the PDE system of linear elasticity, similar to existing ones for scalar elliptic PDEs as given in the work of Graham, Lechner and Scheichl [12]. Furthermore, we numerically observe the properties of the MsFEM coarse space for linear elasticity in an upscaling framework, presenting experimental results that show the approximation errors of the multiscale coarse space with respect to the fine-scale solution.
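The flavor of an MsFEM basis function can be shown in a 1D sketch (a scalar toy problem with an assumed oscillatory coefficient, not the elasticity system of the paper): on one coarse element, the linear hat function is replaced by the solution of a local problem with the heterogeneous coefficient.

```python
import numpy as np

# On the coarse element [0, 1], solve -(a(x) u')' = 0 with u(0)=0, u(1)=1
# on a fine mesh; the solution replaces the linear hat function and
# encodes the fine-scale variation of a(x) (Hou-Wu style).
n = 200
x = np.linspace(0.0, 1.0, n + 1)
a = 1.0 + 0.9 * np.sin(40 * np.pi * x[:-1])   # oscillatory coefficient per fine cell

# -(a u')' = 0  =>  a u' = const  =>  u(x) = C * integral_0^x 1/a
h = 1.0 / n
w = np.concatenate([[0.0], np.cumsum(h / a)])
basis = w / w[-1]                              # normalize so basis(1) = 1
```

For constant `a` this recovers the ordinary linear hat; for rough `a` the basis bends wherever the coefficient is small, which is exactly the fine-scale information the coarse space carries.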
Worldwide, the installed capacity of renewable technologies for electricity production is rising tremendously. The German market is particularly progressive, and its regulatory rules imply that production from renewables is decoupled from market prices and electricity demand. Conventional generation technologies have to cover the residual demand (defined as total demand minus production from renewables) but set the price at the exchange. Existing electricity price models do not account for the new risks introduced by the volatile production of renewables and their effects on the conventional demand curve. A model for residual demand is proposed, which is used as an extension of supply/demand electricity price models to account for renewable infeed in the market. Infeed from wind and solar (photovoltaics) is modeled explicitly and subtracted from total demand. The methodology separates the impact of weather from that of installed capacity: efficiency is mapped to the real line via the logit transformation and modeled as a stochastic process, while installed capacity is assumed to be a deterministic function of time. In a case study, the residual demand model is applied to the German day-ahead market using a supply/demand model with a deterministic supply-side representation. Price trajectories are simulated, and the results are compared to market future and option prices. The trajectories show typical features seen in market prices in recent years, and the model is able to closely reproduce the structure and magnitude of market prices. Using the simulated prices, it is found that renewable infeed increases the volatility of forward prices in times of low demand, but can reduce volatility in peak hours. Comparing prices for different scenarios of installed wind and solar capacity, the merit-order effect of increased wind and solar capacity is calculated; wind turns out to have a stronger overall effect than solar, while the two are comparable in peak hours.
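The logit-transformed efficiency process can be sketched as follows (all parameters and capacity figures are made up, and a simple mean-reverting process stands in for whatever dynamics the paper actually fits):

```python
import numpy as np

rng = np.random.default_rng(1)

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

# Mean-reverting (OU-type) dynamics for the logit-transformed wind
# efficiency, discretized with Euler-Maruyama; hourly steps over a year.
kappa, mu, sigma, dt = 2.0, logit(0.25), 0.8, 1 / 24
n_steps = 24 * 365

x = np.empty(n_steps)
x[0] = mu
for t in range(1, n_steps):
    x[t] = x[t-1] + kappa * (mu - x[t-1]) * dt + sigma * np.sqrt(dt) * rng.normal()

efficiency = expit(x)                      # back-transform to (0, 1)
capacity = 60_000.0                        # installed wind capacity in MW (assumed)
wind_infeed = capacity * efficiency        # simulated wind production
residual_demand = 70_000.0 - wind_infeed   # total demand minus renewables (toy value)
```

The logit transform guarantees that the simulated efficiency stays strictly between 0 and 1, so infeed never exceeds installed capacity; scenario analysis then amounts to varying the deterministic `capacity` path.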
In this work, some model reduction approaches for performing simulations with a pseudo-2D model of a Li-ion battery are presented. A full pseudo-2D model of the processes in Li-ion batteries is presented following [3], and three methods to reduce the order of the full model are considered: (i) directly reducing the model order using proper orthogonal decomposition; (ii) using a fractional time step discretization in order to solve the equations in a decoupled way; and (iii) reformulation approaches for the diffusion in the solid phase. Combinations of the above methods are also considered. Results from numerical simulations are presented, and the efficiency and accuracy of the model reduction approaches are discussed.
Granular systems in a solid-like state exhibit properties such as stress-dependent stiffness, dilatancy, yield, or incremental non-linearity that can be described within the continuum mechanical framework. Different constitutive models have been proposed in the literature, based either on relations between some components of the stress tensor or on a quasi-elastic description. After a brief description of these models, the hyperelastic law recently proposed by Jiang and Liu [1] is investigated. In this framework, the stress-strain relation is derived from an elastic strain energy density whose stability properties are linked to a Drucker-Prager yield criterion. Further, a numerical method based on finite element discretization and Newton-Raphson iterations is presented to solve the force balance equation. The 2D numerical examples presented in this work show that the stress distributions can be computed not only for triangular domains, as previously done in the literature, but also for more complex geometries. If the slope of the heap is greater than a critical value, numerical instabilities appear and no elastic solution can be found, as predicted by the theory. As the main result, the dependence of the material parameter Xi on the maximum angle of repose is established.
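Stripped of the finite element machinery, the Newton-Raphson loop used to drive a residual to zero has the following shape (a generic 2x2 toy residual, not the Jiang-Liu constitutive law):

```python
import numpy as np

# Solve R(u) = 0 by Newton-Raphson: linearize with the Jacobian, update.
def residual(u):
    return np.array([u[0]**2 + u[1]**2 - 4.0,   # toy "force balance" equations
                     u[0] - u[1]])

def jacobian(u):
    return np.array([[2 * u[0], 2 * u[1]],
                     [1.0,     -1.0]])

u = np.array([2.0, 1.0])                      # initial guess
for _ in range(50):
    r = residual(u)
    if np.linalg.norm(r) < 1e-12:             # converged
        break
    u -= np.linalg.solve(jacobian(u), r)      # Newton update
```

In the granular setting, the failure mode described above shows up here as a Jacobian that becomes singular (or a residual that will not converge) once the heap slope exceeds the critical value.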
SHIM is a concurrent deterministic programming language for embedded systems built on rendezvous communication. It abstracts away many details to give the developer a high-level view that includes virtual shared variables, threads as orthogonal statements, and deterministic concurrent exceptions.
In this paper, we present a new way to compile a SHIM-like language into a set of asynchronous guarded actions, a well-established intermediate representation for concurrent systems. By doing so, we build a bridge to many other tools, including hardware synthesis and formal verification. We present our translation in detail, illustrate it through examples, and show how the result can be used by various other tools.
Paper production is a problem of significant importance for society and a challenging topic for scientific investigation. This study is concerned with simulations of the pressing section of a paper machine. A two-dimensional model is developed to account for the water flow within the pressing zone. A Richards-type equation is used to describe the flow in the unsaturated zone. The dynamic capillary pressure-saturation relation proposed by Hassanizadeh and co-workers (Hassanizadeh et al., 2002; Hassanizadeh, Gray, 1990, 1993a) is adopted for the paper production process.
The mathematical model accounts for the co-existence of saturated and unsaturated zones in a multilayer computational domain. The discretization is performed by the MPFA-O method. The numerical experiments are carried out for parameters which are typical for the production process. The static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
This work presents the dynamic capillary pressure model (Hassanizadeh, Gray, 1990, 1993a) adapted for the needs of paper manufacturing process simulations. The dynamic capillary pressure-saturation relation is included in a one-dimensional simulation model for the pressing section of a paper machine. The one-dimensional model is derived from a two-dimensional model by averaging with respect to the vertical direction. Then, the model is discretized by the finite volume method and solved by Newton’s method. The numerical experiments are carried out for parameters typical for the paper layer. The dynamic capillary pressure-saturation relation shows significant influence on the distribution of water pressure. The behaviour of the solution agrees with laboratory experiments (Beck, 1983).
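The dynamic capillary pressure-saturation relation of Hassanizadeh and Gray referred to in both abstracts has, schematically, the form

```latex
p_c \;=\; p^{n} - p^{w} \;=\; p_c^{\mathrm{eq}}(S) \;-\; \tau \,\frac{\partial S}{\partial t},
```

where $p^{n}$ and $p^{w}$ are the non-wetting and wetting phase pressures, $p_c^{\mathrm{eq}}(S)$ is the equilibrium capillary pressure curve, and $\tau$ is a damping coefficient; setting $\tau = 0$ recovers the static relation.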
In this paper we deal with different statistical models of real-world accident data in order to quantify the effectiveness of a safety function or a safety configuration (meaning a specific combination of safety functions) in vehicles. It is shown that the effectiveness can be estimated via the so-called relative risk, even if the effectiveness depends on a confounding variable, which may be categorical or continuous. For this purpose a concrete statistical model is not necessary, that is, the resulting estimate is of nonparametric nature. In a second step, the quite common and, from a statistical point of view, classical logistic regression modeling is investigated. The main emphasis is on the understanding of the model and the interpretation of the occurring parameters. It is shown that the effectiveness of the safety function can also be detected via such a logistic approach and that relevant confounding variables can and should be taken into account. The interpretation of the parameters related to the confounder, and the quantification of the influence of the confounder, is shown to be rather problematic. All theoretical results are illustrated by numerical data examples.
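A minimal numerical illustration of the relative-risk estimate (with made-up 2x2 counts, not real accident data):

```python
import numpy as np

# Hypothetical 2x2 accident counts: rows = safety function (absent / present),
# columns = outcome (severe / non-severe). Toy numbers only.
counts = np.array([[120, 880],    # function absent
                   [ 60, 940]])   # function present

risk_absent  = counts[0, 0] / counts[0].sum()   # P(severe | no function)
risk_present = counts[1, 0] / counts[1].sum()   # P(severe | function)

relative_risk = risk_present / risk_absent
effectiveness = 1.0 - relative_risk   # fraction of severe outcomes avoided
```

A confounder would be handled by computing such ratios within its strata (or by including it as a covariate in the logistic model); the point of the paper is that the nonparametric route avoids committing to a particular model for that dependence.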
We introduce a refined tree method to compute option prices using the stochastic volatility model of Heston. In a first step, we model the stock and variance processes as two separate trees, with transition probabilities obtained by matching tree moments up to order two against those of the Heston model. The correlation between the driving Brownian motions in the Heston model is then incorporated by a node-wise adjustment of the probabilities. This adjustment, leaving the marginals fixed, optimizes the match between tree and model correlation. In some nodes, we are even able to match moments of higher order. Numerically this gives convergence orders faster than 1/N, where N is the number of discretization steps. The accuracy of our method is checked for European option prices against a semi-closed-form solution, and our prices for both European and American options are compared to alternative approaches.
In this paper we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP by using a Set Covering formulation, because profits are potentially higher than with the usual Set Partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process which is known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: Budget balance, meaning that no more money is distributed than profit was generated, individual rationality, which guarantees to each player that participation does not lead to a loss, and the core property, which provides every subcoalition with enough money to keep them from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19], and the Shapley Value [17].
We study the efficient computation of Nash and strong equilibria in weighted bottleneck games. In such a game, different players interact on a set of resources in such a way that every player chooses a subset of the resources as her strategy. The cost of a single resource depends on the total weight of the players choosing it, and the personal cost every player tries to minimize is the cost of the most expensive resource in her strategy, the bottleneck value. To derive efficient algorithms for finding Nash equilibria in these games, we generalize a transformation of a bottleneck game into a special congestion game introduced by Caragiannis et al. [1]. While investigating the transformation, we introduce so-called lexicographic games, in which the aim of a player is not only to minimize her bottleneck value but to lexicographically minimize the ordered vector of the costs of all resources in her strategy. For the special case of network bottleneck games, i.e., where the set of resources consists of the edges of a graph and the strategies are paths, we analyse different Greedy-type methods and their limitations for extension-parallel and series-parallel graphs.
We provide a space domain oriented separation of magnetic fields into parts generated by sources in the exterior and sources in the interior of a given sphere. The separation itself is well-known in geomagnetic modeling, usually in terms of a spherical harmonic analysis or a wavelet analysis that is spherical harmonic based. However, it can also be regarded as a modification of the Helmholtz decomposition for which we derive integral representations with explicitly known convolution kernels. Regularizing these singular kernels allows a multiscale representation of the magnetic field with locally supported wavelets. This representation is applied to a set of CHAMP data for crustal field modeling.
In a dynamic network, the quickest path problem asks for a path minimizing the time needed to send a given amount of flow from source to sink along this path. In practical settings, for example in evacuation or transportation planning, the reliability of network arcs depends on the specific scenario of interest. In this circumstance, the question arises of finding a quickest path among all those having at least a desired path reliability. In this article, this reliable quickest path problem is solved by transforming it to the restricted quickest path problem. In the latter, each arc is associated with a nonnegative cost, and the goal is to find a quickest path among those not exceeding a predefined budget with respect to the overall (additive) cost. For both the restricted and the reliable quickest path problem, pseudopolynomial exact algorithms and fully polynomial-time approximation schemes are proposed.
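The problem statement can be made concrete with a brute-force sketch (illustrative only; the paper proposes pseudopolynomial algorithms and FPTASs, not enumeration). We assume each arc carries a triple (transit time, capacity, reliability); the time to send F units along a path P is the sum of transit times plus F divided by the minimum capacity, and the path reliability is the product of its arc reliabilities.

```python
def all_paths(adj, s, t, path=None):
    """Enumerate all simple s-t paths in the directed graph adj."""
    path = [s] if path is None else path
    if s == t:
        yield path
        return
    for v in adj.get(s, {}):
        if v not in path:
            yield from all_paths(adj, v, t, path + [v])

def quickest_reliable(adj, s, t, flow, min_rel):
    """Among all simple s-t paths with reliability >= min_rel, return
    (time, path) minimizing transit time + flow / bottleneck capacity.
    adj: dict u -> dict v -> (transit_time, capacity, reliability)."""
    best = None
    for p in all_paths(adj, s, t):
        arcs = [adj[u][v] for u, v in zip(p, p[1:])]
        rel = 1.0
        for (_, _, r) in arcs:
            rel *= r
        if rel < min_rel:
            continue
        time = sum(a[0] for a in arcs) + flow / min(a[1] for a in arcs)
        if best is None or time < best[0]:
            best = (time, p)
    return best
```

Note how the fastest path may be excluded by the reliability constraint, which is exactly the tension the article resolves efficiently.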
In this paper the multi-terminal q-FlowLoc problem (q-MT-FlowLoc) is introduced. FlowLoc problems combine two well-known modeling tools: (dynamic) network flows and locational analysis. Since the q-MT-FlowLoc problem is NP-hard, we give a mixed-integer programming formulation and propose a heuristic which obtains a feasible solution by calculating a maximum flow in a special graph H. If this flow is also a minimum cost flow, various versions of the heuristic can be obtained by the use of different cost functions. The quality of these solutions is compared.
The smart grid, or "intelligent power grid", is one of the topics that politics and, of course, the power industry repeatedly push to the fore. The potential of renewable energies is sufficient to supply Germany and Europe reliably with electricity. The restructuring of the power grids is of central importance in this effort and requires the commitment of society as a whole. Unfortunately, the electricity customer is neglected in the process: the needs of customers are largely ignored and data protection is often disregarded. Smaller municipal utilities also have problems with this development: due to political requirements they must, for example, introduce smart meters, even though this creates costs that they cannot pass on directly to the customer. Customers are hardly willing to pay more for a smart grid. At the same time, however, it is necessary to make the existing power grids more flexible and to prepare them for a further increasing share of renewable energy sources.
In this article, a new model predictive control approach for nonlinear stochastic systems is presented. The new approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first gives an estimate of the current state based on the current output of the system; the second gives an estimate of a control input for the system. This is basically done by adapting the basic model predictive control strategies for the second particle filter. Later in this paper, this new approach is applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
This report describes the calibration and completion of the volatility cube in the SABR model. The description is based on a project done for Assenagon GmbH in Munich. However, we use fictitious market data which resembles realistic market data. The problem posed by our client is formulated in section 1. Here we also motivate why this is a relevant problem. The SABR model is briefly reviewed in section 2. Section 3 discusses the calibration and completion of the volatility cube. An example is presented in section 4. We conclude by suggesting possible future research in section 5.
This report gives an overview of the separate translation of synchronous imperative programs to synchronous guarded actions. In particular, we consider problems to be solved for separate compilation that stem from preemption statements and local variable declarations. We explain how we solved these problems and sketch the solutions implemented in our Averest framework, which provides a compiler that allows separate compilation of imperative synchronous programs with local variables and unrestricted preemption statements. The focus of the report is the big picture of our entire design flow.
This report gives an insight into basics of stress field simulations for geothermal reservoirs.
The quasistatic equations of poroelasticity are deduced from constitutive equations, balance
of mass and balance of momentum. Existence and uniqueness of a weak solution is shown.
To find an approximate solution numerically, the so-called method of fundamental solutions is a promising approach. The idea of this method as well as a sketch of how convergence may be proven are given.
Continuously improving imaging technologies make it possible to capture the complex spatial geometry of particles. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize size and shape descriptors used in 2d particle analysis to the spatial setting.
While being defined and meaningful for arbitrary particles, the characteristics were selected with the application to technical cleanliness in mind. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography makes it possible to detect dangerous particles reliably and with high throughput, and thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.
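Two classical 3d descriptors of the kind generalized here can be sketched compactly: Wadell sphericity (surface of the volume-equivalent ball over the actual surface) and a toy granule/chip/fiber rule based on the ratio of extreme semi-axis lengths. The thresholds below are illustrative assumptions, not those of the cleanliness standards or of the paper.

```python
import math

def sphericity(volume, surface):
    """Wadell sphericity: surface area of the ball with the same volume,
    divided by the particle's actual surface area (1.0 for a ball)."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface

def classify(semi_axes, fiber_ratio=10.0, chip_ratio=3.0):
    """Toy classification from the sorted semi-axis lengths a >= b >= c
    (e.g. of a fitted ellipsoid). Threshold values are illustrative."""
    a, b, c = sorted(semi_axes, reverse=True)
    if a / c >= fiber_ratio:
        return "fiber"
    if a / c >= chip_ratio:
        return "chip"
    return "granule"
```

For a ball the sphericity evaluates to exactly 1, and elongated objects are flagged as fibers once their aspect ratio exceeds the chosen threshold.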
Input loads are essential for the numerical simulation of vehicle multibody system (MBS) models. Such load data is called invariant if it is independent of the specific system under consideration. A digital road profile, for example, can be used to excite MBS models of different vehicle variants. However, quantities efficiently obtained by measurement, such as wheel forces, are typically not invariant in this sense. This leads to the general task of deriving invariant loads on the basis of measurable, but system-dependent, quantities. We present an approach to derive input data for full-vehicle simulation that can be used to simulate different variants of a vehicle MBS model. An important ingredient of this input data is a virtual road profile computed by optimal control methods.
In this paper, we present a viscoelastic rod model that is suitable for fast and accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing (‘stiff’ dof), bending and torsion (‘soft’ dof). For inner dissipation, a consistent damping potential proposed by Antman is chosen. We parametrise the rotational dof by unit quaternions and directly use the quaternionic evolution differential equation for the discretisation of the Cosserat rod curvature. The discrete version of our rod model is obtained via a finite difference discretisation on a staggered grid. After an index reduction from three to zero, the right-hand side function f and the Jacobian \(\partial f/\partial(q, v, t)\) of the dynamical system \(\dot{q} = v, \dot{v} = f(q, v, t)\) are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and therefore cheap to evaluate. A comparison with Abaqus finite element results demonstrates the correct mechanical behavior of our discrete rod model. For the time integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computational times within milliseconds, it is suitable for interactive applications in ‘virtual reality’ as well as for multibody dynamics simulation.
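The quaternionic evolution equation used for the rotational degrees of freedom has the form \(\dot q = \tfrac12\, q \otimes (0,\omega)\). A minimal sketch of this kinematic relation follows, using explicit Euler with renormalisation purely for illustration (the paper itself integrates the full system with stiff solvers such as RADAU5, not with this toy scheme).

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def integrate_quat(q, omega, dt, steps):
    """Explicit Euler for qdot = 0.5 * q * (0, omega), with the unit-norm
    constraint enforced by renormalisation after every step."""
    for _ in range(steps):
        dq = quat_mul(q, (0.0,) + tuple(omega))
        q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
        n = math.sqrt(sum(c * c for c in q))
        q = tuple(c / n for c in q)
    return q
```

For a constant angular velocity about one axis, the result converges to the exact rotation quaternion, which makes the scheme easy to verify.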
This work presents a proof of convergence of a discrete solution to a continuous one. At first, the continuous problem is stated as a system of equations which describe the filtration process in the pressing section of a paper machine. Two flow regimes appear in the modeling of this problem. The model for the saturated flow is given by Darcy’s law and mass conservation. The second regime is described by the Richards approach together with a dynamic capillary pressure model. The finite volume method is used to approximate the system of PDEs. Then the existence of a discrete solution to the proposed scheme is proven, as well as compactness of the set of all discrete solutions for different mesh sizes. The main theorem shows that the discrete solution converges to the solution of the continuous problem. At the end we present numerical studies of the rate of convergence.
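Rate-of-convergence studies of this kind typically compare errors on successively refined meshes. A minimal sketch, assuming the standard model error ≈ C·h^p for mesh size h:

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed convergence rate p from errors on two mesh sizes,
    assuming e ~ C * h^p, so p = log(e1/e2) / log(h1/h2)."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
```

Halving h and observing the error drop by a factor of four, for instance, indicates second-order convergence.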
In this paper we develop monitoring schemes for detecting structural changes in nonlinear autoregressive models. We approximate the regression function by a single-layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression in both offline and online settings, can be extended to this model. The proposed monitoring schemes reject (asymptotically) the null hypothesis only with a given probability, but will detect a large class of alternatives with probability one. In order to construct these sequential tests with given size, the limit distribution under the null hypothesis is obtained.
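The core quantity of a CUSUM-type test can be sketched in its simplest offline form (the paper's schemes are sequential and use boundary functions; this is only the textbook statistic on approximately centered residuals):

```python
import math

def cusum_statistic(residuals):
    """Offline CUSUM statistic: max_k |sum_{i<=k} e_i| / (sigma_hat * sqrt(n)).
    A structural change shows up as a large drift in the partial sums."""
    n = len(residuals)
    mean = sum(residuals) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in residuals) / n)
    s, m = 0.0, 0.0
    for e in residuals:
        s += e
        m = max(m, abs(s))
    return m / (sigma * math.sqrt(n))
```

Residuals that merely fluctuate around zero keep the partial sums small, whereas a mean shift lets them drift, which the maximum picks up.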
We consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary. The salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries, and the IT sector.
This paper discusses a numerical subgrid resolution approach for solving the Stokes-Brinkman system of equations, which describes coupled flow in plain and in highly porous media. Various scientific and industrial problems are described by this system, and often the geometry and/or the permeability vary on several scales. A particular target is the process of oil filtration. In many complicated filters, the filter medium or the filter element geometry are too fine to be resolved by a feasible computational grid. The subgrid approach presented in the paper describes how these fine details are accounted for by solving auxiliary problems in appropriately chosen grid cells on a relatively coarse computational grid. This is done via a systematic and careful procedure of modifying and updating the coefficients of the Stokes-Brinkman system in the chosen cells. This numerical subgrid approach is motivated on one side by homogenization theory, from which we borrow the formulation of the so-called cell problem, and on the other side by numerical upscaling approaches such as Multiscale Finite Volume, Multiscale Finite Element, etc. Results on the algorithm's efficiency, both in terms of computational time and memory usage, are presented. Comparisons with solutions on the full fine grid (when possible) are presented in order to evaluate the accuracy. Advantages and limitations of the considered subgrid approach are discussed.
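The flavor of numerical upscaling can be illustrated with the crudest possible example (the paper solves local cell problems instead; the harmonic and arithmetic means below are only the classical one-dimensional bounds that any effective coarse-cell permeability must lie between):

```python
def upscale_1d(perm, block):
    """Collapse a 1d fine-grid permeability field into coarse cells.
    Flow along the line through serial layers gives the harmonic mean;
    flow across parallel identical layers gives the arithmetic mean.
    These two means bracket the effective permeability of each cell."""
    cells = [perm[i:i + block] for i in range(0, len(perm), block)]
    harmonic = [len(c) / sum(1.0 / k for k in c) for c in cells]
    arithmetic = [sum(c) / len(c) for c in cells]
    return harmonic, arithmetic
```

The harmonic mean never exceeds the arithmetic one, and the gap between them is exactly what more careful cell-problem-based upscaling resolves.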
Modeling of species and charge transport in Li-Ion Batteries based on non-equilibrium thermodynamics (2010)
In order to improve the design of Li-ion batteries, the complex interplay of various physical phenomena in the active particles of the electrodes and in the electrolyte has to be balanced. The separate transport phenomena in the electrolyte and in the active particle, as well as their coupling due to the electrochemical reactions at the interfaces between the electrode particles and the electrolyte, will influence the performance and the lifetime of a battery. Any modeling of the complex phenomena during the usage of a battery has therefore to be based on sound physical and chemical principles in order to allow for reliable predictions of the response of the battery to changing load conditions. We present a modeling approach for the transport processes in the electrolyte and the electrodes based on non-equilibrium thermodynamics and transport theory. The assumption of local charge neutrality, which is known to be valid in concentrated electrolytes, is explicitly used to identify the independent thermodynamic variables and fluxes. The theory guarantees strictly positive entropy production. Differences to other theories are discussed.
In this article we present a method to generate random objects from a large variety of combinatorial classes according to a given distribution. Given a description of the combinatorial class and a set of sample data our method will provide an algorithm that generates objects of size n in worst-case runtime O(n^2) (O(n log(n)) can be achieved at the cost of a higher average-case runtime), with the generated objects following a distribution that closely matches the distribution of the sample data.
Wireless sensor networks are the driving force behind many popular and interdisciplinary research areas, such as environmental monitoring, building automation, healthcare and assisted living applications. Requirements like compactness, high integration of sensors, flexibility, and power efficiency are often very different and cannot be fulfilled by state-of-the-art node platforms at once. In this paper, we present and analyze AmICA: a flexible, compact, easy-to-program, and low-power node platform. Developed from scratch and including a node, a basic communication protocol, and a debugging toolkit, it assists in user-friendly rapid application development. The general-purpose nature of AmICA was evaluated in two practical applications with diametric requirements. Our analysis shows that AmICA nodes are 67% smaller than BTnodes, have five times more sensors than Mica2Dot and consume 72% less energy than the state-of-the-art TelosB mote in sleep mode.
In a dynamic network, the quickest path problem asks for a path such that a given amount of flow can be sent from source to sink via this path in minimal time. In practical settings, for example in evacuation or transportation planning, the problem parameters might not be known exactly a priori. It is therefore of interest to consider robust versions of these problems in which travel times and/or capacities of arcs depend on a certain scenario. In this article, min-max versions of robust quickest path problems are investigated and, depending on their complexity status, exact algorithms or fully polynomial-time approximation schemes are proposed.
Simulation of multibody systems (MBS) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained importance in recent years, e.g. for HIL, MIL or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms: one microcontroller and two real-time processor boards. In addition to numerical C code, the latter platforms provide the possibility to build the model with a commercial MBS tool, which is also investigated. A survey of different ways of generating code and equations of MBS models is given and discussed concerning handling, possible limitations as well as performance. The presented benchmarks are processed under the terms of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly-implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.
Optimal control methods for the calculation of invariant excitation signals for multibody systems (2010)
Input signals are needed for the numerical simulation of vehicle multibody systems. With these input data, the equations of motion can be integrated numerically and some output quantities can be calculated from the simulation results. In this work we consider the corresponding inverse problem: we assume that some reference output signals are available, typically gained by measurement, and focus on the task of deriving the input signals that produce the desired reference output in a suitable sense. If the input data is invariant, i.e., independent of the specific system, it can be transferred and used to excite other system variants. This problem can be formulated as an optimal control problem. We discuss solution approaches from optimal control theory, their applicability to this special problem class, and give some simulation results.
We present a rigorous derivation of the equations and interface conditions for ion, charge and heat transport in Li-ion insertion batteries. The derivation is based exclusively on universally accepted principles of nonequilibrium thermodynamics and the assumption of a one-step intercalation reaction at the interface of electrolyte and active particles. Without loss of generality, the transport in the active particle is assumed to be isotropic. The electrolyte is described as a fully dissociated salt in a neutral solvent. The presented theory is valid for transport on a spatial scale for which local charge neutrality holds, i.e., beyond the scale of the diffuse double layer. Charge neutrality is explicitly used to determine the correct set of thermodynamically independent variables. The theory guarantees strictly positive entropy production. The various contributions to the Peltier coefficients for the interface between the active particles and the electrolyte, as well as the contributions to the heat of mixing, are obtained as a result of the theory.
The scope of this paper is to enhance the model for the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stocks in the own-company whose value he can directly influence via spending work effort. The executive is thereby characterized by a parameter of risk aversion and the two work effectiveness parameters inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort an executive will apply and we derive the optimal investment strategies of the executive. Furthermore, we determine an up-front fair cash compensation applying an indifference utility rationale. Our study shows to a large extent that the results previously obtained are robust under the choice of the utility/disutility set-up.
The optimal design of rotational production processes for glass wool manufacturing poses severe computational challenges to mathematicians, natural scientists and engineers. In this paper we focus exclusively on the spinning regime, where thousands of viscous thermal glass jets are formed by fast air streams. Homogeneity and slenderness of the spun fibers are the quality features of the final fabric. Their prediction requires the computation of the fluid-fiber interactions, which involves solving a complex three-dimensional multiphase problem with appropriate interface conditions. This is practically impossible due to the required high resolution and adaptive grid refinement. Therefore, we propose an asymptotic coupling concept. Treating the glass jets as viscous thermal Cosserat rods, we tackle the multiscale problem with the help of momentum (drag) and heat exchange models that are derived on the basis of slender-body theory and homogenization. A weak iterative coupling algorithm, based on the combination of commercial software and self-implemented code for the flow and rod solvers, respectively, then makes the simulation of the industrial process possible. For the boundary value problem of the rod we particularly suggest an adapted collocation-continuation method. Consequently, this work establishes a promising basis for future optimization strategies.
In this paper a three-dimensional stochastic model for the lay-down of fibers on a moving conveyor belt in the production process of nonwoven materials is derived. The model is based on stochastic differential equations describing the resulting position of the fiber on the belt under the influence of turbulent air flows. The model presented here is an extension of an existing surrogate model, see [6, 3].
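Stochastic differential equations of this kind are commonly discretised with the Euler-Maruyama scheme. A generic sketch follows; the specific drift and diffusion of the lay-down model are in [6, 3] and are not reproduced here, so the functions passed in are placeholders.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, steps, seed=0):
    """Generic scalar Euler-Maruyama scheme for dX = a(X) dt + b(X) dW:
    x_{n+1} = x_n + a(x_n) dt + b(x_n) sqrt(dt) * N(0, 1)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = x + drift(x) * dt + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

With the diffusion set to zero the scheme reduces to explicit Euler for the underlying ODE, which gives a simple correctness check.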
The modelling of hedge funds poses a difficult problem since the available reported data sets are often small and incomplete. We propose a switching regression model for hedge funds, in which the coefficients are able to switch between different regimes. The coefficients are governed by a Markov chain in discrete time. The different states of the Markov chain represent different states of the economy, which influence the performance of the independent variables. Hedge fund indices are chosen as regressors. The parameter estimation for the switching parameter as well as for the switching error term is done through a filtering technique for hidden Markov models developed by Elliott (1994). Recursive parameter estimates are calculated through a filter-based EM-algorithm, which uses the hidden information of the underlying Markov chain. Our switching regression model is applied to hedge fund series and hedge fund indices from the HFR database.
The Train Marshalling Problem consists of rearranging an incoming train in a marshalling yard in such a way that cars with the same destinations appear consecutively in the final train and the number of needed sorting tracks is minimized. Besides an initial roll-in operation, just one pull-out operation is allowed. This problem was introduced by Dahlhaus et al. who also showed that the problem is NP-complete. In this paper, we provide a new lower bound on the optimal objective value by partitioning an appropriate interval graph. Furthermore, we consider the corresponding online problem, for which we provide upper and lower bounds on the competitiveness and a corresponding optimal deterministic online algorithm. We provide an experimental evaluation of our lower bound and algorithm which shows the practical tightness of the results.
In this article, we summarise the rotation-free and quaternionic parametrisation of a rigid body. We derive and explain the close interrelations between both parametrisations. The internal constraints due to the redundancies in the parametrisations, which lead to DAEs, are handled with the null space technique. We treat both single rigid bodies and general multibody systems with joints, which lead to external joint constraints. Several numerical examples compare both formalisms to the index reduced versions of the corresponding standard formulations.
In this paper, a multi-period supply chain network design problem is addressed. Several aspects of practical relevance are considered, such as those related to the financial decisions that must be accounted for by a company managing a supply chain. The decisions to be made comprise the location of the facilities, the flow of commodities, and the investments to make in activities alternative to those directly related to the supply chain design. Uncertainty is assumed for demand and interest rates, which is described by a set of scenarios. Therefore, for the entire planning horizon, a tree of scenarios is built. A target is set for the return on investment and the risk of falling below it is measured and accounted for. The service level is also measured and included in the objective function. The problem is formulated as a multi-stage stochastic mixed-integer linear programming problem. The goal is to maximize the total financial benefit. An alternative formulation based upon the paths in the scenario tree is also proposed. A methodology for measuring the value of the stochastic solution in this problem is discussed. Computational tests using randomly generated data are presented, showing that the stochastic approach is worth considering in this type of problem.
We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, moment-based estimators, Cramér-von-Mises minimum distance estimators, and, as a special case of quantile-based estimators, the Pickands estimator as well as variants of the latter tuned for higher finite sample breakdown point (FSBP) and lower variance. We further consider an estimator matching the population median and median of absolute deviations to the empirical ones (MedMad); again, in order to improve its FSBP, we propose a variant using a suitable asymmetric Mad as constituent, which may be tuned to achieve an expected FSBP of 34%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking neighborhood setting, i.e., the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimator minimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the FSBP, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error, all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study.
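The Pickands estimator mentioned among the quantile-based estimators admits a very compact form; a minimal sketch for the GPD shape parameter follows (the indexing convention for the order statistics is our assumption, and tuned variants as studied in the paper differ from this plain version).

```python
import math

def pickands(sample, k):
    """Pickands (1975) estimator of the GPD shape xi, built from the upper
    order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1); requires 4k <= n."""
    xs = sorted(sample)
    n = len(xs)
    a, b, c = xs[n - k], xs[n - 2 * k], xs[n - 4 * k]
    return math.log((a - b) / (b - c)) / math.log(2.0)
```

For the GPD the population quantiles satisfy (Q(1-a) - Q(1-2a)) / (Q(1-2a) - Q(1-4a)) = 2^xi exactly, so feeding the estimator exact quantile values recovers xi.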
A theory of discrete Cosserat rods is formulated in the language of discrete Lagrangian mechanics. By exploiting Kirchhoff's kinetic analogy, the potential energy density of a rod is a function on the tangent bundle of the configuration manifold and thus formally corresponds to the Lagrangian function of a dynamical system. The equilibrium equations are derived from a variational principle using a formulation that involves null-space matrices. In this formulation, no Lagrange multipliers are necessary to enforce the orthonormality of the directors. Noether's theorem relates first integrals of the equilibrium equations to Lie group actions on the configuration bundle, so-called symmetries. The symmetries relevant for rod mechanics are frame-indifference, isotropy and uniformity. We show that a completely analogous and self-contained theory of discrete rods can be formulated in which the arc-length is a discrete variable ab initio. In this formulation, the potential energy density is defined directly on pairs of points along the arc-length of the rod, in analogy to Veselov's discrete reformulation of Lagrangian mechanics. A discrete version of Noether's theorem then identifies exact first integrals of the discrete equilibrium equations. These exact conservation properties confer accuracy and robustness on the discrete solutions, as demonstrated by selected examples of application.
A number of water flow problems in porous media are modelled by Richards’ equation [1]. There exist many different applications of this model. We are concerned with the simulation of the pressing section of a paper machine. This part of the industrial process provides the dewatering of the paper layer by the use of clothings, i.e. press felts, which absorb the water during pressing [2]. A system of nips is formed in the simplest case by rolls, which increase sheet dryness by pressing against each other (see Figure 1). Many theoretical studies have been carried out for Richards’ equation (see [3], [4] and references therein). Most articles consider the case of x-independent coefficients. This simplifies the system considerably since, after Kirchhoff’s transformation of the problem, the elliptic operator becomes linear. In our case this condition is not satisfied and we have to consider a nonlinear operator of second order. Moreover, all these articles are concerned with the nonstationary problem, while we are interested in the stationary case. Due to the complexity of the physical process, our problem has a specific feature: an additional convective term appears in our model because the porous medium moves with constant velocity through the pressing rolls. This term vanishes in immobile porous media. We are not aware of papers dealing with this kind of modified steady Richards’ problem. The goal of this paper is to obtain stability results, to show the existence of a solution to the discrete problem, and to prove the convergence of the approximate solution to the weak solution of the modified steady Richards’ equation, which describes the transport processes in the pressing section. In Section 2 we present the model under consideration. In Section 3 a numerical scheme obtained by the finite volume method is given. The main part of this paper is the theoretical studies given in Section 4. Section 5 presents a numerical experiment. The conclusion of this work is given in Section 6.
We present some optimality results for robust Kalman filtering. To this end, we introduce the general setup of state space models, which is not limited to a Euclidean or time-discrete framework. We pose the problem of state reconstruction and review the classical algorithms in this context. We then extend the ideal-model setup to allow for outliers, which in this context may be system-endogenous or -exogenous, inducing the somewhat conflicting goals of tracking and attenuation. In quite a general framework, we solve the corresponding minimax MSE problems for both types of outliers separately, resulting in saddle-points consisting of an optimally robust procedure and a corresponding least favorable outlier situation. Still insisting on recursivity, we obtain an operational solution, the rLS filter, and variants of it. Exactly robust-optimal filters would need knowledge of certain hard-to-compute conditional means in the ideal model; things would be much easier if these conditional means were linear. Hence, it is important to quantify the deviation of the exact conditional mean from linearity. We obtain a somewhat surprising characterization of linearity for the conditional expectation in this setting. Combining both optimal filter types (for the system-endogenous and -exogenous situations) we come up with a delayed hybrid filter which is able to treat both types of outliers simultaneously. Keywords: robustness, Kalman filter, innovation outlier, additive outlier
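The clipping idea behind rLS-type robust filters can be sketched as follows. This is an illustrative stand-in, not the exact procedure of the report: the norm bound `b` and the clipping rule applied to the classical Kalman correction are assumptions made for the sketch.

```python
import numpy as np

def rls_step(x_pred, P_pred, y, H, R, b):
    """One correction step of a robustified Kalman filter (sketch):
    the classical Kalman correction is clipped in norm at a bound b
    to attenuate the effect of additive outliers in the observation y."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # classical Kalman gain
    dx = K @ (y - H @ x_pred)                # classical correction term
    norm = np.linalg.norm(dx)
    if norm > b:                             # Huber-type clipping of the correction
        dx = dx * (b / norm)
    x_new = x_pred + dx
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```

For small innovations the step coincides with the classical Kalman update; only large (outlier-suspect) corrections are damped.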
This work deals with the optimal control of a free surface Stokes flow which responds to an applied outer pressure. Typical applications are fiber spinning or thin film manufacturing. We present and discuss two adjoint-based optimization approaches that differ in the treatment of the free boundary as either state or control variable. In both cases the free boundary is modeled as the graph of a function. The PDE-constrained optimization problems are numerically solved by the BFGS method, where the gradient of the reduced cost function is expressed in terms of adjoint variables. Numerical results for both strategies are finally compared with respect to accuracy and efficiency.
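The optimization loop used here can be sketched with a minimal BFGS iteration. The toy quadratic cost and its analytic gradient below are illustrative stand-ins for the reduced cost function and its adjoint-based gradient; they are assumptions of the sketch, not the paper's actual problem.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=100):
    """Minimal BFGS iteration with Armijo backtracking (sketch).
    `grad` plays the role of the adjoint-based reduced gradient."""
    n = len(x0)
    H = np.eye(n)                  # inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton search direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5               # backtracking line search (Armijo)
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:             # curvature condition: BFGS update of H
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

In the PDE-constrained setting, each call to `f` and `grad` would involve a forward (state) solve and an adjoint solve, respectively.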
We present a two-scale finite element method for solving Brinkman’s and Darcy’s equations. These systems of equations model fluid flows in highly porous and porous media, respectively. The method uses a recently proposed discontinuous Galerkin FEM for Stokes’ equations by Wang and Ye and the concept of subgrid approximation developed by Arbogast for Darcy’s equations. In order to reduce the “resonance error” and to ensure convergence to the global fine solution the algorithm is put in the framework of alternating Schwarz iterations using subdomains around the coarse-grid boundaries. The discussed algorithms are implemented using the Deal.II finite element library and are tested on a number of model problems.
Numerical modeling of the electrochemical processes in Li-Ion batteries is an emerging topic of great practical interest. In this work we present a Finite Volume discretization of the electrochemical diffusive processes occurring during the operation of Li-Ion batteries. The system of equations is a nonlinear, time-dependent diffusive system, coupling the Li concentration and the electric potential. The system is formulated at a length scale at which two different types of domains are distinguished, one for the electrolyte and one for the active solid particles in the electrode. The domains can be of highly irregular shape, with the electrolyte occupying the pore space of a porous electrode. The material parameters in each domain differ by several orders of magnitude and can be non-linear functions of the Li ion concentration and/or the electric potential. Moreover, special interface conditions are imposed at the boundary separating the electrolyte from the active solid particles. The field variables are discontinuous across such an interface and the coupling is highly non-linear, rendering direct iteration methods ineffective for such problems. We formulate a Newton iteration for a purely implicit Finite Volume discretization of the coupled system. A series of numerical examples is presented for different types of electrolyte/electrode configurations and material parameters. The convergence of the Newton method is characterized both as a function of the nonlinear material parameters and of the nonlinearity in the interface conditions.
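The structure of such a Newton iteration can be sketched on a toy two-unknown analogue of the coupled concentration/potential system. The residual F and Jacobian J below are illustrative stand-ins for the assembled Finite Volume system, not the actual discretization of the paper.

```python
import numpy as np

def newton(F, J, u0, tol=1e-12, max_iter=50):
    """Newton iteration for a nonlinear system F(u) = 0, as one would use
    for the coupled concentration/potential Finite Volume system (sketch)."""
    u = u0.astype(float)
    for k in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        u = u - np.linalg.solve(J(u), r)   # solve the linearized system
    return u, max_iter

# Toy coupled system in (c, phi): a nonlinear "material law" couples
# the concentration c and the potential phi (purely illustrative).
def F(u):
    c, phi = u
    return np.array([c**3 + phi - 2.0,            # nonlinear diffusion residual
                     np.exp(c) * phi - np.e])     # exponential coupling term

def J(u):
    c, phi = u
    return np.array([[3 * c**2, 1.0],
                     [np.exp(c) * phi, np.exp(c)]])
```

Starting near the solution (c, phi) = (1, 1), the iteration exhibits the usual quadratic convergence; for the full discrete system, `F` and `J` would be assembled from the cell balances and interface conditions.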
This work deals with the modeling and simulation of slender viscous jets exposed to gravity and rotation, as they occur in rotational spinning processes. In terms of slender-body theory we show the asymptotic reduction of a viscous Cosserat rod to a string system for vanishing slenderness parameter. We propose two string models, i.e. inertial and viscous-inertial string models, that differ in the closure conditions and hence yield a boundary value problem and an interface problem, respectively. We investigate the existence regimes of the string models in the four-parameter space of Froude, Rossby and Reynolds numbers and jet length. The convergence regimes in which the respective string solution is the asymptotic limit of the rod turn out to be disjoint and to cover nearly the whole parameter space. We explore the transition hyperplane and analytically derive the low and high Reynolds number limits. Numerical studies of the stationary jet behavior for different parameter ranges complete the work.
In the Dynamic Multi-Period Routing Problem, one is given a new set of requests at the beginning of each time period. The aim is to assign requests to dates such that all requests are fulfilled by their deadline and the total cost for fulfilling them is minimized. We consider a generalization of the problem which allows two classes of requests: 1st class requests can only be fulfilled by the 1st class server, whereas 2nd class requests can be fulfilled by either the 1st or the 2nd class server. For each tour, the 1st class server incurs a cost that is alpha times the cost of the 2nd class server, and in each period only one server can be used. At the beginning of each period, the new requests need to be assigned to service dates. The aim is to make these assignments such that the sum of the costs of all tours over the planning horizon is minimized. We study the problem with requests located on the nonnegative real line and prove that there cannot be a deterministic online algorithm with a competitive ratio better than alpha. However, if we require the difference between release and deadline date to be equal for all requests, we can show that there is a min{2*alpha, 2 + 2/alpha}-competitive algorithm.
In the generalized max flow problem, the aim is to find a maximum flow in a generalized network, i.e., a network with multipliers on the arcs that specify which portion of the flow entering an arc at its tail node reaches its head node. We consider this problem for the class of series-parallel graphs. First, we study the continuous case of the problem and prove that it can be solved using a greedy approach. Based on this result, we present a combinatorial algorithm that runs in O(m^2) time and a dynamic programming algorithm with running time O(m log m) that computes only the maximum flow value but not the flow itself. For the integral version of the problem, which is known to be NP-complete, we present a pseudo-polynomial algorithm.
Online Delay Management
(2010)
We present extensions to the Online Delay Management Problem on a Single Train Line. While a train travels along the line, it learns at each station how many of the passengers wanting to board the train have a delay of delta. If the train does not wait for them, they get delayed even more since they have to wait for the next train. Otherwise, the train waits and those passengers who were on time are delayed by delta. The problem consists in deciding when to wait in order to minimize the total delay of all passengers on the train line. We provide an improved lower bound on the competitive ratio of any deterministic online algorithm solving the problem using game tree evaluation. For the extension of the original model to two possible passenger delays delta_1 and delta_2, we present a 3-competitive deterministic online algorithm. Moreover, we study an objective function modeling the refund system of the German national railway company, which pays passengers with a delay of at least Delta a part of their ticket price back. In this setting, the aim is to maximize the profit. We show that there cannot be a deterministic competitive online algorithm for this problem and present a 2-competitive randomized algorithm.
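The per-station trade-off can be illustrated with a simple accounting sketch. This is a greedy local rule for one station, not the competitive online algorithm analyzed in the paper; in particular, the headway `follow_up` to the next train is an assumed parameter of the sketch.

```python
def station_delay(on_time, delayed, delta, follow_up):
    """Delay incurred at one station for each decision (toy accounting):
    if the train waits, the on-time passengers are delayed by delta;
    if not, the delayed passengers must take the next train and are
    delayed by the headway follow_up. Returns the cheaper decision."""
    wait = on_time * delta          # everyone on time waits delta
    no_wait = delayed * follow_up   # latecomers wait for the next train
    return ('wait', wait) if wait <= no_wait else ('no_wait', no_wait)
```

The online difficulty arises because the numbers of delayed passengers at future stations are unknown when the decision must be made, so this local rule can be far from optimal over the whole line.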
We prove a general monotonicity result about Nash flows in directed networks and use it for the design of truthful mechanisms in the setting where each edge of the network is controlled by a different selfish agent, who incurs costs when her edge is used. The costs for each edge are assumed to be linear in the load on the edge. To compensate for these costs, the agents impose tolls for the usage of edges. When nonatomic selfish network users choose their paths through the network independently and each user tries to minimize a weighted sum of her latency and the toll she has to pay to the edges, a Nash flow is obtained. Our monotonicity result implies that the load on an edge in this setting can not increase when the toll on the edge is increased, so the assignment of load to the edges by a Nash flow yields a monotone algorithm. By a well-known result, the monotonicity of the algorithm then allows us to design truthful mechanisms based on the load assignment by Nash flows. Moreover, we consider a mechanism design setting with two-parameter agents, which is a generalization of the case of one-parameter agents considered in a seminal paper of Archer and Tardos. While the private data of an agent in the one-parameter case consists of a single nonnegative real number specifying the agent's cost per unit of load assigned to her, the private data of a two-parameter agent consists of a pair of nonnegative real numbers, where the first one specifies the cost of the agent per unit load as in the one-parameter case, and the second one specifies a fixed cost, which the agent incurs independently of the load assignment. We give a complete characterization of the set of output functions that can be turned into truthful mechanisms for two-parameter agents. 
Namely, we prove that an output function for the two-parameter setting can be turned into a truthful mechanism if and only if the load assigned to every agent is nonincreasing in the agent's bid for her per unit cost and, for almost all fixed bids for the agent's per unit cost, the load assigned to her is independent of the agent's bid for her fixed cost. When the load assigned to an agent is continuous in the agent's bid for her per unit cost, it must be completely independent of the agent's bid for her fixed cost. These results motivate our choice of linear cost functions without fixed costs for the edges in the selfish routing setting, but they also seem to be of independent interest in the context of algorithmic mechanism design.
The subject of this work is the development of a heat transport model for deep geothermal (hydrothermal) reservoirs. Existence and uniqueness statements concerning a weak solution of the proposed model are given. Furthermore, a method for approximating this solution based on a linear Galerkin scheme is presented, for which both convergence is proven and a convergence rate is derived.
The capacitated single-allocation hub location problem revisited: A note on a classical formulation
(2009)
Denote by G = (N, A) a complete graph where N is the set of nodes and A is the set of edges. Assume that a flow w_ij should be sent from each node i to each node j (i, j in N). One possibility is to send these flows directly between the corresponding pairs of nodes. However, in practice this is often neither efficient nor cost-attractive, because it would imply that a link be built between each pair of nodes. An alternative is to select some nodes to become hubs and use them as consolidation and redistribution points that altogether process the flow in the network more efficiently. Accordingly, hubs are nodes in the graph that receive traffic (mail, phone calls, passengers, etc.) from different origins (nodes) and redirect this traffic directly to the destination nodes (when a link exists) or else to other hubs. The concentration of traffic in the hubs and its shipment to other hubs leads to a natural decrease in the overall cost due to economies of scale.
Radiotherapy is one of the major forms of cancer treatment. The patient is irradiated with high-energetic photons or charged particles with the primary goal of delivering sufficiently high doses to the tumor tissue while simultaneously sparing the surrounding healthy tissue. The inverse search for the treatment plan giving the desired dose distribution is done by means of numerical optimization [11, Chapters 3-5]. For this purpose, the aspects of dose quality in the tissue are modeled as criterion functions, whose mathematical properties also affect the type of the corresponding optimization problem. Clinical practice makes frequent use of criteria that incorporate volumetric and spatial information about the shape of the dose distribution. The resulting optimization problems are of global type by empirical knowledge and typically computed with generic global solver concepts, see for example [16]. The development of good global solvers to compute radiotherapy optimization problems is an important topic of research in this application; however, the structural properties of the underlying criterion functions are typically not taken into account in this context.
One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to assure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a very small, clinically insignificant difference in one criterion and a significant difference in another criterion. In such cases, only the plan that is clinically clearly superior should be included in the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects.
Home Health Care (HHC) services are becoming increasingly important in Europe’s aging societies. Elderly people have varying degrees of need for assistance and medical treatment. It is advantageous to allow them to live in their own homes as long as possible, since a long-term stay in a nursing home can be much more costly for the social insurance system than a treatment at home providing assistance to the required level. Therefore, HHC services are a cost-effective and flexible instrument in the social system. In Germany, organizations providing HHC services are generally either larger charities with countrywide operations or small private companies offering services only in a city or a rural area. While the former have a hierarchical organizational structure and a large number of employees, the latter typically only have some ten to twenty nurses under contract. The relationship to the patients (“customers”) is often long-term and can last for several years. Therefore acquiring and keeping satisfied customers is crucial for HHC service providers and intensive competition among them is observed.
In this work we use the Parsimonious Multi–Asset Heston model recently developed in [Dimitroff et al., 2009] at Fraunhofer ITWM, Department Financial Mathematics, Kaiserslautern (Germany) and apply it to Quanto options. We give a summary of the model and its calibration scheme. A suitable transformation of the Quanto option payoff is explained and used to price Quantos within the new framework. Simulated prices are given and compared to market prices and Black–Scholes prices. We find that the new approach underprices the chosen options, but gives better results than the Black–Scholes approach, which is prevailing in the literature on Quanto options.
Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of interest. Regarding cost, there are production considerations such as spatial restrictions, material cost and the cost of manufacturing the filter. The second type of cost is the operating cost, namely the pressure drop. Although simulations should and ultimately will deal with all four aspects, for the moment our work is focused on cost. The PleatGeo module generates three-dimensional computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that will result for the particular design by direct numerical simulation. The evaluation of a new pleat design takes only a few hours on a standard PC, compared to the days or weeks needed for manufacturing and testing a new prototype of a hydraulic filter. The design parameters are the shape of the pleat, the permeabilities of one or several layers of filter media and the geometry of a supporting netting structure that is used to keep the outflow area open. Besides the underlying structure generation and CFD technology, we present some trends regarding the dependence of the pressure drop on the design parameters that can serve as guidelines for the design of hydraulic filters. Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.
In this work we establish a hierarchy of mathematical models for the numerical simulation of the production process of technical textiles. The models range from highly complex three-dimensional fluid-solid interactions to one-dimensional fiber dynamics with stochastic aerodynamic drag, and further to efficiently tractable stochastic surrogate models for fiber lay-down. They are theoretically and numerically analyzed and coupled via asymptotic analysis, similarity estimates and parameter identification. The model hierarchy is applicable to a wide range of industrially relevant production processes and enables the optimization, control and design of technical textiles.
In financial mathematics, stock prices are usually modelled directly as a result of supply and demand, under the assumption that dividends are paid continuously. In contrast, economic theory gives us the dividend discount model, which assumes that the stock price equals the present value of its future dividends. These two models need not contradict each other: in their paper, Korn and Rogers (2005) introduce a general dividend model in which the stock price follows a stochastic process and equals the sum of all its discounted dividends. In this paper we specify the model of Korn and Rogers in a Black-Scholes framework in order to derive a closed-form solution for the pricing of American call options under the assumption of a known next dividend followed by several stochastic dividend payments during the option's time to maturity.
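The Black-Scholes call price that such closed-form solutions build on can be sketched as follows. The escrowed-dividend adjustment in the usage note is a common textbook device, not necessarily the exact formula derived in the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call with spot S, strike K,
    rate r, volatility sigma and maturity T (sketch building block)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For a known dividend D paid at time t_D < T, a common approximation prices the option on the dividend-adjusted spot `S - D * exp(-r * t_D)`; early-exercise formulas of Roll-Geske-Whaley type combine such adjusted European prices.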
This paper discusses the possibility of applying the ideas of the Wave Based Method, which was developed especially for steady-state acoustics, i.e. for solving Helmholtz-type boundary value problems in bounded domains, to non-acoustic areas such as steady-state temperature propagation, the calculation of the velocity potential of a liquid flux, the calculation of the light irradiance in liver tissue/tumors, etc.
Classical geometrically exact Kirchhoff and Cosserat models are used to study the nonlinear deformation of rods. Extension, bending and torsion of the rod may be represented by the Kirchhoff model. The Cosserat model additionally takes into account shearing effects. Second order finite differences on a staggered grid define discrete viscoelastic versions of these classical models. Since the rotations are parametrised by unit quaternions, the space discretisation results in differential-algebraic equations that are solved numerically by standard techniques like index reduction and projection methods. Using absolute coordinates, the mass and constraint matrices are sparse and this sparsity may be exploited to speed-up time integration. Further improvements are possible in the Cosserat model, because the constraints are just the normalisation conditions for unit quaternions such that the null space of the constraint matrix can be given analytically. The results of the theoretical investigations are illustrated by numerical tests.
Safety and reliability requirements on the one side and short development cycles, low costs and lightweight design on the other side are two competing aspects of truck engineering. For safety critical components essentially no failures can be tolerated within the target mileage of a truck. For other components the goals are to stay below certain predefined failure rates. Reducing weight or cost of structures often also reduces strength and reliability. The requirements on the strength, however, strongly depend on the loads in actual customer usage. Without sufficient knowledge of these loads one needs large safety factors, limiting possible weight or cost reduction potentials. There are a lot of different quantities influencing the loads acting on the vehicle in actual usage. These ‘influencing quantities’ are, for example, the road quality, the driver, traffic conditions, the mission (long haulage, distribution or construction site), and the geographic region. Thus there is a need for statistical methods to model the load distribution with all its variability, which in turn can be used for the derivation of testing specifications.
In this paper, the model of Köttgen, Barkey and Socie, which corrects the elastic stress and strain tensor histories at notches of a metallic specimen under non-proportional loading, is improved. It can be used in connection with any multiaxial stress-strain law of incremental plasticity. For the correction model, we introduce a constraint for the strain components that goes back to the work of Hoffmann and Seeger. Parameter identification for the improved model is performed by Automatic Differentiation and an established least squares algorithm. The results agree accurately both with transient FE computations and with notch strain measurements.
Territory design and districting may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable according to relevant planning criteria. The availability of GIS on computers and the growing interest in Geo-Marketing leads to an increasing importance of this area. Despite the wide range of applications for territory design problems, when taking a closer look at the models proposed in the literature, a lot of similarities can be noticed. Indeed, the models are many times very similar and can often be, more or less directly, carried over to other applications. Therefore, our aim is to provide a generic application-independent model and present efficient solution techniques. We introduce a basic model that covers aspects common to most applications. Moreover, we present a method for solving the general model which is based on ideas from the field of computational geometry. Theoretical as well as computational results underlining the efficiency of the new approach will be given. Finally, we show how to extend the model and solution algorithm to make it applicable for a broader range of applications and how to integrate the presented techniques into a GIS.
In the summer semester of 2008, the Optimization Group of the Department of Mathematics, together with the Departments of Chemistry and Education, held an interdisciplinary seminar on the didactics of chemistry and mathematics ("Fachdidaktik Chemie und Mathematik"). This integrative course concept was intended to strengthen the sustainability of teacher training and to foster the links between general didactics and subject didactics, as well as between different departments. In this particular course, the participants worked on topics at the intersection of chemistry and mathematics, namely crystal geometry, calculus and titration, and graph theory and separation processes. Their findings were presented in seminar talks and written up. The following report contains these write-ups, which comprise learning objectives and competencies, subject-matter, methodological and didactic analyses, as well as lesson plans.
The rotational spinning of viscous jets is of interest in many industrial applications, including pellet manufacturing [4, 14, 19, 20] and drawing, tapering and spinning of glass and polymer fibers [8, 12, 13], see also [15, 21] and references therein. In [12] an asymptotic model for the dynamics of curved viscous inertial fiber jets emerging from a rotating orifice under surface tension and gravity was deduced from the three-dimensional free boundary value problem given by the incompressible Navier-Stokes equations for a Newtonian fluid. In the terminology of [1], it is a string model consisting of balance equations for mass and linear momentum. Accounting for inner viscous transport, surface tension and placing no restrictions on either the motion or the shape of the jet’s center-line, it generalizes the previously developed string models for straight [3, 5, 6] and curved center-lines [4, 13, 19]. Moreover, the numerical results investigating the effects of viscosity, surface tension, gravity and rotation on the jet behavior coincide well with the experiments of Wong et al. [20].
A general multi-period network redesign problem arising in the context of strategic supply chain planning (SCP) is studied. Several aspects of practical relevance in SCP are captured, namely multiple facility layers with different types of facilities, flows between facilities in the same layer, direct shipments to customers, and facility relocation. An efficient two-phase heuristic approach is proposed for obtaining feasible solutions to the problem, which is initially modeled as a large-scale mixed-integer linear program. In the first phase of the heuristic, a linear programming rounding strategy is applied to set initial values for the binary location variables in the model. The second phase uses local search to correct the initial solution when feasibility is not reached, or to improve it when its quality does not meet given criteria. The results of an extensive computational study performed on randomly generated instances are reported.
In this paper, an extension to the classical capacitated single-allocation hub location problem is studied in which the size of the hubs is part of the decision making process. For each potential hub a set of capacities is assumed to be available among which one can be chosen. Several formulations are proposed for the problem, which are compared in terms of the bound provided by the linear programming relaxation. Different sets of inequalities are proposed to enhance the models. Several preprocessing tests are also presented with the goal of reducing the size of the models for each particular instance. The results of the computational experiments performed using the proposed models are reported.
In the literature, there are at least two equivalent two-factor Gaussian models for the instantaneous short rate: the original two-factor Hull-White model (see [3]) and the G2++ model of Brigo and Mercurio (see [1]). Both models first specify time-homogeneous two-factor short rate dynamics and then, by adding a deterministic shift function φ(·), fit the initial term structure of interest rates exactly. However, the resulting expressions are rather clumsy and unintuitive, which means that special care has to be taken for their correct numerical implementation.
Selfish Bin Coloring
(2009)
We introduce a new game, the so-called bin coloring game, in which selfish players control colored items and each player aims at packing its item into a bin with as few different colors as possible. We establish the existence of Nash and strong as well as weakly and strictly Pareto optimal equilibria in these games in the cases of capacitated and uncapacitated bins. For both kinds of games we determine the prices of anarchy and stability concerning those four equilibrium concepts. Furthermore, we show that extreme Nash equilibria, i.e. those with a minimal or maximal number of colors in a bin, can be found in time polynomial in the number of items for the uncapacitated case.
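The game can be illustrated with a bounded best-response dynamics for the uncapacitated case. This is an illustrative experiment; the paper's existence proofs do not rely on this particular dynamics, and the round bound is an assumption of the sketch.

```python
def best_response_nash(colors, n_bins, max_rounds=100):
    """Best-response dynamics for the uncapacitated bin coloring game (sketch):
    each player in turn moves its item to a bin minimizing the number of
    distinct colors in that bin (its own color included)."""
    assign = [0] * len(colors)                    # start with all items in bin 0
    for _ in range(max_rounds):
        changed = False
        for i, c in enumerate(colors):
            def cost(b):
                others = {colors[j] for j in range(len(colors))
                          if j != i and assign[j] == b}
                return len(others | {c})
            best = min(range(n_bins), key=cost)
            if cost(best) < cost(assign[i]):
                assign[i] = best
                changed = True
        if not changed:                           # no player wants to deviate: Nash
            break
    return assign
```

On small instances, the dynamics sorts items of equal color into common bins, which is exactly the cost-1 outcome each player aims for.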
We present a parsimonious multi-asset Heston model. All single-asset submodels follow the well-known Heston dynamics and their parameters are typically calibrated on implied market volatilities. We focus on the calibration of the correlation structure between the single-asset marginals in the absence of sufficiently liquid cross-asset option price data. The presented model is parsimonious in the sense that only d(d-1)/2 asset-asset cross-correlations are required for a d-asset Heston model. In order to calibrate the model, we present two general setups corresponding to relevant practical situations: (1) when the empirical cross-asset correlations in the risk-neutral world are given by the user and we need to calibrate the correlations between the driving Brownian motions, or (2) when they have to be estimated from historical time series. The theoretical background for the proposed estimators, including the ergodicity of the multidimensional CIR process, is also studied.
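A single Heston marginal can be simulated with a standard full-truncation Euler scheme, sketched below. The parameter names follow the usual (kappa, theta, xi, rho) convention; this is a generic discretization, not the calibration procedure of the paper.

```python
import numpy as np

def heston_paths(S0, v0, kappa, theta, xi, rho, r, T, n_steps, n_paths, seed=0):
    """Full-truncation Euler simulation of a single-asset Heston model (sketch):
    dS = r S dt + sqrt(v) S dW1, dv = kappa (theta - v) dt + xi sqrt(v) dW2,
    with corr(dW1, dW2) = rho. Returns terminal asset values."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                       # full truncation of variance
        S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return S
```

In a d-asset version, the d Brownian drivers of the asset components would additionally be correlated with the calibrated cross-asset correlation matrix.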
The understanding of the motion of long slender elastic fibers in turbulent flows is of great interest to research, development and production in technical textiles manufacturing. The fiber dynamics depend on the drag forces that are imposed on the fiber by the fluid. Their computation requires in principle a coupling of fiber and flow with no-slip interface conditions. However, the needed high resolution and adaptive grid refinement make the direct numerical simulation of the three-dimensional fluid-solid problem for slender fibers and turbulent flows not only extremely costly and complex, but also still impossible for practically relevant applications. Embedded in a slender body theory, an aerodynamic force concept for a general drag model was therefore derived in [23] on the basis of a stochastic k-omega description of the turbulent flow field. The turbulence effects on the fiber dynamics were modeled by a correlated random Gaussian force, and its asymptotic limit on a macroscopic fiber scale by Gaussian white noise with flow-dependent amplitude. The concept was numerically studied under the conditions of a melt-spinning process for nonwoven materials in [24], for the specific choice of a non-linear Taylor drag model. Taylor [35] suggested this heuristic model for high Reynolds number flows, Re in [20, 3*10^5], around inclined slender objects under an angle of attack of alpha in (pi/36, pi/2] between flow and object tangent. Since the Reynolds number is considered with respect to the relative velocity between flow and fiber, the numerical results evidently lack accuracy for small Re, which occurs in the case of flexible light fibers occasionally moving with the flow velocity. In such a regime (Re << 1), linear Stokes drag forces were successfully applied for the prediction of small particles immersed in turbulent flows, see e.g. [25, 26, 32, 39]; a modified Stokes force that also takes the particle oscillations into account was presented in [14].
The linear drag relation was also carried over to longer filaments by imposing free-draining assumptions [29, 8]. Apart from this, the Taylor drag suffers from its non-applicability to tangential incident flow situations (α = 0) that often occur in fiber and nonwoven production processes.
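To make the regime distinction concrete, the following sketch evaluates the classical linear Stokes drag on a small sphere (the textbook creeping-flow law, not the fiber drag model of the report) together with the Reynolds number formed with the relative velocity; all parameter values are illustrative assumptions.

```python
import numpy as np

def reynolds_number(rho, v_rel, diameter, mu):
    """Reynolds number w.r.t. the relative velocity between flow and object."""
    return rho * abs(v_rel) * diameter / mu

def stokes_drag(mu, radius, v_rel):
    """Linear Stokes drag on a small sphere, F = 6*pi*mu*R*v_rel,
    valid in the creeping-flow regime Re << 1."""
    return 6.0 * np.pi * mu * radius * v_rel

# air at room temperature, a 10-micron particle nearly following the flow
Re = reynolds_number(rho=1.2, v_rel=0.01, diameter=2e-5, mu=1.8e-5)
F = stokes_drag(mu=1.8e-5, radius=1e-5, v_rel=0.01)
```

The computed Re is far below 1, i.e. well outside the validity range Re ∈ [20, 3·10^5] of the Taylor model, which is exactly where the linear law takes over; note the drag is linear in the relative velocity.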
This contribution presents a model reduction method for nonlinear problems in structural mechanics. Emanating from a Finite Element model of the structure, a subspace and a lookup table are generated which do not require a linearisation of the equations. The method is applied to a model created with commercial FEM software. In this case, the terms describing geometrical and material nonlinearities are not explicitly known.
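The abstract does not specify how the subspace is built; a common nonlinear-friendly choice is proper orthogonal decomposition (POD) of Finite Element snapshots, which needs no linearisation of the equations. The sketch below is therefore an assumed illustration, not the authors' algorithm.

```python
import numpy as np

def pod_subspace(snapshots, r):
    """Proper orthogonal decomposition: an orthonormal basis of the
    dominant r-dimensional subspace spanned by the snapshot columns
    (the leading left singular vectors)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# toy snapshot matrix with an (almost exactly) rank-2 structure
rng = np.random.default_rng(1)
modes = rng.standard_normal((100, 2))      # two dominant deformation modes
coeffs = rng.standard_normal((2, 50))      # 50 snapshots
X = modes @ coeffs + 1e-8 * rng.standard_normal((100, 50))
V, s = pod_subspace(X, r=2)
# reduced coordinates y = V.T @ x reconstruct the state: x ≈ V @ y
err = np.linalg.norm(X - V @ (V.T @ X)) / np.linalg.norm(X)
```

The relative projection error stays at the noise level, i.e. two reduced coordinates capture the full snapshot set.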
In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
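A discrete analogue of the (isotropic) sharp function can be sketched as follows: per pixel, the mean absolute deviation from the local average over a small window, which is large near edges and zero in flat regions. The window-based implementation is an assumption for illustration; it is not the paper's anisotropic variant.

```python
import numpy as np

def sharp_function(img, radius=1):
    """Discrete mean-oscillation measure: for each pixel, the mean
    absolute deviation from the local average over a
    (2*radius+1) x (2*radius+1) window."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    padded = np.pad(img.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = np.abs(win - win.mean()).mean()
    return out

# a vertical step edge: the measure vanishes in flat regions and
# responds only along the discontinuity
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sharp_function(img)
```

Such a response map can then be used to steer the conductivity of a nonlinear diffusion filter, reducing smoothing across detected edges.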
When testing safety-relevant components of commercial vehicles, one faces the task of estimating the highly diverse loading imposed by customers and deriving from it a test program for the components that must satisfy several conflicting requirements: the program must be severe enough that, after a successful test, failure in the field under intended use can be ruled out; it must not lead to over-dimensioning of the components; and sufficient statistical confidence must be reached with relatively few component tests. Because of the high safety requirements, the classical statistical approach (estimating the distribution of customer loads from measured data, estimating the distribution of component strength from test results, and deriving a failure probability) requires knowledge of the distributions in their extreme tails. As a rule, the available data are far from sufficient for this. In the classical "empirical" approach, characteristic values of load and strength are compared and a sufficient safety margin is demanded. The procedure proposed here combines both methods: it uses statistical modelling as far as the data justify and supplements the results with empirically motivated safety factors. In setting the test loads, the possibilities available in the test are taken into account. The main advantages of this procedure are (a) transparency regarding the statements achievable by statistical means and the interplay between load determination and testing, and (b) the possibility of reducing the empirical components in favour of the statistical ones by investing correspondingly in measurements and testing.
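The classical statistical step referred to above can be sketched with the textbook stress-strength interference model under an assumed normal distribution for both load and strength; the point of the abstract is precisely that this tail-sensitive computation is unreliable with sparse data, so treat the numbers as illustrative only.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def failure_probability(mu_load, sd_load, mu_strength, sd_strength):
    """Stress-strength interference for independent normal load L and
    strength S: P(L > S) = Phi((mu_L - mu_S) / sqrt(sd_L^2 + sd_S^2)).
    The result depends entirely on the extreme distribution tails."""
    z = (mu_load - mu_strength) / sqrt(sd_load**2 + sd_strength**2)
    return normal_cdf(z)

# illustrative (made-up) characteristic values
p = failure_probability(mu_load=100.0, sd_load=10.0,
                        mu_strength=160.0, sd_strength=8.0)
```

A small shift of the assumed means or scatters changes p by orders of magnitude, which is why the proposed procedure supplements such estimates with empirically motivated safety factors.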
An efficient mathematical model to virtually generate woven metal wire meshes is presented. The accuracy of this model is verified by comparing virtual structures with three-dimensional images of real meshes produced via computed tomography. Virtual structures are generated for three types of metal wire meshes using only easy-to-measure parameters. For these geometries the velocity-dependent pressure drop is simulated and compared with measurements performed by the GKD - Gebr. Kufferath AG. The simulation results lie within the tolerances of the measurements. The generation of the structures and the numerical simulations were done at GKD using the Fraunhofer GeoDict software.
In this paper, we present a viscoelastic rod model that is suitable for fast and sufficiently accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing (‘stiff’ dof), bending and torsion (‘soft’ dof). For inner dissipation, a consistent damping potential from Antman is chosen. Our discrete model is based on a finite difference discretisation on a staggered grid. After index reduction from three to zero, the right-hand side function f and the Jacobian ∂f/∂(q, v, t) of the dynamical system q˙ = v, v˙ = f(q, v, t) are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. For the time integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computation times within milliseconds, it is suitable for interactive manipulation in ‘virtual reality’ applications. In contrast to common fast VR rod models, our model reflects the structural mechanics solutions sufficiently correctly, as a comparison with ABAQUS finite element results shows.
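Why a stiff solver such as RADAU5 is needed for the first-order form q˙ = v, v˙ = f(q, v, t) can be illustrated on a linear toy problem: a stiff damped oscillator standing in for the rod's ‘stiff’ degrees of freedom. The sketch below uses a plain implicit (backward) Euler step instead of RADAU5/DASPK, purely to show the implicit-solver mechanics; the parameter values are made up.

```python
import numpy as np

# Toy stand-in for the rod dynamics: q' = v, v' = -k*q - c*v with a
# large stiffness k, written as y' = A y for the state y = (q, v).
# An explicit method would need h on the order of 1/sqrt(k) to stay
# stable; the implicit step below is unconditionally stable.
k, c, h = 1e4, 10.0, 1e-3
A = np.array([[0.0, 1.0],
              [-k,  -c]])
y = np.array([1.0, 0.0])                   # initial state (q, v)

# backward Euler: solve (I - h*A) y_{n+1} = y_n; for this linear
# system the solve can be precomputed once
M = np.linalg.inv(np.eye(2) - h * A)
for _ in range(1000):                      # integrate to t = 1
    y = M @ y
```

After 1000 stable steps the damped stiff oscillation has decayed toward rest, with a step size far larger than any explicit method would tolerate for this k.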
Inspired by Kirchhoff’s kinetic analogy, the special Cosserat theory of rods is formulated in the language of Lagrangian mechanics. A static rod corresponds to an abstract Lagrangian system where the energy density takes the role of the Lagrangian function. The equilibrium equations are derived from a variational principle. Noether’s theorem relates their first integrals to frame-indifference, isotropy and uniformity. These properties can be formulated in terms of Lie group symmetries. The rotational degrees of freedom, present in the geometrically exact beam theory, are represented in terms of orthonormal director triads. To reduce the number of unknowns, Lagrange multipliers associated with the orthonormality constraints are eliminated using null-space matrices. This is done both in the continuous and in the discrete setting. The discrete equilibrium equations are used to compute discrete rod configurations, where different types of boundary conditions can be handled.
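The null-space elimination of Lagrange multipliers can be sketched on a small linear model problem (not the rod equations themselves): with a constraint Jacobian G, any matrix N whose columns span ker(G) turns the constrained stationarity conditions into a reduced, multiplier-free system. The quadratic energy and constraint below are illustrative assumptions.

```python
import numpy as np

def null_space_matrix(G, tol=1e-12):
    """Columns of N span ker(G), i.e. G @ N = 0, computed via SVD."""
    _, s, Vt = np.linalg.svd(G)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# toy constrained equilibrium: minimize 0.5 x'Ax - b'x subject to G x = 0
A = np.diag([2.0, 3.0, 4.0])
b = np.array([1.0, 1.0, 1.0])
G = np.array([[1.0, 1.0, 1.0]])            # one linear constraint

N = null_space_matrix(G)                   # here: a 3x2 matrix
# reduced system N' A N y = N' b with x = N y; the multipliers of
# the constraint never appear
y = np.linalg.solve(N.T @ A @ N, N.T @ b)
x = N @ y
```

The solution x satisfies the constraint exactly, and the residual A x − b lies in the row space of G, i.e. it is absorbed by the (eliminated) multipliers.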