Kaiserslautern - Fachbereich Mathematik
Year of publication
- 1999 (131)
- 1995 (45)
- 1996 (44)
- 2000 (44)
- 2014 (43)
- 1992 (41)
- 2003 (39)
- 2005 (38)
- 1997 (36)
- 1998 (35)
- 1994 (34)
- 2007 (34)
- 2015 (34)
- 1991 (31)
- 2006 (31)
- 2021 (31)
- 2022 (31)
- 2001 (30)
- 2009 (30)
- 2011 (30)
- 2020 (30)
- 1993 (28)
- 2008 (28)
- 2016 (28)
- 2002 (27)
- 2013 (27)
- 2004 (26)
- 2012 (23)
- 2023 (23)
- 2010 (21)
- 2017 (19)
- 2018 (18)
- 2019 (15)
- 1990 (10)
- 1985 (8)
- 1984 (7)
- 1986 (6)
- 1987 (6)
- 1989 (6)
- 1988 (5)
- 2024 (5)
- 1979 (2)
- 1981 (1)
- 1983 (1)
Document Type
- Preprint (608)
- Doctoral Thesis (292)
- Report (121)
- Article (82)
- Diploma Thesis (26)
- Lecture (25)
- Master's Thesis (8)
- Course Material (5)
- Part of a Book (4)
- Study Thesis (4)
Keywords
- Mathematische Modellierung (19)
- MINT (16)
- Schule (16)
- Wavelet (14)
- Inverses Problem (12)
- Mehrskalenanalyse (12)
- Modellierung (12)
- Mathematikunterricht (9)
- praxisorientiert (9)
- Approximation (8)
- Boltzmann Equation (8)
- Regularisierung (8)
- Lineare Algebra (7)
- Location Theory (7)
- Numerical Simulation (7)
- Optimization (7)
- integer programming (7)
- Algebraische Geometrie (6)
- Finanzmathematik (6)
- Gravitationsfeld (6)
- Navier-Stokes-Gleichung (6)
- Portfolio Selection (6)
- modelling (6)
- wavelets (6)
- Elastizität (5)
- Elastoplastizität (5)
- Modellierungswoche (5)
- Numerische Mathematik (5)
- Stochastische dynamische Optimierung (5)
- haptotaxis (5)
- isogeometric analysis (5)
- nonparametric regression (5)
- portfolio optimization (5)
- time series (5)
- Combinatorial Optimization (4)
- Galerkin-Methode (4)
- Homogenisierung <Mathematik> (4)
- Hysterese (4)
- Kugel (4)
- Multicriteria optimization (4)
- NURBS (4)
- Optionspreistheorie (4)
- Portfolio-Optimierung (4)
- Simulation (4)
- Sphäre (4)
- hub location (4)
- linear algebra (4)
- mathematical education (4)
- multiscale model (4)
- network flows (4)
- neural network (4)
- numerics (4)
- praxis orientated (4)
- Analysis (3)
- Bewertung (3)
- Brownian motion (3)
- CHAMP <Satellitenmission> (3)
- Cauchy-Navier equation (3)
- Cauchy-Navier-Gleichung (3)
- Combinatorial optimization (3)
- Computeralgebra (3)
- Elastoplasticity (3)
- Erwarteter Nutzen (3)
- Finite-Volumen-Methode (3)
- Geodäsie (3)
- Geometric Ergodicity (3)
- Gravimetrie (3)
- Gröbner bases (3)
- Gröbner-Basis (3)
- Harmonische Spline-Funktion (3)
- Hysteresis (3)
- Intensity modulated radiation therapy (3)
- Kugelflächenfunktion (3)
- Lineare Optimierung (3)
- Monte-Carlo-Simulation (3)
- Mosco convergence (3)
- Multicriteria Optimization (3)
- Multiobjective optimization (3)
- Multiresolution Analysis (3)
- Numerische Strömungssimulation (3)
- Partial Differential Equations (3)
- Poisson-Gleichung (3)
- Portfolio Optimization (3)
- Portfoliomanagement (3)
- Randwertproblem / Schiefe Ableitung (3)
- Risikomanagement (3)
- Simplex (3)
- Sobolev-Raum (3)
- Spherical Wavelets (3)
- Spline (3)
- Spline-Approximation (3)
- Standortplanung (3)
- Stücklisten (3)
- Timetabling (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Vektorwavelets (3)
- Wavelet-Analyse (3)
- autoregressive process (3)
- average density (3)
- combinatorial optimization (3)
- consecutive ones property (3)
- consistency (3)
- domain decomposition (3)
- facets (3)
- harmonic density (3)
- heuristic (3)
- kinetic equations (3)
- lattice Boltzmann method (3)
- low Mach number limit (3)
- optimales Investment (3)
- radiotherapy (3)
- tangent measure distributions (3)
- well-posedness (3)
- Algebraic Optimization (2)
- Algebraic dependence of commuting elements (2)
- Algebraische Abhängigkeit kommutierender Elemente (2)
- Approximation Algorithms (2)
- Asymptotic Analysis (2)
- Asymptotic Expansion (2)
- Asymptotik (2)
- B-Spline (2)
- B-splines (2)
- Beam-on time (2)
- Biorthogonalisation (2)
- Biot-Savart Operator (2)
- Biot-Savart operator (2)
- CFD (2)
- CHAMP (2)
- Change analysis (2)
- Computer Algebra System (2)
- Computeralgebra System (2)
- Decomposition and Reconstruction Schemes (2)
- Decomposition cardinality (2)
- Delay Management (2)
- Derivat <Wertpapier> (2)
- Diffusionsprozess (2)
- Diskrete Fourier-Transformation (2)
- EM algorithm (2)
- Elastic BVP (2)
- Elasticity (2)
- Elastische Deformation (2)
- Elastisches RWP (2)
- Elastoplastisches RWP (2)
- Endliche Geometrie (2)
- Erdmagnetismus (2)
- FFT (2)
- Fatigue (2)
- Field splitting (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- GOCE <Satellitenmission> (2)
- GRACE <Satellitenmission> (2)
- Geometrical Algorithms (2)
- Geothermal Flow (2)
- Graph Theory (2)
- Graphentheorie (2)
- Gruppentheorie (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hochskalieren (2)
- Hypervolume (2)
- IMRT (2)
- Integer-valued time series (2)
- Inverse Problem (2)
- Isogeometrische Analyse (2)
- Jiang's model (2)
- Jiang-Modell (2)
- Konvergenz (2)
- Kreditrisiko (2)
- Langevin equation (2)
- Laplace transform (2)
- Lebensversicherung (2)
- Level-Set-Methode (2)
- Line Planning (2)
- Line Pool Generation (2)
- Lineare Elastizitätstheorie (2)
- Lineare partielle Differentialgleichung (2)
- Local smoothing (2)
- Logik (2)
- Lokalisation (2)
- Markov Chain (2)
- Mathematik (2)
- Mehrkriterielle Optimierung (2)
- Mehrskalenmodell (2)
- Mikrostruktur (2)
- Mixture Models (2)
- Modellbildung (2)
- Modulraum (2)
- Multileaf collimator sequencing (2)
- Multiobjective programming (2)
- Multiset Multicover (2)
- Multivariate Approximation (2)
- Neural networks (2)
- Numerisches Verfahren (2)
- Open Source Library (2)
- Optimal Control (2)
- Optimierung (2)
- Optimization in Public Transportation (2)
- POD (2)
- Palm distributions (2)
- Parallel volume (2)
- Particle Methods (2)
- Partielle Differentialgleichung (2)
- Poröser Stoff (2)
- Public Transport Planning (2)
- Rarefied Gas Dynamics (2)
- Ratenunabhängigkeit (2)
- Regressionsanalyse (2)
- Regularization (2)
- Robust Optimization (2)
- Scheduling (2)
- Schnitttheorie (2)
- Singularity theory (2)
- Sobolev spaces (2)
- Software for Public Transport Planning (2)
- Split Operator (2)
- Statistisches Modell (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Stop Location (2)
- Subset selection (2)
- Theorie schwacher Lösungen (2)
- Time Series (2)
- Transaktionskosten (2)
- Up Functions (2)
- Upscaling (2)
- Value-at-Risk (2)
- Variationsungleichungen (2)
- Vehicle Scheduling (2)
- Vektorkugelfunktionen (2)
- Volatilität (2)
- Weißes Rauschen (2)
- White Noise Analysis (2)
- Wills functional (2)
- algebraic geometry (2)
- algorithmic game theory (2)
- approximate identity (2)
- asymptotic analysis (2)
- asymptotic behavior (2)
- average densities (2)
- cancer cell invasion (2)
- changepoint test (2)
- competitive analysis (2)
- connectedness (2)
- convergence (2)
- convex optimization (2)
- coset enumeration (2)
- curve singularity (2)
- degenerate diffusion (2)
- delay (2)
- density distribution (2)
- duality (2)
- dynamische Systeme (2)
- elastoplasticity (2)
- equilibrium strategies (2)
- evolutionary spectrum (2)
- finite volume method (2)
- geomagnetism (2)
- geometric ergodicity (2)
- global existence (2)
- harmonische Dichte (2)
- heat equation (2)
- hidden variables (2)
- homogenization (2)
- hub covering (2)
- hysteresis (2)
- illiquidity (2)
- image denoising (2)
- incompressible Navier-Stokes equations (2)
- interface problem (2)
- inverse optimization (2)
- inverse problem (2)
- inverse problems (2)
- k-link shortest path (2)
- lacunarity distribution (2)
- level set method (2)
- limit and jump relations (2)
- linear optimization (2)
- localizing basis (2)
- location theory (2)
- mathematical modeling (2)
- mesh generation (2)
- mixture (2)
- modal derivatives (2)
- moment realizability (2)
- monotropic programming (2)
- multicriteria optimization (2)
- multileaf collimator (2)
- multiplicative noise (2)
- nonlinear diffusion (2)
- nonlinear diffusion filtering (2)
- occupation measure (2)
- online optimization (2)
- optimal control (2)
- optimal investment (2)
- optimization (2)
- order-two densities (2)
- pH-taxis (2)
- parabolic system (2)
- particle method (2)
- particle methods (2)
- polynomial algorithms (2)
- poroelasticity (2)
- porous media (2)
- pyramid scheme (2)
- rate of convergence (2)
- regression analysis (2)
- regular surface (2)
- regularization (2)
- regularization wavelets (2)
- reproducing kernel (2)
- reproduzierender Kern (2)
- satellite gravity gradiometry (2)
- scale-space (2)
- series-parallel graphs (2)
- simplex (2)
- singularities (2)
- spherical approximation (2)
- splines (2)
- stationarity (2)
- stationary radiative transfer equation (2)
- subgroup problem (2)
- uniqueness (2)
- universal objective function (2)
- valid inequalities (2)
- variational inequalities (2)
- vector spherical harmonics (2)
- vectorial wavelets (2)
- weak solution (2)
- worst-case scenario (2)
- "Slender-Body"-Theorie (1)
- (Joint) chance constraints (1)
- (dynamic) network flows (1)
- 2-d kernel regression (1)
- 3D image analysis (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- ALE-Methode (1)
- AR-ARCH (1)
- Abel integral equations (1)
- Abelian groups (1)
- Abgeschlossenheit (1)
- Ableitung höherer Ordnung (1)
- Ableitungsfreie Optimierung (1)
- Abstract ODE (1)
- Abstract linear systems theory (1)
- Acid-mediated tumor invasion (1)
- Adjazenz-Beziehungen (1)
- Adjoint method (1)
- Adjoint system (1)
- Advanced Encryption Standard (1)
- Aggregation (1)
- Agriculture Loan (1)
- Algebraic Geometry (1)
- Algebraic geometry (1)
- Algebraic groups (1)
- Algebraic optimization (1)
- Algebraischer Funktionenkörper (1)
- Algorithmics (1)
- Alter (1)
- Analytic semigroup (1)
- Angewandte Mathematik (1)
- Anisotropic smoothness classes (1)
- Annulus (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Applications (1)
- Approximationsalgorithmus (1)
- Approximative Identität (1)
- Arbitrage (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Arduino (1)
- Asiatische Option (1)
- Asset allocation (1)
- Asset-liability management (1)
- Associative Memory Problem (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Ausfallrisiko (1)
- Automatische Differentiation (1)
- Automatische Spracherkennung (1)
- Automorphismengruppe (1)
- Autoregression (1)
- Autoregressive Hilbertian model (1)
- Autoregressive time series (1)
- Balance sheet (1)
- Banach lattice (1)
- Barriers (1)
- Basic Scheme (1)
- Basis Risk (1)
- Basket Option (1)
- Baum <Mathematik> (1)
- Bayes risk (1)
- Bayes-Entscheidungstheorie (1)
- Bayesrisiko (1)
- Beam models (1)
- Beam orientation (1)
- Behinderter (1)
- Bell Number (1)
- Berechnungskomplexität (1)
- Bernstein Kern (1)
- Bernstein–Gelfand–Gelfand construction (1)
- Bernstejn-Polynom (1)
- Beschichtungsprozess (1)
- Beschränkte Krümmung (1)
- Bessel functions (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Betriebsfestigkeit (1)
- Bilanzstrukturmanagement (1)
- Bildsegmentierung (1)
- Binary (1)
- Binomialbaum (1)
- Biot Poroelastizitätgleichung (1)
- Bisector (1)
- Black-Scholes model (1)
- Bondindizes (1)
- Bootstrap (1)
- Boundary Value Problem (1)
- Boundary Value Problem / Oblique Derivative (1)
- Boundary Value Problems (1)
- Box Algorithms (1)
- Box-Algorithm (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownsche Bewegung (1)
- CAQ (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFL type conditions (1)
- CHAMP-Mission (1)
- CPDO (1)
- CUSUM statistic (1)
- CWGAN (1)
- Cantor sets (1)
- Capacity (1)
- Capital-at-Risk (1)
- Carreau law (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Navier scaling function and wavelet (1)
- Cauchy-Navier-Equation (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Charakter <Gruppentheorie> (1)
- Chi-Quadrat-Test (1)
- Cholesky-Verfahren (1)
- Chorin's projection scheme (1)
- Chow Quotient (1)
- Circle Location (1)
- Classification (1)
- Cluster-Analyse (1)
- Coarse graining (1)
- Cohen-Lenstra heuristic (1)
- Collision Operator (1)
- Collocation Method plus (1)
- Commodity Index (1)
- Competitive Analysis (1)
- Complex Structures (1)
- Complexity (1)
- Complexity and performance of numerical algorithms (1)
- Composite Materials (1)
- Computer Algebra (1)
- Computer algebra (1)
- Conditional Value-at-Risk (1)
- Connectivity (1)
- Consistency analysis (1)
- Consistent Price Processes (1)
- Constraint Generation (1)
- Construction of hypersurfaces (1)
- Container (1)
- Continuum mechanics (1)
- Convergence Rate (1)
- Convex Analysis (1)
- Convex geometry (1)
- Convex sets (1)
- Convexity (1)
- Copula (1)
- Core (1)
- Cosine function (1)
- Coupled PDEs (1)
- Covid-19 (1)
- Coxeter groups (1)
- Coxeter-Freudenthal-Kuhn triangulation (1)
- Crane (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Curvature (1)
- Curved viscous fibers (1)
- Cut (1)
- Cutting and Packing (1)
- Cycle Decomposition (1)
- DSMC (1)
- Darstellungstheorie (1)
- Das Urbild eines Ideals unter einem Morphismus von Algebren (1)
- Debt Management (1)
- Decision Making (1)
- Decision support (1)
- Decomposition of integer matrices (1)
- Defaultable Options (1)
- Deformationstheorie (1)
- Degenerate Diffusion Semigroups (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Delay Differential Equations (1)
- Dense gas (1)
- Derivatives (1)
- Didaktik (1)
- Differential Cross-Sections (1)
- Differential forms (1)
- Differentialgleichung mit nacheilendem Argument (1)
- Differentialinklusionen (1)
- Differenzenverfahren (1)
- Differenzierbare Mannigfaltigkeit (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Dirichlet series (1)
- Dirichlet-Problem (1)
- Discrete Bicriteria Optimization (1)
- Discrete decision problems (1)
- Discrete velocity models (1)
- Discriminatory power (1)
- Diskrete Mathematik (1)
- Dispersionsrelation (1)
- Dissertation (1)
- Diversifikation (1)
- Domain Decomposition (1)
- Doppelbarriereoption (1)
- Double Barrier Option (1)
- Druckkorrektur (1)
- Dynamic Network Flow Problem (1)
- Dynamic Network Flows (1)
- Dynamic capillary pressure (1)
- Dynamic cut (1)
- Dynamische Systeme (1)
- Dynamische Topographie (1)
- Dynasys (1)
- Dünnfilmapproximation (1)
- EDF observation models (1)
- EGM96 (1)
- EM algorithm (1)
- Earliest arrival augmenting path (1)
- Earth' (1)
- Earth's disturbing potential (1)
- Education (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Efficient Reliability Estimation (1)
- Effizienter Algorithmus (1)
- Effizienz (1)
- Eigenschwingung (1)
- Eikonal equation (1)
- Elastoplastic BVP (1)
- Electricity consumption (1)
- Elektromagnetische Streuung (1)
- Elektronik (1)
- Elementare Zahlentheorie (1)
- Eliminationsverfahren (1)
- Elliptic-parabolic equation (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Energy markets (1)
- Enskog equation (1)
- Entscheidungsbaum (1)
- Entscheidungsunterstützung (1)
- Enumerative Geometrie (1)
- Epidemiologie (1)
- Epidemiology (1)
- Erdöl Prospektierung (1)
- Ergodic (1)
- Erwartungswert-Varianz-Ansatz (1)
- Essential m-dissipativity (1)
- Euler's equation of motion (1)
- Evacuation Planning (1)
- Evakuierung (1)
- Evolution Equations (1)
- Evolutionary Integral Equations (1)
- Exogenous (1)
- Expected shortfall (1)
- Experimental Data (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extrapolation (1)
- Extreme Events (1)
- Extreme value theory (1)
- FEM (1)
- FEM-FCT stabilization (1)
- FPM (1)
- FPTAS (1)
- Faden (1)
- Faltung (1)
- Faltung <Mathematik> (1)
- Families of Probability Measures (1)
- Fast Pseudo Spectral Algorithm (1)
- Fast Wavelet Transform (1)
- Feed-forward Networks (1)
- Feedfoward Neural Networks (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber spinning (1)
- Fiber suspension flow (1)
- Filippov theory (1)
- Filippov-Theorie (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finanzzeitreihe (1)
- Finite-Elemente-Methode (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- First-order optimality system (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- FlowLoc (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Kopplung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck equation (1)
- Fokker-Planck-Gleichung (1)
- Forbidden Regions (1)
- Forecasting (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fourier-Transformation (1)
- Fredholm integral equation of the second kind (1)
- Fredholmsche Integralgleichung (1)
- Frequency Averaging (1)
- Function of bounded variation (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionalanalysis (1)
- Funktionenkörper (1)
- Fuzzy Programming (1)
- GARCH (1)
- GARCH Modelle (1)
- GOCE <satellite mission> (1)
- GPS-satellite-to-satellite tracking (1)
- GRACE (1)
- GRACE <satellite mission> (1)
- Galerkin Approximation (1)
- Gamma-Konvergenz (1)
- Garantiezins (1)
- Garbentheorie (1)
- Gauge Distances (1)
- Gauss-Manin connection (1)
- Gaussian random noise (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Generative adversarial networks (1)
- Geo-referenced data (1)
- Geodesie (1)
- Geodätischer Satellit (1)
- Geomagnetic Field Modelling (1)
- Geomagnetismus (1)
- Geomathematik (1)
- Geometrical algorithms (1)
- Geometrische Ergodizität (1)
- Geostrophic flow (1)
- Geostrophisches Gleichgewicht (1)
- Geothermal Systems (1)
- Geothermischer Fluss (1)
- Gewichteter Sobolev-Raum (1)
- Gewichtung (1)
- Gittererzeugung (1)
- Gleichgewichtsstrategien (1)
- Gleichmäßige Approximation (1)
- Global Optimization (1)
- Global optimization (1)
- Globale nichtlineare Analysis (1)
- Glättung (1)
- Glättungsparameterwahl (1)
- Grad expansion (1)
- Gradient based optimization (1)
- Granular flow (1)
- Granulat (1)
- Graph coloring (1)
- Gravimetry (1)
- Gravitation (1)
- Gravitational Field (1)
- Gravitationsmodell (1)
- Greedy Heuristic (1)
- Greedy algorithm (1)
- Green’s function (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Große Abweichung (1)
- Gruppenoperation (1)
- Gröbner base (1)
- Gröbner bases in monoid and group rings (1)
- Gröbner-basis (1)
- Gyroscopic (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamiltonian (1)
- Hamiltonian Path Integrals (1)
- Hamiltonian groups (1)
- Handelsstrategien (1)
- Hardy space (1)
- Harmonische Analyse (1)
- Harmonische Dichte (1)
- Harmonische Funktion (1)
- Hazard Functions (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Helmholtz Type Boundary Value Problems (1)
- Helmholtz decomposition (1)
- Helmholtz-Decomposition (1)
- Helmholtz-Zerlegung (1)
- Heston-Modell (1)
- Heuristic (1)
- Heuristik (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchies (1)
- Hierarchische Matrix (1)
- Higher Order Differentials as Boundary Data (1)
- Hilbert complexes (1)
- Hochschild homology (1)
- Hochschild-Homologie (1)
- Homogeneous Relaxation (1)
- Homogenization (1)
- Homologietheorie (1)
- Homologische Algebra (1)
- Homotopie (1)
- Homotopiehochhebungen (1)
- Homotopy (1)
- Homotopy lifting (1)
- Hub Location Problem (1)
- Hub-and-Spoke-System (1)
- Hybrid Codes (1)
- Hydrological Gravity Variations (1)
- Hydrologie (1)
- Hydrostatischer Druck (1)
- Hyperbolic Conservation (1)
- Hyperelastizität (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hyperspektraler Sensor (1)
- Hypocoercivity (1)
- INGARCH (1)
- ITSM (1)
- Idealklassengruppe (1)
- Identifiability (1)
- Ill-Posed Problems (1)
- Ill-posed Problems (1)
- Ill-posed problem (1)
- Illiquidität (1)
- Image restoration (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Improperly posed problems (1)
- Impulse control (1)
- Incompressible Navier-Stokes (1)
- Index Insurance (1)
- Industrial Applications (1)
- Infectious Diseases (1)
- Inflation (1)
- Information Theory (1)
- Infrarotspektroskopie (1)
- Injectivity of mappings (1)
- Injektivität von Abbildungen (1)
- Inkompressibel Navier-Stokes (1)
- Inkorrekt gestelltes Problem (1)
- Insurance (1)
- Integral (1)
- Integral Equations (1)
- Integral transform (1)
- Integration (1)
- Intensität (1)
- Interdisziplinärer Projektunterricht (1)
- Internationale Diversifikation (1)
- Interpolation Algorithm (1)
- Inverse Problems (1)
- Inverse problems in Banach spaces (1)
- Irreduzibler Charakter (1)
- Isogeometric Analysis (1)
- Isotropy (1)
- Iterative Methods (1)
- Ito (1)
- Jacobigruppe (1)
- Jeffreys' prior (1)
- Jiang's constitutive model (1)
- Jiangsches konstitutives Gesetz (1)
- K-best solution (1)
- K-cardinality trees (1)
- Kaktusgraph (1)
- Kalkül (1)
- Kalkül des natürlichen Schließens (1)
- Kallianpur-Robbins law (1)
- Kanalcodierung (1)
- Karhunen-Loève expansion (1)
- Kategorientheorie (1)
- Kelvin Transformation (1)
- Kernschätzer (1)
- Kinetic Schemes (1)
- Kinetic Theory of Gases (1)
- Kinetic theory (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Knapsack (1)
- Knapsack problem (1)
- Kohonen's SOM (1)
- Kombinatorik (1)
- Kombinatorische Optimierung (1)
- Kommutative Algebra (1)
- Kompakter Träger <Mathematik> (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Konstruktive Approximation (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsmechanik (1)
- Kontinuumsphysik (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Mengen (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Kreditderivate (1)
- Kristallmathematik (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kugelfunktion (1)
- Kullback Leibler distance (1)
- Kullback-Leibler divergence (1)
- Kurvenschar (1)
- L-curve Methode (1)
- L2-Approximation (1)
- LIBOR (1)
- Label correcting algorithm (1)
- Label setting algorithm (1)
- Lagrange (1)
- Lagrangian Functions (1)
- Lagrangian relaxation (1)
- Large-Scale Problems (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Lavrentiev regularization (1)
- Lavrentiev regularization for equations with monotone operators (1)
- Leading-Order Optimality (1)
- Learnability (1)
- Learning systems (1)
- Least-squares Monte Carlo method (1)
- Lebesque-Integral (1)
- Legendre Wavelets (1)
- Lehrkräfte (1)
- Lehrmittel (1)
- Level set methods (1)
- Level sets (1)
- Levy process (1)
- Lexicographic Order (1)
- Lexicographic max-ordering (1)
- Lie algebras (1)
- Lie-Typ-Gruppe (1)
- Linear Integral Equations (1)
- Linear kinematic hardening (1)
- Linear kinematische Verfestigung (1)
- Linear membership function (1)
- Lineare Integralgleichung (1)
- Lippmann-Schwinger Equation (1)
- Lippmann-Schwinger equation (1)
- Liquidität (1)
- Local completeness (1)
- Local existence uniqueness (1)
- Locally Supported Radial Basis Functions (1)
- Locally Supported Zonal Kernels (1)
- Locally stationary processes (1)
- Location (1)
- Location problems (1)
- Location theory (1)
- Locational Planning (1)
- Lokalkompakte Kerne (1)
- Low-discrepancy sequences (1)
- Lucena (1)
- MBS (1)
- MKS (1)
- ML-estimation (1)
- MLC (1)
- MLE (1)
- MOCO (1)
- Macaulay’s inverse system (1)
- Machine Scheduling (1)
- Magneto-Elastic Coupling (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetostriction (1)
- Marangoni-Effekt (1)
- Market Equilibrium (1)
- Markov Kette (1)
- Markov process (1)
- Markov switching (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Maschinelles Lernen (1)
- Massendichte (1)
- Math-Talent-School (1)
- Mathematical Epidemiology (1)
- Mathematical Finance (1)
- Mathematical Modeling (1)
- Mathematics (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Matroids (1)
- Max-Ordering (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- McKay conjecture (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Medical Physics (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionale Spline-Funktion (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrskalen (1)
- Methode der Fundamentallösungen (1)
- Microstructure (1)
- Mie representation (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mie-Darstellung (1)
- Mie-Representation (1)
- Mikroelektronik (1)
- Minimal spannender Baum (1)
- Minimum Cost Network Flow Problem (1)
- Minimum Principle (1)
- Minkowski space (1)
- Mixed Connectivity (1)
- Mixed integer programming (1)
- Mixed method (1)
- Model-Dynamics (1)
- Moduli Spaces (1)
- Molekulardynamik (1)
- Molodensky Problem (1)
- Molodensky problem (1)
- Moment sequence (1)
- Momentum and Mass Transfer (1)
- Monoid and group rings (1)
- Monotone dynamical systems (1)
- Monte Carlo (1)
- Monte Carlo method (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Mosaike (1)
- Motion Capturing (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multi-Variant Model (1)
- Multi-dimensional systems (1)
- Multicriteria Location (1)
- Multileaf Collimator (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiple Criteria (1)
- Multiple Objective Programs (1)
- Multiple criteria analysis (1)
- Multiple criteria optimization (1)
- Multiple objective combinatorial optimization (1)
- Multiple objective optimization (1)
- Multiplicative Schwarz Algorithm (1)
- Multiresolution analysis (1)
- Multiscale Methods (1)
- Multiscale model (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multiskalenapproximation (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multiresolution Analysis (1)
- Multivariate (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- NP (1)
- NP-completeness (1)
- Nash equilibria (1)
- Navier Stokes equation (1)
- Navier-Stokes (1)
- Network flows (1)
- Networks (1)
- Netzwerksynthese (1)
- Neumann Wavelets (1)
- Neumann wavelets (1)
- Neumann-Problem (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Newtonsches Potenzial (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtlineare/große Verformungen (1)
- Nichtlineares Galerkinverfahren (1)
- Nichtparametrische Regression (1)
- Nichtpositive Krümmung (1)
- Niederschlag (1)
- Nilpotent elements (1)
- No-Arbitrage (1)
- Non-commutative Computer Algebra (1)
- Non-convex body (1)
- Non-linear wavelet thresholding (1)
- Nonlinear Galerkin Method (1)
- Nonlinear Optimization (1)
- Nonlinear dynamics (1)
- Nonlinear regression (1)
- Nonlinear time series analysis (1)
- Nonlinear/large deformations (1)
- Nonparametric AR-ARCH (1)
- Nonparametric regression (1)
- Nonparametric time series (1)
- Nonsmooth contact dynamics (1)
- Nonstationary processes (1)
- Nulldimensionale Schemata (1)
- Numerical Analysis (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerics (1)
- Numerische Mathematik / Algorithmus (1)
- Oberflächenmaße (1)
- Oberflächenspannung (1)
- On-line algorithm (1)
- One-dimensional systems (1)
- Online Algorithms (1)
- Optimal Prior Distribution (1)
- Optimal control (1)
- Optimal portfolios (1)
- Optimal semiconductor design (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Order (1)
- Orthonormalbasis (1)
- Ovoid (1)
- PDE-Constrained Optimization, Robust Design, Multi-Objective Optimization (1)
- Palm distribution (1)
- Panel clustering (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Pareto Optimality (1)
- Pareto Points (1)
- Pareto optimality (1)
- Parkette (1)
- Partikel Methoden (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathwise Optimality (1)
- Pedestrian Flow (1)
- Perceptron (1)
- Periodic Homogenization (1)
- Perona-Malik filter (1)
- Pfadintegral (1)
- Planares Polynom (1)
- Poisson autoregression (1)
- Poisson noise (1)
- Poisson regression (1)
- PolyBoRi (1)
- Polyhedron (1)
- Polynomapproximation (1)
- Polynomial Eigenfunctions (1)
- Pontrjagin (1)
- Population Balance Equation (1)
- Poroelastizität (1)
- Porous flow (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Potential transform (1)
- Preimage of an ideal under a morphism of algebras (1)
- Probust optimization (1)
- Project prioritization (1)
- Project selection (1)
- Projektarbeit (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Projektunterricht (1)
- Prox-Regularisierung (1)
- Pseudopolynomial-Time Algorithm (1)
- Punktmengen (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- Quadratischer Raum (1)
- Quantile autoregression (1)
- Quantization (1)
- Quasi-Variational Inequalities (1)
- Quasi-identities (1)
- RCGAN (1)
- RCWGAN (1)
- RKHS (1)
- Radial Basis Functions (1)
- Radiation Therapy (1)
- Radiative Heat Transfer (1)
- Radiative heat transfer (1)
- Radiotherapy (1)
- Rainflow (1)
- Random Errors (1)
- Random body (1)
- Random differential equations (1)
- Random number generation (1)
- Randwertproblem (1)
- Rank test (1)
- Rarefied Gas Flows (1)
- Rarefied Gas Dynamics (1)
- Rarefied Polyatomic Gases (1)
- Rarefied gas (1)
- Raspberry Pi (1)
- Rate-independency (1)
- Ray-Knight Theorem (1)
- Rayleigh Number (1)
- Reaction-diffusion equations (1)
- Rectifiability (1)
- Recurrent Networks (1)
- Recurrent neural networks (1)
- Reflection (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regularisierung / Stoppkriterium (1)
- Regularization / Stop criterion (1)
- Regularization Wavelets (1)
- Regularization methods (1)
- Rehabilitation clinics (1)
- Relaxation (1)
- Reliability (1)
- Representation (1)
- Resolvent Estimate (1)
- Resonant tunneling diode (1)
- Restricted Regions (1)
- Restricted Shortest Path (1)
- Richtungsableitung (1)
- Riemann-Siegel formula (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Riemannsche Summen (1)
- Riesz Transform (1)
- Rigid Body Motion (1)
- Risikoanalyse (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Management (1)
- Risk Measures (1)
- Risk Sharing (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Räumliche Statistik (1)
- SARS-CoV-2 (1)
- SAW filters (1)
- SGG (1)
- SPn-approximation (1)
- SST (1)
- SWARM (1)
- Saddle Points (1)
- Sandwiching algorithm (1)
- Satellitendaten (1)
- Satellitengeodäsie (1)
- Satellitengradiogravimetrie (1)
- Satellitengradiometrie (1)
- Scalar type operator (1)
- Scalar-type operator (1)
- Scale function (1)
- Scattered-Data-Interpolation (1)
- Schaum (1)
- Schaumzerfall (1)
- Schiefe Ableitung (1)
- Schnelle Fourier-Transformation (1)
- Schnitt <Mathematik> (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösung (1)
- Second Order Conditions (1)
- Seismic Modeling (1)
- Seismische Tomographie (1)
- Seismische Welle (1)
- Semantik (1)
- Semi-Markov-Kette (1)
- Semi-infinite optimization (1)
- Semigroups (1)
- Sensitivitäten (1)
- Sequential test (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Shallow Water Equations (1)
- Shannon capacity (1)
- Shannon optimal priors (1)
- Shannon-Capacity (1)
- Shape optimization (1)
- Shapley value (1)
- Shapleywert (1)
- Shearlets (1)
- Sheaves (1)
- Shock Wave Problem (1)
- Shortest path problem (1)
- Signalanalyse (1)
- Similarity measures (1)
- Singular <Programm> (1)
- Singularität (1)
- Singularitätentheorie (1)
- Skalierungsfunktion (1)
- Slender body theory (1)
- Slender-Body Approximations (1)
- Smoothed Particle Hydrodynamics (1)
- Sobolevräume (1)
- Solvency II (1)
- Solvency-II-Richtlinie (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spectral Analysis (1)
- Spectral Method (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spherical (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Harmonics (1)
- Spherical Location Problem (1)
- Spherical Multiresolution Analysis (1)
- Sphärische Approximation (1)
- Sphärische Wavelets (1)
- Spieltheorie (1)
- Spline-Interpolation (1)
- Spline-Wavelets (1)
- Splines (1)
- Split-Operator (1)
- Splitoperator (1)
- Sprung-Diffusions-Prozesse (1)
- Square-mean Convergence (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Standorttheorie (1)
- Statistical Experiments (1)
- Statistics (1)
- Steuer (1)
- Stieltjes transform (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastic Volatility (1)
- Stochastic optimization (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Processe (1)
- Stochastische Volatilität (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stochastisches Feld (1)
- Stochastisches Modell (1)
- Stokes Flow (1)
- Stokes Wavelets (1)
- Stokes wavelets (1)
- Stokes-Gleichung (1)
- Stop- and Play-Operators (1)
- Stop- und Play-Operator (1)
- Stop- und Spieloperator (1)
- Stop-und Play-Operator (1)
- Stornierung (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Strain-Life Approach (1)
- Stratifaltigkeiten (1)
- Structural Reliability (1)
- Structure Theory (1)
- Strukturiertes Finanzprodukt (1)
- Strukturoptimierung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Subset Simulationen (1)
- Success Run (1)
- Survival Analysis (1)
- Synthesizer (1)
- Synthetic data generation (1)
- Systemidentifikation (1)
- Sägezahneffekt (1)
- TDTSP (1)
- TSP (1)
- Tabellenkalkulation (1)
- Tableau-Kalkül (1)
- Tail Dependence Koeffizient (1)
- Temporal Variational Autoencoders (1)
- Tensor Spherical Harmonics (1)
- Tensorfeld (1)
- Test for Changepoint (1)
- Theorem of Plemelj-Privalov (1)
- Thermophoresis (1)
- Thin film approximation (1)
- Tichonov-Regularisierung (1)
- Tiefengeothermie (1)
- Time-Series (1)
- Time-Space Multiresolution Analysis (1)
- Time-delay-Netz (1)
- TimeGAN (1)
- Titration (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Traffic flow (1)
- Train Rearrangement (1)
- Transaction costs (1)
- Translation planes (1)
- Transportation Problem (1)
- Tree (1)
- Trennschärfe <Statistik> (1)
- Trennverfahren (1)
- Treppenfunktionen (1)
- Triangular fuzzy number (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Two-Scale Convergence (1)
- Two-phase flow (1)
- Uniform matroids (1)
- Universal objective function (1)
- Unreinheitsfunktion (1)
- Unschärferelation (1)
- Unsupervised learning (1)
- Untermannigfaltigkeit (1)
- Upwind-Verfahren (1)
- Usage modeling (1)
- Utility (1)
- Value at Risk (1)
- Value at risk (1)
- Van Neumann-Kakutani transformation (1)
- Variational autoencoders (1)
- Variational inequalities (1)
- Variationsrechnung (1)
- Variationsungleichugen (1)
- Vector Spherical Harmonics (1)
- Vector-valued holomorphic function (1)
- Vectorfield approximation (1)
- Vectorial Wavelets (1)
- Vektor-Wavelets (1)
- Vektorfeld (1)
- Vektorfeldapproximation (1)
- Verkehrsplanung (1)
- Verschlüsselung (1)
- Verschwindungsatz (1)
- Versicherung (1)
- Vector optimization (1)
- Vigenere (1)
- Virus Variants (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Volatilitätsarbitrage (1)
- Vollständigkeit (1)
- Vorkonditionierer (1)
- Vorlesungsskript (1)
- Voronoi diagram (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Water reservoir management (1)
- Wave Based Method (1)
- Wavelet Analysis auf regulären Flächen (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Wavelet-Transformation (1)
- Wavelets (1)
- Wavelets auf der Kugel und der Sphäre (1)
- Weak Solution Theory (1)
- Wellengeschwindigkeit (1)
- White Noise (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wirkungsnetz (1)
- Wissenschaftliches Rechnen (1)
- Word problem (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Yaglom limits (1)
- Zeitabhängigkeit (1)
- Zeitintegrale Modelle (1)
- Zeitliche Veränderungen (1)
- Zeitreihe (1)
- Zeitreihen (1)
- Zellulärer Automat (1)
- Zentrenprobleme (1)
- Zerlegungen (1)
- Zero-dimensional schemes (1)
- Zonal Kernel Functions (1)
- Zopfgruppe (1)
- Zufälliges Feld (1)
- Zweiphasenströmung (1)
- Zyklische Homologie (1)
- abgeleitete Kategorie (1)
- acid-mediated tumor invasion (1)
- activity-based model (1)
- adaptive algorithm (1)
- adaptive estimation (1)
- adaptive grid generation (1)
- additive Gaussian noise (1)
- adjacency (1)
- adjacency relations (1)
- adjoint approach (1)
- adjoints (1)
- aggressive space mapping (1)
- aleph (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- algorithm (1)
- alternating minimization (1)
- alternating optimization (1)
- analoge Mikroelektronik (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anisotropen Viskositätsmodell (1)
- anisotropic diffusion (1)
- anisotropic viscosity (1)
- anisotropy (1)
- applied mathematics (1)
- approximation methods (1)
- approximative Identität (1)
- arbitrary Lagrangian-Eulerian methods (ALE) (1)
- arbitrary function (1)
- archimedean copula (1)
- area loss (1)
- asian option (1)
- associated Legendre functions (1)
- asymptotic expansions (1)
- asymptotic preserving numerical scheme (1)
- asymptotic-preserving (1)
- auto-pruning (1)
- automatic differentiation (1)
- ball (1)
- basic systems theoretic properties (1)
- basket option (1)
- benders decomposition (1)
- bending strip method (1)
- best basis (1)
- bicriteria shortest path problem (1)
- bicriterion path problems (1)
- bills of material (1)
- bills of materials (1)
- bin coloring (1)
- binomial tree (1)
- biorthogonal bases of L^2 (1)
- bipolar quantum drift diffusion model (1)
- blackout period (1)
- bocses (1)
- body wave velocity (1)
- bootstrap (1)
- bottleneck (1)
- boundary conditions (1)
- boundary value problem (1)
- boundary-value problems of potent (1)
- branch and cut (1)
- branching process (1)
- bus bunching (1)
- cactus graph (1)
- cancer radiation therapy (1)
- canonical ideal (1)
- canonical module (1)
- cardinality constraint combinatorial optimization (1)
- cash management (1)
- center hyperplane (1)
- centrally symmetric polytope (1)
- change analysis (1)
- change point (1)
- changing market coefficients (1)
- characteristic polynomial (1)
- charged fluids (1)
- chemotaxis (1)
- chemotherapy (1)
- classical solutions (1)
- clo (1)
- closure approximation (1)
- clustering (1)
- clustering methods (1)
- combinatorics (1)
- common transversal (1)
- compact operator equation (1)
- complete presentations (1)
- complexity (1)
- composites (1)
- computational complexity (1)
- computational finance (1)
- computer algebra (1)
- computeralgebra (1)
- conditional quantile (1)
- conditional quantiles (1)
- confluence (1)
- consecutive ones matrix (1)
- consecutive ones polytopes (1)
- constructive approximation (1)
- control theory (1)
- convergence behaviour (1)
- convex constraints (1)
- convex distance funtion (1)
- convex operator (1)
- cooling processes (1)
- cooperative game (1)
- core (1)
- correlated errors (1)
- count data (1)
- coupling methods (1)
- coverage error (1)
- crash (1)
- crash hedging (1)
- crash modelling (1)
- credit risk (1)
- curvature (1)
- cusp forms (1)
- cut (1)
- cut basis problem (1)
- cuts (1)
- cyclic homology (1)
- da (1)
- data structure (1)
- data-adaptive bandwidth choice (1)
- decision support (1)
- decision support systems (1)
- decisions (1)
- decoding (1)
- decrease direction (1)
- default time (1)
- deficiency (1)
- deflections of the vertical (1)
- degenerations of an elliptic curve (1)
- delay management (1)
- delay management problem (1)
- denoising (1)
- dense univariate rational interpolation (1)
- density gradient equation (1)
- derivative-free iterative method (1)
- derived category (1)
- descent algorithm (1)
- determinant (1)
- differential inclusions (1)
- diffusion models (1)
- diffusive scaling (1)
- direct product (1)
- directional derivative (1)
- discrepancy (1)
- discrete element method (1)
- discrete measure (1)
- discrete time setting (1)
- discrete velocity models (1)
- discretization (1)
- diskrete Systeme (1)
- displacement problem (1)
- distribution (1)
- diversification (1)
- domain decomposition methods (1)
- domain parametrization (1)
- double exponential distribution (1)
- downward continuation (1)
- drift diffusion (1)
- drift-diffusion limit (1)
- durability (1)
- dynamic holding (1)
- dynamic network flows (1)
- dynamical topography (1)
- earliest arrival flow (1)
- earliest arrival flows (1)
- efficiency loss (1)
- efficient solution (1)
- eigenvalue problems (1)
- elasticity problem (1)
- elastoplastic BVP (1)
- elliptical distribution (1)
- endomorphism ring (1)
- energy transport (1)
- entropy (1)
- enumerative geometry (1)
- epsilon-constraint method (1)
- equilibrium state (1)
- equisingular families (1)
- estimate (1)
- estimation (1)
- estimator (1)
- exact fully discrete vectorial wavelet transform (1)
- exact solution (1)
- exchange rate (1)
- explicit representation (1)
- explicit representations (1)
- explizite Darstellung (1)
- exponential rate (1)
- extreme equilibria (1)
- f-dissimilarity (1)
- face value (1)
- facility location (1)
- fast approximation (1)
- fatigue (1)
- fiber reinforced silicon carbide (1)
- fibre lay-down dynamics (1)
- film casting (1)
- filtration (1)
- final prediction error (1)
- financial mathematics (1)
- finite biodiversity (1)
- finite difference method (1)
- finite difference schemes (1)
- finite element method (1)
- finite groups of Lie type (1)
- finite pointset method (1)
- finite spin group (1)
- finite volume methods (1)
- finite-difference methods (1)
- first hitting time (1)
- fixpoint theorem (1)
- float glass (1)
- flood risk (1)
- fluid dynamic equations (1)
- fluid structure (1)
- fluid structure interaction (1)
- fluid-structure interaction (FSI) (1)
- formale Logik (1)
- formants (1)
- formulation as integral equation (1)
- forward-shooting grid (1)
- fptas (1)
- fractals (1)
- free boundary (1)
- free surface (1)
- freie Oberfläche (1)
- frequency bands (1)
- frequency bands (1)
- function of bounded variation (1)
- functional data (1)
- functional time series (1)
- fundamental cut (1)
- fundamental systems (1)
- gas dynamics (1)
- gauge (1)
- gebietszerlegung (1)
- general multidimensional moment problem (1)
- generalized Gummel itera (1)
- generalized inverse Gaussian diffusion (1)
- generic character table (1)
- geodetic (1)
- geomagnetic field modelling from MAGSAT data (1)
- geomathematics (1)
- geometric measure theory (1)
- geometrical algorithms (1)
- geometry of measures (1)
- geopotential determination (1)
- gitter (1)
- glioblastoma (1)
- global optimization (1)
- go-or-grow (1)
- go-or-grow dichotomy (1)
- good semigroup (1)
- gradient descent reprojection (1)
- granular flow (1)
- graph and network algorithm (1)
- graph p-Laplacian (1)
- gravimetry (1)
- gravitation (1)
- gravitational field recovery (1)
- grenzwert (1)
- group action (1)
- groups of Lie type (1)
- growing sub-quadratically (1)
- growth optimal portfolios (1)
- großer Investor (1)
- harmonic WFT (1)
- harmonic balance (1)
- harmonic scaling functions and wavelets (1)
- harmonic wavelets (1)
- headway prediction (1)
- heat radiation (1)
- hedging (1)
- hidden Markov (1)
- hierarchical matrix (1)
- higher order (1)
- higher-order moments (1)
- homological algebra (1)
- hybrid method (1)
- hyper-quasi-identities (1)
- hyperbolic conservation laws (1)
- hyperbolic systems (1)
- hyperbolic systems of conservation laws (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hypergeometric functions (1)
- hyperplane transversal (1)
- hyperquasivarieties (1)
- hyperspectral unmixing (1)
- hypocoercivity (1)
- idealclass group (1)
- image analysis (1)
- image enhancement (1)
- image processing (1)
- image restoration (1)
- impulse control (1)
- impurity functions (1)
- incident wave (1)
- incompressible Euler equation (1)
- incompressible elasticity (1)
- incompressible limit (1)
- infinite-dimensional analysis (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- information (1)
- inhibitory synaptic transmission (1)
- initial temperature (1)
- initial temperature reconstruction (1)
- instantaneous phase (1)
- integer GARCH (1)
- integer-valued time series (1)
- integral constitutive equations (1)
- intensity (1)
- intensity map segmentation (1)
- interest oriented portfolios (1)
- internal approximation (1)
- intersection local time (1)
- interval graphs (1)
- intra- and extracellular proton dynamics (1)
- invariant theory (1)
- inverse Fourier transform (1)
- inversion method (1)
- isogeometric analysis (IGA) (1)
- iterative bandwidth choice (1)
- jump diffusion (1)
- jump-diffusion process (1)
- junction (1)
- k-cardinality minimum cut (1)
- k-max (1)
- kardinalzahl (1)
- kernel (1)
- kernel estimate (1)
- kernel estimates (1)
- kinetic approach (1)
- kinetic models (1)
- kinetic semiconductor equations (1)
- kinetic theory (1)
- knapsack (1)
- kombinatorische Optimierung (1)
- konvexe Analysis (1)
- kooperative Spieltheorie (1)
- label setting algorithm (1)
- large deviations (1)
- large investor (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- level K-algebras (1)
- life insurance (1)
- limes (1)
- limit models (1)
- limit theorems (1)
- linear code (1)
- linear programming (1)
- linear systems (1)
- linear transport equation (1)
- local approximation of sea surface topography (1)
- local bandwidths (1)
- local multiscale (1)
- local orientation (1)
- local search algorithm (1)
- local stationarity (1)
- local support (1)
- local trigonometric packets (1)
- local-global conjectures (1)
- localization (1)
- locally compact (1)
- locally compact kernels (1)
- locally maximal clone (1)
- locally stationary process (1)
- locally supported (Green's) vector wavelets (1)
- locally supported (Green’s) vector wavelets (1)
- locally supported wavelets (1)
- location (1)
- location problem (1)
- locational planning (1)
- log averaging methods (1)
- log-utility (1)
- logarithmic average (1)
- logarithmic averages (1)
- logarithmic utility (1)
- logical analysis (1)
- logische Analyse (1)
- lokal kompakt (1)
- lokaler Träger (1)
- lokalisierende Basis (1)
- lokalisierende Kerne (1)
- longevity bonds (1)
- loss analysis (1)
- low discrepancy (1)
- low-rank approximation (1)
- machine learning (1)
- macro derivative (1)
- magnetic field (1)
- market crash (1)
- market manipulation (1)
- markov model (1)
- martingale measu (1)
- martingale optimality principle (1)
- mathematica education (1)
- mathematical modelling (1)
- mathematical morphology (1)
- mathematische Modellierung (1)
- matrix decomposition (1)
- matrix problems (1)
- matroid flows (1)
- maximal dynamic flow (1)
- maximum a posteriori estimation (1)
- maximum capacity path (1)
- maximum entropy (1)
- maximum entropy moment (1)
- maximum flows (1)
- maximum likelihood estimation (1)
- maximum-entropy (1)
- mean-variance approach (1)
- mechanism design (1)
- mehrwertig (1)
- mesh deformation (1)
- mesh-free method (1)
- method of fundamental solutions (1)
- micromechanics (1)
- minimal paths (1)
- minimal polynomial (1)
- minimal spanning tree (1)
- minimaler Schnittbaum (1)
- minimax estimation (1)
- minimax rate (1)
- minimax risk (1)
- minimum cost flows (1)
- minimum cut (1)
- minimum cut tree (1)
- minimum fundamental cut basis (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- mixing (1)
- mixture models (1)
- mixture of quantum fluids and classical fluids (1)
- model order reduction (1)
- model reduction (1)
- moduli space (1)
- moduli spaces (1)
- moment methods (1)
- nonlinear vibration (1)
- monodromy (1)
- monogenic signals (1)
- monoid- and group-presentations (1)
- monotone Konvergenz (1)
- monotone consecutive arrangement (1)
- moving contact line (1)
- multi scale (1)
- multi-asset option (1)
- multi-class image segmentation (1)
- multi-level Monte Carlo (1)
- multi-phase flow (1)
- multi-scale model (1)
- multicategory (1)
- multicriteria minimal path problem (1)
- multidimensional Kohonen algorithm (1)
- multifilament superconductor (1)
- multigrid method (1)
- multigrid methods (1)
- multileaf collimator sequencing (1)
- multileaf collimator sequencing (1)
- multiobjective optimization (1)
- multipatch (1)
- multiple collision frequencies (1)
- multiple objective (1)
- multiple objective optimization (1)
- multiresolution analysis (1)
- multiscale analysis (1)
- multiscale approximation (1)
- multiscale approximation on regular telluroidal surfaces (1)
- multiscale denoising (1)
- multiscale methods (1)
- multiscale modeling (1)
- multiscale models (1)
- multivariate chi-square-test (1)
- multiresolution (1)
- naive diversification (1)
- neighborhood search (1)
- network congestion game (1)
- network flow (1)
- network location (1)
- network synthesis (1)
- netzgenerierung (1)
- neural networks (1)
- never-meet property (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- nichtparametrisch (1)
- non square linear system solving (1)
- non-Gaussian non-i.i.d. errors (1)
- non-commutative geometry (1)
- non-convex body (1)
- non-convex optimization (1)
- non-desarguesian plane (1)
- non-linear wavelet thresholding (1)
- non-local filtering (1)
- non-newtonian flow (1)
- non-parametric regression (1)
- non-stationary time series (1)
- nonconvex optimization (1)
- noninformative prior (1)
- nonlinear circuits (1)
- nonlinear elasticity (1)
- nonlinear finite element method (1)
- nonlinear heat equation (1)
- nonlinear inverse problem (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear thresholding (1)
- nonlinear vibration analysis (1)
- nonlinear wavelet thresholding (1)
- nonlocal filtering (1)
- nonlocal sample dependence (1)
- nonnegative matrix factorization (1)
- nonparametric (1)
- nonparametric regression and (spectral) density estimation (1)
- nonwovens (1)
- norm (1)
- normal cone (1)
- normal mode (1)
- normality (1)
- normalization (1)
- normed residuum (1)
- number fields (1)
- number of objectives (1)
- numeraire portfolios (1)
- numerical integration (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerical methods for stiff equations (1)
- numerics for pdes (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- one-dimensional self-organization (1)
- operator splitting (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal portfolios (1)
- optimal rate of convergence (1)
- optimal stopping (1)
- option pricing (1)
- option valuation (1)
- order selection (1)
- order-three density (1)
- order-two density (1)
- orthogonal bandlimited and non-bandlimited wavelets (1)
- ovoids (1)
- parallel numerical algorithms (1)
- parameter choice (1)
- parameter identification (1)
- partial differential equation (1)
- partial differential equations (1)
- partial differential-algebraic equations (1)
- partial information (1)
- partition of unity (1)
- path-dependent options (1)
- pattern (1)
- penalization (1)
- penalty methods (1)
- penalty-free formulation (1)
- personnel scheduling (1)
- petroleum exploration (1)
- physicians (1)
- piezoelectric periodic surface acoustic wave filters (1)
- planar Brownian motion (1)
- planar polynomial (1)
- polycyclic group rings (1)
- polyhedral analysis (1)
- polyhedral norm (1)
- polynomial weight functions (1)
- porous media flow (1)
- portfolio (1)
- portfolio decision (1)
- portfolio optimisation (1)
- portfolio-optimization (1)
- poröse Medien (1)
- positivity preserving time integration (1)
- posterior collapse (1)
- potential (1)
- potential operators (1)
- preconditioners (1)
- predictive control (1)
- prefix reduction (1)
- prefix string rewriting (1)
- prefix-rewriting (1)
- preservation of relations (1)
- pressure correction (1)
- price of anarchy (1)
- price of stability (1)
- primal-dual algorithm (1)
- probability distribution (1)
- projected quasi-gradient method (1)
- projection method (1)
- projective surfaces (1)
- properly efficient solution (1)
- proximation (1)
- proxy modeling (1)
- pseudospectral methods (1)
- public transport (1)
- public transportation (1)
- pyramid schemes (1)
- pyramids (1)
- quadratic forms (1)
- quadrinomial tree (1)
- qualitative threshold model (1)
- quantile autoregression (1)
- quasi-Monte Carlo (1)
- quasi-P (1)
- quasi-SH (1)
- quasi-SV (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- quasivarieties (1)
- quickest path (1)
- radiation therapy (1)
- radiative heat transfer (1)
- rainflow (1)
- random noise (1)
- rare disasters (1)
- rarefied gas flows (1)
- rate-independency (1)
- ratio ergodic theorem (1)
- raum-zeitliche Analyse (1)
- reaction-diffusion-taxis equations (1)
- reaction-diffusion-transport equations (1)
- real quadratic number fields (1)
- reconstruction formula (1)
- reconstructions (1)
- redundant constraint (1)
- reference prior (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- refraction (1)
- regime-shift model (1)
- regularization by wavelets (1)
- regularization methods (1)
- reguläre Fläche (1)
- reinitialization (1)
- rela (1)
- representative systems (1)
- residual based error formula (1)
- resource constrained shortest path problem (1)
- rewriting (1)
- rheology (1)
- risk analysis (1)
- risk measures (1)
- risk reduction (1)
- robust network flows (1)
- robustness (1)
- rostering (1)
- s external gravitational field (1)
- sampling (1)
- satellite gradiometry (1)
- satellite-to-satellite tracking (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- scalar conservation laws (1)
- scalarization (1)
- scale discrete spherical vector wavelets (1)
- scaled boundary isogeometric analysis (1)
- scaled boundary parametrizations (1)
- scaled translates (1)
- scaling functions (1)
- scheduling (1)
- scheduling theory (1)
- schlecht gestellt (1)
- schnelle Approximation (1)
- second class group (1)
- second order upwind discretization (1)
- seismic tomography (1)
- seismic wave (1)
- selfish routing (1)
- semi-classical limits (1)
- semigroup of values (1)
- semisprays (1)
- sensitivities (1)
- separation problem (1)
- sequential test (1)
- set covering (1)
- severely ill-posed inverse problems (1)
- shape optimization (1)
- sheaf theory (1)
- shear flow (1)
- shock wave (1)
- short-time periodogram (1)
- shortest path problem (1)
- sieve estimate (1)
- similarity measures (1)
- simulierte Finanzzeitreihe (1)
- single layer kernel (1)
- singular fluxes (1)
- singular optimal control (1)
- singular spaces (1)
- singuläre Räume (1)
- sink location (1)
- slope limiter (1)
- smoothing (1)
- solution formula (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- special entropies (1)
- spectral sequences (1)
- spectrogram (1)
- speech recognition (1)
- sphere (1)
- spherical decomposition (1)
- spherical splines (1)
- spline (1)
- spline and wavelet based determination of the geoid and the gravitational potential (1)
- spline-wavelets (1)
- splitting function (1)
- sputtering process (1)
- squares (1)
- stability (1)
- stability uniformly in the mean free path (1)
- star-shaped domain (1)
- stationary solutions (1)
- statistical experiment (1)
- steady Boltzmann equation (1)
- stimulus response data (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic differential equations (1)
- stochastic interest rate (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastic stability (1)
- stochastische Arbitrage (1)
- stop location (1)
- stop- and play-operator (1)
- stop- and play-operators (1)
- stratifolds (1)
- strictly quasi-convex functions (1)
- strong equilibria (1)
- strong theorems (1)
- strongly polynomial-time algorithm (1)
- structure tensor (1)
- subgradient (1)
- subgroup presentation problem (1)
- superposed fluids (1)
- superstep cycles (1)
- surface measures (1)
- surrender options (1)
- surrogate algorithm (1)
- systems (1)
- syzygies (1)
- tail dependence coefficient (1)
- tax (1)
- technische Analyse (1)
- tension problems (1)
- tensions (1)
- tensor product basis (1)
- test (1)
- texture (1)
- thermal equilibrium state (1)
- threshold choice (1)
- time delays (1)
- time-delayed carrying capacities (1)
- time-dependent shortest path problem (1)
- time-frequency plan (1)
- time-varying autoregression (1)
- time-varying covariance (1)
- topological asymptotic expansion (1)
- toric geometry (1)
- torische Geometrie (1)
- total latency (1)
- total variation (1)
- total variation spatial regularization (1)
- traffic planning (1)
- transit operations (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- trial systems (1)
- triclinic medium (1)
- tropical geometry (1)
- tumor acidity (1)
- tumor cell invasion (1)
- tumor cell migration (1)
- two-scale expansion (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- uncapacitated facility location (1)
- uncertainty principle (1)
- unendlich (1)
- uniform central limit theorem (1)
- uniform consistency (1)
- uniform ergodicity (1)
- unimodular certification (1)
- unimodularity (1)
- value preserving portfolios (1)
- value semigroup (1)
- value-at-risk (1)
- valuing contracts (1)
- variable selection (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector measure (1)
- vector wavelets (1)
- vectorial multiresolution analysis (1)
- vehicular traffic (1)
- verification theorem (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- viscoelastic fluids (1)
- viscosity solutions (1)
- volatility arbitrage (1)
- vortex separation (1)
- wave propagation (1)
- wavelet estimators (1)
- wavelet packets (1)
- wavelet thresholding (1)
- wavelet transform (1)
- weak dependence (1)
- weak solution theory (1)
- weak solutions (1)
- weakly/strictly Pareto optima (1)
- weight optimization (1)
- windowed Fourier transform (1)
- winner definition (1)
- worst-case (1)
- Äquisingularität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)
Mechanistic disease spread models for different vector-borne diseases have been studied since the 19th century, and the relevance of mathematical modeling and numerical simulation of disease spread continues to grow. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans; the Zika virus disease is an example of an arboviral disease in this category. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the expressions for the basic reproduction number obtained through two different methods is discussed, and the analytical results proven for this model are verified numerically. A second SIRUV model is then presented with a different formulation of the model parameters; this model is shown to incorporate explicitly the dependence of the disease spread on the ratio of mosquito population size to human population size. To capture the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model is developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities such as administrative areas, districts, provinces, cities, states, or even countries. The research considers only the short-term movements, i.e. the commuting behavior, of humans across the patches. This is incorporated into the multi-patch meta-population model through a matrix of residence-time fractions of humans in each patch. Simplified analytical results are deduced, showing that, for an exemplary scenario studied numerically, the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of the commuting behavior of humans for the disease spread is demonstrated with numerical results from this model.
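To illustrate the kind of single-patch dynamics analyzed here, the following is a minimal sketch of a SIRUV-type model with both vector-borne and direct human-to-human transmission, integrated with an explicit Euler scheme. The compartment layout, parameter names, and numerical values are illustrative assumptions, not those of the thesis.

```python
import numpy as np

def siruv_step(state, params, dt):
    """One explicit-Euler step of a SIRUV-type model: S, I, R humans and
    U (susceptible), V (infected) mosquitoes, with vector-borne (beta) and
    direct human-to-human (kappa) transmission. Parameters are illustrative."""
    S, I, R, U, V = state
    N = S + I + R                      # human population size
    M = U + V                          # mosquito population size
    beta, kappa, gamma, theta, nu, mu = params
    new_inf_h = beta * S * V / N + kappa * S * I / N   # new human infections
    new_inf_v = theta * U * I / N                      # new mosquito infections
    dS = mu * (N - S) - new_inf_h
    dI = new_inf_h - (gamma + mu) * I
    dR = gamma * I - mu * R
    dU = nu * M - new_inf_v - nu * U   # mosquito birth/death balance
    dV = new_inf_v - nu * V
    return (S + dt * dS, I + dt * dI, R + dt * dR,
            U + dt * dU, V + dt * dV)

def simulate(days=200, dt=0.1):
    """Integrate the model and return the trajectory as an array."""
    state = (9990.0, 10.0, 0.0, 49900.0, 100.0)        # N = 10^4, M = 5 * 10^4
    params = (0.3, 0.05, 0.1, 0.2, 1 / 14, 1 / (70 * 365))
    traj = [state]
    for _ in range(int(days / dt)):
        state = siruv_step(state, params, dt)
        traj.append(state)
    return np.array(traj)
```

Note that both population totals are conserved by construction (the human and mosquito derivatives each sum to zero), which is a useful sanity check on any such implementation.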
Local and non-local commuting are incorporated into the meta-population model in a numerical example. Finally, a PDE model is derived from the multi-patch model.
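The residence-time coupling of the patches can be made concrete in a few lines. Below, `P[i, j]` is the fraction of time residents of patch `i` spend in patch `j` (rows sum to one), and `beta` holds per-patch transmission rates; this is a generic residence-time formulation, sketched under the assumption that it matches the construction described above rather than being the thesis' exact model.

```python
import numpy as np

def force_of_infection(P, beta, I, N):
    """Per-resident infection rate for each patch under residence-time mixing.
    P: residence-time fractions (rows sum to 1); beta: per-patch rates;
    I, N: resident infecteds and resident population per patch. Illustrative."""
    N_eff = P.T @ N                 # people effectively present in each patch
    I_eff = P.T @ I                 # infecteds effectively present in each patch
    return P @ (beta * I_eff / N_eff)   # rate felt by residents of each patch
```

With `P` equal to the identity the coupling vanishes and each patch sees only its local force of infection `beta[i] * I[i] / N[i]`; with uniform mixing all residents experience the same averaged rate.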
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics, describing numerous physical processes. In numerical computations for PDE problems, the transition from classical to weak solutions is often essential. The latter may not satisfy the original PDE precisely, but they fulfill a weak variational formulation, which in turn is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the
well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific discretization paradigm, which combines geometric representations via Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations.
Secondly, we primarily concentrate on mixed formulations exhibiting a saddle-point structure and generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham
complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused.
A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, for which we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
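For orientation, the abstract Hodge–Laplace problem mentioned above admits, in the FEEC setting, a standard mixed weak formulation with saddle-point structure. The version below is the textbook form for a Hilbert complex \((V^\bullet, \mathrm{d})\) with trivial harmonic forms, stated as background rather than as the exact formulation used in the thesis:

```latex
\text{Find } (\sigma, u) \in V^{k-1} \times V^{k} \text{ such that}
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle &= 0
  && \forall\, \tau \in V^{k-1},\\
\langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle
  &= \langle f, v \rangle && \forall\, v \in V^{k}.
\end{aligned}
```

For the de Rham complex this recovers the familiar mixed Poisson problem; the thesis extends such formulations to second-order complexes like the Hessian and elasticity complexes obtained via the BGG construction.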
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter, we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study about the use of neural networks in the regression of equilibrium insurance premiums it is shown that this regression is quite robust and the risk of overfitting can almost be excluded -- as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
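The microscopic model described above couples a relaxation toward a desired velocity with pairwise interaction (social) forces. As a minimal sketch of one time step of such a system (with illustrative parameter values, and without the Eikonal-based optimal path computation used in the thesis), one could write:

```python
import math

def social_force_step(pos, vel, targets, dt=0.05, tau=0.5, v0=1.34, A=2.0, B=0.3):
    """One explicit Euler step of a minimal social force model.

    Each pedestrian relaxes toward a desired velocity of magnitude v0
    pointing at its target and is repelled from the others by an
    exponentially decaying pairwise force.  All parameter values are
    illustrative only.
    """
    new_pos, new_vel = [], []
    for i, ((x, y), (vx, vy), (tx, ty)) in enumerate(zip(pos, vel, targets)):
        # driving force: relaxation toward the desired velocity
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy) or 1.0
        fx = (v0 * dx / dist - vx) / tau
        fy = (v0 * dy / dist - vy) / tau
        # pairwise repulsion (social force) from all other pedestrians
        for j, (ox, oy) in enumerate(pos):
            if j == i:
                continue
            rx, ry = x - ox, y - oy
            r = math.hypot(rx, ry) or 1e-9
            f = A * math.exp(-r / B)
            fx += f * rx / r
            fy += f * ry / r
        new_pos.append((x + dt * vx, y + dt * vy))
        new_vel.append((vx + dt * fx, vy + dt * fy))
    return new_pos, new_vel
```

In the full model, the target direction would be replaced by the gradient of the Eikonal solution, making the driving force non-local.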
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered - "passive", which move in their predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic", which have a feedback interaction with pedestrians and have their trajectories changing dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We appropriately model the interactions of vehicles, following lane traffic, based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable-speed-based rational behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios the collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a meshfree particle method based on least squares approximations. The meshfree numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day-ahead market allows smaller companies without their own trading department to participate, since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day-ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. Here we present existing algorithms for scenario tree generation as well as our own extensions and adaptations. These are based on the nested distance, which measures the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context each method is appropriate.
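A scenario tree discretizes a stochastic price process in time and space. As a toy illustration of the data structure only (not of the nested-distance-based generation algorithms discussed above), the following builds a small non-recombining binary tree and evaluates an expectation by backward recursion; all factors and probabilities are made up:

```python
def build_scenario_tree(s0, up, down, p_up, depth):
    """Build a non-recombining binary scenario tree as nested dicts.

    Each node stores a price and its weighted successors; `up`/`down`
    are multiplicative price factors and `p_up` the branching
    probability.  Purely illustrative stand-in for the trees in the text.
    """
    if depth == 0:
        return {"price": s0, "children": []}
    return {
        "price": s0,
        "children": [
            (p_up, build_scenario_tree(s0 * up, up, down, p_up, depth - 1)),
            (1.0 - p_up, build_scenario_tree(s0 * down, up, down, p_up, depth - 1)),
        ],
    }

def expected_terminal_price(node):
    """Backward recursion: probability-weighted terminal price."""
    if not node["children"]:
        return node["price"]
    return sum(p * expected_terminal_price(child) for p, child in node["children"])
```

Stochastic programming then replaces the plain expectation by an optimization over decisions attached to each node.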
Gliomas are one of the most common types of primary brain tumors. Among those, high grade astrocytomas - so-called glioblastoma multiforme - are the most aggressive type of cancer originating in the brain, leaving patients a median survival time of 15 to 20 months after diagnosis. The invasive behavior of the tumor leads to considerable difficulties regarding the localization of all tumor cells, and thus impedes successful therapy. Here, mathematical models can help to enhance the assessment of the tumor's extent.
In this thesis, we set up a multiscale model for the evolution of a glioblastoma. Starting on the microscopic level, we model subcellular binding processes and velocity dynamics of single cancer cells. From the resulting mesoscopic equation, we derive a macroscopic equation via scaling methods. Combining this equation with macroscopic descriptions of the tumor environment, a nonlinear PDE-ODE system is obtained. We consider several variations of the derived model, among others introducing a new model for therapy by Gliadel wafers, a treatment approach indicated, inter alia, for recurrent glioblastoma.
We prove global existence of a weak solution to a version of the developed PDE-ODE system, containing degenerate diffusion and flux limitation in the taxis terms of the tumor equation. The nonnegativity of all components of the solution and their boundedness by their biological carrying capacities are shown.
Finally, 2D simulations are performed, illustrating the influence of different parts of the model on tumor evolution. The effects of treatment by Gliadel wafers are compared to the therapy outcomes of classical chemotherapy in different settings.
Emission trading systems (ETS) represent a widely used instrument to control greenhouse gas emissions while minimizing reduction costs. In an ETS, the desired amount of emissions in a predefined time period is fixed in advance; corresponding to this amount, tradeable allowances are handed out or auctioned to the companies covered by the system. Emissions which are not covered by an allowance are subject to a penalty at the end of the time period.
Emissions depend on non-deterministic parameters such as weather and the state of the economy. Therefore, it is natural to view emissions as a stochastic quantity. This introduces a challenge for the companies involved: in planning their abatement actions, they need to avoid penalty payments without knowing their total amount of emissions. We consider a stochastic control approach to address this problem: in a continuous-time model, we use the rate of emission abatement as a control in minimizing the costs that arise from penalty payments and abatement costs. In a simplified variant of this model, the resulting Hamilton-Jacobi-Bellman (HJB) equation can be solved analytically.
Taking the viewpoint of a regulator of an ETS, our main interest is to determine the resulting emissions and to evaluate their compliance with the given emission target. Additionally, as an incentive for investments in low-emission technologies, a high allowance price with low variability is desirable. Both the resulting emissions and the allowance price are not directly given by the solution to the stochastic control problem. Instead, we need to solve a stochastic differential equation (SDE), where the abatement rate enters as the drift term. Due to the nature of the penalty function, the abatement rate is not continuous. This means that classical results on existence and uniqueness of a solution as well as convergence of numerical methods, such as the Euler-Maruyama scheme, do not apply. Therefore, we prove similar results under assumptions suitable for our case. By applying a standard verification theorem, we show that the stochastic control approach delivers an optimal abatement rate.
We extend the model by considering several consecutive time periods. This enables us to model the transfer of unused allowances to the subsequent time period. In formulating the multi-period model, we pursue two different approaches: in the first, we assume the value that the company anticipates for an unused allowance to be constant throughout one time period. We proceed similarly to the one-period model and again obtain an analytical solution. In the second approach, we introduce an additional stochastic process to simulate the evolution of the anticipated price for an unused allowance.
The model so far assumes that allowances are allocated for free. Therefore, we construct another model extension to incorporate the auctioning of allowances. Then, additionally, the problem of choosing the optimal demand at the auction needs to be solved. We find that the auction price equals the allowance price at the beginning of the respective time period. Furthermore, we show that the resulting emissions as well as the allowance price are unaffected by the introduction of auctioning in the setting of our model.
To perform numerical simulations, we first solve the characteristic partial differential equation derived from the HJB equation by applying the method of lines. Then we apply the Euler-Maruyama scheme to solve the SDE, delivering realizations of the resulting emissions and the allowance price paths.
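The Euler-Maruyama scheme mentioned above can be sketched generically as follows; the drift and diffusion functions here are placeholders, not the discontinuous abatement-rate drift treated in the thesis:

```python
import math
import random

def euler_maruyama(drift, sigma, x0, T, n, seed=0):
    """Euler-Maruyama path for dX_t = drift(t, X_t) dt + sigma(t, X_t) dW_t.

    Generic sketch on a uniform grid with n steps.  The thesis applies
    the scheme to an SDE whose drift is discontinuous, which is why
    standard convergence results have to be re-proved there.
    """
    rng = random.Random(seed)
    dt = T / n
    t, x = 0.0, x0
    path = [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x = x + drift(t, x) * dt + sigma(t, x) * dw
        t += dt
        path.append(x)
    return path
```

With the diffusion coefficient set to zero, the scheme reduces to explicit Euler for the deterministic ODE, which gives a quick sanity check.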
Simulation results indicate that, under realistic settings, the probability of non-compliance with the emission target is quite large. It can be reduced, for instance, by an increase of the penalty. In the multi-period model, we observe that allowing the transfer of allowances to the subsequent time period decreases the probability of non-compliance considerably.
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in the development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, the development of materials that are favorable regarding, for example, weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in the automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly described. One method to do so is by so-called in situ tests, where the mechanical test is combined with an image acquisition technique such as Computed Tomography.
The resulting time series of volume images capture the delicate and individual nature of each material. The objective of this thesis is to present and develop methods to unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made great efforts to solve the problems we are facing. Digital Volume Correlation (DVC), on the one hand, has been developed by material scientists and has been applied in many different contexts in mechanical testing, but almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR), on the other hand, does produce voxel-precise estimates, but is limited to very smooth motion estimates.
The unification of both families, DVC and MIR, under one roof is therefore illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family alone is sufficient to deal with all of the problems that come with motion estimation in in situ tests. We then proceed by presenting a third community in motion estimation, namely optical flow, whose methods are normally only applied in two dimensions. Nevertheless, within this community algorithms have been developed that meet many of our requirements: strategies for large displacements exist, as do methods that resolve jumps, and moreover the displacement is always calculated on pixel level. This thesis therefore proceeds by extending some of the most successful methods to 3D.
To ensure the competitiveness of our approach, the last part of this thesis deals with a detailed evaluation of the proposed extensions. We focus on three types of materials - foam, fibre systems and concrete - and use simulated and real in situ tests to compare the optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields - a novelty in this area. We conclude this thesis with two specialized applications of our algorithm, which show how the voxel-precise displacement fields serve as useful information to engineers investigating their materials.
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
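The two displayed tent functions are easy to evaluate numerically. A small sketch, directly transcribing the formulas above:

```python
def chi1(x):
    """Archetype tent function chi^(1): the piecewise linear hat
    ((-x+1) ∧ (x+1)) ∨ 0, supported on [-1, 1]."""
    return max(min(-x + 1.0, x + 1.0), 0.0)

def chi_d(x):
    """Generalization chi^(d): positive part of the minimum over all
    values 1 + x_i - x_j, 1 + x_i and 1 - x_i, for x a list of d coordinates."""
    vals = [1.0 + xi - xj for xi in x for xj in x]
    vals += [1.0 + xi for xi in x] + [1.0 - xi for xi in x]
    return max(min(vals), 0.0)
```

For d = 1 the two definitions agree, since the pair terms 1 + x - x = 1 never attain the minimum on the support of the hat.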
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves as a preliminary step to restrict those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\frac{\,\cdot\,}{r}-\alpha)\) \(|\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence of \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\) can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{d x}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let $E$ be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space $E$ and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
This thesis deals with the simulation of large insurance portfolios. On the one hand, we need to model the contracts' development and the insured collective's structure and dynamics. On the other hand, an important task is the forward projection of the given balance sheet. Questions that are interesting in this context, such as the question of the default probability up to a certain time or the question of whether interest rate promises can be kept in the long term, cannot be answered analytically without strong simplifications. Reasons for this are high dependencies between the insurer's assets and liabilities, interactions between existing and new contracts due to claims on a collective reserve, potential policy features such as a guaranteed interest rate, and individual surrender options of the insured. As a consequence, we need numerical calculations, and especially the volatile financial markets require stochastic simulations. Despite the fact that advances in technology with increasing computing capacities allow for faster computations, a contract-specific simulation of all policies is often an impossible task. This is due to the size and heterogeneity of insurance portfolios, long time horizons, and the number of necessary Monte Carlo simulations. Instead, suitable approximation techniques are required.
In this thesis, we therefore develop compression methods, where the insured collective is grouped into cohorts based on selected contract-related criteria and then only an enormously reduced number of representative contracts needs to be simulated. We also show how to efficiently integrate new contracts into the existing insurance portfolio. Our grouping schemes are flexible, can be applied to any insurance portfolio, and maintain the existing structure of the insured collective. Furthermore, we investigate the efficiency of the compression methods and their quality in approximating the real life insurance portfolio.
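The grouping step behind such a compression can be sketched as follows; the contract attributes and the choice of cohort representative (attribute-wise averaging, weighted by cohort size) are purely illustrative, not the thesis's actual scheme:

```python
def compress_portfolio(contracts, keys):
    """Group contracts into cohorts by the given attributes and return
    one representative contract per cohort.

    Grouping attributes are kept as-is; all remaining numeric fields are
    averaged over the cohort, and the cohort size is stored as a weight.
    Field names here are hypothetical.
    """
    cohorts = {}
    for c in contracts:
        cohorts.setdefault(tuple(c[k] for k in keys), []).append(c)
    representatives = []
    for group in cohorts.values():
        rep = {k: group[0][k] for k in keys}
        # average the numeric, non-grouping fields over the cohort
        for k in (k for k in group[0] if k not in keys):
            rep[k] = sum(c[k] for c in group) / len(group)
        rep["weight"] = len(group)
        representatives.append(rep)
    return representatives
```

Only the representatives (with their weights) then need to be simulated, which is where the enormous reduction in computational effort comes from.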
For the simulation of the insurance business, we introduce a stochastic asset-liability management (ALM) model. Starting with an initial insurance portfolio, our aim is the forward projection of a given balance sheet structure. We investigate conditions for a long-term stability or stationarity corresponding to the idea of a solid and healthy insurance company. Furthermore, a main result is the proof that our model satisfies the fundamental balance sheet equation at the end of every period, which is in line with the principle of double-entry bookkeeping. We analyze several strategies for investing in the capital market and for financing the due obligations. Motivated by observed weaknesses, we develop new, more sophisticated strategies. In extensive simulation studies, we illustrate the short- and long-term behavior of our ALM model and show impacts of different business forms, the predicted new business, and possible capital market crashes on the profitability and stability of a life insurer.
This thesis concerns itself with the long-term behavior of generalized Langevin dynamics with multiplicative noise,
i.e. the solutions to a class of two-component stochastic differential equations in \( \mathbb{R}^{d_1}\times\mathbb{R}^{d_2} \)
subject to outer influence induced by potentials \( \Phi \) and \( \Psi \),
where the stochastic term is only present in the second component, on which it is dependent.
In particular, convergence to an equilibrium defined by an invariant initial distribution \( \mu \) is shown
for weak solutions to the generalized Langevin equation obtained via generalized Dirichlet forms,
and the convergence rate is estimated by applying hypocoercivity methods relying on weak or classical Poincaré inequalities.
As a prerequisite, the space of compactly supported smooth functions is proven to be a domain of essential m-dissipativity
for the associated Kolmogorov backward operator on \(L^2(\mu)\).
In the second part of the thesis, similar Langevin dynamics are considered, however defined on a product of infinite-dimensional separable Hilbert spaces.
The set of finitely based smooth bounded functions is shown to be a domain of essential m-dissipativity for the corresponding Kolmogorov operator \( L \) on \( L^2(\mu) \)
for a Gaussian measure \( \mu \), by applying the previous finite-dimensional result to appropriate restrictions of \( L \).
Under further bounding conditions on the diffusion coefficient relative to the covariance operators of \( \mu \),
hypocoercivity of the generated semigroup is proved, as well as the existence of an associated weakly continuous Markov process
which provides an analytically weak solution to the considered Langevin equation.
This thesis is primarily motivated by a project with Deutsche Bahn about offer preparation in rail freight transport. At its core, a customer should be offered three train paths to choose from in response to a freight train request. As part of this cooperation with DB Netz AG, we investigated how to compute these train paths efficiently. They should be all "good" but also "as different as possible". We solved this practical problem using combinatorial optimization techniques.
At the beginning of this thesis, we describe the practical aspects of our research collaboration. The more theoretical problems, which we consider afterwards, are divided into two parts.
In Part I, we deal with a dual pair of problems on directed graphs with two designated end-vertices. The Almost Disjoint Paths (ADP) problem asks for a maximum number of paths between the end-vertices any two of which have at most one arc in common. In comparison, for the Separating by Forbidden Pairs (SFP) problem we have to select as few arc pairs as possible such that every path between the end-vertices contains both arcs of a chosen pair. The main results of this more theoretical part are the classifications of ADP as an NP-complete and SFP as a Sigma-2-P-complete problem.
In Part II, we address a simplified version of the practical project: the Fastest Path with Time Profiles and Waiting (FPTPW) problem. In a directed acyclic graph with durations on the arcs and time windows at the vertices, we search for a fastest path from a source to a target vertex. We are only allowed to be at a vertex within its time windows, and we are only allowed to wait at specified vertices. After introducing departure-duration functions we develop solution algorithms based on these. We consider special cases that significantly reduce the complexity or are of practical relevance. Furthermore, we show that already this simplified problem is in general NP-hard and investigate the complexity status more closely.
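A simplified flavour of the FPTPW setting can be captured by a dynamic program over a topological order; note that this toy version minimises the arrival time rather than the total duration, handles waiting only in the crude way shown, and is not the departure-duration-function algorithm of the thesis:

```python
def earliest_arrival(dag, windows, can_wait, order, source, target, t0=0.0):
    """Earliest feasible arrival time in a DAG with vertex time windows.

    `dag[u]` lists (v, duration) arcs, `windows[v]` holds sorted disjoint
    (start, end) intervals, `can_wait[v]` says whether waiting at v is
    allowed, and `order` is a topological order of the vertices.
    """
    INF = float("inf")

    def feasible(v, t):
        # earliest time >= t at which we may be present at v
        for a, b in windows[v]:
            if t <= b:
                if t >= a:
                    return t
                return a if can_wait[v] else INF  # wait until window opens
        return INF

    arrival = {v: INF for v in order}
    arrival[source] = feasible(source, t0)
    for u in order:
        if arrival[u] == INF:
            continue
        for v, dur in dag.get(u, []):
            arrival[v] = min(arrival[v], feasible(v, arrival[u] + dur))
    return arrival[target]
```

Already in this toy version one sees the effect studied in the text: whether waiting is permitted at a vertex can decide which path is fastest, or whether a path is feasible at all.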
This survey provides the reader with an overview of numerous results on p-permutation modules and the closely related classes of endo-trivial, endo-permutation and endo-p-permutation modules. These classes of modules play an important role in the representation theory of finite groups. For example, they are important building blocks used to understand and parametrise several kinds of categorical equivalences between blocks of finite group algebras. For this reason, there has been, since the late 1990's, much interest in classifying such modules. The aim of this manuscript is to review classical results as well as all the major recent advances in the area. The first part of this survey serves as an introduction to the topic for non-experts in modular representation theory of finite groups, outlining proof ideas of the most important results at the foundations of the theory. Simultaneously, the connections between the aforementioned classes of modules are emphasised. In this respect, results which are dispersed in the literature are brought together, and emphasis is put on common properties and the role played by the p-permutation modules throughout the theory. Finally, in the last part of the manuscript, lifting results from positive characteristic to characteristic zero are collected and their proofs sketched.
This dissertation presents a generalization of the generalized grey Brownian motion with componentwise independence, called vector-valued generalized grey Brownian motion (vggBm), and builds a framework of mathematical analysis around this process with the aim of solving stochastic differential equations with respect to it. Similar to the one-dimensional case, the construction of vggBm starts with selecting the appropriate nuclear triple and constructing the corresponding probability measure on the co-nuclear space. Since independence of the components is essential in constructing vggBm, a natural way to achieve this is to use the nuclear triple of product spaces: \[ \mathcal{S}_d(\mathbb{R}) \subset L^2_d(\mathbb{R}) \subset \mathcal{S}_d'(\mathbb{R}), \]
where \( L^2_d(\mathbb{R}) \) is the real separable Hilbert space of \( \mathbb{R}^d \)-valued square integrable functions on \( \mathbb{R} \) with respect to the Lebesgue measure, \( \mathcal{S}_d(\mathbb{R}) \) is the external direct sum of \(d\) copies of the nuclear space \(\mathcal{S}(\mathbb{R})\) of Schwartz test functions, and \(\mathcal{S}_d'(\mathbb{R})\) is the dual space of \(\mathcal{S}_d(\mathbb{R})\).
The probability measure used is the \(d\)-fold product measure of the Mittag-Leffler measure, denoted by \(\mu_{\beta}^{\otimes d}\), whose characteristic function is given by \[ \int_{\mathcal{S}_d'(\mathbb{R})} e^{i\langle\omega,\varphi\rangle}\,\text{d}\mu_{\beta}^{\otimes d}(\omega) = \prod_{k=1}^{d}E_\beta\left(-\frac{1}{2}\langle\varphi_k,\varphi_k\rangle\right),\qquad \varphi\in \mathcal{S}_d(\mathbb{R}), \]
where \( \beta\in(0,1] \), and \( E_\beta \) is the Mittag-Leffler function. Vector-valued generalized grey Brownian motion, denoted by \( B^{\beta,\alpha}_{d}:=(B^{\beta,\alpha}_{d,t})_{t\geq 0}\), is then defined as a process taking values in \( L^2(\mu_{\beta}^{\otimes d};\mathbb{R}^d) \) given by
\[ B^{\beta,\alpha}_{d,t}(\omega) := (\langle\omega_1,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle,\dots,\langle\omega_d,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle),\quad \omega\in\mathcal{S}_d'(\mathbb{R}), \]
where \( M^{\alpha/2} \) is an appropriate fractional operator indexed by \( \alpha\in(0,2) \) and \( 1\!\!1_{[0,t)} \) is the indicator function of the interval \( [0,t) \). This process does not, in general, coincide with the \(d\)-dimensional analogue of ggBm for \(d\geq 2\), since componentwise independence of the latter process holds only in the Gaussian case.
The analysis around vggBm starts with establishing access to Appell systems, from which characterizations of, and tools for, the corresponding distribution spaces are obtained. Explicit examples of the use of these characterizations and tools are then given: the construction of Donsker's delta function, the existence of local times and self-intersection local times of vggBm, the existence of the derivative of vggBm in the sense of distributions, and the existence of solutions to linear stochastic differential equations driven by vggBm.
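The Mittag-Leffler characteristic functional above lends itself to a quick numerical sanity check. The sketch below is our own illustration, not code from the thesis: the truncation length of the power series for \(E_\beta\) and the plain dot-product approximation of the \(L^2\) inner product are arbitrary choices, and the function names are hypothetical.

```python
import math

def mittag_leffler(z, beta, terms=100):
    """Truncated power series E_beta(z) = sum_{k>=0} z^k / Gamma(beta*k + 1)."""
    return sum(z ** k / math.gamma(beta * k + 1) for k in range(terms))

def vggbm_char_functional(phis, beta):
    """Characteristic functional of the d-fold Mittag-Leffler measure:
    prod_k E_beta(-0.5 * <phi_k, phi_k>).  Each phi_k is a sampled test
    function; the L^2 inner product is approximated by a plain dot product."""
    return math.prod(
        mittag_leffler(-0.5 * sum(x * x for x in phi), beta) for phi in phis
    )
```

For \(\beta = 1\) one has \(E_1(z) = e^z\), so the functional collapses to \(\exp(-\tfrac{1}{2}\sum_k \langle\varphi_k,\varphi_k\rangle)\), recovering the Gaussian (Brownian) case mentioned at the end of the abstract.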
This work aims to study textile structures in the frame of linear elasticity to understand how
the structure and material parameters influence the macroscopic homogenized model. More
precisely, we are interested in how the textile design parameters, such as the ratio between
fibers’ distance and cross-section width, the strength of the contact sliding between yarns,
and the partial clamp on the textile boundaries, determine the phenomena that one can observe in
shear experiments with textiles. Among these phenomena: the warp and weft yarns first change their
in-plane angles and, once some critical shear angle is reached, the textile plate comes out
of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left
and bottom boundary, made of long thin fibers that cross each other in a periodic pattern.
The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption,
together with the partial clamp, adds new levels of complexity to the problem due to
the anisotropy in the yarn’s behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with
respect to the yarn’s thickness r and the distance between them e, parameters that are asymptotically
related. The homogenization and dimension reduction are done via the unfolding
method, which separates the macroscopic scale from the periodicity cell. In addition to the
homogenization, a dimension reduction from a 3D to a 2D problem is applied. Adapting
the classical unfolding results to both the anisotropic context and to lattice grids (which are
constructed starting from the center lines of the rods crossing each other) is the main tool
we developed to tackle this type of model. These adaptations represent the first part of the thesis and are
published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
Given the parameters mentioned above, we then proceed to classify different textile problems,
incorporating the results from other works on the topic and thoroughly investigating
some others. After the study is conducted, we draw conclusions and give a mathematical
explanation concerning the expected approximation of the displacements, the expected solvability
of the limit problems, and the phenomena mentioned above. The results can be found
in “Asymptotic behavior for textiles with loose contact”, which has been recently submitted.
Epidemiological models have gained much interest during the COVID-19 pandemic.
As the pandemic is now driven by newly emerging variants of SARS-CoV-2, the
question arises of how to model multiple virus variants in a single model.
In this thesis, we have extended an established model for COVID-19 forecasts to multiple
virus variants. We analyzed the model mathematically and showed the global
existence and uniqueness of the solution as well as important invariance properties
for a meaningful model. The implementation into an existing framework which allows
us to identify model parameters based on surveillance data is described briefly.
When applying our model to actual transitions between SARS-CoV-2 variants, we
found that forecasts would have been significantly improved by our model extension.
In most cases, we were able to precisely predict the dates and heights of peaks in
case incidence for waves caused by newly emerging variants, already during early transition
phases. More severe outcomes, such as hospitalizations, turn out to be harder to predict
because of the very limited observational data on these outcomes for newly
emerging variants.
Symplectic linear quotient singularities belong to the class of symplectic singularities introduced by Beauville in 2000.
They are linear quotients by a group preserving a symplectic form on the vector space and are necessarily singular by a classical theorem of Chevalley-Serre-Shephard-Todd.
We study \(\mathbb Q\)-factorial terminalizations of such quotient singularities, that is, crepant partial resolutions that are allowed to have mild singularities.
By a theorem of Verbitsky, the only symplectic linear quotients that can possibly admit a smooth \(\mathbb Q\)-factorial terminalization are those by symplectic reflection groups.
A smooth \(\mathbb Q\)-factorial terminalization is in this context referred to as a symplectic resolution, and over the past two decades there has been an ongoing effort to classify exactly which symplectic reflection groups give rise to quotients that admit symplectic resolutions.
We reduce this classification to finitely many, precisely 45, open cases by proving that for almost all quotients by symplectically primitive symplectic reflection groups no such resolution exists.
Concentrating on the groups themselves, we prove that a parabolic subgroup of a symplectic reflection group is generated by symplectic reflections as well.
This is a direct analogue of a theorem of Steinberg for complex reflection groups.
We further study divisor class groups of \(\mathbb Q\)-factorial terminalizations of linear quotients by finite subgroups \(G\) of the special linear group and prove that such a class group is completely controlled by the symplectic reflections (or, more generally, the junior elements) contained in \(G\).
We finally discuss our implementation of an algorithm by Yamagishi for the computation of the Cox ring of a \(\mathbb Q\)-factorial terminalization of a linear quotient in the computer algebra system OSCAR.
We use this algorithm to construct a generating system of the Cox ring corresponding to the quotient by a dihedral group of order \(2d\) with \(d\) odd acting by symplectic reflections.
Although our argument follows the algorithm, the proof does not logically depend on computer calculations.
We are able to derive the \(\mathbb Q\)-factorial terminalization itself from the Cox ring in this case.
Solving probabilistic-robust optimization problems using methods from semi-infinite optimization
(2023)
Optimization under uncertainty is a field of mathematics that is strongly inspired by real-world problems. To handle uncertainties, several models have arisen. One of these is the probust model, in which a combination of probabilistic and worst-case uncertainty is considered. So far, only problem instances with a special structure could be dealt with. In this thesis, we introduce solution techniques applicable to any probust optimization problem. On the one hand, we create upper bounds for the optimal value by solving a sequence of chance-constrained optimization problems. These bounds are based on discretization schemes inspired by semi-infinite optimization. On the other hand, we create lower bounds by solving a sequence of set-approximation problems. Here, we substitute the original event set by an appropriate family of sets. We examine the performance of the corresponding algorithms on simple packing problems for which we can provide the probust solution analytically. Afterwards, we solve a water reservoir and a distillation problem and compare the probust solutions with solutions arising from other uncertainty models.
The MINT-EC-Girls-Camp: Math-Talent-School is aimed at female students from MINT-EC schools who are enthusiastic about mathematics and want to gain insight into the professional world of mathematicians. The event illustrates to the participants the growing relevance of applied fields of mathematical research, such as industrial mathematics (Technomathematik) and mathematics in economics and finance (Wirtschaftsmathematik). It is intended to make the importance of mathematical ways of working in today's professional world, in particular in industry and business, tangible for the students. The Talent-School is organized by MINT-EC and the Felix-Klein-Zentrum für Mathematik. The scientific supervision of the students during this Talent-School was provided by staff members of the Kompetenzzentrum für Mathematische Modellierung in MINT-Projekten in der Schule (KOMMS) at TU Kaiserslautern and of the Fraunhofer ITWM. In this report we describe the projects carried out during the Talent-School in October 2022.
Since 1993, the Department of Mathematics at TU Kaiserslautern has organized the annual mathematical modelling weeks. The event grew in parallel with the increasing relevance of applied fields of mathematical research, such as industrial mathematics (Technomathematik) and mathematics in economics and finance (Wirtschaftsmathematik). It is intended to make the importance of mathematical ways of working in today's professional world, in particular in industry and business, tangible for school students. In addition, the modelling week offers the participating teachers insight into project work on open-ended problems in the context of mathematical modelling. In this report we describe the projects carried out during the modelling week in December 2022.
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
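The decomposition in the conjecture is easy to verify for a concrete candidate. The following checker is an illustrative sketch of ours, not code from the thesis (the function name and edge representation are hypothetical): it tests whether a proposed partition of the edges of a graph is indeed a spanning tree, a vertex-disjoint union of cycles, and a matching.

```python
def is_3_decomposition(n, edges, tree, cycles, matching):
    """Check that (tree, cycles, matching) partitions `edges` into a spanning
    tree, a disjoint union of cycles, and a matching on vertices 0..n-1.
    Edges are (u, v) tuples, given in a fixed orientation."""
    # The three parts must partition the edge set.
    if sorted(tree + cycles + matching) != sorted(edges):
        return False
    # Spanning tree: n-1 edges and acyclic (checked via union-find),
    # hence connected and spanning.
    if len(tree) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:          # edge would close a cycle -> not a tree
            return False
        parent[ru] = rv
    # Cycle part: every touched vertex has degree exactly 2, so the part
    # is a vertex-disjoint union of cycles.
    deg = [0] * n
    for u, v in cycles:
        deg[u] += 1
        deg[v] += 1
    if any(d not in (0, 2) for d in deg):
        return False
    # Matching: every vertex is covered at most once.
    seen = set()
    for u, v in matching:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True
```

For example, the cubic graph \(K_4\) decomposes into the star at one vertex (a spanning tree), the triangle on the remaining three vertices (a cycle), and an empty matching.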
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width 4 satisfy the conjecture.
These steps involve a significant number of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
This thesis deals with modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continuing with the components to set up a graph-based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single-consumer single-producer network, to a generalized problem suitable to model simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real-world DHNs and to optimize their performance by optimal control, as indicated in a case study.
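To make the transport part concrete, here is a minimal explicit upwind step for the temperature (thermal-energy) advection along a single pipe. This is a simplified sketch of ours, not the discretization used in the thesis; the function name and the boundary handling are illustrative choices.

```python
def advect_upwind(theta, v, dx, dt, inflow):
    """One explicit first-order upwind step for the 1D advection equation
    theta_t + v * theta_x = 0 with flow speed v > 0, modelling thermal
    transport along a pipe; `inflow` is the temperature entering at the
    left (upstream) end of the pipe."""
    c = v * dt / dx                      # Courant number
    assert 0 < c <= 1, "CFL condition violated"
    new = theta[:]
    new[0] = theta[0] - c * (theta[0] - inflow)
    for i in range(1, len(theta)):
        new[i] = theta[i] - c * (theta[i] - theta[i - 1])
    return new
```

With \(v\,\Delta t/\Delta x = 1\) the scheme shifts the temperature profile exactly one cell downstream per step, which is a convenient sanity check.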
LinTim is a scientific algorithm and dataset library that has been under development since 2007 and offers the possibility to carry out the various planning steps in public transportation. Although the name originally derives from "Line planning and Timetabling", the available functions have grown far beyond this scope. This is the documentation for version 2023.12. For more information, see https://www.lintim.net.
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities, in certain cases, tend to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform a spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases as the surface-to-volume ratio of the flow domain increases. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that using the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
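The basic Uzawa iteration discussed above can be sketched on a toy saddle-point system. This is a minimal pure-Python illustration of ours (assuming a diagonal \(A\) so the inner velocity solve is trivial), not the solver developed in the thesis:

```python
def uzawa(A, B, f, g, omega, iters=50):
    """Basic Uzawa iteration for the saddle-point system
        [A  B^T] [u]   [f]
        [B  0  ] [p] = [g],
    with the pressure Schur complement preconditioned by omega * I.
    A is assumed diagonal here so the inner solve is a componentwise division."""
    n, m = len(f), len(g)
    p = [0.0] * m
    u = [0.0] * n
    for _ in range(iters):
        # inner solve: u = A^{-1} (f - B^T p)
        u = [(f[i] - sum(B[j][i] * p[j] for j in range(m))) / A[i][i]
             for i in range(n)]
        # pressure update: p <- p + omega * (B u - g)
        p = [p[j] + omega * (sum(B[j][i] * u[i] for i in range(n)) - g[j])
             for j in range(m)]
    return u, p
```

The convergence rate is governed by the spectrum of the pressure Schur complement \(S = B A^{-1} B^{\mathsf T}\): when \(S\) is close to the identity a few iterations suffice, while the ill-conditioned \(S\) arising from large surface-to-volume ratios, as described above, stalls the iteration.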
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
In group theory, a large and important family of infinite groups is given by the algebraic groups. These groups and their structures are already well understood. In representation theory, the study of the unipotent variety in algebraic groups - and by extension the study of the nilpotent variety in the associated Lie algebra - is of particular interest.
Let \( G \) be a connected reductive algebraic group over an algebraically closed field \(\mathbf{k}\), and let \(\operatorname{Lie}(G)\) be its associated Lie algebra. By now, the orbits in the nilpotent and unipotent variety under the action of \(G\) are completely known and can be found for example in a book of Liebeck and Seitz. There exists, however, no uniform description of these orbits that holds in both good and bad characteristic. With this in mind, Lusztig defined a partition of the unipotent variety of \(G\) in 2011. Equivalently, one can consider certain subsets of the nilpotent variety of \(\operatorname{Lie}(G)\) called the nilpotent pieces. This approach appears in the same paper by Lusztig in which he explicitly determines the nilpotent pieces for simple algebraic groups of classical type.
The nilpotent pieces for the exceptional groups of type \(G_2, F_4, E_6, E_7,\) and \(E_8\) in bad characteristic have not yet been determined.
This thesis gives an introduction to the definition of the nilpotent pieces and presents a solution to this problem for groups of type \(G_2, F_4, E_6\), and partly for \(E_7\). The solution relies heavily on computational work which we elaborate on in later chapters.
We consider a linearized kinetic BGK equation and the associated acoustic system on a network.
Coupling conditions for the macroscopic equations are derived from the kinetic conditions via an asymptotic analysis near the nodes of the network.
This analysis leads to the consideration of a fixpoint problem involving the solutions of kinetic half-space problems.
This work extends the procedure developed in [13], where coupling conditions for a simplified BGK model have been derived.
Numerical comparisons between different coupling conditions
confirm the accuracy of the proposed approximation.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinction between isotropic and anisotropic structures is defined based on our method. Finally, usefulness of the method is shown on 3d CT images in three different tasks on a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometry and characterize fiber length and orientation distributions in fiber-reinforced polymers produced by either 3d printing or by injection moulding.
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is utilized to construct a scale equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. Mathematical foundations behind this representation are laid out and analyzed. We show that this representation with 4 times less features than the original scattering networks from Mallat performs comparably well on texture classification and gives superior performance when dealing with scales outside the training set distribution.
Given a finite or countably infinite family of Hilbert spaces \((H_j)_{j\in N} \), we study the Hilbert space tensor product \(\bigotimes_{j\in N} H_j\). In the general case, these tensor products were introduced by John von Neumann. We are especially interested in the case where each Hilbert space \(H_j\) is given as a reproducing kernel Hilbert space, i.e., \(H_j = H(K_j)\) for some reproducing kernel \(K_j\). We establish the following result, which is new in the case where \(N\) is infinite: If we restrict the domains of the kernels \(K_j\) properly, their pointwise product \(K\) is again a reproducing kernel, and
\[
H(K) \cong \bigotimes_{j\in N} H_j\,
\]
i.e., there is an isometric isomorphism between both spaces respecting the tensor product structure.
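A finite-dimensional shadow of this result is the Schur product theorem: the entrywise product of Gram matrices is again positive semidefinite, so the pointwise product of kernels, restricted to finitely many points, is again a kernel. The sketch below is our own illustration (the particular kernels and sample points are arbitrary choices); a successful Cholesky factorization certifies positive definiteness of the product Gram matrix.

```python
import math

def gram(kernel, xs):
    """Gram matrix of a kernel on the sample points xs."""
    return [[kernel(x, y) for y in xs] for x in xs]

def product_kernel(k1, k2):
    """Pointwise product of two kernels; by the Schur product theorem it is
    again a reproducing kernel, mirroring the tensor-product result."""
    return lambda x, y: k1(x, y) * k2(x, y)

def cholesky(M):
    """Cholesky factorization of a symmetric matrix; raises ValueError if M
    is not positive definite.  Success certifies a valid Gram matrix."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L
```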
The thesis investigates the phenomenon of hypocoercivity for Langevin-type equations on manifolds via a powerful abstract Hilbert space method. In applications, hypocoercivity experienced by the semigroup can be used to find optimal parameters for the production of nonwoven fleeces. Furthermore, the last chapter introduces a new scaling limit technique: Employing the concept of so-called stratifolds we can show Kuwae-Shioya-Mosco convergence of anisotropic 3D fibre lay-down models to an isotropic 2D model.
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific kind of map, so-called risk measures. Several forms and definitions of them exist. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of a risk measure which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk
measures. Then, each of the aforementioned contributions covers a separate chapter. In
the following, the content of these three chapters is explained in more detail:
Risk measures can be used to calculate pricing bounds for financial derivatives. In
Chapter 2, we deal with the pricing of European options in an incomplete financial market
model. We use the common risk measures Value-at-Risk and Expected Shortfall to define
good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may exhibit non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.
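The two risk measures used in Chapter 2 are straightforward to estimate empirically. The snippet below is a generic textbook estimator of ours, not the chapter's model code; the function name and the convention that larger values mean larger losses are our own choices.

```python
import math

def var_es(losses, alpha):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha.
    `losses` is a sample where larger values mean worse outcomes; VaR is the
    empirical alpha-quantile, ES the average of the tail at or beyond it."""
    s = sorted(losses)
    k = math.ceil(alpha * len(s)) - 1   # index of the alpha-quantile
    var = s[k]
    tail = s[k:]
    es = sum(tail) / len(tail)
    return var, es
```

Expected Shortfall averages the tail beyond the VaR quantile and is therefore convex (indeed coherent), whereas Value-at-Risk is not; this is precisely the missing convexity identified in the abstract as the source of the non-smooth pricing bounds.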
In Chapter 3, we construct the MAI risk measure. Therefore, recall that a multi-asset risk measure describes the minimal external capital that has to be raised into multiple eligible assets to make a future financial position acceptable, i.e., that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures
was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows to secure the financial position by external capital as well as reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two
aforementioned classical types of risk measures. We find out that diversification reduces
the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial
endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure which optimizes the hedging costs and the expected utility of the institution simultaneously subject to the capital adequacy test. We find out that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as in algebraic geometry, number theory and information security. In fact, they make up one of the best choices in public key cryptography and more recently in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition to that, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all the cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks, a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol, which is also capable of resisting subexponential quantum attacks and will be the first of its kind.
Load modeling is one of the crucial tasks for improving smart grids' energy efficiency. Among many alternatives, machine learning-based load models have become popular in applications and have shown outstanding performance in recent years. The performance of these models highly relies on the quality and quantity of data available for training. However, gathering a sufficient amount of high-quality data is time-consuming and extremely expensive. In the last decade, Generative Adversarial Networks (GANs) have demonstrated their potential to solve the data shortage problem by generating synthetic data after learning from recorded/empirical data. Synthetic datasets generated in this way can reduce the prediction error of electricity consumption when combined with empirical data. Further, they can be used to enhance risk management calculations. Therefore, in this study we propose RCGAN, TimeGAN, CWGAN, and RCWGAN, which take individual electricity consumption data as input to provide synthetic data. Our work focuses on one-dimensional time series, and numerical experiments on an empirical dataset show that GANs are indeed able to generate synthetic data with a realistic appearance.
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach.
They extend a Black-Scholes type security market to include the possibility of a
crash. For the modeling of the possible stock price crash they use a Knightian uncertainty
approach and thus make no probabilistic assumptions on the crash size or the crash time distribution.
Based on an indifference argument they determine the optimal portfolio process
for an investor who wants to maximize the expected utility from final wealth. In this thesis,
the worst-case scenario approach is extended in various directions to enable the consideration
of stress scenarios, to include the possibility of asset defaults and to allow for parameter
uncertainty.
Insurance companies and banks regularly have to face stress tests performed by regulatory
instances. In the first part we model their investment decision problem that includes stress
scenarios. This leads to optimal portfolios that are already stress test prone by construction.
The solution to this portfolio problem uses the newly introduced concept of minimum constant
portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset
defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor
does not know which asset is affected by the worst-case scenario. We solve this problem by
introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous time portfolio optimization problem that includes
the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine
the worst-case scenario approach with a model ambiguity approach that is also based on
Knightian uncertainty. We solve this portfolio problem and consider two concrete examples
with box uncertainty and ellipsoidal drift ambiguity.
Since 1993, the Department of Mathematics at TU Kaiserslautern has held its annual mathematical modelling weeks. The event grew in parallel with the rising relevance of applied mathematical research areas such as industrial mathematics (Technomathematik) and business mathematics (Wirtschaftsmathematik). Its purpose is to make the importance of mathematical working methods in today's professional world, especially in industry and business, tangible for pupils. In addition, the modelling week offers the participating teachers insight into project work on open-ended questions in the context of mathematical modelling. In this report, we describe the projects carried out during the modelling week in December 2021. The central theme of the event was "Weather and Disaster Management" ("Wetter und Katastrophenschutz").
The knowledge of structural properties in microscopic materials contributes to a deeper understanding of macroscopic properties. For the study of such materials, several imaging techniques reaching scales in the order of nanometers have been developed. One of the most powerful and sophisticated imaging methods is focused-ion-beam scanning electron
microscopy (FIB-SEM), which combines serial sectioning by an ion beam and imaging by
a scanning electron microscope.
FIB-SEM imaging reaches extraordinary scales below 5 nm with large representative
volumes. However, the complexity of the imaging process introduces artificial distortions and artifacts that degrade image quality. We introduce a method
for the quality evaluation of images by analyzing general characteristics of the images
as well as artifacts exclusively for FIB-SEM, namely curtaining and charging. For the
evaluation, we propose quality indexes, which are tested on several data sets of porous and non-porous materials with different characteristics and distortions. The quality indexes report objective evaluations in accordance with visual judgment.
Moreover, the acquisition of large volumes at high resolution can be time-consuming. One approach to speed up the imaging is to decrease the resolution and to consider cuboidal voxel configurations. However, non-isotropic resolutions may lead to errors in the reconstructions. Even if the reconstruction is correct, effects are visible in the analysis.
We study the effects of different voxel settings on the prediction of material and flow properties of reconstructed structures. Results show good agreement between highly resolved cases and ground truths, as expected. Structural anisotropy is observed as the resolution decreases, especially on anisotropic grids. Nevertheless, gray-image interpolation remedies the induced anisotropy. These benefits are visible in the flow properties as well.
For highly porous structures, the structural reconstruction is even more difficult because deeper parts of the material are visible through the pores. As an application example, we show the reconstruction of two highly porous structures of optical layers, where a typical workflow from image acquisition and preprocessing through reconstruction to spatial analysis is performed. The case study shows the advantages of 3D imaging for optical porous layers. The analysis reveals geometrical structural properties related to the manufacturing processes.
An increasing number of today's tasks, such as speech recognition, image generation,
translation, classification or prediction, are performed with the help of machine learning.
Especially artificial neural networks (ANNs) provide convincing results for these tasks.
The reasons for this success story are the drastic increase of available data sources in
our more and more digitalized world as well as the development of remarkable ANN
architectures. This development has led to an increasing number of model parameters
together with more and more complex models. Unfortunately, this comes at the cost of the
interpretability of the deployed models. However, there is a natural desire to explain the
deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding
of these models. As the name suggests, VAEs are based on standard autoencoders (AEs)
and therefore used to perform dimensionality reduction of data. This is achieved by a
bottleneck structure within the hidden layers of the ANN. From a data input the encoder,
that is the part up to the bottleneck, produces a low dimensional representation. The
decoder, the part from the bottleneck to the output, uses this representation to reconstruct
the input. The model is learned by minimizing the error from the reconstruction.
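The encoder-decoder bottleneck described above can be sketched in a few lines of numpy (a purely illustrative toy with single linear layers and invented dimensions; the actual architectures used in the thesis are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_z = 8, 2                             # input and bottleneck dimensions, d_z << d_in
W_enc = 0.1 * rng.normal(size=(d_z, d_in))   # encoder weights (single linear layer)
W_dec = 0.1 * rng.normal(size=(d_in, d_z))   # decoder weights

def encode(x):
    # map the input to its low-dimensional bottleneck representation
    return W_enc @ x

def decode(z):
    # reconstruct the input from the bottleneck representation
    return W_dec @ z

x = rng.normal(size=d_in)
x_hat = decode(encode(x))
reconstruction_error = float(np.sum((x - x_hat) ** 2))  # quantity minimized in training
```

A VAE replaces the deterministic code by a distribution: the encoder outputs a mean and a variance, and the code z is sampled via the reparameterization trick before decoding.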
In our point of view, the most remarkable property and, hence, also a central topic
in this thesis is the auto-pruning property of VAEs. Simply speaking, auto-pruning
prevents a VAE with thousands of parameters from overfitting. However, this
desirable property comes with the risk that the model learns nothing at all. In this
thesis, we look at VAEs and the auto-pruning from two different angles and our main
contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so, by leveraging the
framework of generalized linear models (GLMs). As a result, we are able to explain
training results of VAEs before conducting the actual training.
(ii) We construct a time dependent VAE and show the effects of the auto-pruning in
this model. As a result, we are able to model financial data sequences and estimate
the value-at-risk (VaR) of associated portfolios. Our results show that we surpass
the standard benchmarks for VaR estimation.
In the representation theory of finite groups, the so-called local-global conjectures assert a relation between the representation theory of a finite group and one of its local subgroups. The McKay-Navarro conjecture claims that the action of a set of Galois automorphisms on certain ordinary characters of the local and global group is equivariant. Navarro, Späth, and Vallejo reduced the conjecture to a problem about simple groups in 2019 and stated an inductive condition that has to be verified for all finite simple groups.
In this work, we give an introduction to the character theory of finite groups and state the McKay-Navarro conjecture and its inductive condition. Furthermore, we recall the definition of finite groups of Lie type and present results regarding their structure and their representation theory.
In the second part of this work, we verify the inductive McKay-Navarro condition for various families of finite groups of Lie type.
In defining characteristic, most groups have already been considered by Ruhstorfer.
We show that the inductive condition also holds for the groups with exceptional graph automorphisms, the Suzuki and Ree groups, the groups \(B_n(2)\) for \(n \geq 2\), as well as for the simple groups of Lie type with non-generic Schur multiplier in their defining characteristic.
This completes the verification of the inductive McKay-Navarro condition in defining characteristic. We further consider the Suzuki and Ree groups and verify the inductive condition for all primes. On the way, we show that there exists a Galois-equivariant Jordan decomposition for their irreducible characters.
Moreover, we consider some families of groups of Lie type that do not admit a generic choice of a local subgroup.
We show that the inductive condition is satisfied for the prime \(\ell=3\) and the groups \(\text{PSL}_3(q)\) with \(q \equiv 4, 7 \mod 9\), \(\text{PSU}_3(q)\) with \(q \equiv 2, 5 \mod 9\), and \(G_2 (q)\) with \(q \equiv 2, 4, 5, 7 \mod 9\).
Further, we verify the inductive condition for the prime \(\ell=2\) and \(G_2(3^f)\) for \(f \geq 1\), \(^3 D_4(q)\), and \(^2E_6(q)\) where \(q\) is an odd prime power.
In this paper, a prediction model for the tensile behaviour of ultra-high performance
fibre-reinforced concrete is proposed. It is based on integrating force contributions of all fibres
crossing the crack plane. Piecewise linear models for the force contributions depending on fibre
orientation and embedded length are fitted to force–slip curves obtained in single-fibre pull-out tests.
Fibre characteristics in the crack are analysed in a micro-computed tomography image of a concrete
sample. For more general predictions, a stochastic fibre model with a one-parametric orientation
distribution is introduced. Simple estimators for the orientation parameter are presented, which only
require fibre orientations in the crack plane. Our prediction method is calibrated to fit experimental
tensile curves.
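The piecewise linear force-slip models mentioned above can be evaluated by linear interpolation between fitted breakpoints; the sketch below uses invented breakpoint values, not the ones fitted in the paper:

```python
import numpy as np

# hypothetical breakpoints of a single-fibre force-slip curve
# (slip in mm, pull-out force in N; values invented for illustration)
slip_pts  = np.array([0.0, 0.05, 0.5, 2.0])
force_pts = np.array([0.0, 30.0, 45.0, 0.0])

def pullout_force(slip):
    # piecewise linear model; zero force once the fibre is fully pulled out
    return np.interp(slip, slip_pts, force_pts, right=0.0)

# the tensile prediction integrates such contributions over all fibres
# crossing the crack plane, e.g. for three fibres at different slips:
total_force = pullout_force(np.array([0.05, 0.5, 3.0])).sum()  # 30 + 45 + 0 = 75.0
```

In the paper, one such curve is fitted per combination of fibre orientation and embedded length, and the contributions are summed over all fibres crossing the crack.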
Aerodynamic design optimization, considered in this thesis, is a large and complex area spanning different disciplines from mathematics to engineering. To perform optimizations on industrially relevant test cases, various algorithms and techniques have been proposed throughout the literature, including the Sobolev smoothing of gradients. This thesis combines the Sobolev methodology for PDE constrained flow problems with the parameterization of the computational grid and interprets the resulting matrix as an approximation of the reduced shape Hessian.
Traditionally, Sobolev gradient methods help prevent a loss of regularity and reduce high-frequency noise in the derivative calculation. Such a reinterpretation of the gradient in a different Hilbert space can be seen as a shape Hessian approximation. In the past, such approaches have been formulated in a non-parametric setting, while industrially relevant applications usually have a parameterized setting. In this thesis, the presence of a design parameterization for the shape description is explicitly considered. This research aims to demonstrate how a combination of Sobolev methods and parameterization can be done successfully, using a novel mathematical result based on the generalized Faà di Bruno formula. Such a formulation can yield benefits even if a smooth parameterization is already used.
The results obtained allow for the formulation of an efficient and flexible optimization strategy, which can incorporate the Sobolev smoothing procedure for test cases where a parameterization describes the shape, e.g., a CAD model, and where additional constraints on the geometry and the flow are to be considered. Furthermore, the algorithm is also extended to One Shot optimization methods. One Shot algorithms are a tool for simultaneous analysis and design when dealing with inexact flow and adjoint solutions in a PDE constrained optimization. The proposed parameterized Sobolev smoothing approach is especially beneficial in such a setting to ensure a fast and robust convergence towards an optimal design.
Key features of the implementation of the algorithms developed herein are pointed out, including the construction of the Laplace-Beltrami operator via finite elements and an efficient evaluation of the parameterization Jacobian using algorithmic differentiation. The newly derived algorithms are applied to relevant test cases featuring drag minimization problems, particularly for three-dimensional flows with turbulent RANS equations. These problems include additional constraints on the flow, e.g., constant lift, and the geometry, e.g., minimal thickness. The Sobolev smoothing combined with the parameterization is applied in classical and One Shot optimization settings and is compared to other traditional optimization algorithms. The numerical results show a performance improvement in runtime for the new combined algorithm over a classical Quasi-Newton scheme.
Wreath product groups \(C_\ell \wr \mathfrak{S}_n\) have a rich combinatorial representation theory coming from the symmetric group case and involving partitions, Young tableaux, and Specht modules. To such a wreath product group \(W\), one can associate various algebras and geometric objects: Hecke algebras, quantum groups, Hilbert schemes, Calogero--Moser spaces, and (restricted) rational Cherednik algebras. Over the years, surprising connections have been made between a lot of these objects, with many of these connections having been traced back to combinatorial constructions and properties of the group \(W\) itself.
In this thesis, we have studied one of the algebras, namely the restricted rational Cherednik algebra \(\overline{\mathsf{H}}_\mathbf{c}(W)\), in order to find combinatorial models which describe certain representation theoretical phenomena around \(\overline{\mathsf{H}}_\mathbf{c}(W)\). In particular, we generalize a result by Gordon and describe the graded \(W\)-characters of the simple modules of \(\overline{\mathsf{H}}_\mathbf{c}(W)\) for generic parameter \(\mathbf{c}\) using Haiman's wreath Macdonald polynomials. These graded \(W\)-characters turn out to be specializations of Haiman's wreath Macdonald polynomials. In the non-generic parameter case, we use recent results by Maksimau to combinatorially express an inductive rule of \(\overline{\mathsf{H}}_\mathbf{c}(W)\)-modules first described by Bellamy. We use our results in type \(B\) to describe the (ungraded) \(B_n\)-character of simple \(\overline{\mathsf{H}}_\mathbf{c}(B_n)\)-modules associated to bipartitions with one empty part. Afterwards, we relate this combinatorial induction to various other algebras and families of \(W\)-characters found in the literature such as Lusztig's constructible characters, as well as detail some connections between generic and non-generic parameter using wreath Macdonald polynomials.
We encounter directional data in numerous application areas such as astronomy, biology or engineering. Examples include the direction of arrival of cosmic rays, the direction of flight of migratory birds or the orientation of steel fibres in fibre-reinforced concrete.
In Part I, we define and apply morphological operators, quantiles and depths for directional data. The morphological operators are defined for \(\mathcal{S}^{d-1}\)-valued images with \(\mathcal{S}^{d-1} = \{x \in \mathbb{R}^d : \sqrt{x^T x} = 1\}\), \(d \geq 2\). Since a definition of these operators requires an ordered structure, which is not naturally given between vectors, an order is determined with the help of the theory of statistical depth functionals.
This allows for defining the basic operators erosion and dilation, as well as morphological (multi-scale) operators for \(\mathcal{S}^{d-1}\)-valued images based on them. The operators introduced are related to their grey-value counterparts. Furthermore, quantiles and the "angular Mahalanobis" depth for directional data introduced by Ley
et al. (2014) are extended. The concept of Ley et al. (2014) provides useful geometric properties of the depth contours (such as convexity and rotational equivariance) and a Bahadur-type representation of the quantiles. Their concept is canonical for rotationally symmetric depth contours. However, it also produces rotationally symmetric depth contours when the underlying distribution is not rotationally
symmetric. We solve this lack of flexibility for distributions with elliptical depth contours. The basic idea is to deform the elliptic contours by a diffeomorphic mapping to rotationally symmetric contours, thus reverting to the canonical case in Ley et al. (2014). Our results are confirmed by a Monte Carlo simulation study and applied to the analysis of fibre directions in fibre-reinforced concrete. In Part II, we elaborate interdisciplinary results of statistical analysis and stochastic modelling in civil
engineering. Our statistical analysis of the correlation between production parameters (fibre length, fibre diameter, fibre volume fraction as well as casting method, superplasticiser content and specimen size) of ultra-high performance fibre reinforced concrete and the fibre system (spatial arrangement and orientation of the fibres) provides users with a better understanding of this relatively new composite material. The fibre system is modelled by a Boolean model and the fibre orientation by a one-parameter distribution. In addition, the behaviour under tensile loading is modelled.
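As a minimal illustration of handling \(\mathcal{S}^{d-1}\)-valued data, the sketch below normalizes vectors to the unit sphere and computes their mean direction; this is a standard directional statistic, not the depth-based ordering developed in Part I:

```python
import numpy as np

def to_sphere(v):
    # project nonzero vectors onto the unit sphere S^{d-1}
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def mean_direction(x):
    # normalized resultant vector of a sample of unit vectors
    r = x.sum(axis=0)
    return r / np.linalg.norm(r)

rng = np.random.default_rng(1)
# synthetic fibre directions scattered around the axis e3 = (0, 0, 1)
sample = to_sphere(rng.normal(loc=(0.0, 0.0, 3.0), scale=0.3, size=(100, 3)))
mu = mean_direction(sample)   # a unit vector close to (0, 0, 1)
```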
This contribution defends two claims. The first is about why thought experiments are so relevant and powerful in mathematics. Heuristics and proof are not strictly and, therefore, the relevance of thought experiments is not contained to heuristics. The main argument is based on a semiotic analysis of how mathematics works with signs. Seen in this way, formal symbols do not eliminate thought experiments (replacing them by something rigorous), but rather provide a new stage for them. The formal world resembles the empirical world in that it calls for exploration and offers surprises. This presents a major reason why thought experiments occur both in empirical sciences and in mathematics. The second claim is about a looming aporia that signals the limitation of thought experiments. This aporia arises when mathematical arguments cease to be fully accessible, thus violating a precondition for experimenting in thought. The contribution focuses on the work of Vladimir Voevodsky (1966–2017, Fields medalist in 2002) who argued that even very pure branches of mathematics cannot avoid inaccessibility of proof. Furthermore, he suggested that computer verification is a feasible path forward, but only if proof is not modeled in terms of formal logic.
We consider the optimization problem of a large insurance company that wants to maximize the expected utility of its surplus through the optimal control of the proportional reinsurance. In addition, the insurer is exposed to the risk of default of its reinsurer at the worst possible time, a setting that is closely related to a scenario of the Swiss Solvency Test.
In a widely-studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. Then, the goal is to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if P=NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric version and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the approximation algorithm for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
First, essential m-dissipativity of an infinite-dimensional Ornstein-Uhlenbeck operator \(N\), perturbed by the gradient of a potential, on a domain \(FC_b^\infty\) of finitely based, smooth and bounded functions is shown. Our considerations allow unbounded diffusion operators as coefficients. We derive corresponding second-order regularity estimates for solutions \(f\) of the Kolmogorov equation \(\alpha f - Nf = g\), \(\alpha \in (0,\infty)\), generalizing some results of Da Prato and Lunardi. Second, we prove essential m-dissipativity for generators \((L^\Phi, FC_b^\infty)\) of infinite-dimensional degenerate diffusion processes. We emphasize that the essential m-dissipativity of \((L^\Phi, FC_b^\infty)\) is useful to apply general resolvent methods developed by Beznea, Boboc and Röckner in order to construct martingale/weak solutions to infinite-dimensional non-linear degenerate stochastic differential equations. Furthermore, the essential m-dissipativity of \((L^\Phi, FC_b^\infty)\) and \((N, FC_b^\infty)\), as well as the regularity estimates, are essential to apply the general abstract Hilbert space hypocoercivity method of Dolbeault, Mouhot, Schmeiser and of Grothaus, Stilgenbauer, respectively, to the corresponding diffusions.
We provide a complete elaboration of the L2-Hilbert space hypocoercivity theorem for the degenerate Langevin dynamics with multiplicative noise, studying the longtime behavior of the strongly continuous contraction semigroup solving the abstract Cauchy problem for the associated backward Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in the corresponding Fokker–Planck framework and made rigorous in the Kolmogorov backwards setting by Grothaus and Stilgenbauer. We extend these results to weakly differentiable diffusion coefficient matrices, introducing multiplicative noise for the corresponding stochastic differential equation. The rate of convergence is explicitly computed depending on the choice of these coefficients and the potential giving the outer force. In order to obtain a solution to the abstract Cauchy problem, we first prove essential self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert spaces, using prior elliptic regularity results and techniques from Bogachev, Krylov and Röckner. We apply operator perturbation theory to obtain essential m-dissipativity of the Kolmogorov operator, extending the m-dissipativity results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov approach is natural, as the theory of generalized Dirichlet forms implies a stochastic representation of the Langevin semigroup as the transition kernel of a diffusion process which provides a martingale solution to the Langevin equation with multiplicative noise. Moreover, we show that even a weak solution is obtained this way.
This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
In this paper, we devise a stochastic asset–liability management (ALM) model for a life insurance company and analyze its influence on the balance sheet within a low-interest rate environment. In particular, a flexible procedure for the generation of insurers’ compressed contract portfolios that respects the given biometric structure is presented, extending the existing literature on stochastic ALM modeling. The introduced balance sheet model is in line with the principles of double-entry bookkeeping as required in accounting. We further focus on the incorporation of new business, i.e. the addition of newly concluded contracts and thus of insured in each period. Efficient simulations are obtained by integrating new policies into existing cohorts according to contract-related criteria. We provide new results on the consistency of the balance sheet equations. In extensive simulation studies for different scenarios regarding the business form of today’s life insurers, we utilize these to analyze the long-term behavior and the stability of the components of the balance sheet for different asset–liability approaches. Finally, we investigate the robustness of two prominent investment strategies against crashes in the capital markets, which lead to extreme liquidity shocks and thus threaten the insurer’s financial health.
In this note, we define one more way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra \(O_\infty\) (i.e. a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as "building blocks" instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e. exact conservation of Lie brackets and linearity.
Many real-world optimization and decision-making problems comprise several, partly conflicting objective functions. The English saying "Quality has its price" is just as true on a large scale as it is in the private sphere, and quality and price are therefore a typical pair of conflicting objective functions that is very common in applications. Yet, in industrial applications, both quality and cost must be understood in their specific context and differ depending on whether a transportation, a production, or a planning problem is considered. Other objective functions that are receiving increasing attention in real-world decision-making situations are, for example, robustness, time, sustainability, adaptability, or longevity.
In this paper we consider the stochastic primitive equation for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal \(L^p\)-regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise which acts directly on the velocity field, and we also allow rougher noise terms. Adaptations to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity are discussed as well.
Continuous-time regime-switching models are a very popular class of models for financial applications. In this work the so-called signal-to-noise matrix is introduced for hidden Markov models where the switching is driven by an unobservable Markov chain. Its relations to filtering, i.e. state estimation of the chain given the available observations, and portfolio optimization are investigated. A convergence result for the filter is derived: The filter converges to its invariant distribution if the eigenvalues of the signal-to-noise matrix converge to zero. This matrix is then also used to prove a mutual fund representation for regime-switching models and a corresponding market reduction which is consistent with filtering and portfolio optimization. Two canonical cases for the reduction are analyzed in more detail, the first based on the market regimes and the second depending on the eigenvalues. These considerations are presented both for observable and unobservable Markov chains. The results are illustrated by numerical simulations.
In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results consist firstly in finding an explicit representation of the optimal strategy and the worst-case parameter, secondly in proving a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem. Thirdly, we show that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth into single-family houses to construct portfolios of rental dwellings, whose income is securitized on the capital market. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization inspire curiosity about its influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate, and the investors' willingness to invest in housing, and that their effect depends on the state of the economy.
We present new results on standard basis computations of a 0-dimensional ideal I in a power series ring or in the localization of a polynomial ring over a computable field K. We prove the semicontinuity of the “highest corner” in a family of ideals, parametrized by the spectrum of a Noetherian domain A. This semicontinuity is used to design a new modular algorithm for computing a standard basis of I if K is the quotient field of A. It uses the computation over the residue field of a “good” prime ideal of A to truncate high order terms in the subsequent computation over K. We prove that almost all prime ideals are good, so a random choice is very likely to be good, and whether it is good is detected a posteriori by the algorithm. The algorithm yields a significant speed advantage over the non-modular version and works for arbitrary Noetherian domains. The most important special cases are perhaps A = ℤ and A = k[t], k any field and t a set of parameters. Besides its generality, the method differs substantially from previously known modular algorithms for A = ℤ, since it does not manipulate the coefficients. It is also usually faster and can be combined with other modular methods for computations in local rings. The algorithm is implemented in the computer algebra system SINGULAR and we present several examples illustrating its power.
We compute three-dimensional displacement vector fields to estimate the deformation of microstructural data sets in mechanical tests. For this, we extend the well-known optical flow by Brox et al. to three dimensions, with special focus on the discretization of nonlinear terms. We evaluate our method first by synthetically deforming foams and comparing against this ground truth and second with data sets of samples that underwent real mechanical tests. Our results are compared to those from state-of-the-art algorithms in materials science and medical image registration. By a thorough evaluation, we show that our proposed method is able to resolve the displacement best among all chosen comparison methods.
Index Insurance for Farmers
(2021)
In this thesis we focus on weather index insurance for agricultural risk. Even though such an index insurance is easily applicable and reduces information asymmetries, the demand for it is quite low. This is in particular due to the basis risk and the lack of knowledge about its effectiveness. The basis risk is the difference between the index insurance payout and the actual loss of the insured. We evaluate the performance of weather index insurance in different contexts, because proper knowledge about index insurance will help to establish it as a successful alternative to traditional crop insurance. In addition, we propose and discuss methods to reduce the basis risk.
We also analyze the performance of an agricultural loan which is interlinked with a weather index insurance. We show that an index insurance with an actuarially fair or subsidized premium helps to reduce the loan default probability. While we first consider an index insurance with a commonly used linear payout function for this analysis, we later design an index insurance payout function which maximizes the expected utility of the insured. Then we show that an index insurance with this optimal payout function is more appropriate for bundling with an agricultural loan. The optimal payout function also helps to reduce the basis risk. In addition, we show that a lender who issues agricultural loans can be better off by purchasing a weather index insurance in some circumstances.
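A "commonly used linear payout function" of the kind mentioned above typically pays nothing while the weather index stays above a strike, pays the full limit below an exit level, and interpolates linearly in between. A minimal sketch for a low-rainfall index (the parameter names are ours, not the thesis's):

```python
def linear_index_payout(index, strike, exit_level, limit):
    """Piecewise-linear payout for a low-rainfall index: zero at or above
    `strike`, the full `limit` at or below `exit_level`, linear in between."""
    if index >= strike:
        return 0.0
    if index <= exit_level:
        return limit
    return limit * (strike - index) / (strike - exit_level)
```

The basis risk discussed in the abstract is exactly the mismatch between this index-driven payout and the farmer's realized loss.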
We investigate the market equilibrium for weather index insurance by assuming risk-averse farmers and a risk-averse insurer. When we consider two groups of farmers with different risks, we show that the low-risk group subsidizes the high-risk group when both pay the same premium for the index insurance. Further, analyzing an index insurance in an informal risk-sharing environment, we observe that the demand for the index insurance can be increased by selling it to a group of farmers who informally share the risk based on the insurance payout, because this reduces the adverse effect of the basis risk. Besides that, we analyze the combination of an index insurance with a gap insurance. Such a combination can increase the demand and reduce the basis risk of the index insurance if we choose the correct levels of the premium and of the gap insurance cover. Moreover, our work shows that index insurance can be a good alternative to proportional and excess-of-loss reinsurance when it is issued at a low enough price.
Laser-induced interstitial thermotherapy (LITT) is a minimally invasive procedure to destroy liver
tumors through thermal ablation. Mathematical models are the basis for computer simulations
of LITT, which support the practitioner in planning and monitoring the therapy.
In this thesis, we propose three potential extensions of an established mathematical model of
LITT, which is based on two nonlinearly coupled partial differential equations (PDEs) modeling
the distribution of the temperature and the laser radiation in the liver.
First, we introduce the Cattaneo–LITT model for delayed heat transfer in this context, prove its
well-posedness and study the effect of an inherent delay parameter numerically.
Second, we model the influence of large blood vessels in the heat-transfer model by means
of a spatially varying blood-perfusion rate. This parameter is unknown at the beginning of
each therapy because it depends on the individual patient and the placement of the LITT
applicator relative to the liver. We propose a PDE-constrained optimal-control problem for the
identification of the blood-perfusion rate, prove the existence of an optimal control and prove
necessary first-order optimality conditions. Furthermore, we introduce a numerical example
based on which we demonstrate the algorithmic solution of this problem.
Third, we propose a reformulation of the well-known PN model hierarchy with Marshak
boundary conditions as a coupled system of second-order PDEs to approximate the radiative-transfer
equation. The new model hierarchy is derived in a general context and is applicable
to a wide range of applications other than LITT. It can be generated in an automated way by
means of algebraic transformations and allows the solution with standard finite-element tools.
We validate our formulation in a general context by means of various numerical experiments.
Finally, we investigate the coupling of this new model hierarchy with the LITT model numerically.
Interest in robust covering problems is great and manifold, especially due to the plenitude of real-world applications and the additional incorporation of uncertainties which are inherent in many practical settings.
In this thesis, for a fixed positive integer \(q\), we introduce and elaborate on a new robust covering problem, called Robust Min-\(q\)-Multiset-Multicover, and related problems.
The common idea of these problems is, given a collection of subsets of a ground set, to decide on the frequency with which each subset is chosen so as to satisfy the uncertain demand of every occurring element.
Yet, in contrast to general covering problems, the subsets may only cover at most \(q\) of their elements.
Varying the properties of the occurring elements leads to a selection of four interesting robust covering problems which are investigated.
We extensively analyze the complexity of the arising problems, also for various restrictions to particular classes of uncertainty sets.
For a given problem, we either provide a polynomial time algorithm or show that, unless \(\text{P}=\text{NP}\), such an algorithm cannot exist.
Furthermore, in the majority of cases, we even give evidence that a polynomial time approximation scheme is most likely not possible for the hard problem variants.
Moreover, we aim for approximations and approximation algorithms for these hard variants, where we focus on Robust Min-\(q\)-Multiset-Multicover.
For a wide class of uncertainty sets, we present the first known polynomial time approximation algorithm for Robust Min-\(q\)-Multiset-Multicover having a provable worst-case performance guarantee.
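For intuition, a greedy heuristic for the deterministic special case (a fixed, known demand vector instead of an uncertainty set) might look as follows; this is a hypothetical formalization for illustration only, not one of the algorithms from the thesis:

```python
def greedy_cover(subsets, demand, q):
    """Greedy heuristic for a deterministic q-multiset-multicover: repeatedly
    buy one copy of the subset that can still serve the most remaining demand,
    where each copy serves at most q demand units of its elements
    (illustrative reading of the problem; subsets are tuples of elements)."""
    remaining = dict(demand)
    copies = 0
    while any(remaining.values()):
        def gain(s):
            return min(q, sum(remaining.get(e, 0) for e in s))
        best = max(subsets, key=gain)
        if gain(best) == 0:
            return None  # the remaining demand cannot be covered
        left = q  # serve up to q demand units from the chosen subset
        for e in best:
            served = min(left, remaining.get(e, 0))
            remaining[e] = remaining.get(e, 0) - served
            left -= served
        copies += 1
    return copies
```

With total demand D, at least ceil(D/q) copies are always necessary, which this heuristic need not attain; the thesis instead develops approximation algorithms with provable worst-case guarantees for the robust problem.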
This text summarizes some important basics that allow a quick start into working with the Arduino and the Raspberry Pi. We do not discuss the basic functions of these devices, since numerous tutorials for this are available on the internet. Instead, we concentrate above all on the control of sensors and actuators and discuss several project ideas that can enrich interdisciplinary STEM (MINT) project lessons.
LinTim is a scientific software toolbox that has been under development since 2007 and makes it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2021.10. For more information, see https://www.lintim.net
The high complexity of civil engineering structures makes it difficult to satisfactorily evaluate their reliability. However, a good risk assessment of such structures is incredibly important to avert dangers and possible disasters for public life. For this purpose, we need algorithms that reliably deliver estimates for their failure probabilities with high efficiency and whose results enable a better understanding of their reliability. This is a major challenge, especially when dynamics, for example due to uncertainties or time-dependent states, must be included in the model.
The contributions are centered around Subset Simulation, a very popular adaptive Monte Carlo method for reliability analysis in the engineering sciences. It estimates small failure probabilities in high dimensions particularly well and is therefore tailored to the demands of many complex problems. We modify Subset Simulation and couple it with interpolation methods in order to keep its remarkable properties and obtain all conditional failure probabilities with respect to one variable of the structural reliability model. This covers many sorts of model dynamics with several model constellations, such as time-dependent modeling, sensitivity and uncertainty, in an efficient way, requiring computational demands similar to those of a static reliability analysis of one model constellation by Subset Simulation. The algorithm offers many new opportunities for reliability evaluation and can even be used to verify results of Subset Simulation by artificially manipulating the geometry of the underlying limit state in numerous ways, allowing it to provide correct results where Subset Simulation systematically fails. To improve understanding and further account for model uncertainties, we present a new visualization technique that matches the extensive information on reliability obtained as a result of the novel algorithm.
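For readers unfamiliar with the method, a bare-bones version of standard Subset Simulation (not the modified, interpolation-coupled algorithm of this thesis) can be sketched as follows; it estimates P[g(X) <= 0] for independent standard normal inputs with a simple single-component Metropolis resampler:

```python
import math
import random

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    """Basic Subset Simulation estimate of P[g(X) <= 0] for independent
    standard normal X (minimal sketch with one Metropolis move per step)."""
    rng = random.Random(seed)
    samples = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    for _ in range(20):  # at most 20 intermediate levels
        vals = sorted(g(x) for x in samples)
        level = vals[int(p0 * n)]  # adaptive intermediate threshold
        if level <= 0:  # failure domain reached: final estimate
            return prob * sum(1 for x in samples if g(x) <= 0) / n
        prob *= p0
        # seeds follow the conditional distribution given g(x) <= level
        samples = [x for x in samples if g(x) <= level]
        i = 0
        while len(samples) < n:  # grow back to n by Metropolis moves
            x = samples[i]
            cand = x[:]
            j = rng.randrange(dim)
            cand[j] += rng.gauss(0, 1)
            accept = math.exp(-0.5 * (cand[j] ** 2 - x[j] ** 2))
            if rng.random() < accept and g(cand) <= level:
                samples.append(cand)
            else:
                samples.append(x[:])
            i += 1
    return prob
```

For g(x) = 3 - (x1 + x2)/sqrt(2) the exact failure probability is Phi(-3), roughly 1.35e-3; crude Monte Carlo would need millions of samples for a comparably stable estimate.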
In addition to these extensions, we are also dedicated to the fundamental analysis of Subset Simulation, partially bridging the gap between theory and simulation results where inconsistencies exist. Based on these findings, we extend the practical recommendations on the selection of the intermediate probability with respect to the implementation of the algorithm and derive a formula for correcting the bias. For a better understanding, we also provide another stochastic interpretation of the algorithm and offer alternative implementations which adhere to the theoretical assumptions typically made in its analysis.
Simplified ODE models describing the blood flow rate are driven by the pressure gradient. However, if the orientation of the blood flow in the human body is taken as the positive direction, a negative pressure gradient forces the valve to shut, which stops the flow through the valve; the flow rate is then zero, whereas the pressure dynamics are still described by an ODE.
The presence of ODEs together with algebraic constraints and sudden changes of the system characteristics yields systems of switched differential-algebraic equations (swDAEs). The alternating dynamics of the heart can be modelled well by means of swDAEs. Moreover, to study pulse
wave propagation in arteries and veins, PDE models have been developed. The connection between the heart and the vessels leads to a coupling of PDEs and swDAEs. This model motivates the study of PDEs coupled with swDAEs for which the information exchange happens at the PDE boundaries: the swDAE provides boundary conditions to the PDE, and the PDE outputs serve as inputs to the swDAE. Such coupled systems occur, e.g., when modelling power grids using
telegrapher’s equations with switches, water flow networks with valves and district
heating networks with rapid consumption changes. Solutions of swDAEs might
include jumps, Dirac impulses and their derivatives of arbitrary high orders. As outputs of
swDAE read as boundary conditions of PDE, a rigorous solution framework for PDE must
be developed so that jumps, Dirac impulses and their derivatives are allowed at PDE boundaries
and in PDE solutions. This is a wider solution class than that of solutions of small bounded variation (BV) used, for instance, in earlier works where nonlinear hyperbolic PDEs are coupled with ODEs. Similarly, in related work, the solutions to switched linear PDEs with source terms are restricted to the class of BV. However, in the presence of Dirac impulses and their derivatives,
BV functions cannot handle the coupled systems including DAEs with index greater than one.
Therefore, hyperbolic PDEs coupled with swDAEs of index one will be studied in the BV setting, whereas couplings with swDAEs whose index is greater than one will be investigated in the distributional sense. To this end, the 1D space of piecewise-smooth distributions is extended to a 2D
piecewise-smooth distributional solution framework. This 2D space of piecewise-smooth distributions allows trace evaluations at the boundaries of the PDE. Moreover, a relationship between solutions to the coupled system and switched delay DAEs is established. The coupling structure
in this thesis forms a rather general framework. In fact, any arbitrary network, where PDEs
are represented by edges and (switched) DAEs by nodes, is covered via this structure. Given
a network, by rescaling the spatial domains, which modifies the coefficient matrices by a constant, each PDE can be defined on the same interval. This leads to the formulation of a single PDE whose unknown stacks the unknowns of all PDEs and whose coefficient matrix is block diagonal. Likewise, all swDAEs are reformulated by stacking their unknowns and assembling their coefficient matrices into a block diagonal matrix, so that all nodes of the network are expressed as a single swDAE.
The results are illustrated by numerical simulations of the power grid and simplified circulatory
system examples. Numerical results for the power grid display the evolution of jumps
and Dirac impulses caused by initial and boundary conditions as a result of instant switches.
On the other hand, the analysis and numerical results for the simplified circulatory system do
not entail a Dirac impulse, for otherwise such an entity would destroy the entire system. Yet
jumps in the flow rate in the numerical results can come about due to opening and closure of
valves, which suits clinical and physiological findings. Regarding physiological parameters,
numerical results obtained in this thesis for the simplified circulatory system agree well with
medical data and findings from the literature used for validation.
Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios
(2021)
This dissertation consists of three independent parts: The yield curve shapes generated by interest rate models, the yield curve forecasting, and the application of the chance-risk classification to a portfolio of pension products. As a component of the capital market model, the yield curve influences the chance-risk classification which was introduced to improve the comparability of pension products and strengthen consumer protection. Consequently, all three topics have a major impact on this essential safeguard.
Firstly, we focus on the obtained yield curve shapes of the Vasicek interest rate models. We extend the existing studies on the attainable yield curve shapes in the one-factor Vasicek model by analysis of the curvature. Further, we show that the two-factor Vasicek model can explain significantly more effects that are observed at the market than its one-factor variant. Among them is the occurrence of dipped yield curves.
We further introduce a general change-of-measure framework for the Monte Carlo simulation of the Vasicek model under a subjective measure. This can be used to avoid a far too high frequency of inverted yield curves with growing time.
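The yield curve shapes discussed here derive from the closed-form zero-coupon bond prices of the one-factor Vasicek model dr = a(b - r)dt + sigma dW. A small sketch with illustrative parameters (not those of the dissertation), producing a normal, upward-sloping curve when the short rate starts far below the long-run mean:

```python
import math

def vasicek_yield(r0, tau, a, b, sigma):
    """Continuously compounded zero-coupon yield for maturity `tau` in the
    one-factor Vasicek model dr = a(b - r)dt + sigma dW (textbook formula)."""
    B = (1 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a * a * b - sigma * sigma / 2) / (a * a)
                 - sigma * sigma * B * B / (4 * a))
    price = A * math.exp(-B * r0)  # zero-coupon bond price P(0, tau)
    return -math.log(price) / tau

# an upward-sloping curve: short rate far below the long-run mean
curve = [vasicek_yield(0.01, t, a=0.3, b=0.04, sigma=0.01) for t in range(1, 31)]
```

Varying r0 relative to the asymptotic yield b - sigma^2/(2a^2) produces the classical normal, humped and inverted shapes; the dipped shapes mentioned above require the two-factor model.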
Secondly, we examine different time series models, including machine learning algorithms, for forecasting the yield curve. For this, we consider statistical time series models such as autoregression and vector autoregression. Their performances are compared with the performance of a multilayer perceptron, a fully connected feed-forward neural network. For this purpose, we develop an extended approach for the hyperparameter optimization of the perceptron which is based on standard procedures like Grid and Random Search but allows searching a larger hyperparameter space. Our investigation shows that multilayer perceptrons outperform statistical models for long forecast horizons.
The third part deals with the chance-risk classification of state-subsidized pension products in Germany as well as its relevance for customer consulting. To optimize the use of the chance-risk classes assigned by Produktinformationsstelle Altersvorsorge gGmbH, we develop a procedure for determining the chance-risk class of different portfolios of state-subsidized pension products under the constraint that the portfolio chance-risk class does not exceed the customer's risk preference. For this, we consider a portfolio consisting of two new pension products as well as a second one containing a product already owned by the customer as well as the offer of a new one. This is of particular interest for customer consulting and can include other assets of the customer. We examine the properties of various chance and risk parameters as well as their corresponding mappings and show that a diversification effect exists. Based on the properties, we conclude that the average final contract values have to be used to obtain the upper bound of the portfolio chance-risk class. Furthermore, we develop an approach for determining the chance-risk class over the contract term since the chance-risk class is only assigned at the beginning of the accumulation phase. On the one hand, we apply the current legal situation, but on the other hand, we suggest an approach that requires further simulations. Finally, we translate our results into recommendations for customer consultation.
This thesis consists of two parts: the theoretical background of (R)ABSDEs including basic theorems, theoretical proofs and properties (Chapters 2-4), as well as numerical algorithms and simulations for (R)ABSDEs (Chapter 5). For the theoretical part, we study ABSDEs (Chapter 2), RABSDEs with one obstacle (Chapter 3) and RABSDEs with two obstacles (Chapter 4) in the defaultable setting, respectively, including the existence and uniqueness theorems, applications, the comparison theorem for ABSDEs, and their relations with PDEs and stochastic differential delay equations (SDDEs). The numerical part (Chapter 5) introduces two main algorithms, a discrete penalization scheme and a discrete reflected scheme, based on a random walk approximation of the Brownian motion as well as a discrete approximation of the default martingale; we give the convergence results of the algorithms and provide a numerical example and an application to American game options in order to illustrate their performance.
Simulating the flow of water in district heating networks requires numerical methods which are independent of the CFL condition. We develop a high-order scheme for networks of advection equations that allows large time steps. With the MOOD technique, unphysical oscillations in non-smooth solutions are avoided. Numerical tests show the applicability to real networks.
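The simplest CFL-independent scheme for a single pipe is first-order implicit upwind, shown here only to illustrate unconditional stability for large time steps; the paper's scheme is high order and uses MOOD limiting, which this sketch does not attempt:

```python
def implicit_upwind_step(u, inflow, c, dt, dx):
    """One backward-Euler upwind step for u_t + c u_x = 0 with c > 0 on a
    single pipe, with prescribed inflow at the left boundary. The scheme is
    unconditionally stable, so dt may exceed the explicit CFL limit dx / c
    (first-order sketch, not the paper's high-order MOOD scheme)."""
    lam = c * dt / dx
    new = u[:]
    left = inflow
    # the bidiagonal system (1 + lam) * new[i] - lam * new[i-1] = u[i]
    # is solved exactly by one forward sweep:
    for i in range(len(u)):
        new[i] = (u[i] + lam * left) / (1 + lam)
        left = new[i]
    return new
```

Constant states are preserved exactly for any time step, and the update remains bounded between inflow and initial data, at the price of strong numerical diffusion that the high-order scheme is designed to avoid.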
Dealing with uncertain structures or data has lately been getting much attention in discrete optimization. This thesis addresses two different areas in discrete optimization: Connectivity and covering.
When discussing uncertain structures in networks it is often of interest to determine how many vertices or edges may fail in order for the network to stay connected.
Connectivity is a broad, well-studied topic in graph theory. One of the most important results in this area is Menger's Theorem, which states that the minimum number of vertices needed to separate two non-adjacent vertices equals the maximum number of internally vertex-disjoint paths between these vertices. Here, we discuss mixed forms of connectivity in which both vertices and edges are removed from a graph at the same time. The Beineke-Harary Conjecture states that for any two distinct vertices that can be separated with k vertices and l edges, but not with k-1 vertices and l edges or k vertices and l-1 edges, there exist k+l edge-disjoint paths between them of which k are internally vertex-disjoint. In contrast to Menger's Theorem, the existence of the paths alone is not sufficient for the connectivity statement to hold. Our main contribution is a proof of the Beineke-Harary Conjecture for the case that l equals 2.
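The edge version of Menger's Theorem referenced above can be checked computationally: the maximum number of edge-disjoint s-t paths equals the value of a unit-capacity maximum flow. A small sketch with BFS augmenting paths:

```python
from collections import deque

def edge_disjoint_paths(n, edges, s, t):
    """Maximum number of edge-disjoint s-t paths in an undirected graph on
    vertices 0..n-1, computed as a unit-capacity max-flow (Menger, edge form)."""
    cap = {}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        for a, b in ((u, v), (v, u)):  # one arc per direction
            if b not in adj[a]:
                adj[a].append(b)
            cap[a, b] = cap.get((a, b), 0) + 1
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:  # push one unit along the path
            u = parent[v]
            cap[u, v] -= 1
            cap[v, u] += 1
            v = u
        flow += 1
```

The mixed (k vertices plus l edges) separators of the Beineke-Harary setting have no such simple flow formulation, which is part of what makes the conjecture hard.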
We also consider different problems from the area of facility location and covering. We study problems in which we are given sets of locations and regions, where each region has an assigned number of clients. We are looking for an allocation of suppliers to the locations such that each client is served by some supplier. The notable difference to other covering problems is that we assume that each supplier may only serve a fixed number of clients, which is not part of the input. We discuss the complexity of and solution approaches for three such problems which vary in the way the clients are assigned to the suppliers.
Linear algebra, together with polynomial arithmetic, is the foundation of computer algebra. The algorithms have improved over the last 20 years, and the current state-of-the-art algorithms for the matrix inverse, the solution of a linear system and the determinant have sub-cubic theoretical complexity. This thesis presents fast and practical algorithms for some classical problems in linear algebra over number fields and polynomial rings. Here, a number field is a finite extension of the field of rational numbers, and the polynomial rings considered in this thesis are over finite fields.
One of the key problems of symbolic computation is intermediate coefficient swell: the bit length of intermediate results can grow during the computation compared to those in the input and output. The standard strategy to overcome this is not to compute the number directly but to compute it modulo some other numbers, using either the Chinese remainder theorem (CRT) or a variation of Newton-Hensel lifting. Often, the final step of these algorithms is combined with reconstruction methods such as rational reconstruction to convert the integral result into the rational solution. Here, we present reconstruction methods over number fields with a fast and simple vector-reconstruction algorithm.
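The CRT-plus-rational-reconstruction pipeline described above can be illustrated over the integers (the thesis works over number fields, where a vector-reconstruction variant replaces this scalar routine):

```python
from math import gcd

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli: returns (x, M)
    with x = r_i (mod m_i) for all i and M the product of the moduli."""
    x, m = 0, 1
    for r, p in zip(residues, moduli):
        t = (r - x) * pow(m, -1, p) % p  # lift x from mod m to mod m*p
        x, m = x + m * t, m * p
    return x % m, m

def rational_reconstruction(a, m):
    """Recover n/d with |n|, d <= sqrt(m/2) from a = n * d^{-1} (mod m) by
    stopping the extended Euclidean algorithm early (textbook method)."""
    bound = int((m // 2) ** 0.5)
    r0, r1, t0, t1 = m, a % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    if abs(t1) <= bound and gcd(r1, abs(t1)) == 1:
        return (r1, t1) if t1 > 0 else (-r1, -t1)
    return None

# recover 22/7 from its images modulo three primes
primes = [101, 103, 107]
images = [22 * pow(7, -1, p) % p for p in primes]
a, m = crt(images, primes)
result = rational_reconstruction(a, m)
```

Computing modulo single primes keeps intermediate coefficients small; the rational result is only assembled once the combined modulus exceeds twice the square of the largest numerator and denominator.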
The state-of-the-art method for computing the determinant over the integers is due to Storjohann. When generalizing his method to number fields, we encountered the problem that modules generated by the rows of a matrix over a number field are in general not free, so Storjohann's method cannot be used directly. Therefore, we have used the theory of pseudo-matrices to overcome this problem. As a sub-problem of this application, we generalized a unimodular certification method to pseudo-matrices: similar to the integer case, we check whether the determinant of the given pseudo-matrix is a unit by testing the integrality of the corresponding dual module using higher-order lifting.
One of the main algorithms in linear algebra is Dixon's algorithm for linear system solving. Traditionally this algorithm is used only for square systems having a unique solution. Here we generalize Dixon's algorithm to non-square linear system solving. As the solution is not unique, we use a basis of the kernel to normalize the solution. The implementation is accompanied by a fast kernel computation algorithm that also extends to computing the reduced row echelon form of a matrix over the integers and number fields.
The fast implementations for computing the characteristic polynomial and the minimal polynomial over number fields use the CRT-based modular approach. Finally, we extended Storjohann's determinant computation algorithm to polynomial rings over finite fields, with its sub-algorithms for reconstruction and unimodular certification. In this case, we face the problem of intermediate degree swell. To avoid this phenomenon, we used higher-order lifting techniques in the unimodular certification algorithm. We have successfully used the half-gcd approach to optimize rational polynomial reconstruction.
Life insurance companies are asked by the Solvency II regime to retain capital requirements against economically adverse developments. This ensures that they are continuously able to meet their payment obligations towards the policyholders. When relying on an internal model approach, an insurer's solvency capital requirement is defined as the 99.5% value-at-risk of its full loss probability distribution over the coming year. In the introductory part of this thesis, we provide the actuarial modeling tools and risk aggregation methods by which the companies can accomplish the derivations of these forecasts. Since the industry still lacks the computational capacities to fully simulate these distributions, the insurers have to refer to suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. We dedicate the first part of this thesis to establishing a theoretical framework of the LSMC method. We start with how LSMC for calculating capital requirements is related to its original use in American option pricing. Then we decompose LSMC into four steps. In the first one, the Monte Carlo simulation setting is defined. The second and third steps serve the calibration and validation of the proxy function, and the fourth step yields the loss distribution forecast by evaluating the proxy model. When guiding through the steps, we address practical challenges and propose an adaptive calibration algorithm. We complete with a slightly disguised real-world application. The second part builds upon the first one by taking up the LSMC framework and diving deeper into its calibration step. 
After a literature review and a basic recapitulation, various adaptive machine learning approaches relying on least-squares regression and model selection criteria are presented as solutions to the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of the regression ingredients mathematically and compare their approximation quality in slightly altered real-world experiments. Thereby, we perform sensitivity analyses, discuss numerical stability and run comprehensive out-of-sample tests. The scope of the analyzed regression variants extends to other high-dimensional variable selection applications. Life insurance contracts with early exercise features can be priced by LSMC as well due to their analogies to American options. In the third part of this thesis, equity-linked contracts with American-style surrender options and minimum interest rate guarantees payable upon contract termination are valued. We allow randomness and jumps in the movements of the interest rate, stochastic volatility, stock market and mortality. For the simultaneous valuation of numerous insurance contracts, a hybrid probability measure and an additional regression function are introduced. Furthermore, an efficient seed-related simulation procedure accounting for the forward discretization bias and a validation concept are proposed. An extensive numerical example rounds off the last part.
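The calibration step at the heart of LSMC is, in its simplest form, an ordinary least-squares polynomial regression of simulated losses on risk-factor scenarios. A one-dimensional sketch via the normal equations (real proxy models use many risk factors and the adaptive basis selection discussed above):

```python
def lsmc_proxy(scenarios, losses, degree=2):
    """Ordinary least-squares polynomial proxy for losses over a single risk
    factor, solved via the normal equations with Gaussian elimination
    (a minimal sketch of the LSMC calibration step)."""
    n = degree + 1
    X = [[x ** k for k in range(n)] for x in scenarios]  # monomial basis
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * y for row, y in zip(X, losses)) for i in range(n)]
    for i in range(n):  # elimination with partial pivoting
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    beta = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return lambda x: sum(c * x ** k for k, c in enumerate(beta))
```

Evaluating the fitted proxy on a large set of outer scenarios then yields the loss distribution forecast from which the 99.5% value-at-risk is read off.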
In this thesis one considers the periodic homogenization of a linearly coupled magneto-elastic model problem and focuses on the derivation of spectral methods to solve the obtained unit cell problem afterwards. In the beginning, the equations of linear elasticity and magnetism are presented together with the physical quantities used within. After specifying the model assumptions, the system of partial differential equations is rewritten in a weak form for which the existence and uniqueness of solutions is discussed. The model problem then undergoes a homogenization process where the original problem is approximated by a substitute problem with a repeating micro-structural geometry that was generated from a representative volume element (RVE). The following separation of scales, which can be achieved either by an asymptotic expansion or through a two-scale limit process, yields the homogenized problem on the macroscopic scale and the periodic unit cell problem. The latter is further analyzed using Fourier series, leading to periodic Lippmann-Schwinger type equations allowing for the development of matrix-free solvers. It is shown that, while it is possible to craft a scheme for the coupled problem from the purely elastic and magnetic Lippmann-Schwinger equations alone without much additional effort, a more general setting is provided when deriving a Lippmann-Schwinger equation for the coupled system directly. These numerical approaches are then validated with some analytically solvable test problems, before their performance is tested against each other for some more complex examples.
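In one space dimension the periodic Lippmann-Schwinger equation can be solved without an FFT, because the periodic Green operator reduces to removing the mean and dividing by the reference stiffness; the fixed point then recovers the harmonic mean as the effective stiffness. A scalar sketch of this basic-scheme idea (the thesis treats the coupled 3D magneto-elastic problem, where the Fourier representation is essential):

```python
def basic_scheme_1d(c, E=1.0, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the periodic Lippmann-Schwinger equation in
    1D with cellwise stiffness values `c` and prescribed mean strain E. In 1D
    the periodic Green operator is simply 'subtract the mean, divide by the
    reference stiffness c0', so no FFT is needed (illustrative sketch)."""
    n = len(c)
    c0 = (max(c) + min(c)) / 2  # reference medium ensures a contraction
    e = [E] * n  # start from the homogeneous strain field
    for _ in range(max_iter):
        tau = [(ci - c0) * ei for ci, ei in zip(c, e)]  # polarization stress
        mean_tau = sum(tau) / n
        new = [E - (t - mean_tau) / c0 for t in tau]
        diff = max(abs(x - y) for x, y in zip(new, e))
        e = new
        if diff < tol:
            break
    # effective stiffness = mean stress / mean strain (harmonic mean of c)
    return sum(ci * ei for ci, ei in zip(c, e)) / (n * E)
```

At the fixed point c(x)e(x) is constant, which is exactly the 1D equilibrium condition; the matrix-free character of the iteration is what carries over to the coupled magneto-elastic solvers of the thesis.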
Adjoint-Based Shape Optimization and Optimal Control with Applications to Microchannel Systems
(2021)
Optimization problems constrained by partial differential equations (PDEs) play an important role in many areas of science and engineering. They often arise in the optimization of technological applications, where the underlying physical effects are modeled by PDEs. This thesis investigates such problems in the context of shape optimization and optimal control with microchannel systems as novel applications. Such systems are used, e.g., as cooling systems, heat exchangers, or chemical reactors as their high surface-to-volume ratio, which results in beneficial heat and mass transfer characteristics, allows them to excel in these settings. Additionally, this thesis considers general PDE constrained optimization problems with particular regard to their efficient solution.
As our first application, we study a shape optimization problem for a microchannel cooling system: We rigorously analyze this problem, prove its shape differentiability, and calculate the corresponding shape derivative. Afterwards, we consider the numerical optimization of the cooling system for which we employ a hierarchy of reduced models derived via porous medium modeling and a dimension reduction technique. A comparison of the models in this context shows that the reduced models approximate the original one very accurately while requiring substantially less computational resources.
Our second application is the optimization of a chemical microchannel reactor for the Sabatier process using techniques from PDE constrained optimal control. To treat this problem, we introduce two models for the reactor and solve a parameter identification problem to determine the necessary kinetic reaction parameters for our models. Thereafter, we consider the optimization of the reactor's operating conditions with the objective of improving its product yield, which shows considerable potential for enhancing the design of the reactor.
To provide efficient solution techniques for general shape optimization problems, we introduce novel nonlinear conjugate gradient methods for PDE constrained shape optimization and analyze their performance on several well-established benchmark problems. Our results show that the proposed methods perform very well, making them efficient and appealing gradient-based shape optimization algorithms.
Finally, we continue recent software-based developments for PDE constrained optimization and present our novel open-source software package cashocs. Our software implements and automates the adjoint approach and, thus, facilitates the solution of general PDE constrained shape optimization and optimal control problems. Particularly, we highlight our software's user-friendly interface, straightforward applicability, and mesh independent behavior.
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around one or several sites of vaso-occlusion are typical for GBM and hint at a poor prognosis for patient survival.
This thesis focuses on studying the establishment and maintenance of these histological patterns specific to GBM with the aim of modeling the microlocal tumor environment under the influence of acidity, tissue anisotropy and hypoxia-induced angiogenesis. This aim is reached with two classes of models: multiscale and multiphase. Each of them features a reaction-diffusion equation (RDE) for the acidity acting as a chemorepellent and inhibitor of growth, coupled in a nonlinear way to a reaction-diffusion-taxis equation (RDTE) for glioma dynamics. The numerical simulations of the resulting systems are able to reproduce pseudopalisade-like patterns. The effect of tumor vascularization on these patterns is studied through a flux-limited model belonging to the multiscale class. In it, PDEs of reaction-diffusion-taxis type are deduced for glioma and endothelial cell (EC) densities, with flux-limited pH-taxis for the tumor and chemotaxis towards vascular endothelial growth factor (VEGF) for the ECs. These, in turn, are coupled to RDEs for acidity and for the VEGF produced by the tumor. The numerical simulations of the obtained system show pattern disruption and transient behavior due to hypoxia-induced angiogenesis. Moreover, comparing two upscaling techniques through numerical simulations, we observe that the macroscopic PDEs obtained via parabolic scaling (undirected tissue) are able to reproduce glioma patterns, while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell level descriptions of response to hypoxia and the way they are related.
Deligne-Lusztig theory allows the parametrization of generic character tables of finite groups of Lie type in terms of families of conjugacy classes and families of irreducible characters "independently" of \(q\).
Only in small cases does the theory also give all the values of the table.
For most of the groups, the completion of the table must be carried out with ad hoc methods.
The aim of the present work is to describe one possible computation which avoids Lusztig's theory of "character sheaves".
In particular, the theory of Gel'fand-Graev characters and Clifford theory is used to complete the generic character table of \(G={\rm Spin}_8^+(q)\) for \(q\) odd.
As an example of the computations, we also determine the character table of \({\rm SL}_4(q)\), for \(q\) odd.
In the process of finding character values, the following tools are developed.
By explicit use of the Bruhat decomposition of elements, the fusion of the unipotent classes of \(G\) is determined.
Among other things, this is used to compute the 2-parameter Green functions of every Levi subgroup of \(G\) with disconnected centre.
Furthermore, thanks to a certain action of the centre \(Z(G)\) on the characters of \(G\), it is shown how, in principle, the values of any character depend on its values at the unipotent elements.
It is important to consider \({\rm Spin}_8^+(q)\) as it is one of the "smallest" interesting examples for which Deligne--Lusztig theory is not sufficient to construct the whole character table.
The reason is related to the structure of \({\mathbf G}={\rm Spin}_8\), from which \(G\) is constructed.
Firstly, \({\mathbf G}\) has disconnected centre.
Secondly, \({\mathbf G}\) is the only simple algebraic group which has an outer group automorphism of order 3.
And finally, \(G\) can be realized as a subgroup of bigger groups, like \(E_6(q)\), \(E_7(q)\) or \(E_8(q)\).
The computation on \({\rm Spin}_8^+(q)\) serves as preparation for those cases.
The construction of number fields with given Galois group fits into the framework of the inverse Galois problem. This problem remains unsolved, although many partial results have been obtained over the last century.
Shafarevich proved in 1954 that every solvable group is realizable as the Galois group of a number field. Unfortunately, the proof does not provide a method to explicitly find such a field.
This work aims at producing a constructive version of the theorem by solving the following task: given a solvable group $G$ and a bound $B \in \mathbf N$, construct all normal number fields with Galois group $G$ and absolute discriminant bounded by $B$.
Since a field with solvable Galois group can be realized as a tower of abelian extensions, the main role in our algorithm is played by class field theory, which is the subject of the first part of this work.
The second half is devoted to the study of the relation between the group structure and the field through Galois correspondence.
In particular, we study the existence of obstructions to embedding problems and some criteria to predict the Galois group of an extension.
Lecture notes for the course "Character Theory of finite groups".
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problem of parameter estimation and portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we specifically aim at maximizing the utility of terminal wealth, focusing on the logarithmic and power utility functions. In addition to stock returns, we consider expert opinions as a second source of observations in order to improve the estimation of the drift and volatility parameters at different times and for the purpose of portfolio optimization.
In the first part, we assume that the drift term has a fixed distribution and that the volatility term is constant. We use the Kalman filter to combine the two types of observations. Moreover, we discuss how to transform this problem into a non-linear problem with Gaussian noise when the expert opinion is uniformly distributed; the generalized Kalman filter is then used to estimate the parameters.
In the second part, we assume that the drift and volatility of asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate the various quantities required by the EM algorithm. In addition, we focus on different ways of combining the two types of observations, expert opinions and asset returns. First, we use a linear combination and discuss how a logistic regression model can be used to quantify expert opinions. Second, we assume that expert opinions follow a mixed Dirichlet distribution; under this assumption, we use another probability measure to estimate the unnormalized filters needed for the EM algorithm.
In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how approximately optimal portfolio strategies can be obtained in different observation settings. We derive the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different observation settings in a simulation study.
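The fusion of returns and expert opinions via Kalman filtering can be illustrated in a drastically simplified setting: a minimal sketch for a scalar, static drift with a Gaussian prior and two independent Gaussian observations (one return-based, one expert-based). All names, values, and noise levels are invented for illustration; this is not the estimator from the thesis.

```python
def kalman_update(mean, var, obs, obs_var):
    """One Bayesian (Kalman) update of a Gaussian belief about the drift."""
    k = var / (var + obs_var)           # Kalman gain
    new_mean = mean + k * (obs - mean)  # shift belief towards the observation
    new_var = (1 - k) * var             # posterior variance shrinks
    return new_mean, new_var

# Prior belief about the drift, then fuse a noisy return and an expert opinion.
mean, var = 0.0, 1.0
mean, var = kalman_update(mean, var, obs=0.08, obs_var=0.5)  # stock return
mean, var = kalman_update(mean, var, obs=0.05, obs_var=0.2)  # expert opinion
print(round(mean, 5), round(var, 5))  # fused drift estimate and its variance
```

Note that the more precise expert observation (smaller `obs_var`) pulls the estimate more strongly, which is the qualitative effect expert opinions have in the filtering setting above.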
This article investigates a network interdiction problem on a tree network: given a subset of nodes chosen as facilities, an interdictor may dissect the network by removing a size-constrained set of edges, striving to degrade the service of the established facilities as much as possible. Here, we consider a reachability objective function, which is closely related to the covering objective function: the interdictor aims to minimize the number of customers that are still connected to any facility after interdiction. For the covering objective on general graphs, this problem is known to be NP-complete (Fröhlich and Ruzika, On the hardness of covering-interdiction problems, Theor. Comput. Sci., 2021). In contrast to this, we propose a polynomial-time solution algorithm to solve the problem on trees. The algorithm is based on dynamic programming and reveals the relation of this location-interdiction problem to knapsack-type problems. However, the input data for the dynamic program must be elaborately generated and relies on the theoretical results presented in this article. As a result, trees are the first known graph class that admits a polynomial-time algorithm for edge interdiction problems in the context of facility location planning.
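The knapsack connection can be made concrete in a heavily simplified special case: if each removable edge disconnects a known set of customers from all facilities and these sets are pairwise disjoint (which does not hold in general and is precisely what the article's dynamic program has to handle), then interdiction with edge-removal costs and a budget reduces to a 0/1 knapsack. The following is a sketch under these assumptions, not the authors' algorithm:

```python
def best_interdiction(edges, budget):
    """0/1 knapsack DP: edges = [(removal_cost, customers_disconnected)],
    assuming the customer sets affected by different edges are disjoint.
    Returns the maximum number of customers that can be disconnected."""
    best = [0] * (budget + 1)
    for cost, gain in edges:
        # iterate the budget in reverse so each edge is removed at most once
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + gain)
    return best[budget]

# Three removable edges (cost, customers cut off); interdiction budget 3.
print(best_interdiction([(2, 4), (1, 1), (2, 7)], 3))  # cuts (2, 7) and (1, 1): 8
```

Maximizing the disconnected customers is equivalent to minimizing the customers still covered, which is the reachability objective of the article.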
Linear evolution equations are usually considered for a time variable defined on an interval, where typically initial conditions or time periodicity of solutions are required to single out particular solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network, where coupling conditions are imposed at the branching points, so that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems, questions of well-posedness and regularity of solutions to parabolic problems are studied, along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches, an application of the Kalton–Weis theorem on the sum of closed operators and an explicit computation of a Green's function, we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time, akin to periodicity.
Insurance companies and banks regularly have to face stress tests imposed by regulatory authorities. To model their investment decision problems in a way that includes stress scenarios, we propose the worst-case portfolio approach; the resulting optimal portfolios then withstand the stress tests by construction. A central issue of the worst-case portfolio approach is that neither the time nor the order of occurrence of the stress scenarios is known; moreover, no probabilistic assumptions are made regarding the occurrence of the stresses. By defining the relative worst-case loss and introducing the concept of minimum constant portfolio processes, we generalize the traditional concepts of the indifference frontier and the indifference-optimality principle. We prove the existence of a minimum constant portfolio process that is optimal for the multi-stress worst-case problem. As a main result, we derive a verification theorem that provides conditions on Lagrange multipliers and nonlinear ordinary differential equations that support the construction of optimal worst-case portfolio strategies. The practical applicability of the verification theorem is demonstrated via the numerical solution of various worst-case problems with stresses. In particular, it is shown that an investor who chooses the worst-case optimal portfolio process may have a preference regarding the order of stresses, but there may also be stress scenarios where he or she is indifferent regarding the order and time of occurrence.
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical for GBM and hint at a poor prognosis for patient survival. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce by an upscaling process a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (not necessarily of Turing type), including pseudopalisades, can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell level descriptions of response to hypoxia and the way they are related.
The possibility of a premium adjustment in German private health insurance (PKV) depends on the value of the so-called triggering factor ("auslösender Faktor"), which is computed via a linear extrapolation of the loss ratios (Schadenquotienten) of the past three years. Its early, reliable prediction is of great importance from a risk management perspective. We therefore investigate a wide variety of forecasting approaches, ranging from classical time series methods and regression to neural networks and hybrid models. While regression with ARIMA errors performs best among the classical methods, a neural network that is combined with time series forecasting or trained on deseasonalized and detrended data shows the best overall behavior.
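The linear extrapolation of the past three years' loss ratios (Schadenquotienten) can be sketched as follows. This is an illustrative least-squares reading of the procedure; the exact regulatory formula is not reproduced here, and the numbers are made up:

```python
def extrapolate_next(values):
    """Fit a least-squares line through equally spaced values and
    extrapolate one step ahead."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # value extrapolated for the next year

# Loss ratios of the past three years (illustrative numbers):
print(round(extrapolate_next([1.02, 1.05, 1.09]), 4))  # 1.1233
```

A predicted next-year value like this is what the forecasting models in the study aim to anticipate as early and reliably as possible.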
A characterisation of the spaces \({\mathcal {G}}_K\) and \({\mathcal {G}}_K'\) introduced in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997) and Potthoff and Timpel (Potential Anal 4(6):637–654, 1995) is given. A first characterisation of these spaces, provided in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997), uses concepts of holomorphy on infinite dimensional spaces. We, instead, give a characterisation in terms of U-functionals, i.e., classical holomorphic functions on the one-dimensional field of complex numbers. We apply our new characterisation to derive new results concerning a stochastic transport equation and the stochastic heat equation with multiplicative noise.
Consider a linear realization of a matroid over a field. One associates with it a configuration polynomial and a symmetric bilinear form with linear homogeneous coefficients. The corresponding configuration hypersurface and its non-smooth locus support the respective first and second degeneracy schemes of the bilinear form. We show that these schemes are reduced and describe the effect of matroid connectivity: for (2-)connected matroids, the configuration hypersurface is integral, and the second degeneracy scheme is reduced Cohen–Macaulay of codimension 3. If the matroid is 3-connected, then the second degeneracy scheme is also integral. In the process, we describe the behavior of configuration polynomials, forms, and schemes with respect to various matroid constructions.
Consider the primitive equations on \(\mathbb{R}^2 \times (z_0, z_1)\) with initial data \(a\) of the form \(a = a_1 + a_2\), where \(a_1 \in \mathrm{BUC}_\sigma(\mathbb{R}^2; L^1(z_0, z_1))\) and \(a_2 \in L^\infty_\sigma(\mathbb{R}^2; L^1(z_0, z_1))\). These spaces are scaling-invariant and represent the anisotropic character of these equations. It is shown that for \(a_1\) arbitrarily large and \(a_2\) sufficiently small, this set of equations admits a unique strong solution which extends to a global one, and is thus strongly globally well posed for these data, provided \(a\) is periodic in the horizontal variables. The approach presented depends crucially on mapping properties of the hydrostatic Stokes semigroup in the \(L^\infty(L^1)\)-setting. It can be seen as the counterpart of the classical iteration schemes for the Navier–Stokes equations, now for the primitive equations in the \(L^\infty(L^1)\)-setting.
Over the past two decades, there has been much progress on the classification of symplectic linear quotient singularities V/G admitting a symplectic (equivalently, crepant) resolution of singularities. The classification is almost complete but there is an infinite series of groups in dimension 4—the symplectically primitive but complex imprimitive groups—and 10 exceptional groups up to dimension 10, for which it is still open. In this paper, we treat the remaining infinite series and prove that for all but possibly 39 cases there is no symplectic resolution. We thereby reduce the classification problem to finitely many open cases. We furthermore prove non-existence of a symplectic resolution for one exceptional group, leaving 39+9=48 open cases in total. We do not expect any of the remaining cases to admit a symplectic resolution.
We show that every convergent power series with monomial extended Jacobian ideal is right equivalent to a Thom–Sebastiani polynomial. This solves a problem posed by Hauser and Schicho. On the combinatorial side, we introduce a notion of Jacobian semigroup ideal involving a transversal matroid. For any such ideal, we construct a defining Thom–Sebastiani polynomial. On the analytic side, we show that power series with a quasihomogeneous extended Jacobian ideal are strongly Euler homogeneous. Due to a Mather–Yau-type theorem, such power series are determined by their Jacobian ideal up to right equivalence.
In this thesis we study a variant of the quadrature problem for stochastic differential equations (SDEs), namely the approximation of expectations \(\mathrm{E}(f(X))\), where \(X = (X(t))_{t \in [0,1]}\) is the solution of an SDE and \(f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R}\) is a functional, mapping each realization of \(X\) into the real numbers. The distinctive feature in this work is that we consider randomized (Monte Carlo) algorithms with random bits as their only source of randomness, whereas the algorithms commonly studied in the literature are allowed to sample from the uniform distribution on the unit interval, i.e., they do have access to random numbers from \([0,1]\).
By assumption, all further operations like, e.g., arithmetic operations, evaluations of elementary functions, and oracle calls to evaluate \(f\) are considered within the real number model of computation, i.e., they are carried out exactly.
In the following, we provide a detailed description of the quadrature problem, namely we are interested in the approximation of
\begin{align*}
S(f) = \mathrm{E}(f(X))
\end{align*}
for \(X\) being the \(r\)-dimensional solution of an autonomous SDE of the form
\begin{align*}
\mathrm{d}X(t) = a(X(t)) \, \mathrm{d}t + b(X(t)) \, \mathrm{d}W(t), \quad t \in [0,1],
\end{align*}
with deterministic initial value
\begin{align*}
X(0) = x_0 \in \mathbb{R}^r,
\end{align*}
and driven by a \(d\)-dimensional standard Brownian motion \(W\). Furthermore, the drift coefficient \(a \colon \mathbb{R}^r \to \mathbb{R}^r\) and the diffusion coefficient \(b \colon \mathbb{R}^r \to \mathbb{R}^{r \times d}\) are assumed to be globally Lipschitz continuous.
For the function classes
\begin{align*}
F_{\infty} = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{\sup}\bigr\}
\end{align*}
and
\begin{align*}
F_p = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{L_p}\bigr\}, \quad 1 \leq p < \infty,
\end{align*}
we have established the following.
\(\textit{Theorem 1.}\)
There exists a random bit multilevel Monte Carlo (MLMC) algorithm \(M\) using
\[
L = L(\varepsilon,F) = \begin{cases}\lceil\log_2(\varepsilon^{-2})\rceil, &\text{if} \ F = F_p,\\
\lceil\log_2(\varepsilon^{-2}) + \log_2(\log_2(\varepsilon^{-1}))\rceil, &\text{if} \ F = F_\infty
\end{cases}
\]
and replication numbers
\[
N_\ell = N_\ell(\varepsilon,F) = \begin{cases}
\lceil(L+1) \cdot 2^{-\ell} \cdot \varepsilon^{-2}\rceil, & \text{if} \ F = F_p,\\
\lceil(L+1) \cdot 2^{-\ell} \cdot \max(\ell,1) \cdot \varepsilon^{-2}\rceil, & \text{if} \ F=F_\infty
\end{cases}
\]
for \(\ell = 0,\ldots,L\), for which there exists a positive constant \(c\) such that
\begin{align*}
\mathrm{error}(M,F) = \sup_{f \in F} \bigl(\mathrm{E}\bigl[(S(f) - M(f))^2\bigr]\bigr)^{1/2} \leq c \cdot \varepsilon
\end{align*}
and
\begin{align*}
\mathrm{cost}(M,F) = \sup_{f \in F} \mathrm{E}(\mathrm{cost}(M,f)) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for every \(\varepsilon \in {]0,1/2[}\).
Hence, in terms of the \(\varepsilon\)-complexity
\begin{align*}
\mathrm{comp}(\varepsilon,F) = \inf\bigl\{\mathrm{cost}(M,F) \colon M \ \text{is a random bit MC algorithm}, \mathrm{error}(M,F) \leq \varepsilon\bigr\}
\end{align*}
we have established the upper bound
\begin{align*}
\mathrm{comp}(\varepsilon,F) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for some positive constant \(c\). That is, we have shown the same weak asymptotic upper bound as in the case of random numbers from \([0,1]\). Hence, in this sense, random bits are almost as powerful as random numbers for our computational problem.
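The parameter choices of Theorem 1 are fully explicit and can be evaluated directly. A minimal sketch computing the level count \(L\) and the replication numbers \(N_\ell\) for the case \(F = F_p\) (function and variable names are ours):

```python
import math

def mlmc_parameters(eps):
    """Levels L and replication numbers N_0, ..., N_L from Theorem 1
    for the function class F = F_p."""
    L = math.ceil(math.log2(eps ** -2))
    N = [math.ceil((L + 1) * 2 ** -l * eps ** -2) for l in range(L + 1)]
    return L, N

L, N = mlmc_parameters(0.1)
print(L, N[0], N[-1])  # for eps = 0.1: L = 7, N_0 = 800, N_L = 7
```

The geometric decay of \(N_\ell\) with the level \(\ell\) is what keeps the total cost of order \(\varepsilon^{-2}(\ln(\varepsilon^{-1}))^2\), as stated in the complexity bound above.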
Moreover, we present numerical results for a non-analyzed adaptive random bit MLMC Euler algorithm, in the particular cases of the Brownian motion, the geometric Brownian motion, the Ornstein-Uhlenbeck SDE and the Cox-Ingersoll-Ross SDE. We also provide a numerical comparison to the corresponding adaptive random number MLMC Euler method.
A key challenge in the analysis of the algorithm in Theorem 1 is the approximation of probability distributions by means of random bits. This problem is very closely related to the quantization problem, i.e., the optimal approximation of a given probability measure (on a separable Hilbert space) by a probability measure with finite support.
Though we have shown that the random bit approximation of the standard normal distribution is 'harder' than the corresponding quantization problem (lower weak rate of convergence), we have been able to establish the same weak rate of convergence as for the corresponding quantization problem in the case of the distribution of a Brownian bridge on \(L_2([0,1])\), the distribution of the solution of a scalar SDE on \(L_2([0,1])\), and the distribution of a centered Gaussian random element in a separable Hilbert space.
Elementare Zahlentheorie (Elementary Number Theory)
(2020)
Synapses are connections between different nerve cells that form an essential link in neural signal transmission. One generally distinguishes between electrical and chemical synapses; chemical synapses are more common in the human brain and are the type we deal with in this work.
In chemical synapses, small container-like objects called vesicles fill with neurotransmitter and expel it from the cell during synaptic transmission. This process is vital for communication between neurons. However, to the best of our knowledge, no mathematical models that take the different filling states of the vesicles into account had been developed before this thesis.
In this thesis we propose a novel mathematical model for modeling synaptic transmission at chemical synapses which includes the description of vesicles of different filling states. The model consists of a transport equation (for the vesicle growth process) plus three ordinary differential equations (ODEs) and focuses on the presynapse and synaptic cleft.
The well-posedness of this partial differential equation (PDE) system is proved in detail. We also propose a few different variations and related models. In particular, an ODE system is derived and a delay differential equation (DDE) system is formulated. We then use nonlinear optimization methods for data fitting to test some of the models on data made available to us by the Animal Physiology group at TU Kaiserslautern.
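As a purely illustrative toy example (not the model of the thesis), a hypothetical two-pool vesicle scheme, in which a reserve pool feeds a readily releasable pool that releases transmitter into the cleft, can be integrated with an explicit Euler step. All rates and pool names are invented:

```python
def simulate(t_end, dt, refill=0.5, release=2.0):
    """Toy two-pool vesicle ODE model: a reserve pool R refills a
    readily releasable pool P, which releases transmitter T.
    Integrated with the explicit Euler method."""
    R, P, T = 1.0, 0.0, 0.0  # start with a full reserve pool
    for _ in range(int(t_end / dt)):
        dR = -refill * R                # reserve pool drains
        dP = refill * R - release * P   # releasable pool fills and releases
        dT = release * P                # transmitter accumulates in the cleft
        R, P, T = R + dt * dR, P + dt * dP, T + dt * dT
    return R, P, T

R, P, T = simulate(t_end=5.0, dt=0.001)
```

Since the three right-hand sides sum to zero, the total mass R + P + T is conserved, a basic sanity check any refinement of such a model should preserve.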
Einführung in die Algebra (Introduction to Algebra)
(2020)
Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method which is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the best Sharpe ratio assets to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters, and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new portfolio with equal weights, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework of DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where in each cluster the assets are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
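The construction of the cluster portfolio described above can be sketched in a few lines, assuming the clusters and Sharpe ratios are already computed (all asset names and numbers here are illustrative):

```python
def cluster_portfolio(sharpe, clusters):
    """Pick the best-Sharpe asset of each cluster; weight them equally."""
    picks = [max(cluster, key=lambda a: sharpe[a]) for cluster in clusters]
    w = 1.0 / len(picks)
    return {asset: w for asset in picks}

# Illustrative Sharpe ratios and non-overlapping clusters.
sharpe = {"A": 0.8, "B": 0.5, "C": 1.1, "D": 0.9, "E": 0.3}
clusters = [["A", "B"], ["C", "D"], ["E"]]
print(cluster_portfolio(sharpe, clusters))  # equal weights on A, C, E
```

The result is a small 1/N-style portfolio over one representative per cluster, which is exactly the trade-off between diversification and portfolio size discussed in the abstract.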
LinTim is a scientific software toolbox that has been under development since 2007 and makes it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope.
This document is the documentation for version 2020.02.
For more information, see https://www.lintim.net
Operator semigroups and infinite dimensional analysis applied to problems from mathematical physics
(2020)
In this dissertation we treat several problems from mathematical physics via methods from functional analysis and probability theory, in particular operator semigroups. The thesis consists thematically of two parts.
In the first part we consider so-called generalized stochastic Hamiltonian systems. These are generalizations of Langevin dynamics which describe interacting particles moving in a surrounding medium. From a mathematical point of view, these systems are stochastic differential equations with a degenerate diffusion coefficient. We construct weak solutions of these equations via the corresponding martingale problem. For this purpose, we prove essential m-dissipativity of the degenerate and non-sectorial Itô differential operator. Further, we apply results from analytic and probabilistic potential theory to obtain an associated Markov process. Afterwards we show our main result, the convergence in law of the positions of the particles in the overdamped regime, the so-called overdamped limit, to a distorted Brownian motion. To this end, we show convergence of the associated operator semigroups in the framework of Kuwae-Shioya. Further, we establish a tightness result for the approximations which, together with the convergence of the semigroups, proves weak convergence of the laws.
In the second part we deal with problems from infinite dimensional analysis. Three different issues are considered. The first one is an improvement of a characterization theorem for the so-called regular test functions and distributions of white noise analysis. As an application, we analyze a stochastic transport equation in terms of the regularity of its solution in the space of regular distributions. The last two problems are from the field of relativistic quantum field theory. In the first one, the \(\Phi^4_3\)-model of quantum field theory is under consideration. We show that the Schwinger functions of this model have a representation as the moments of a positive Hida distribution from white noise analysis. In the last chapter we construct a non-trivial relativistic quantum field in arbitrary space-time dimension. The field is given via Schwinger functions, for which we establish all axioms of Osterwalder and Schrader. This yields, via the reconstruction theorem of Osterwalder and Schrader, a unique relativistic quantum field. The Schwinger functions are given as the moments of a non-Gaussian measure on the space of tempered distributions. We obtain the measure as a superposition of Gaussian measures. In particular, this measure is itself non-Gaussian, which implies that the field under consideration is not a generalized free field.
In a recent paper, G. Malle and G. Robinson proposed a modular analogue of Brauer's famous \( k(B) \)-conjecture. If \( B \) is a \( p \)-block of a finite group with defect group \( D \), then they conjecture that \( l(B) \leq p^r \), where \( r \) is the sectional \( p \)-rank of \( D \). Since this conjecture is relatively new, there is obviously still a lot of work to do. This thesis is concerned with proving their conjecture for the finite groups of exceptional Lie type.
The famous Mather-Yau theorem in singularity theory yields a bijection between isomorphism classes of germs of isolated hypersurface singularities and their respective Tjurina algebras.
This result has been generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, the objective of this thesis is to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result of Gaffney and Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, using the theory of graded analytic algebras by G. Scheja and H. Wiebe. We use the aforementioned theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin. We deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity play an important role in the investigation of properties of the aforementioned singularities. Both notions were introduced by K. Saito in 1980. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm that decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.