Kaiserslautern - Fachbereich Mathematik
In the last few years a lot of work has been done on the investigation of Brownian motion with point interactions in one and higher dimensions. Roughly speaking, a Brownian motion with point interaction is nothing other than a Brownian motion whose generator is perturbed by a measure supported at a single point.
The purpose of the present work is to introduce curve interactions of the two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We will understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely differentiable functions with compact support in \(\mathbb{R}^{2}\) which vanish on the closed curve. We will give a full description of all these self-adjoint extensions.
In the second chapter we will prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
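As a rough numerical illustration of the last term, the local time \(L(t,\mathcal{C})\) can be approximated by the occupation time of a shrinking tube around \(\mathcal{C}\), scaled by the tube width. A minimal Python sketch of this approximation for the unit circle (an illustration of the object only, not a construction used in the thesis):

import numpy as np

def local_time_on_circle(T=1.0, n_steps=200_000, eps=0.01, seed=0):
    # Simulate a planar Brownian motion started at the origin.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    steps = rng.normal(scale=np.sqrt(dt), size=(n_steps, 2))
    path = np.cumsum(steps, axis=0)
    # Occupation-time approximation of the local time on the unit circle:
    # L(T, C) ~ (1 / (2 * eps)) * time spent in the tube {dist(x, C) < eps}.
    dist_to_circle = np.abs(np.linalg.norm(path, axis=1) - 1.0)
    return (dist_to_circle < eps).sum() * dt / (2.0 * eps)

print(local_time_on_circle())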
We will use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes which behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case, since there one cannot move while the Brownian motion is in the point.
By establishing an approximation of a curve interaction by operators of the form \(\Delta + V_{n}\) with "nice" potentials \(V_{n}\), we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities tend, in certain cases, to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform a spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for the non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio, and we show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the use of the SIMPLE preconditioner leads to a more accurate practical computation of the permeability of tight porous media.
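For reference, the textbook Uzawa method is a Richardson iteration on the pressure Schur complement \(S = BA^{-1}B^{T}\) of the saddle-point system, and its speed is governed by the spectrum of \(S\). The following minimal Python sketch, on a small synthetic saddle-point system rather than the staggered-grid Stokes discretization studied here, illustrates the generic scheme and the role of the condition number of \(S\):

import numpy as np

def uzawa(A, B, f, omega, tol=1e-10, max_iter=1000):
    # Uzawa = Richardson iteration on the pressure Schur complement
    # S = B A^{-1} B^T:  p_{k+1} = p_k + omega * B A^{-1} (f - B^T p_k).
    p = np.zeros(B.shape[0])
    for k in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)  # inner velocity solve
        r = B @ u                            # divergence residual
        if np.linalg.norm(r) < tol:
            break
        p = p + omega * r
    return u, p, k

# Small synthetic saddle-point system (illustrative only).
rng = np.random.default_rng(1)
n, m = 20, 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # SPD "velocity" block
B = rng.normal(size=(m, n))        # full-rank "divergence" block
f = rng.normal(size=n)

S = B @ np.linalg.solve(A, B.T)    # explicit Schur complement (small demo)
lam = np.linalg.eigvalsh(S)
omega = 2.0 / (lam[0] + lam[-1])   # optimal Richardson damping for SPD S
u, p, iters = uzawa(A, B, f, omega)
print("kappa(S) =", lam[-1] / lam[0], " iterations:", iters)

The worse the conditioning of \(S\), the smaller the admissible damping and the slower the convergence, which mirrors the stagnation observed for tight-rock geometries.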
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
The present work was motivated by the Feynman-Kac formulas presented in A.N. Borodin (2000) [Version of the Feynman-Kac Formula. Journal of Mathematical Sciences, 99(2):1044-1052, 2000] and in B. Simon (2000) [A Feynman-Kac Formula for Unbounded Semigroups. Canadian Math. Soc. Conf. Proc., 28:317-321, 2000]. It addresses the problem of extending the range of validity of the Feynman-Kac formula with respect to the conditions on the potentials and on the initial condition of the associated partial differential equation. It is known that the Feynman-Kac formula holds for bounded potentials. Moreover, it also holds for initial conditions lying in the space \(C_{0}(\mathbb{R}^{n})\) or in the space \(C_{c}^{2}(\mathbb{R}^{n})\). The representation of the Feynman-Kac formula for an initial condition in \(C_{c}^{2}(\mathbb{R}^{n})\) yields the solution of the partial differential equation. We can also regard it as a strongly continuous semigroup on the space \(C_{0}(\mathbb{R}^{n})\). These two representations are equivalent. In this work we first show that the Feynman-Kac formula also holds for unbounded potentials \(V\) such that for every \(\varepsilon > 0\) there exists \(C_{\varepsilon} > 0\) with \(|V(x)| \leq \varepsilon ||x||^{2} + C_{\varepsilon}\) for all \(x \in \mathbb{R}^{n}\). Moreover, we show that it holds for all initial conditions \(f\) with \(x \mapsto e^{-\varepsilon |x|^{2}} f(x) \in H^{2,2}(\mathbb{R}^{n})\). The proof is probabilistic and uses no spectral theory. The spectral-theoretic approach, which yields a representation of the operator \(e^{-tH}\) with \(H = -\frac{1}{2} \Delta + V\), was extended to the above class of potentials by B. Simon (2000) as well. In addition, we also admit potentials of the form \(V = V_{1} + V_{2}\), where \(V_{1} \in L^{2}(\mathbb{R}^{3})\) and for every \(\varepsilon > 0\) there exists \(C_{\varepsilon} > 0\) such that \(|V_{2}(x)| \leq \varepsilon ||x||^{2} + C_{\varepsilon}\) for all \(x \in \mathbb{R}^{3}\). In contrast to the classical situation, \(e^{-tH}\) is now an unbounded operator. Finally, this work also investigates the connection between the Feynman-Kac-Itô formula, the Feynman-Kac formula, and the backward Kolmogorov equation.
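For orientation, in its classical form for a bounded potential \(V\) and suitable initial condition \(f\), the Feynman-Kac formula reads
\begin{align}
u(t,x)=\mathbb{E}_{x}\left[e^{-\int_{0}^{t}V\left(B_{s}\right)\mathrm{d}s}f\left(B_{t}\right)\right],
\end{align}
where \(B\) denotes Brownian motion started at \(x\), and \(u\) solves \(\partial_{t}u=\frac{1}{2}\Delta u-Vu\) with \(u(0,\cdot)=f\).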
This thesis is concerned with the characters of the normalizer and the centralizer of a Sylow torus. Every group G of Lie type is regarded as the fixed-point group of a simply connected simple algebraic group under a Frobenius map. For every Sylow torus S of the algebraic group it is shown that the irreducible characters of the centralizer of S in G extend to their inertia group in the normalizer of S. This question arises from the study of height zero characters of finite reductive groups of Lie type in connection with the McKay conjecture. Recent results of Isaacs, Malle and Navarro reduce this conjecture to a property of simple groups, which they then call being good for a prime. For groups of Lie type, the above result, together with recent work of Malle, establishes some of the important and necessary properties involved. Using the Steinberg presentation, more precise statements about the structure of the centralizer and the normalizer of a Sylow torus are proved, above all for the classical groups. An important tool is the extended Weyl group introduced by Tits, which has strong connections to braid groups. The result is proved in numerous case-by-case considerations, using inheritance rules for extendability properties established in this thesis.
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second order analysis, which we call the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data coming from glaciology. Note that the directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice-dating models and consequently to interpret correctly the climate information that can be extracted from ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D, and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
In this thesis we integrate discrete dividends into the stock model, estimate future outstanding dividend payments and solve different portfolio optimization problems. To this end, we discuss three well-known stock models including discrete dividend payments and develop a model which also takes the early announcement of dividends into account.
In order to estimate the future outstanding dividend payments, we develop a general estimation framework. First, we investigate a model-free, no-arbitrage methodology based on the put-call parity for European options. Our approach integrates all available option market data and simultaneously calculates the market-implied discount curve. We illustrate our method using stocks of European blue-chip companies and show within a statistical assessment that the estimate performs well in practice.
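The put-call parity with discrete dividends, \(C_{K}-P_{K}=S_{0}-D-B(T)K\), where \(D\) denotes the present value of all dividends paid before maturity \(T\) and \(B(T)\) the discount factor, is linear in the strike \(K\); a cross-section of European option prices for one maturity therefore determines \(B(T)\) and \(D\) jointly by least squares. A minimal Python sketch of this idea, with made-up input numbers rather than the market data used in the thesis:

import numpy as np

def implied_dividend_and_discount(strikes, calls, puts, spot):
    # Put-call parity with discrete dividends:  C - P = spot - D - B * K.
    # Regressing (C - P) on K gives slope -B and intercept spot - D.
    y = np.asarray(calls) - np.asarray(puts)
    X = np.column_stack([np.ones_like(strikes), strikes])
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return spot - intercept, -slope   # (PV of dividends, discount factor)

# Hypothetical mid prices for one maturity (illustrative numbers only).
strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
B_true, D_true, S0 = 0.98, 2.0, 100.0
parity = S0 - D_true - B_true * strikes
puts = np.array([1.2, 2.1, 3.6, 5.8, 8.7])
rng = np.random.default_rng(0)
calls = puts + parity + rng.normal(scale=0.01, size=5)  # small pricing noise
D, B = implied_dividend_and_discount(strikes, calls, puts, S0)
print(f"PV of dividends: {D:.3f}, discount factor: {B:.4f}")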
As American options are more common, we additionally develop a methodology based on market prices of American at-the-money options. This method relies on a linear combination of no-arbitrage bounds of the dividends, where the corresponding optimal weight is determined via a historical least-squares estimation using realized dividends. We demonstrate our method using all Dow Jones Industrial Average constituents and provide a robustness check with respect to the discount factor used. Furthermore, we backtest our results against the method based on European options and against a so-called simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization problem for a dividend-paying stock. In the case of the logarithmic utility function, we show that the optimal strategy is no longer constant but connected to the Merton strategy. Additionally, we solve a special optimal consumption problem in which the investor is only allowed to consume the dividends. We show that this problem can be reduced to the previously solved terminal wealth problem.
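For comparison, in the classical Black-Scholes market without dividends, the log-utility investor holds the constant Merton fraction
\begin{align}
\pi^{*}=\frac{\mu-r}{\sigma^{2}}
\end{align}
of wealth in the stock, where \(\mu\) denotes the stock drift, \(r\) the interest rate and \(\sigma\) the volatility; the results above describe how discrete dividend payments modify this constant strategy.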
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent, one may use a mathematical model. As the brain structure plays an important role for cell migration, it has to be included in such a model; this is done via diffusion-MRI data. We set up a multiscale model class accounting, among other effects, for integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, incorporating real diffusion tensor imaging data. To this end, we present programs for the data processing that take the raw medical data and bring them into the form required by the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
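In generic form, the macroscopic equation obtained by the parabolic scaling is a reaction-convection-diffusion equation
\begin{align}
\partial_{t}u=\nabla\cdot\left(D_{T}(x)\nabla u\right)-\nabla\cdot\left(a(x)u\right)+f(u,x)
\end{align}
for the tumour cell density \(u\), with a tumour diffusion tensor \(D_{T}\) built from the DTI data, a tissue-driven drift \(a\), and a proliferation term \(f\); the symbols are generic placeholders rather than the precise coefficients derived in the thesis.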
The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and the supply at the source node, can be constant or time-dependent. We review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm for the static network flow problem to account for the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two, in general contradicting, objective functions and introduce a time dependency of the cost, which is caused by a travel time on each arc. These problems have several interesting practical applications but have not attracted much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms. We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds on the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
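A standard device behind algorithms for such problems is the time-expanded network: each node is copied once per time step, an arc with travel time \(\tau\) departing at time \(t\) connects the copies at \(t\) and \(t+\tau\), and waiting is modeled by holdover arcs between consecutive copies of a node. The following Python sketch (illustrative only; the algorithms of the thesis avoid building the full expansion) constructs this expansion for time-dependent travel times and integer capacities and runs a plain augmenting-path maximum flow:

from collections import defaultdict, deque

INF = 10**9

def max_dynamic_flow(arcs, source, sink, T, allow_waiting=True):
    # arcs: list of (u, v, tau, cap) where tau(t) and cap(t) are functions of
    # the departure time t (time-dependent travel times, integer capacities).
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(a, b, c):
        if c <= 0: return
        cap[(a, b)] += c
        adj[a].add(b); adj[b].add(a)

    nodes = {u for u, v, _, _ in arcs} | {v for _, v, _, _ in arcs}
    for u, v, tau, c in arcs:                 # movement copies
        for t in range(T + 1):
            if t + tau(t) <= T:
                add((u, t), (v, t + tau(t)), c(t))
    if allow_waiting:                         # holdover (waiting) arcs
        for v in nodes:
            for t in range(T):
                add((v, t), (v, t + 1), INF)
    for t in range(T + 1):                    # super source and super sink
        add("S*", (source, t), INF)
        add((sink, t), "T*", INF)

    flow = 0                                  # Edmonds-Karp augmentation
    while True:
        parent = {"S*": None}
        q = deque(["S*"])
        while q and "T*" not in parent:
            a = q.popleft()
            for b in adj[a]:
                if b not in parent and cap[(a, b)] > 0:
                    parent[b] = a
                    q.append(b)
        if "T*" not in parent:
            return flow
        path, b = [], "T*"
        while parent[b] is not None:
            path.append((parent[b], b)); b = parent[b]
        aug = min(cap[e] for e in path)       # bottleneck capacity
        for a, b in path:
            cap[(a, b)] -= aug
            cap[(b, a)] += aug                # residual arc
        flow += aug

# Two-arc example with a capacity that drops after time 2.
arcs = [("s", "m", lambda t: 1, lambda t: 2 if t <= 2 else 1),
        ("m", "d", lambda t: 1, lambda t: 2)]
print(max_dynamic_flow(arcs, "s", "d", T=5))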
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
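For reference, the classical Cox-Ross-Rubinstein tree for a European call, the textbook instance of this approach, can be coded in a few lines. A minimal Python sketch with purely illustrative parameters; the slow, oscillatory convergence in the number of steps is precisely the behavior analyzed below:

import numpy as np

def crr_call(S0, K, T, r, sigma, n):
    # Cox-Ross-Rubinstein binomial tree for a European call.
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    q = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    j = np.arange(n + 1)                 # number of up moves at maturity
    payoff = np.maximum(S0 * u**j * d**(n - j) - K, 0.0)
    for _ in range(n):                   # backward induction
        payoff = np.exp(-r * dt) * (q * payoff[1:] + (1 - q) * payoff[:-1])
    return payoff[0]

for n in (10, 100, 1000):                # illustrative parameters only
    print(n, crr_call(100, 95, 1.0, 0.05, 0.2, n))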
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third-order expansions for general binomial and trinomial trees in the 1D setting and construct advanced models for digital, vanilla and barrier options. Second-order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for a multidimensional setting.
The goal of this work is to develop a simulation-based algorithm allowing the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding properties of the fibers. This method can be used for the optimization of the microstructure, in order to obtain a better stiffness or strength of the corresponding fiber material later on. An additional aspect of the thesis is that we take into account the microcontacts between the fibers of the textile. A further aspect is accounting for the thickness of the thin fibers in the textile. Introducing an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to one-dimensional problems, which reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts between fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact linear elasticity problem is considered, and a two-scale approach is used for obtaining the effective mechanical properties.
An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotic limits with respect to the in-plane period and the characteristic diameter of the fibers. This leads to an equivalent homogenized problem and reduces the dimension of the auxiliary problems. Numerical simulations of the cell problems then give the effective material properties of the textile.
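In the frictionless linear case, the effective stiffness produced by such a two-scale procedure has the classical form of periodic homogenization: with periodicity cell \(Y\), stiffness tensor \(A(y)\) and corrector fields \(\chi^{kl}\) solving the cell problems, one has, schematically,
\begin{align}
A^{\mathrm{hom}}_{ijkl}=\frac{1}{|Y|}\int_{Y}A_{ijmn}(y)\left(\delta_{mk}\delta_{nl}+e_{mn}\left(\chi^{kl}\right)(y)\right)\mathrm{d}y,
\end{align}
where \(e_{mn}\) denotes the symmetrized gradient; the symbols here are the standard ones of periodic homogenization rather than the notation of the thesis, in which the contact and friction conditions between the fibers enter through modified auxiliary cell problems.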
The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has also been studied. By introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced. The boundary condition is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.
FiberFEM, a C++ finite element code for solving contact elasticity problems, has been developed. The code implements the algorithm for the contact between fibers proposed in the thesis.
Numerical examples of the homogenization of geotextiles and wovens are obtained by applying the developed algorithm. The effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained with FiberFEM.
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles of submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics, and environmental analysis.
Mathematical modeling is crucial for this process since, due to the very nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance (up to \(10^{6}\) times) of the flow over the diffusion and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating the distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
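The multilevel idea behind this method is the telescoping decomposition \(\mathbb{E}[P_{L}]=\mathbb{E}[P_{0}]+\sum_{\ell=1}^{L}\mathbb{E}[P_{\ell}-P_{\ell-1}]\), sampled with many cheap coarse-level samples and few expensive fine-level ones. A generic Python sketch of such an estimator for a plain expectation (the thesis develops a variant tailored to distribution functions):

import numpy as np

def mlmc(sampler, levels, samples_per_level, seed=0):
    # sampler(level, rng) must return the pair (P_l, P_{l-1}) computed from
    # the SAME underlying randomness (coupling), with P_{-1} understood as 0.
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for l, n in zip(levels, samples_per_level):
        diffs = [np.subtract(*sampler(l, rng)) for _ in range(n)]
        estimate += np.mean(diffs)
    return estimate

# Toy example: P_l = mean of 2^l coupled uniforms, so E[P_l] -> 0.5.
def sampler(l, rng):
    x = rng.random(2 ** l)
    coarse = x[: 2 ** (l - 1)].mean() if l > 0 else 0.0
    return x.mean(), coarse

print(mlmc(sampler, levels=range(6),
           samples_per_level=[4000, 2000, 1000, 500, 250, 125]))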
We optimize the flow control at the focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of the physical and chemical processes occurring in the course of an accident is an interdisciplinary problem with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and of a software tool based on it. The numerical simulations make it possible to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or the consequences of possible accidents. A very large number of scenarios has to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, when possible, analyzed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the method. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.
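Schematically, a directional splitting step advances the discrete system one coordinate direction at a time:
\begin{align}
u^{n+1}=S_{z}^{\Delta t}S_{y}^{\Delta t}S_{x}^{\Delta t}u^{n},
\end{align}
where \(S_{x}^{\Delta t}\) denotes the solution operator of the one-dimensional subproblem in the \(x\)-direction, so that each substep reduces to fast one-dimensional (tridiagonal) solves; the notation is schematic and not the exact scheme developed in the thesis.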
In this thesis, the quasi-static Biot poroelasticity system in bounded multilayered domains in one and three dimensions is studied. In more detail, in the one-dimensional case a finite volume discretization for the Biot system with discontinuous coefficients is derived. The discretization results in a difference scheme with harmonic averaging of the coefficients. Detailed theoretical analysis of the obtained discrete model is performed, and error estimates which establish convergence rates for both the primary and the flux unknowns are derived. In addition, modified and more accurate discretizations, which can be applied when the interface position coincides with a grid node, are obtained. These discretizations yield second-order convergence of the fluxes of the problem. Finally, the solver for the solution of the produced system of linear equations is developed and extensively tested; a number of numerical experiments, which confirm the theoretical considerations, are performed. In the three-dimensional case, the finite volume discretization of the system involves the construction of special interpolating polynomials in the dual volumes. These polynomials are derived so that they satisfy the same continuity conditions across the interface as the original system of PDEs. This technique allows one to obtain a difference scheme which provides accurate computation of the primary as well as of the flux unknowns, including the points adjacent to the interface. Numerical experiments based on the obtained discretization show second-order convergence for auxiliary problems with known analytical solutions. A multigrid solver which incorporates the features of the discrete model is developed in order to solve efficiently the linear system produced by the finite volume discretization of the three-dimensional problem. The crucial point is to derive problem-dependent restriction and prolongation operators. Such operators are a well-known remedy for scalar PDEs with discontinuous coefficients; here they are derived for the system of PDEs, taking into account the interdependence of the different unknowns within the system. In the derivation, the interpolating polynomials from the finite volume discretization are employed again, thus linking the discretization and the solution processes. The developed multigrid solver is tested on several model problems. Numerical experiments show that, due to the proper problem-dependent intergrid transfer, the multigrid solver is robust with respect to the discontinuities of the coefficients of the system. In the end, the poroelasticity system with discontinuous coefficients is used to model a real problem. The Biot model describing this problem is treated numerically, i.e., discretized by the developed finite volume techniques and then solved by the constructed multigrid solver. Physical characteristics of the process, such as the displacement of the skeleton, the pressure of the fluid, and the components of the stress tensor, are calculated and presented at certain cross-sections.
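The harmonic averaging produced by the one-dimensional scheme is the standard flux-preserving interface average: for a coefficient taking the values \(k_{i}\) and \(k_{i+1}\) in two neighboring cells of equal size, the interface coefficient reads
\begin{align}
k_{i+\frac{1}{2}}=\frac{2k_{i}k_{i+1}}{k_{i}+k_{i+1}},
\end{align}
which keeps the discrete flux continuous across the material interface even for strongly discontinuous coefficients.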
We present a new efficient and robust algorithm for the topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization: in order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update. A level set function technique for the boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For the sensitivity analysis we employ the concept of the topological gradient. Modification of the level set function is reduced to the efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials (2013)
This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to index 3, which turns the time integration into a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. For a nonlinear model reduction to be efficient, we approximate the nonlinearity by following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
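The proper orthogonal decomposition step amounts to a truncated singular value decomposition of a snapshot matrix; the reduced basis consists of the leading left singular vectors. A minimal Python sketch, with a random low-rank matrix standing in for the displacement snapshots of an actual simulation:

import numpy as np

def pod_basis(snapshots, k):
    # snapshots: (n_dof, n_snapshots) matrix of displacement states.
    # The POD basis of dimension k consists of the k leading left singular
    # vectors; the singular values measure the captured "energy".
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :k], energy[k - 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 50))  # rank-3 demo data
V, captured = pod_basis(X, k=3)
print(V.shape, f"captured energy: {captured:.6f}")  # ~1.0 for rank-3 data
# Reduced coordinates for the second-order system: q(t) = V.T @ u(t).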
Abstract. This thesis treats problems of the numerical solution of finite difference methods for partial differential equations in an algebraic setting. Both theoretical results and the practical implementation by means of the systems SINGULAR and QEPCAD are presented. The algebraic methods concern two different aspects of finite difference methods: the generation of schemes by means of Gröbner bases and the subsequent stability analysis via quantifier elimination by cylindrical algebraic decomposition. In the first three chapters, the necessary notions from computer algebra are reviewed, the fundamentals of the numerical convergence theory of finite difference schemes are explained, and the application of the CAD algorithm to quantifier elimination is sketched. Starting from this context, Chapter 4 develops the formulation of, and the necessary conditions on, difference schemes, which by definition algebraically constitute an ideal in a polynomial ring. Besides the practical manageability of these objects, the emphasis lies on the greatest possible generality of the definitions. Equivalent ways of generating schemes are shown, as well as uniqueness properties under very special conditions on the approximations used. The application of the CAD algorithm to estimating the symbol of a scheme is explained. The fifth chapter describes the SINGULAR library findiff.lib, which guarantees the interplay of SINGULAR and QEPCAD and enables a complete automation of the generation and stability analysis of a finite difference scheme.
This dissertation consists of two main parts: new results in Gaussian analysis and their application to the theory of path integrals. The central result of the first part is the characterization of all regular distributions that can be multiplied by Donsker's delta; an explicit formula for such products, the so-called Wick formula, is given. In the applications part of this work, a complex-scaled Feynman-Kac formula and its associated kernels are first established with the help of this Wick formula. Furthermore, Feynman integrands for new classes of potentials are constructed as White Noise distributions.
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day-ahead market allows smaller companies without their own trading department to participate, since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day-ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. We present existing algorithms for scenario tree generation as well as our own extensions and adaptations, which are based on the nested distance, a metric measuring the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context each method is appropriate.
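As a minimal sketch of one stage of scenario tree generation, the following collapses Monte Carlo successors of a tree node into a few representative child nodes by plain one-dimensional k-means; the algorithms in the thesis, which control the nested distance, are considerably more refined, and all names and numbers here are illustrative:

```python
import numpy as np

def reduce_stage(samples, k, iters=50, seed=0):
    """Collapse Monte Carlo successors of a tree node into k child nodes.
    Returns representative values and their probabilities (cluster weights)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(samples, size=k, replace=False)
    for _ in range(iters):                      # plain 1-d k-means (Lloyd)
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = samples[labels == j].mean()
    probs = np.bincount(labels, minlength=k) / len(samples)
    return centers, probs

# Example: 10000 simulated day-ahead prices for the next stage, reduced to 5 nodes.
prices = 40 + 10 * np.random.default_rng(1).standard_normal(10000)
values, probs = reduce_stage(prices, k=5)
```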
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as in algebraic geometry, number theory and information security. In fact, they make up one of the best choices in public key cryptography and more recently in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches cannot cover all cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks, a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol, which is also capable of resisting subexponential quantum attacks and would be the first of its kind.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
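A small numerical sketch of the insured's decision under illustrative assumptions (CRRA utility, a two-point loss distribution, proportional coverage and a loaded premium; none of these choices are the thesis's calibration):

```python
import numpy as np

def optimal_coverage(w=100.0, loss=30.0, p=0.1, premium_rate=0.12, gamma=2.0):
    """Insured picks coverage a in [0,1] maximizing expected CRRA utility.
    Premium paid is premium_rate * loss * a (loaded whenever premium_rate > p)."""
    u = lambda x: x**(1 - gamma) / (1 - gamma)
    a_grid = np.linspace(0.0, 1.0, 1001)
    wealth_loss = w - premium_rate * loss * a_grid - (1 - a_grid) * loss
    wealth_ok = w - premium_rate * loss * a_grid
    eu = p * u(wealth_loss) + (1 - p) * u(wealth_ok)
    return a_grid[np.argmax(eu)]

print(optimal_coverage())                    # partial coverage under a loaded premium
print(optimal_coverage(premium_rate=0.30))   # very expensive premium: a -> 0
```

When the premium becomes expensive enough, the optimal coverage drops to zero, which is the qualitative mechanism behind the push-out effect discussed next.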
In markets modeled in this way several phenomena become evident. Perhaps the most important one is the so-called push-out effect: when customers with different attributes are insured together, insurance might become so expensive for one type of customers that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. Our feasibility study on the use of neural networks in the regression of equilibrium insurance premiums shows that this regression is quite robust and that the risk of overfitting can almost be excluded, provided the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
A classical conjecture in the representation theory of finite groups, the McKay conjecture, states that for any finite group and any prime number p, the number of complex irreducible characters of degree prime to p equals the corresponding number for the normalizer of a Sylow p-subgroup. Recently a reduction theorem was proved by Isaacs, Malle and Navarro: if all simple groups are "good", then the McKay conjecture holds. In this work we are concerned with the problem of goodness for finite groups of Lie type in their defining characteristic. A simple group is called "good" if certain equivariant bijections between the involved character sets exist. We present a structural approach to the construction of such a bijection by utilizing the Steinberg map. This yields very natural bijections, and we prove most of the desired properties.
Fibre reinforced polymers (FRPs) are among the newest and most modern materials. In FRPs, a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, compared to its weight, very strong.
The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs, we need to know the fibre length distribution in the resulting material. The classic method for this is ashing, where a sample of the material is burned and thereby destroyed. We instead look at CT images of the material. In the first part we assume a full fibre segmentation, so that we can fit a cylinder to each individual fibre. In this setting we identified two problems: sampling bias and censoring.
Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To correct for this, we use a reweighted fibre length distribution, where the weight depends on the sampling rule used.
For the censoring we use an EM algorithm, which yields a maximum likelihood estimator in cases of missing or censored data.
For this setting we deduce conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further give conditions under which, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.
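To illustrate the EM mechanism on censored data in a deliberately simple stand-in model (exponentially distributed lengths, right-censored at a known window edge; not the fibre model of the thesis):

```python
import numpy as np

def em_censored_exponential(x, censored, iters=100):
    """MLE of the rate of an exponential distribution from right-censored data.
    x: observed values (true value if uncensored, censoring point otherwise).
    E-step: E[X | X > c] = c + 1/lam by the memoryless property.
    M-step: lam = n / sum(completed data)."""
    lam = 1.0 / np.mean(x)                                 # initial guess
    for _ in range(iters):
        completed = np.where(censored, x + 1.0 / lam, x)   # E-step
        lam = len(x) / completed.sum()                     # M-step
    return lam

rng = np.random.default_rng(0)
true = rng.exponential(scale=2.0, size=5000)   # true rate 0.5
c = 3.0
x = np.minimum(true, c); censored = true > c
print(em_censored_exponential(x, censored))    # close to 0.5
```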
Since obtaining a full fibre segmentation is hard, we further look at the fibre endpoint process, which can be modelled as a Neyman-Scott cluster process. Using this model we derive a formula for the reduced second moment measure of the process and use it to obtain an estimator for the fibre length distribution.
We investigate all estimators in simulation studies, paying particular attention to their performance in the case of non-overlapping fibres.
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problem of parameter estimation and portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we aim at maximizing the expected utility of terminal wealth and focus on logarithmic and power utility functions. We consider expert opinions as an additional observation besides stock returns in order to improve the estimation of the drift and volatility parameters at different times and for the purpose of asset allocation.

In the first part, we assume that the drift term has a fixed distribution and that the volatility term is constant. We use the Kalman filter to combine the two types of observations. Moreover, we discuss how to transform this problem into a nonlinear problem with Gaussian noise when the expert opinion is uniformly distributed; the generalized Kalman filter is then used to estimate the parameters.
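As a stylized illustration of the first part, the following sketch filters a constant hidden drift from two noisy observation streams, stock returns and occasional Gaussian expert opinions; the thesis's setting (distributional drift, uniformly distributed opinions, generalized Kalman filter) is richer, and all parameter values here are invented:

```python
import numpy as np

def kalman_drift(returns, experts, sigma_r, sigma_e, m0, v0):
    """Filter a constant hidden drift mu from two observation streams.
    returns[t] = mu + sigma_r * noise; experts[t] = mu + sigma_e * noise (or nan).
    Returns the posterior mean/variance path of mu."""
    m, v = m0, v0
    path = []
    for r, e in zip(returns, experts):
        for obs, s2 in ((r, sigma_r**2), (e, sigma_e**2)):
            if not np.isnan(obs):
                K = v / (v + s2)        # Kalman gain; static state, so no predict step
                m = m + K * (obs - m)
                v = (1 - K) * v
        path.append((m, v))
    return np.array(path)

rng = np.random.default_rng(0)
mu, T = 0.05, 250
returns = mu + 0.2 * rng.standard_normal(T)
experts = np.where(rng.random(T) < 0.1, mu + 0.05 * rng.standard_normal(T), np.nan)
est = kalman_drift(returns, experts, sigma_r=0.2, sigma_e=0.05, m0=0.0, v0=1.0)
```

The precise expert opinions (small sigma_e) pull the posterior towards the true drift much faster than the noisy returns alone, which is the intuition behind combining both observation types.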
In the second part, we assume that the drift and volatility of the asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate the various quantities required by the EM algorithm. In addition, we focus on different ways to combine the two observations, expert opinions and asset returns. First, we use a linear combination and discuss how a logistic regression model can be used to quantify expert opinions. Second, we assume that expert opinions follow a mixed Dirichlet distribution; under this assumption, we use another probability measure to estimate the unnormalized filters needed for the EM algorithm.
In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how to obtain approximately optimal portfolio strategies in different observation settings. We derive the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different observation settings in a simulation study.
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, development of materials that are favorable regarding for example weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly described. One method to do so is by so called in situ tests, where the mechanical test is combined with an image acquisition technique such as Computed Tomography.
The resulting time series of volume images comprise the delicate and individual nature of each material. The objective of this thesis is to present and develop methods that unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made great efforts to solve the problems we are facing. Digital Volume Correlation (DVC), on the one hand, has been developed by material scientists and applied in many different contexts in mechanical testing, but it almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR), on the other hand, does produce voxel-precise estimates, but is limited to very smooth motion estimates.
The unification of both families, DVC and MIR, under one roof is therefore illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family is sufficient to deal with all the problems that come with motion estimation in in situ tests. We then proceed by presenting a third community in motion estimation, namely optical flow, which is normally only applied in two dimensions. Within this community, however, algorithms have been developed that meet many of our requirements: strategies for large displacements exist, as do methods that resolve jumps, and the displacement is always calculated on pixel level. This thesis therefore proceeds by extending some of the most successful of these methods to 3D.
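As one concrete example of the kind of optical flow method that can be lifted from 2D to 3D, here is a compact sketch of the classical Horn-Schunck iteration on volume images; the extensions developed in the thesis additionally handle large displacements and jumps, which this sketch does not:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck_3d(I0, I1, alpha=10.0, iters=200):
    """Dense 3-d optical flow between volume images I0, I1 (float arrays).
    Minimizes the data term (Ix u + Iy v + Iz w + It)^2 plus alpha^2 times the
    smoothness of (u, v, w); solved by Jacobi-type fixed-point iterations."""
    Ix, Iy, Iz = np.gradient(0.5 * (I0 + I1))
    It = I1 - I0
    u = np.zeros_like(I0); v = np.zeros_like(I0); w = np.zeros_like(I0)
    denom = alpha**2 + Ix**2 + Iy**2 + Iz**2
    for _ in range(iters):
        ub = uniform_filter(u, size=3)       # local averages of the flow field
        vb = uniform_filter(v, size=3)
        wb = uniform_filter(w, size=3)
        t = (Ix * ub + Iy * vb + Iz * wb + It) / denom
        u, v, w = ub - Ix * t, vb - Iy * t, wb - Iz * t
    return u, v, w
```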
To ensure the competitiveness of our approach, the last part of this thesis deals with a detailed evaluation of the proposed extensions. We focus on three types of materials, foam, fibre systems and concrete, and use simulated and real in situ tests to compare the optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields, a novelty in this area. We conclude this thesis with two specialized applications of our algorithm, which show how the voxel-precise displacement fields serve as useful information to engineers investigating their materials.
The famous Mather-Yau theorem in singularity theory yields a bijection between isomorphism classes of germs of isolated hypersurface singularities and their respective Tjurina algebras.
This result was generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, the objective of this thesis is to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result of Gaffney-Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, drawing on the theory of graded analytic algebras by G. Scheja and H. Wiebe. We use this theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin, and we deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity, both introduced by K. Saito in 1980, play an important role in the investigation of the aforementioned singularities. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm which decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.
Extensions of Shallow Water Equations
The subject of this thesis by Michael Hilden is the simulation of floods in urban areas. In case of strong rain events, water can flow out of the overloaded sewer system onto the street and damage the connected houses. The dependable simulation of water flowing out of a manhole and over a curb is crucial for the assessment of flood risks. The incompressible 3D Navier-Stokes equations (3D-NSE) describe the free surface flow of water accurately, but require expensive computations. Therefore, the less CPU-intensive (by a factor of roughly 1/100) Shallow Water Equations (SWE) are usually applied in hydrology. They can be derived from 3D-NSE under the assumption of a hydrostatic pressure distribution via depth-integration and are applied successfully in particular to simulations of river flow processes. However, the SWE computations of the flow problems "manhole" and "curb" differ from the 3D-NSE results. Thus, the SWE need to be extended appropriately to give reliable forecasts for flood risks in urban areas at reduced computational effort. These extensions are developed based on physical effects not considered in the classical SWE. In one extension, a vortex layer on the ground is separated from the main flow and represents its new bottom. In a further extension, the hydrostatic pressure distribution is corrected by additional terms derived from approximations of the vertical velocities and their interaction with the flow. These extensions raise the quality of the SWE results for these flow problems up to the quality level of the NSE results at a moderate increase of CPU effort.
Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. The standard method goes back to an algorithm of Zassenhaus, who used it in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later, Musser generalized it to the multivariate case, and subsequently the algorithm was refined and improved.
In this work every step of the algorithm is described as well as the problems that arise in these steps.
In doing so, we restrict to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory which is part of the computer algebra system Singular.
In addition, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to carry out Hensel lifting and the recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capability of the implementation is given, which includes benchmark examples as well as randomly generated input intended to give an impression of the average performance.
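To give a feel for the task (using SymPy for illustration; the thesis's implementation lives in the C++ library factory inside Singular), multivariate factorization over an algebraic extension \(\mathbb{Q}(\alpha)\) looks as follows:

```python
from sympy import symbols, factor, sqrt

x, y = symbols('x y')

# Over Q the polynomial is irreducible ...
print(factor(x**2 - 2*y**2))                     # x**2 - 2*y**2

# ... but over Q(sqrt(2)) it splits; this is the kind of computation the
# thesis speeds up via Hensel lifting and modular factor recombination.
print(factor(x**2 - 2*y**2, extension=sqrt(2)))  # (x - sqrt(2)*y)*(x + sqrt(2)*y)
```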
The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has continuously attracted algebraic geometers ever since. Throughout this thesis we examine the varieties \(V(D,S_1,\dots,S_r)\) of irreducible reduced curves in a fixed linear system \(|D|\) on a smooth projective surface \(S\) over the complex numbers having precisely \(r\) singular points of types \(S_1,\dots,S_r\). We are mainly interested in the following three questions: 1) Is \(V(D,S_1,\dots,S_r)\) non-empty? 2) Is \(V(D,S_1,\dots,S_r)\) T-smooth, that is, smooth of the expected dimension? 3) Is \(V(D,S_1,\dots,S_r)\) irreducible? We would like to answer these questions by numerical conditions depending on invariants of the divisor \(D\) and of the singularity types \(S_1,\dots,S_r\) which ensure a positive answer. The main conditions we derive are of the type \(\mathrm{inv}(S_1)+\dots+\mathrm{inv}(S_r) < a D^2 + b D.K + c\), where \(\mathrm{inv}\) is some invariant of singularity types, \(a\), \(b\) and \(c\) are constants, and \(K\) is some fixed divisor. The case where \(S\) is the projective plane has been studied very well by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves, and geometrically ruled surfaces.
In this dissertation we consider complex projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in \(\mathbb{P}^n\) with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in \(\mathbb{P}^2\) are generalized to the case of schemes on general surfaces in \(\mathbb{P}^3\). As an application we show an \(H^1\)-vanishing theorem for such schemes.
The thesis is concerned with multiscale approximation by means of radial basis functions on hierarchically structured spherical grids. A new approach is proposed to construct a biorthogonal system of locally supported zonal functions. By use of this biorthogonal system of locally supported zonal functions, a spherical fast wavelet transform (SFWT) is established. Finally, based on the wavelet analysis, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators are shown to be efficiently and economically solvable.
This thesis is primarily motivated by a project with Deutsche Bahn about offer preparation in rail freight transport. At its core, a customer should be offered three train paths to choose from in response to a freight train request. As part of this cooperation with DB Netz AG, we investigated how to compute these train paths efficiently. They should be all "good" but also "as different as possible". We solved this practical problem using combinatorial optimization techniques.
At the beginning of this thesis, we describe the practical aspects of our research collaboration. The more theoretical problems, which we consider afterwards, are divided into two parts.
In Part I, we deal with a dual pair of problems on directed graphs with two designated end-vertices. The Almost Disjoint Paths (ADP) problem asks for a maximum number of paths between the end-vertices any two of which have at most one arc in common. In comparison, for the Separating by Forbidden Pairs (SFP) problem we have to select as few arc pairs as possible such that every path between the end-vertices contains both arcs of a chosen pair. The main results of this more theoretical part are the classifications of ADP as an NP-complete and SFP as a \(\Sigma_2^p\)-complete problem.
In Part II, we address a simplified version of the practical project: the Fastest Path with Time Profiles and Waiting (FPTPW) problem. In a directed acyclic graph with durations on the arcs and time windows at the vertices, we search for a fastest path from a source to a target vertex. We are only allowed to be at a vertex within its time windows, and we are only allowed to wait at specified vertices. After introducing departure-duration functions, we develop solution algorithms based on them. We consider special cases that significantly reduce the complexity or are of practical relevance. Furthermore, we show that even this simplified problem is NP-hard in general and investigate its complexity status more closely.
The main purpose of this study is to improve the physical modelling of compressed materials, especially fibrous materials, which find increasing application in industry. Most of these materials are compressed for their respective applications, and in this situation we are interested in how the fibres are arranged, i.e., which directional distribution they follow. For a given material, a three-dimensional image can be obtained via micro computed tomography. Since physical parameters, e.g. the fibre lengths or the local fibre directions, can be extracted from such an image by other methods, the physical properties of the model can be improved by adjusting these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere which generalizes several well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, and fits some models better. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation Maximization setting. Furthermore, we introduce a non-parametric variant of the EM algorithm for the mixture model. Finally, we present applications to the statistical analysis of fibre composites.
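For orientation, here is a sketch of maximum likelihood fitting in the classical von Mises-Fisher special case, using the standard closed-form approximation of Banerjee et al. for the concentration; the more general family studied in the thesis requires a more involved estimator:

```python
import numpy as np

def fit_vmf(X):
    """ML fit of a von Mises-Fisher distribution to unit vectors (rows of X).
    Returns the mean direction mu and the concentration kappa, the latter via
    the approximation kappa = rbar (d - rbar^2) / (1 - rbar^2) of Banerjee et
    al. (2005) to the implicit Bessel function equation."""
    n, d = X.shape
    s = X.sum(axis=0)
    mu = s / np.linalg.norm(s)
    rbar = np.linalg.norm(s) / n                 # mean resultant length
    kappa = rbar * (d - rbar**2) / (1.0 - rbar**2)
    return mu, kappa

rng = np.random.default_rng(0)
Z = rng.standard_normal((2000, 3)) + 4.0 * np.array([0.0, 0.0, 1.0])
X = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # crude concentrated sample
mu, kappa = fit_vmf(X)
```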
In this dissertation a model of melt spinning (by Doufas, McHugh and Miller) is investigated. The model (DMM model), which takes into account the effects of inertia, air drag, gravity and surface tension in the momentum equation, and heat exchange between air and fibre surface, viscous dissipation and crystallization in the energy equation, also has a complicated coupling with the microstructure. The model has two parts, before onset of crystallization (BOC) and after onset of crystallization (AOC), with the point of onset of crystallization as an unknown interface. Mathematically, the model is formulated as a free boundary value problem. Changes have been introduced in the model with respect to the air drag and an interface condition at the free boundary. The mathematical analysis of the nonlinear, coupled free boundary value problem shows that its solution depends heavily on the initial conditions and parameters, which renders a global analysis impossible. However, by defining a physically acceptable solution, it is shown that, for a more restricted set of initial conditions, a unique solution of the initial value problem for BOC, if it exists, is physically acceptable. For this, the important property of the positivity of the conformation tensor variables is proved. It is further shown that if a physically acceptable solution exists for the BOC problem, then under certain conditions it also exists for the AOC problem. This gives an important relation between the initial conditions of the BOC problem and the existence of a physically acceptable solution of the AOC problem. A new investigation of the melt spinning process is carried out in the framework of classical mechanics: a Hamiltonian formulation is given, for which appropriate Poisson brackets are derived for the one-dimensional elongational flow of a viscoelastic fluid. From the Hamiltonian, the cross-sectionally averaged mass and momentum balance equations of melt spinning can be derived along with the microstructural equations. These studies show that the complicated problem of melt spinning can also be studied in the framework of classical mechanics, providing groundwork for further investigations of fibre dynamics. The free boundary value problem is solved numerically using a shooting method, with Matlab routines for the arising initial value problems. Several numerical case studies examine the sensitivity of the ODE systems with respect to the initial guess and the parameters; these experiments support the analysis and shed more light on the stiffness and ill-posedness of the ODE systems. To validate the model, simulations have been performed on data sets provided by the company, and the computed axial velocity profiles have been compared with the experimental profiles provided by the company. The numerical results are in excellent agreement with the experimental profiles.
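The shooting idea itself is easy to illustrate on a stand-in two-point boundary value problem (the DMM free boundary system is of course far stiffer and higher-dimensional):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Stand-in BVP: y'' = -y, y(0) = 0, y(1) = 1; shoot on the unknown slope y'(0).
def residual(slope):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, slope],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] - 1.0          # mismatch in the terminal condition

slope = brentq(residual, 0.1, 5.0)     # bracket found by inspection
print(slope, 1.0 / np.sin(1.0))        # exact answer: y = sin(t)/sin(1)
```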
Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)
We consider a continuous-time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, meaning that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. For example, if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly; in other cases, such as an underlying Markov chain, finite-dimensional filters exist as well. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed. For example, we use statistical linearization for nonlinear drift processes, finite-state Markov chain approximations for the drift process, and/or diffusion approximations for small jumps in the noise term. In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular with regard to the performance of the corresponding utility-maximizing strategies.
In the present work, we investigate how to correct the questionable normality, linearity and quadraticity assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness and the stochastic nature of the volatility of the market values of financial instruments, the constant volatility hypothesis widely used by existing Value-at-Risk approaches is investigated and corrected, and the tails of the financial return distributions are handled via Generalized Pareto or Extreme Value distributions. Artificial Neural Networks are combined with Extreme Value Theory in order to build consistent and nonparametric Value-at-Risk measures without the need for any of the questionable assumptions specified above. For this, either autoregressive models (AR-GARCH) are used or the direct characterization of conditional quantiles due to Bassett, Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we prove new results extending White's Artificial Neural Network denseness results to unbounded random variables and provide a generalization of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value distributions are used. The new Artificial Neural Network denseness results enable us to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space \(L^m(\mu)\) for some \(m \ge 1\) and some probability measure \(\mu\), and hold for a certain subclass of square-integrable functions. The Fourier transform and the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with a new generalization of a result of White and Wooldridge [1990], are the main tools to establish the extension of White's neural network denseness results. To illustrate the accuracy of the new denseness results, we demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
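A sketch of the peaks-over-threshold building block under illustrative assumptions (threshold chosen as an upper sample quantile, losses given as positive numbers; the threshold selection and the neural network components of the thesis are not shown):

```python
import numpy as np
from scipy.stats import genpareto

def var_pot(losses, p=0.99, u_quantile=0.95):
    """Value-at-Risk at level p via a Generalized Pareto fit to the tail.
    Standard POT formula: VaR_p = u + (beta/xi) * (((1-p) * n/N_u)^(-xi) - 1)."""
    losses = np.asarray(losses)
    u = np.quantile(losses, u_quantile)
    exc = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exc, floc=0)     # fix the location at 0
    n, n_u = len(losses), len(exc)
    return u + beta / xi * (((1 - p) * n / n_u) ** (-xi) - 1.0)

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=10000)        # heavy-tailed toy loss sample
print(var_pot(losses, p=0.99))
```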
In this dissertation we consider mesoscale based models for flow driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions a novel moments based description of fibre orientation state is introduced and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation state dependent non-Newtonian constitutive relation and a linear first order hyperbolic system for the fibre orientation variables, has been analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution has been demonstrated locally in time for sufficiently small data. The closure relations for the semiflexible fibre suspension model are studied numerically. A finite volume based discretization of the suspension flow is given and the numerical results for several two and three dimensional domains with different parameter values are presented and discussed.
In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived, a proper algorithm is formulated, and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, one has to distinguish whether the fluid flows tangentially or perpendicularly to the porous medium, as this plays an essential role in the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the extreme cases, i.e. the fluid-fluid interface and the fluid-impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions; the matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more involved, since an additional grid on the interface has to be introduced, and corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented. For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details such as stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, and the software used. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
We work in the setting of time series of financial returns. Our starting point are the GARCH models, which are very common in practice. We introduce the possibility of crashes in such GARCH models. A crash is modeled by drawing innovations from a distribution with much mass on extremely negative events, while in "normal" times the innovations are drawn from a normal distribution. The probability of a crash is modeled as time-dependent, depending on the past of the observed time series and/or exogenous variables. The aim is a splitting of risk into "normal" risk, coming mainly from the GARCH dynamic, and extreme event risk, coming from the modeled crashes. We present several incarnations of this modeling idea and give basic properties such as the conditional first and second moments. For the special case of a pure ARCH dynamic we can establish geometric ergodicity and, thus, stationarity and mixing conditions. Also in the ARCH case we formulate (quasi) maximum likelihood estimators and derive conditions for consistency and asymptotic normality of the parameter estimates. In a special case of genuine GARCH dynamic we are able to establish \(L_1\)-approximability and hence laws of large numbers for the processes themselves. We can formulate a conditional maximum likelihood estimator in this case, but cannot completely establish its consistency. On the practical side we examine the outcome of estimating models with genuine GARCH dynamic and compare the results to classical GARCH models. We apply the models to Value-at-Risk estimation and see that, in comparison to the classical models, many of ours seem to work better, although we chose the crash distributions quite heuristically.
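The following simulation sketch illustrates the modeling idea; the parameter values, the logistic form of the crash probability and the concrete crash distribution are made up for illustration and are not taken from the thesis:

```python
import numpy as np

def simulate_crash_garch(T=1000, omega=0.05, alpha=0.1, beta=0.85,
                         a=-4.0, b=-2.0, seed=0):
    """GARCH(1,1) with mixed innovations: with probability p_t (logistic in
    the previous return) draw a crash innovation, otherwise a standard normal."""
    rng = np.random.default_rng(seed)
    r = np.zeros(T)
    h = np.full(T, omega / (1 - alpha - beta))      # conditional variance
    for t in range(1, T):
        h[t] = omega + alpha * r[t - 1]**2 + beta * h[t - 1]
        p_crash = 1.0 / (1.0 + np.exp(-(a + b * r[t - 1])))  # higher after losses
        if rng.random() < p_crash:
            eps = -3.0 + rng.standard_t(df=3)       # mass on extreme negative events
        else:
            eps = rng.standard_normal()             # "normal" times
        r[t] = np.sqrt(h[t]) * eps
    return r, h

returns, variance = simulate_crash_garch()
```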
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific kind of maps, so-called risk measures. There exist several forms and definitions of them. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of a risk measure which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk measures. Then, each of the aforementioned contributions covers a separate chapter. In the following, the content of these three chapters is explained in more detail:

Risk measures can be used to calculate pricing bounds for financial derivatives. In Chapter 2, we deal with the pricing of European options in an incomplete financial market model. We use the common risk measures Value-at-Risk and Expected Shortfall to define good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may have a non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.

In Chapter 3, we construct the MAI risk measure. To this end, recall that a multi-asset risk measure describes the minimal external capital that has to be raised and invested into multiple eligible assets to make a future financial position acceptable, i.e., to make it pass a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows one to secure the financial position by external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations, depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.

In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
This thesis is devoted to constructive module theory of polynomial graded commutative algebras over a field. It treats the theory of Gröbner bases (GB), standard bases (SB) and syzygies, as well as algorithms and their implementations. Graded commutative algebras naturally unify exterior and commutative polynomial algebras. They are graded, non-commutative, associative unital algebras over fields and may contain zero-divisors. In this thesis we try to make the most of a priori knowledge about their characteristic (super-commutative) structure in developing direct symbolic methods, algorithms and implementations, which are intrinsic to graded commutative algebras and practically efficient. For our symbolic treatment we represent them as polynomial algebras and redefine the product rule in order to allow super-commutative structures and, in particular, zero-divisors. Using this representation we give a convenient characterization of a GB and an algorithm for its computation. We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be local, generalizing the Mora algorithm (in a similar fashion as G.-M. Greuel and G. Pfister, by allowing local or mixed monomial orderings) and working with SBs. In this general setting we prove a generalized Buchberger criterion, which shows that syzygies of leading terms play the central role in SB and syzygy module computations. Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm, which we can formulate particularly close to our implementation.

On the implementation side we have further developed the Singular non-commutative subsystem Plural in order to allow polynomial arithmetic and more involved non-commutative basic computer algebra computations (e.g. S-polynomials, GBs) to be easily implementable for specific algebras. At the moment, graded commutative algebra-related algorithms are implemented in this framework. Benchmarks show that our new algorithms and implementation are practically efficient. The developed framework has many applications in various branches of mathematics and theoretical physics, including the computation of sheaf cohomology, coordinate-free verification of affine geometry theorems, and the computation of cohomology rings of p-groups, which are partially described in this thesis.
The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and no general conclusion is known. Hence we confine ourselves to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is incorporated. Chapter 2 introduces the basic definitions and the relevant notation used in this work.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced that determines a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
Grey-box modelling deals with models which are able to integrate the following two kinds of information: qualitative (expert) knowledge and quantitative (data) knowledge, with equal importance. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (LOLIMOT algorithm), and the development of a new model class with corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After the presentation of some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, in view of their applicability for grey-box modelling. These theories show us directly the way onto a new model class and its identification algorithm. The new model class will be derived from the local model networks through the following modifications: Inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement respectively approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; usage of the sparseness of the matrix of the wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.
Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
At the current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
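The first level of the modular idea is easy to illustrate: Gröbner basis computations over \(\mathbb{F}_p\) avoid coefficient growth and are cheap. The snippet below uses SymPy for concreteness, whereas the thesis's implementation is in SINGULAR; the lifting and verification steps are omitted:

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')
I = [x**2 + y*z - 2, x*z**2 - 3*y - 1, y**2*z - x]

# Characteristic 0: coefficient growth makes this the expensive computation ...
G0 = groebner(I, x, y, z, order='grevlex')

# ... while modular images in characteristic p are fast; several of them are
# later lifted and verified (the thesis adds a second modular level along the
# factors of the minimal polynomial of alpha, which SymPy does not provide).
Gp = groebner(I, x, y, z, order='grevlex', modulus=32003)
```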
This thesis is separated into three main parts: Development of Gaussian and White Noise Analysis, Hamiltonian Path Integrals as White Noise Distributions, Numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis, Donsker's delta function plays a key role; we investigate this generalized function in Chapter 2. Moreover, we show by means of a counterexample that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look at generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematically rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators which are relevant to applications of Feynman integrals: differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of so-called convolution operators. We generalize the central homomorphy theorem to regular generalized functions.
We generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives. With the help of this, we consider products of generalized functions in Chapter 5, and we show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the square-integrable functions.
In Chapter 5 we discuss products of generalized functions and revisit the Wick formula. We investigate under which conditions and on which spaces the Wick formula can be generalized. At the end of the chapter we consider products of Donsker's delta function with a generalized function with the help of a measure transformation; here, questions such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the $T$-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we can show that the generalized expectation (generating functional at zero) gives the Greens function to the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we can show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
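To fix ideas, the object recovered in the free-particle case is the classical propagator (standard textbook formula, quoted only for illustration):
\[
K(x, t \mid x_0, t_0) \;=\; \sqrt{\frac{m}{2\pi i \hbar (t - t_0)}}\, \exp\!\left(\frac{i m (x - x_0)^2}{2 \hbar (t - t_0)}\right),
\]
the Green's function of the free Schrödinger equation \(i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \psi\).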
In Chapter 8 we give an outlook on how the scaling approach, which is successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether this expression solves the Schrödinger equation; at least for quadratic potentials we obtain the correct physics.
In the last chapter we focus on the numerical analysis of polymer chains driven by fractional Brownian motion (fBm). Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve long-range dependence in the interaction of the monomers inside a polymer chain. A Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded-volume effect into account.
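A minimal sketch of the two ingredients named above, fBm sampling through the correlation matrix plus a Metropolis step enforcing a hard-core excluded-volume constraint; the chain length, Hurst index, and hard-core radius are illustrative choices, not values from the thesis:

    import numpy as np

    rng = np.random.default_rng(0)
    H, n, a = 0.7, 100, 0.05   # Hurst index, number of monomers, hard-core radius

    # fBm covariance C(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2 at the
    # monomer indices; its Cholesky factor maps i.i.d. normals to fBm paths.
    s = np.arange(1, n + 1, dtype=float)
    C = 0.5 * (s[:, None]**(2*H) + s[None, :]**(2*H)
               - np.abs(s[:, None] - s[None, :])**(2*H))
    L = np.linalg.cholesky(C)

    def positions(Z):
        # Each Cartesian component of the chain is an independent fBm path.
        return L @ Z

    def admissible(X):
        # Excluded volume: no two monomers closer than the hard-core radius.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min() > a

    # Metropolis with a hard-core energy: resample one monomer's Gaussian
    # innovation from the prior; the move is accepted exactly when the
    # proposed chain is admissible (the Metropolis ratio is 1 there).
    Z = rng.standard_normal((n, 3))
    while not admissible(positions(Z)):
        Z = rng.standard_normal((n, 3))

    for step in range(2000):
        i = rng.integers(n)
        Z_new = Z.copy()
        Z_new[i] = rng.standard_normal(3)
        if admissible(positions(Z_new)):
            Z = Z_new

    X = positions(Z)
    print("squared end-to-end distance:", np.sum((X[-1] - X[0])**2))

A soft excluded-volume potential would instead enter through the usual \(\min(1, e^{-\Delta E})\) acceptance rule rather than the pure accept/reject of admissible states used here.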
The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.
This work aims to study textile structures in the framework of linear elasticity, in order to understand how the structure and material parameters influence the macroscopic homogenized model. More precisely, we are interested in how the textile design parameters, such as the ratio between the fibers' distance and cross-section width, the strength of the contact sliding between yarns, and the partial clamp on the textile boundaries, determine the phenomena that one can see in shear experiments with textiles: the warp and weft yarns first change their in-plane angles and, after some critical shear angle is reached, the textile plate comes out of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left and bottom boundary, made of long thin fibers that cross each other in a periodic pattern. The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption, together with the partial clamp, adds new levels of complexity to the problem due to the anisotropy in the yarns' behavior in the unclamped subdomains of the textile.
The limiting behavior and the macroscopic strain fields are found by passing to the limit with respect to the yarns' thickness \(r\) and the distance \(\varepsilon\) between them, parameters that are asymptotically related. The homogenization and dimension reduction are carried out via the unfolding method, which separates the macroscopic scale from the periodicity cell. In addition to the homogenization, a dimension reduction from a 3D to a 2D problem is applied. Adapting the classical unfolding results both to the anisotropic context and to lattice grids (which are constructed starting from the center lines of the rods crossing each other) yields the main tools we developed to tackle this type of model. They represent the first part of the thesis and are published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
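For reference, on a domain \(\Omega \subset \mathbb{R}^d\) with periodicity cell \(Y = (0,1)^d\) the classical periodic unfolding operator takes the form
\[
\mathcal{T}_{\varepsilon}(\phi)(x, y) \;=\; \phi\Bigl(\varepsilon \Bigl\lfloor \frac{x}{\varepsilon} \Bigr\rfloor + \varepsilon y\Bigr), \qquad x \in \Omega,\; y \in Y,
\]
with the integer part taken componentwise; it separates the macroscopic variable \(x\) from the cell variable \(y\) and turns two-scale limits into ordinary weak limits in \(L^2(\Omega \times Y)\). The adaptations developed in the thesis concern precisely this operator in the anisotropic setting and on lattice grids.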
Given the parameters mentioned above, we then proceed to classify different textile problems, incorporating the results from other works on the topic and thoroughly investigating some others. After the study is conducted, we draw conclusions and give a mathematical explanation concerning the expected approximation of the displacements, the expected solvability of the limit problems, and the phenomena mentioned above. The results can be found in "Asymptotic behavior for textiles with loose contact", which has recently been submitted.
The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches for solving the resulting instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.
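For context, the classical baseline here is the single-allocation \(p\)-hub median problem, which in its quadratic O'Kelly form reads (this is the textbook model, not one of the new formulations of the thesis):
\[
\min \sum_{i,j,k,m} w_{ij}\bigl(\chi\, d_{ik} + \alpha\, d_{km} + \delta\, d_{mj}\bigr)\, z_{ik}\, z_{jm}
\quad \text{s.t.} \quad \sum_{k} z_{ik} = 1 \;\,\forall i, \quad z_{ik} \le z_{kk} \;\,\forall i,k, \quad \sum_{k} z_{kk} = p, \quad z_{ik} \in \{0,1\},
\]
where \(w_{ij}\) are the flows, \(d\) the distances, \(\chi, \alpha, \delta\) the collection, transfer, and distribution cost factors (with inter-hub discount \(\alpha < 1\)), and \(z_{ik} = 1\) assigns node \(i\) to hub \(k\). Multi-period extensions additionally index the decisions by time.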
This dissertation is intended to give a systematic treatment of hypersurface singularities in arbitrary characteristic which provides the necessary tools, theoretical and computational, for the purpose of classification. The thesis consists of five chapters: In Chapter 1, we introduce the background on isolated hypersurface singularities needed for our work. In Chapter 2, we formalize the notion of piecewise-homogeneous grading and discuss thoroughly non-degeneracy in arbitrary characteristic. Chapter 3 is devoted to determinacy and normal forms of isolated hypersurface singularities. In the first part, we give finite determinacy theorems in arbitrary characteristic with respect to right and contact equivalence, respectively. Furthermore, we show that the properties "isolated" and "finitely determined" are equivalent. In the second part, we formalize Arnol'd's key ideas for the computation of normal forms and define the conditions (AA) and (AAC). The last part of Chapter 3 is devoted to the study of normal forms in the general setting of hypersurface singularities, imposing neither condition (A) nor Newton non-degeneracy. In Chapter 4, we present algorithms, implemented in SINGULAR, for the explicit computation of regular bases and normal forms. In Chapter 5, we transfer some classical results on invariants over the field \(\mathbb{C}\) of complex numbers to algebraically closed fields of characteristic zero, by means of the so-called Lefschetz principle.
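To recall the classical statement that Chapter 3 generalizes (quoted here in the characteristic-zero case, for right equivalence): if \(f \in \mathfrak{m}^2 \subset K[[x_1, \ldots, x_n]]\) satisfies
\[
\mathfrak{m}^{k+1} \subseteq \mathfrak{m}^2 \cdot j(f), \qquad j(f) = \Bigl\langle \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \Bigr\rangle,
\]
then \(f\) is \(k\)-determined, i.e. every \(g\) with \(g \equiv f \mod \mathfrak{m}^{k+1}\) is right equivalent to \(f\). The classical proof breaks down in positive characteristic, which is why determinacy bounds valid in arbitrary characteristic have to be established anew.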
In many industrial applications, fast and accurate solutions of linear elliptic partial differential equations are needed as one of the building blocks of more complex problems. The domains are often highly complex, and meshing turns out to be expensive and difficult to carry out with sufficient quality. In such cases, methods based on a regular grid that is not adapted to the boundary offer an attractive alternative. The Explicit Jump Immersed Interface Method (EJIIM) is one of these algorithms. The main interest of this work lies in solving the linear elasticity equations. For this purpose, the existing EJIIM algorithm has been extended to three dimensions. The Poisson equation is considered in parallel throughout, as the most typical representative of elliptic PDEs. During this work it became clear that EJIIM can have very high memory requirements. To overcome this problem, an improvement, the Reduced EJIIM, is proposed. The main theoretical result of this work is the proof of the smoothing property of inverses of elliptic finite difference operators in two and three space dimensions. It is an often-observed phenomenon that the local truncation error may be of lower order along some lower-dimensional manifold without influencing the global convergence order of the solution.
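As a toy illustration of the explicit-jump idea, here is a minimal one-dimensional sketch with made-up jump data (our own simplification, not the 3D elasticity code of the thesis): the standard three-point stencil is kept on the regular grid, and the known jumps of the solution and of its derivative at an immersed interface are moved to the right-hand side as explicit corrections.

    import numpy as np

    # 1D Poisson problem u'' = f on (0, 1) with an immersed interface at
    # x = alpha, where u jumps by J0 and u' jumps by J1 (illustrative data).
    n = 64
    h = 1.0 / (n + 1)
    x = (np.arange(n) + 1) * h
    alpha, J0, J1 = 0.3141, 1.0, 2.0

    # Piecewise exact solution used to verify the scheme:
    #   u(x) = x^2                          for x <  alpha
    #   u(x) = x^2 + J0 + J1*(x - alpha)    for x >= alpha,  so f = 2 everywhere
    def u_exact(s):
        return s**2 + (J0 + J1 * (s - alpha)) * (s >= alpha)

    # Standard three-point Laplacian on the regular, non-adapted grid
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    rhs = 2.0 * np.ones(n)
    rhs[0] -= u_exact(0.0) / h**2    # Dirichlet data moved to the RHS
    rhs[-1] -= u_exact(1.0) / h**2

    # Explicit jump corrections: the stencils at the two nodes adjacent to
    # the interface reach across it, so Taylor expansions of the jump data
    # are added to the RHS (here [u''] = 0, so the corrections are exact).
    k = np.searchsorted(x, alpha) - 1          # x[k] < alpha < x[k+1]
    rhs[k] += (J0 + J1 * (x[k + 1] - alpha)) / h**2
    rhs[k + 1] -= (J0 + J1 * (x[k] - alpha)) / h**2

    u = np.linalg.solve(A, rhs)
    print("max error:", np.max(np.abs(u - u_exact(x))))

Because the exact solution is piecewise quadratic and u'' does not jump, the corrections are exact here and the printed error is at the level of machine precision; in general the corrections are truncated Taylor expansions of the jump data, leaving a lower-order local truncation error near the interface without spoiling the global convergence order, in line with the smoothing property discussed above.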