Kaiserslautern - Fachbereich Mathematik
Refine
Year of publication
Document Type
- Doctoral Thesis (291)
Has Fulltext
- yes (291)
Keywords
- Algebraische Geometrie (6)
- Portfolio Selection (6)
- Finanzmathematik (5)
- Optimization (5)
- Stochastische dynamische Optimierung (5)
- Elastizität (4)
- Homogenisierung <Mathematik> (4)
- Navier-Stokes-Gleichung (4)
- Numerische Mathematik (4)
- Portfolio-Optimierung (4)
- portfolio optimization (4)
- Bewertung (3)
- Computeralgebra (3)
- Erwarteter Nutzen (3)
- Finite-Volumen-Methode (3)
- Gröbner-Basis (3)
- Inverses Problem (3)
- Monte-Carlo-Simulation (3)
- Mosco convergence (3)
- NURBS (3)
- Numerische Strömungssimulation (3)
- Optionspreistheorie (3)
- Portfolio Optimization (3)
- Portfoliomanagement (3)
- Risikomanagement (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Wavelet (3)
- isogeometric analysis (3)
- optimales Investment (3)
- Asymptotic Expansion (2)
- Asymptotik (2)
- B-Spline (2)
- B-splines (2)
- Derivat <Wertpapier> (2)
- Diskrete Fourier-Transformation (2)
- Elasticity (2)
- Endliche Geometrie (2)
- Erdmagnetismus (2)
- FFT (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Geometric Ergodicity (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hochskalieren (2)
- IMRT (2)
- Isogeometrische Analyse (2)
- Kreditrisiko (2)
- Langevin equation (2)
- Lebensversicherung (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Lineare partielle Differentialgleichung (2)
- Local smoothing (2)
- Mathematik (2)
- Mehrskalenanalyse (2)
- Mehrskalenmodell (2)
- Mikrostruktur (2)
- Modulraum (2)
- Multiset Multicover (2)
- Partial Differential Equations (2)
- Partielle Differentialgleichung (2)
- Poröser Stoff (2)
- Regressionsanalyse (2)
- Regularisierung (2)
- Robust Optimization (2)
- Schnitttheorie (2)
- Statistisches Modell (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Transaktionskosten (2)
- Upscaling (2)
- Vektorwavelets (2)
- White Noise Analysis (2)
- curve singularity (2)
- domain decomposition (2)
- duality (2)
- finite volume method (2)
- geomagnetism (2)
- homogenization (2)
- illiquidity (2)
- interface problem (2)
- mesh generation (2)
- optimal investment (2)
- regression analysis (2)
- splines (2)
- "Slender-Body"-Theorie (1)
- (Joint) chance constraints (1)
- 3D image analysis (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- ALE-Methode (1)
- Ableitungsfreie Optimierung (1)
- Adjoint method (1)
- Advanced Encryption Standard (1)
- Agriculture Loan (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraic groups (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraischer Funktionenkörper (1)
- Analysis (1)
- Angewandte Mathematik (1)
- Annulus (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Asiatische Option (1)
- Asset allocation (1)
- Asset-liability management (1)
- Asympotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Ausfallrisiko (1)
- Automorphismengruppe (1)
- Autoregressive Hilbertian model (1)
- Balance sheet (1)
- Barriers (1)
- Basic Scheme (1)
- Basis Risk (1)
- Basket Option (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Bernstein–Gelfand–Gelfand construction (1)
- Beschichtungsprozess (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bilanzstrukturmanagement (1)
- Bildsegmentierung (1)
- Binomialbaum (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Boltzmann Equation (1)
- Bondindizes (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFD (1)
- CHAMP (1)
- CPDO (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Charakter <Gruppentheorie> (1)
- Chi-Quadrat-Test (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Circle Location (1)
- Cluster-Analyse (1)
- Coarse graining (1)
- Cohen-Lenstra heuristic (1)
- Combinatorial Optimization (1)
- Commodity Index (1)
- Complex Structures (1)
- Composite Materials (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer algebra (1)
- Computeralgebra System (1)
- Conditional Value-at-Risk (1)
- Connectivity (1)
- Consistencyanalysis (1)
- Consistent Price Processes (1)
- Constraint Generation (1)
- Construction of hypersurfaces (1)
- Convergence Rate (1)
- Copula (1)
- Coupled PDEs (1)
- Coxeter-Freudenthal-Kuhn triangulation (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Curvature (1)
- Curved viscous fibers (1)
- Cycle Decomposition (1)
- DSMC (1)
- Darstellungstheorie (1)
- Das Urbild von Ideal unter einen Morphismus der Algebren (1)
- Debt Management (1)
- Defaultable Options (1)
- Deformationstheorie (1)
- Degenerate Diffusion Semigroups (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Differential forms (1)
- Differenzenverfahren (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionsprozess (1)
- Discriminatory power (1)
- Dispersionsrelation (1)
- Dissertation (1)
- Diversifikation (1)
- Druckkorrektur (1)
- Dünnfilmapproximation (1)
- EDF observation models (1)
- EM algorithm (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Efficient Reliability Estimation (1)
- Effizienter Algorithmus (1)
- Effizienz (1)
- Eikonal equation (1)
- Elastische Deformation (1)
- Elastoplastizität (1)
- Elektromagnetische Streuung (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Entscheidungsbaum (1)
- Entscheidungsunterstützung (1)
- Enumerative Geometrie (1)
- Erdöl Prospektierung (1)
- Erwartungswert-Varianz-Ansatz (1)
- Essential m-dissipativity (1)
- Expected shortfall (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extrapolation (1)
- Extreme Events (1)
- Extreme value theory (1)
- FEM (1)
- FPM (1)
- Faden (1)
- Fatigue (1)
- Feedfoward Neural Networks (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber suspension flow (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite-Elemente-Methode (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Kopplung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fourier-Transformation (1)
- Fredholmsche Integralgleichung (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- GARCH (1)
- GARCH Modelle (1)
- Galerkin-Methode (1)
- Gamma-Konvergenz (1)
- Garantiezins (1)
- Garbentheorie (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Geo-referenced data (1)
- Geodesie (1)
- Geometrische Ergodizität (1)
- Gewichteter Sobolev-Raum (1)
- Gittererzeugung (1)
- Gleichgewichtsstrategien (1)
- Gradient based optimization (1)
- Granular flow (1)
- Granulat (1)
- Graph Theory (1)
- Gravitationsfeld (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Große Abweichung (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner-basis (1)
- Gyroscopic (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamiltonian Path Integrals (1)
- Handelsstrategien (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Hazard Functions (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Helmholtz Type Boundary Value Problems (1)
- Heston-Modell (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- Hilbert complexes (1)
- Homogenization (1)
- Homologische Algebra (1)
- Hub Location Problem (1)
- Hydrostatischer Druck (1)
- Hyperelastizität (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hyperspektraler Sensor (1)
- Hypocoercivity (1)
- Hysterese (1)
- ITSM (1)
- Idealklassengruppe (1)
- Illiquidität (1)
- Image restoration (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Index Insurance (1)
- Inflation (1)
- Infrarotspektroskopie (1)
- Insurance (1)
- Intensität (1)
- Internationale Diversifikation (1)
- Interpolation Algorithm (1)
- Inverse Problem (1)
- Irreduzibler Charakter (1)
- Isogeometric Analysis (1)
- Ito (1)
- Jacobigruppe (1)
- Kanalcodierung (1)
- Karhunen-Loève expansion (1)
- Kategorientheorie (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Kombinatorik (1)
- Kommutative Algebra (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Kreitderivaten (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurvenschar (1)
- LIBOR (1)
- Lagrangian relaxation (1)
- Laplace transform (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Leading-Order Optimality (1)
- Least-squares Monte Carlo method (1)
- Level set methods (1)
- Lie algebras (1)
- Lie-Typ-Gruppe (1)
- Lippmann-Schwinger Equation (1)
- Lippmann-Schwinger equation (1)
- Liquidität (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- MBS (1)
- MKS (1)
- ML-estimation (1)
- Macaulay’s inverse system (1)
- Magneto-Elastic Coupling (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetostriction (1)
- Marangoni-Effekt (1)
- Market Equilibrium (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Maschinelles Lernen (1)
- Mathematical Finance (1)
- Mathematics (1)
- Mathematische Modellierung (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- McKay conjecture (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrskalen (1)
- Microstructure (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikroelektronik (1)
- Mixed Connectivity (1)
- Mixed integer programming (1)
- Mixed method (1)
- Model-Dynamics (1)
- Modellbildung (1)
- Molekulardynamik (1)
- Momentum and Mas Transfer (1)
- Monte Carlo (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multicriteria optimization (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiresolution Analysis (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- Networks (1)
- Netzwerksynthese (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Niederschlag (1)
- Nilpotent elements (1)
- No-Arbitrage (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Mathematik / Algorithmus (1)
- Numerisches Verfahren (1)
- Oberflächenmaße (1)
- Oberflächenspannung (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimierung (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Order (1)
- Ovoid (1)
- PDE-Constrained Optimization, Robust Design, Multi-Objective Optimization (1)
- POD (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Partikel Methoden (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathwise Optimality (1)
- Pedestrian FLow (1)
- Periodic Homogenization (1)
- Pfadintegral (1)
- Planares Polynom (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- PolyBoRi (1)
- Population Balance Equation (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Preimage of an ideal under a morphism of algebras (1)
- Probust optimization (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Prox-Regularisierung (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- Quadratischer Raum (1)
- Quantile autoregression (1)
- Quasi-Variational Inequalities (1)
- RKHS (1)
- Radial Basis Functions (1)
- Radiotherapy (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rarefied gas (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regularisierung / Stoppkriterium (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Reliability (1)
- Restricted Regions (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikoanalyse (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Management (1)
- Risk Measures (1)
- Risk Sharing (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Räumliche Statistik (1)
- SWARM (1)
- Sandwiching algorithm (1)
- Scale function (1)
- Schaum (1)
- Schaumzerfall (1)
- Schiefe Ableitung (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösu (1)
- Second Order Conditions (1)
- Semi-Markov-Kette (1)
- Semi-infinite optimization (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Shallow Water Equations (1)
- Shape optimization (1)
- Simulation (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Slender body theory (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Solvency II (1)
- Solvency-II-Richtlinie (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spectral Method (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprung-Diffusions-Prozesse (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Statistics (1)
- Steuer (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Processe (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stochastisches Modell (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stornierung (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Structural Reliability (1)
- Strukturiertes Finanzprodukt (1)
- Strukturoptimierung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Subset Simulationen (1)
- Success Run (1)
- Survival Analysis (1)
- Systemidentifikation (1)
- Sägezahneffekt (1)
- Tail Dependence Koeffizient (1)
- Temporal Variational Autoencoders (1)
- Test for Changepoint (1)
- Thermophoresis (1)
- Thin film approximation (1)
- Tichonov-Regularisierung (1)
- Time Series (1)
- Time-Series (1)
- Time-delay-Netz (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Traffic flow (1)
- Transaction costs (1)
- Trennschärfe <Statistik> (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Two-Scale Convergence (1)
- Two-phase flow (1)
- Unreinheitsfunktion (1)
- Untermannigfaltigkeit (1)
- Upwind-Verfahren (1)
- Usage modeling (1)
- Utility (1)
- Value at Risk (1)
- Value at risk (1)
- Value-at-Risk (1)
- Variational autoencoders (1)
- Variationsrechnung (1)
- Vectorfield approximation (1)
- Vektorfeldapproximation (1)
- Vektorkugelfunktionen (1)
- Verschwindungsatz (1)
- Versicherung (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Volatilität (1)
- Volatilitätsarbitrage (1)
- Vorkonditionierer (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Water reservoir management (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weißes Rauschen (1)
- White Noise (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wissenschaftliches Rechnen (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Yaglom limits (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zero-dimensional schemes (1)
- Zopfgruppe (1)
- Zufälliges Feld (1)
- Zweiphasenströmung (1)
- abgeleitete Kategorie (1)
- adaptive algorithm (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alternating minimization (1)
- alternating optimization (1)
- analoge Mikroelektronik (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- applied mathematics (1)
- arbitrary Lagrangian-Eulerian methods (ALE) (1)
- archimedean copula (1)
- asian option (1)
- asymptotic-preserving (1)
- auto-pruning (1)
- basket option (1)
- benders decomposition (1)
- bending strip method (1)
- binomial tree (1)
- blackout period (1)
- bocses (1)
- boundary value problem (1)
- canonical ideal (1)
- canonical module (1)
- changing market coefficients (1)
- characteristic polynomial (1)
- closure approximation (1)
- clustering (1)
- clustering methods (1)
- combinatorics (1)
- composites (1)
- computational finance (1)
- computer algebra (1)
- computeralgebra (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- correlated errors (1)
- coupling methods (1)
- crash (1)
- crash hedging (1)
- credit risk (1)
- curvature (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- derived category (1)
- determinant (1)
- diffusion models (1)
- discrepancy (1)
- diversification (1)
- domain parametrization (1)
- double exponential distribution (1)
- downward continuation (1)
- efficiency loss (1)
- elastoplasticity (1)
- elliptical distribution (1)
- endomorphism ring (1)
- enumerative geometry (1)
- equilibrium strategies (1)
- equisingular families (1)
- face value (1)
- fiber reinforced silicon carbide (1)
- fibre lay-down dynamics (1)
- filtration (1)
- financial mathematics (1)
- finite difference schemes (1)
- finite element method (1)
- finite groups of Lie type (1)
- finite spin group (1)
- first hitting time (1)
- float glass (1)
- flood risk (1)
- fluid structure (1)
- fluid structure interaction (1)
- fluid-structure interaction (FSI) (1)
- forward-shooting grid (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- generic character table (1)
- gitter (1)
- glioblastoma (1)
- good semigroup (1)
- graph p-Laplacian (1)
- gravitation (1)
- group action (1)
- groups of Lie type (1)
- großer Investor (1)
- haptotaxis (1)
- hedging (1)
- heuristic (1)
- hierarchical matrix (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hyperspectal unmixing (1)
- hypocoercivity (1)
- idealclass group (1)
- image analysis (1)
- image denoising (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- infinite-dimensional analysis (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- integer programming (1)
- integral constitutive equations (1)
- intensity (1)
- inverse optimization (1)
- inverse problem (1)
- isogeometric analysis (IGA) (1)
- jump-diffusion process (1)
- kernel (1)
- kinetic equations (1)
- large investor (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- level K-algebras (1)
- level set method (1)
- life insurance (1)
- limit theorems (1)
- linear code (1)
- linear systems (1)
- local-global conjectures (1)
- localizing basis (1)
- longevity bonds (1)
- loss analysis (1)
- low-rank approximation (1)
- machine learning (1)
- macro derivative (1)
- market crash (1)
- market manipulation (1)
- markov model (1)
- martingale optimality principle (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matroid flows (1)
- mean-variance approach (1)
- mesh deformation (1)
- micromechanics (1)
- minimal polynomial (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- modal derivatives (1)
- model order reduction (1)
- moduli space (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- multi scale (1)
- multi-asset option (1)
- multi-class image segmentation (1)
- multi-level Monte Carlo (1)
- multi-phase flow (1)
- multi-scale model (1)
- multicategory (1)
- multifilament superconductor (1)
- multigrid method (1)
- multileaf collimator (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative noise (1)
- multiscale denoising (1)
- multiscale methods (1)
- multivariate chi-square-test (1)
- naive diversification (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- non square linear system solving (1)
- non-desarguesian plane (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear elasticity (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- number fields (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerics (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optiman stopping (1)
- option pricing (1)
- option valuation (1)
- partial differential equation (1)
- partial information (1)
- path-dependent options (1)
- pattern (1)
- penalty methods (1)
- penalty-free formulation (1)
- petroleum exploration (1)
- planar polynomial (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- posterior collapse (1)
- potential (1)
- preconditioners (1)
- pressure correction (1)
- primal-dual algorithm (1)
- probability distribution (1)
- projective surfaces (1)
- proximation (1)
- proxy modeling (1)
- quadrinomial tree (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rare disasters (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- real quadratic number fields (1)
- reconstructions (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regularization methods (1)
- rheology (1)
- risk analysis (1)
- risk measures (1)
- risk reduction (1)
- sampling (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- scaled boundary isogeometric analysis (1)
- scaled boundary parametrizations (1)
- second class group (1)
- seismic tomography (1)
- semigroup of values (1)
- semisprays (1)
- sheaf theory (1)
- similarity measures (1)
- singularities (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- spherical approximation (1)
- sputtering process (1)
- star-shaped domain (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastische Arbitrage (1)
- stop- and play-operator (1)
- stratifolds (1)
- subgradient (1)
- superposed fluids (1)
- surface measures (1)
- surrender options (1)
- surrogate algorithm (1)
- syzygies (1)
- tail dependence coefficient (1)
- tax (1)
- tensions (1)
- time delays (1)
- topological asymptotic expansion (1)
- toric geometry (1)
- torische Geometrie (1)
- total variation (1)
- total variation spatial regularization (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- tropical geometry (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- unimodular certification (1)
- unimodularity (1)
- value semigroup (1)
- valuing contracts (1)
- variable selection (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector spherical harmonics (1)
- vectorial wavelets (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- viscoelastic fluids (1)
- volatility arbitrage (1)
- vortex seperation (1)
- well-posedness (1)
- worst-case (1)
- worst-case scenario (1)
- Äquisingularität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)
Faculty / Organisational entity
Grey-box modelling deals with models which are able to integrate the following two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After presenting some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, in view of their applicability to grey-box modelling. These theories point the way directly to a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; use of the sparseness of the matrix of wavelet coefficients; and identification of the wavelet coefficients with Sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.
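The abstract mentions identifying wavelet coefficients with Sequential Monte Carlo methods. As a hedged illustration only (a toy latent-state model, not the thesis's grey-box scheme; all names and parameter values below are invented for demonstration), a minimal bootstrap particle filter might look like this:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=0.1, obs_std=0.5, seed=0):
    """Toy bootstrap particle filter: latent random-walk state x_t,
    noisy observations y_t = x_t + noise. Returns filtered means."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial prior
    means = []
    for y in observations:
        # propagate particles through the (assumed) state dynamics
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # weight by the observation likelihood
        log_w = -0.5 * ((y - particles) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # resample (multinomial) to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
        means.append(particles.mean())
    return np.array(means)

# tiny usage example with synthetic data
rng = np.random.default_rng(1)
true_x = np.cumsum(rng.normal(0, 0.1, 200))
obs = true_x + rng.normal(0, 0.5, 200)
estimates = bootstrap_particle_filter(obs)
```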
The main theme of this thesis is Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs, like bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notations used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which makes it possible to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
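As an illustrative sketch only (the thesis's own algorithms are not reproduced here; the graph, the color count and the helper names are hypothetical), one can test whether a partial coloring is a defining set by counting how many proper colorings extend it:

```python
from itertools import product

def count_extensions(adj, k, partial):
    """Count proper k-colorings of the graph (adjacency dict) that agree
    with the partial coloring (dict vertex -> color). A partial coloring
    is a defining set iff exactly one proper coloring extends it."""
    free = [v for v in adj if v not in partial]
    count = 0
    for assignment in product(range(k), repeat=len(free)):
        coloring = dict(partial)
        coloring.update(zip(free, assignment))
        if all(coloring[u] != coloring[v] for u in adj for v in adj[u]):
            count += 1
    return count

def is_defining_set(adj, k, partial):
    return count_extensions(adj, k, partial) == 1

# usage: a 4-cycle with 2 colors; fixing one vertex already determines the coloring
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_defining_set(c4, 2, {0: 0}))   # True
```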
This thesis is devoted to constructive module theory of polynomial graded commutative algebras over a field. It treats the theory of Groebner bases (GB), standard bases (SB) and syzygies, as well as algorithms and their implementations. Graded commutative algebras naturally unify exterior and commutative polynomial algebras. They are graded non-commutative, associative unital algebras over fields and may contain zero-divisors. In this thesis we try to make the most of a priori knowledge about their characteristic (super-commutative) structure in developing direct symbolic methods, algorithms and implementations, which are intrinsic to graded commutative algebras and practically efficient. For our symbolic treatment we represent them as polynomial algebras and redefine the product rule in order to allow super-commutative structures and, in particular, to allow zero-divisors. Using this representation we give a nice characterization of a GB and an algorithm for its computation. We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be local, generalizing Mora's algorithm (in a similar fashion as G.-M. Greuel and G. Pfister did by allowing local or mixed monomial orderings) and working with SBs. In this general setting we prove a generalized Buchberger criterion, which shows that syzygies of leading terms play the most important role in SB and syzygy module computations. Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm, which we can formulate particularly close to our implementation.
On the implementation side we have further developed the Singular non-commutative subsystem Plural in order to allow polynomial arithmetic and more involved non-commutative basic computer algebra computations (e.g. S-polynomial, GB) to be easily implementable for specific algebras. At the moment, graded commutative algebra-related algorithms are implemented in this framework. Benchmarks show that our new algorithms and implementation are practically efficient. The developed framework has many applications in various branches of mathematics and theoretical physics. These include the computation of sheaf cohomology, coordinate-free verification of affine geometry theorems, and the computation of cohomology rings of p-groups, which are partially described in this thesis.
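The thesis's GB/SB algorithms for graded commutative algebras and their Plural implementation are not reproduced here; as a hedged point of comparison, an ordinary commutative Groebner basis computation (the classical case the thesis generalizes) can be carried out, for instance, with SymPy:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
# an ordinary (commutative) polynomial ideal, chosen for illustration
I = [x*y - z, y*z - x, x*z - y]
G = groebner(I, x, y, z, order='lex')
print(G)
# Buchberger's criterion: all S-polynomials of the basis elements reduce
# to zero modulo G, certifying that G is a Groebner basis of the ideal.
```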
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific kind of map, so-called risk measures. There exist several forms and definitions of them. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of risk measure, which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure, which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk measures. Then, each of the aforementioned contributions covers a separate chapter. In the following, the content of these three chapters is explained in more detail:
Risk measures can be used to calculate pricing bounds for financial derivatives. In Chapter 2, we deal with the pricing of European options in an incomplete financial market model. We use the common risk measures Value-at-Risk and Expected Shortfall to define good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may exhibit non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.
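As a hedged numerical illustration of the two risk measures named above (not of the thesis's pricing bounds; the sample model and parameters are made up), Value-at-Risk and Expected Shortfall of a simulated loss distribution can be estimated empirically as follows:

```python
import numpy as np

def var_es(losses, alpha=0.99):
    """Empirical Value-at-Risk and Expected Shortfall of a loss sample."""
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()   # mean loss in the tail beyond VaR
    return var, es

# toy example: loss of a long position over one period,
# with a log-normally distributed gross return (made-up parameters)
rng = np.random.default_rng(0)
gross_return = rng.lognormal(mean=0.05, sigma=0.25, size=100_000)
losses = 1.0 - gross_return          # loss per unit of initial capital
print(var_es(losses, alpha=0.99))
```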
In Chapter 3, we construct the MAI risk measure. To this end, recall that a multi-asset risk measure describes the minimal external capital that has to be raised and invested into multiple eligible assets to make a future financial position acceptable, i.e., such that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows one to secure the financial position by external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., one imposed by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
We work in the setting of time series of financial returns. Our starting point is the class of GARCH models, which are very common in practice. We introduce the possibility of having crashes in such GARCH models. A crash will be modeled by drawing innovations from a distribution with much mass on extremely negative events, while in "normal" times the innovations will be drawn from a normal distribution. The probability of a crash is modeled to be time dependent, depending on the past of the observed time series and/or exogenous variables. The aim is a splitting of risk into "normal" risk coming mainly from the GARCH dynamic and extreme event risk coming from the modeled crashes. We will present several incarnations of this modeling idea and give some basic properties like the conditional first and second moments. For the special case that we just have an ARCH dynamic we can establish geometric ergodicity and, thus, stationarity and mixing conditions. Also in the ARCH case we formulate (quasi) maximum likelihood estimators and can derive conditions for consistency and asymptotic normality of the parameter estimates. In a special case of genuine GARCH dynamic we are able to establish L_1-approximability and hence laws of large numbers for the process itself. We can formulate a conditional maximum likelihood estimator in this case, but cannot completely establish consistency for it. On the practical side, we examine the outcome of estimating models with genuine GARCH dynamic and compare the result to classical GARCH models. We apply the models to Value at Risk estimation and see that, in comparison to the classical models, many of ours seem to work better, although we chose the crash distributions quite heuristically.
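A hedged sketch of the modelling idea (a plain GARCH(1,1) recursion whose innovations are drawn, with a time-varying probability, from a heavy left tail instead of a standard normal; all parameter values and the crash-probability rule are invented for illustration and do not come from the thesis):

```python
import numpy as np

def simulate_garch_with_crashes(n=2000, omega=1e-6, alpha=0.08, beta=0.90,
                                base_crash_prob=0.01, crash_mean=-5.0,
                                crash_std=2.0, seed=0):
    """GARCH(1,1): sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}.
    Innovations z_t ~ N(0,1) in 'normal' times and from a very negative
    distribution in 'crash' times; the crash probability reacts to the
    previous return (illustrative choice)."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.full(n, omega / (1 - alpha - beta))   # unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        # crash probability rises after negative returns (time dependence)
        p_crash = base_crash_prob * (2.0 if eps[t - 1] < 0 else 1.0)
        if rng.random() < p_crash:
            z = rng.normal(crash_mean, crash_std)     # extreme-event innovation
        else:
            z = rng.normal(0.0, 1.0)                  # normal-times innovation
        eps[t] = np.sqrt(sigma2[t]) * z
    return eps, sigma2

returns, variances = simulate_garch_with_crashes()
```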
In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived. A proper algorithm is formulated and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium. This plays an essential role for the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the extreme cases, i.e. the fluid-fluid interface and the fluid-impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions. The matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced. Corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented. For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
In this dissertation we consider mesoscale based models for flow driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions a novel moments based description of fibre orientation state is introduced and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation state dependent non-Newtonian constitutive relation and a linear first order hyperbolic system for the fibre orientation variables, has been analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution has been demonstrated locally in time for sufficiently small data. The closure relations for the semiflexible fibre suspension model are studied numerically. A finite volume based discretization of the suspension flow is given and the numerical results for several two and three dimensional domains with different parameter values are presented and discussed.
In the present work, we investigated how to correct the questionable normality, linearity and quadratic approximation assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness and the stochastic feature of the volatility of the market values of financial instruments, the constant volatility hypothesis widely used by existing Value-at-Risk approaches has also been investigated and corrected, and the tails of the financial return distributions have been handled via Generalized Pareto or Extreme Value Distributions. Artificial Neural Networks have been combined with Extreme Value Theory in order to build consistent and nonparametric Value-at-Risk measures without the need to make any of the questionable assumptions specified above. For that, either autoregressive models (AR-GARCH) have been used or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we have proved some new results extending White's Artificial Neural Network denseness results to unbounded random variables and provide a generalisation of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value Distributions have been used. The new Artificial Neural Network denseness results enable us to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness result uses the Sobolev metric space \(L^m(\mu)\) for some \(m \geq 1\) and some probability measure \(\mu\), and it holds for a certain subclass of square integrable functions. The Fourier transform and the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], have been the main tools to establish the extension of White's neural network denseness results. To illustrate the goodness and level of accuracy of the new denseness results, we were able to demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
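As a hedged sketch of the peaks-over-threshold idea mentioned above (a classical GPD tail estimator of the McNeil-Frey type, not the thesis's neural-network approach; the threshold rule and data are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, threshold, alpha=0.99):
    """Estimate Value-at-Risk at level alpha by fitting a Generalized
    Pareto Distribution to the losses exceeding a chosen threshold."""
    losses = np.asarray(losses)
    exceedances = losses[losses > threshold] - threshold
    xi, _, beta = genpareto.fit(exceedances, floc=0)   # shape xi, scale beta
    n, n_u = len(losses), len(exceedances)
    # standard POT quantile formula for the tail beyond the threshold
    return threshold + beta / xi * ((n / n_u * (1 - alpha)) ** (-xi) - 1)

# toy usage with heavy-tailed synthetic losses (Student-t, made-up parameters)
rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=10_000)
print(pot_var(losses, threshold=np.quantile(losses, 0.95), alpha=0.99))
```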
Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)
We consider a continuous-time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. E.g., if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. Also in other cases, like for an underlying Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model where the filter is known and can be computed. E.g., we use statistical linearization for non-linear drift processes, finite-state Markov chain approximations for the drift process and/or diffusion approximations for small jumps in the noise term. In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular in view of the performance of the corresponding utility maximizing strategies.
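A hedged sketch of the Kalman filtering step mentioned above, for an Euler-discretized Ornstein-Uhlenbeck drift observed through noisy returns (the discretization and all parameter names are my own simplification, not the thesis's model):

```python
import numpy as np

def kalman_filter_ou_drift(returns, dt, kappa, theta, sigma_mu, sigma,
                           mu0=0.0, p0=1.0):
    """Scalar Kalman filter for the latent drift mu_t following an
    Ornstein-Uhlenbeck process, observed via returns
      r_t = mu_{t-1} * dt + sigma * sqrt(dt) * noise."""
    a = 1.0 - kappa * dt                 # state transition (Euler scheme)
    c = kappa * theta * dt               # state offset
    q = sigma_mu ** 2 * dt               # state noise variance
    h = dt                               # observation coefficient (scalar)
    r_var = sigma ** 2 * dt              # observation noise variance
    mu, p = mu0, p0
    estimates = []
    for r in returns:
        # prediction step
        mu_pred = a * mu + c
        p_pred = a * a * p + q
        # update step with the observed return
        s = h * h * p_pred + r_var       # innovation variance
        k = p_pred * h / s               # Kalman gain
        mu = mu_pred + k * (r - h * mu_pred)
        p = (1.0 - k * h) * p_pred
        estimates.append(mu)
    return np.array(estimates)
```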
In this dissertation a model of melt spinning (by Doufas, McHugh and Miller) has been investigated. The model (DMM model), which takes into account the effects of inertia, air drag, gravity and surface tension in the momentum equation, and heat exchange between air and fibre surface, viscous dissipation and crystallization in the energy equation, also has a complicated coupling with the microstructure. The model has two parts, before onset of crystallization (BOC) and after onset of crystallization (AOC), with the point of onset of crystallization as the unknown interface. Mathematically, the model has been formulated as a free boundary value problem. Changes have been introduced in the model with respect to the air drag and an interface condition at the free boundary. The mathematical analysis of the nonlinear, coupled free boundary value problem shows that the solution of this problem depends heavily on initial conditions and parameters, which renders a global analysis impossible. But by defining a physically acceptable solution, it is shown that, for a more restricted set of initial conditions, if a unique solution exists for the IVP BOC then it is physically acceptable. For this, the important property of the positivity of the conformation tensor variables has been proved. Further, it is shown that if a physically acceptable solution exists for the IVP BOC, then under certain conditions it also exists for the IVP AOC. This gives an important relation between the initial conditions of the IVP BOC and the existence of a physically acceptable solution of the IVP AOC. The melt spinning process has also been investigated in the framework of classical mechanics. A Hamiltonian formulation has been derived, with appropriate Poisson brackets for the 1-d elongational flow of a viscoelastic fluid. From the Hamiltonian, cross-sectionally averaged mass and momentum balance equations of melt spinning can be derived along with the microstructural equations. These studies show that the complicated problem of melt spinning can also be studied within the framework of classical mechanics. This work provides the basic groundwork on which further investigations into the dynamics of a fibre could be carried out. The free boundary value problem has been solved numerically using a shooting method. Matlab routines have been used to solve the IVPs arising in the problem. Some numerical case studies have been done to study the sensitivity of the ODE systems with respect to the initial guess and parameters. These experiments support the analysis and throw more light on the stiff nature and ill-posedness of the ODE systems. To validate the model, simulations have been performed on sets of data provided by the company. A comparison of numerical results (axial velocity profiles) with the experimental profiles provided by the company has been carried out. Numerical results have been found to be in excellent agreement with the experimental profiles.
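A hedged, generic sketch of the shooting-method idea used above, applied to a toy two-point boundary value problem rather than the melt-spinning free boundary problem (equation, interval and tolerances are invented for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# toy BVP: y'' + y = 0 on [0, 1] with y(0) = 0 and y(1) = 1;
# shoot on the unknown initial slope s = y'(0)
def shoot(s):
    def rhs(x, u):                   # first-order system u = (y, y')
        return [u[1], -u[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] - 1.0        # mismatch at the right boundary

s_star = brentq(shoot, 0.1, 5.0)     # exact answer: 1 / sin(1) ≈ 1.1884
print(s_star)
```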
The main purpose of the study was to improve the physical properties of models of compressed materials, especially fibrous materials. Fibrous materials are finding increasing application in industry, and most of these materials are compressed for different applications. In such situations we are interested in how the fibres are arranged, e.g. with which distribution. For given materials it is possible to obtain a three-dimensional image via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the directions at points in the fibres, can be determined from the image by other methods, it is beneficial to improve the physical properties by changing these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere, which generalizes several well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, and provides a better fit for some models. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation-Maximization setting. Furthermore, we introduce a non-parametric estimation within the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
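A hedged illustration of maximum-likelihood estimation for directional data in the classical von Mises-Fisher case only (not the thesis's new distribution or its mixture model), using the common approximation for the concentration parameter:

```python
import numpy as np

def fit_von_mises_fisher(samples):
    """ML estimation for the von Mises-Fisher distribution on the unit
    sphere: mean direction and (approximate) concentration kappa,
    via the standard approximation kappa ~ R*(d - R^2)/(1 - R^2)."""
    x = np.asarray(samples)                      # shape (n, d), unit vectors
    n, d = x.shape
    resultant = x.sum(axis=0)
    mu = resultant / np.linalg.norm(resultant)   # ML mean direction
    r_bar = np.linalg.norm(resultant) / n        # mean resultant length
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa

# usage with synthetic, roughly concentrated directions (illustrative only)
rng = np.random.default_rng(0)
raw = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.3, size=(500, 3))
unit = raw / np.linalg.norm(raw, axis=1, keepdims=True)
print(fit_von_mises_fisher(unit))
```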
This thesis is primarily motivated by a project with Deutsche Bahn about offer preparation in rail freight transport. At its core, a customer should be offered three train paths to choose from in response to a freight train request. As part of this cooperation with DB Netz AG, we investigated how to compute these train paths efficiently. They should all be "good" but also "as different as possible". We solved this practical problem using combinatorial optimization techniques.
At the beginning of this thesis, we describe the practical aspects of our research collaboration. The more theoretical problems, which we consider afterwards, are divided into two parts.
In Part I, we deal with a dual pair of problems on directed graphs with two designated end-vertices. The Almost Disjoint Paths (ADP) problem asks for a maximum number of paths between the end-vertices any two of which have at most one arc in common. In comparison, for the Separating by Forbidden Pairs (SFP) problem we have to select as few arc pairs as possible such that every path between the end-vertices contains both arcs of a chosen pair. The main results of this more theoretical part are the classification of ADP as an NP-complete problem and of SFP as a \(\Sigma_2^p\)-complete problem.
In Part II, we address a simplified version of the practical project: the Fastest Path with Time Profiles and Waiting (FPTPW) problem. In a directed acyclic graph with durations on the arcs and time windows at the vertices, we search for a fastest path from a source to a target vertex. We are only allowed to be at a vertex within its time windows, and we are only allowed to wait at specified vertices. After introducing departure-duration functions, we develop solution algorithms based on these. We consider special cases that significantly reduce the complexity or are of practical relevance. Furthermore, we show that even this simplified problem is in general NP-hard, and we investigate the complexity status more closely.
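A heavily simplified, hedged sketch related to the FPTPW setting (a DAG with one time window per vertex, waiting allowed everywhere, and a fixed departure time; this toy earliest-arrival computation ignores the thesis's departure-duration functions and waiting restrictions, and all data are made up):

```python
import math

def earliest_arrival(order, arcs, windows, source, start_time):
    """order: vertices in topological order; arcs: dict u -> [(v, duration)];
    windows: dict v -> (open, close); waiting is allowed at every vertex.
    Returns earliest feasible arrival times (math.inf = unreachable)."""
    t = {v: math.inf for v in order}
    t[source] = max(start_time, windows[source][0])
    if t[source] > windows[source][1]:
        return t                                   # source window already missed
    for u in order:
        if t[u] == math.inf:
            continue
        for v, dur in arcs.get(u, []):
            arrival = max(t[u] + dur, windows[v][0])   # wait until v opens
            if arrival <= windows[v][1]:
                t[v] = min(t[v], arrival)
    return t

# tiny usage example (made-up network)
order = ["s", "a", "b", "t"]
arcs = {"s": [("a", 2), ("b", 5)], "a": [("t", 4)], "b": [("t", 1)]}
windows = {"s": (0, 10), "a": (3, 6), "b": (0, 20), "t": (0, 30)}
print(earliest_arrival(order, arcs, windows, "s", 0))   # t is reached at time 6
```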
The thesis is concerned with multiscale approximation by means of radial basis functions on hierarchically structured spherical grids. A new approach is proposed to construct a biorthogonal system of locally supported zonal functions. By use of this biorthogonal system of locally supported zonal functions, a spherical fast wavelet transform (SFWT) is established. Finally, based on the wavelet analysis, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators are shown to be efficiently and economically solvable.
In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.
The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has attracted the continuous attention of algebraic geometers since then. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer the questions in such a way that we present numerical conditions depending on invariants of the divisor D and of the singularity types S1,...,Sr, which ensure a positive answer. The main conditions which we derive will be of the type inv(S1)+...+inv(Sr) < aD^2+bD.K+c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case that S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves and geometrically ruled surfaces.
Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. To compute it, one uses an algorithm by Zassenhaus who used it in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later Musser generalized it to the multivariate case. Subsequently, the algorithm was refined and improved.
In this work every step of the algorithm is described as well as the problems that arise in these steps.
In doing so, we restrict to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory which is part of the computer algebra system Singular.
Besides, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to compute Hensel lifting and recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capabilities of the implementation is given, which includes benchmark examples as well as randomly generated input that is meant to give an impression of the average performance.
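For illustration only, the following is a minimal SymPy sketch of multivariate factorization over the three coefficient domains mentioned above. This is merely an assumed, simplified analogue of the functionality discussed in the thesis; the actual implementation is the C++ library factory inside Singular, not SymPy.

# Minimal sketch: multivariate factorization over Z/Q, F_q and Q(alpha) with SymPy.
# Illustrative analogue only; the thesis' implementation lives in Singular's "factory".
from sympy import symbols, factor, sqrt

x, y = symbols("x y")

# over the rationals (coefficients in Z/Q)
print(factor(x**2 - y**2))                       # (x - y)*(x + y)

# over the finite field F_2 (coefficients reduced modulo 2)
print(factor(x**2 + y**2, modulus=2))            # (x + y)**2

# over the number field Q(alpha) with alpha = sqrt(2)
print(factor(x**2 - 2*y**2, extension=sqrt(2)))  # (x - sqrt(2)*y)*(x + sqrt(2)*y)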
Extensions of Shallow Water Equations
The subject of the thesis of Michael Hilden is the simulation of floods in urban areas. In the case of heavy rain events, water can flow out of the overloaded sewer system onto the street and damage the connected houses. The reliable simulation of water flowing out of a manhole ("manhole") and over a curb ("curb") is crucial for the assessment of flood risks. The incompressible 3D Navier-Stokes equations (3D-NSE) describe the free surface flow of water accurately, but require expensive computations. Therefore, the less CPU-intensive (by a factor of about 1/100) Shallow Water Equations (SWE) are usually applied in hydrology. They can be derived from the 3D-NSE under the assumption of a hydrostatic pressure distribution via depth-integration and are applied successfully in particular to simulations of river flow processes. The SWE computations of the flow problems "manhole" and "curb" differ from the 3D-NSE results. Thus, the SWE need to be extended appropriately to give reliable forecasts for flood risks in urban areas at reduced computational effort. These extensions are developed based on physical effects not accounted for in the classical SWE. In one extension, a vortex layer on the ground is separated from the main flow, representing its new bottom. In a further extension, the hydrostatic pressure distribution is corrected by additional terms based on approximations of the vertical velocities and their interaction with the flow. These extensions raise the quality of the SWE results for these flow problems up to the quality level of the NSE results with only a moderate increase in CPU effort.
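For reference, the classical one-dimensional shallow water equations obtained by depth-integration under the hydrostatic pressure assumption read (with water depth \(h\), depth-averaged velocity \(u\), gravitational acceleration \(g\) and bottom topography \(b\); the extensions developed in the thesis add further correction terms to this system):
\begin{align}
\partial_t h + \partial_x (hu) &= 0,\\
\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}\,g\,h^2\right) &= -\,g\,h\,\partial_x b.
\end{align}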
The famous Mather-Yau theorem in singularity theory yields a bijection between isomorphism classes of germs of isolated hypersurface singularities and isomorphism classes of their respective Tjurina algebras.
This result has been generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, the objective of this thesis is to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result of Gaffney-Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, building on the theory of graded analytic algebras by G. Scheja and H. Wiebe. We use this theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin, and we deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity play an important role in the investigation of the aforementioned singularities; both were introduced by K. Saito in 1980. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm which decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.
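For orientation, the Tjurina algebra referred to above is the standard one: for a germ \(f \in \mathcal{O}_{\mathbb{C}^n,0}\) with an isolated singularity it is
\begin{align}
T_f \;=\; \mathcal{O}_{\mathbb{C}^n,0}\,\big/\,\big(f,\ \tfrac{\partial f}{\partial x_1},\,\ldots,\,\tfrac{\partial f}{\partial x_n}\big),
\end{align}
and the Mather-Yau theorem states that the isomorphism class of the germ is determined by the isomorphism class of \(T_f\) as a \(\mathbb{C}\)-algebra.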
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in the development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, the development of materials that are favorable regarding, for example, weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in the automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly characterized. One method to do so is by so-called in situ tests, where the mechanical test is combined with an image acquisition technique such as computed tomography.
The resulting time series of volume images capture the delicate and individual nature of each material. The objective of this thesis is to present and develop methods that unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made great efforts to solve the problems we are facing. Digital Volume Correlation (DVC) on the one hand has been developed by material scientists and has been applied in many different contexts in mechanical testing, but almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR) on the other hand does produce voxel-precise estimates, but is limited to very smooth motion estimates.
The unification of both families, DVC and MIR, under one roof is therefore illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family alone is sufficient to deal with all of the problems that come with motion estimation in in situ tests. We then turn to a third community in motion estimation, namely optical flow, which is normally only applied in two dimensions. Nevertheless, within this community algorithms have been developed that meet many of our requirements: strategies for large displacements exist, as do methods that resolve jumps, and in addition the displacement is always calculated on the pixel level. This thesis therefore proceeds by extending some of the most successful methods to 3D.
To ensure the competitiveness of our approach, the last part of this thesis provides a detailed evaluation of the proposed extensions. We focus on three types of materials, foam, fibre systems and concrete, and use simulated and real in situ tests to compare the optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields - a novelty in this area. We conclude the thesis with two specialized applications of our algorithm, which show how the voxel-precise displacement fields serve as useful information for engineers investigating their materials.
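As a concrete illustration of the optical-flow family discussed above, the following is a minimal 2D Horn-Schunck iteration in NumPy. It is a simplified sketch only; the methods developed in the thesis extend such variational schemes to 3D volume images and add strategies for large displacements and jumps.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I0, I1, alpha=10.0, n_iter=200):
    """Minimal 2D Horn-Schunck optical flow (illustrative sketch only)."""
    I0 = I0.astype(float); I1 = I1.astype(float)
    # finite-difference estimates of the image gradients and the temporal derivative
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    Ix = convolve(I0, kx) + convolve(I1, kx)
    Iy = convolve(I0, ky) + convolve(I1, ky)
    It = convolve(I1 - I0, np.ones((2, 2)) * 0.25)
    # averaging kernel for the neighbourhood mean of the flow field
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(I0); v = np.zeros_like(I0)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # closed-form update from the Euler-Lagrange equations of the HS energy
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v   # one displacement vector per pixel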
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problem of parameter estimation and portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we specifically aim at maximizing the utility of terminal wealth and focus on the logarithmic and power utility functions. We consider expert opinions as an additional observation besides the stock returns, in order to improve the estimation of the drift and volatility parameters at different times and for the purpose of asset optimization.

In the first part, we assume that the drift term has a fixed distribution and that the volatility term is constant. We use the Kalman filter to combine the two types of observations. Moreover, we discuss how to transform this problem into a non-linear problem with Gaussian noise when the expert opinion is uniformly distributed; the generalized Kalman filter is then used to estimate the parameters.

In the second part, we assume that drift and volatility of the asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate the various quantities required by the EM algorithm. In addition, we focus on different ways to combine the two observations, expert opinions and asset returns. First, we use a linear combination and discuss how a logistic regression model can be used to quantify expert opinions. Second, we assume that the expert opinions follow a mixed Dirichlet distribution; under this assumption, we use another probability measure to estimate the unnormalized filters needed for the EM algorithm.

In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how approximately optimal portfolio strategies can be obtained in different observation settings. We derive the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different observation settings in a simulation study.
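A minimal sketch of the filtering idea from the first part: a scalar Kalman-type (conjugate Gaussian) update that fuses noisy return observations with occasional noisy expert opinions on a constant but unknown drift. All names and noise levels below are illustrative assumptions and not the models of the thesis.

import numpy as np

rng = np.random.default_rng(0)

mu_true, sigma = 0.05, 0.2       # unknown drift, known volatility (assumed values)
m, P = 0.0, 1.0                  # prior mean and variance of the drift estimate

def kalman_update(m, P, obs, obs_var):
    """Conjugate Gaussian update of the drift estimate given one observation."""
    K = P / (P + obs_var)        # Kalman gain
    return m + K * (obs - m), (1.0 - K) * P

for t in range(250):
    ret = mu_true + sigma * rng.standard_normal()        # noisy return observation
    m, P = kalman_update(m, P, ret, sigma**2)
    if t % 20 == 0:                                       # occasional expert opinion
        expert = mu_true + 0.05 * rng.standard_normal()   # expert noise level assumed
        m, P = kalman_update(m, P, expert, 0.05**2)

print(m, P)   # the posterior mean approaches the true drift while the variance shrinks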
Fibre reinforced polymers (FRPs) are among the newest of modern materials. In FRPs a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, relative to its weight, very strong.

The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs we need to know the fibre length distribution in the resulting material. The classical method for this is ashing, where a sample of the material is burned and thereby destroyed. We instead look at CT images of the material. In the first part we assume that we have a full fibre segmentation, so that we can fit a cylinder to each individual fibre. In this setting we identified two problems: sampling bias and censoring.

Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To solve this problem we used a reweighted fibre length distribution, where the weight depends on the sampling rule used.

For the censoring we used an EM algorithm. The EM algorithm is used to obtain a maximum likelihood estimator in cases of missing or censored data. For this setting we deduced conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further found conditions under which, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.

Since obtaining a full fibre segmentation is hard, we further looked at the fibre endpoint process. The fibre endpoint process can be modelled as a Neyman-Scott cluster process. Using this model we can derive a formula for the reduced second moment measure of the process and use it to obtain an estimator for the fibre length distribution.

We investigated all estimators in simulation studies, with particular attention to their performance in the case of non-overlapping fibres.
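A toy analogue of the censoring step: EM for right-censored exponential lifetimes, where the E-step imputes the expected length of each censored observation using lack of memory. This is purely illustrative; the thesis works with fibre length distributions and window censoring, not with this exponential toy model.

import numpy as np

def em_censored_exponential(x, censored, n_iter=100):
    """EM for Exp(rate) with right-censored data.
    x: observed values (true value if uncensored, censoring threshold otherwise)
    censored: boolean mask, True where the observation is censored."""
    rate = 1.0 / np.mean(x)                       # crude initial guess
    n = len(x)
    for _ in range(n_iter):
        # E-step: by lack of memory, E[X | X > c] = c + 1/rate
        imputed = np.where(censored, x + 1.0 / rate, x)
        # M-step: maximum likelihood estimate for the completed data
        rate = n / np.sum(imputed)
    return rate

rng = np.random.default_rng(1)
true_rate = 2.0
samples = rng.exponential(1.0 / true_rate, size=5000)
threshold = 1.0                                    # observations above this are censored
censored = samples > threshold
x = np.minimum(samples, threshold)
print(em_censored_exponential(x, censored))        # close to true_rate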
A classical conjecture in the representation theory of finite groups, the McKay conjecture, states that for any finite group and prime number p the number of complex irreducible characters of degree prime to p is equal to the number of complex irreducible characters of degree prime to p of the normalizer of a p-Sylow subgroup. Recently a reduction theorem was proved by Isaacs, Malle and Navarro: If all simple groups are “good”, then the McKay conjecture holds. In this work we are concerned with the problem of goodness for finite groups of Lie type in their defining characteristic. A simple group is called “good” if certain equivariant bijections between the involved character sets exist. We present a structural approach to the construction of such a bijection by utilizing the so-called “Steinberg-Map”. This yields very natural bijections and we prove most of the desired properties.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
In markets modeled in this way several phenomena become evident. Perhaps the most important one is the so-called push-out effect: when customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study about the use of neural networks in the regression of equilibrium insurance premiums it is shown that this regression is quite robust and the risk of overfitting can almost be excluded -- as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as in algebraic geometry, number theory and information security. In fact, they make up one of the best choices in public key cryptography and more recently in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks - a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol, which is also capable of resisting subexponential quantum attacks and would be the first of its kind.
This dissertation consists of two main parts: new results in Gaussian analysis and their application to the theory of path integrals. The central result of the first part is the characterization of all regular distributions that can be multiplied with Donsker's delta. An explicit formula for such products, the so-called Wick formula, is given. In the applications part of this work, a complex-scaled Feynman-Kac formula and its associated kernels are first established with the help of this Wick formula. Furthermore, Feynman integrands for new classes of potentials are constructed as white noise distributions.
This thesis treats problems in the numerical solution of partial differential equations by finite difference methods within an algebraic framework. Both theoretical results and the practical implementation using the systems SINGULAR and QEPCAD are presented. The algebraic methods address two distinct aspects of finite difference schemes: the generation of schemes by means of Gröbner bases and the subsequent stability analysis by quantifier elimination via cylindrical algebraic decomposition. The first three chapters review the necessary notions from computer algebra, explain the basics of the numerical convergence theory of finite difference schemes, and sketch the application of the CAD algorithm to quantifier elimination. Chapter 4 develops, starting from the underlying context, the formulation of and the required conditions on difference schemes, which algebraically are by definition an ideal in a polynomial ring. Besides the practical manageability of the objects, the emphasis is on the greatest possible generality of the definitions. Equivalent ways of generating schemes as well as uniqueness properties under very special conditions on the approximations used are shown. The application of the CAD algorithm to the estimation of the symbol of a scheme is explained. The fifth chapter describes the SINGULAR library findiff.lib, which guarantees the interplay between SINGULAR and QEPCAD and allows a complete automation of the generation and stability analysis of a finite difference method.
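As a standard illustration of the kind of quantified statement handled by such a stability analysis: the explicit scheme for the heat equation \(u_t = u_{xx}\) with mesh ratio \(\lambda = \tau/h^2\) has symbol \(g(\xi) = 1 - 4\lambda \sin^2(\xi/2)\), and von Neumann stability is the quantified formula
\begin{align}
\forall\,\xi \in [0,2\pi):\quad \big|\,1 - 4\lambda \sin^2(\xi/2)\,\big| \;\le\; 1,
\end{align}
which quantifier elimination reduces (for \(\lambda > 0\)) to the familiar quantifier-free condition \(\lambda \le \tfrac{1}{2}\). The general example here is not taken from the thesis itself; it only indicates the type of output such a CAD-based analysis produces.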
Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)
This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to 3, which turns the time integration into a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. To make the nonlinear model reduction efficient, we approximate the nonlinearity following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
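A minimal sketch of the proper orthogonal decomposition step described above: collect solution snapshots, compute a truncated SVD, and project. This is illustrative only; the thesis applies POD to second-order mixed finite element systems and combines it with a lookup approximation of the nonlinearity.

import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snap) matrix of solution vectors.
    Returns the first r left singular vectors as the reduced basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# toy data: 1000 degrees of freedom, 50 snapshots of a smooth field
n, k = 1000, 50
xs = np.linspace(0.0, 1.0, n)[:, None]
ts = np.linspace(0.0, 1.0, k)[None, :]
S = np.sin(np.pi * xs) * np.cos(2 * np.pi * ts) + 0.1 * np.sin(3 * np.pi * xs) * ts

V, sing_vals = pod_basis(S, r=5)
# project onto the reduced basis and measure the reconstruction error
S_red = V @ (V.T @ S)
print(np.linalg.norm(S - S_red) / np.linalg.norm(S))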
We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make possible the incorporation of a simulation of the casting process into the optimization: In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update. A level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
In this thesis, the quasi-static Biot poroelasticity system in bounded multilayered domains in one and three dimensions is studied. In more detail, in the one-dimensional case a finite volume discretization for the Biot system with discontinuous coefficients is derived. The discretization results in a difference scheme with harmonic averaging of the coefficients. A detailed theoretical analysis of the obtained discrete model is performed. Error estimates, which establish convergence rates for both the primary and the flux unknowns, are derived. Besides, modified and more accurate discretizations, which can be applied when the interface position coincides with a grid node, are obtained. These discretizations yield second order convergence of the fluxes of the problem. Finally, a solver for the resulting system of linear equations is developed and extensively tested. A number of numerical experiments, which confirm the theoretical considerations, are performed. In the three-dimensional case, the finite volume discretization of the system involves the construction of special interpolating polynomials in the dual volumes. These polynomials are derived so that they satisfy the same continuity conditions across the interface as the original system of PDEs. This technique leads to a difference scheme which provides accurate computation of the primary as well as of the flux unknowns, including at the points adjacent to the interface. Numerical experiments based on the obtained discretization show second order convergence for auxiliary problems with known analytical solutions. A multigrid solver which incorporates the features of the discrete model is developed in order to solve efficiently the linear system produced by the finite volume discretization of the three-dimensional problem. The crucial point is to derive problem-dependent restriction and prolongation operators. Such operators are a well-known remedy for scalar PDEs with discontinuous coefficients. Here, these operators are derived for the system of PDEs, taking into account the interdependence of the different unknowns within the system. In the derivation, the interpolating polynomials from the finite volume discretization are employed again, thus linking the discretization and the solution processes. The developed multigrid solver is tested on several model problems. Numerical experiments show that, due to the proper problem-dependent intergrid transfer, the multigrid solver is robust with respect to the discontinuities of the coefficients of the system. In the end, the poroelasticity system with discontinuous coefficients is used to model a real problem. The Biot model describing this problem is treated numerically, i.e., discretized by the developed finite volume techniques and then solved by the constructed multigrid solver. Physical characteristics of the process, such as the displacement of the skeleton, the pressure of the fluid, and the components of the stress tensor, are calculated and presented at certain cross-sections.
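The harmonic averaging mentioned above is the standard device for flux continuity across a coefficient jump. Shown here in its scalar one-dimensional form for orientation (the thesis derives the analogous construction for the coupled Biot system): on a uniform grid with cell values \(k_i\), \(k_{i+1}\) of a discontinuous coefficient, the interface coefficient and flux are approximated by
\begin{align}
k_{i+1/2} \;=\; \frac{2\,k_i\,k_{i+1}}{k_i + k_{i+1}}, \qquad
F_{i+1/2} \;\approx\; -\,k_{i+1/2}\,\frac{u_{i+1} - u_i}{h},
\end{align}
which keeps the discrete flux continuous across the material interface.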
Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of physical and chemical processes occurring in the course of an accident is an interdisciplinary problem and has origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and of a software tool based on it. The numerical simulations allow one to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or minimizing the consequences of possible accidents. A very large number of scenarios have to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and appropriate algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, where possible, analysed. A fast directional-splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the algorithm. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles on the submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process because, due to its very nature, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance (up to \(10^6\) times) of the flow over the diffusion, and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating the distribution functions on a compact interval, which are of the main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
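For orientation, the multilevel Monte Carlo idea rests on the telescoping decomposition over discretization levels \(\ell = 0,\dots,L\),
\begin{align}
\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\big[P_\ell - P_{\ell-1}\big],
\end{align}
where each expectation on the right is estimated by an independent Monte Carlo sample; since the corrections \(P_\ell - P_{\ell-1}\) have small variance, most samples can be placed on the cheap coarse levels and the overall cost drops. The decomposition is written here for a scalar quantity of interest; the thesis develops the analogous estimator for distribution functions on a compact interval together with cost-based error estimates.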
We optimize the flow control at the focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
The goal of this work is to develop a simulation-based algorithm allowing the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding properties of the fibers. This method can later be used for optimization of the microstructure in order to obtain a better stiffness or strength of the corresponding fiber material.

An additional aspect of the thesis is that we want to take into account the microcontacts between the fibers of the textile. A further aspect is the accounting for the thickness of the thin fibers in the textile: introducing an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to one-dimensional problems, which reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts between fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact linear elasticity problem is considered, and a two-scale approach is used for obtaining the effective mechanical properties.

An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotics with respect to the in-plane period and the characteristic diameter of the fibers. This allows us to pass to the equivalent homogenized problem and to reduce the dimension of the auxiliary problems. Numerical simulations of the cell problems then give the effective material properties of the textile.
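For comparison, in the scalar periodic model problem the two-scale procedure sketched above produces effective coefficients from cell problems on the periodicity cell \(Y\): with correctors \(\chi^j\) solving \(\operatorname{div}_y\big(A(y)(e_j + \nabla_y \chi^j)\big) = 0\) in \(Y\) with periodic boundary conditions, the homogenized coefficient is
\begin{align}
A^{\mathrm{hom}} e_j \;=\; \frac{1}{|Y|} \int_Y A(y)\,\big(e_j + \nabla_y \chi^j(y)\big)\,\mathrm{d}y .
\end{align}
The auxiliary contact elasticity problems used in the thesis play the role of these cell problems.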
The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has also been studied. Introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced; it is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.

FiberFEM, a C++ finite element code for solving contact elasticity problems, is developed. The code is based on the implementation of the algorithm for the contact between fibers proposed in the thesis.

Numerical examples of the homogenization of geotextiles and wovens are obtained in this work by applying the developed algorithm. The effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained with FiberFEM.
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.

In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for the multidimensional setting.
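As a baseline for the lattice methods discussed above, the following is a minimal Cox-Ross-Rubinstein tree for a European call in the 1D Black-Scholes model. It is an illustrative sketch only; the advanced trees constructed in the thesis modify the node placement to improve the convergence behaviour.

import numpy as np

def crr_call(S0, K, r, sigma, T, n):
    """European call price in an n-step Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal stock prices (index i = number of down moves) and payoffs
    ST = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(ST - K, 0.0)
    # backward induction through the tree
    for _ in range(n):
        V = disc * (q * V[:-1] + (1.0 - q) * V[1:])
    return V[0]

# the convergence is typically slow and oscillatory, which motivates the expansions
for n in (50, 100, 200, 400):
    print(n, crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=n))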
The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and supply at the source node, can be constant or time-dependent. We review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm of the static network flow problem to take the time dependency of the network attributes into account. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two - in general conflicting - objective functions and introduce a time dependency of the cost which is caused by a travel time on each arc. These problems have several interesting practical applications, but have not attracted much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms. We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds on the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role for the cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting among others for integrin-mediated movement of cancer cells in the brain tissue, and the integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this end, we present programs for the data processing which take the raw medical data and convert it into the form needed for the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
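Schematically, the macroscopic limit obtained by parabolic scaling is a reaction-convection-diffusion equation for the tumour cell density \(c\) of the generic form
\begin{align}
\partial_t c \;-\; \nabla\cdot\big(\mathbb{D}_T(x)\,\nabla c\big) \;+\; \nabla\cdot\big(c\,\mathbf{a}(x)\big) \;=\; f(c,x),
\end{align}
where the tumour diffusion tensor \(\mathbb{D}_T\) and the drift \(\mathbf{a}\) are built from the DTI data and the underlying mesoscopic model. The generic form is shown here only for orientation; the precise coefficients and the therapy terms are derived in the thesis.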
In this thesis we integrate discrete dividends into the stock model, estimate future outstanding dividend payments and solve different portfolio optimization problems. To this end, we discuss three well-known stock models including discrete dividend payments and develop a model which also takes the early announcement of dividends into account.
In order to estimate the future outstanding dividend payments, we develop a general estimation framework. First, we investigate a model-free, no-arbitrage methodology based on the put-call parity for European options. Our approach integrates all available option market data and simultaneously calculates the market-implied discount curve. We illustrate our method using stocks of European blue-chip companies and show within a statistical assessment that the estimate performs well in practice.
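The no-arbitrage backbone of this estimate is the European put-call parity with discrete dividends: for a call \(C_t\) and put \(P_t\) with the same strike \(K\) and maturity \(T\),
\begin{align}
C_t - P_t \;=\; S_t \;-\; \mathrm{PV}_t(\mathcal{D}) \;-\; K\,B(t,T)
\quad\Longrightarrow\quad
\mathrm{PV}_t(\mathcal{D}) \;=\; S_t \;-\; K\,B(t,T) \;-\; \big(C_t - P_t\big),
\end{align}
where \(\mathrm{PV}_t(\mathcal{D})\) is the present value of the dividends paid in \((t,T]\) and \(B(t,T)\) the discount factor. Combining such relations across strikes and maturities yields the implied dividends together with the implied discount curve.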
As American options are more common, we additionally develop a methodology based on market prices of American at-the-money options. This method relies on a linear combination of no-arbitrage bounds on the dividends, where the corresponding optimal weight is determined via a historical least squares estimation using realized dividends. We demonstrate our method using all Dow Jones Industrial Average constituents and provide a robustness check with respect to the discount factor used. Furthermore, we backtest our results against the method using European options and against a so-called simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization problem for a dividend-paying stock. In the case of the logarithmic utility function, we show that the optimal strategy is no longer constant but connected to the Merton strategy. Additionally, we solve a special optimal consumption problem in which the investor is only allowed to consume the dividends. We show that this problem can be reduced to the previously solved terminal wealth problem.
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second order analysis, which we call the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data coming from glaciology. Note that the directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and consequently to give a correct interpretation of the climate information that can be found by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
This thesis is concerned with the characters of the normalizer and the centralizer of a Sylow torus. Here every group G of Lie type is regarded as the fixed point group of a simply connected simple algebraic group under a Frobenius map. For every Sylow torus S of the algebraic group it is shown that the irreducible characters of the centralizer of S in G extend to their inertia group in the normalizer of S. This question arises from the study of height zero characters of finite reductive groups of Lie type in connection with the McKay conjecture. Recent results of Isaacs, Malle and Navarro reduce this conjecture to a property of simple groups, which they then call good for a prime. For groups of Lie type, the above result, together with a recent paper of Malle, establishes several of the important and necessary properties. Using the Steinberg presentation, more precise statements about the structure of the centralizer and the normalizer of a Sylow torus are proved, especially for the classical groups. An important tool is the extended Weyl group introduced by Tits, which has strong connections to braid groups. The result is proved in numerous case-by-case considerations, which use inheritance rules for extendability properties established in this thesis.
This work was motivated by the Feynman-Kac formulas presented in A.N. Borodin (2000) [Version of the Feynman-Kac Formula. Journal of Mathematical Sciences, 99(2):1044-1052, 2000] and in B. Simon (2000) [A Feynman-Kac Formula for Unbounded Semigroups. Canadian Math. Soc. Conf. Proc., 28:317-321, 2000]. It deals with the problem of extending the range of validity of the Feynman-Kac formula with respect to the conditions on the potentials and on the initial condition of the associated partial differential equation. It is known that the Feynman-Kac formula holds for bounded potentials. It also holds for initial conditions lying in the space \(C_{0}(\mathbb{R}^{n})\) or in the space \(C_{c}^{2}(\mathbb{R}^{n})\). The representation of the Feynman-Kac formula for an initial condition in \(C_{c}^{2}(\mathbb{R}^{n})\) yields the solution of the partial differential equation. We can also regard it as a strongly continuous semigroup on the space \(C_{0}(\mathbb{R}^{n})\). These two representations are equivalent. In this work we first show that the Feynman-Kac formula also holds for unbounded potentials \(V\) with \(|V(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(\varepsilon > 0\), \(C_{\varepsilon} > 0\) and \(x \in \mathbb{R}^{n}\). Moreover, we show that it holds for all initial conditions \(f\) with \(x \mapsto e^{-\varepsilon |x|^{2}} f(x) \in H^{2,2}(\mathbb{R}^{n})\). The proof is probabilistic and does not use spectral theory. The spectral-theoretic approach, which gives a representation of the operator \(e^{-tH}\) with \(H = -\frac{1}{2} \Delta + V\), was also extended by B. Simon (2000) to the above class of potentials. In addition, we allow potentials of the form \(V = V_{1} + V_{2}\), where \(V_{1} \in L^{2}(\mathbb{R}^{3})\) and for every \(\varepsilon > 0\) there is a \(C_{\varepsilon} > 0\) such that \(|V_{2}(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(x \in \mathbb{R}^{3}\). In contrast to the classical situation, \(e^{-tH}\) is now an unbounded operator. Finally, the connection between the Feynman-Kac-Itô formula, the Feynman-Kac formula and the Kolmogorov backward equation is also investigated in this work.
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities, in certain cases, tend to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with increasing the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the usage of the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
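For reference, the pressure Schur complement and the basic Uzawa iteration for the discretized Stokes saddle point system read
\begin{align}
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
S = B A^{-1} B^{T},
\qquad
A\,u^{k+1} = f - B^{T} p^{k}, \quad
p^{k+1} = p^{k} + \omega\,\big(B\,u^{k+1} - g\big),
\end{align}
so the iteration is effectively a Richardson method for the Schur complement system. Its speed degrades when \(S\) is far from a multiple of the identity, which is exactly the situation analysed above for geometries with a large surface-to-volume ratio.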
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, a Navier-Stokes solver for binary images from tight sandstones is discussed.
In the last few years a lot of work has been done in the investigation of Brownian motion with point interaction(s) in one and higher dimensions. Roughly speaking a Brownian motion with point interaction is nothing else than a Brownian motion whose generator is disturbed by a measure supported in just one point.
The purpose of the present work is to introduce curve interactions of the two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We will understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely often continuously differentiable functions with compact support in \(\mathbb{R}^{2}\) which are constantly 0 on the closed curve. We will give a full description of all these self-adjoint extensions.
In the second chapter we will prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
We use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes which behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case, since there one cannot move while the Brownian motion is at the point.
By establishing an approximation of a curve interaction by operators of the form Laplacian \(+V_{n}\) with "nice" potentials \(V_{n}\) we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.
In traditional portfolio optimization under the threat of a crash, the investment horizon or time to maturity is neglected. When developing so-called crash hedging strategies (portfolio strategies which make an investor indifferent to the occurrence of an uncertain (down) jump of the price of the risky asset), the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of non-linear differential equations which themselves are consequences of an equilibrium strategy. Here the situation of changing market coefficients after a possible crash is considered, for the case of logarithmic utility as well as for general utility functions. A benefit-cost analysis of the crash hedging strategy is carried out, as well as a comparison of the crash hedging strategy with the optimal portfolio strategies of traditional crash models. Moreover, it is shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is the modelling of crash hedging strategies in situations where both the number and the height of the crashes are uncertain but bounded. Taking into account the additional information of the probability of a possible crash leads to the development of the q-quantile crash hedging strategy.
The topic of this thesis is the coupling of an atomistic and a coarse scale region in molecular dynamics simulations, with the focus on the reflection of waves at the interface between the two scales and on the velocity of waves in the coarse scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine scale waves at the interface, while the bridging scales method gives an approximated system that is not energy conserving. The velocity of waves in the coarse scale region is not correct in either model. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with a choice of appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations for the fine and coarse scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy-conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian and the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from the coarse to the atomistic scale in such a way that the fine scale degrees of freedom in the coarse scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse scale equations and of the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse scale region. We give an alternative derivation of the new coupling via the Mori-Zwanzig projection operator formalism and explain how the method can be extended to non-zero temperature simulations. For the comparison of the results of the approximated system with the fully atomistic system, we use a local stress tensor and the energy in the atomistic region. Examples for the numerical solution of the approximated system for harmonic potentials are given in one and two dimensions.
This thesis brings together convex analysis and hyperspectral image processing. Convex analysis is the study of convex functions and their properties. Convex functions are important because they admit minimization by efficient algorithms, and the solution of many optimization problems can be formulated as the minimization of a convex objective function, extending far beyond the classical image restoration problems of denoising, deblurring and inpainting.

At the heart of convex analysis is the duality mapping induced within the class of convex functions by the Fenchel transform. In the last decades, efficient optimization algorithms have been developed based on the Fenchel transform and the concept of infimal convolution.
The infimal convolution is of similar importance in convex analysis as the convolution in classical analysis. In particular, the infimal convolution with scaled parabolas gives rise to the one-parameter family of Moreau-Yosida envelopes, which approximate a given function from below while preserving its minimum value and minimizers. The closely related proximal mapping replaces the gradient step in a recently developed class of efficient first-order iterative minimization algorithms for non-differentiable functions. For a finite convex function, the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope. Efficient algorithms are needed in hyperspectral image processing, where several hundred intensity values measured at each spatial point give rise to large data volumes.
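For a proper, convex, lower semicontinuous function \(f\) and a parameter \(\lambda > 0\), the objects referred to above are
\begin{align}
e_{\lambda}f(x) \;=\; \inf_{y}\Big( f(y) + \tfrac{1}{2\lambda}\,\|x - y\|^{2} \Big),
\qquad
\mathrm{prox}_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y}\Big( f(y) + \tfrac{1}{2\lambda}\,\|x - y\|^{2} \Big),
\end{align}
and for finite convex \(f\) the gradient of the envelope is \(\nabla e_{\lambda}f(x) = \tfrac{1}{\lambda}\big(x - \mathrm{prox}_{\lambda f}(x)\big)\), which is the relation extended to Hadamard manifolds in the second part.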
In the \(\textbf{first part}\) of this thesis, we are concerned with models and algorithms for hyperspectral unmixing. As part of this thesis, a hyperspectral imaging system was taken into operation at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data. Motivated by missing-pixel defects common in current hyperspectral imaging systems, we propose a total variation regularized unmixing model for incomplete and noisy data for the case when the pure spectra are given. We minimize the proposed model by a primal-dual algorithm based on the proximal mapping and the Fenchel transform. To solve the unmixing problem when only a library of pure spectra is provided, we study a modification which includes a sparsity regularizer in the model.

We end the first part with the convergence analysis of a multiplicative algorithm derived by optimization transfer. The proposed algorithm extends well-known multiplicative update rules for minimizing the Kullback-Leibler divergence to solve a hyperspectral unmixing model in the case when no prior knowledge of the pure spectra is given.
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds with negative curvature,
and then for functions defined on Hadamard spaces.
\(\hspace{1mm}\)
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
\(\hspace{1mm}\)
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, which have both been used in the literature,
and whose equivalence had not previously been established.
This thesis concerns itself with the long-term behavior of generalized Langevin dynamics with multiplicative noise,
i.e. the solutions to a class of two-component stochastic differential equations in \( \mathbb{R}^{d_1}\times\mathbb{R}^{d_2} \)
subject to outer influence induced by potentials \( \Phi \) and \( \Psi \),
where the stochastic term is only present in the second component, on which it is dependent.
In particular, convergence to an equilibrium defined by an invariant initial distribution \( \mu \) is shown
for weak solutions to the generalized Langevin equation obtained via generalized Dirichlet forms,
and the convergence rate is estimated by applying hypocoercivity methods relying on weak or classical Poincaré inequalities.
As a prerequisite, the space of compactly supported smooth functions is proven to be a domain of essential m-dissipativity
for the associated Kolmogorov backward operator on \(L^2(\mu)\).
In the second part of the thesis, similar Langevin dynamics are considered, however defined on a product of infinite-dimensional separable Hilbert spaces.
The set of finitely based smooth bounded functions is shown to be a domain of essential m-dissipativity for the corresponding Kolmogorov operator \( L \) on \( L^2(\mu) \)
for a Gaussian measure \( \mu \), by applying the previous finite-dimensional result to appropriate restrictions of \( L \).
Under further bounding conditions on the diffusion coefficient relative to the covariance operators of \( \mu \),
hypocoercivity of the generated semigroup is proved, as well as the existence of an associated weakly continuous Markov process
which provides, in the analytically weak sense, a weak solution to the considered Langevin equation.
In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst’s opinion as proposed in [CLMZ06]. The best estimate given these observations is provided by the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
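As a point of reference only (this is the classical unconstrained benchmark, not a result of the thesis), in such a complete-information market the log-utility investor acts myopically and holds the fraction
\[
\pi_t \;=\; \big(\sigma_t \sigma_t^{\top}\big)^{-1}\big(\hat\mu_t - r\,\mathbf{1}\big)
\]
of wealth in the stocks, where \(\hat\mu_t\) denotes the filtered drift and \(r\) the riskless rate; the constrained strategies derived below can be read as modifications of this benchmark through the dual processes.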
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. In particular, we cover no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real world market than the setting we used to derive our theoretical results. However, we still obtain compelling results indicating that, in general, our optimal strategies can outperform any benchmark in a real market.
In this thesis we show that the theory of algebraic correspondences introduced by Deuring in the 1930s can be applied to construct non-trivial homomorphisms between the Jacobi groups of hyperelliptic function fields. Concretely, we deduce algorithms to add and multiply correspondences which perform in a reasonable time if the degrees of the associated divisors of the double field are small. Moreover, we show how to compute the differential matrices associated to prime divisors of the double field for arbitrary genus. These matrices give a representation for the homomorphisms or endomorphisms in the additive group (ring) of matrices which is even faithful if the ground field has characteristic zero. As first examples for non-trivial correspondences we investigate multiplication by m endomorphisms. Afterwards we use factorisations of certain bivariate polynomials to construct prime divisors of the double field that are not equivalent to 0 in a coarser sense. Applying the theory of Deuring, these divisors yield homomorphisms between the Jacobi groups of special classes of hyperelliptic function fields. Finally, we generalise the Richelot isogeny to higher genus and by this way derive a class of hyperelliptic function fields given in terms of their defining polynomials which admit non-trivial homomorphisms. These include homomorphisms between the Jacobi groups of hyperelliptic curves of different as well as of equal genus. In addition we provide an explicit method to construct genus 2 function fields the endomorphism ring of which contains a sqrt(2) multiplication with the help of the Cholesky decomposition of a certain matrix.
Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called generalized Appell system, is applicable to the Mittag-Leffler measures, instead of using Wick ordered polynomials. With the help of an Appell system, we introduce a test function and a distribution space. Furthermore we give characterizations of the distribution space and we characterize the weak integrable functions and the convergent sequences within the distribution space. We construct Donsker's delta in a non-Gaussian setting as an application.
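For orientation, the Mittag-Leffler function underlying this analysis is \(E_\beta(z) = \sum_{k=0}^{\infty} z^k/\Gamma(\beta k + 1)\) for \(0 < \beta \le 1\); the Mittag-Leffler measures referred to above are typically defined as those measures whose characteristic functional is \(\xi \mapsto E_\beta\big(-\tfrac12\langle\xi,\xi\rangle\big)\), which reduces to the Gaussian case for \(\beta = 1\). (This is the standard convention; the exact normalization used in the thesis may differ.)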
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.
Competing Neural Networks as Models for Non Stationary Financial Time Series -Changepoint Analysis-
(2005)
The problem of structural changes (variations) plays a central role in many scientific fields. One of the most current debates is about climatic changes. Further, politicians, environmentalists, scientists, etc. are involved in this debate, and almost everyone is concerned with the consequences of climatic changes. However, in this thesis we will not move in that direction, i.e. the study of climatic changes. Instead, we consider models for analyzing changes in the dynamics of observed time series, assuming these changes are driven by a non-observable stochastic process. To this end, we consider a first order stationary Markov chain as hidden process and define the Generalized Mixture of AR-ARCH model (GMAR-ARCH), an extension of the classical ARCH model suited to modelling dynamical changes. For this model we provide sufficient conditions that ensure its geometric ergodicity. Further, we define a conditional likelihood given the hidden process and, in turn, a pseudo conditional likelihood. For the pseudo conditional likelihood we assume that at each time instant the autoregressive and volatility functions can be suitably approximated by given feedforward networks. Under this setting the consistency of the parameter estimates is derived, and versions of the well-known Expectation Maximization algorithm and the Viterbi algorithm are designed to solve the problem numerically. Moreover, considering the volatility functions to be constant, we establish the consistency of the estimates of the autoregressive functions for some parametric classes of functions in general and for some classes of single layer feedforward networks in particular. Besides this hidden Markov driven model, we define as an alternative a weighted least squares approach for estimating the time of change and the autoregressive functions. For the latter formulation, we consider a mixture of independent nonlinear autoregressive processes and assume once more that the autoregressive functions can be approximated by given single layer feedforward networks. We derive the consistency and asymptotic normality of the parameter estimates. Further, we prove the convergence of backpropagation for this setting under some regularity assumptions. Last but not least, we consider a mixture of nonlinear autoregressive processes with only one abrupt unknown changepoint and design a statistical test that can validate such changes.
Aerodynamic design optimization, considered in this thesis, is a large and complex area spanning different disciplines from mathematics to engineering. To perform optimizations on industrially relevant test cases, various algorithms and techniques have been proposed throughout the literature, including the Sobolev smoothing of gradients. This thesis combines the Sobolev methodology for PDE constrained flow problems with the parameterization of the computational grid and interprets the resulting matrix as an approximation of the reduced shape Hessian.
Traditionally, Sobolev gradient methods help prevent a loss of regularity and reduce high-frequency noise in the derivative calculation. Such a reinterpretation of the gradient in a different Hilbert space can be seen as a shape Hessian approximation. In the past, such approaches have been formulated in a non-parametric setting, while industrially relevant applications usually have a parameterized setting. In this thesis, the presence of a design parameterization for the shape description is explicitly considered. This research aims to demonstrate how a combination of Sobolev methods and parameterization can be done successfully, using a novel mathematical result based on the generalized Faà di Bruno formula. Such a formulation can yield benefits even if a smooth parameterization is already used.
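In its simplest non-parametric form (recalled here for orientation; the thesis works with a parameterized analogue), Sobolev smoothing replaces the \(L^2\) shape gradient \(g\) by the solution \(\tilde g\) of an elliptic problem such as
\[
(I - \epsilon\,\Delta)\,\tilde g = g,
\]
i.e. \(\tilde g\) is the gradient representative with respect to an \(H^1\)-type inner product; the operator \(I - \epsilon\Delta\) then plays the role of an approximate reduced shape Hessian.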
The results obtained allow for the formulation of an efficient and flexible optimization strategy, which can incorporate the Sobolev smoothing procedure for test cases where a parameterization describes the shape, e.g., a CAD model, and where additional constraints on the geometry and the flow are to be considered. Furthermore, the algorithm is also extended to One Shot optimization methods. One Shot algorithms are a tool for simultaneous analysis and design when dealing with inexact flow and adjoint solutions in a PDE constrained optimization. The proposed parameterized Sobolev smoothing approach is especially beneficial in such a setting to ensure a fast and robust convergence towards an optimal design.
Key features of the implementation of the algorithms developed herein are pointed out, including the construction of the Laplace-Beltrami operator via finite elements and an efficient evaluation of the parameterization Jacobian using algorithmic differentiation. The newly derived algorithms are applied to relevant test cases featuring drag minimization problems, particularly for three-dimensional flows with turbulent RANS equations. These problems include additional constraints on the flow, e.g., constant lift, and the geometry, e.g., minimal thickness. The Sobolev smoothing combined with the parameterization is applied in classical and One Shot optimization settings and is compared to other traditional optimization algorithms. The numerical results show a performance improvement in runtime for the new combined algorithm over a classical Quasi-Newton scheme.
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows us to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as its semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
In this thesis, we combine Groebner basis techniques with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses.
Combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for a SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact inputs for the Groebner basis computations, we can significantly
reduce the number of unproductive Groebner basis computations and learn many more binary clauses. In addition,
the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing a Boolean Groebner basis of the given ideal directly is inefficient in case we want to eliminate most of the variables from a big system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal (see the sketch after this paragraph). Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to compute the reduced Groebner basis associated with the projection.
We also optimize the Buchberger-Moeller algorithm for the lexicographical ordering and compare it with Brickenstein's interpolation algorithm.
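The following minimal Python sketch illustrates only the projection step, i.e. enumerating the projection of all solutions of a CNF formula onto a chosen subset of variables by means of blocking clauses. It assumes the third-party pysat package; the names and the toy formula are illustrative, and it is not the implementation used in the thesis.

from pysat.solvers import Glucose3

def project_all_solutions(clauses, keep_vars):
    # Return all assignments to keep_vars that extend to a satisfying
    # assignment of the CNF given as a list of integer clauses.
    projections = set()
    with Glucose3(bootstrap_with=clauses) as solver:
        while solver.solve():
            model = solver.get_model()                    # one satisfying assignment
            proj = tuple(l for l in model if abs(l) in keep_vars)
            projections.add(proj)
            # Block this projection (not just the full model), so the next call
            # must produce an assignment differing on the kept variables.
            solver.add_clause([-l for l in proj])
    return projections

# Toy example: (x1 or x2) and (not x1 or x3), projected onto {x1, x2}.
print(project_all_solutions([[1, 2], [-1, 3]], keep_vars={1, 2}))

The Buchberger-Moeller step would then compute the reduced Groebner basis of the vanishing ideal of the projected point set.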
Finally, we apply a combination of Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify the global property of a multiplier and a FIR filter using the computer algebra system Singular. The result shows that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.
Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*}
&{}(P_{1,\tau}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\} \\ &{}(P_{2,\lambda}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \{ \Phi(x) + \lambda \Psi(x) \}, \; \lambda > 0 \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: \begin{align*} {}& \text{1. Study the solver sets \({\rm SOL}(P_{1,\tau})\) and
\({\rm SOL}(P_{2,\lambda})\) of the minimization problems.} \\ {}& \text{2. Ensure that the minimization problems have solutions.} \end{align*} This thesis provides contributions to both tasks: Regarding the first task for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the setvalued curves \begin{align*}
\tau \mapsto {}& {\rm SOL}(P_{1,\tau}), \; \tau \in (0,c) \\ {} \lambda \mapsto {}& {\rm SOL}(P_{2,\lambda}), \; \lambda \in (0,d) \end{align*} are the same, besides an order reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover we show that the solver sets are changing all the time while \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
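A familiar special case, given here only as an illustration and not as a setting treated in the thesis, is \(\Phi(x) = \tfrac12\|Ax-b\|_2^2\) and \(\Psi(x) = \|x\|_1\) for a matrix \(A\) and data \(b\):
\[
(P_{1,\tau})\colon\ \mathop{\rm argmin}_{x} \Big\{ \tfrac12\|Ax-b\|_2^2 \ {\rm s.t.}\ \|x\|_1 \le \tau \Big\}, \qquad
(P_{2,\lambda})\colon\ \mathop{\rm argmin}_{x} \Big\{ \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1 \Big\},
\]
i.e. the constrained and the penalized form of the LASSO; the statement above then describes how the two regularization paths correspond under the parameter change \(g\).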
In the presence of lower semicontinuity the second task is done if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity.
Dropping any lower semicontinuity assumption we also prove a theorem on the coercivity of a sum of functions.
The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like sphere, ellipsoid or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt to "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated to Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We come to scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated to spherical boundaries. Again a tree algorithm which uses a numerical integration rule on bandlimited functions is established to reduce the computational effort. For numerical realization for both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents the first step towards practical applications.
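For orientation, the Cauchy-Navier equation referred to above is the equation of homogeneous isotropic elastostatics for the displacement field \(u\) (stated here in the body-force-free form; conventions may differ slightly from the thesis):
\[
\mu\,\Delta u + (\lambda + \mu)\,\nabla(\nabla\cdot u) = 0,
\]
with the Lamé constants \(\lambda\) and \(\mu\).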
In this thesis we propose an efficient method to compute the automorphism group of an arbitrary hyperelliptic function field over a given constant field of odd characteristic as well as over its algebraic extensions. Beside theoretical applications, knowing the automorphism group also is useful in cryptography: The Jacobians of hyperelliptic curves have been suggested by Koblitz as groups for cryptographic purposes, because the discrete logarithm is believed to be hard in this kind of groups. In order to obtain "secure" Jacobians, it is necessary to prevent attacks like Pohlig/Hellman's and Duursma/Gaudry/Morain's. The latter is only feasible, if the corresponding function field has an automorphism of large order. According to a theorem by Madan, automorphisms seem to allow the Pohlig/Hellman attack, too. Hence, the function field of a secure Jacobian will most likely have trivial automorphism group. In other words: Computing the automorphism group of a hyperelliptic function field promises to be a quick test for insecure Jacobians. Let us outline our algorithm for computing the automorphism group Aut(F/k) of a hyperelliptic function field F/k. It is well known that Aut(F/k) is finite. For each possible subgroup U of Aut(F/k), Rolf Brandt has given a normal form for F if k is algebraically closed. Hence our problem reduces to deciding, whether a given hyperelliptic function field F=k(x,y), y^2=D_x has a defining equation of the form given by Brandt. This question can be answered using theorem III.18: We have F=k(t,u), u^2=D_t iff x is a fraction of linear polynomials in t and y=pu, where the factor p is a rational function w.r.t. t which can be determined explicitly from the coefficients of x. This condition can be checked efficiently using Gröbner basis techniques. With additional effort, it is also possible to compute Aut(F/k) if k is not algebraically closed. Investigating a huge number of examples one gets the impression that the above motivation of getting a quick test for insecure Jacobians is partially fulfilled: The computation of automorphism groups is quite fast using the suggested algorithm. Furthermore, fields with nontrivial automorphism groups seem to have insecure Jacobians. Only fields of small characteristic seem to have a reasonable chance of having nontrivial automorphisms. Hence, from a cryptographic point of view, computing Aut(F/k) seems to make sense whenever k has small characteristic.
Many loads acting on a vehicle depend on the condition and quality of roads
traveled as well as on the driving style of the motorist. Thus, during vehicle development,
good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics resulting from travel surveys are integrated to guarantee reasonable
trip lengths. Population figures are used to estimate the number of vehicles in
contained administrative units. The creation of thousands of those geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters
are modeled as the main user group, conducting daily drives to work as well as
additional leisure and shopping trips during the workweek. For the latter, taxis are
considered as an example of commercially used passenger cars. The model of light-duty commercial
vehicles is split into two types of driving patterns, stars and tours, and into
the common traffic classes of long-distance, local and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical
data are presented in detail. Examples for the evaluation of routes based
on topographical factors and speed profiles comparing the influence of the driving
style are included.
The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied on all spectral areas, this is impossible. Depending on the spectroscopic technique, the way the data is acquired differs as well as the characteristics that are to be detected. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications. Among those are particularly the security sector and the pharmaceutical industry.
Due to the fact that in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the amount of necessary influence of the user should be minimized on the one hand and on the other hand the adaption to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State of the art methods will be analyzed and supplemented where necessary, improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, feature reduction that represents chemical content in about 20 channels instead of the initial hundreds will be presented. Finally the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
In change-point analysis the point of interest is to decide if the observations follow one model
or if there is at least one time-point, where the model has changed. This results in two sub-
fields, the testing of a change and the estimation of the time of change. This thesis considers
both parts but with the restriction of testing and estimating for at most one change-point.
A well known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test a test statistic with an asymptotic Gumbel distribution was
derived for this model. As it is a well-known fact that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. Those tests have
a better performance. We focus on this class of test statistics.
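A prototypical member of this class, stated here for orientation in the univariate mean-change setting and not necessarily in the exact form used later, is the weighted CUSUM statistic
\[
T_n \;=\; \max_{1 \le k < n}\; \frac{1}{\hat\sigma\, w(k/n)\,\sqrt{n}}\;\Big|\sum_{i=1}^{k}\big(X_i - \bar X_n\big)\Big|, \qquad w(t) = \sqrt{t\,(1-t)},
\]
which, after a Darling-Erdős type normalization, has a Gumbel limit under the hypothesis of no change.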
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. Therefore we consider the multivariate mean change model and focus on the effects
of the weight function. In the case of change-point estimators we can distinguish between
the assumption of a fixed size of change (fixed alternative) and the assumption that the size
of the change is converging to 0 (local alternative). In particular, the fixed case is rarely analysed
in the literature. We show how to come from the proof for the fixed alternative to the
proof of the local alternative. Finally, we give a simulation study for heavy tailed multivariate
observations.
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a
change in the mean for independent observations but relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the section on tests and the one on the change-point estimator, we end by giving
regularity conditions on the model as well as on the parameter estimator.
Finally, a simulation study for the case of the neural network based test and estimator is
given. We discuss the behaviour under correct specification and under misspecification and apply the neural
network based test and estimator to two data sets.
The lattice Boltzmann method (LBM) is a numerical solver for the Navier-Stokes equations, based on an underlying molecular dynamics model. Recently, it has been extended towards the simulation of complex fluids. We use the asymptotic expansion technique to investigate the standard scheme, the initialization problem and possible developments towards moving boundary and fluid-structure interaction problems. At the same time, it will be shown how the mathematical analysis can be used to understand and improve the algorithm. First of all, we elaborate the tool "asymptotic analysis", proposing a general formulation of the technique and explaining the methods and the strategy we use for the investigation. A first standard application to the LBM is described, which leads to the approximation of the Navier-Stokes solution starting from the lattice Boltzmann equation. Next, we extend the analysis to investigate the origin and dynamics of initial layers. A class of initialization algorithms to generate accurate initial values within the LB framework is described in detail. Starting from existing routines, we are able to improve the schemes in terms of efficiency and accuracy. Then we study the features of a simple moving boundary LBM. In particular, we concentrate on the initialization of new fluid nodes created by the variations of the computational fluid domain. An overview of existing possible choices is presented. Performing a careful analysis of the problem, we propose a modified algorithm which produces satisfactory results. Finally, to set up an LBM for fluid-structure interaction, efficient routines to evaluate forces are required. We describe the Momentum Exchange algorithm (MEA). Precise accuracy estimates are derived, and the analysis leads to the construction of an improved method to evaluate the interface stresses. In conclusion, we test the defined code and validate the results of the analysis on several simple benchmarks. From the theoretical point of view, in the thesis we have developed a general formulation of the asymptotic expansion, which is expected to offer a more flexible tool in the investigation of numerical methods. The main practical contribution offered by this work is the detailed analysis of the numerical method. It allows us to understand and improve the algorithms and to construct new routines, which can be considered as starting points for future research.
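For orientation (one common form of the scheme analysed; notation may differ from the thesis), the standard lattice BGK update for the particle populations \(f_i\) with discrete velocities \(c_i\) reads
\[
f_i(x + c_i\,\Delta t,\; t + \Delta t) \;=\; f_i(x,t) \;-\; \frac{1}{\tau}\,\big(f_i(x,t) - f_i^{\mathrm{eq}}(x,t)\big),
\]
where \(\tau\) is the relaxation time and \(f_i^{\mathrm{eq}}\) the local equilibrium distribution.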
In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm value) approach and the reduced form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process to a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. Therefore, on the one hand it is possible to obtain closed form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed form solutions for the firm's corporate securities such as equity value, debt value and total firm value, see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step to obtain closed form solutions for corporate securities. As a result, we have chosen a jump-diffusion process with double exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced form models, we have assumed that the default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the one in Schönbucher (2005).
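A jump-diffusion of the kind described, with double exponentially distributed jump heights in the spirit of Kou's model, can be written as follows (the parametrization here is for orientation only and may differ from the thesis):
\[
\frac{dV_t}{V_{t-}} \;=\; \mu\,dt + \sigma\,dW_t + d\Big(\sum_{i=1}^{N_t}\,(Y_i - 1)\Big),
\]
where \(W\) is a Brownian motion, \(N\) a Poisson process and the log jump sizes \(\log Y_i\) are i.i.d. double exponentially distributed.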
The Wilkie model is a stochastic asset model, developed by A.D. Wilkie in 1984 with the purpose of exploring the behaviour of investment factors of insurers within the United Kingdom. Even so, there has been no analysis of the Wilkie model in a portfolio optimization framework thus far. Originally, the Wilkie model considers a discrete-time horizon, and we apply the concept of the Wilkie model to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtained the estimated parameters for each sub-model within the Wilkie model that suits the case of Malaysia, which permits us to analyse the results from a statistical and an economic point of view. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. The continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy as well as a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique, consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution of the Hamilton-Jacobi equation and finally the verification theorem. In finding the optimal portfolio, we obtained the specific solution of the Hamilton-Jacobi equation and verified it via the verification theorem. For a simple buy-and-hold strategy, we use the mean-variance analysis to solve the portfolio optimization problem.
In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator for pricing path dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique for path dependent options
under the Heston model is the main contribution of this thesis.
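For reference, the Heston model mentioned above describes the asset price \(S\) and its variance \(v\) by (in one common parametrization, which need not match the notation of the thesis)
\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{(2)}, \qquad
d\langle W^{(1)},W^{(2)}\rangle_t = \rho\,dt,
\]
and the HP estimator exploits an approximating model with deterministic volatility in this setting.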
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty for the implementation of the HP
estimator lies in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among the crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present the numerical results for at the money, in the money and out of the money barrier
options. As the numerical results imply, the HP estimator performs best among these methods
for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator for pricing bond options is a promising contribution. As the approximating
Vasicek process does not seem to be as good as the deterministic volatility process in the
Heston setting, the performance of the HP estimator in the CIR model is only relatively
good. However, for the FV model the variance reduction provided by the HP estimator is
again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
The overall goal of the work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized by the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\), continuum flow) or large (\(Kn > 1\), molecular flow). The transition region (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
Continuum flows are mainly simulated by commercial CFD methods, which solve the Euler equations. In the case of molecular flows one uses statistical methods, such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region the Euler equations are not adequate to model gas flows. Because of the rapid increase of particle collisions, the DSMC method tends to fail as well.
Therefore, we develop a deterministic method which is suitable for simulating rarefied gas problems for any Knudsen number and is appropriate for simulating flows inside geometries with moving boundaries. To this end, the method we build on is the Finite Pointset Method (FPM), a mesh-free numerical method developed at the ITWM Kaiserslautern and mainly used to solve fluid dynamical problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation. This equation is mainly used to describe rarefied flows.
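In one standard form (stated here only for orientation), the BGK equation replaces the Boltzmann collision operator by a relaxation towards a local Maxwellian \(f_M\):
\[
\partial_t f + v \cdot \nabla_x f \;=\; \frac{1}{\tau}\,\big(f_M[f] - f\big),
\]
where \(f(x,v,t)\) is the particle distribution function and the relaxation time \(\tau\) is linked to the Knudsen number.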
The FPM based method is implemented for one- and two-dimensional physical and velocity spaces and for different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions gained through CFD and DSMC methods, respectively.
In this work we study the minimum width annulus problem (MWAP), the circle center location or circle location problem (CLP) and the point center location or point location problem (PLP) on the Rectilinear and Chebyshev planes as well as in networks. The relations between the problems serve as a basis for finding elegant solution algorithms for both new and well-known problems. First, the MWAP was formulated and investigated in Rectilinear space. In contrast to the Euclidean metric, the MWAP and the PLP have at least one common optimal point. Therefore, the MWAP on the Rectilinear plane was solved in linear time with the help of the PLP. Hence, the solution sequence was PLP-->MWAP. It was shown that the MWAP and the CLP are equivalent. Thus, the CLP can also be solved in linear time. The obtained results were analysed and transferred to the Chebyshev metric. After that, the notions of circle, sphere and annulus in networks were introduced. It should be noted that the notion of a circle in a network is different from the notion of a cycle. An O(mn) time algorithm for the solution of the MWAP was constructed and implemented. The algorithm is based on the fact that the middle point of an edge represents an optimal solution of a local minimum width annulus on this edge. The resulting complexity is better than the complexity O(mn+n^2 log n), in the unweighted case, of the fastest known algorithm for minimizing the range function, which is mathematically equivalent to the MWAP. The MWAP in unweighted undirected networks was extended to the MWAP on subsets and to the restricted MWAP. The resulting problems were analysed and solved. Also the p-minimum width annulus problem was formulated and explored. This problem is NP-hard. However, the p-MWAP has been solved in polynomial O(m^2n^3p) time under the natural assumption that each minimum width annulus covers all vertices of a network having distances to the central point of the annulus less than or equal to the radius of its outer circle. In contrast to the planar case, the MWAP in undirected unweighted networks turned out to be the root problem among the considered problems. During the investigation of the properties of circles in networks it was shown that the difference between planar and network circles is significant. This leads to the nonequivalence of the CLP and the MWAP in the general case. However, the MWAP was effectively used in solution procedures for the CLP, giving the sequence MWAP-->CLP. The complexity of the developed and implemented algorithm is of order O(m^2n^2). It is important to mention that the CLP in networks has been formulated for the first time in this work and differs from the well-studied location of cycles in networks. We have constructed an O(mn+n^2 log n) algorithm for the well-known PLP. The complexity of this algorithm is not worse than the complexity of the currently best algorithms. But the concept of the solution procedure is new: we use the MWAP in order to solve the PLP, building the solution sequence MWAP-->PLP, which is opposite to the planar case, and this method has the following advantages: First, the lower bounds LB obtained in the solution procedure are proved to be in any case better than Halpern's strongest lower bound. Second, the developed algorithm is so simple that it can easily be applied to complex networks manually. Third, the empirical complexity of the algorithm is equal to O(mn). The MWAP was also extended to and explored in directed unweighted and weighted networks.
The complexity bound O(n^2) of the developed algorithm for finding the center of a minimum width annulus in the unweighted case does not depend on the number of edges in a network, because the problems can be solved in the order PLP-->MWAP. In the weighted case the computational time is of order O(mn^2).
Image restoration and enhancement methods that respect important features such as edges play a fundamental role in digital image processing. In the last decades a large
variety of methods have been proposed. Nevertheless, the correct restoration and
preservation of, e.g., sharp corners, crossings or texture in images is still a challenge, in particular in the presence of severe distortions. Moreover, in the context of image denoising many methods are designed for the removal of additive Gaussian noise and their adaptation to other types of noise occurring in practice usually requires additional effort.
The aim of this thesis is to contribute to these topics and to develop and analyze new
methods for restoring images corrupted by different types of noise:
First, we present variational models and diffusion methods which are particularly well
suited for the restoration of sharp corners and X junctions in images corrupted by
strong additive Gaussian noise. For their deduction we present and analyze different
tensor based methods for locally estimating orientations in images and show how to
successfully incorporate the obtained information in the denoising process. The advantageous
properties of the obtained methods are shown theoretically as well as by
numerical experiments. Moreover, the potential of the proposed methods is demonstrated
for applications beyond image denoising.
Afterwards, we focus on variational methods for the restoration of images corrupted
by Poisson and multiplicative Gamma noise. Here, different methods from the literature
are compared and the surprising equivalence between a standard model for
the removal of Poisson noise and a recently introduced approach for multiplicative
Gamma noise is proven. Since this Poisson model has not been considered for multiplicative
Gamma noise before, we investigate its properties further for more general
regularizers including also nonlocal ones. Moreover, an efficient algorithm for solving
the involved minimization problems is proposed, which can also handle an additional
linear transformation of the data. The good performance of this algorithm is demonstrated
experimentally and different examples with images corrupted by Poisson and
multiplicative Gamma noise are presented.
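For orientation, the standard Poisson data fidelity referred to in this comparison is, up to additive constants, the negative log-likelihood
\[
\Phi(u) \;=\; \sum_{i}\Big( (Hu)_i - f_i \log (Hu)_i \Big),
\]
where \(f\) denotes the observed data and \(H\) a linear (e.g. blur or identity) operator; the exact notation of the thesis may differ.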
In the final part of this thesis new nonlocal filters for images corrupted by multiplicative
noise are presented. These filters are deduced in a weighted maximum likelihood
estimation framework and for the definition of the involved weights a new similarity measure for the comparison of data corrupted by multiplicative noise is applied. The
advantageous properties of the new measure are demonstrated theoretically and by
numerical examples. Besides, denoising results for images corrupted by multiplicative
Gamma and Rayleigh noise show the very good performance of the new filters.
An increasing number of present-day tasks, such as speech recognition, image generation,
translation, classification or prediction, are performed with the help of machine learning.
Especially artificial neural networks (ANNs) provide convincing results for these tasks.
The reasons for this success story are the drastic increase of available data sources in
our more and more digitalized world as well as the development of remarkable ANN
architectures. This development has led to an increasing number of model parameters
together with more and more complex models. Unfortunately, this yields a loss in the
interpretability of deployed models. However, there exists a natural desire to explain the
deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding
of these models. As the name suggests, VAEs are based on standard autoencoders (AEs)
and therefore used to perform dimensionality reduction of data. This is achieved by a
bottleneck structure within the hidden layers of the ANN. From a data input the encoder,
that is the part up to the bottleneck, produces a low dimensional representation. The
decoder, the part from the bottleneck to the output, uses this representation to reconstruct
the input. The model is learned by minimizing the error from the reconstruction.
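In the standard formulation, recalled here only for orientation and not necessarily in the notation of the thesis, a VAE is trained by maximizing the evidence lower bound rather than the plain reconstruction error,
\[
\mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\big\|\,p(z)\big),
\]
and it is the Kullback-Leibler term that is commonly associated with the auto-pruning effect discussed next.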
In our point of view, the most remarkable property and, hence, also a central topic
in this thesis is the auto-pruning property of VAEs. Simply speaking, auto-pruning
prevents a VAE with thousands of parameters from overfitting. However, such a
desirable property comes with the risk that the model learns nothing at all. In this
thesis, we look at VAEs and the auto-pruning from two different angles and our main
contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so by leveraging the
framework of generalized linear models (GLMs). As a result, we are able to explain
training results of VAEs before conducting the actual training.
(ii) We construct a time dependent VAE and show the effects of the auto-pruning in
this model. As a result, we are able to model financial data sequences and estimate
the value-at-risk (VaR) of associated portfolios. Our results show that we surpass
the standard benchmarks for VaR estimation.
Simplified ODE models describing the blood flow rate are governed by the pressure gradient.
However, assuming that the blood flow in the human body is oriented in the positive
direction, a negative pressure gradient forces the valve to shut, which stops the flow through
the valve; hence, the flow rate is zero, whereas the pressure rate is still described by an ODE.
The presence of ODEs together with algebraic constraints and sudden changes of the system characterization
yields systems of switched differential-algebraic equations (swDAEs). The alternating
dynamics of the heart can be well modelled by means of swDAEs. Moreover, to study pulse
wave propagation in arteries and veins, PDE models have been developed. Connection between
the heart and vessels leads to a coupling of PDEs and swDAEs. This model motivates
the study of PDEs coupled with swDAEs, for which the information exchange happens at the PDE
boundaries, where swDAE provides boundary conditions to the PDE and PDE outputs serve
as inputs to swDAE. Such coupled systems occur, e.g. while modelling power grids using
telegrapher’s equations with switches, water flow networks with valves and district
heating networks with rapid consumption changes. Solutions of swDAEs might
include jumps, Dirac impulses and their derivatives of arbitrarily high orders. As the outputs of
the swDAE serve as boundary conditions of the PDE, a rigorous solution framework for the PDE must
be developed so that jumps, Dirac impulses and their derivatives are allowed at PDE boundaries
and in PDE solutions. This is a wider solution class than solutions of small bounded
variation (BV), used, for instance, in settings where nonlinear hyperbolic PDEs are coupled with
ODEs. Similarly, in related work, the solutions to switched linear PDEs with source terms are
restricted to the class of BV. However, in the presence of Dirac impulses and their derivatives,
BV functions cannot handle the coupled systems including DAEs with index greater than one.
Therefore, hyperbolic PDEs coupled with swDAEs with index one will be studied in the BV
setting, while the coupling with swDAEs whose index is greater than one will be investigated in the distributional
sense. To this end, the 1D space of piecewise-smooth distributions is extended to a 2D
piecewise-smooth distributional solution framework. The 2D space of piecewise-smooth distributions
allows trace evaluations at the boundaries of the PDE. Moreover, a relationship between
solutions to coupled system and switched delay DAEs is established. The coupling structure
in this thesis forms a rather general framework. In fact, any arbitrary network, where PDEs
are represented by edges and (switched) DAEs by nodes, is covered via this structure. Given
a network, by rescaling spatial domains which modifies the coefficient matrices by a constant,
each PDE can be defined on the same interval which leads to a formulation of a single
PDE whose unknown is made up of the unknowns of each PDE that are stacked over each
other with a block diagonal coefficient matrix. Likewise, every swDAE is reformulated such
that the unknowns are collected above each other and coefficient matrices compose a block
diagonal coefficient matrix so that each node in the network is expressed as a single swDAE.
The results are illustrated by numerical simulations of the power grid and simplified circulatory
system examples. Numerical results for the power grid display the evolution of jumps
and Dirac impulses caused by initial and boundary conditions as a result of instant switches.
On the other hand, the analysis and numerical results for the simplified circulatory system do
not entail a Dirac impulse, for otherwise such an entity would destroy the entire system. Yet
jumps in the flow rate in the numerical results can come about due to opening and closure of
valves, which suits clinical and physiological findings. Regarding physiological parameters,
numerical results obtained in this thesis for the simplified circulatory system agree well with
medical data and findings from the literature when compared for validation.
Various physical phenomena with sudden transients that result in structural changes can be modeled via
switched nonlinear differential algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x). \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(x \mapsto g_p(x)\) is a mapping, \(p \in \{1,\cdots,P\}\), \(P \in \mathbb{N}\),
\(f_p: \mathbb{R} \rightarrow \mathbb{R}^n\), \(\sigma: \mathbb{R} \rightarrow \{1,\cdots, P\}\).
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between a solution of the above (DAE) and solutions of related
partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a
distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the
nonlinear case \(g(x) \not\equiv 0\); contributions to Task 2 are also given for
switched nonlinear DAEs arising while modeling sudden transients in water
distribution networks. In addition, this thesis contains the following further
contributions:
The notion of structured switched nonlinear DAEs has been introduced,
allowing also non-regular distributions as solutions. This extends a previous
framework that allowed only piecewise smooth functions as solutions. Furthermore, six mild conditions are given to ensure existence and uniqueness of the solution within the space of piecewise-smooth distributions. The main
condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
Another contribution is the introduction of these switched nonlinear DAEs
as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible regarding the choice of
good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for
kernel density estimates. Cross-validation is a popular one and is frequently proposed
to estimate the optimal bandwidth. However, if the sample size is very
large, it becomes computationally expensive. In materials science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidth for density and intensity estimation with a focus on 2- and 3-dimensional data. It is based on a detailed
asymptotics of the estimators of the intensity function and of its second
derivatives and integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are utilised to determine
the exact number of iteration steps and some tuning parameters. For both the global and the local case, fewer than 10 iterations suffice. Simulation studies show that the intensity estimated with local bandwidths captures the variation of the local intensity better than the estimate with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
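For orientation only, the following sketch shows a plain fixed-bandwidth Gaussian kernel intensity estimate on made-up 2D point data; the thesis's contribution, the iterative plug-in choice of global and local bandwidths, is not reproduced here.

```python
# Minimal sketch of a fixed-bandwidth Gaussian kernel intensity estimate in 2D.
# Generic illustration only; the bandwidth value below is made up.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(500, 2))    # made-up point positions

def kernel_intensity(points, grid_xy, h):
    """Sum of Gaussian kernels (not normalised to 1), so the estimate
    integrates to the number of observed points."""
    d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / h**2).sum(axis=1) / (2 * np.pi * h**2)

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
h = 0.8                                       # made-up global bandwidth
lam = kernel_intensity(points, grid, h).reshape(gx.shape)
print(lam.mean())   # close to n / area = 5, up to boundary effects
```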
The main theme of this thesis is the interplay between algebraic and tropical intersection
theory, especially in the context of enumerative geometry. We begin by exploiting
well-known results about tropicalizations of subvarieties of algebraic tori to give a
simple proof of Nishinou and Siebert’s correspondence theorem for rational curves
through given points in toric varieties. Afterwards, we extend this correspondence
by additionally allowing intersections with psi-classes. We do this by constructing
a tropicalization map for cycle classes on toroidal embeddings. It maps algebraic
cycle classes to elements of the Chow group of the cone complex of the toroidal
embedding, that is to weighted polyhedral complexes, which are balanced with respect
to an appropriate map to a vector space, modulo a naturally defined equivalence relation.
We then show that tropicalization respects basic intersection-theoretic operations like
intersections with boundary divisors and apply this to the appropriate moduli spaces
to obtain our correspondence theorem.
Trying to apply similar methods in higher genera inevitably confronts us with moduli
spaces which are not toroidal. This motivates the last part of this thesis, where we
construct tropicalizations of cycles on fine logarithmic schemes. The logarithmic point of
view also motivates our interpretation of tropical intersection theory as the dualization
of the intersection theory of Kato fans. This duality gives a new perspective on the
tropicalization map; namely, as the dualization of a pull-back via the characteristic
morphism of a logarithmic scheme.
Nowadays one of the major objectives in geosciences is the determination of the gravitational field of our planet, the Earth. A precise knowledge of this quantity is not just interesting on its own but it is indeed a key point for a vast number of applications. The important question is how to obtain a good model for the gravitational field on a global scale. The only applicable solution - both in costs and data coverage - is the usage of satellite data. We concentrate on highly precise measurements which will be obtained by GOCE (Gravity Field and Steady State Ocean Circulation Explorer, launch expected 2006). This satellite has a gradiometer onboard which returns the second derivatives of the gravitational potential. Mathematically, we have to deal with several obstacles. The first one is that the noise in the different components of these second derivatives differs over several orders of magnitude, i.e. a straightforward solution of this outer boundary value problem will not work properly. Furthermore, we are not interested in the data at satellite height but want to know the field at the Earth's surface; thus we need a regularization (downward continuation) of the data. These two problems are tackled in the thesis and are now described briefly.
Split Operators: We have to solve an outer boundary value problem at the height of the satellite track. Classically one can handle first order side conditions which are not tangential to the surface, and second derivatives pointing in the radial direction, employing integral and pseudodifferential equation methods. We present a different approach: we classify all first and purely second order operators under whose application a harmonic function stays harmonic. This task is done by using modern algebraic methods for solving systems of partial differential equations symbolically. Now we can look at the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions. The only additional work which has to be done is an inversion of the differential operator, i.e. integration. In particular, we are capable of dealing with derivatives which are tangential to the boundary.
Auto-Regularization: The second obstacle is finding a proper regularization procedure. This is complicated by the fact that we are facing stochastic rather than deterministic noise. The main question is how to find an optimal regularization parameter, which is impossible without any additional knowledge. However, we could show that with a very limited amount of additional information, which is also obtainable in practice, we can regularize in an asymptotically optimal way. In particular, we showed that the knowledge of two input data sets allows an order optimal regularization procedure even under the hard conditions of Gaussian white noise and an exponentially ill-posed problem.
A last but rather simple task is combining data from different derivatives, which can be done by a weighted least squares approach using the information obtained from the regularization procedure. A practical application to the downward-continuation problem for simulated gravitational data is shown.
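As a generic illustration of the regularization issue (not the order-optimal procedure developed in the thesis), the following sketch inverts an artificial smoothing operator with Tikhonov regularization for several regularization parameters; operator, data and noise level are made up.

```python
# Toy illustration of regularized inversion of a smoothing (ill-posed) operator.
# The thesis derives an (order-)optimal parameter choice for stochastic noise;
# here we simply compare Tikhonov solutions for several parameters.
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 1, n)

# artificial, severely smoothing forward operator (Gaussian blur matrix)
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
y = K @ x_true + 0.01 * rng.standard_normal(n)   # noisy data "at satellite height"

for alpha in [1e-6, 1e-4, 1e-2, 1.0]:
    # Tikhonov solution: minimize ||Kx - y||^2 + alpha * ||x||^2
    x_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
    err = np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true)
    print(f"alpha = {alpha:8.1e}   relative error = {err:.3f}")
```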
In this dissertation, we discuss how to price American-style options. Our aim is to study and improve the regression-based Monte Carlo methods. In order to have good benchmarks to compare with them, we also study the tree methods.
In the second chapter, we investigate the tree methods specifically. We first work within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options and collect them all in the appendix.
In the third chapter, we focus on the regression-based Monte Carlo methods theoretically and numerically. Firstly, we introduce two variations, the so-called "Tsitsiklis-Roy method" and the "Longstaff-Schwartz method". Secondly, we illustrate the approximation of an American option by its Bermudan counterpart. Thirdly, we explain the sources of low bias and high bias. Fourthly, we compare these two methods using in-the-money paths and all paths. Fifthly, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the lower and upper bounds.
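To make the regression step concrete, here is a minimal Longstaff-Schwartz sketch for an American put in the Black-Scholes model with a quadratic polynomial basis and regression on in-the-money paths; the parameter values are made up, and the low/high bias issues discussed above are ignored.

```python
# Minimal Longstaff-Schwartz sketch: American put under Black-Scholes,
# Bermudan approximation with regression on in-the-money paths.
import numpy as np

rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0    # made-up parameters
n_steps, n_paths = 50, 50_000
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

cashflow = np.maximum(K - S[:, -1], 0.0)              # payoff at maturity
for t in range(n_steps - 2, -1, -1):
    cashflow *= disc                                  # discount one step back
    itm = K - S[:, t] > 0.0                           # regress only on in-the-money paths
    if itm.sum() > 0:
        x = S[itm, t]
        beta = np.polyfit(x, cashflow[itm], deg=2)    # quadratic basis 1, S, S^2
        continuation = np.polyval(beta, x)
        exercise = K - x
        do_exercise = exercise > continuation
        idx = np.where(itm)[0][do_exercise]
        cashflow[idx] = exercise[do_exercise]

price = disc * cashflow.mean()                        # discount the first exercise date to 0
print(f"American put (Longstaff-Schwartz estimate): {price:.3f}")
```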
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy method and the Longstaff-Schwartz method. The support vector machine further improves on the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. Tests of these formulas within the Black-Scholes model show that the data storage and the corresponding energy consumption are reduced.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Given that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
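For comparison only: over a field and for ordinary monomial orderings, a Gröbner basis can be computed with SymPy as below; the standard basis algorithms of the thesis for localized rings and rings with zero divisors are not available there.

```python
# Generic reference point (not the thesis's algorithms): a Groebner basis over
# the rationals with respect to two different monomial orderings, using SymPy.
import sympy as sp

x, y, z = sp.symbols("x y z")
I = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

print(sp.groebner(I, x, y, z, order="lex"))
print(sp.groebner(I, x, y, z, order="grevlex"))
```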
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions to given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
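The rank tests rest on counting linear extensions of partial orders; as a toy illustration only (brute force over all permutations of a made-up partial order, not the algorithms developed in the thesis):

```python
# Brute-force count of the linear extensions of a small partial order.
from itertools import permutations

elements = [1, 2, 3, 4]
relations = {(1, 3), (2, 3), (2, 4)}   # made-up cover relations a < b

def is_linear_extension(order, relations):
    pos = {e: i for i, e in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in relations)

count = sum(is_linear_extension(p, relations) for p in permutations(elements))
print(count)   # number of total orders compatible with the partial order
```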
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. It leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization representing the involved filtrations by weight vectors.
This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
This thesis contains the mathematical treatment of a special class of analog microelectronic circuits called translinear circuits. The goal is to provide foundations of a new coherent synthesis approach for this class of circuits. The mathematical methods of the suggested synthesis approach come from graph theory, combinatorics, and from algebraic geometry, in particular symbolic methods from computer algebra. Translinear circuits form a very special class of analog circuits, because they rely on nonlinear device models, but still allow a very structured approach to network analysis and synthesis. Thus, translinear circuits play the role of a bridge between the "unknown space" of nonlinear circuit theory and the very well exploited domain of linear circuit theory. The nonlinear equations describing the behavior of translinear circuits possess a strong algebraic structure that is nonetheless flexible enough for a wide range of nonlinear functionality. Furthermore, translinear circuits offer several technical advantages like high functional density, low supply voltage and insensitivity to temperature. This unique profile is the reason that several authors consider translinear networks as the key to systematic synthesis methods for nonlinear circuits. The thesis proposes the usage of a computer-generated catalog of translinear network topologies as a synthesis tool. The idea to compile such a catalog has grown from the observation that on the one hand, the topology of a translinear network must satisfy strong constraints which severely limit the number of "admissible" topologies, in particular for networks with few transistors, and on the other hand, the topology of a translinear network already fixes its essential behavior, at least for static networks, because the so-called translinear principle requires the continuous parameters of all transistors to be the same. Even though the admissible topologies are heavily restricted, it is a highly nontrivial task to compile such a catalog. Combinatorial techniques have been adapted to undertake this task. In a catalog of translinear network topologies, prototype network equations can be stored along with each topology. When a circuit with a specified behavior is to be designed, one can search the catalog for a network whose equations can be matched with the desired behavior. In this context, two algebraic problems arise: To set up a meaningful equation for a network in the catalog, an elimination of variables must be performed, and to test whether a prototype equation from the catalog and a specified equation of desired behavior can be "matched", a complex system of polynomial equations must be solved, where the solutions are restricted to a finite set of integers. Sophisticated algorithms from computer algebra are applied in both cases to perform the symbolic computations. All mentioned algorithms have been implemented using C++, Singular, and Mathematica, and are successfully applied to actual design problems of humidity sensor circuitry at Analog Microelectronics GmbH, Mainz. As result of the research conducted, an exhaustive catalog of all static formal translinear networks with at most eight transistors is available. The application for the humidity sensor system proves the applicability of the developed synthesis approach. The details and implementations of the algorithms are worked out only for static networks, but can easily be adopted for dynamic networks as well. 
While the implementation of the combinatorial algorithms is stand-alone software written "from scratch" in C++, the implementation of the algebraic algorithms, namely the symbolic treatment of the network equations and the match finding, relies heavily on the sophisticated Gröbner basis engine of Singular and thus on more than a decade of experience contained in a special-purpose computer algebra system. It should be pointed out that the thesis contains the new observation that the translinear loop equations of a translinear network are precisely represented by the toric ideal of the network's translinear digraph. Altogether, this thesis confirms and strengthens the key role of translinear circuits as systematically designable nonlinear circuits.
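As a loose illustration of that observation (with a made-up digraph, not a circuit from the thesis): the exponent vectors of the binomials generating the toric ideal of a digraph's node-edge incidence matrix lie in its integer kernel, i.e. in the cycle space, which is where the loop equations come from.

```python
import sympy as sp

# made-up digraph on nodes 1..4
edges = [(1, 2), (2, 3), (3, 1), (2, 4), (4, 3)]
nodes = sorted({v for e in edges for v in e})

# node-edge incidence matrix: -1 at the tail, +1 at the head of each edge
M = sp.zeros(len(nodes), len(edges))
for j, (a, b) in enumerate(edges):
    M[nodes.index(a), j] = -1
    M[nodes.index(b), j] = 1

# integer kernel = cycle space; each vector encodes one loop and corresponds
# to the exponent vector of a binomial in the associated toric ideal
for v in M.nullspace():
    den = sp.ilcm(*[sp.fraction(entry)[1] for entry in v])
    print((den * v).T)
```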
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite
time horizon with access to a bank account as well as multiple stocks: the
fully informed investor, the partially informed investor whose only source of
information is the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics
or as being a continuous time Markov chain with finite state space. The
optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques.
Conditions to ensure boundedness of the expected value of the filters are
developed, in the Markov case also for positivity. For the Markov modulated
drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: effects are studied and quantified. The derivation of an
equivalent market of lower dimension is presented next. It is a type of Mutual
Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in
detail for different market parameters: For infrequent trading we find that
both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous.
At an increasing number of stocks, again boundedness conditions need to be
met. Losses in this case depend strongly on the added stocks. The relation
of boundedness and portfolio optimization in the Markov model leads here to
increasing losses for the investor if the boundedness condition is to hold for
all numbers of stocks. In the Markov case, the losses for different numbers
of states are negligible in case more states are assumed than were originally present. Assuming fewer states leads to high losses. Again for the Markov
model, a simplification of the complex optimal trading strategy for power
utility in the partial information setting is shown to cause only minor losses.
If the market parameters are such that shortselling and borrowing constraints
are in effect, these constraints may lead to big losses depending on how much
effect the constraints have. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.
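For the linear Gaussian drift model, the partially informed investor's filter is of Kalman type; the following discrete-time sketch estimates an unobserved mean-reverting drift from observed log-returns. It only illustrates the filtering step, with made-up parameters, not the portfolio optimization carried out in the thesis.

```python
# Sketch: discrete-time Kalman filter for an unobserved mean-reverting drift
# observed only through noisy log-returns (an Euler discretization of the
# continuous-time filtering problem; all parameters are made up).
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 1000
dt = T / n
kappa, mu_bar, sigma_mu = 2.0, 0.05, 0.3   # drift: d mu = kappa*(mu_bar - mu) dt + sigma_mu dW
sigma_s = 0.2                              # stock volatility

# simulate the hidden drift and the observed log-returns
mu = np.empty(n); mu[0] = mu_bar
ret = np.empty(n)
for k in range(n):
    if k > 0:
        mu[k] = mu[k-1] + kappa * (mu_bar - mu[k-1]) * dt + sigma_mu * np.sqrt(dt) * rng.standard_normal()
    ret[k] = (mu[k] - 0.5 * sigma_s**2) * dt + sigma_s * np.sqrt(dt) * rng.standard_normal()

# Kalman filter: state mu_k, observation ret_k = (mu_k - sigma_s^2/2) dt + noise
m, P = mu_bar, sigma_mu**2 / (2 * kappa)   # prior mean and stationary variance
est = np.empty(n)
for k in range(n):
    # predict
    m = m + kappa * (mu_bar - m) * dt
    P = (1 - kappa * dt)**2 * P + sigma_mu**2 * dt
    # update with the observed return
    S = dt**2 * P + sigma_s**2 * dt        # innovation variance
    gain = P * dt / S                      # Kalman gain
    m = m + gain * (ret[k] - (m - 0.5 * sigma_s**2) * dt)
    P = (1 - gain * dt) * P
    est[k] = m

print("final filter estimate:", est[-1], " true drift:", mu[-1])
```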
Adjoint-Based Shape Optimization and Optimal Control with Applications to Microchannel Systems
(2021)
Optimization problems constrained by partial differential equations (PDEs) play an important role in many areas of science and engineering. They often arise in the optimization of technological applications, where the underlying physical effects are modeled by PDEs. This thesis investigates such problems in the context of shape optimization and optimal control with microchannel systems as novel applications. Such systems are used, e.g., as cooling systems, heat exchangers, or chemical reactors as their high surface-to-volume ratio, which results in beneficial heat and mass transfer characteristics, allows them to excel in these settings. Additionally, this thesis considers general PDE constrained optimization problems with particular regard to their efficient solution.
As our first application, we study a shape optimization problem for a microchannel cooling system: We rigorously analyze this problem, prove its shape differentiability, and calculate the corresponding shape derivative. Afterwards, we consider the numerical optimization of the cooling system for which we employ a hierarchy of reduced models derived via porous medium modeling and a dimension reduction technique. A comparison of the models in this context shows that the reduced models approximate the original one very accurately while requiring substantially less computational resources.
Our second application is the optimization of a chemical microchannel reactor for the Sabatier process using techniques from PDE constrained optimal control. To treat this problem, we introduce two models for the reactor and solve a parameter identification problem to determine the necessary kinetic reaction parameters for our models. Thereafter, we consider the optimization of the reactor's operating conditions with the objective of improving its product yield, which shows considerable potential for enhancing the design of the reactor.
To provide efficient solution techniques for general shape optimization problems, we introduce novel nonlinear conjugate gradient methods for PDE constrained shape optimization and analyze their performance on several well-established benchmark problems. Our results show that the proposed methods perform very well, making them efficient and appealing gradient-based shape optimization algorithms.
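For orientation, a generic nonlinear conjugate gradient iteration (Polak-Ribière with restart) on a toy function is sketched below; the methods proposed in the thesis apply such updates to shape gradients obtained from adjoint PDEs, which is not reproduced here.

```python
# Generic nonlinear conjugate gradient (Polak-Ribiere+ with restart) on a toy
# function; only to fix ideas, not the shape optimization setting of the thesis.
import numpy as np

def f(x):      # Rosenbrock test function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def backtracking(x, d, g, alpha=1.0, rho=0.5, c=1e-4):
    # Armijo backtracking line search along the descent direction d
    while f(x + alpha * d) > f(x) + c * alpha * g @ d:
        alpha *= rho
    return alpha

x = np.array([-1.2, 1.0])
g = grad(x)
d = -g
for k in range(2000):
    if g @ d >= 0:                 # safeguard: restart with steepest descent
        d = -g
    alpha = backtracking(x, d, g)
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-8:
        break
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
    d = -g_new + beta * d
    g = g_new

print(k, x)   # approaches the minimizer (1, 1)
```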
Finally, we continue recent software-based developments for PDE constrained optimization and present our novel open-source software package cashocs. Our software implements and automates the adjoint approach and, thus, facilitates the solution of general PDE constrained shape optimization and optimal control problems. Particularly, we highlight our software's user-friendly interface, straightforward applicability, and mesh independent behavior.
This thesis belongs to algebraic geometry and representation theory and establishes a connection between the two fields. It deals with the derived categories of flat degenerations of projective lines and elliptic curves, using the technique of matrix problems as the main tool. The main result of this dissertation is the following theorem: THEOREM. Let X be a cycle of projective lines. Then there are three types of indecomposable objects in D^-(Coh_X): shifts of skyscraper sheaves at a regular point; bands B(w,m,lambda); and strings S(w). In a completely analogous way one proves the tameness of the derived categories of many associative algebras.
This dissertation is intended to transport the theory of Serre functors into the context of A-infinity-categories. We begin with an introduction to multicategories and closed multicategories, which form a framework in which the theory of A-infinity-categories is developed. We prove that (unital) A-infinity-categories constitute a closed symmetric multicategory. We define the notion of A-infinity-bimodule similarly to Tradler and show that it is equivalent to an A-infinity-functor of two arguments which takes values in the differential graded category of complexes of k-modules, where k is a commutative ground ring. Serre A-infinity-functors are defined via A-infinity-bimodules following ideas of Kontsevich and Soibelman. We prove that a unital A-infinity-category over a field that is closed under shifts admits a Serre A-infinity-functor if and only if its homotopy category admits an ordinary Serre functor. The proof uses categories and Serre functors enriched in the homotopy category of complexes of k-modules. Another important ingredient is an A-infinity-version of the Yoneda Lemma.
Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data specific to such exploration projects is very complex and requires deep knowledge of geology, geophysics, petrology, etc. from interpreters, as well as advanced tools that are able to recover particular properties. In this context, wavelet techniques have enjoyed huge success in signal processing, data compression, noise reduction, etc. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to the solutions of the Helmholtz equation. As an application, we are interested in seismic data analysis. A similar idea, constructing wavelet functions from the limit and jump relations of layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of limit and
jump relations for surfaces used in seismic data processing, i.e., non-smooth
surfaces in various topologies (for example, uniform and
quadratic). The essential idea is to replace the concept of parallel surfaces known for a smooth regular surface by certain appropriate substitutes for non-smooth surfaces.
By using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A notable point is that the construction of the wavelets allows for an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary.
In order to demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the requirements of such migration algorithms in filtering and denoising, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically enables a pointwise discussion of the velocity field. Moreover, the multiscale analysis reveals additional geological information from optical features.
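The Helmholtz wavelets of the thesis are purpose-built for the boundary geometry; purely as a generic illustration of multiscale filtering of a 2D image, a standard discrete wavelet decomposition with PyWavelets could look as follows (synthetic image, ad-hoc threshold).

```python
# Generic multiscale filtering illustration with standard 2D wavelets
# (PyWavelets); NOT the Helmholtz wavelets constructed in the thesis.
import numpy as np
import pywt

rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
image = np.sin(8 * np.pi * x) * np.exp(-3 * y) + 0.3 * rng.standard_normal(x.shape)

# multiscale decomposition
coeffs = pywt.wavedec2(image, wavelet="db4", level=4)

# ad-hoc hard thresholding of the detail coefficients (noise attenuation)
thr = 0.3
new_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="hard") for d in detail) for detail in coeffs[1:]
]
denoised = pywt.waverec2(new_coeffs, wavelet="db4")
print(denoised.shape)
```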
In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov-Chain Monte-Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameter of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
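As a generic illustration of the Metropolis-Hastings machinery only (the thesis's sampler additionally imputes the latent intensity process), a random-walk Metropolis step for a single parameter of a lognormal model might look like this; data and proposal scale are made up.

```python
# Generic random-walk Metropolis sketch for one parameter (the log-mean of a
# lognormal model); NOT the thesis's sampler, which also imputes the latent
# rainfall intensity process.
import numpy as np

rng = np.random.default_rng(11)
data = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # made-up "rainfall amounts"

def log_post(mu):
    # lognormal likelihood with known sigma = 0.5 and a flat prior on mu
    return -0.5 * np.sum((np.log(data) - mu) ** 2) / 0.5**2

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + 0.1 * rng.standard_normal()            # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                       # accept
    chain.append(mu)

print("posterior mean of mu:", np.mean(chain[1000:]))  # discard burn-in
```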
The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on a tensor product structure are not applicable in high dimensions, as the number of nodes required to yield a fixed accuracy increases exponentially with the dimension; a phenomenon often referred to as the curse of dimension. Particle methods or finite point set methods are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo (QMC) methods, are quite effective in integration problems, where an accuracy of the order of 1/N can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers, the proven order of convergence is also 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance and thus the proposed method still slightly outperforms standard MCM.
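To illustrate the Monte Carlo versus quasi-Monte Carlo accuracy gap on a plain integration problem (not the particle method of the thesis), one can compare pseudo-random and scrambled Sobol points; the sketch assumes SciPy's qmc module (SciPy >= 1.7).

```python
# Plain illustration of MC vs. quasi-MC accuracy on an s-dimensional integral
# (not the particle method developed in the thesis).
import numpy as np
from scipy.stats import qmc

s, n = 6, 2**12
f = lambda u: np.prod(3 * u**2, axis=1)   # the integral over [0,1]^s equals 1

mc = f(np.random.default_rng(0).uniform(size=(n, s))).mean()
sobol = f(qmc.Sobol(d=s, scramble=True, seed=0).random(n)).mean()

print(f"MC estimate:  {mc:.5f}   error {abs(mc - 1):.2e}")
print(f"QMC estimate: {sobol:.5f}   error {abs(sobol - 1):.2e}")
```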
A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)
This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns one encounters multidimensional population balance equation that could not be fully resolved numerically within a reasonable time necessary for steady state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena and still tractable from computational point of view. This thesis discusses a new model which focuses on discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method; as a special case of the Sectional Quadrature Method of Moments (SQMOM) is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution; namely: the total number, solute and volume concentrations and reduces the computational time when compared to the classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method is used to discretize the spatial domain (extraction column height) using the Finite Pointset Method (FPM). This method avoids the extremely difficult convective term discretization using the classical finite volume methods, which require a lot of grid points to capture the moving fronts propagating along column height.
In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. Therefore it is of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mezoscopic scale, that is for single lamellae. The most important goals are on the one hand to gain a deeper understanding of the interaction of the relevant physical effects, on the other hand to obtain a model for the simulation of the decay of a lamella which can be integrated in a global foam model. In the first part of this work, we give a short introduction into the physical properties of foam and find that the Marangoni effect is the main cause for its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis using the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDE) of third order in two spatial and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generates the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by the means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases, without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires a precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method by the means of local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches with the possibility to treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be directly applied on gray value images by adapting the directional distance transforms on gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive a probability for each pixel describing if the pixel belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partly reconstruction of the fiber cores in non crossing regions. These core parts are then reconnected over critical regions, if they fulfill certain conditions ensuring the affiliation to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non overlapping fibers, or controlling the bending or the orientation distribution. This gap can be bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non overlapping configuration. Furthermore, we provide the estimation of all parameters needed for the fitting of this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application on a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
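A minimal sketch of the first modeling step only, a random walk with von Mises-Fisher distributed segment directions, is given below; the force-biased packing step is not included, all parameter values are made up, and SciPy >= 1.11 is assumed for scipy.stats.vonmises_fisher.

```python
# Sketch of step one of the fibre model: a random walk whose segment directions
# follow a von Mises-Fisher distribution around the previous direction.
import numpy as np
from scipy.stats import vonmises_fisher   # requires SciPy >= 1.11

rng = np.random.default_rng(5)
mu = np.array([0.0, 0.0, 1.0])   # preferred fibre orientation (unit vector)
kappa = 50.0                     # concentration: larger kappa gives straighter fibres
step, n_steps = 2.0, 40          # made-up segment length and number of segments

points = [np.zeros(3)]
direction = mu
for _ in range(n_steps):
    # draw the next segment direction around the previous one (controls bending)
    sample = vonmises_fisher(direction, kappa).rvs(1, random_state=rng)
    direction = np.asarray(sample).reshape(3)
    points.append(points[-1] + step * direction)

fibre = np.array(points)
print(fibre.shape)               # polygonal fibre with n_steps + 1 vertices
```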
Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also in case of fibrosis in the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D micro computed synchrotron radiation (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimen show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimen, we apply image analysis methods well-known from material science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
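Purely as a generic illustration of a granulometry by morphological openings (2D toy image with SciPy; the thesis applies the analogous 3D analysis to segmented vessel images):

```python
# Generic granulometry sketch: fraction of foreground removed by morphological
# openings with discs of increasing radius (2D toy image).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
yy, xx = np.mgrid[:256, :256]
img = np.zeros((256, 256), dtype=bool)
for _ in range(40):                      # toy image: random discs as "vessel sections"
    cx, cy = rng.uniform(20, 236, size=2)
    r = rng.uniform(3, 10)
    img |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

total = img.sum()
for radius in range(1, 11):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = x**2 + y**2 <= radius**2      # disc structuring element
    opened = ndimage.binary_opening(img, structure=disc)
    removed = 1 - opened.sum() / total   # area fraction removed at this scale
    print(radius, round(removed, 3))
```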
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.