## Fachbereich Mathematik

### Filter

#### Year of publication

#### Document type

- Dissertation (225)

#### Keywords

- Algebraische Geometrie (6)
- Finanzmathematik (5)
- Optimization (5)
- Portfolio Selection (5)
- Stochastische dynamische Optimierung (5)
- Navier-Stokes-Gleichung (4)
- Numerische Mathematik (4)
- Portfolio-Optimierung (4)
- portfolio optimization (4)
- Computeralgebra (3)
- Elastizität (3)
- Erwarteter Nutzen (3)
- Finite-Volumen-Methode (3)
- Gröbner-Basis (3)
- Homogenisierung <Mathematik> (3)
- Inverses Problem (3)
- Numerische Strömungssimulation (3)
- Optionspreistheorie (3)
- Portfoliomanagement (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Wavelet (3)
- optimales Investment (3)
- Asymptotic Expansion (2)
- Asymptotik (2)
- Bewertung (2)
- Derivat <Wertpapier> (2)
- Elasticity (2)
- Endliche Geometrie (2)
- Erdmagnetismus (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Geometric Ergodicity (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hochskalieren (2)
- IMRT (2)
- Kreditrisiko (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Local smoothing (2)
- Mehrskalenanalyse (2)
- Mehrskalenmodell (2)
- Modulraum (2)
- Partial Differential Equations (2)
- Partielle Differentialgleichung (2)
- Portfolio Optimization (2)
- Poröser Stoff (2)
- Regularisierung (2)
- Schnitttheorie (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Transaktionskosten (2)
- Upscaling (2)
- Vektorwavelets (2)
- White Noise Analysis (2)
- curve singularity (2)
- domain decomposition (2)
- duality (2)
- finite volume method (2)
- geomagnetism (2)
- homogenization (2)
- illiquidity (2)
- interface problem (2)
- isogeometric analysis (2)
- mesh generation (2)
- optimal investment (2)
- splines (2)
- "Slender-Body"-Theorie (1)
- 3D image analysis (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- Ableitungsfreie Optimierung (1)
- Advanced Encryption Standard (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraischer Funktionenkörper (1)
- Annulus (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Asiatische Option (1)
- Asympotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Ausfallrisiko (1)
- Automorphismengruppe (1)
- Autoregressive Hilbertian model (1)
- B-Spline (1)
- Barriers (1)
- Basket Option (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Beschichtungsprozess (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bildsegmentierung (1)
- Binomialbaum (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Boltzmann Equation (1)
- Bondindizes (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFD (1)
- CHAMP (1)
- CPDO (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Charakter <Gruppentheorie> (1)
- Chi-Quadrat-Test (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Circle Location (1)
- Coarse graining (1)
- Cohen-Lenstra heuristic (1)
- Combinatorial Optimization (1)
- Commodity Index (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer algebra (1)
- Computeralgebra System (1)
- Conditional Value-at-Risk (1)
- Consistencyanalysis (1)
- Consistent Price Processes (1)
- Construction of hypersurfaces (1)
- Copula (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Curvature (1)
- Curved viscous fibers (1)
- DSMC (1)
- Darstellungstheorie (1)
- Das Urbild von Ideal unter einen Morphismus der Algebren (1)
- Debt Management (1)
- Defaultable Options (1)
- Deformationstheorie (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Differenzenverfahren (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionsprozess (1)
- Discriminatory power (1)
- Diskrete Fourier-Transformation (1)
- Dispersionsrelation (1)
- Dissertation (1)
- Druckkorrektur (1)
- Dünnfilmapproximation (1)
- EM algorithm (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Effizienter Algorithmus (1)
- Effizienz (1)
- Eikonal equation (1)
- Elastische Deformation (1)
- Elastoplastizität (1)
- Elektromagnetische Streuung (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Entscheidungsbaum (1)
- Entscheidungsunterstützung (1)
- Enumerative Geometrie (1)
- Erdöl Prospektierung (1)
- Erwartungswert-Varianz-Ansatz (1)
- Expected shortfall (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extrapolation (1)
- Extreme Events (1)
- Extreme value theory (1)
- FEM (1)
- FFT (1)
- FPM (1)
- Faden (1)
- Fatigue (1)
- Feedfoward Neural Networks (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber suspension flow (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite-Elemente-Methode (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fourier-Transformation (1)
- Fredholmsche Integralgleichung (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- GARCH (1)
- GARCH Modelle (1)
- Galerkin-Methode (1)
- Gamma-Konvergenz (1)
- Garbentheorie (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Geodesie (1)
- Geometrische Ergodizität (1)
- Gewichteter Sobolev-Raum (1)
- Gittererzeugung (1)
- Gleichgewichtsstrategien (1)
- Granular flow (1)
- Granulat (1)
- Gravitationsfeld (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Große Abweichung (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner-basis (1)
- Gyroscopic (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamiltonian Path Integrals (1)
- Handelsstrategien (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Hazard Functions (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Helmholtz Type Boundary Value Problems (1)
- Heston-Modell (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- Homogenization (1)
- Homologische Algebra (1)
- Hub Location Problem (1)
- Hydrostatischer Druck (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hyperspektraler Sensor (1)
- Hysterese (1)
- ITSM (1)
- Idealklassengruppe (1)
- Illiquidität (1)
- Image restoration (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Inflation (1)
- Infrarotspektroskopie (1)
- Intensität (1)
- Internationale Diversifikation (1)
- Inverse Problem (1)
- Irreduzibler Charakter (1)
- Isogeometrische Analyse (1)
- Ito (1)
- Jacobigruppe (1)
- Kanalcodierung (1)
- Karhunen-Loève expansion (1)
- Kategorientheorie (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Kombinatorik (1)
- Kommutative Algebra (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Kreitderivaten (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurvenschar (1)
- LIBOR (1)
- Lagrangian relaxation (1)
- Laplace transform (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Leading-Order Optimality (1)
- Level set methods (1)
- Lie-Typ-Gruppe (1)
- Lineare partielle Differentialgleichung (1)
- Lippmann-Schwinger equation (1)
- Liquidität (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- MBS (1)
- MKS (1)
- Macaulay’s inverse system (1)
- Marangoni-Effekt (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Mathematical Finance (1)
- Mathematik (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrskalen (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikroelektronik (1)
- Mikrostruktur (1)
- Mixed integer programming (1)
- Modellbildung (1)
- Molekulardynamik (1)
- Momentum and Mas Transfer (1)
- Monte Carlo (1)
- Monte-Carlo-Simulation (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Mosco convergence (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multicriteria optimization (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiresolution Analysis (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- NURBS (1)
- Networks (1)
- Netzwerksynthese (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Niederschlag (1)
- No-Arbitrage (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Mathematik / Algorithmus (1)
- Numerisches Verfahren (1)
- Oberflächenmaße (1)
- Oberflächenspannung (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimierung (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Order (1)
- Ovoid (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Partikel Methoden (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathwise Optimality (1)
- Pedestrian FLow (1)
- Pfadintegral (1)
- Planares Polynom (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- PolyBoRi (1)
- Population Balance Equation (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Preimage of an ideal under a morphism of algebras (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Prox-Regularisierung (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- Quadratischer Raum (1)
- Quantile autoregression (1)
- Quasi-Variational Inequalities (1)
- RKHS (1)
- Radial Basis Functions (1)
- Radiotherapy (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rarefied gas (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regressionsanalyse (1)
- Regularisierung / Stoppkriterium (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Reliability (1)
- Restricted Regions (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikomanagement (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Measures (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Räumliche Statistik (1)
- SWARM (1)
- Scale function (1)
- Schaum (1)
- Schaumzerfall (1)
- Schiefe Ableitung (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösu (1)
- Second Order Conditions (1)
- Semi-Markov-Kette (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Shallow Water Equations (1)
- Shape optimization, gradient based optimization, adjoint method (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Slender body theory (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprung-Diffusions-Prozesse (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Steuer (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Processe (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Strukturiertes Finanzprodukt (1)
- Strukturoptimierung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Success Run (1)
- Survival Analysis (1)
- Systemidentifikation (1)
- Sägezahneffekt (1)
- Tail Dependence Koeffizient (1)
- Test for Changepoint (1)
- Thermophoresis (1)
- Thin film approximation (1)
- Tichonov-Regularisierung (1)
- Time Series (1)
- Time-Series (1)
- Time-delay-Netz (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Traffic flow (1)
- Transaction costs (1)
- Trennschärfe <Statistik> (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Two-phase flow (1)
- Unreinheitsfunktion (1)
- Untermannigfaltigkeit (1)
- Upwind-Verfahren (1)
- Utility (1)
- Value at Risk (1)
- Value-at-Risk (1)
- Variationsrechnung (1)
- Vectorfield approximation (1)
- Vektorfeldapproximation (1)
- Vektorkugelfunktionen (1)
- Verschwindungsatz (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Volatilität (1)
- Volatilitätsarbitrage (1)
- Vorkonditionierer (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weißes Rauschen (1)
- White Noise (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Yaglom limits (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zero-dimensional schemes (1)
- Zopfgruppe (1)
- Zufälliges Feld (1)
- Zweiphasenströmung (1)
- abgeleitete Kategorie (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alternating minimization (1)
- alternating optimization (1)
- analoge Mikroelektronik (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- applied mathematics (1)
- archimedean copula (1)
- asian option (1)
- basket option (1)
- benders decomposition (1)
- bending strip method (1)
- binomial tree (1)
- blackout period (1)
- bocses (1)
- boundary value problem (1)
- canonical ideal (1)
- canonical module (1)
- changing market coefficients (1)
- closure approximation (1)
- combinatorics (1)
- composites (1)
- computational finance (1)
- computer algebra (1)
- computeralgebra (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- correlated errors (1)
- coupling methods (1)
- crash (1)
- crash hedging (1)
- credit risk (1)
- curvature (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- derived category (1)
- diffusion models (1)
- discrepancy (1)
- double exponential distribution (1)
- downward continuation (1)
- efficiency loss (1)
- elastoplasticity (1)
- elliptical distribution (1)
- endomorphism ring (1)
- enumerative geometry (1)
- equilibrium strategies (1)
- equisingular families (1)
- face value (1)
- fiber reinforced silicon carbide (1)
- filtration (1)
- financial mathematics (1)
- finite difference schemes (1)
- finite element method (1)
- first hitting time (1)
- float glass (1)
- flood risk (1)
- fluid structure (1)
- fluid structure interaction (1)
- forward-shooting grid (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- gitter (1)
- good semigroup (1)
- graph p-Laplacian (1)
- gravitation (1)
- group action (1)
- großer Investor (1)
- hedging (1)
- heuristic (1)
- hierarchical matrix (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hyperspectal unmixing (1)
- idealclass group (1)
- image analysis (1)
- image denoising (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- integer programming (1)
- integral constitutive equations (1)
- intensity (1)
- inverse optimization (1)
- inverse problem (1)
- jump-diffusion process (1)
- large investor (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- level K-algebras (1)
- level set method (1)
- limit theorems (1)
- linear code (1)
- localizing basis (1)
- longevity bonds (1)
- low-rank approximation (1)
- macro derivative (1)
- market manipulation (1)
- markov model (1)
- martingale optimality principle (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matroid flows (1)
- mean-variance approach (1)
- micromechanics (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- modal derivatives (1)
- model order reduction (1)
- moduli space (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- multi scale (1)
- multi-asset option (1)
- multi-class image segmentation (1)
- multi-level Monte Carlo (1)
- multi-phase flow (1)
- multicategory (1)
- multifilament superconductor (1)
- multigrid method (1)
- multileaf collimator (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative noise (1)
- multiscale denoising (1)
- multiscale methods (1)
- multivariate chi-square-test (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- non-desarguesian plane (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optiman stopping (1)
- option pricing (1)
- option valuation (1)
- partial differential equation (1)
- partial information (1)
- path-dependent options (1)
- pattern (1)
- penalty methods (1)
- penalty-free formulation (1)
- petroleum exploration (1)
- planar polynomial (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- potential (1)
- preconditioners (1)
- pressure correction (1)
- primal-dual algorithm (1)
- probability distribution (1)
- projective surfaces (1)
- proximation (1)
- quadrinomial tree (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rare disasters (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- real quadratic number fields (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regression analysis (1)
- regularization methods (1)
- rheology (1)
- sampling (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- second class group (1)
- seismic tomography (1)
- semigroup of values (1)
- sheaf theory (1)
- similarity measures (1)
- singularities (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- spherical approximation (1)
- sputtering process (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastische Arbitrage (1)
- stop- and play-operator (1)
- subgradient (1)
- superposed fluids (1)
- surface measures (1)
- surrogate algorithm (1)
- syzygies (1)
- tail dependence coefficient (1)
- tax (1)
- tensions (1)
- time delays (1)
- topological asymptotic expansion (1)
- toric geometry (1)
- torische Geometrie (1)
- total variation (1)
- total variation spatial regularization (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- tropical geometry (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- value semigroup (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector spherical harmonics (1)
- vectorial wavelets (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- viscoelastic fluids (1)
- volatility arbitrage (1)
- vortex seperation (1)
- well-posedness (1)
- worst-case (1)
- worst-case scenario (1)
- Äquisingularität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)

#### Department / Organisational unit

- Fachbereich Mathematik (225)
- Fraunhofer (ITWM) (2)

Interest-rate-optimized public debt management aims at the most efficient possible trade-off between expected financing costs on the one hand and risks for the government budget on the other. To approach this tension, we build, for the first time, a bridge between the problems of debt management and the methods of continuous-time dynamic portfolio optimization.
The key element is a new metric for measuring financing costs, the perpetual costs. These reflect the average future financing costs and comprise both the interest payments that are already known and the still unknown costs of necessary refinancing. The volatility of the perpetual costs therefore also represents the risk of a given strategy: the longer the term of the financing, the smaller the fluctuation range of the perpetual costs.
The perpetual costs arise as the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. For modelling the present value we draw on the concept of a self-financing bond portfolio known from dynamic portfolio optimization, here based on a multi-dimensional affine-linear interest rate model. The growth of the debt portfolio is slowed down or prevented by including the government's primary surplus as an external inflow into the self-financing model.
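In the notation used here for illustration (not taken from the thesis itself), the perpetual costs at time \(t\) are the product of the portfolio-independent perpetual rate \(c_t\) and the present value \(V_t\) of the debt portfolio,
\[
K_t \;=\; c_t \, V_t ,
\]
so the risk of a financing strategy is measured by the volatility of \(K_t\), which shrinks as the maturity of the chosen financing increases.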
Because of the variety of possible financing instruments we do not choose their value weights as control variables, but instead control the sensitivities of the portfolio with respect to different interest rate movements. From optimal sensitivities, optimal value weights for a wide range of financing instruments can then be derived in a subsequent step. We demonstrate this exemplarily using rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems using methods from stochastic control theory. In each case the expected utility of the perpetual costs is maximized. The utility functions are tailored to debt management and are characterized in particular by the fact that higher costs come with lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed debt or cost ceiling.

In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
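In symbols (our notation, following the description above rather than the thesis's exact formulation), an investor with trading strategy \(\pi\) and terminal wealth \(X_T^{\pi}(\tau, k)\) in the scenario of a crash of relative size \(k\) at time \(\tau\) solves the max-min problem
\[
\sup_{\pi}\;\inf_{0 \le \tau \le T,\; 0 \le k \le k^{*}} \mathbb{E}\Bigl[U\bigl(X_T^{\pi}(\tau, k)\bigr)\Bigr],
\]
where only the upper bound \(k^{*}\) on the crash size is assumed to be known.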
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.

The thesis is concerned with the modelling of ionospheric current systems and induced magnetic fields in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered operator equation. First of all, a general multiscale concept for vectorial operator equations between two separable Hilbert spaces is developed in terms of vector kernel functions. The equivalence to the canonical tensorial ansatz is proven and the theory is transferred to the case of multiscale regularization of vectorial inverse problems. As a first application, a special multiresolution analysis of the space of square-integrable vector fields on the sphere, e.g. the Earth’s magnetic field measured on a spherical satellite orbit, is presented. By this, a multiscale separation of spherical vector-valued functions with respect to their sources can be established. The vector field is split up into a part induced by sources inside the sphere, a part which is due to sources outside the sphere and a part which is generated by sources on the sphere, i.e. currents crossing the sphere. The multiscale technique is tested on a magnetic field data set of the satellite CHAMP, and it is shown that crustal field determination can be improved by previously applying our method. In order to reconstruct ionospheric current systems from magnetic field data, an inversion of the Biot-Savart law in terms of multiscale regularization is defined. The corresponding operator is formulated and the singular values are calculated. Based on the knowledge of the singular system, a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. The method is tested on different simulations and on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
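For orientation (standard formula, not quoted from the thesis), the Biot-Savart law that is inverted here maps a current density \(j\) supported in a region \(\Omega\) to the magnetic field
\[
B(x) \;=\; \frac{\mu_0}{4\pi} \int_{\Omega} \frac{j(y) \times (x - y)}{|x - y|^{3}} \, \mathrm{d}y ,
\]
so recovering ionospheric currents from satellite magnetic field data means solving this integral equation, an ill-posed problem that calls for the regularization strategy described above.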

This doctoral thesis deals with volatility arbitrage in European call options and with the modelling of collateralized debt obligations (CDOs). First, following an idea of Carr, it is shown that stochastic arbitrage can exist in a Black-Scholes-like model. We then optimize the arbitrage profit using Markowitz's mean-variance approach and martingale theory. Stochastic arbitrage in Heston's stochastic volatility model is also investigated. Furthermore, we present a Markov model for CDOs. We then show that one quickly reaches the limits of this model: after the default of a firm, the default intensities of the surviving firms rise and never return to their initial level. This behaviour, however, does not agree with observations in the market: after market turbulence the market stabilizes again, and one would therefore expect the default intensities of the surviving firms to flatten out again as well. We therefore replace the Markov model with a semi-Markov model, which reproduces the market much more faithfully.

The present work deals with the (global and local) modeling of the wind field on the real topography of Rheinland-Pfalz. The focus is on the construction of a vectorial wind field from low, irregularly distributed data given on a topographical surface. The developed spline procedure works by means of vectorial (homogeneous, harmonic) polynomials (outer harmonics) which control the oscillation behaviour of the spline interpolant. In the process, the spline curvature defining the energy norm is taken on a sphere inside the Earth's interior rather than on the Earth's surface. The numerical advantage of this method arises from the maximum-minimum principle for harmonic functions.

In this thesis we classify simple coherent sheaves on Kodaira fibers of types II, III and IV (cuspidal and tacnode cubic curves and a plane configuration of three concurrent lines). Indecomposable vector bundles on smooth elliptic curves were classified in 1957 by Atiyah. In works of Burban, Drozd and Greuel it was shown that the categories of vector bundles and coherent sheaves on cycles of projective lines are tame. It turns out that all other degenerations of elliptic curves are vector-bundle-wild. Nevertheless, we prove that the category of coherent sheaves of an arbitrary reduced plane cubic curve (including the mentioned Kodaira fibers) is brick-tame. The main technical tool of our approach is the representation theory of bocses. Although this technique has mainly been used for purely theoretical purposes, we illustrate its computational potential for investigating tame behavior in wild categories. In particular, it allows us to prove that a simple vector bundle on a reduced cubic curve is determined by its rank, multidegree and determinant, generalizing Atiyah's classification. Our approach leads to an interesting class of bocses, which can be wild but are brick-tame.

Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been studied extensively by Glasserman et al. A substantial variance reduction is achieved by applying an exponential-twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density, and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only for the case of Gaussian and certain heavy-tailed changes in the risk factors.
Hence, our main goal is to retain the main advantage of Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in the risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the cross-entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the one closest to the optimal importance sampling density (the zero-variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can now use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter whether light- or heavy-tailed. The numerical study exhibits a greater variance reduction than that of the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as is shown in the numerical study.
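The following toy sketch illustrates the basic importance sampling mechanism for rare loss probabilities (it is not the algorithm of the thesis: the quadratic loss function, the fixed mean shift and all numbers are made up, and a cross-entropy procedure would choose the shift by minimizing the Kullback-Leibler distance to the zero-variance density instead of fixing it by hand):

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_loss(z):
    # Hypothetical Delta-Gamma-like quadratic loss in the risk-factor changes z
    # (a short-gamma portfolio, so losses grow quadratically in large moves).
    delta = np.array([1.0, -0.5])
    gamma = np.array([[0.3, 0.0], [0.0, 0.2]])
    return -(z @ delta) + 0.5 * np.einsum("ni,ij,nj->n", z, gamma, z)

threshold, n = 4.0, 100_000

# Plain Monte Carlo: risk-factor changes ~ N(0, I).
z = rng.standard_normal((n, 2))
p_plain = np.mean(portfolio_loss(z) > threshold)

# Importance sampling: shift the sampling mean towards the loss region and
# reweight each sample with the likelihood ratio N(0, I) / N(mu, I).
mu = np.array([-3.0, 0.0])
z_is = rng.standard_normal((n, 2)) + mu
weights = np.exp(-z_is @ mu + 0.5 * mu @ mu)
p_is = np.mean(weights * (portfolio_loss(z_is) > threshold))

print(p_plain, p_is)   # same target probability, far fewer zero contributions for p_is
```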
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, but describes the corresponding transition only briefly. Hence, we give a connected discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.

In this work two main approaches for the evaluation of credit derivatives are analyzed: the copula-based approach and the Markov chain-based approach. This work makes it possible to use the advantages and avoid the disadvantages of both approaches. For example, modeling contagion effects, i.e. modeling dependencies between counterparty defaults, is complicated under the copula approach. One remedy is to use a Markov chain, where it can be done directly. The work consists of five chapters. The first chapter of this work extends the model for the pricing of CDS contracts presented in the paper by Kraft and Steffensen (2007). In the widely used models for CDS pricing it is assumed that only the borrower can default. In our model we assume that each of the counterparties involved in the contract may default. Calculated contract prices are compared with those calculated under the usual assumptions. All results are summarized in the form of numerical examples and plots. In the second chapter the copula and its main properties are described. The methods of constructing copulas as well as the most common copula families and their properties are introduced. In the third chapter a method of constructing a copula for an existing Markov chain is introduced. The cases with two and three counterparties are considered. The necessary relations between the transition intensities are derived in order to find some copula functions directly. Formulae for default dependence measures such as Spearman's rho and Kendall's tau are derived for the defined copulas. Several numerical examples are presented in which copulas are built for given Markov chains. The fourth chapter deals with the approximation of copulas if a copula cannot be provided explicitly for a given Markov chain. The fifth chapter concludes this thesis.
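For reference, the standard copula representations of the two dependence measures mentioned above are
\[
\tau \;=\; 4\int_{0}^{1}\!\!\int_{0}^{1} C(u,v)\,\mathrm{d}C(u,v) \;-\; 1,
\qquad
\rho_{S} \;=\; 12\int_{0}^{1}\!\!\int_{0}^{1} C(u,v)\,\mathrm{d}u\,\mathrm{d}v \;-\; 3,
\]
so that once a copula has been constructed from the transition intensities of the Markov chain, both Kendall's tau and Spearman's rho can be evaluated directly from it.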

This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested both in the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and in the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it into perspective relative to alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts more sensitively to slight changes in the probability of a financial loss than value at risk (V@R) and average value at risk.
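For reference (a standard definition, not quoted from the thesis), a utility function with constant relative risk aversion \(\gamma > 0\) is
\[
u(x) \;=\;
\begin{cases}
\dfrac{x^{1-\gamma}}{1-\gamma}, & \gamma \neq 1,\\[6pt]
\ln x, & \gamma = 1,
\end{cases}
\qquad\text{with}\qquad -\,\frac{x\,u''(x)}{u'(x)} \;=\; \gamma ,
\]
and the coherence statement above refers to OEU built from such a utility function.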
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX ® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.

This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive utility-based approach. It relies on the economically intuitive fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff. However, in the presence of proportional transaction costs such a utility-based approach to the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to show the FTAP under proportional transaction costs.

For the sake of existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process. The second component is given by the bid-ask prices depending on the investor's optimal action. Even more is true: near this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backwards induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found from maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which can be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the second kind has so far been investigated only in models with strict costs in every state; in our model transaction costs need not be present in every state.

For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the single-period case with only one risky asset. There are mainly two reasons for this. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices. To make the results applicable to the multi-period case we assume that the prices are given by compact and convex random sets. Then the multi-period case is similar to the case with only one risky asset, but more demanding with regard to technical questions.

Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers and digital cameras. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight and high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes, anode and cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases, solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which we can distinguish the complex porous structure of the electrodes the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques, the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of a periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions. We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of a random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
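For orientation, the Butler-Volmer interface condition mentioned above relates the interfacial current density \(j\) to the overpotential \(\eta\); in its standard form (the exchange current density \(j_0\) itself depends nonlinearly on the local concentrations) it reads
\[
j \;=\; j_{0}\left[\exp\!\left(\frac{\alpha_{a} F \eta}{R T}\right) - \exp\!\left(-\frac{\alpha_{c} F \eta}{R T}\right)\right],
\]
with anodic and cathodic transfer coefficients \(\alpha_a, \alpha_c\), Faraday constant \(F\), gas constant \(R\) and temperature \(T\). Upscaling this strongly nonlinear coupling between the phases is precisely the crucial step in the homogenization procedure described above.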

In this thesis we address two instances of duality in commutative algebra.

In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n that are closed under minima, have a conductor and fulfill special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. In relation to the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.

In the second part, we treat Macaulay’s inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive-dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive-dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.

A main result of this thesis is a conceptual proof of the fact that the weighted number of tropical curves of given degree and genus, which pass through the right number of general points in the plane (resp., which pass through general points in R^r and represent a given point in the moduli space of genus g curves) is independent of the choices of points. Another main result is a new correspondence theorem between plane tropical cycles and plane elliptic algebraic curves.

This thesis is devoted to two main topics (accordingly, there are two chapters). In the first chapter, we establish a tropical intersection theory with notions and tools analogous to its algebro-geometric counterpart. This includes tropical cycles, rational functions, intersection products of Cartier divisors and cycles, morphisms, their functors and the projection formula, and rational equivalence. The most important features of this theory are the following:

- It unifies and simplifies many of the existing results of tropical enumerative geometry, which often contained involved ad-hoc computations.
- It is indispensable to formulate and solve further tropical enumerative problems.
- It shows deep relations to the intersection theory of toric varieties and related fields.
- The relationship between tropical and classical Gromov-Witten invariants found by Mikhalkin is made plausible from inside tropical geometry.
- It is interesting on its own as a subfield of convex geometry.

In the second chapter, we study tropical gravitational descendants (i.e. Gromov-Witten invariants with incidence and "Psi-class" factors) and show that many concepts of the classical Gromov-Witten theory such as the famous WDVV equations can be carried over to the tropical world. We use this to extend Mikhalkin's results to a certain class of gravitational descendants, i.e. we show that many of the classical gravitational descendants of P^2 and P^1 x P^1 can be computed by counting tropical curves satisfying certain incidence conditions and with prescribed valences of their vertices. Moreover, the presented theory is not restricted to plane curves and therefore provides an important tool to derive similar results in higher dimensions. A more detailed chapter synopsis can be found at the beginning of each individual chapter.

Tropical intersection theory
(2010)

This thesis consists of five chapters: Chapter 1 contains the basics of the theory and is essential for the rest of the thesis. Chapters 2-5 are to a large extent independent of each other and can be read separately.

- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphisms.

This thesis is devoted to furthering the tropical intersection theory as well as to applying the developed theory to gain new insights about tropical moduli spaces.

We use piecewise polynomials to define tropical cocycles that generalise the notion of tropical Cartier divisors to higher codimensions, introduce an intersection product of cocycles with tropical cycles and use the connection to toric geometry to prove a Poincaré duality for certain cases. Our main application of this Poincaré duality is the construction of intersection-theoretic fibres under a large class of tropical morphisms.

We construct an intersection product of cycles on matroid varieties, which are a natural generalisation of tropicalisations of classical linear spaces and the local blocks of smooth tropical varieties. The key ingredient is the ability to express a matroid variety contained in another matroid variety by a piecewise polynomial that is given in terms of the rank functions of the corresponding matroids. In particular, this enables us to intersect cycles on the moduli spaces of n-marked abstract rational curves. We also construct a pull-back of cycles along morphisms of smooth varieties, relate pull-backs to tropical modifications and show that every cycle on a matroid variety is rationally equivalent to its recession cycle and can be cut out by a cocycle.

Finally, we define families of smooth rational tropical curves over smooth varieties and construct a tropical fibre product in order to show that every morphism of a smooth variety to the moduli space of abstract rational tropical curves induces a family of curves over the domain of the morphism. This leads to an alternative, inductive way of constructing moduli spaces of rational curves.

The aim of this dissertation is the development and implementation of an algorithm for computing tropical varieties over general valued fields. Computing tropical varieties over fields with trivial valuation is a sufficiently well-solved problem, for which the authors Bogart, Jensen, Speyer, Sturmfels and Thomas impressively combine classical computer algebra techniques with constructive methods from convex geometry.
If, however, the ground field carries a non-trivial valuation, such as the field of \(p\)-adic numbers \(\mathbb{Q}_p\), conventional Gröbner basis theory seemingly reaches its limits. The underlying monomial orderings are not suited to studying problems that depend on a non-trivial valuation of the coefficients. This has led to a series of works that modify standard Gröbner basis theory in order to take the valuation of the ground field into account.
In this thesis we present an alternative approach and show how the valuation can be emulated by means of a specially introduced variable, so that no modification of the classical tools is necessary.
In the course of this, the theory of standard bases is generalized to power series over a coefficient ring. Particular care is taken that, for polynomial input data, all algorithms coincide with their classical counterparts, so that established software systems can be used for practical purposes. Moreover, the construction of the Gröbner fan and the Gröbner walk technique are introduced for slightly inhomogeneous ideals. This is necessary because introducing the new variable breaks the homogeneity of the original ideal.
All algorithms have been implemented in Singular and are available as part of the official distribution. It is the first implementation capable of computing tropical varieties with \(p\)-adic valuation. As part of this work, a Singular package for convex geometry as well as an interface to Polymake were also created.

The use of trading stops is a common practice in financial markets for a variety of reasons: it provides a simple way to control losses on a given trade, while also ensuring that profit-taking is not deferred indefinitely; and it allows opportunities to consider reallocating resources to other investments. In this thesis, it is explained why the use of stops may be desirable in certain cases.
This is done by proposing a simple objective to be optimized. Some simple and commonly used rules for the placing and use of stops are investigated, consisting of fixed or moving barriers with fixed transaction costs. It is shown how to identify optimal levels at which to set stops, and the performances of different rules and strategies are compared. In addition, uncertainty about the drift parameter of the investment, and changes of this parameter, are incorporated.
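Purely as an illustration of comparing fixed stop levels (the geometric Brownian motion dynamics, the parameter values and the simple "expected exit value minus a fixed cost" criterion below are assumptions made for this sketch, not the objective proposed in the thesis), a Monte Carlo comparison could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_outcome(stop, s0=1.0, mu=0.05, sigma=0.3, T=1.0,
                     cost=0.005, n_steps=250, n_paths=2000):
    """Average value of a position closed when the price first falls to the
    stop level, or at the horizon T otherwise, minus a fixed transaction cost;
    the price follows a geometric Brownian motion."""
    dt = T / n_steps
    payoff = np.empty(n_paths)
    for i in range(n_paths):
        s, exit_price = s0, None
        for _ in range(n_steps):
            s *= np.exp((mu - 0.5 * sigma**2) * dt
                        + sigma * np.sqrt(dt) * rng.standard_normal())
            if s <= stop:
                exit_price = stop
                break
        payoff[i] = (exit_price if exit_price is not None else s) - cost
    return payoff.mean()

for stop in (0.7, 0.8, 0.9):
    print(stop, round(expected_outcome(stop), 4))
```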

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimation of the topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data. Statistics of Betti numbers can provide further connectivity information. In this work, a method for topological feature analysis of exploration data is presented, based on so-called persistent homology. Loosely speaking, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distill it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information that can point to areas of interest. An algorithm for computing the persistent Betti numbers via barcodes is implemented in the computer algebra system "Singular" within the scope of this work.
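The following minimal sketch (independent of the Singular implementation mentioned above) illustrates the barcode idea in its simplest case, degree-zero persistent homology: every point of a cloud is born at time zero, and a connected component dies when it is merged into an older one along an edge of the growing Vietoris-Rips filtration.

```python
import numpy as np

def zero_dim_barcode(points):
    """Degree-0 persistence intervals of a point cloud: process all pairwise
    edges by increasing length and record a death each time two components
    merge (union-find); one component lives forever."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    bars = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # two components merge: one bar dies
            parent[rj] = ri
            bars.append((0.0, length))
    bars.append((0.0, np.inf))       # the surviving component
    return bars

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(zero_dim_barcode(pts))  # two short bars, one long bar, one infinite bar
```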

Constructing accurate earth models from seismic data is a challenging task. Traditional methods rely on ray-based approximations of the wave equation and reach their limit in geologically complex areas. Full waveform inversion (FWI), on the other hand, seeks to minimize the misfit between modeled and observed data without such approximations.
While superior in accuracy, FWI uses a gradient-based iterative scheme that also makes it very computationally expensive. In this thesis we analyse and test an Alternating Direction Implicit (ADI) scheme in order to reduce the cost of the two-dimensional time-domain algorithm for solving the acoustic wave equation. The ADI scheme can be seen as an intermediate between explicit and implicit finite difference modeling schemes. Compared to fully implicit schemes, the ADI scheme only requires the solution of much smaller matrices and is thus less computationally demanding. Using ADI we can handle a coarser discretization than an explicit method. Although the order of convergence and the CFL conditions for the examined explicit method and the ADI scheme are comparable, we observe that the ADI scheme is less prone to dispersion. Further, our algorithm is efficiently parallelized with vectorization and threading techniques. In a numerical comparison, we demonstrate a runtime advantage of the ADI scheme over an explicit method of the same accuracy.
With the modeling in place, we test and compare several inverse schemes in the second part of the thesis. With the goal of avoiding local minima and improving the speed of convergence, we use different minimization functions and hierarchical approaches. In several tests, we demonstrate superior results of the L1 norm compared to the L2 norm, especially in the presence of noise. Furthermore, we show positive effects of applying three different multiscale approaches to the inverse problem. These methods focus on low frequencies, early recording times, or far offsets during early iterations of the minimization and then proceed iteratively towards the full problem. We achieve the best results with the frequency-based multiscale scheme, for which we also provide a heuristic method for choosing iteratively increasing frequency bands.
Finally, we demonstrate the effectiveness of the different methods first on the Marmousi model and then on an extract of the 2004 BP model, where we are able to recover both high-contrast top salt structures and lower-contrast inclusions accurately.

In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of the trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to obtain smooth convergence. We first show what causes these oscillations; this also helps us to improve the rate of convergence. As a result we introduce the Tian and the CP tree and prove that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree and explain its properties. We prove its convergence and derive an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity time \(T\). This is the main difference compared to the model from the literature. In particular, we show that the good properties of the CRR tree in the case \(S_{0} = K\) can be preserved even without this condition (which is typically the case). We achieve convergence of order \(O \left(n^{-\frac{3}{2}} \right)\), and we typically obtain better results if the tree is split later.
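For orientation, a minimal Cox-Ross-Rubinstein tree for a European put is sketched below (the plain textbook construction, not the Tian, CP or Split trees analysed in the thesis; the parameter values are arbitrary); varying \(n\) makes the oscillatory convergence mentioned above visible.

```python
import numpy as np

def crr_european_put(s0, k, r, sigma, t, n):
    """Price a European put on a CRR binomial tree with n time steps."""
    dt = t / n
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    j = np.arange(n + 1)
    values = np.maximum(k - s0 * u**j * d**(n - j), 0.0)   # terminal payoffs
    disc = np.exp(-r * dt)
    for _ in range(n):                     # backward induction
        values = disc * (q * values[1:] + (1.0 - q) * values[:-1])
    return values[0]

for n in (10, 50, 100, 500):
    print(n, round(crr_european_put(100.0, 110.0, 0.03, 0.2, 1.0, n), 4))
```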

In this thesis the author presents a mathematical model which describes the behaviour of the acoustic pressure (sound) produced by a bass loudspeaker. The underlying physical propagation of sound is described by the non-linear isentropic Euler system in a Lagrangian description. This system is expanded via asymptotic analysis up to third order in the displacement of the membrane of the loudspeaker. The differential equations which describe the behaviour of the fundamental tone and the first harmonic are compared to classical results. The boundary conditions, which are derived up to third order, are based on the principle that the small control volume sticks to the boundary and is allowed to move only along it. Using classical results from the theory of elliptic partial differential equations, the author shows that under appropriate conditions on the input data the corresponding mathematical problems admit, by the Fredholm alternative, unique solutions. Moreover, certain regularity results are shown. Further, a novel Wave Based Method (WBM) is applied to solve the corresponding mathematical problems. However, the known theory of the Wave Based Method, as found in the literature so far, allowed the WBM to be applied only in the case of convex domains. The author finds a criterion which allows the WBM to be applied in the case of non-convex domains. For 2D problems this criterion is stated as a small proposition. With the aid of this proposition one is able to subdivide arbitrary 2D domains such that the number of subdomains is minimal, the WBM may be applied in each subdomain, and the geometry is not altered, e.g. via polygonal approximation. The same principles are then used in the 3D case; however, the formulation of a similar proposition for 3D problems remains to be done. Next, we show a simple procedure to solve an inhomogeneous Helmholtz equation using the WBM. This procedure, however, is rather computationally expensive and can probably be improved. Several examples are also presented. We present the possibility of applying the Wave Based Technique to solve steady-state acoustic problems in the case of an unbounded 3D domain. The main principle of the classical WBM is extended to the case of an exterior domain. Two numerical examples are also presented. In order to apply the WBM to our problems, we subdivide the computational domain into three subdomains; therefore, certain coupling conditions are defined on the interfaces. The description of the optimization procedure, based on the principles of the shape gradient method and the level set method, and the results of the optimization finalize the thesis.

The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).

This thesis is devoted to the study of tropical curves with emphasis on their enumerative geometry. Major results include a conceptual proof of the fact that the number of rational tropical plane curves interpolating an appropriate number of general points is independent of the choice of points, the computation of intersection products of Psi-classes on the moduli space of rational tropical curves, a computation of the number of tropical elliptic plane curves of given degree and fixed tropical j-invariant, as well as a tropical analogue of the Riemann-Roch theorem for algebraic curves. The results were obtained in joint work with Hannah Markwig and/or Andreas Gathmann.

Tropical geometry is a rather new field of algebraic geometry. The main idea is to replace algebraic varieties by certain piecewise linear objects in R^n, which can be studied with the aid of combinatorics. There is hope that many algebraically difficult operations become easier in the tropical setting, as the structure of the objects seems to be simpler. In particular, tropical geometry shows promise for applications in enumerative geometry. Enumerative geometry deals with the counting of geometric objects that are determined by certain incidence conditions. Until around 1990, not many enumerative questions had been answered and there was not much prospect of solving more. But then Kontsevich introduced the moduli space of stable maps, which turned out to be a very useful concept for the study of enumerative geometry. A well-known problem of enumerative geometry is to determine the numbers N_cplx(d,g) of complex genus g plane curves of degree d passing through 3d+g-1 points in general position. Mikhalkin has defined the analogous number N_trop(d,g) for tropical curves and shown that these two numbers coincide (Mikhalkin's Correspondence Theorem). Tropical geometry supplies many new ideas and concepts that could be helpful for answering enumerative problems. However, as a rather new field, tropical geometry has to be studied more thoroughly. This thesis is concerned with the ``translation'' of well-known facts of enumerative geometry to tropical geometry. More precisely, the main results of this thesis are: a tropical proof of the invariance of N_trop(d,g) under the choice of the position of the 3d+g-1 points, a tropical proof of Kontsevich's recursive formula to compute N_trop(d,0), and a tropical proof of Caporaso's and Harris' algorithm to compute N_trop(d,g). All results were derived in joint work with my advisor Andreas Gathmann. (Note that tropical research is not restricted to the translation of classically well-known facts; there are actually new results obtained by means of tropical geometry that were not known before. For example, Mikhalkin gave a tropical algorithm to compute the Welschinger invariant for real curves. This shows that tropical geometry can indeed be a tool for a better understanding of classical geometry.)

This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is practically applicable to the description of centrifugal spinning processes of glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via the instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model thereby results from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparing them with the analytical solution of the inviscid problem. Moreover, the influence of parameters such as viscosity and rotation on the fiber dynamics is investigated. Finally, an application based on industrial data is performed.

In the theoretical part of this thesis, the difference of the solutions of the elastic and the elastoplastic boundary value problem is analysed, both for linear kinematic and combined linear kinematic and isotropic hardening material. We consider both models in their quasistatic, rate-independent formulation with linearised geometry. The main result of the thesis is that the differences of the physical observables (the stresses, strains and displacements) can be expressed as compositions of linear operators and play operators with respect to the exterior forces. Explicit homotopies between both solutions are presented. The main analytical devices are Lipschitz estimates for the stop and the play operator. We present some generalisations of the standard estimates. They allow different input functions, different initial memories and different scalar products. Thereby, the underlying function spaces in the time variable are the Sobolev spaces of first order with arbitrary integrability exponent between one and infinity. The main results can easily be generalised to the class of continuous functions with bounded total variation. In the practical part of this work, a method to correct the elastic stress tensor over a long time interval at some chosen points of the body is presented and analysed. In contrast to widespread uniaxial corrections (Neuber or ESED), our method takes multiaxiality phenomena like cyclic hardening/softening, ratchetting and non-Masing behaviour into account using Jiang's model of elastoplasticity. It can easily be adapted to other constitutive elastoplastic material laws. The theory for our correction model is developed for linear kinematic hardening material, for which error estimates are derived. Our numerical algorithm is very fast and designed for the case that the elastic stress is piecewise linear. The results for the stresses can be significantly improved with Seeger's empirical strain constraint. For the improved model, a simple predictor-corrector algorithm for smooth input loading is established.
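As a small illustration of the hysteresis operators mentioned above (a standard discretization of the scalar play operator with half-width r on a piecewise linear input; a textbook building block, not the multiaxial Jiang model used in the thesis):

```python
def play_operator(v, r, p0=0.0):
    """Discrete scalar play operator with half-width r:
    p_k = min(v_k + r, max(v_k - r, p_{k-1}));
    the corresponding stop operator is s_k = v_k - p_k."""
    p, out = p0, []
    for vk in v:
        p = min(vk + r, max(vk - r, p))
        out.append(p)
    return out

# a loading-unloading-reloading cycle
v = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, 0.0, 1.0]
print(play_operator(v, r=0.5))
```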

Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult, and different from scalar- or vector-valued autoregressions, are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, such that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example where we investigate a change-point detection problem for a stimulus response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
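A minimal sketch of the residual-based bootstrap in a finite-dimensional surrogate setting (a vector AR(1) on, say, basis coefficients of the curves, fitted by plain least squares; the data below are synthetic, and this only stands in for the functional estimators and their theory developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar1(X):
    """Least-squares estimate of Psi in X_{t+1} = Psi X_t + eps_{t+1}."""
    Y, Z = X[1:], X[:-1]
    Psi_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T
    resid = Y - Z @ Psi_hat.T
    return Psi_hat, resid - resid.mean(axis=0)       # centred residuals

def residual_bootstrap(X, n_boot=500):
    """Resample centred residuals, rebuild paths with Psi_hat, refit."""
    Psi_hat, resid = fit_ar1(X)
    n = len(X)
    estimates = []
    for _ in range(n_boot):
        eps = resid[rng.integers(0, len(resid), size=n)]
        Xb = np.empty_like(X)
        Xb[0] = X[0]
        for t in range(1, n):
            Xb[t] = Psi_hat @ Xb[t - 1] + eps[t]
        estimates.append(fit_ar1(Xb)[0])
    return Psi_hat, np.array(estimates)

# synthetic example with 3-dimensional coefficient vectors
d, n = 3, 80
X = np.zeros((n, d))
for t in range(1, n):
    X[t] = 0.5 * X[t - 1] + rng.standard_normal(d)
Psi_hat, boot = residual_bootstrap(X)
print(Psi_hat.round(2))
print(boot.std(axis=0).round(2))    # bootstrap standard errors of the entries
```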

This thesis deals with the application of binomial option pricing in a single-asset Black-Scholes market and its extension to multi-dimensional situations. Although the binomial approach is, in principle, an efficient method for lower-dimensional valuation problems, there are at least two main problems regarding its application: Firstly, traded options often exhibit discontinuities, so that the Berry-Esséen inequality is in general tight; i.e. conventional tree methods converge no faster than with order 1/sqrt(N). Furthermore, they suffer from an irregular convergence behaviour that impedes the possibility of achieving a higher order of convergence via extrapolation methods. Secondly, in multi-asset markets conventional tree construction methods cannot ensure well-defined transition probabilities for arbitrary correlation structures between the assets. As a major aim of this thesis, we present two approaches to get binomial trees into shape in order to overcome the main problems in applications: the optimal drift model for the valuation of single-asset options and the decoupling approach to multi-dimensional option pricing. The new valuation methods are embedded into a self-contained survey of binomial option pricing, which focuses on the convergence behaviour of binomial trees. The optimal drift model is a new one-dimensional binomial scheme that can lead to convergence of order o(1/N) by exploiting the specific structure of the valuation problem under consideration. As a consequence, it has the potential to outperform benchmark algorithms. The decoupling approach is presented as a universal construction method for multi-dimensional trees. The corresponding trees are well-defined for an arbitrary correlation structure of the underlying assets. In addition, they yield a more regular convergence behaviour. In fact, the sawtooth effect can even vanish completely, so that extrapolation can be applied.
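The decoupling idea can be indicated in a drastically simplified form (two assets, an assumed constant covariance matrix and a plain spectral decomposition; the actual tree construction and its convergence properties are developed in the thesis): rotating the log-price vector into the eigenbasis of the covariance matrix yields components with diagonal covariance, for which independent one-dimensional trees can be built.

```python
import numpy as np

# assumed volatilities and correlation of two assets (toy numbers)
sigma = np.array([0.2, 0.3])
rho = 0.8
cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])

# spectral decomposition cov = G diag(lam) G^T;
# the rotated log-prices Y = G^T log(S) have covariance diag(lam)
lam, G = np.linalg.eigh(cov)
print("eigenvalues:", lam.round(4))
print("rotated covariance:\n", (G.T @ cov @ G).round(10))  # numerically diagonal
```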

The main aim of this work was to obtain an approximate solution of the seismic traveltime tomography problems with the help of splines based on reproducing kernel Sobolev spaces. In order to be able to apply the spline approximation concept to surface wave as well as to body wave tomography problems, the spherical spline approximation concept was extended for the case where the domain of the function to be approximated is an arbitrary compact set in R^n and a finite number of discontinuity points is allowed. We present applications of such spline method to seismic surface wave as well as body wave tomography, and discuss the theoretical and numerical aspects of such applications. Moreover, we run numerous numerical tests that justify the theoretical considerations.

This dissertation deals with two main subjects. Both are strongly related to boundary problems for the Poisson equation and the Laplace equation, respectively. The oblique boundary problem of potential theory as well as the limit formulae and jump relations of potential theory are investigated. We divide this abstract into two parts and start with the oblique boundary problem. Here we prove existence and uniqueness results for solutions to the outer oblique boundary problem for the Poisson equation under very weak assumptions on boundary, coefficients and inhomogeneities. The main tools are the Kelvin transformation and the solution operator for the regular inner problem, provided in my diploma thesis. Moreover we prove regularization results for the weak solutions of both the inner and the outer problem. We investigate the non-admissible direction for the oblique vector field, state results with stochastic inhomogeneities and provide a Ritz-Galerkin approximation. Finally we show that the results are applicable to problems from Geomathematics. Now we come to the limit formulae. There we combine the modern theory of Sobolev spaces with the classical theory of limit formulae and jump relations of potential theory. The convergence in Lebesgue spaces for integrable functions is already treated in the literature. The achievement of this dissertation is this convergence for the weak derivatives of higher orders. Also, the layer functions are elements of Sobolev spaces and the surface is a two-dimensional suitably smooth submanifold of three-dimensional space. We consider the potential of the single layer, the potential of the double layer and their first-order normal derivatives. The main tool in the proof in the Sobolev norm is the uniform convergence of the tangential derivatives, which is proved with the help of some results taken from the literature. Additionally, we need a result about the limit formulae in the Lebesgue spaces, which is also taken from the literature, and a reduction result for normal derivatives of harmonic functions. Moreover we prove the convergence in the Hölder spaces. Finally we give an application of the limit formulae and jump relations. We generalize a known density result for several function systems from Geomathematics, from the Lebesgue space of square-integrable measurable functions to density in Sobolev spaces, based on the results proved before. To this end we have to prove the limit formula for the single-layer potential in dual spaces of Sobolev spaces, where the layer function is also an element of such a distribution space.

In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time for testing. In this thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important topic; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the interval of plane angles to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters for the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the ones given for the unimodality of GMMs by Aprausheva et al. in [1].
By using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained through clustering is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate point of the maximum of each cluster is used as the starting point for a fixed-point equation on the original damage function to get the actual maximum total damage at each iteration.
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage takes fewer iterations and is more reliable than using discretization.
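A sketch of the fixed-point step for locating the maximum of a sum of Gaussian functions (one-dimensional, with arbitrary toy weights and centres; the actual damage model, its parameters and the clustering of the summands are developed in the thesis): setting the derivative of \(f(x)=\sum_i w_i \exp(-(x-\mu_i)^2/(2 s_i^2))\) to zero yields a weighted-mean fixed-point equation that can be iterated from each cluster centre.

```python
import numpy as np

def gaussian_sum(x, w, mu, s):
    return np.sum(w * np.exp(-(x - mu) ** 2 / (2 * s ** 2)))

def fixed_point_maximum(x0, w, mu, s, tol=1e-10, max_iter=200):
    """Iterate x <- (sum_i a_i(x) mu_i) / (sum_i a_i(x)) with
    a_i(x) = w_i exp(-(x - mu_i)^2 / (2 s_i^2)) / s_i^2,
    which is exactly the stationarity condition f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        a = w * np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / s ** 2
        x_new = np.sum(a * mu) / np.sum(a)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# two well-separated bumps: each centre is a good starting point
w, mu, s = np.array([1.0, 0.6]), np.array([0.0, 4.0]), np.array([0.8, 0.5])
for x0 in mu:
    xm = fixed_point_maximum(x0, w, mu, s)
    print(round(float(xm), 4), round(float(gaussian_sum(xm, w, mu, s)), 4))
```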

We construct and study two surface measures on the space C([0,T],M) of paths in a compact Riemannian manifold M embedded into the Euclidean space R^n. The first one is induced by conditioning the usual Wiener measure on C([0,T],R^n) to the event that the Brownian particle does not leave the tubular epsilon-neighborhood of M up to time T, and passing to the limit. The second one is defined as the limit of the laws of reflected Brownian motions with reflection on the boundaries of the tubular epsilon-neighborhoods of M. We prove that both surface measures exist and compare them with the Wiener measure W_M on C([0,T],M). We show that the first one is equivalent to W_M and compute the corresponding density explicitly in terms of the scalar curvature and the mean curvature vector of M. Further, we show that the second surface measure coincides with W_M. Finally, we study the limit behavior of both surface measures as T tends to infinity.

The thesis deals with subgradient optimization methods, which serve to solve nonsmooth optimization problems. We are particularly concerned with solving large-scale integer programming problems using the methodology of Lagrangian relaxation and dualization. The goal is to employ subgradient optimization techniques to solve large-scale optimization problems that originate from a radiation therapy planning problem. In the thesis, different kinds of zigzagging phenomena which hamper the speed of subgradient procedures are investigated and identified. Moreover, we establish a new procedure which can completely eliminate the zigzagging phenomena of subgradient methods. Procedures used to construct both primal and dual solutions within the subgradient schemes are also described. We apply the subgradient optimization methods to the problem of minimizing the total treatment time of radiation therapy. The problem is NP-hard and thus far there exists no method for solving it to optimality. We present a new, efficient, and fast algorithm which combines exact and heuristic procedures to solve the problem.
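A minimal projected-subgradient sketch for a Lagrangian dual (a toy binary program with made-up data and a diminishing step size; the zigzag-free procedure and the treatment-time application are specific to the thesis and not reproduced here):

```python
import numpy as np

# toy problem: minimize c.x over x in {0,1}^3 subject to A x <= b
c = np.array([-5.0, -4.0, -3.0])
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])
b = np.array([4.0, 5.0])

def lagrangian_argmin(lam):
    """Minimizer of c.x + lam.(A x - b) over the box {0,1}^3."""
    return (c + A.T @ lam < 0).astype(float)

lam = np.zeros(len(b))
best_bound = -np.inf
for k in range(1, 201):
    x = lagrangian_argmin(lam)
    dual_value = c @ x + lam @ (A @ x - b)       # a lower bound on the optimum
    best_bound = max(best_bound, dual_value)
    subgrad = A @ x - b                          # subgradient of the dual function
    lam = np.maximum(0.0, lam + (1.0 / k) * subgrad)   # projected step, lam >= 0
print("best Lagrangian lower bound:", round(best_bound, 4))
```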

Structure and Construction of Instanton Bundles on P3

This thesis is devoted to stochastic optimization problems in various situations, treated with the aid of the Martingale method. Chapter 2 discusses the Martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing expected utility of real terminal wealth in the presence of an index bond. Chapter 4, which is a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it presents a way to deal with the optimization problem in the case where there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the income from returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete. A trick for completing an incomplete market is presented there. The general theory which supports the subsequent discussion is summarized in the first chapter.

Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution for the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the Aluminium-Copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how we can estimate the parameters of the volume distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimates are in acceptable agreement with the parameters estimated from three-dimensional image data as well as with the parameters of the model.

In some processes for spinning synthetic fibers the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there is a gap between the elongation measured in experiments and the elongation obtained by numerical simulations available in the literature.
The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow and the application in turbulent spinning processes.
In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying a resulting stochastic aerodynamic drag force to the fibers then allows the simulation of the fiber dynamics, modeled by a random partial differential algebraic equation system, as well as a quantification of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we get an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.

Continuous stochastic control theory has found many applications in optimal investment. However, it lacks some realism, as it is based on the assumption that interventions are costless, which yields optimal strategies where the controller has to intervene at every time instant. This thesis consists of the examination of two types of more realistic control methods with possible applications. In the first chapter, we study the stochastic impulse control of a diffusion process. We suppose that the controller minimizes expected discounted costs accumulating as running and controlling cost, respectively. Each control action causes costs which are bounded from below by some positive constant. This makes a continuous control impossible, as it would lead to an immediate ruin of the controller. We give a rigorous development of the relevant theory, where our guideline is to establish verification and convergence results under minimal assumptions, without focusing on the existence of solutions to the corresponding (quasi-)variational inequalities. If the impulse control problem can be characterized or approximated by (quasi-)variational inequalities, it remains to solve these equations. In Section 1.2, we solve the stochastic impulse control problem for a one-dimensional diffusion process with constant coefficients and convex running costs. Further, in Section 1.3, we solve a particular multi-dimensional example, where the uncontrolled process is given by an at least two-dimensional Brownian motion and the cost functions are rotationally symmetric. By symmetry, this problem can be reduced to a one-dimensional problem. In the last section of the first chapter, we suggest a new impulse control problem, where the controller is in addition allowed to invest his initial capital in a market consisting of a money market account and a risky asset. The costs which arise upon controlling the diffusion process and upon trading in this market have to be paid out of the controller's bond holdings. The aim of the controller is to minimize the running costs, caused by the abstract diffusion process, without getting ruined. The second chapter is based on a paper which is joint work with Holger Kraft and Frank Seifried. We analyze the portfolio decision of an investor trading in a market where the economy switches randomly between two possible states, a normal state where trading takes place continuously, and an illiquidity state where trading is not allowed at all. We allow for jumps in the market prices at the beginning and at the end of a trading interruption. Section 2.1 provides an explicit representation of the investor's portfolio dynamics in the illiquidity state in an abstract market consisting of two assets. In Section 2.2 we specify this market model and assume that the investor maximizes expected utility from terminal wealth. We establish convergence results as the maximal number of liquidity breakdowns goes to infinity. In the Markovian framework of Section 2.3, we provide the corresponding Hamilton-Jacobi-Bellman equations and prove a verification result. We apply these results to study the portfolio problem for a logarithmic investor and an investor with a power utility function, respectively. Further, we extend this model to an economy with three regimes. For instance, the third state could model an additional financial crisis where trading is still possible, but the excess return is lower and the volatility is higher than in the normal state.

Nonwoven materials are used as filter media, which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, advanced engine technologies require innovative filter media with higher performance. A virtual microstructure of the nonwoven filter medium, which has similar filter properties as the existing material, can be used to design new filter media from existing ones. The nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. As part of this thesis, we therefore extend the Altendorf-Jeulin individual fiber model to incorporate all the features mentioned above. The resulting novel stochastic 3D fiber model can generate geometries with good visual resemblance to real filter media. Furthermore, the pressure drop, which is one of the important physical properties of a filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the microstructure realizations generated from our model.
Generally, filter properties for the CT data and generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulation and a statistical approach are used to estimate the representative domain size of our microstructure model. Precisely, the representative domain size with respect to the packing density, the pore size distribution, and the pressure drop are considered. It turns out that the statistical approach can be used to estimate the representative domain size for the given property more precisely and using less generated microstructures than the purely simulation based approach.
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method that way.

The new international capital standard for credit institutions (“Basel II”) allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of the capital charge. Therefore, it is necessary to develop a system that encompasses the main practices and methods existing in the context of credit rating. The aim of this thesis is to give a suggestion for setting up a credit rating system, where the main techniques used in practice are analyzed, presenting some alternatives and considering the problems that can arise from a statistical point of view. Finally, we set up some guidelines on how to accomplish the challenge of credit scoring. The judgement of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems natural, due to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that typically one-dimensional criteria will be required in the future as a measure of the creditworthiness or of the quality of a credit. The problem described above can be resolved via transformation of a multi-dimensional data set into a one-dimensional one, while keeping some monotonicity properties and also keeping the loss of information (due to the loss of dimensionality) at a minimum level.

This thesis investigates the constrained forms of the spherical Minimax location problem and the spherical Weber location problem. Specifically, we consider the problem of locating a new facility on the surface of the unit sphere, in the presence of convex spherical polygonal restricted regions and forbidden regions, such that either the maximum weighted distance from the new facility to the m existing facilities on the surface of the unit sphere is minimized, or the sum of the weighted distances from the new facility to the m existing facilities is minimized. It is assumed that a forbidden region is an area on the surface of the unit sphere where travel and facility location are not permitted, and that distance is measured using the great circle arc distance. We present a polynomial-time algorithm for the spherical Minimax location problem for the special case where all the existing facilities are located on the surface of a hemisphere. Further, we develop algorithms for the spherical Weber location problem using the barrier distance on a hemisphere as well as on the unit sphere.
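For illustration only, the two objective functions on the unit sphere can be written down directly in terms of the great circle arc distance (the restricted and forbidden regions, barriers and the actual algorithms of the thesis are not modelled here; the facilities and weights below are arbitrary):

```python
import numpy as np

def arc_distance(p, q):
    """Great circle arc distance between two points on the unit sphere."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def minimax_objective(x, facilities, weights):
    """Maximum weighted arc distance (spherical Minimax problem)."""
    return max(w * arc_distance(x, f) for f, w in zip(facilities, weights))

def weber_objective(x, facilities, weights):
    """Sum of weighted arc distances (spherical Weber problem)."""
    return sum(w * arc_distance(x, f) for f, w in zip(facilities, weights))

# three existing facilities (normalized to the unit sphere) and a candidate site
F = [np.array(v) / np.linalg.norm(v)
     for v in ([1.0, 0.1, 0.3], [0.2, 1.0, 0.4], [0.5, 0.5, 1.0])]
w = [1.0, 2.0, 1.5]
x = np.array([0.4, 0.5, 0.8]); x /= np.linalg.norm(x)
print(round(minimax_objective(x, F, w), 4), round(weber_objective(x, F, w), 4))
```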

In modern textile manufacturing, the task of the human eye of detecting disturbances in the production process which yield defective products is taken over by cameras. The camera images are analyzed with various methods to detect these disturbances automatically. There are, however, still problems, in particular with semi-regular textures, which are typical for weaving patterns. We study three parts of the problem of automatic texture analysis: image smoothing, texture synthesis and defect detection. In image smoothing, we develop a two-dimensional kernel smoothing method with locally and directionally adaptive bandwidths, allowing correlation in the errors. Two approaches are used in synthesising texture. The first is based on constructing a generalized Ising energy function in the Markov Random Field setup, and for the second, we use two-dimensional periodic bootstrap methods for semi-regular texture synthesis. We treat defect detection as a multihypothesis testing problem, with the null hypothesis representing the absence of defects and the other hypotheses representing various types of defects. We develop a test based on a nonparametric regression setup, and we use the bootstrap for approximating the distribution of our test statistic.

We discuss some first steps towards experimental design for neural network regression which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on one hand, and the common misspecification of the models on the other hand.
Regarding the first problem, we restrict our consideration to neural networks with only one and two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs, and present a numerical study using the concept of maximin optimal designs.
Regarding the second problem, we examine the effects of misspecification on optimal experimental designs.

The main two problems of continuous-time financial mathematics are option pricing and portfolio optimization. In this thesis, various new aspects of these major topics of financial mathematics will be discussed. In all our considerations we will assume the standard diffusion-type setting for security prices which is today well known under the term "Black-Scholes model". This setting and the basic results of option pricing and portfolio optimization are surveyed in the first chapter. The next three chapters deal with generalizations of the standard portfolio problem, also known as "Merton's problem". Here, we will always use the stochastic control approach as introduced in the seminal papers by Merton (1969, 1971, 1990). One such problem is the very realistic setting of an investor who is faced with fixed monetary streams. More precisely, in addition to maximizing the utility from final wealth via choosing an investment strategy, the investor also has to fulfill certain consumption needs. Also the opposite situation, an additional income stream, can now be taken into account in our portfolio optimization problem. We consider various examples and solve them on the one hand via classical stochastic control methods and on the other hand by our new separation theorem. This, together with some numerical examples, forms Chapter 2. Chapter 3 is mainly concerned with the portfolio problem when the investor has different lending and borrowing rates. We give explicit solutions (where possible) and numerical methods to calculate the optimal strategy in the cases of log utility and HARA utility for three different modelling approaches of the dependence of the borrowing rate on the fraction of wealth financed by a credit. The further generalization of the standard Merton problem in Chapter 4 consists in considering simultaneously the possibilities of continuous and discrete consumption. In our general approach there is the possibility of assigning different weights to the different consumption times, which is a generalization of the usual way of making them comparable via discounting. Chapter 5 deals with the special case of pricing basket options. Here, the main problem is not path-dependence but the multi-dimensionality, which makes it impossible to give useful analytical representations of the option price. We review the literature and compare six different numerical methods in a systematic way. Thereby we also look at the influence of various parameters such as strike, correlation, forwards or volatilities on the performance of the different numerical methods. The problem of pricing Asian options on average spot with average strike is the topic of Chapter 6. Here we apply the bivariate normal distribution to obtain an approximate option price. This method proves to be very reliable and efficient for the valuation of different variants of Asian options on average spot with average strike.
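For orientation, the classical closed-form solution of the standard Merton problem in the Black-Scholes setting with constant coefficients and power (CRRA) utility may be recalled: with drift \(\mu\), riskless rate \(r\), volatility \(\sigma\) and relative risk aversion \(\gamma\), the optimal fraction of wealth invested in the risky asset is constant over time,
\[ \pi^* = \frac{\mu - r}{\gamma\,\sigma^2}. \]
The generalizations treated in the thesis (consumption streams, different lending and borrowing rates, discrete consumption times) modify this baseline.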

In this text we survey some large deviation results for diffusion processes. The first chapters present results from the literature such as the Freidlin-Wentzell theorem for diffusions with small noise. We use these results to prove a new large deviation theorem about diffusion processes with strong drift. This is the main result of the thesis. In the later chapters we give another application of large deviation results, namely to determine the exponential decay rate for the Bayes risk when separating two different processes. The final chapter presents techniques which help to experiment with rare events for diffusion processes by means of computer simulations.

The nowadays increasing number of fields where large quantities of data are collected generates an emerging demand for methods for extracting relevant information from huge databases. Amongst the various existing data mining models, decision trees are widely used since they represent a good trade-off between accuracy and interpretability. However, one of their main problems is that they are very unstable, which complicates the process of knowledge discovery because users are disturbed by the different decision trees generated from almost the same input learning samples. In the current work, binary tree classifiers are analyzed and partially improved. The analysis of tree classifiers ranges from their topology, from the graph theory point of view, to the creation of a new tree classification model that combines decision trees and soft comparison operators (Mlynski, 2003), with the purpose not only of overcoming the well-known instability problem of decision trees, but also of conferring the ability to deal with uncertainty. In order to study and compare the structural stability of tree classifiers, we propose an instability coefficient which is based on the notion of Lipschitz continuity, and offer a metric to measure the proximity between decision trees. This thesis converges towards its main part with the presentation of our model ``Soft Operators Decision Tree'' (SODT). Mainly, we describe its construction, its application and the consistency of the mathematical formulation behind it. Finally we show the results of the implementation of SODT and compare numerically the stability and accuracy of a SODT and a crisp DT. The numerical simulations support the stability hypothesis, and a smaller tendency to overfit the training data is observed with SODT than with a crisp DT. A further aspect of the inclusion of soft operators is that we choose them in such a way that the resulting goodness function (used by this method) is differentiable and thus allows the best split points to be calculated by means of gradient descent methods. The main drawback of SODT is the incorporation of the imprecision factor, which increases the complexity of the algorithm.

In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ^3D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as the Brauer construction.

Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life: they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of lithium-ion batteries as porous structures. We mainly study two aspects: heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as to observe the individual heat sources and assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which thus allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches: small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles, but with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material depending on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress known from physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities where we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method. The nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
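To illustrate the time discretization and linearization strategy in the simplest possible setting (a scalar nonlinear ODE with an assumed right-hand side, far removed from the coupled PDE systems of the thesis), one Backward Euler step solved with Newton's method looks as follows:

```python
def f(u):
    """Assumed nonlinear right-hand side of du/dt = f(u)."""
    return -u**3 + 1.0

def df(u):
    return -3.0 * u**2

def backward_euler_step(u_old, dt, tol=1e-12, max_iter=50):
    """Solve the implicit equation u - u_old - dt * f(u) = 0 by Newton's method."""
    u = u_old                                  # initial guess: previous time step
    for _ in range(max_iter):
        residual = u - u_old - dt * f(u)
        if abs(residual) < tol:
            break
        u -= residual / (1.0 - dt * df(u))     # Newton update
    return u

u, dt = 2.0, 0.1
for _ in range(50):
    u = backward_euler_step(u, dt)
print(round(u, 6))   # approaches the steady state f(u) = 0, i.e. u = 1
```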

Standard bases are one of the main tools in computational commutative algebra. In 1965 Buchberger presented a criterion for such bases and thus was able to introduce a first approach for their computation. Since the basic version of this algorithm is rather inefficient, as it processes lots of useless data during its execution, active research on improvements of this kind of algorithm is quite important.
In this thesis we introduce the reader to the area of computational commutative algebra with a focus on so-called signature-based standard basis algorithms. We not only present the basic version of Buchberger's algorithm, but give an extensive discussion of different attempts at optimizing standard basis computations, from several sorting algorithms for internal data up to different reduction processes. Afterwards the reader gets a complete introduction to the origin of signature-based algorithms in general, explaining the underlying ideas in detail. Furthermore, we give an extensive discussion in terms of correctness, termination, and efficiency, presenting various different variants of signature-based standard basis algorithms.
Whereas Buchberger and others found criteria to discard useless computations which are completely based on the polynomial structure of the elements considered, Faugère presented a first signature-based algorithm in 2002, the F5 Algorithm. This algorithm is famous for generating much less computational overhead during its execution. Within this thesis we not only present Faugère's ideas, we also generalize them and end up with several different, optimized variants of his criteria for detecting redundant data.
Not being completely focused on theory, we also present information about practical aspects, comparing the performance of various implementations of these algorithms in the computer algebra system Singular over a wide range of example sets.
In the end we give a rather extensive overview of recent research in this area of computational commutative algebra.
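As a minimal, purely illustrative companion to the discussion above (a textbook example computed with sympy under the lexicographic order; unrelated to the Singular implementations benchmarked in the thesis), a Gröbner basis can be checked against Buchberger's criterion, which states that every S-polynomial of basis elements must reduce to zero modulo the basis:

```python
from sympy import symbols, groebner, reduced, LT, lcm, expand

x, y = symbols('x y')
F = [x**3 - 2*x*y, x**2*y - 2*y**2 + x]       # textbook example

G = groebner(F, x, y, order='lex')            # Groebner basis, lex order x > y
basis = list(G.exprs)

def s_polynomial(f, g):
    """S-polynomial of f and g: cancel their leading terms."""
    l = lcm(LT(f, x, y), LT(g, x, y))
    return expand(l / LT(f, x, y) * f - l / LT(g, x, y) * g)

# Buchberger's criterion: every S-polynomial reduces to zero modulo the basis
for i in range(len(basis)):
    for j in range(i + 1, len(basis)):
        _, remainder = reduced(s_polynomial(basis[i], basis[j]), basis, x, y)
        assert remainder == 0
print(basis)
```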

By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factor rings \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular, which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. We prove in this thesis that the Stanley Conjecture holds in some special cases. Part D explores the General Neron Desingularization in the frame of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.

Semiparametric estimation of conditional quantiles for time series, with applications in finance
(2003)

The estimation of conditional quantiles has become an increasingly important issue in insurance and financial risk management. The stylized facts of financial time series data have rendered direct applications of extreme value theory methodologies to the estimation of extreme conditional quantiles inappropriate. On the other hand, quantile-regression-based procedures work well in the nonextreme part of a given data set but break down at extreme probability levels. In order to solve this problem, we combine nonparametric regression for time series and extreme value theory approaches in the estimation of extreme conditional quantiles for financial time series. To do so, a class of time series models that is similar to nonparametric AR-(G)ARCH models, but which does not depend on distributional and moment assumptions, is introduced. We discuss estimation procedures for the nonextreme levels using these models and consider the estimates obtained by inverting conditional distribution estimators and by direct estimation using a kernel-weighted version of the Koenker-Bassett (1978) approach. Under some regularity conditions, the asymptotic normality and uniform convergence, with rates, of the conditional quantile estimator for strongly mixing time series are established. We study the estimation of the scale function in the introduced models using similar procedures and show that, under some regularity conditions, the scale estimate is weakly consistent and asymptotically normal. The application of the introduced models to the estimation of extreme conditional quantiles is achieved by augmenting them with methods from extreme value theory. It is shown that the overall extreme conditional quantile estimator is consistent. A Monte Carlo study is carried out to illustrate the good performance of the estimates, and real data are used to demonstrate the estimation of Value-at-Risk and conditional expected shortfall in financial risk management; their multiperiod predictions are also discussed.
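For concreteness, a hedged sketch of the first estimator mentioned above, inverting a kernel estimate of the conditional distribution function, might look as follows; the Gaussian kernel, the bandwidth, and the toy AR(1) data are illustrative assumptions rather than the choices of the thesis.

```python
# Sketch: conditional tau-quantile of Y_t given X_t = x0, obtained by
# inverting a Nadaraya-Watson estimate of the conditional distribution.
import numpy as np

def cond_quantile(x, y, x0, tau, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)     # Gaussian kernel weights
    w = w / w.sum()
    order = np.argsort(y)
    cdf = np.cumsum(w[order])                  # estimated F(y | x0)
    idx = np.searchsorted(cdf, tau)            # smallest y with F >= tau
    return y[order][min(idx, len(y) - 1)]

# usage on toy AR(1) data: quantile of Y_t given the lagged value
rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.6 * y[t - 1] + eps[t]
print(cond_quantile(y[:-1], y[1:], x0=0.5, tau=0.95, h=0.3))
```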

In this work we focus on regression models with asymmetric error distributions, more precisely with extreme value error distributions. This thesis arises in the framework of the project "Robust Risk Estimation". Starting from July 2011, this project won three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in Financial Mathematics (operational and liquidity risk), Medicine (length of stay and cost), and Hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics.
Within the project, each of these applications raises issues which can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. To this end, this thesis makes a contribution.
This work consists of three main parts. The first part focuses on the basic notions and gives an overview of the existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, which are an important achievement of our project work.
The second part is the most important part of the thesis; it presents a deeper analysis of the basic models and tools used to achieve the main results and contains our personal contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The applications mentioned above use a time structure (e.g., hydrology), therefore we provide extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies. In the first one, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second one, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models. More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking-neighborhood approach we need some smoothness; therefore, for the second strategy, we derive smoothness of the generalized linear model in terms of L2 differentiability and establish sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence into these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher-dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions.
Finally, we create an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and highlight points of particular importance. In the third part of the thesis we discuss three applications, operational risk, hospitalization times and hydrological river discharge data, apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.
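To give a flavour of the robustified filtering step mentioned above, the following sketch clips the Kalman correction term when it exceeds a norm bound, a standard way to guard the recursion against additive outliers. It is a Python illustration of this general idea, not a reproduction of the thesis' R implementation or of its exact procedure; the bound b and all matrices are placeholders.

```python
# Sketch of a robustified Kalman correction step: the classical correction
# K*(y - C x) is clipped to a norm bound b before it is applied.
import numpy as np

def robust_kalman_correction(x_pred, P_pred, y, C, R, b):
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # classical Kalman gain
    dx = K @ (y - C @ x_pred)                 # classical correction
    norm = np.linalg.norm(dx)
    if norm > b:                              # Huber-type clipping
        dx = dx * (b / norm)
    x_corr = x_pred + dx
    P_corr = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x_corr, P_corr
```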

An autoregressive ARCH model with possible exogenous variables is treated. We estimate the conditional volatility of the model by applying feedforward networks to the residuals and prove consistency and asymptotic normality of the estimates under growth conditions on the feedforward network complexity. Recurrent neural network estimates of GARCH and Value-at-Risk are studied. We prove consistency and asymptotic normality for the recurrent neural network ARMA estimator under growth conditions on the recurrent network complexity. We also overcome the estimation problem in stochastic variance models in discrete time by means of feedforward networks and the introduction of new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and Value-at-Risk. We tested these distributions together with other new distributions on the GARCH family of models against distributions common on the financial market, such as the Normal Inverse Gaussian, normal, and Student's t distributions. As an application of the models, some German stocks are studied and the different approaches are compared with each other and with the most common method, a GARCH(1,1) fit.
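A hedged sketch of the feedforward-network volatility estimation described above: the conditional variance is approximated by regressing squared residuals on lagged observations. The network architecture, the ARCH(1) toy data, and all parameters are illustrative assumptions.

```python
# Sketch: estimate conditional volatility with a feedforward network by
# regressing squared residuals on the lagged observation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
y = np.zeros(n)
for t in range(1, n):                          # toy ARCH(1) data
    sigma2 = 0.1 + 0.5 * y[t - 1] ** 2
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

X = y[:-1].reshape(-1, 1)                      # lagged value as regressor
target = y[1:] ** 2                            # squared residuals as variance proxy
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, target)
sigma2_hat = np.clip(net.predict(X), 1e-8, None)  # fitted conditional variance
```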

In this thesis the combinatorial framework of toric geometry is extended to equivariant sheaves over toric varieties. The central questions are how to extract combinatorial information from the so developed description and whether equivariant sheaves can, like toric varieties, be considered as purely combinatorial objects. The thesis consists of three main parts. In the first part, by systematically extending the framework of toric geometry, a formalism is developed for describing equivariant sheaves by certain configurations of vector spaces. In the second part, homological properties of a certain class of equivariant sheaves are investigated, namely that of reflexive equivariant sheaves. Several kinds of resolutions for these sheaves are constructed which depend only on the configuration of their associated vector spaces. Thus a partially positive answer to the question of combinatorial representability is given. As a particular result, a new way of computing minimal resolutions of Z^n-graded modules over polynomial rings is obtained. In the third part a complete classification of the simplest nontrivial sheaves, equivariant vector bundles of rank two over smooth toric surfaces, is given. A combinatorial characterization is given and parameter spaces (moduli spaces) are constructed which depend only on this characterization. In the appendices an outlook on equivariant sheaves and the relation of Chern classes to their combinatorial classification is given, focusing particularly on the case of the projective plane. A classification of equivariant vector bundles of rank three over the projective plane is given.

In this thesis, we investigate several upcoming issues occurring in the context of conceiving and building a decision support system. We elaborate new algorithms for computing representative systems with special quality guarantees, provide concepts for supporting the decision makers after a representative system was computed, and consider a methodology of combining two optimization problems.
We review the original Box-Algorithm for two objectives by Hamacher et al. (2007) and discuss several extensions regarding coverage, uniformity, the enumeration of the whole nondominated set, and necessary modifications if the underlying scalarization problem cannot be solved to optimality. In a next step, the original Box-Algorithm is extended to the case of three objective functions to compute a representative system with desired coverage error. Besides the investigation of several theoretical properties, we prove the correctness of the algorithm, derive a bound on the number of iterations needed by the algorithm to meet the desired coverage error, and propose some ideas for possible extensions.
Furthermore, we investigate the problem of selecting a subset with desired cardinality from the computed representative system, the Hypervolume Subset Selection Problem (HSSP). We provide two new formulations for the bicriteria HSSP, a linear programming formulation and a \(k\)-link shortest path formulation. For the latter formulation, we propose an algorithm for which we obtain the currently best known complexity bound for solving the bicriteria HSSP. For the tricriteria HSSP, we propose an integer programming formulation with a corresponding branch-and-bound scheme.
Moreover, we address the issue of how to present the whole set of computed representative points to the decision makers. Based on common illustration methods, we elaborate an algorithm guiding the decision makers in choosing their preferred solution.
Finally, we step back and look from a meta-level on the issue of how to combine two given optimization problems and how the resulting combinations can be related to each other. We come up with several different combined formulations and give some ideas for the practical approach.
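To make the hypervolume notion behind the HSSP concrete, the following hedged sketch computes the exact hypervolume of a bicriteria nondominated set (minimization) and selects a subset greedily; the greedy rule is only for illustration, whereas the thesis treats the HSSP exactly via linear programming, a \(k\)-link shortest path formulation, and branch-and-bound.

```python
# Sketch: exact 2D hypervolume (minimization) and a greedy k-point selection.
def hypervolume_2d(points, ref):
    """points: nondominated (minimization); ref is dominated by all points."""
    pts = sorted(points)                       # increasing in first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # add the next rectangular slab
        prev_f2 = f2
    return hv

def greedy_subset(points, k, ref):
    chosen, rest = [], list(points)
    for _ in range(k):
        best = max(rest, key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
        rest.remove(best)
    return chosen

front = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.5), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 11.0 for this toy front
print(greedy_subset(front, k=2, ref=(5.0, 5.0)))
```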

In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.

For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved thus far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials, and their characterization in the interior and the exterior space of the Earth. We also present the \(L^2\)-decomposition for regions in \(\mathbb{R}^3\) in terms of distributions, as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth. Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation involving functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of the spherical Earth this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression for the kernel; however, at this point it remains unknown to us. Thus, in Chapter 7, we are led to consider certain discretization methods for integrals over regions in \(\mathbb{R}^3\), in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for the approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.
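For orientation, the Newton integral referred to above is the classical volume potential: assuming a mass density \(F\) on the interior \(\Sigma_{\mathrm{int}}\) of the regular surface (the notation here is chosen only for illustration), it reads

\[
V(x) \;=\; \int_{\Sigma_{\mathrm{int}}} \frac{F(y)}{|x-y|}\, \mathrm{d}y ,
\]

which is harmonic outside \(\Sigma_{\mathrm{int}}\) and continuous on the surface; the reproducing kernel of the solution space is built from integrals of this type.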

This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP package, which is available separately. As another result, found in cooperation with U. Dempwolff, the projective planes defined by planar monomials are classified. Furthermore, the full automorphism groups of the non-translation planes defined by planar monomials are classified.

The main topic of this thesis is to define and analyze a multilevel Monte Carlo algorithm for path-dependent functionals of the solution of a stochastic differential equation (SDE) which is driven by a square integrable, \(d_X\)-dimensional Lévy process \(X\). We work with standard Lipschitz assumptions and denote by \(Y=(Y_t)_{t\in[0,1]}\) the \(d_Y\)-dimensional strong solution of the SDE.
We investigate the computation of expectations \(S(f) = \mathrm{E}[f(Y)]\) using randomized algorithms \(\widehat S\). Thereby, we are interested in the relation of the error and the computational cost of \(\widehat S\), where \(f:D[0,1] \to \mathbb{R}\) ranges in the class \(F\) of measurable functionals on the space of càdlàg functions on \([0,1]\), that are Lipschitz continuous with respect to the supremum norm.
We consider as error \(e(\widehat S)\) the worst case of the root mean square error over the class of functionals \(F\). The computational cost of an algorithm \(\widehat S\), denoted \(\mathrm{cost}(\widehat S)\), should represent the runtime of the algorithm on a computer. We work in the real number model of computation and further suppose that evaluations of \(f\) are possible for piecewise constant functions in time units according to its number of breakpoints.
We state strong error estimates for an approximate Euler scheme on a random time discretization. With these strong error estimates, the multilevel algorithm leads to upper bounds for the convergence order of the error with respect to the computational cost. The main results can be summarized in terms of the Blumenthal-Getoor index of the driving Lévy process, denoted by \(\beta\in[0,2]\). For \(\beta <1\) and no Brownian component present, we almost reach convergence order \(1/2\), which means that there exists a sequence of multilevel algorithms \((\widehat S_n)_{n\in \mathbb{N}}\) with \(\mathrm{cost}(\widehat S_n) \leq n\) such that \( e(\widehat S_n) \precsim n^{-1/2}\). Here, by \( \precsim\), we denote a weak asymptotic upper bound, i.e., the inequality holds up to an unspecified positive constant. If \(X\) has a Brownian component, the order has an additional logarithmic term, in which case we reach \( e(\widehat S_n) \precsim n^{-1/2} \, (\log(n))^{3/2}\).
For the special subclass where \(Y\) is the Lévy process itself, we also provide a lower bound, which, up to a logarithmic term, recovers the order \(1/2\); i.e., neglecting logarithmic terms, the multilevel algorithm is order optimal for \( \beta <1\).
An empirical error analysis via numerical experiments matches the theoretical results and completes the analysis.
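A hedged sketch of the multilevel construction analyzed above, reduced to its simplest form: a Brownian-driven SDE with a plain Euler scheme on nested uniform grids, i.e. without the jump part and the random, Lévy-adapted time discretization that the thesis actually uses. The drift, diffusion, payoff, and sample sizes are illustrative assumptions. The point of the telescoping sum is that coarse levels are cheap while fine levels need only few samples, because the coupled differences \(f(Y^l)-f(Y^{l-1})\) have small variance.

```python
# Sketch of a multilevel Monte Carlo estimator for E[f(Y)] with an Euler
# scheme on nested grids (Brownian-driven toy SDE only).
import numpy as np
rng = np.random.default_rng(0)

def euler_path(n_steps, a, b, y0=1.0, dW=None):
    dt = 1.0 / n_steps
    if dW is None:
        dW = rng.standard_normal(n_steps) * np.sqrt(dt)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(n_steps):
        y[k + 1] = y[k] + a(y[k]) * dt + b(y[k]) * dW[k]
    return y, dW

def mlmc(f, a, b, L, N):
    """Telescoping sum over levels l with 2^l steps and N[l] samples each."""
    est = 0.0
    for l in range(L + 1):
        n_fine, acc = 2 ** l, 0.0
        for _ in range(N[l]):
            y_f, dW = euler_path(n_fine, a, b)
            if l == 0:
                acc += f(y_f)
            else:
                dW_c = dW.reshape(-1, 2).sum(axis=1)   # coarsen the increments
                y_c, _ = euler_path(n_fine // 2, a, b, dW=dW_c)
                acc += f(y_f) - f(y_c)                 # coupled level difference
        est += acc / N[l]
    return est

a = lambda y: 0.05 * y
b = lambda y: 0.2 * y
payoff = lambda path: np.maximum(path, 0.0).max()      # a path-dependent Lipschitz f
print(mlmc(payoff, a, b, L=5, N=[4000 // 2 ** l + 10 for l in range(6)]))
```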

We introduce and investigate a product pricing model in social networks where the value a possible buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are altered additively. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm checking the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influence we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. The algorithm FixHighest improves the run-time on these graphs by exploiting this property.
Furthermore, we introduce variations on this basic model. The version with delayed propagation of influences and delayed awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, thereby raising the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterwards, the principal problem is computing the revenue that a given price can generate within some time horizon. In general, the problem is #P-hard and the algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
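As an illustration of the polynomial-time revenue computation mentioned above, the following hedged sketch evaluates one fixed price under a simple additive, synchronous-round propagation rule; the exact update rule and tie handling of the thesis may differ in detail.

```python
# Sketch: revenue of one fixed price p. Buyers purchase in synchronous rounds
# once their current value reaches p; a purchase additively shifts the values
# of the out-neighbours.
def revenue(initial_values, influence, p):
    """influence[i] = list of (j, delta): buying by i adds delta to j's value."""
    value = dict(initial_values)
    bought = set()
    total = 0.0
    while True:
        new_buyers = [i for i, v in value.items()
                      if i not in bought and v >= p]
        if not new_buyers:
            return total
        for i in new_buyers:                     # one synchronous round
            bought.add(i)
            total += p
            for j, delta in influence.get(i, []):
                value[j] += delta                # additive influence on j

vals = {1: 5.0, 2: 3.0, 3: 1.0}
infl = {1: [(2, 2.0), (3, 1.0)], 2: [(3, 3.0)]}
print(revenue(vals, infl, p=4.0))   # buyer 1 buys first, then 2, then 3
```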

The dissertation "Portfoliooptimierung im Binomialmodell" (portfolio optimization in the binomial model) addresses the question to what extent the problem of optimal portfolio selection can be solved in the binomial model, and to what extent the results carry over to the continuous model. Besides the classical model without costs and without changes in the market situation, model extensions are also investigated.

We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one and two-factor Vasicek models for the instantaneous
short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected shortfall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus respectively. In particular we solve in the incomplete market settings the dual problem explicitly. We compare the
optimal terminal wealth in the cases mentioned using numerical examples. Without
risk constraints, we further compare the investment strategies for complete
and incomplete market numerically.

One crucial assumption of continuous-time financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, neither holds: continuous rebalancing is impossible, and each transaction causes costs which have to be subtracted from the wealth. Therefore, we focus on trading strategies which are based on discrete rebalancing, at random or equidistant times, and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.

This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska
transaction costs.
For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula instead of the more restrictive, traditional approach via the use of quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively. In addition to the analytical results, we study a novel application of the menu cost problem in management science. We consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much
to invest into marketing.
For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach leading-order optimal strategies are constructed
and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established.

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of call and put options on the stock and/or the zero coupon bond issued by a firm with default risk. We use the martingale approach for dynamic optimisation problems. Our findings show that the riskier the option gets, the smaller the proportion of his wealth the investor allocates to the risky asset. Further, we analyse the Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign for building the term structure of the sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies based on historical experience. Our results highlight the market price of the sovereign credit risk depending on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, by transforming the related Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. The simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.
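As a back-of-the-envelope companion to the PD bootstrapping described above, the widely used "credit triangle" relates a flat CDS spread \(s\) and a recovery rate \(R\) to a constant default intensity \(\lambda \approx s/(1-R)\). The sketch below uses this simplification only for illustration; the thesis bootstraps a full term structure of risk-neutral PDs from the quoted CDS curve.

```python
# Sketch: flat-hazard ("credit triangle") approximation of the cumulative
# risk-neutral probability of default implied by a CDS spread.
import numpy as np

def implied_pd(spread_bps, recovery, horizon_years):
    lam = (spread_bps / 1e4) / (1.0 - recovery)     # flat hazard rate
    return 1.0 - np.exp(-lam * horizon_years)       # cumulative PD up to T

print(implied_pd(spread_bps=300, recovery=0.4, horizon_years=5))  # approx 0.22
```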

This thesis is devoted to applying symbolic methods to the problems of decoding linear codes and of algebraic cryptanalysis. The paradigm we employ here is as follows. We reformulate the initial problem in terms of systems of polynomial equations over a finite field. The solution(s) of such systems should yield a way to solve the initial problem. Our main tools for handling polynomials and polynomial systems in such a paradigm are the technique of Gröbner bases and normal form reductions. The first part of the thesis is devoted to formulating and solving specific polynomial systems that reduce the problem of decoding linear codes to the problem of polynomial system solving. We analyze the existing methods (mainly for cyclic codes) and propose an original method for arbitrary linear codes that in some sense generalizes the Newton identities method widely known for cyclic codes. We investigate the structure of the underlying ideals and show how one can solve the decoding problem - both the so-called bounded decoding and the more general nearest codeword decoding - by finding reduced Gröbner bases of these ideals. The main feature of the method is that, unlike usual methods based on Gröbner bases for "finite field" situations, we do not add the so-called field equations. This tremendously simplifies the underlying ideals, thus making it feasible to work with quite large code parameters. Further we address complexity issues by giving some insight into the Macaulay matrix of the underlying systems. By making a series of assumptions we are able to provide an upper bound for the complexity coefficient of our method. We also address finding the minimum distance and the weight distribution. We provide solid experimental material and comparisons with some of the existing methods in this area. In the second part we deal with the algebraic cryptanalysis of block iterative ciphers. Namely, we analyze small-scale variants of the Advanced Encryption Standard (AES), which is a widely used modern block cipher. Here a cryptanalyst composes polynomial systems whose solutions should yield the secret key used by the communicating parties in a symmetric cryptosystem. We analyze the systems formulated by researchers for algebraic cryptanalysis and identify the problem that conventional systems have many auxiliary variables that are not actually needed for the key recovery. Moreover, having many such auxiliary variables, specific to a given plaintext/ciphertext pair, complicates the use of several pairs, which is common in cryptanalysis. We thus provide a new system where the auxiliary variables are eliminated via normal form reductions. The resulting system in key variables only is then solved. We present experimental evidence that such an approach works quite well for small-scale ciphers. We investigate our approach further and employ the so-called meet-in-the-middle principle to see how far one can go in analyzing just 2-3 rounds of scaled ciphers. Additional "tuning techniques" are discussed together with experimental material. Overall, we believe that the material of this part of the thesis takes a step forward in the algebraic cryptanalysis of block ciphers.

Pedestrian Flow Models
(2014)

There have been many crowd disasters caused by poor planning of events. Pedestrian models are useful for analysing the behavior of pedestrians in advance of the events so that no pedestrians are harmed during the event. This thesis deals with pedestrian flow models on the microscopic, hydrodynamic, and scalar scales. Following the approach of Hughes, who describes the crowd as a thinking fluid, we use the solution of the Eikonal equation to compute the optimal path for pedestrians. We start with the microscopic model for pedestrian flow and then derive the hydrodynamic and scalar models from it. We use particle methods to solve the governing equations. Moreover, we have coupled a mesh-free particle method to the fixed grid used for solving the Eikonal equation. We consider an example with a large number of pedestrians to investigate our models for different settings of obstacles and for different parameters. We also consider the pedestrian flow in a straight corridor and through a T-junction and compare our numerical results with experiments. A part of this work is devoted to finding a mesh-free method to solve the Eikonal equation. Most of the available methods to solve the Eikonal equation are restricted to either Cartesian or triangulated grids. In this context, we propose a mesh-free method to solve the Eikonal equation, which is applicable to arbitrary grids and useful for complex geometries.
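For reference, a standard fixed-grid baseline for the Eikonal equation \(|\nabla T| = 1/F\) is the fast sweeping method sketched below (uniform Cartesian grid, single source cell). It is offered only as an illustration of what the proposed mesh-free solver replaces, not as the method developed in the thesis.

```python
# Sketch: fast sweeping for |grad T| = 1/F on a uniform Cartesian grid.
import numpy as np

def fast_sweep(speed, h, source, n_sweeps=8):
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[source] = 0.0
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:                   # alternate sweep directions
            for i in ys:
                for j in xs:
                    if (i, j) == source:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    f = h / speed[i, j]
                    if abs(a - b) >= f:         # one-sided local update
                        t_new = min(a, b) + f
                    else:                       # two-sided (quadratic) update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

T = fast_sweep(np.ones((50, 50)), h=1.0, source=(25, 25))
```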

In the automotive industry, the verification of component reliability must be based on statistical methods, since component strength and customer usage loads are subject to scatter. The testing procedures used so far frequently lead to wrong release decisions, which can mean unnecessary design changes and thus high costs. In this thesis, the approach of partial success-run counting ("partielle Durchläuferzählung") is developed, which increases the statistical quality of the existing test procedures (success runs).

This thesis introduces so-called cone scalarising functions. They are by construction compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points, and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions, and continuity are then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.

The desire to model geometrical and physical features in ever increasing detail has led to a steady increase in the number of points used in field solvers. While many solvers have been ported to parallel machines, grid generators have been left behind. Sequential generation of meshes of large size is extremely problematic both in terms of time and memory requirements. Therefore, the need for developing parallel mesh generation techniques is well justified. In this work a novel algorithm is presented for the automatic parallel generation of tetrahedral computational meshes based on geometrical domain decomposition. It has the potential to remove this bottleneck. Different domain decomposition approaches and criteria have been investigated. Questions regarding time and memory consumption, efficiency of computations, and quality of the generated surface and volume meshes have been considered. As a result of this work, the parTgen (partitioner and parallel tetrahedral mesh generator) software package based on the developed algorithm has been created. Several real-life examples of relatively complex structures involving large meshes (of the order of 10^7-10^8 elements) are given. It has been shown that high mesh quality is achieved, memory and time consumption are reduced significantly, and the parallel algorithm is efficient.

This thesis is concerned with methods for the classification of ovoids in quadratic spaces. The algorithms developed for this purpose are applied mainly in eight-dimensional spaces, in particular over the fields GF(7), GF(8) and GF(9). For various, mostly small, cyclic groups, the ovoids invariant under these groups are determined. The ovoids arising in this search are all already known. However, restrictions on the stabilizers of possibly existing, unknown ovoids are obtained.

The application behind the subject of this thesis is multiscale simulation of highly heterogeneous particle-reinforced composites with large jumps in their material coefficients. Such simulations are used, e.g., for the prediction of elastic properties. As the underlying microstructures have very complex geometries, a discretization by means of finite elements typically involves very finely resolved meshes. The latter results in discretized linear systems with more than \(10^8\) unknowns which need to be solved efficiently. However, the variation of the material coefficients even on very small scales causes most available methods to fail when solving the arising linear systems. While robust domain decomposition methods have been developed for scalar elliptic problems of multiscale character, their extension and application to 3D elasticity problems still needs to be established.
The focus of the thesis lies in the development and analysis of robust overlapping domain decomposition methods for multiscale problems in linear elasticity. The method combines corrections on local subdomains with a global correction on a coarser grid. As the robustness of the overall method is mainly determined by how well small scale features of the solution can be captured on the coarser grid levels, robust multiscale coarsening strategies need to be developed which properly transfer information between fine and coarse grids.
We carry out a detailed and novel analysis of two-level overlapping domain decomposition methods for the elasticity problems. The study also provides a concept for the construction of multiscale coarsening strategies to robustly solve the discretized linear systems, i.e. with iteration numbers independent of variations in the Young's modulus and the Poisson ratio of the underlying composite. The theory also captures anisotropic elasticity problems and allows applications to multi-phase elastic materials with non-isotropic constituents in two and three spatial dimensions.
Moreover, we develop and construct new multiscale coarsening strategies and show why they should be preferred over standard ones on several model problems. In a parallel implementation (MPI) of the developed methods, we present applications to real composites and robustly solve discretized systems of more than \(200\) million unknowns.
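The algebraic skeleton of the two-level overlapping method analyzed above can be sketched as follows: the preconditioner applies local subdomain solves plus one coarse solve, \(M^{-1}r = R_0^T A_0^{-1} R_0 r + \sum_i R_i^T A_i^{-1} R_i r\). In the sketch below the toy 1D operator, the subdomains, and the piecewise-constant coarse space are illustrative placeholders; constructing robust multiscale coarse spaces for heterogeneous elasticity is precisely the contribution of the thesis and is not reproduced here.

```python
# Sketch: one application of a two-level additive Schwarz preconditioner.
import numpy as np

def two_level_as_apply(A, r, subdomain_dofs, R0):
    """Return M^{-1} r = R0^T A0^{-1} R0 r + sum_i Ri^T Ai^{-1} Ri r."""
    z = np.zeros_like(r)
    for dofs in subdomain_dofs:                     # local subdomain corrections
        Ai = A[np.ix_(dofs, dofs)]
        z[dofs] += np.linalg.solve(Ai, r[dofs])
    A0 = R0 @ A @ R0.T                              # Galerkin coarse operator
    z += R0.T @ np.linalg.solve(A0, R0 @ r)         # coarse-grid correction
    return z

# toy usage: 1D Laplacian, two overlapping subdomains, piecewise-constant coarse space
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
subs = [list(range(0, 12)), list(range(8, 20))]
R0 = np.zeros((2, n)); R0[0, :10] = 1.0; R0[1, 10:] = 1.0
z = two_level_as_apply(A, r, subs, R0)
```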

This thesis deals with the solution of special problems arising in financial engineering or financial mathematics. The main focus lies on commodity indices. Chapter 1 addresses the issue, important for financial engineering practice, of developing well-suited models for certain assets (here: commodity indices). A descriptive analysis of the Dow Jones-UBS commodity index compared to the Standard & Poor 500 stock index provides us with first insights into some features of the corresponding distributions. Statistical tests of normality and mean reversion then help us in setting up a model for commodity indices. Additionally, chapter 1 encompasses a thorough introduction to commodity investment, the history of commodities trading, and the most important derivatives, namely futures and European options on futures. Chapter 2 proposes a model for commodity indices and derives fair prices for the most important derivatives in the commodity markets. It is a Heston model supplemented with a stochastic convenience yield. The Heston model belongs to the model class of stochastic volatility models and is currently widely used in stock markets. For the application in the commodity markets the stochastic convenience yield is included in the drift of the instantaneous spot return process. Motivated by the results of chapter 1 it seems reasonable to model the convenience yield by a mean-reverting Ornstein-Uhlenbeck process. Since trading desks only apply and consider models with closed-form solutions for options, I derive such formulas for commodity futures by solving the corresponding partial differential equation. Additionally, semi-closed-form formulas for European options on futures are determined. The Cauchy problem with respect to these options is more challenging than the first one; a solution can be provided. Unlike equities, which typically entitle the holder to a continuing stake in a corporation, commodity futures contracts normally specify a certain date for the delivery of the underlying physical commodity. In order to avoid the delivery process and maintain a futures position, nearby contracts must be sold and contracts that have not yet reached the delivery period must be purchased (so-called rolling). Optimal trading days for selling and buying futures are determined by applying statistical tests for stochastic dominance. Besides the optimization of the rolling procedure for commodity futures, we dedicate ourselves in chapter 3 to the optimization of the weightings of the commodity futures that make up the index. To this end, I apply the Markowitz approach, or mean-variance optimization. Mean-variance optimization penalizes up-side and down-side risk equally, whereas most investors do not mind up-side risk. To overcome this, I consider in the next step other risk measures, namely Value-at-Risk and Conditional Value-at-Risk. The Conditional Value-at-Risk is generalized to discontinuous cumulative distribution functions of the loss. For continuous loss distributions, the Conditional Value-at-Risk at a given confidence level is defined as the expected loss exceeding the Value-at-Risk. Loss distributions associated with finite sampling or scenario modeling are, however, discontinuous. Various risk measures involving discontinuous loss distributions are introduced and compared. I then apply the theoretical results to the field of portfolio optimization with commodity indices. Furthermore, I uncover graphically the behavior of these risk measures.
For this purpose, I consider the risk measures as functions of the confidence level. Based on a special discrete loss distribution, the graphs demonstrate the different properties of these risk measures. The goal of the first section of chapter 4 is to apply the mathematical concept of excursions to the creation of optimal highly automated or algorithmic trading strategies. The idea is to consider the gain of the strategy and the excursion time it takes to realize the gain. In this section I calculate formulas for the Ornstein-Uhlenbeck process. I show that the corresponding formulas can be evaluated quite fast, since the only special function appearing in them is the so-called imaginary error function. This function is already implemented in many programs, such as Maple. My main contribution to this topic is the optimization of the trading strategy for Ornstein-Uhlenbeck processes via the Banach fixed-point theorem. The second section of chapter 4 deals with statistical arbitrage strategies, a long-horizon trading opportunity that generates a riskless profit. The results of this section provide an investor with a tool to investigate empirically whether some strategies (for example momentum strategies) constitute statistical arbitrage opportunities or not.
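For the empirical risk measures discussed above, a hedged sketch for a discrete sample of losses is given below: Value-at-Risk as an empirical quantile and Conditional Value-at-Risk via the Rockafellar-Uryasev representation, which remains meaningful for discontinuous (empirical) loss distributions. The sample data are illustrative.

```python
# Sketch: empirical VaR and CVaR at confidence level alpha for sampled losses.
import numpy as np

def var_cvar(losses, alpha):
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)
    # CVaR_alpha = VaR + E[(L - VaR)^+] / (1 - alpha)   (Rockafellar-Uryasev)
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
    return var, cvar

losses = np.array([-2.0, -1.0, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0, 8.0, 12.0])
print(var_cvar(losses, alpha=0.9))
```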

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientation, and then a single objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and its intensity profiles less important. This thesis takes advantage of this property to accelerate the optimization process through an approximation of the intensity profiles that are used for multiple selections of beam orientations, saving a considerable amount of calculation time. A dynamic algorithm (DA) and evolutionary algorithm (EA), for beam orientations in IMRT planning will be presented. The DA mimics, automatically, the methods of beam's eye view and observer's view which are recognized in conventional conformal radiation therapy. The EA is based on a dose-volume histogram evaluation function introduced as an attempt to minimize the deviation between the mathematical and clinical optima. To illustrate the efficiency of the algorithms they have been applied to different clinical examples. In comparison to the standard equally spaced beams plans, improvements are reported for both algorithms in all the clinical examples even when, for some cases, fewer beams are used. A smaller number of beams is always desirable without compromising the quality of the treatment plan. It results in a shorter treatment delivery time, which reduces potential errors in terms of patient movements and decreases discomfort.

Traffic flow on road networks has been a continuous source of challenging mathematical problems. Mathematical modelling can provide an understanding of the dynamics of traffic flow and is hence helpful in organizing the flow through the network. In this dissertation macroscopic models for the traffic flow in road networks are presented. The primary interest is the extension of the existing macroscopic road network models based on partial differential equations (PDE model). In order to overcome the difficulty of the high computational costs of the PDE model, an ODE model has been introduced. In addition, a steady-state traffic flow model on road networks, named the RSA model, is discussed. To obtain the optimal flow through the network, cost functionals and corresponding optimal control problems are defined. The solution of these optimization problems provides information on the shortest path through the network subject to road conditions. The resulting constrained optimization problem is solved approximately by solving an unconstrained problem involving exact penalty functions and a penalty parameter. A good estimate of the threshold of the penalty parameter is derived. A well-defined algorithm for solving a nonlinear, nonconvex equality and bound constrained optimization problem is introduced. The numerical results on the convergence history of the algorithm support the theoretical results. In addition to this, bottleneck situations in the traffic flow have been treated using a domain decomposition method (DDM). In particular, this method could be used to solve scalar conservation laws with discontinuous flux functions corresponding to other physical problems too. This method is effective even when the flux function presents more than one discontinuity within the same spatial domain. It is found in the numerical results that the DDM is superior to other schemes and demonstrates good shock resolution.

In this work, we develop a framework for analyzing an executive's own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company whose value he can directly influence with work effort. The executive's utility-maximizing personal investment and work effort strategy is derived in closed form for logarithmic and power utility and for exponential utility for the case of zero interest rates. Additionally, a utility indifference rationale is applied to determine his fair compensation. Being unconstrained by performance contracting, the executive's work effort strategy establishes a base case for theoretical or empirical assessment of the benefits or otherwise of constraining executives with performance contracting. Further, we consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company's share price. She invests in the financial market including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility and power utility. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary; the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industry, as well as the IT sector.

This thesis deals with 3 important aspects of optimal investment in real-world financial markets: taxes, crashes, and illiquidity. An introductory chapter reviews the portfolio problem in its historical context and motivates the theme of this work: We extend the standard modelling framework to include specific real-world features and evaluate their significance. In the first chapter, we analyze the optimal portfolio problem with capital gains taxes, assuming that taxes are deferred until the end of the investment horizon. The problem is solved with the help of a modification of the classical martingale method. The second chapter is concerned with optimal asset allocation under the threat of a financial market crash. The investor takes a worst-case attitude towards the crash, so her investment objective is to be best off in the most adverse crash scenario. We first survey the existing literature on the worst-case approach to optimal investment and then present in detail the novel martingale approach to worst-case portfolio optimization. The first part of this chapter is based on joint work with Ralf Korn. In the last chapter, we investigate optimal portfolio decisions in the presence of illiquidity. Illiquidity is understood as a period in which it is impossible to trade on financial markets. We use dynamic programming techniques in combination with abstract convergence results to solve the corresponding optimal investment problem. This chapter is based on joint work with Holger Kraft and Peter Diesinger.

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However for an investor whose actions may affect the financial market the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor’s portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is ’large’ in the sense that his investment decisions are interpreted by the market as signals: If, for instance, the large investor holds 25% of his wealth in a certain asset then the market may regard this as evidence for the corresponding asset to be priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations which impacts on market prices. But even if a large investor does not disclose his portfolio composition as it is the case of several hedge funds then the other market participants may speculate about the investor’s strategy which finally could influence the asset prices. Since the investor’s strategy only impacts on the regime shift intensities the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-states version of the Bäuerle-Rieder model. Hence as the Bäuerle-Rieder model it is suitable for long investment periods during which market conditions could change. The fact that the investor’s influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities – constant, step and affine intensity functions. In each case we derive the large investor’s optimal strategy in explicit form only dependent on the solution of a system of coupled ODEs of which we show that it admits a unique global solution. The thesis is organized as follows. In Section 2 we repeat the classical Merton investment problem of a small investor who does not influence the market. Further the Bäuerle-Rieder investment problem in which the market states follow a Markov chain with constant transition intensities is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB-system we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. 
For three kinds of intensity functions – constant, step and affine – we give the optimal solution and verify that the corresponding ODE-system admits a unique global solution. In case of the strategy-dependent intensity functions we distinguish three particular kinds of this dependency – portfolio-dependency, consumption-dependency and combined portfolio- and consumption-dependency. The corresponding results for an investor having logarithmic utility are shown in Section 5. In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor’s optimal strategy when only the position in one of those two assets affects the market state whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides the comparisons of the particular problems with each other we also dwell on the sensitivity of the solution concerning the parameters of the intensity functions. Finally we consider the loss the large investor had to face if he neglected his influence on the market. In Section 8 we conclude the thesis.

This study deals with optimal control problems for the glass tube drawing process, where the aim is to control the (circular) cross-sectional area of the tube using the adjoint variable approach. The process of tube drawing is modeled by four coupled nonlinear partial differential equations. These equations are derived from the axisymmetric Stokes equations and the energy equation, using an approach based on asymptotic expansions with the inverse aspect ratio as small parameter. Existence and uniqueness of the solutions of the stationary isothermal model is also proved. By defining a cost functional, we formulate the optimal control problem. Then the Lagrange functional associated with the minimization problem is introduced and the first and second order optimality conditions are derived. We implemented optimization algorithms based on the steepest descent, nonlinear conjugate gradient, BFGS, and Newton approaches. In the Newton method, CG iterations are introduced to solve the Newton equation. Numerical results are obtained for two different cases. In the first case, the cross-sectional area over the entire time domain is controlled, and in the second case, the area at the final time is controlled. We also compare the performance of the optimization algorithms in terms of iteration counts, functional evaluations, and computation time.
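The optimizer comparison described above can be mimicked on a stand-in objective; the sketch below runs nonlinear CG, BFGS, and Newton-CG from SciPy on the Rosenbrock function and reports iteration counts and final objective values. It only illustrates the comparison methodology; the thesis applies these methods (plus steepest descent) to adjoint-based gradients of the tube-drawing cost functional.

```python
# Sketch: compare gradient-based optimizers on a stand-in objective.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])
for method in ["CG", "BFGS", "Newton-CG"]:
    res = minimize(rosen, x0, jac=rosen_der,
                   hess=rosen_hess if method == "Newton-CG" else None,
                   method=method)
    print(method, res.nit, res.fun)    # iterations and final objective value
```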

This thesis generalizes the Cohen-Lenstra heuristic for the class groups of real quadratic number fields to higher class groups. A "good part" of the second class group is defined. In general this is a non-abelian proper factor group of the second class group. Properties of those groups are described, and a probability distribution on the set of those groups is introduced and proposed as a generalization of the Cohen-Lenstra heuristic for real quadratic number fields. The calculation of number field tables which contain information about higher class groups is explained and the tables are compared to the heuristic. The agreement is close. A program which can create an internet database for number field tables is presented.

Limit theorems constitute a classical and important field in probability theory. In several applications, in particular in demographic or medical contexts, killed Markov processes suggest themselves as models for populations undergoing culling by mortality or other processes. In these situations mathematical research features a general interest in the observable distribution of survivors, which is known as Yaglom limit or quasi-stationary distribution. Previous work often focuses on discrete state spaces, commonly birth-death processes (or with some more flexible localization of the transitions), with killing only on the boundary. The central concerns of this thesis are to describe, for a given class of one dimensional diffusion processes, the quasistationary distributions (if any), and to describe the convergence (or not) of the process conditioned on survival to one of these quasistationary distributions. Rather general diffusion processes on the half-line are considered, where 0 is allowed to be regular or an exit boundary. Very similar techniques are applied in this work in order to derive results on the large time behavior of an exotic measure valued process, which is closely related to so-called point interactions, which have been widely studied in the mathematical physics literature.

In 2006 Jeffrey Achter proved that the distribution of degree-0 divisor class groups of function fields with fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what the popular conjectures of Henri Cohen and Jacques Martinet predict. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Using our investigations about the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and establish a theoretical backup for these statements.
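To make the object of study concrete, the following toy experiment (a sketch under simplifying assumptions, not the special subgroups actually analyzed in the thesis) samples uniformly from \(GL_n(\mathbb{F}_p)\) and records the dimension of the eigenspace for the eigenvalue one:

```python
import numpy as np

def rank_mod_p(mat, p):
    """Rank of an integer matrix over the finite field F_p (Gaussian elimination)."""
    a = np.array(mat, dtype=np.int64) % p
    rows, cols = a.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col] != 0), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]
        inv = pow(int(a[rank, col]), -1, p)        # modular inverse of the pivot
        a[rank] = (a[rank] * inv) % p
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] = (a[r] - a[r, col] * a[rank]) % p
        rank += 1
    return rank

def random_invertible(n, p, rng):
    """Uniform random element of GL_n(F_p) by rejection sampling."""
    while True:
        m = rng.integers(0, p, size=(n, n))
        if rank_mod_p(m, p) == n:
            return m

def fixed_space_dimension(m, p):
    """dim ker(m - I) over F_p, i.e. the eigenspace for the eigenvalue one."""
    n = m.shape[0]
    return n - rank_mod_p((m - np.eye(n, dtype=np.int64)) % p, p)

if __name__ == "__main__":
    p, n, trials = 3, 4, 20000        # illustrative parameters, not from the thesis
    rng = np.random.default_rng(0)
    counts = {}
    for _ in range(trials):
        d = fixed_space_dimension(random_invertible(n, p, rng), p)
        counts[d] = counts.get(d, 0) + 1
    for d in sorted(counts):
        print(f"dim = {d}: relative frequency {counts[d] / trials:.4f}")
```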

In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), which is a Sylow \(p\)-subgroup of \(F_4(q)\), \(q=p^n\), \(p\) a prime, for the case \(p\neq2\).
We adapt their methods to parametrize the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(q)\), \(q=2^n\), i.e. for the case \(p=2\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Unfortunately, the projective characters of \(F_4(q)\) we obtained were not sufficient to determine the entire decomposition matrix.

We consider an analytical model of a defaultable bond portfolio in terms of its face value process. The face value process evolves dynamically in time and incorporates changes caused by recovery payments on default followed by the purchase of new bonds. We then study the properties, the distribution and the control of the face value process.

Many real life problems have multiple spatial scales. In addition to the multiscale nature one has to take uncertainty into account. In this work we consider multiscale problems with stochastic coefficients.
We combine multiscale methods, e.g., mixed multiscale finite elements or homogenization, which are used for deterministic problems, with stochastic methods such as multi-level Monte Carlo or polynomial chaos methods.
The work is divided into three parts.
In the first two parts we study homogenization with different stochastic methods. To this end we consider stationary elliptic diffusion equations with stochastic coefficients.
The last part is devoted to the study of mixed multiscale finite elements in combination with multi-level Monte Carlo methods; there we consider multi-phase flow and transport equations.
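As an illustration of the multi-level Monte Carlo idea combined with a hierarchy of discretizations, here is a minimal sketch in Python; a toy random quadrature problem stands in for the stochastic PDE, and all names and parameter values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def midpoint_quadrature(f, m):
    """Midpoint rule with m cells on [0, 1]."""
    x = (np.arange(m) + 0.5) / m
    return f(x).mean()

def level_samples(l, n, rng):
    """
    n samples of the MLMC correction Y_l = Q_l - Q_{l-1} for the quantity
    Q = integral_0^1 omega * x**2 dx with random coefficient omega ~ U(0, 2).
    Q_l uses the midpoint rule with 2**l cells; the same omega is used on
    both levels so that the variance of Y_l decays as the levels are refined.
    """
    out = np.empty(n)
    for i in range(n):
        omega = rng.uniform(0.0, 2.0)
        f = lambda x: omega * x ** 2
        fine = midpoint_quadrature(f, 2 ** l)
        out[i] = fine if l == 0 else fine - midpoint_quadrature(f, 2 ** (l - 1))
    return out

def mlmc_estimate(L, n_samples, rng=np.random.default_rng(0)):
    """Sum of the sample means of the level corrections (telescoping sum)."""
    return sum(level_samples(l, n_samples[l], rng).mean() for l in range(L + 1))

# E[Q] = E[omega] / 3 = 1/3; the number of samples per level decreases as the levels get finer.
print(mlmc_estimate(L=4, n_samples=[4000, 2000, 1000, 500, 250]))
```

The key point, mirrored in the toy sampler, is that fine and coarse approximations on each level share the same random coefficient, so the variance of the level corrections decays and most samples can be spent on the cheap coarse levels.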

This thesis deals with modeling aspects of generalized Newtonian and non-Newtonian fluids, as well as with the development and validation of algorithms used in the simulation of such fluids. The main contribution in the modeling part is the introduction and analysis of a new model for generalized Newtonian fluids, whose constitutive equation is of algebraic form. Distinguishing between shear and extensional viscosities leads to an anisotropic viscosity model. It can be considered a natural extension of the well-known (isotropic viscosity) Carreau model, which deals only with the shear viscosity properties of the fluid; the proposed model additionally takes extensional viscosity properties into account. Numerical results show that the anisotropic viscosity model agrees much better with experimental observations than the isotropic one. Another contribution of the thesis consists of the development and analysis of robust and reliable algorithms for the simulation of generalized Newtonian fluids. For such fluids the momentum equations are strongly coupled through mixed derivatives appearing in the viscous term (unlike the case of Newtonian fluids). It is shown in this thesis that a careful treatment of those derivatives is essential for deriving robust algorithms. A modification of a standard SIMPLE-like algorithm is given, where all viscous terms of the momentum equations are discretized implicitly. Moreover, it is shown that a block diagonal preconditioner for the viscous operator is good enough to be used in simulations. Furthermore, different solution techniques are compared, namely projection-type methods (solving momentum equations and a pressure correction equation) and fully coupled methods (momentum and continuity equations are solved together). It is shown that an explicit discretization of the mixed derivatives leads to stability problems. Further, analytical estimates of the eigenvalue distribution for three different preconditioners, applied to the transformed system arising after discretization and linearization of the momentum and continuity equations, are provided. We propose to apply a block Gauss-Seidel preconditioner to the transformed system. The analysis shows that this preconditioner is able to cluster the eigenvalues around unity independently of the transformation step, which is not the case for the other preconditioners applied to the transformed system discussed in the thesis. The block Gauss-Seidel preconditioner has also shown the best behavior, among all preconditioners discussed in the thesis, in numerical experiments. A further contribution consists of the comparison and validation of numerical algorithms applied in simulations of non-Newtonian fluids modeled by time-integral constitutive equations. Numerical results from simulations of dilute polymer solutions, described by the integral Oldroyd B model, have shown very good quantitative agreement with the results obtained by the differential Oldroyd B counterpart in a 4:1 planar contraction domain at low Weissenberg numbers; in this case, the Weissenberg number is varied by changing the relaxation time. However, contrary to the differential Oldroyd B model, the integral one allows stable simulations also in the range of high Weissenberg numbers. Moreover, very good agreement with experimental observations has been achieved.
Simulations of concentrated polymer solutions (polystyrene and polybutadiene solutions), modeled by the integral Doi-Edwards model supplemented by chain length fluctuations, have shown very good qualitative agreement with the results obtained by its differential approximation in a 4:1:4 constriction domain. Again, much higher Weissenberg numbers can be reached when the integral model is used. Moreover, very good quantitative agreement with experimental data for a polystyrene solution has been achieved for the first normal stress difference and the shear viscosity, defined here as the quotient of shear stress and shear rate. Finally, the two methods used for approximating the time-integral constitutive equation, namely the Deformation Field Method (DFM) and the Backward Lagrangian Particle Method (BLPM), are compared. In BLPM the particle paths are recalculated at every time step of the simulation, which has not been tried before. The results show that in the considered geometries both methods give similar results.
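Since the first paragraph of the abstract builds on the classical Carreau model, here is a minimal, hedged sketch of that standard shear-viscosity law in Python; the parameter values are purely illustrative, and the anisotropic extension developed in the thesis is not reproduced here:

```python
import numpy as np

def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """
    Classical (isotropic) Carreau shear viscosity
        eta(g) = eta_inf + (eta0 - eta_inf) * (1 + (lam * g)**2)**((n - 1) / 2).
    The anisotropic model of the thesis additionally distinguishes an
    extensional viscosity; this sketch shows only the standard shear law.
    """
    g = np.asarray(shear_rate, dtype=float)
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * g) ** 2) ** ((n - 1.0) / 2.0)

# Hypothetical parameter values, for illustration only.
print(carreau_viscosity([0.1, 1.0, 10.0, 100.0], eta0=1.0, eta_inf=0.01, lam=1.0, n=0.5))
```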

The goal of this work is the development and investigation of an interdisciplinary and self-contained hydrodynamic approach to the simulation of dilute and dense granular flow. The definition of "granular flow" is a nontrivial task in itself; we take it to mean the flow of grains either in a vacuum or in a fluid. A grain is an observable piece of a certain material, for example stone when we speak of the flow of sand. Choosing a hydrodynamic view on granular flow, we treat the granular material as a fluid. A hydrodynamic model is developed that describes the process of flowing granular material. This is done through a system of partial differential equations and algebraic relations, derived from the kinetic theory of granular gases, which is characterized by inelastic collisions, extended with approaches from soil mechanics. Solutions to the system have to be obtained to understand the process, but the equations are so difficult to solve that an analytical solution is out of reach, so approximate solutions must be computed. Hence the next step is the choice or development of a numerical algorithm to obtain approximate solutions of the model. As with every problem in numerical simulation, these two steps do not lead to a result without an implementation of the algorithm. The author therefore attempts to present this work in a frame that participates in and contributes to the three areas of physics, mathematics and software implementation, and to approach the simulation of granular flow in a combined and interdisciplinary way. This work is structured as follows. A continuum model for granular flow which covers the regime of fast dilute flow as well as slow dense flow up to vanishing velocity is presented in the first chapter. This model is strongly nonlinear in the dependence of the viscosity and other coefficients on the hydrodynamic variables, and it is singular because some coefficients diverge towards the maximum packing fraction of grains. The second difficulty, the challenging task of numerically obtaining approximate solutions for this model, is faced in the second chapter. In the third chapter we aim at the validation of both the model and the numerical algorithm through numerical experiments and investigations and show their application to industrial problems. There we focus intensively on the shear flow experiment from the experimental and analytical work of Bocquet et al., which serves well to demonstrate the algorithm and all boundary conditions involved and provides a setting for analytical studies to compare our results with. The fourth chapter rounds off the work with the implementation of both the model and the numerical algorithm in a software framework for the solution of complex rheology problems developed as part of this thesis.

The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest rate related products that require advanced numerical pricing methods. Examples are products with a complicated strong path-dependence such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other hand, the one-factor Hull and White (1990) type short rate models that are standard in the literature allow only for a perfect correlation between all continuously compounded spot rates or Libor rates and are thus not suited for pricing innovative products depending on several Libor rates, such as a "steepener" option. One possible solution to this problem is delivered by two-factor short rate models, and in this thesis we consider a two-factor Hull and White (1990) type short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. Throughout the thesis we use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool, due to their flexibility and fast convergence, and (when there is no closed-form solution) compare their results with fine-grid Monte Carlo simulations. For the purpose of pricing the innovative short-rate related products mentioned above, we propose and examine two different lattice construction methods for the two-factor Hull-White type short rate process which can easily handle both the mean reversion of the underlying process and the strong path-dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem of negative "risk-neutral probabilities" that is typical for existing two-factor tree constructions. With a variety of numerical examples we show that this leads to stable results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, noting that Chan et al. (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of Cheyette (1994) type. In this context, we examine the application of the two proposed lattice construction methods and compare their results with Monte Carlo simulations for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method we extend and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples we compare again the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. In their classical form, cumulative Parisian barrier options have been priced both analytically (in quasi closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm, see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we propose an additional tree construction method which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities.
The advantages of the proposed method are, on the one hand, that the conditional survival probabilities are easier to calculate than the closed-form solution itself and, on the other hand, that this tree construction is very flexible in the sense that it allows an easy incorporation of additional features such as, e.g., a forward-starting one. The obtained results are better than those of the Forward Shooting Grid tree and are very close to the analytical quasi closed-form solution. Finally, we turn our attention to pricing another type of innovative interest rate related product, namely the Longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of Longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that combines principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally propose new stochastic extensions of the classical (deterministic) models of mortality such as the Gompertz and the Makeham model. Finally, we compare and analyze the results of applying all considered models to the pricing of a Longevity bond on the longevity of German males.
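The abstract mentions an implementation of the Longstaff-Schwartz simulation algorithm for American options. As a minimal, hedged sketch of that algorithm's structure, applied here to a toy equity put under geometric Brownian motion rather than to the short-rate models of the thesis, and with purely illustrative parameters:

```python
import numpy as np

def longstaff_schwartz_put(S0, K, r, sigma, T, steps, paths, rng=np.random.default_rng(1)):
    """
    Minimal Longstaff-Schwartz least-squares Monte Carlo for an American put
    on a single lognormal asset (illustration only).
    """
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate geometric Brownian motion paths.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])

    cashflow = np.maximum(K - S[:, -1], 0.0)           # exercise value at maturity
    for t in range(steps - 1, 0, -1):
        cashflow *= disc                               # discount one step back
        itm = K - S[:, t] > 0                          # regress only on in-the-money paths
        if itm.sum() > 3:
            X = S[itm, t]
            # Quadratic polynomial regression of the discounted continuation values.
            coeffs = np.polyfit(X, cashflow[itm], 2)
            continuation = np.polyval(coeffs, X)
            exercise = K - X
            do_exercise = exercise > continuation
            idx = np.where(itm)[0][do_exercise]
            cashflow[idx] = exercise[do_exercise]      # exercise now on those paths
    return disc * cashflow.mean()

print(longstaff_schwartz_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0, steps=50, paths=20000))
```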

In the context of inverse optimization, inverse versions of maximum flow and minimum cost flow problems have been thoroughly investigated. In these network flow problems there are two important problem parameters: the flow capacities of the arcs and the costs incurred by sending a unit of flow along these arcs. Capacity changes for maximum flow problems and cost changes for minimum cost flow problems have been studied under several distance measures such as rectilinear, Chebyshev, and Hamming distances. This thesis also deals with inverse network flow problems and their counterparts, tension problems, under the aforementioned distance measures. The major goals are to enrich inverse optimization theory by introducing new inverse network problems that have not yet been treated in the literature, and to extend the well-known combinatorial results of inverse network flows to more general classes of problems with inherent combinatorial properties, such as matroid flows on regular matroids and monotropic programming. To accomplish the first objective, the inverse maximum flow problem under the Chebyshev norm is analyzed and the capacity inverse minimum cost flow problem, in which only arc capacities are perturbed, is introduced. In this way, it is attempted to close the gap between the capacity perturbing inverse network problems and the cost perturbing ones. The foremost purpose of studying inverse tension problems on networks is to achieve a well-established generalization of the inverse network problems. Since tensions are duals of network flows, carrying the theoretical results of network flows over to tensions follows quite intuitively. Using this intuitive link between network flows and tensions, a generalization to matroid flows and monotropic programs is built up gradually.

On Gyroscopic Stabilization
(2012)

This thesis deals with systems of the form
\( M\ddot x+D\dot x+Kx=0\;, \; x \in \mathbb R^n\;, \)
with a positive definite mass matrix \(M\), a symmetric damping matrix \(D\) and a positive definite stiffness matrix \(K\).
If the equilibrium in the system is unstable, a small disturbance is enough to set the system in motion again. The motion of the system sustains itself, an effect which is called self-excitation or self-induced vibration. The reason behind this effect is the presence of negative damping, which results for example from dry friction.
Negative damping implies that the damping matrix \(D\) is indefinite or negative definite. Throughout our work, we assume \(D\) to be indefinite, and that the system possesses both stable and unstable modes and thus is unstable.
It is now the idea of gyroscopic stabilization to mix the modes of a system with indefinite damping such
that the system is stabilized without introducing further
dissipation. This is done by adding gyroscopic forces \(G\dot x\) with a suitable
skew-symmetric matrix \(G\) to the left-hand side. We call \(G=-G^T\in\mathbb R^{n\times n}\) a gyroscopic stabilizer for
the unstable system, if
\( M\ddot x+(D+ G)\dot x+Kx=0 \) is asymptotically stable. We show the existence of \(G\) in space dimensions three and four.
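A quick way to check asymptotic stability numerically for a given triple \((M, D, K)\) and candidate \(G\) is to inspect the spectrum of the first-order companion system. The sketch below is a hedged illustration with a two-degree-of-freedom toy example that is not taken from the thesis (which treats dimensions three and four); the specific matrices were chosen so that a Routh-Hurwitz check confirms the gyroscopic term stabilizes the indefinitely damped system:

```python
import numpy as np

def is_asymptotically_stable(M, D, K):
    """
    Test asymptotic stability of  M x'' + D x' + K x = 0  by rewriting it as a
    first-order system z' = A z and checking that all eigenvalues of A lie in
    the open left half-plane.
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ D]])
    return np.max(np.linalg.eigvals(A).real) < 0.0

# Two-degree-of-freedom toy example (not from the thesis): indefinite damping
# makes the system unstable; this particular gyroscopic term stabilizes it.
M = np.eye(2)
D = np.diag([-0.1, 0.2])                  # indefinite damping matrix
K = np.diag([2.0, 1.0])
G = np.array([[0.0, 2.0], [-2.0, 0.0]])   # skew-symmetric gyroscopic term

print(is_asymptotically_stable(M, D, K))      # False: unstable without G
print(is_asymptotically_stable(M, D + G, K))  # True: gyroscopically stabilized
```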

This thesis deals with the numerical study of multiscale problems arising in the modelling of processes of fluid flow in plain and porous media. Many of these processes, governed by partial differential equations, are relevant in engineering, industry, and environmental studies. The overall task of modelling and simulating the filtration-related multiscale processes becomes interdisciplinary as it employs physics, mathematics and computer programming to reach its aim. Keeping these challenges in mind, the main focus is to overcome the limitations of accuracy, speed and memory and to develop novel efficient numerical algorithms which could, in part or in whole, be utilized by those working in the field of porous media. This work has essentially four parts. A single grid basic algorithm and a corresponding parallel algorithm to solve the macroscopic Navier-Stokes-Brinkmann model are discussed. An upscaling subgrid algorithm is derived and numerically tested for the same model. Moving a step further in the line of multiscale methods, an iterative Multiscale Finite Volume (iMSFV) method is developed for the Stokes-Darcy system. Additionally, the last part of the thesis deals with ways to incorporate changes occurring at different (meso) scale levels. The flow equations are coupled with the Convection-Diffusion-Reaction (CDR) equation, which models the transport and capturing of particle concentrations. By employing the numerical method for the coupled flow and transport problem, we gain an understanding of the interplay between the flow velocity and filtration.

The thesis at hand deals with the numerical solution of multiscale problems arising in the modeling of processes in fluid dynamics and thermodynamics. Many of these processes, governed by partial differential equations, are relevant in engineering, geoscience, and environmental studies. More precisely, this thesis discusses the efficient numerical computation of effective macroscopic thermal conductivity tensors of high-contrast composite materials, where the term "high-contrast" refers to large variations in the conductivities of the constituents of the composite. Additionally, this thesis deals with the numerical solution of Brinkman's equations. This system of equations adequately models viscous flows in (highly) permeable media; it was introduced by Brinkman in 1947 to reduce the deviations between measurements for flows in such media and the predictions of Darcy's model.

The present thesis deals with coupled steady-state laminar flows of isothermal incompressible viscous Newtonian fluids in plain and porous media. The flow in the pure fluid region is usually described by the (Navier-)Stokes system of equations; the most popular models for the flow in the porous medium are those suggested by Darcy and by Brinkman. Interface conditions proposed in the mathematical literature for coupling the Darcy and Navier-Stokes equations are briefly reviewed in the thesis. The coupling of the Navier-Stokes and Brinkman equations in the literature is based on the so-called continuous stress tensor interface conditions. One of the main tasks of this thesis is to investigate another type of interface conditions, namely the recently suggested stress tensor jump interface conditions. The mathematical models based on these interface conditions had not been carefully investigated from the mathematical point of view, and their validity has been a subject of discussion; the considerations within this thesis are a step toward a better understanding of these interface conditions. Several aspects of the numerical simulation of such coupled flows are considered: the choice of proper interface conditions between the plain and porous media; analysis of the well-posedness of the arising systems of partial differential equations; development of a numerical algorithm for the stress tensor jump interface conditions, coupling the Navier-Stokes equations in the pure liquid medium with the Navier-Stokes-Brinkman equations in the porous medium; validation of the macroscale mathematical models on the basis of a comparison with the results of direct numerical simulations of model representative problems, allowing for grid resolution of the pore level geometry; and development of software and numerical simulation of 3-D industrial flows, namely of oil flows through car filters.

Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that
are characteristic for the microstructure can be simulated, and their effective elastic properties are
then used as those of a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the mixing rules,
as well as the convergence of the method, on two- and three-dimensional examples.
References
[1] S. Brisard and L. Dormieux. "FFT-based methods for the mechanics of composites: A general variational framework". In: Computational Materials Science 49.3 (2010), pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. "A numerical method for computing the overall response of nonlinear composites with complex microstructure". In: Computer Methods in Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.
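For orientation, the basic fixed-point scheme of Moulinec and Suquet [2] can be sketched in a few lines. The snippet below is a hedged, simplified scalar-conductivity analogue on a square lattice (not the anisotropic-lattice elasticity setting developed in the thesis), with an illustrative circular-inclusion microstructure:

```python
import numpy as np

def effective_conductivity(a, E_macro=(1.0, 0.0), a0=None, tol=1e-8, max_iter=500):
    """
    Basic fixed-point FFT scheme (Moulinec-Suquet type) for the periodic scalar
    conductivity cell problem  div(a(x) grad u) = 0  with prescribed mean
    gradient E_macro.  `a` is an (N, N) array of conductivities on a regular
    grid of the unit cell.  Returns the averaged flux <a e>, i.e. one column
    of the effective conductivity tensor.
    """
    N = a.shape[0]
    if a0 is None:
        a0 = 0.5 * (a.min() + a.max())              # reference medium
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N) # Fourier frequencies of the cell
    XI0, XI1 = np.meshgrid(xi, xi, indexing="ij")
    norm2 = XI0**2 + XI1**2
    norm2[0, 0] = 1.0                               # avoid division by zero; mean handled below

    e = np.stack([np.full((N, N), E_macro[0]), np.full((N, N), E_macro[1])])
    for _ in range(max_iter):
        tau = (a - a0) * e                          # polarization field
        tau_hat = np.fft.fft2(tau)
        # Green operator of the reference medium: Gamma0 tau = xi (xi . tau) / (a0 |xi|^2)
        xidottau = XI0 * tau_hat[0] + XI1 * tau_hat[1]
        e_new_hat = np.stack([-XI0 * xidottau, -XI1 * xidottau]) / (a0 * norm2)
        e_new_hat[0, 0, 0] = E_macro[0] * N * N     # enforce the prescribed mean gradient
        e_new_hat[1, 0, 0] = E_macro[1] * N * N
        e_new = np.real(np.fft.ifft2(e_new_hat))
        if np.max(np.abs(e_new - e)) < tol * max(np.max(np.abs(e)), 1.0):
            e = e_new
            break
        e = e_new
    flux = a * e
    return flux[0].mean(), flux[1].mean()

# Toy microstructure: circular inclusion (conductivity 10) in a matrix (conductivity 1).
N = 64
y, x = np.meshgrid(np.arange(N) / N, np.arange(N) / N, indexing="ij")
a = np.where((x - 0.5)**2 + (y - 0.5)**2 < 0.25**2, 10.0, 1.0)
print(effective_conductivity(a))
```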

A modular level set algorithm is developed to study the interface and its movement for free moving boundary problems. The algorithm is divided into three basic modules: initialization, propagation and contouring. Initialization is the process of finding the signed distance function from closed objects. We discuss a methodology to find an accurate signed distance function from a closed, simply connected surface discretized by triangulation. We compute the signed distance function using the direct method and store it efficiently in the neighborhood of the interface by a narrow band level set method. A novel approach is employed to determine the correct sign of the distance function at convex-concave junctions of the surface. The accuracy and convergence of the method with respect to the surface resolution are studied. It is shown that the efficient organization of surface and narrow band data structures enables the solution of large industrial problems. We also compare the accuracy of the signed distance function computed by the direct approach with the Fast Marching Method (FMM) and find that the direct approach is more accurate. Contouring is performed through a variant of the marching cube algorithm used for isosurface construction from volumetric data sets. The algorithm is designed to keep foreground and background information consistent, contrary to the neutrality principle followed for surface rendering in computer graphics. It ensures that the isosurface triangulation is closed, non-degenerate and non-ambiguous. The constructed triangulation has desirable properties required for the generation of good volume meshes. These volume meshes are used in the boundary element method for the study of linear electrostatics. For estimating surface properties like the interface position, normal and curvature accurately from a discrete level set function, a method based on higher order weighted least squares is developed. The least squares approach is found to be more accurate than finite difference approximations, and it requires a more compact stencil than finite difference schemes. The accuracy and convergence of the method depend on the surface resolution and the discrete mesh width. This approach is used in the propagation module for the study of mean curvature flow and bubble dynamics. Its advantage is that the curvature is not discretized explicitly on the grid but is estimated on the interface. The method of constant velocity extension is employed for the propagation of the interface. With the least squares approach, mean curvature flow shows a considerable reduction in mass loss compared to finite difference techniques. In the bubble dynamics application, the modules are used to study a bubble under the influence of surface tension forces in order to validate the Young-Laplace law. It is found that the order of the curvature estimation plays a crucial role in calculating an accurate pressure difference between the inside and the outside of the bubble. Further, we study the coalescence of two bubbles under surface tension forces. The application of these modules to various industrial problems is discussed.
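As a hedged two-dimensional illustration of curvature estimation from a discrete level set function by local least squares (the thesis uses higher-order weighted least squares in three dimensions), the following sketch fits a quadratic on a 5x5 stencil and evaluates the curvature near the interface of a circle:

```python
import numpy as np

def ls_quadratic_fit(phi, i, j, h, half=2):
    """
    Fit phi locally by a quadratic polynomial on a (2*half+1)^2 stencil via
    least squares and return the derivatives (px, py, pxx, pxy, pyy) at (i, j).
    """
    offs = np.arange(-half, half + 1)
    dx, dy = np.meshgrid(offs * h, offs * h, indexing='ij')
    dx, dy = dx.ravel(), dy.ravel()
    A = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx * dy, dy**2])
    b = phi[i - half:i + half + 1, j - half:j + half + 1].ravel()
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    return c[1], c[2], 2 * c[3], c[4], 2 * c[5]

def curvature(phi, i, j, h):
    """Curvature kappa = div(grad phi / |grad phi|) of a 2D level set at grid point (i, j)."""
    px, py, pxx, pxy, pyy = ls_quadratic_fit(phi, i, j, h)
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / (px**2 + py**2) ** 1.5

# Test on the signed distance function of a circle of radius 0.3 (exact kappa = 1/0.3).
n, h = 101, 0.02
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x, indexing='ij')
phi = np.sqrt(X**2 + Y**2) - 0.3
i = n // 2 + int(round(0.3 / h))   # a grid point close to the interface
print(curvature(phi, i, n // 2, h), 1 / 0.3)
```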

The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from the theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but none of them yields a substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.
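For readers unfamiliar with lattice BGK, the single-phase building block underneath the immiscible (two-phase) method can be sketched as follows; this is a hedged, minimal D2Q9 collision-and-streaming step in Python, not the two-phase scheme analysed in the thesis:

```python
import numpy as np

# D2Q9 velocity set and lattice weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Standard second-order equilibrium distributions f_eq_q(rho, u)."""
    cu = np.einsum('qd,dxy->qxy', C, u)
    usq = np.sum(u * u, axis=0)
    return W[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_step(f, tau):
    """One BGK collision plus periodic streaming step for a single phase."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', C, f) / rho
    f = f + (equilibrium(rho, u) - f) / tau          # relaxation towards equilibrium
    for q, (cx, cy) in enumerate(C):                 # streaming along lattice links
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f

# Initialize a uniform fluid at rest on a 32x32 periodic lattice and advance it.
nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((2, nx, ny)))
for _ in range(10):
    f = bgk_step(f, tau=0.8)
print(f.sum(axis=0).mean())   # mass is conserved: the mean density stays 1
```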

Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)

Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solution sets, called algebraic varieties, which have several irreducible
components. A fundamental problem of numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\) called numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept using, in part, Gröbner bases, triangular sets, the local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Building on the modified algorithm mentioned above, we present in the third chapter an
algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains the procedures of the algorithms from the fourth chapter: testing
inclusion and equality of two algebraic varieties, computing the degree of a pure i-dimensional
component, and computing the local dimension.

The thesis studies change points in absolute time for censored survival data, with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied; typical examples are piecewise constant and piecewise linear hazard functions. For this kind of model, we propose a new algorithm for the numerical calculation of maximum likelihood estimates based on a cross entropy approach, which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
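One of the building blocks referred to above is the Nelson-Aalen estimate of the cumulative hazard for right-censored data. The following is a minimal, hedged sketch, assuming distinct event times and using a small synthetic sample rather than the SEER data of the thesis:

```python
import numpy as np

def nelson_aalen(times, events):
    """
    Nelson-Aalen estimate of the cumulative hazard for right-censored data
    with distinct event times.  `times` are observed times, `events` is 1 for
    an observed death and 0 for a censored observation.  Returns the jump
    times and the estimate at those times.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    t_out, h_out, cumhaz = [], [], 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i            # subjects still at risk just before time t
        if d == 1:
            cumhaz += 1.0 / at_risk
            t_out.append(t)
            h_out.append(cumhaz)
    return np.array(t_out), np.array(h_out)

# Small synthetic example (times in years, event indicator 0 = censored).
t, H = nelson_aalen([2.0, 3.5, 4.0, 5.0, 7.5], [1, 0, 1, 1, 0])
print(t, H)
```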

In many applications of statistics, for example in medicine, finance and industry, the model parameters may undergo changes at an unknown moment in time. In this thesis, we consider change point analysis in a regression setting for dichotomous responses, i.e. responses that can be modeled as Bernoulli or 0-1 variables. Applications are widespread and include credit scoring in financial statistics and dose-response relations in biometry. The model parameters are estimated using a neural network method. We show that the parameter estimates are identifiable up to a given family of transformations and derive the consistency and asymptotic normality of the network parameter estimates using the results of Franke and Neumann (2000). We use a neural network based likelihood ratio test statistic to detect a change point in a given set of data and derive the limit distribution of the estimator using the results of Gombay and Horváth (1994, 1996) under the assumption that the model is properly specified. For the misspecified case, we develop a scaled test statistic for the case of a one-dimensional parameter. Through simulation, we show that the sample size, the change point location and the size of the change influence change point detection. In this work, the maximum likelihood estimation method is used to estimate a change point once it has been detected; through simulation, we show that change point estimation is also influenced by the sample size, the change point location and the size of the change. We present two methods for determining change point confidence intervals: the profile log-likelihood ratio and the percentile bootstrap method. Through simulation, the percentile bootstrap method is shown to be superior to the profile log-likelihood ratio method.
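As a stripped-down, hedged illustration of the likelihood ratio idea for a change in dichotomous data (without covariates or the neural network regression used in the thesis; all data below are synthetic):

```python
import numpy as np

def bernoulli_loglik(y):
    """Maximized Bernoulli log-likelihood of a 0-1 sample."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return len(y) * (p * np.log(p) + (1 - p) * np.log(1 - p))

def lr_change_point(y, min_seg=10):
    """
    Likelihood ratio scan for a single change in the success probability of a
    0-1 sequence: maximize the split log-likelihood over candidate change
    points and report twice the log-likelihood ratio and the argmax.
    """
    y = np.asarray(y, dtype=float)
    base = bernoulli_loglik(y)
    candidates = range(min_seg, len(y) - min_seg)
    stats = [2.0 * (bernoulli_loglik(y[:k]) + bernoulli_loglik(y[k:]) - base)
             for k in candidates]
    best = int(np.argmax(stats))
    return stats[best], list(candidates)[best]

# Synthetic data: the success probability jumps from 0.2 to 0.6 at observation 150.
rng = np.random.default_rng(3)
y = np.concatenate([rng.binomial(1, 0.2, 150), rng.binomial(1, 0.6, 100)])
print(lr_change_point(y))
```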