## Fachbereich Mathematik

### Refine

#### Document Type

- Doctoral Thesis (232)

#### Keywords

- Algebraische Geometrie (6)
- Finanzmathematik (5)
- Optimization (5)
- Portfolio Selection (5)
- Stochastische dynamische Optimierung (5)
- Navier-Stokes-Gleichung (4)
- Numerische Mathematik (4)
- Portfolio-Optimierung (4)
- portfolio optimization (4)
- Computeralgebra (3)
- Elastizität (3)
- Erwarteter Nutzen (3)
- Finite-Volumen-Methode (3)
- Gröbner-Basis (3)
- Homogenisierung <Mathematik> (3)
- Inverses Problem (3)
- Numerische Strömungssimulation (3)
- Optionspreistheorie (3)
- Portfolio Optimization (3)
- Portfoliomanagement (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Wavelet (3)
- optimales Investment (3)
- Asymptotic Expansion (2)
- Asymptotik (2)
- Bewertung (2)
- Derivat <Wertpapier> (2)
- Elasticity (2)
- Endliche Geometrie (2)
- Erdmagnetismus (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Geometric Ergodicity (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hochskalieren (2)
- IMRT (2)
- Kreditrisiko (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Local smoothing (2)
- Mehrskalenanalyse (2)
- Mehrskalenmodell (2)
- Modulraum (2)
- Partial Differential Equations (2)
- Partielle Differentialgleichung (2)
- Poröser Stoff (2)
- Regularisierung (2)
- Schnitttheorie (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Transaktionskosten (2)
- Upscaling (2)
- Vektorwavelets (2)
- White Noise Analysis (2)
- curve singularity (2)
- domain decomposition (2)
- duality (2)
- finite volume method (2)
- geomagnetism (2)
- homogenization (2)
- illiquidity (2)
- interface problem (2)
- isogeometric analysis (2)
- mesh generation (2)
- optimal investment (2)
- splines (2)
- "Slender-Body"-Theorie (1)
- 3D image analysis (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- Ableitungsfreie Optimierung (1)
- Advanced Encryption Standard (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraischer Funktionenkörper (1)
- Analysis (1)
- Angewandte Mathematik (1)
- Annulus (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Asiatische Option (1)
- Asymptotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Ausfallrisiko (1)
- Automorphismengruppe (1)
- Autoregressive Hilbertian model (1)
- B-Spline (1)
- Barriers (1)
- Basket Option (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Beschichtungsprozess (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bildsegmentierung (1)
- Binomialbaum (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Boltzmann Equation (1)
- Bondindizes (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFD (1)
- CHAMP (1)
- CPDO (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Charakter <Gruppentheorie> (1)
- Chi-Quadrat-Test (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Circle Location (1)
- Coarse graining (1)
- Cohen-Lenstra heuristic (1)
- Combinatorial Optimization (1)
- Commodity Index (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer algebra (1)
- Computeralgebra System (1)
- Conditional Value-at-Risk (1)
- Consistency analysis (1)
- Consistent Price Processes (1)
- Construction of hypersurfaces (1)
- Copula (1)
- Coupled PDEs (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Curvature (1)
- Curved viscous fibers (1)
- DSMC (1)
- Darstellungstheorie (1)
- Das Urbild von Ideal unter einen Morphismus der Algebren (1)
- Debt Management (1)
- Defaultable Options (1)
- Deformationstheorie (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Differenzenverfahren (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionsprozess (1)
- Discriminatory power (1)
- Diskrete Fourier-Transformation (1)
- Dispersionsrelation (1)
- Dissertation (1)
- Druckkorrektur (1)
- Dünnfilmapproximation (1)
- EM algorithm (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Effizienter Algorithmus (1)
- Effizienz (1)
- Eikonal equation (1)
- Elastische Deformation (1)
- Elastoplastizität (1)
- Elektromagnetische Streuung (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Entscheidungsbaum (1)
- Entscheidungsunterstützung (1)
- Enumerative Geometrie (1)
- Erdöl Prospektierung (1)
- Erwartungswert-Varianz-Ansatz (1)
- Expected shortfall (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extrapolation (1)
- Extreme Events (1)
- Extreme value theory (1)
- FEM (1)
- FFT (1)
- FPM (1)
- Faden (1)
- Fatigue (1)
- Feedforward Neural Networks (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber suspension flow (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite-Elemente-Methode (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fourier-Transformation (1)
- Fredholmsche Integralgleichung (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- GARCH (1)
- GARCH Modelle (1)
- Galerkin-Methode (1)
- Gamma-Konvergenz (1)
- Garbentheorie (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Geo-referenced data (1)
- Geodesie (1)
- Geometrische Ergodizität (1)
- Gewichteter Sobolev-Raum (1)
- Gittererzeugung (1)
- Gleichgewichtsstrategien (1)
- Granular flow (1)
- Granulat (1)
- Gravitationsfeld (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Große Abweichung (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner-basis (1)
- Gyroscopic (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamiltonian Path Integrals (1)
- Handelsstrategien (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Hazard Functions (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Helmholtz Type Boundary Value Problems (1)
- Heston-Modell (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- Homogenization (1)
- Homologische Algebra (1)
- Hub Location Problem (1)
- Hydrostatischer Druck (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hyperspektraler Sensor (1)
- Hysterese (1)
- ITSM (1)
- Idealklassengruppe (1)
- Illiquidität (1)
- Image restoration (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Inflation (1)
- Infrarotspektroskopie (1)
- Intensität (1)
- Internationale Diversifikation (1)
- Inverse Problem (1)
- Irreduzibler Charakter (1)
- Isogeometrische Analyse (1)
- Ito (1)
- Jacobigruppe (1)
- Kanalcodierung (1)
- Karhunen-Loève expansion (1)
- Kategorientheorie (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Kombinatorik (1)
- Kommutative Algebra (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Kreditderivate (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurvenschar (1)
- LIBOR (1)
- Lagrangian relaxation (1)
- Laplace transform (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Leading-Order Optimality (1)
- Level set methods (1)
- Lie-Typ-Gruppe (1)
- Lineare partielle Differentialgleichung (1)
- Lippmann-Schwinger equation (1)
- Liquidität (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- MBS (1)
- MKS (1)
- Macaulay’s inverse system (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetostriction (1)
- Marangoni-Effekt (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Mathematical Finance (1)
- Mathematik (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrskalen (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikroelektronik (1)
- Mikrostruktur (1)
- Mixed integer programming (1)
- Modellbildung (1)
- Molekulardynamik (1)
- Momentum and Mass Transfer (1)
- Monte Carlo (1)
- Monte-Carlo-Simulation (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Mosco convergence (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multicriteria optimization (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiresolution Analysis (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- NURBS (1)
- Networks (1)
- Netzwerksynthese (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Niederschlag (1)
- No-Arbitrage (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Mathematik / Algorithmus (1)
- Numerisches Verfahren (1)
- Oberflächenmaße (1)
- Oberflächenspannung (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimierung (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Order (1)
- Ovoid (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Partikel Methoden (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathwise Optimality (1)
- Pedestrian Flow (1)
- Pfadintegral (1)
- Planares Polynom (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- PolyBoRi (1)
- Population Balance Equation (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Preimage of an ideal under a morphism of algebras (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Prox-Regularisierung (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- Quadratischer Raum (1)
- Quantile autoregression (1)
- Quasi-Variational Inequalities (1)
- RKHS (1)
- Radial Basis Functions (1)
- Radiotherapy (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rarefied gas (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regressionsanalyse (1)
- Regularisierung / Stoppkriterium (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Reliability (1)
- Restricted Regions (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikomanagement (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Measures (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Räumliche Statistik (1)
- SWARM (1)
- Scale function (1)
- Schaum (1)
- Schaumzerfall (1)
- Schiefe Ableitung (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösu (1)
- Second Order Conditions (1)
- Semi-Markov-Kette (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Shallow Water Equations (1)
- Shape optimization, gradient based optimization, adjoint method (1)
- Simulation (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Slender body theory (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprung-Diffusions-Prozesse (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Statistics (1)
- Statistisches Modell (1)
- Steuer (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Prozesse (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Strukturiertes Finanzprodukt (1)
- Strukturoptimierung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Success Run (1)
- Survival Analysis (1)
- Systemidentifikation (1)
- Sägezahneffekt (1)
- Tail Dependence Koeffizient (1)
- Test for Changepoint (1)
- Thermophoresis (1)
- Thin film approximation (1)
- Tichonov-Regularisierung (1)
- Time Series (1)
- Time-Series (1)
- Time-delay-Netz (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Traffic flow (1)
- Transaction costs (1)
- Trennschärfe <Statistik> (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Two-phase flow (1)
- Unreinheitsfunktion (1)
- Untermannigfaltigkeit (1)
- Upwind-Verfahren (1)
- Usage modeling (1)
- Utility (1)
- Value at Risk (1)
- Value-at-Risk (1)
- Variationsrechnung (1)
- Vectorfield approximation (1)
- Vektorfeldapproximation (1)
- Vektorkugelfunktionen (1)
- Verschwindungssatz (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Volatilität (1)
- Volatilitätsarbitrage (1)
- Vorkonditionierer (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weißes Rauschen (1)
- White Noise (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wissenschaftliches Rechnen (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Yaglom limits (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zero-dimensional schemes (1)
- Zopfgruppe (1)
- Zufälliges Feld (1)
- Zweiphasenströmung (1)
- abgeleitete Kategorie (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alternating minimization (1)
- alternating optimization (1)
- analoge Mikroelektronik (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- applied mathematics (1)
- archimedean copula (1)
- asian option (1)
- basket option (1)
- benders decomposition (1)
- bending strip method (1)
- binomial tree (1)
- blackout period (1)
- bocses (1)
- boundary value problem (1)
- canonical ideal (1)
- canonical module (1)
- changing market coefficients (1)
- closure approximation (1)
- combinatorics (1)
- composites (1)
- computational finance (1)
- computer algebra (1)
- computeralgebra (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- correlated errors (1)
- coupling methods (1)
- crash (1)
- crash hedging (1)
- credit risk (1)
- curvature (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- derived category (1)
- diffusion models (1)
- discrepancy (1)
- double exponential distribution (1)
- downward continuation (1)
- efficiency loss (1)
- elastoplasticity (1)
- elliptical distribution (1)
- endomorphism ring (1)
- enumerative geometry (1)
- equilibrium strategies (1)
- equisingular families (1)
- face value (1)
- fiber reinforced silicon carbide (1)
- filtration (1)
- financial mathematics (1)
- finite difference schemes (1)
- finite element method (1)
- first hitting time (1)
- float glass (1)
- flood risk (1)
- fluid structure (1)
- fluid structure interaction (1)
- forward-shooting grid (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- gitter (1)
- good semigroup (1)
- graph p-Laplacian (1)
- gravitation (1)
- group action (1)
- großer Investor (1)
- hedging (1)
- heuristic (1)
- hierarchical matrix (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hyperspectal unmixing (1)
- idealclass group (1)
- image analysis (1)
- image denoising (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- integer programming (1)
- integral constitutive equations (1)
- intensity (1)
- inverse optimization (1)
- inverse problem (1)
- jump-diffusion process (1)
- large investor (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- level K-algebras (1)
- level set method (1)
- limit theorems (1)
- linear code (1)
- localizing basis (1)
- longevity bonds (1)
- low-rank approximation (1)
- macro derivative (1)
- market crash (1)
- market manipulation (1)
- markov model (1)
- martingale optimality principle (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matroid flows (1)
- mean-variance approach (1)
- micromechanics (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- modal derivatives (1)
- model order reduction (1)
- moduli space (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- multi scale (1)
- multi-asset option (1)
- multi-class image segmentation (1)
- multi-level Monte Carlo (1)
- multi-phase flow (1)
- multicategory (1)
- multifilament superconductor (1)
- multigrid method (1)
- multileaf collimator (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative noise (1)
- multiscale denoising (1)
- multiscale methods (1)
- multivariate chi-square-test (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- non-desarguesian plane (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal stopping (1)
- option pricing (1)
- option valuation (1)
- partial differential equation (1)
- partial information (1)
- path-dependent options (1)
- pattern (1)
- penalty methods (1)
- penalty-free formulation (1)
- petroleum exploration (1)
- planar polynomial (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- potential (1)
- preconditioners (1)
- pressure correction (1)
- primal-dual algorithm (1)
- probability distribution (1)
- projective surfaces (1)
- proximation (1)
- quadrinomial tree (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rare disasters (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- real quadratic number fields (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regression analysis (1)
- regularization methods (1)
- rheology (1)
- sampling (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- second class group (1)
- seismic tomography (1)
- semigroup of values (1)
- sheaf theory (1)
- similarity measures (1)
- singularities (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- spherical approximation (1)
- sputtering process (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastische Arbitrage (1)
- stop- and play-operator (1)
- subgradient (1)
- superposed fluids (1)
- surface measures (1)
- surrogate algorithm (1)
- syzygies (1)
- tail dependence coefficient (1)
- tax (1)
- tensions (1)
- time delays (1)
- topological asymptotic expansion (1)
- toric geometry (1)
- torische Geometrie (1)
- total variation (1)
- total variation spatial regularization (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- tropical geometry (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- value semigroup (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector spherical harmonics (1)
- vectorial wavelets (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- viscoelastic fluids (1)
- volatility arbitrage (1)
- vortex separation (1)
- well-posedness (1)
- worst-case (1)
- worst-case scenario (1)
- Äquisingularität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (232)
- Fraunhofer (ITWM) (2)

The main purpose of this study is to improve the modelling of the physical properties of compressed materials, especially fibrous materials. Fibrous materials are finding increasing industrial application, and most of them are compressed for their respective uses. In this situation we are interested in how the fibres are arranged, i.e. which directional distribution they follow. For a given material, a three-dimensional image can be obtained via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the fibre directions at individual points, can be extracted from the image by other methods, it is beneficial to improve the physical properties by adjusting these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for estimating the parameters of a parametric distribution on the unit sphere. This distribution is flexible enough to contain several well-known distributions, e.g. the von Mises-Fisher distribution and the Watson distribution, as special cases, and for some data it provides a better fit. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an expectation-maximization setting. Furthermore, we introduce a non-parametric variant of the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
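
The classical building blocks mentioned above can be illustrated with a short sketch. The following Python snippet fits a plain von Mises-Fisher distribution (not the more general family introduced in the thesis) by the standard approximate maximum-likelihood estimator; the concentration formula is the closed-form approximation of Banerjee et al., and the synthetic sampling scheme is a crude stand-in rather than an exact vMF sampler:

```python
import numpy as np

def fit_vmf(X):
    """Approximate ML fit of a von Mises-Fisher distribution on S^(d-1).

    X: (n, d) array of unit vectors. Returns (mu_hat, kappa_hat) using the
    resultant-length estimator with a closed-form approximation for kappa.
    """
    n, d = X.shape
    r = X.sum(axis=0)                      # resultant vector
    R = np.linalg.norm(r)
    mu = r / R                             # ML estimate of the mean direction
    Rbar = R / n                           # mean resultant length in (0, 1)
    kappa = Rbar * (d - Rbar**2) / (1.0 - Rbar**2)
    return mu, kappa

rng = np.random.default_rng(0)
true_mu = np.array([0.0, 0.0, 1.0])
# crude concentrated sample: perturb the mean direction and renormalize
X = true_mu + 0.2 * rng.standard_normal((2000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
mu_hat, kappa_hat = fit_vmf(X)
```

The same resultant-vector statistic reappears inside each M-step when such distributions are combined into the mixture models discussed above.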

This thesis is devoted to stochastic optimization problems in various situations, treated with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing expected utility of real terminal wealth in the presence of an index bond. Chapter 4, a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it shows how to deal with the optimization problem when there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete; a technique for completing an incomplete market is presented there. The general theory supporting the subsequent discussion is summarized in the first chapter.
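
As a point of reference for the basic problems of Chapter 2, the following sketch treats the textbook Merton problem with power utility in a Black-Scholes market, where the martingale method (like dynamic programming) yields a constant optimal stock fraction. The parameter values are illustrative only, and the Monte Carlo search merely cross-checks the closed-form fraction:

```python
import numpy as np

# Classical Merton example: maximize E[W_T^gamma / gamma] in a Black-Scholes
# market. The optimal stock fraction is pi* = (mu - r) / ((1 - gamma) sigma^2);
# we verify it by brute force over constant-mix strategies with common
# random numbers.
mu, r, sigma, gamma, T = 0.08, 0.02, 0.25, 0.5, 1.0
pi_star = (mu - r) / ((1.0 - gamma) * sigma**2)

rng = np.random.default_rng(1)
Z = rng.standard_normal(200_000)

def expected_utility(pi):
    # terminal wealth of a constant-mix strategy with fraction pi in the stock
    drift = r + pi * (mu - r) - 0.5 * pi**2 * sigma**2
    WT = np.exp(drift * T + pi * sigma * np.sqrt(T) * Z)
    return np.mean(WT**gamma / gamma)

grid = np.linspace(0.0, 3.0, 61)
best = grid[np.argmax([expected_utility(p) for p in grid])]
```

Reusing the same normal draws `Z` across all candidate fractions keeps the empirical utility curve smooth, so its maximizer lands close to the analytical value.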

The thesis at hand deals with the numerical solution of multiscale problems arising in the modeling of processes in fluid dynamics and thermodynamics. Many of these processes, governed by partial differential equations, are relevant in engineering, geoscience, and environmental studies. More precisely, this thesis discusses the efficient numerical computation of effective macroscopic thermal conductivity tensors of high-contrast composite materials. The term "high-contrast" refers to large variations in the conductivities of the constituents of the composite. Additionally, this thesis deals with the numerical solution of Brinkman's equations. This system of equations adequately models viscous flows in (highly) permeable media. It was introduced by Brinkman in 1947 to reduce the deviations between the measurements for flows in such media and the predictions according to Darcy's model.
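
For intuition on effective conductivities, here is a minimal sketch for the simplest case with a closed-form answer, a two-phase laminate; this is a textbook formula, not the numerical upscaling procedure developed in the thesis. Note how the high contrast (here 100:1) drives the in-plane and through-thickness components far apart:

```python
# For a periodic laminate of two materials with conductivities k1, k2 and
# volume fractions f, 1 - f, the effective conductivity tensor is diagonal:
# arithmetic mean parallel to the layers, harmonic mean across them.
def laminate_effective_conductivity(k1, k2, f):
    k_parallel = f * k1 + (1 - f) * k2          # in-plane component
    k_perp = 1.0 / (f / k1 + (1 - f) / k2)      # through-thickness component
    return k_parallel, k_perp

# high-contrast example: conductivities 100 and 1, equal volume fractions
k_par, k_perp = laminate_effective_conductivity(100.0, 1.0, 0.5)
```

The two components (50.5 versus roughly 2) bracket the behaviour of general microstructures, which is exactly why high contrast makes the numerical computation of effective tensors delicate.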

In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Under the assumption that linear equations over the coefficient ring are solvable, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, possibly containing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic system-on-chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications arising from industrial challenges.
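
For readers unfamiliar with the objects involved, the following SymPy snippet computes a Gröbner basis in the classical situation of an ideal over a field; the standard bases over localized polynomial rings and principal ideal rings treated in the thesis generalize this setting and are not covered by SymPy:

```python
from sympy import groebner, symbols

# Toy illustration: the reduced Groebner basis of the ideal (x^2 + y, x*y - 1)
# over Q with respect to the lexicographic ordering x > y. Buchberger's
# algorithm produces the reduced basis {x + y^2, y^3 + 1}.
x, y = symbols('x y')
G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
basis = list(G.exprs)
```

With a lexicographic ordering the basis eliminates `x`, so the univariate element `y**3 + 1` describes the projection of the solution set onto the `y`-coordinate.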
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. This class of rank tests may loosely be described as being based on counting the linear extensions of given partial orders. In order to apply these tests to actual data, we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebrae. Our rankings proved valuable to the biologists.
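
The combinatorial core of such rank tests, counting linear extensions of a partial order, can be sketched by brute force for tiny posets (the algorithms developed in the thesis are of course far more sophisticated); the example poset below is chosen purely for illustration:

```python
from itertools import permutations

# Brute-force linear-extension count for a small partial order, the basic
# quantity behind the rank tests described above. The poset is given as a
# list of order relations a < b on elements 0 .. n-1.
def count_linear_extensions(n, relations):
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}     # position of each element
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# The "V"-shaped poset 0 < 1, 0 < 2 on three elements has 2 linear extensions.
ext = count_linear_extensions(3, [(0, 1), (0, 2)])
```

Enumerating all `n!` permutations is only feasible for very small posets; counting linear extensions is #P-complete in general, which is what makes dedicated algorithms necessary.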

In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.

In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
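
The harmonic balance idea can be sketched on the single-degree-of-freedom Duffing oscillator; the parameter values below are illustrative, and only the first harmonic is retained, whereas the thesis applies the method with a truncated Fourier series to large discretized systems:

```python
import numpy as np
from scipy.optimize import fsolve

# One-harmonic harmonic-balance sketch for the Duffing oscillator
#   x'' + delta*x' + alpha*x + beta*x^3 = F*cos(omega*t).
# With the ansatz x(t) = A*cos(omega*t) + B*sin(omega*t) and the
# first-harmonic approximation x^3 ~ (3/4)*(A^2 + B^2)*x, balancing the
# cos and sin terms yields two algebraic equations for (A, B).
alpha, beta, delta, F, omega = 1.0, 0.05, 0.1, 0.3, 1.2

def residual(c):
    A, B = c
    amp2 = A**2 + B**2
    eq_cos = (alpha - omega**2) * A + delta * omega * B \
        + 0.75 * beta * amp2 * A - F
    eq_sin = (alpha - omega**2) * B - delta * omega * A \
        + 0.75 * beta * amp2 * B
    return [eq_cos, eq_sin]

A, B = fsolve(residual, [0.1, 0.1])   # solve the harmonic-balance equations
amplitude = np.hypot(A, B)            # steady-state response amplitude
```

Sweeping `omega` and re-solving at each point traces out the nonlinear frequency response curve; in the full method the scalar system is replaced by the reduced discretized equation of motion with many harmonics.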
A fundamental aspect of enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. To this end, we propose the use of a modal projection method enhanced with modal derivatives, which provide second-order information. We investigate the concept of modal derivatives theoretically, and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.

In this text we survey some large deviation results for diffusion processes. The first chapters present results from the literature, such as the Freidlin-Wentzell theorem for diffusions with small noise. We use these results to prove a new large deviation theorem for diffusion processes with strong drift; this is the main result of the thesis. In the later chapters we give another application of large deviation results, namely determining the exponential decay rate of the Bayes risk when separating two different processes. The final chapter presents techniques which help to experiment with rare events for diffusion processes by means of computer simulations.
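
In the spirit of that final chapter, here is a minimal simulation experiment (with illustrative parameters, not taken from the thesis): for an Ornstein-Uhlenbeck-type diffusion with small noise, naive Monte Carlo shows how quickly the probability of reaching a distant level collapses as the noise parameter decreases:

```python
import numpy as np

# Estimate the rare-event probability p(eps) = P(max_{t<=T} X_t >= a) for the
# small-noise diffusion dX = -X dt + eps dW, X_0 = 0, by Euler-Maruyama
# simulation; large deviation theory predicts exponential decay in 1/eps^2.
def exit_probability(eps, a=1.0, T=1.0, n_steps=200, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.zeros(n_paths)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        x += -x * dt + eps * np.sqrt(dt) * rng.standard_normal(n_paths)
        hit |= x >= a                     # record paths that reached the level
    return hit.mean()

p_big, p_small = exit_probability(0.8), exit_probability(0.4)
```

Halving `eps` already makes direct hits so rare that plain Monte Carlo becomes inefficient, which is precisely the regime where importance-sampling techniques guided by the large deviation rate function pay off.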

In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion, and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices, supplemented by analysts' opinions as proposed in [CLMZ06]. The best estimate given these observations is provided by the well-known Kalman-Bucy filter. We then use an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
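
A much-simplified discrete-time analogue of this filtering step can be sketched as follows; the snippet uses a single stock and a plain normal-normal Bayesian update (without expert opinions), so it is only a cartoon of the Kalman-Bucy filter used in the thesis:

```python
import numpy as np

# The drift mu is a Gaussian random variable; we update its conditional
# distribution from observed log-returns by a scalar Kalman update.
def filter_drift(log_returns, dt, sigma, m0, v0):
    m, v = m0, v0
    for r in log_returns:
        y = r / dt + 0.5 * sigma**2        # unbiased noisy observation of mu
        obs_var = sigma**2 / dt            # variance of that observation
        k = v / (v + obs_var)              # Kalman gain
        m = m + k * (y - m)                # posterior mean
        v = (1.0 - k) * v                  # posterior variance
    return m, v

rng = np.random.default_rng(2)
mu_true, sigma, dt, n = 0.07, 0.2, 1.0 / 250.0, 5000   # ~20 years of daily data
r = (mu_true - 0.5 * sigma**2) * dt \
    + sigma * np.sqrt(dt) * rng.standard_normal(n)
m_hat, v_hat = filter_drift(r, dt, sigma, m0=0.0, v0=1.0)
```

Even with twenty years of daily data the posterior standard deviation of the drift remains of the order of several percentage points, which illustrates why drift uncertainty, and hence the partial information setting, matters so much in portfolio optimization.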
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we follow this duality approach and, using stochastic control methods, first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain regions where the corresponding Hamilton-Jacobi-Bellman equation can be solved. It turns out that these regions have a straightforward interpretation in terms of the resulting portfolio strategy: they distinguish between active stocks, which are invested in, and passive stocks, which are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real-world market than the setting used to derive our theoretical results. We still obtain compelling results indicating that our optimal strategies can in general outperform benchmark strategies in a real market.
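The drift-filtering step described above can be sketched in its simplest scalar form (an illustrative reduction, not the thesis's multi-asset implementation): a Kalman-Bucy filter for an unknown Gaussian drift observed through noisy return increments. All parameters below are made up for the example.

```python
import math
import random

def kalman_bucy_drift(increments, dt, sigma, m0, v0):
    """Scalar Kalman-Bucy filter for an unknown Gaussian drift mu observed
    through dR = mu dt + sigma dW (Euler-discretized filter equations)."""
    m, v = m0, v0
    for dR in increments:
        m += (v / sigma ** 2) * (dR - m * dt)   # innovation update of the mean
        v -= (v ** 2 / sigma ** 2) * dt         # Riccati equation for the variance
    return m, v

# Simulate ten years of daily increments with true drift 0.08 and check
# that the posterior variance of the drift estimate tightens.
rng = random.Random(1)
true_mu, sigma, dt = 0.08, 0.2, 1.0 / 252
incs = [true_mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        for _ in range(2520)]
m, v = kalman_bucy_drift(incs, dt, sigma, m0=0.0, v0=0.04)
```

The posterior variance decays like v0 / (1 + v0 t / sigma^2), which quantifies how slowly drift information accumulates from price observations alone; this is what makes the additional expert-opinion observations valuable.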

This dissertation consists of two main parts: new results in Gaussian analysis and their application to the theory of path integrals. The central result of the first part is the characterization of all regular distributions that can be multiplied by Donsker's delta; an explicit formula for such products, the so-called Wick formula, is given. In the applications part of this work, a complex-scaled Feynman-Kac formula and its associated kernels are first established with the help of this Wick formula. Furthermore, Feynman integrands for new classes of potentials are constructed as white noise distributions.

Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases, such as cancer metastasis. For the migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating the contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
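The structure of a piecewise deterministic process, deterministic flow between random jump times, can be sketched in a toy setting (an illustrative assumption, not the thesis's model): a cell moving at constant speed whose direction flips with the binding state of a single adhesion variable.

```python
import random

def pdmp_position(T, rate_on, rate_off, speed, rng):
    """Minimal piecewise deterministic process: the cell moves with velocity
    +speed while adhesion is bound ('on') and -speed while unbound ('off');
    the binding state flips after exponentially distributed waiting times.
    Between jumps the position follows a deterministic flow."""
    t, x, bound = 0.0, 0.0, True
    while t < T:
        rate = rate_off if bound else rate_on
        tau = rng.expovariate(rate)              # time to the next stochastic jump
        step = min(tau, T - t)
        x += (speed if bound else -speed) * step  # deterministic motion
        t += step
        if step == tau:
            bound = not bound                     # adhesion kinetics: flip the state
    return x

rng = random.Random(42)
positions = [pdmp_position(10.0, 1.0, 1.0, 1.0, rng) for _ in range(500)]
```

Even this caricature shows the characteristic ballistic-then-diffusive behavior of run-and-switch motion; the thesis couples the jump rates to contractility and signaling instead of keeping them constant.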

This thesis addresses several new aspects of continuous-time portfolio optimization using stochastic control methods.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and the resulting problem is solved explicitly for two types of intensity functions.
Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of the Vasicek short rate on the CPPI strategy was studied by Horsky (2012). This part of the thesis extends his research by including a Markov switching parameter, and the generalization is based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem without the Money Market Account as well as for the portfolio problem with the Money Market Account.
Finally, we apply the method used in Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization with a stochastic benchmark.
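The CPPI mechanism studied above can be sketched in discrete time. The following is a minimal illustration with a constant short rate and made-up parameters, not the stochastic-short-rate, Markov-switching setting of the thesis.

```python
import math
import random

def cppi_path(risky_returns, r, dt, v0, floor0, multiplier):
    """Discrete-time CPPI: at every step invest multiplier * cushion in the
    risky asset and the remainder at the short rate r; the floor accrues
    at the short rate as well."""
    V, floor = v0, floor0
    for R in risky_returns:
        cushion = max(V - floor, 0.0)
        exposure = min(multiplier * cushion, V)   # no leverage in this sketch
        V = exposure * (1.0 + R) + (V - exposure) * (1.0 + r * dt)
        floor *= 1.0 + r * dt
    return V, floor

# One year of daily lognormal-style returns with illustrative parameters.
rng = random.Random(7)
dt, r, mu, sigma = 1.0 / 252, 0.02, 0.06, 0.2
rets = [mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) for _ in range(252)]
V, floor = cppi_path(rets, r, dt, v0=100.0, floor0=90.0, multiplier=4.0)
```

In discrete time the floor is only guaranteed up to gap risk (a single-period loss exceeding 1/multiplier breaks it), which is one reason the continuous-time analysis with stochastic rates is of interest.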

In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n closed under minima, with a conductor, and fulfilling special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay's inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive-dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive-dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.

The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes, such as costs, arc and node capacities, and supply at the source node, can be constant or time-dependent. Here we review results on discrete-time dynamic network flow problems (DTDNFPs) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm for the static network flow problem to account for the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two, in general contradicting, objective functions and introduce a time dependency of the cost caused by a travel time on each arc. These problems have several interesting practical applications but have not attracted much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms.
We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds of the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
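The classical reduction behind such dynamic flow computations, building a time-expanded network and running a static maximum flow algorithm on it, can be sketched as follows for constant travel times and capacities. This is an illustrative baseline only; the thesis's algorithms for time-dependent attributes avoid materializing this (potentially huge) expanded network.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a nested-dict capacity map."""
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:          # recover the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][w] for u, w in path)
        for u, w in path:                     # update residual capacities
            cap[u][w] -= bottleneck
            cap[w][u] += bottleneck
        total += bottleneck

def max_dynamic_flow(arcs, source, sink, T):
    """Maximum dynamic flow with constant attributes via the classical
    time-expanded network: arc (u, v, capacity, travel_time) becomes
    (u, theta) -> (v, theta + travel_time); holdover (waiting) arcs
    connect consecutive time copies of every node."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c, tau in arcs:
        for theta in range(T - tau + 1):
            cap[(u, theta)][(v, theta + tau)] += c
    nodes = {n for u, v, _, _ in arcs for n in (u, v)}
    for n in nodes:
        for theta in range(T):
            cap[(n, theta)][(n, theta + 1)] += 10 ** 9   # unlimited waiting
    return max_flow(cap, (source, 0), (sink, T))

arcs = [("s", "a", 2, 1), ("a", "t", 2, 1)]
flow = max_dynamic_flow(arcs, "s", "t", T=3)   # 2 units arrive at time 2, 2 at time 3
```

The waiting arcs implement the "park at a node" option mentioned above; removing them (or bounding their capacity) models restricted waiting.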

Image restoration and enhancement methods that respect important features such as edges play a fundamental role in digital image processing. In the last decades a large variety of methods have been proposed. Nevertheless, the correct restoration and preservation of, e.g., sharp corners, crossings or texture in images is still a challenge, in particular in the presence of severe distortions. Moreover, in the context of image denoising many methods are designed for the removal of additive Gaussian noise, and their adaptation to other types of noise occurring in practice usually requires additional effort. The aim of this thesis is to contribute to these topics and to develop and analyze new methods for restoring images corrupted by different types of noise:
First, we present variational models and diffusion methods which are particularly well suited for the restoration of sharp corners and X junctions in images corrupted by strong additive Gaussian noise. For their deduction we present and analyze different tensor-based methods for locally estimating orientations in images and show how to successfully incorporate the obtained information into the denoising process. The advantageous properties of the obtained methods are shown theoretically as well as by numerical experiments. Moreover, the potential of the proposed methods is demonstrated for applications beyond image denoising.
Afterwards, we focus on variational methods for the restoration of images corrupted by Poisson and multiplicative Gamma noise. Here, different methods from the literature are compared and the surprising equivalence between a standard model for the removal of Poisson noise and a recently introduced approach for multiplicative Gamma noise is proven. Since this Poisson model had not been considered for multiplicative Gamma noise before, we investigate its properties further for more general regularizers, including nonlocal ones. Moreover, an efficient algorithm for solving the involved minimization problems is proposed, which can also handle an additional linear transformation of the data. The good performance of this algorithm is demonstrated experimentally and different examples with images corrupted by Poisson and multiplicative Gamma noise are presented.
In the final part of this thesis new nonlocal filters for images corrupted by multiplicative noise are presented. These filters are deduced in a weighted maximum likelihood estimation framework, and for the definition of the involved weights a new similarity measure for the comparison of data corrupted by multiplicative noise is applied. The advantageous properties of the new measure are demonstrated theoretically and by numerical examples. Besides, denoising results for images corrupted by multiplicative Gamma and Rayleigh noise show the very good performance of the new filters.
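A toy version of such a nonlocal filter for multiplicative noise can be sketched in 1-D. The log-ratio patch distance below is an illustrative stand-in for the similarity measure developed in the thesis (under a multiplicative model, log-ratios of equally bright pixels depend on the noise only), and all parameters are assumptions.

```python
import math
import random

def nl_filter_multiplicative(f, patch=2, h=0.5):
    """Minimal 1-D nonlocal filter for multiplicative noise: each output
    value is a weighted average of all input values, with weights from a
    log-ratio patch distance (natural for multiplicative models)."""
    n = len(f)
    logf = [math.log(v) for v in f]
    out = [0.0] * n
    for i in range(n):
        wsum, acc = 0.0, 0.0
        for j in range(n):
            d = 0.0
            for k in range(-patch, patch + 1):   # clamped patch comparison
                a = min(max(i + k, 0), n - 1)
                b = min(max(j + k, 0), n - 1)
                d += (logf[a] - logf[b]) ** 2
            w = math.exp(-d / (h * h * (2 * patch + 1)))
            wsum += w
            acc += w * f[j]
        out[i] = acc / wsum
    return out

# Piecewise-constant signal corrupted by mean-one multiplicative Gamma noise.
rng = random.Random(3)
clean = [1.0] * 30 + [4.0] * 30
noisy = [c * rng.gammavariate(10.0, 1.0 / 10.0) for c in clean]
den = nl_filter_multiplicative(noisy)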

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to enable the incorporation of a simulation of the casting process into the optimization. In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update: a level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.

Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers and digital cameras. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight, high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode - as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases - solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which we can distinguish the complex porous structure of the electrodes the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions.
We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
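The Butler-Volmer interface condition mentioned above has a standard closed form for the current density as a function of the overpotential; a small sketch, with textbook symmetric transfer coefficients as an assumption, is:

```python
import math

F = 96485.33212      # Faraday constant, C/mol
R = 8.314462618      # universal gas constant, J/(mol K)

def butler_volmer(eta, j0, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Butler-Volmer interface current density as a function of the
    overpotential eta (V); j0 is the exchange current density."""
    f = F / (R * T)
    return j0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
```

The exponential dependence on eta is the source of the strong nonlinearity of the interface coupling, and determining the asymptotic order of j0 in the small scale parameter is exactly the delicate step in the homogenization described above.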

Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life - they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects - heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as to observe the individual heat sources and assess their contributions to the total heat generation. From our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches - small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model consists of a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material dependent on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress known from physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities, for which we adapt the FEM accordingly. The time derivatives are discretized using the implicit backward Euler method. The nonlinear systems are linearized using Newton's method. All of the discretized models are implemented in a C++ framework developed during the thesis.
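The time-stepping workflow described in the last paragraph, backward Euler with a Newton linearization at every step, can be sketched on a scalar model problem (u' = -u^3 is an illustrative choice, not an equation from the thesis):

```python
def backward_euler_newton(u0, dt, steps, tol=1e-12, newton_max=20):
    """Backward Euler for u' = -u^3; each implicit step solves the scalar
    nonlinear equation g(x) = x + dt*x^3 - u_n = 0 by Newton's method."""
    u = u0
    for _ in range(steps):
        x = u                                  # initial guess: previous value
        for _ in range(newton_max):
            g = x + dt * x ** 3 - u            # residual of the implicit step
            if abs(g) < tol:
                break
            x -= g / (1.0 + 3.0 * dt * x * x)  # Newton update with g'(x)
        u = x
    return u

# Exact solution is u(t) = u0 / sqrt(1 + 2*u0^2*t), so u(1) = 1/sqrt(3).
u = backward_euler_newton(1.0, dt=0.01, steps=100)
```

In the PDE setting each Newton step is of course a linear system assembled by the FEM rather than a scalar division, but the nesting (time loop around a Newton loop around a linear solve) is the same.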

In this dissertation, we discuss how to price American-style options. Our aim is to study and improve regression-based Monte Carlo methods. In order to have good benchmarks to compare them with, we also study tree methods.
In the second chapter, we investigate the tree methods, first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options, which are all collected in the appendix.
In the third chapter, we focus on the regression-based Monte Carlo methods theoretically and numerically. First, we introduce two variants, the so-called Tsitsiklis-Roy method and the Longstaff-Schwartz method. Second, we illustrate the approximation of an American option by its Bermudan counterpart. Third, we explain the sources of low bias and high bias. Fourth, we compare the two methods using in-the-money paths and all paths. Fifth, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the lower and upper bounds.
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy and Longstaff-Schwartz methods. The support vector machine further improves on the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. Tests within the Black-Scholes model show that the storage of data is reduced, and with it the corresponding energy consumption.
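A minimal sketch of the Longstaff-Schwartz method discussed in the third chapter, with the monomial basis (1, S, S^2) and the regression restricted to in-the-money paths, is given below. The parameters are the classic American-put test case; the implementation is illustrative rather than the thesis's code.

```python
import math
import random

def lstsq3(X, y):
    """Solve the 3x3 normal equations (X^T X) b = X^T y by Gaussian elimination."""
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda k: abs(A[k][c]))   # partial pivoting
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for k in range(c + 1, 3):
            m = A[k][c] / A[c][c]
            for j in range(c, 3):
                A[k][j] -= m * A[c][j]
            b[k] -= m * b[c]
    beta = [0.0, 0.0, 0.0]
    for c in (2, 1, 0):
        beta[c] = (b[c] - sum(A[c][j] * beta[j] for j in range(c + 1, 3))) / A[c][c]
    return beta

def lsm_american_put(S0, K, r, sigma, T, steps, paths, seed=0):
    """Longstaff-Schwartz lower-bound estimate of an American put price:
    regress discounted continuation values on (1, S, S^2) over the
    in-the-money paths, stepping backwards through the exercise dates."""
    rng = random.Random(seed)
    dt, disc = T / steps, math.exp(-r * T / steps)
    S = []
    for _ in range(paths):
        path, s = [S0], S0
        for _ in range(steps):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            path.append(s)
        S.append(path)
    cash = [max(K - p[-1], 0.0) for p in S]        # exercise value at maturity
    for t in range(steps - 1, 0, -1):
        itm = [i for i in range(paths) if K > S[i][t]]
        exercised = set()
        if len(itm) >= 3:
            X = [(1.0, S[i][t], S[i][t] ** 2) for i in itm]
            y = [disc * cash[i] for i in itm]
            beta = lstsq3(X, y)
            for i in itm:
                cont = beta[0] + beta[1] * S[i][t] + beta[2] * S[i][t] ** 2
                if K - S[i][t] > cont:             # early exercise is optimal
                    cash[i] = K - S[i][t]
                    exercised.add(i)
        for i in range(paths):
            if i not in exercised:
                cash[i] *= disc                    # keep the option alive
    return disc * sum(cash) / paths

price = lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                         steps=50, paths=4000, seed=11)
```

The estimator is biased low because the regression yields a suboptimal exercise rule; pairing it with a duality-based upper bound, as in the Andersen-Broadie method mentioned above, brackets the true price.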

Competing Neural Networks as Models for Non Stationary Financial Time Series -Changepoint Analysis-
(2005)

The problem of structural changes (variations) plays a central role in many scientific fields. One of the most topical debates is that about climatic change, in which politicians, environmentalists, scientists, etc. are involved, and almost everyone is concerned with the consequences. In this thesis, however, we will not move in the latter direction, i.e. the study of climatic changes. Instead, we consider models for analyzing changes in the dynamics of observed time series, assuming these changes are driven by a non-observable stochastic process. To this end, we consider a first-order stationary Markov chain as hidden process and define the Generalized Mixture of AR-ARCH model (GMAR-ARCH), an extension of the classical ARCH model suited to modelling dynamical changes. For this model we provide sufficient conditions that ensure geometric ergodicity. Further, we define a conditional likelihood given the hidden process and, in turn, a pseudo conditional likelihood. For the pseudo conditional likelihood we assume that at each time instant the autoregressive and volatility functions can be suitably approximated by given feedforward networks. Under this setting the consistency of the parameter estimates is derived, and versions of the well-known Expectation Maximization algorithm and the Viterbi algorithm are designed to solve the problem numerically. Moreover, taking the volatility functions to be constant, we establish the consistency of the estimates of the autoregressive functions for some parametric classes of functions in general and some classes of single-layer feedforward networks in particular. Besides this hidden-Markov-driven model, we define as an alternative a weighted least squares estimator for the time of change and the autoregressive functions.
For the latter formulation, we consider a mixture of independent nonlinear autoregressive processes and assume once more that the autoregressive functions can be approximated by given single-layer feedforward networks. We derive the consistency and asymptotic normality of the parameter estimates. Further, we prove the convergence of backpropagation for this setting under some regularity assumptions. Last but not least, we consider a mixture of nonlinear autoregressive processes with a single abrupt unknown changepoint and design a statistical test that can validate such changes.
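The Viterbi algorithm mentioned above can be sketched for a toy two-regime hidden Markov model with a single changepoint; all probabilities here are illustrative assumptions, not estimates from the GMAR-ARCH model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi algorithm: most likely hidden state path of an HMM,
    here used to locate a regime change in an observed sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev_col, col = V[-1], {}
        for s in states:
            prob, best_prev = max(
                (prev_col[r][0] * trans_p[r][s] * emit_p[s][o], r) for r in states
            )
            col[s] = (prob, best_prev)
        V.append(col)
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):     # backtrack the argmax pointers
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Two sticky regimes emitting mostly "a" or mostly "b"; the decoded path
# should switch exactly where the observations do.
states = (0, 1)
start = {0: 0.5, 1: 0.5}
trans = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
emit = {0: {"a": 0.8, "b": 0.2}, 1: {"a": 0.2, "b": 0.8}}
path = viterbi("aaaabbbb", states, start, trans, emit)
```

For long series the products should be replaced by sums of log probabilities to avoid underflow; the argmax structure is unchanged.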

The topic of this thesis is the coupling of an atomistic and a coarse scale region in molecular dynamics simulations, with a focus on the reflection of waves at the interface between the two scales and the velocity of waves in the coarse scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine scale waves at the interface, while the bridging scales method gives an approximated system that is not energy conserving. The velocity of waves in the coarse scale region is not correct in either model. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with a choice of appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations for the fine and coarse scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy-conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian, the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from the coarse to the atomistic scale in such a way that the fine scale degrees of freedom in the coarse scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse scale equations and the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse scale region.
We give an alternative derivation of the new coupling via the Mori-Zwanzig projection operator formalism and explain how the method can be extended to non-zero temperature simulations. To compare the results of the approximated system with those of the fully atomistic system, we use a local stress tensor and the energy in the atomistic region. Examples for the numerical solution of the approximated system with harmonic potentials are given in one and two dimensions.

In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres, the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions, a novel moment-based description of the fibre orientation state is introduced, and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first-order hyperbolic system for the fibre orientation variables, is analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution is demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite volume based discretization of the suspension flow is given, and numerical results for several two- and three-dimensional domains with different parameter values are presented and discussed.

In this work two main approaches for the evaluation of credit derivatives are analyzed: the copula-based approach and the Markov chain-based approach. This work makes it possible to use the advantages and avoid the disadvantages of both approaches. For example, modeling contagion effects, i.e. dependencies between counterparty defaults, is complicated under the copula approach; one remedy is to use a Markov chain, where it can be done directly. The work consists of five chapters. The first chapter extends the model for the pricing of CDS contracts presented in the paper by Kraft and Steffensen (2007). In the widely used models for CDS pricing it is assumed that only the borrower can default. In our model we assume that each of the counterparties involved in the contract may default. Calculated contract prices are compared with those calculated under the usual assumptions. All results are summarized in the form of numerical examples and plots. In the second chapter the copula and its main properties are described. Methods of constructing copulas as well as the most common copula families and their properties are introduced. In the third chapter a method of constructing a copula for an existing Markov chain is introduced. The cases with two and three counterparties are considered. Necessary relations between the transition intensities are derived to directly find some copula functions. Formulas for default dependence measures such as Spearman's rho and Kendall's tau are derived for the defined copulas. Several numerical examples are presented in which copulas are built for given Markov chains. The fourth chapter deals with the approximation of copulas when a copula cannot be provided explicitly for a given Markov chain. The fifth chapter concludes this thesis.
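The link between a copula and Kendall's tau used in the third chapter can be illustrated with a standard textbook example (the Clayton family, not one of the Markov-chain-derived copulas of the thesis), for which tau equals theta/(theta+2):

```python
import random

def clayton_sample(theta, n, seed=0):
    """Draw from a bivariate Clayton copula by conditional inversion:
    C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()   # uniforms in (0, 1], avoiding u = 0
        w = 1.0 - rng.random()
        v = ((w ** (-theta / (theta + 1.0)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        out.append((u, v))
    return out

def kendall_tau(pairs):
    """Empirical Kendall's tau from concordant/discordant pair counts."""
    n, s = len(pairs), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (pairs[i][0] - pairs[j][0]) * (pairs[i][1] - pairs[j][1])
            s += 1 if d > 0 else -1
    return s / (n * (n - 1) / 2)

theta = 2.0
tau_hat = kendall_tau(clayton_sample(theta, 400, seed=5))
# for the Clayton family the theoretical Kendall's tau is theta / (theta + 2)
```

The same empirical estimator applies to copulas constructed from Markov chain transition intensities, for which the thesis derives the corresponding closed-form dependence measures.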

The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral regions, this is impossible. Depending on the spectroscopic technique, the way the data is acquired differs, as do the characteristics that are to be detected. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications. These include in particular the security sector and the pharmaceutical industry.
Due to the fact that in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the necessary influence of the user should be minimized on the one hand, while on the other hand the adaptation to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State of the art methods will be analyzed and supplemented where necessary, improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, feature reduction that represents chemical content in about 20 channels instead of the initial hundreds will be presented. Finally the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
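The pipeline of feature reduction followed by hierarchical clustering can be sketched on synthetic data (this toy example is not the thesis's method; the spectra, the PCA-based reduction, and the Ward linkage are all illustrative assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
wavenumbers = np.linspace(0.0, 1.0, 300)  # hypothetical spectral axis

def spectrum(center, n):
    """Synthetic absorption spectra: one Gaussian band plus noise."""
    base = np.exp(-((wavenumbers - center) ** 2) / 0.005)
    return base + 0.05 * rng.standard_normal((n, wavenumbers.size))

# two "materials", 50 pixels each
data = np.vstack([spectrum(0.3, 50), spectrum(0.7, 50)])

# feature reduction: project the spectra onto the leading principal components
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:3].T  # 300 channels -> 3 features

# hierarchical clustering on the reduced features
labels = fcluster(linkage(features, method="ward"), t=2, criterion="maxclust")
```

On this clearly separated toy data, the two pixel groups end up in two distinct clusters.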

Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the existence of a numerical Godeaux surface with \(\tilde{h}=1\).

This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
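A core ingredient of such modular methods is recovering rational coefficients from their images modulo a (product of) prime(s). The following self-contained sketch shows the rational-reconstruction step (Wang's algorithm, extended Euclid stopped at a bound); it is a toy illustration, not SINGULAR's actual implementation:

```python
from math import isqrt

def rational_reconstruction(u, m):
    """Given u = a * b^(-1) (mod m), recover (a, b) with |a|, b <= sqrt(m/2).
    Run the extended Euclidean algorithm on (m, u) and stop once the
    remainder drops below the bound; the cofactor is the denominator."""
    bound = isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = u % m, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("no reconstruction within the bound")
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1

m = 10007                       # in practice: a product of many machine primes
u = (2 * pow(3, -1, m)) % m     # image of the rational number 2/3 modulo m
```

Here `rational_reconstruction(u, m)` recovers the fraction 2/3 from its modular image, which is exactly how results computed modulo several primes are lifted back to the rationals before verification.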
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.

This thesis deals with generalized inverses, multivariate polynomial interpolation and approximation of scattered data. Moreover, it covers the lifting scheme, which basically links the aforementioned topics. For instance, determining filters for the lifting scheme is connected to multivariate polynomial interpolation. More precisely, sets of interpolation sites are required that can be interpolated by a unique polynomial of a certain degree. In this thesis a new class of such sets is introduced and elements from this class are used to construct new and computationally more efficient filters for the lifting scheme.
Furthermore, a method to approximate multidimensional scattered data is introduced which is based on the lifting scheme. A major task in this method is to solve an ordinary linear least squares problem which possesses a special structure. Exploiting this structure yields better approximations and therefore this particular least squares problem is analyzed in detail. This leads to a characterization of special generalized inverses with partially prescribed image spaces.
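The lifting scheme mentioned above follows a split/predict/update pattern. As a minimal, standard illustration (the classical Haar filters, not the new filters constructed in the thesis), one level of the transform and its exact inverse look as follows:

```python
import numpy as np

def lift_forward(x):
    """One level of the Haar wavelet transform in lifting form:
    split -> predict (detail) -> update (coarse approximation)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: the even neighbour predicts the odd sample
    s = even + d / 2.0      # update: preserves the running mean of the signal
    return s, d

def lift_inverse(s, d):
    """Undo the lifting steps in reverse order with flipped signs."""
    even = s - d / 2.0
    odd = d + even
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 5.0, 9.0])
s, d = lift_forward(x)                      # s = [5., 7.], d = [2., 4.]
assert np.allclose(lift_inverse(s, d), x)   # perfect reconstruction
```

Because each lifting step is trivially invertible, perfect reconstruction holds by construction, regardless of the chosen predict and update filters; constructing better such filters is precisely where the interpolation results of the thesis enter.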

This thesis is concerned with the characters of the normalizer and the centralizer of a Sylow torus. Every group G of Lie type is regarded as the fixed-point group of a simply connected simple group under a Frobenius map. For every Sylow torus S of the algebraic group it is shown that the irreducible characters of the centralizer of S in G extend to their inertia group in the normalizer of S. This question arises from the study of height-zero characters of finite reductive groups of Lie type in connection with the McKay conjecture. Recent results of Isaacs, Malle, and Navarro reduce this conjecture to a property of simple groups, which they then call being good for a prime. For groups of Lie type, the above result, together with recent work of Malle, establishes several of the important and necessary properties. Using the Steinberg presentation, more precise statements about the structure of the centralizer and the normalizer of a Sylow torus are proved, above all for the classical groups. An important tool here is the extended Weyl group introduced by Tits, which has strong connections to braid groups. The result is proved in numerous case-by-case analyses, which make use of inheritance rules for extendability properties established in this thesis.

In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second order analysis which we have defined as the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data from glaciology. Note that the directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results, we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice-dating models and consequently to give a correct interpretation of the climate information that can be found by analyzing ice cores. In this thesis we consider data from three different ice cores: the Talos Dome core, the EDML core, and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.

This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive, utility-based approach. It relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff.
However, in the presence of proportional transaction costs such a utility-based approach to the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to show the FTAP under proportional transaction costs.
To ensure the existence of a utility-maximizing payoff, we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process; the second component is given by the bid-ask prices depending on the investor's optimal action. Even more is true: near this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backwards induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found by maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which may be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the second kind has so far been investigated only in models with strict costs in every state; in our model, transaction costs need not be present in every state.
For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the single-period case with only one risky asset, mainly for two reasons. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished, we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices. To make the results applicable to the multi-period case, we assume that the prices are given by compact and convex random sets. The multi-period case is then similar to the case with only one risky asset, but more demanding with regard to technical questions.

Traffic flow on road networks has been a continuous source of challenging mathematical problems. Mathematical modelling can provide an understanding of the dynamics of traffic flow and is hence helpful in organizing the flow through the network. In this dissertation, macroscopic models for traffic flow in road networks are presented. The primary interest is the extension of the existing macroscopic road network models based on partial differential equations (PDE model). In order to overcome the high computational costs of the PDE model, an ODE model is introduced. In addition, a steady-state traffic flow model on road networks, named the RSA model, is discussed. To obtain the optimal flow through the network, cost functionals and corresponding optimal control problems are defined. The solution of these optimization problems provides information on the shortest path through the network subject to road conditions. The resulting constrained optimization problem is solved approximately by solving an unconstrained problem involving exact penalty functions and a penalty parameter; a good estimate of the threshold of the penalty parameter is given. A well-defined algorithm for solving a nonlinear, nonconvex equality- and bound-constrained optimization problem is introduced, and the numerical results on the convergence history of the algorithm support the theoretical results. In addition, bottleneck situations in the traffic flow are treated using a domain decomposition method (DDM). In particular, this method can also be used to solve scalar conservation laws with discontinuous flux functions corresponding to other physical problems. The method is effective even when the flux function exhibits more than one discontinuity within the same spatial domain. The numerical results show that the DDM is superior to other schemes and demonstrates good shock resolution.
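The role of the penalty-parameter threshold can be seen on a deliberately simple 1-D problem (a toy example, unrelated to the traffic model itself): for the l1 exact penalty, the constrained minimizer is recovered exactly once the parameter exceeds the magnitude of the KKT multiplier.

```python
import numpy as np

# Toy problem: minimize f(x) = (x - 2)^2 subject to h(x) = x - 1 = 0.
# The KKT multiplier is lambda* = 2, so the exact penalty
#   phi(x) = f(x) + mu * |h(x)|
# has the constrained minimizer x* = 1 as its unconstrained
# minimizer exactly when mu > |lambda*| = 2.
f = lambda x: (x - 2.0) ** 2
h = lambda x: x - 1.0

xs = np.linspace(0.0, 3.0, 30001)

def penalized_minimizer(mu):
    phi = f(xs) + mu * np.abs(h(xs))
    return xs[np.argmin(phi)]

x_small = penalized_minimizer(1.0)  # below the threshold: minimizer drifts to 1.5
x_large = penalized_minimizer(3.0)  # above the threshold: exactly x* = 1
```

This is why estimating the threshold matters: too small a parameter yields an infeasible minimizer, while any value above the threshold solves the constrained problem without driving the parameter to infinity.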

Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well known and widely used model, the so-called three equation new Keynesian model which is a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule.
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously; this means that the costs resulting from changes in the interest rate are ignored. We also showed that approximation errors are made when non-linear equations are log-linearized in the derivation of the standard discrete-time NKPC.
We agreed with other researchers, as mentioned in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of the output gap and the inflation rate.
To overcome this problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by considering the costs of changes in the interest rate and the approximation error terms. In order to formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite and constant variance. After formulating this problem, we use the quasi-variational inequality approach to solve analytically a special case of the central bank problem, where the inflation rate is assumed to be on target and the central bank has to optimally control the output gap dynamics. This method yields an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal size of intervention and the optimal intervention time, that can be used to keep the process within the optimal control band.
Finally, using a numerical example, we examined the impact of some model parameters on optimal control strategy. The results show that an increase in the output gap volatility as well as in the fixed and proportional costs of the change in interest rate lead to an increase in the width of the optimal control band. In this case, the optimal intervention requires the central bank to wait longer before undertaking another control action.

We construct and study two surface measures on the space C([0,T],M) of paths in a compact Riemannian manifold M embedded into the Euclidean space R^n. The first one is induced by conditioning the usual Wiener measure on C([0,T],R^n) to the event that the Brownian particle does not leave the tubular epsilon-neighborhood of M up to time T, and passing to the limit. The second one is defined as the limit of the laws of reflected Brownian motions with reflection on the boundaries of the tubular epsilon-neighborhoods of M. We prove that both surface measures exist and compare them with the Wiener measure W_M on C([0,T],M). We show that the first one is equivalent to W_M and compute the corresponding density explicitly in terms of the scalar curvature and the mean curvature vector of M. Further, we show that the second surface measure coincides with W_M. Finally, we study the limit behavior of both surface measures as T tends to infinity.

We present a numerical scheme to simulate a moving rigid body of arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings, so a two-way coupling is modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains: a 1-dimensional actuator problem, a 2-dimensional driven cavity flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. We compare the results of the numerical simulations with the existing theories for each test example.

The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.

A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)

This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and the Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns, one encounters a multidimensional population balance equation that cannot be fully resolved numerically within a time reasonable for steady-state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena and is still tractable from a computational point of view. This thesis discusses a new model which focuses on the discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method, a special case of the Sectional Quadrature Method of Moments (SQMOM), is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution, namely the total number, solute and volume concentrations, and reduces the computational time compared to classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method, the Finite Pointset Method (FPM), is used to discretize the spatial domain (the extraction column height). This method avoids the extremely difficult discretization of the convective term with classical finite volume methods, which require many grid points to capture the moving fronts propagating along the column height.

This thesis is divided into two parts. Both cope with multi-class image segmentation and utilize non-smooth optimization algorithms.
The topic of the first part, namely unsupervised segmentation, is the application of clustering to image pixels. We start with an introduction of the biconvex center-based clustering algorithms c-means and fuzzy c-means, where c denotes the number of classes. We show that fuzzy c-means can be seen as an approximation of c-means in terms of power means.
Since noise is omnipresent in our image data, these simple clustering models are not suitable for its segmentation. Therefore, we introduce a general and finite-dimensional segmentation model that consists of a data term stemming from the aforementioned clustering models plus a continuous regularization term. We tackle this optimization model via an alternating minimization approach called regularized c-centers (RcC): we fix the centers and optimize the segment membership of the pixels, and vice versa. In this general setting, we prove convergence in the sense of set-valued algorithms using Zangwill's theory [172].
Further, we present a segmentation model with a total variation regularizer. While updating the cluster centers is straightforward for fixed segment memberships of the pixels, updating the segment membership can be solved iteratively via non-smooth, convex optimization. Here we do not iterate a convex optimization algorithm until convergence; instead, to increase efficiency, we stop as soon as we have a certain amount of decrease in the objective functional. This algorithm is a particular implementation of RcC, providing also the corresponding convergence theory. Moreover, we show the good performance of our method in various examples, such as simulated 2d images of brain tissue and 3d volumes of two materials, namely a multi-filament composite superconductor and a carbon fiber reinforced silicon carbide ceramic. For the latter material, our adapted model exploits the property that two of its components have no common boundary.
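The alternating structure underlying such center-based models can be sketched with plain fuzzy c-means (a hedged toy implementation without the regularization term, so this is not the thesis's RcC algorithm): memberships are updated in closed form for fixed centers, and centers are weighted means for fixed memberships.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50):
    """Alternating minimization for fuzzy c-means:
    fix the centers, update the memberships in closed form, and vice versa."""
    centers = X[:: len(X) // c][:c]  # naive deterministic initialisation
    for _ in range(iters):
        # squared distances of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # centers = weighted means
    return u.argmax(axis=1), centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(3.0, 0.3, (100, 2))])
labels, centers = fuzzy_c_means(X, c=2)
```

Each half-step decreases the (biconvex) objective, which is exactly the structure the set-valued convergence analysis above formalizes.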
The second part of the thesis is concerned with supervised segmentation. We leave the area of center-based models and investigate convex approaches related to graph p-Laplacians and reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to construct the graph. In practical experiments we show, on the one hand, image types that are better segmented by the p-Laplacian model and, on the other hand, images that are better segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach provides smoother results, while the RKHS approach often provides more accurate and detailed segmentations. Finally, we propose a novel combination of both approaches to benefit from the advantages of both models and study the performance on challenging medical image data.

This thesis deals with 3 important aspects of optimal investment in real-world financial markets: taxes, crashes, and illiquidity. An introductory chapter reviews the portfolio problem in its historical context and motivates the theme of this work: We extend the standard modelling framework to include specific real-world features and evaluate their significance. In the first chapter, we analyze the optimal portfolio problem with capital gains taxes, assuming that taxes are deferred until the end of the investment horizon. The problem is solved with the help of a modification of the classical martingale method. The second chapter is concerned with optimal asset allocation under the threat of a financial market crash. The investor takes a worst-case attitude towards the crash, so her investment objective is to be best off in the most adverse crash scenario. We first survey the existing literature on the worst-case approach to optimal investment and then present in detail the novel martingale approach to worst-case portfolio optimization. The first part of this chapter is based on joint work with Ralf Korn. In the last chapter, we investigate optimal portfolio decisions in the presence of illiquidity. Illiquidity is understood as a period in which it is impossible to trade on financial markets. We use dynamic programming techniques in combination with abstract convergence results to solve the corresponding optimal investment problem. This chapter is based on joint work with Holger Kraft and Peter Diesinger.

In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.

We introduce and investigate a product pricing model in social networks where the value a possible buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm that checks the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial in these special cases. For graphs with positive influence we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. The algorithm FixHighest improves the run-time on these graphs by exploiting this property.
Furthermore, we introduce variations on this basic model. The versions with delayed propagation of influences and with awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, which raises the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterwards, the principal problem is computing the revenue that a given price can generate within some time horizon. In general, the problem is #P-hard, and the algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
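The polynomial-time revenue computation for a fixed price can be sketched directly from the model description (this is an illustrative sketch on a hypothetical 3-node instance, not the Frag or FixHighest algorithms):

```python
def revenue(price, values, influence):
    """Synchronous selling rounds for a fixed price: in every round all
    non-buyers whose current value is at least the price buy, and each
    purchase additively shifts the values of the buyer's out-neighbours."""
    values = dict(values)  # work on a copy
    buyers = set()
    while True:
        new = {v for v in values if v not in buyers and values[v] >= price}
        if not new:
            return price * len(buyers)
        buyers |= new
        for b in new:
            for neighbour, weight in influence.get(b, []):
                values[neighbour] += weight

# hypothetical instance: a's purchase raises b's value, b's raises c's
values = {"a": 5.0, "b": 3.0, "c": 1.0}
influence = {"a": [("b", 3.0)], "b": [("c", 1.0)]}
```

At price 4, only a buys in the first round; a's influence then pushes b over the threshold, giving a revenue of 8, while at price 6 nobody ever buys. Each round adds at least one buyer, so the loop runs at most n times, which is the polynomial-time claim from the abstract.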

In change-point analysis the point of interest is to decide whether the observations follow one model or whether there is at least one time point where the model has changed. This results in two subfields, the testing for a change and the estimation of the time of change. This thesis considers both parts, but with the restriction of testing and estimating for at most one change-point.
A well-known example is based on independent observations having one change in the mean. Based on the likelihood ratio test, a test statistic with an asymptotic Gumbel distribution was derived for this model. As it is well known that the corresponding convergence rate is very slow, modifications of the test using a weight function were considered; those tests have a better performance. We focus on this class of test statistics.
The first part gives a detailed introduction to the techniques for analysing test statistics and estimators. To this end, we consider the multivariate mean-change model and focus on the effects of the weight function. In the case of change-point estimators we can distinguish between the assumption of a fixed size of change (fixed alternative) and the assumption that the size of the change converges to 0 (local alternative). The fixed case in particular is rarely analysed in the literature. We show how to get from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate observations.
The main part of this thesis focuses on two points: first, analysing test statistics and, secondly, analysing the corresponding change-point estimators. In both cases we first consider a change in the mean for independent observations while relaxing the moment condition. Based on a robust estimator for the mean, we derive a new type of change-point test with a randomized weight function. Secondly, we analyse non-linear autoregressive models with unknown regression function. Based on neural networks, test statistics and estimators are derived for correctly specified as well as for misspecified situations. This part extends the literature, as we analyse test statistics and estimators not only based on the sample residuals. Both sections, the one on tests and the one on the change-point estimators, end by giving regularity conditions on the model as well as on the parameter estimator.
Finally, a simulation study for the neural-network-based test and estimator is given. We discuss the behaviour under correct specification and misspecification and apply the neural-network-based test and estimator to two data sets.
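A minimal sketch of a weighted CUSUM-type statistic for one change in the mean; the weight \(w(t)=\sqrt{t(1-t)}\) is one common choice and not necessarily the thesis's weight function, and the data are a hypothetical toy sample:

```python
import numpy as np

def weighted_cusum(x):
    """Weighted CUSUM statistic for at most one change in the mean,
    using the common weight w(t) = sqrt(t(1-t))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n)                       # candidate change points 1..n-1
    t = k / n
    scale = x.std(ddof=1) * np.sqrt(n) * np.sqrt(t * (1 - t))
    stats = np.abs(s[:-1] - t * s[-1]) / scale
    return stats.max(), int(k[np.argmax(stats)])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
stat, k_hat = weighted_cusum(x)  # large statistic, estimate near the true change
```

The argmax of the weighted statistic doubles as the change-point estimator, which is the connection between the testing and estimation parts described above.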

This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.

The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).

The goal of this work is the development and investigation of an interdisciplinary and self-contained hydrodynamic approach to the simulation of dilute and dense granular flow. The definition of “granular flow” is a nontrivial task in itself. We say that it is either the flow of grains in a vacuum or in a fluid. A grain is an observable piece of a certain material, for example stone when we mean the flow of sand. Choosing a hydrodynamic view on granular flow, we treat the granular material as a fluid. A hydrodynamic model is developed that describes the process of flowing granular material. This is done through a system of partial differential equations and algebraic relations. The system is derived from the kinetic theory of granular gases, which is characterized by inelastic collisions, extended with approaches from soil mechanics. Solutions of the system have to be obtained to understand the process. The equations are so difficult to solve that an analytical solution is out of reach, so approximate solutions must be obtained. Hence the next step is the choice or development of a numerical algorithm to obtain approximate solutions of the model. As with every problem in numerical simulation, these two steps do not lead to a result without an implementation of the algorithm. Hence the author attempts to present this work in a frame that participates in and contributes to the three areas of physics, mathematics and software implementation, and approaches the simulation of granular flow in a combined and interdisciplinary way. This work is structured as follows. A continuum model for granular flow which covers the regime of fast dilute flow as well as slow dense flow up to vanishing velocity is presented in the first chapter. This model is strongly nonlinear in the dependence of the viscosity and other coefficients on the hydrodynamic variables, and it is singular because some coefficients diverge towards the maximum packing fraction of grains.
Hence the second difficulty, the challenging task of numerically obtaining approximate solutions for this model, is faced in the second chapter. In the third chapter we aim at the validation of both the model and the numerical algorithm through numerical experiments and investigations and show their application to industrial problems. There we focus intensively on the shear flow experiment from the experimental and analytical work of Bocquet et al., which serves well to demonstrate the algorithm and all boundary conditions involved, and which provides a setting for analytical studies against which to compare our results. The fourth chapter rounds off the work with the implementation of both the model and the numerical algorithm in a software framework for the solution of complex rheology problems, developed as part of this thesis.

The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.

Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)

Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when discretizing the equation. This is the major difficulty in its numerical solution. In case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated into a system of integral equations of the second kind, where the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in case of linear anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix. Instead, the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equation describing the radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods.
Further, a comparison with the discrete ordinate method shows that discretizing the integral equation may lead to reductions in CPU time and to an improved accuracy especially in case of small absorption and scattering coefficients or if local sources are present.
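Matrix compression exploits the fact that iterative solvers need only (approximate) matrix-vector products, never the assembled matrix. A generic illustration with a hand-rolled conjugate gradient method that accesses the operator solely through a matvec callback; the tridiagonal operator is a simple stand-in, not a radiative transfer kernel:

```python
import numpy as np

def cg_matvec(matvec, b, tol=1e-10, maxiter=200):
    """Conjugate gradient using only a matrix-vector product callback,
    i.e. without ever storing the operator as a dense matrix."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test operator tridiag(-1, 2, -1) applied matrix-free.
n = 50
def matvec(v):
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

b = np.ones(n)
x = cg_matvec(matvec, b)
```

In a compression scheme the callback would evaluate the near field exactly and the far field approximately; the solver itself is unchanged.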

In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. Therefore it is of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is, for single lamellae. The most important goals are, on the one hand, to gain a deeper understanding of the interaction of the relevant physical effects and, on the other hand, to obtain a model for the simulation of the decay of a lamella which can be integrated in a global foam model. In the first part of this work, we give a short introduction to the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis using the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDEs) of third order in two spatial and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.

In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag-dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag-dependent model and present a powerful algorithm for change-point detection in high-dimensional, highly dynamical systems. We also present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.

This thesis discusses methods for the classification of finite projective planes via exhaustive search. In the main part the author classifies all projective planes of order 16 admitting a large quasiregular group of collineations. This is done by a complete search using the computer algebra system GAP. Computational methods for the construction of relative difference sets are discussed. These methods are implemented in a GAP package, which is available separately. As another result, found in cooperation with U. Dempwolff, the projective planes defined by planar monomials are classified. Furthermore, the full automorphism groups of the non-translation planes defined by planar monomials are determined.

In many industrial applications fast and accurate solutions of linear elliptic partial differential equations are needed as one of the building blocks of more complex problems. The domains are often highly complex, and meshing turns out to be expensive and difficult to carry out with sufficient quality. In such cases methods with a regular grid that is not adapted to the boundary offer an attractive alternative. The Explicit Jump Immersed Interface Method (EJIIM) is one of these algorithms. The main interest of this work lies in solving the linear elasticity equations. For this purpose the existing EJIIM algorithm has been extended to three dimensions. The Poisson equation is always considered in parallel as the most typical representative of elliptic PDEs. During the work it became clear that EJIIM can have very high memory requirements. To overcome this problem, an improvement, Reduced EJIIM, is proposed. The main theoretical result of this work is the proof of the smoothing property of inverses of elliptic finite difference operators in two and three space dimensions. It is an often-observed phenomenon that the local truncation error may be of lower order along some lower-dimensional manifold without influencing the global convergence order of the solution.

In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and V. Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
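The sampling step underlying such a fit can be sketched with a generic random-walk Metropolis update; the thesis's algorithm additionally updates the latent intensity process, and the lognormal toy target below is purely illustrative:

```python
import numpy as np

def metropolis(logpost, x0, n_steps, step, rng):
    """Random-walk Metropolis sampler for a one-dimensional parameter."""
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()   # symmetric proposal
        lpy = logpost(y)
        if np.log(rng.uniform()) < lpy - lp:   # accept with prob min(1, ratio)
            x, lp = y, lpy
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=1.0, size=500)
# Log-posterior of the log-scale parameter mu under a flat prior.
logpost = lambda mu: -0.5 * np.sum((np.log(data) - mu) ** 2)
chain = metropolis(logpost, 0.0, 4000, 0.1, rng)
```

In the full model, each Metropolis sweep would alternate between parameter updates like this one and updates of the unobserved cell and cluster configuration.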

Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)

In a financial market we consider three types of investors trading with a finite time horizon and with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information is the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth. The case of partial information is based on the use of filtering techniques. Conditions ensuring boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters. For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor; losses are minimal in case the filters are advantageous. For an increasing number of stocks, the boundedness conditions again need to be met, and losses in this case depend strongly on the added stocks. The relation between boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible if more states are assumed than were originally present; assuming fewer states leads to high losses. Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to big losses depending on how much effect they have; they can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical findings.
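For the linear Gaussian drift, the partially informed investor's filter is of Kalman-Bucy type. A minimal Euler-discretized sketch with illustrative, hypothetical parameters (not the thesis's market calibration); the constant-drift case is recovered by switching off the mean reversion:

```python
import numpy as np

def kalman_bucy_drift(returns, dt, alpha, mbar, beta, sigma, m0=0.0, g0=1.0):
    """Euler-discretized Kalman-Bucy filter for an Ornstein-Uhlenbeck drift
    observed through returns dR = mu dt + sigma dB.
    m: conditional mean of the drift, g: conditional variance."""
    m, g = m0, g0
    est = []
    for dR in returns:
        innov = dR - m * dt                                  # return innovation
        m += -alpha * (m - mbar) * dt + (g / sigma**2) * innov
        g += (-2 * alpha * g + beta**2 - g**2 / sigma**2) * dt
        est.append(m)
    return np.array(est)

# Constant (unobserved) drift as the special case alpha = beta = 0.
rng = np.random.default_rng(2)
dt, n, mu, sigma = 1 / 250, 2500, 0.2, 0.1
returns = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
est = kalman_bucy_drift(returns, dt, alpha=0.0, mbar=0.0, beta=0.0, sigma=sigma)
```

The conditional variance `g` is exactly the quantity whose boundedness conditions are analysed above.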

In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. From this algorithm we derive methods for the computation of (extraordinary) direct image functors under open embeddings of complements of subvarieties of pure codimension one. As side results we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.

The thesis studies change points in absolute time for censored survival data with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied. Typical examples are piecewise constant and piecewise linear hazard functions. For that kind of models, we propose a new algorithm for numerical calculation of maximum likelihood estimates based on a cross entropy approach which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
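The Nelson-Aalen estimator referred to above can be sketched as follows; this is the textbook version assuming distinct event times, not the thesis's kernel-based test statistics:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard from right-censored
    data; events[i] = 1 for an observed event, 0 for censoring.
    Assumes distinct observation times for simplicity."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events, dtype=float)[order]
    at_risk = len(t) - np.arange(len(t))    # subjects still under observation
    return t, np.cumsum(d / at_risk)        # increment 1/at_risk at each event

# Three events and one censored observation.
t, H = nelson_aalen([2.0, 1.0, 3.0, 2.5], [1, 1, 1, 0])
```

A kernel-smoothed derivative of such a cumulative hazard estimate gives the hazard function estimates on which the proposed change-point tests are built.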

The aim of the thesis is the numerical investigation of saturated, stationary, incompressible Newtonian flow in porous media when inertia is not negligible. We focus our attention on the Navier-Stokes system with two pressures derived by two-scale homogenization. The thesis is subdivided into five chapters. After the introductory remarks on porous media, filtration laws and upscaling methods, the first chapter is closed by stating the basic terminology and mathematical fundamentals. In Chapter 2, we start by formulating the Navier-Stokes equations on a periodic porous medium. By two-scale expansions of the velocity and pressure, we formally derive the Navier-Stokes system with two pressures. For the sake of completeness, known existence and uniqueness results are repeated and a convergence proof is given. Finally, we consider Stokes and Navier-Stokes systems with two pressures with respect to their relation to Darcy's law. Chapter 3 and Chapter 4 are devoted to the numerical solution of the nonlinear two-pressure system. To this end, we follow two approaches. The first approach, which is developed in Chapter 3, is based on a splitting of the Navier-Stokes system with two pressures into micro and macro problems. The splitting is achieved by Taylor expanding the permeability function or by discretely computing the permeability function. The problems to be solved are a series of Stokes and Navier-Stokes problems on the periodicity cell. The Stokes problems are solved by an Uzawa conjugate gradient method. The Navier-Stokes equations are linearized by a least-squares conjugate gradient method, which leads to the solution of a sequence of Stokes problems. The macro problem consists of solving a nonlinear uniformly elliptic equation of second order. The least-squares linearization is applied to the macro problem, leading to a sequence of Poisson problems. All equations are discretized by finite elements. Numerical results are presented at the end of Chapter 3.
The second approach presented in Chapter 4 relies on the variational formulation in a certain Hilbert space setting of the Navier-Stokes system with two pressures. The nonlinear problem is again linearized by the least-squares conjugate gradient method. We obtain a sequence of Stokes systems with two pressures. For the latter systems, we propose a fast solution method which relies on pre-computing Stokes systems on the periodicity cell for finite element basis functions acting as right hand sides. Finally, numerical results are discussed. In Chapter 5 we are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and the chapter is closed by a numerical investigation of the model with special focus on inertia effects.

The goal of this dissertation is the development and implementation of an algorithm for the computation of tropical varieties over general valued fields. The computation of tropical varieties over fields with trivial valuation is a sufficiently well-solved problem; for it, the authors Bogart, Jensen, Speyer, Sturmfels and Thomas impressively combine classical techniques of computer algebra with constructive methods of convex geometry.
If, however, the ground field carries a non-trivial valuation, such as the field of \(p\)-adic numbers \(\mathbb{Q}_p\), then conventional Gröbner basis theory seemingly reaches its limits. The underlying monomial orderings are not suited to studying problems that depend on a non-trivial valuation of the coefficients. This has led to a series of works that modify standard Gröbner basis theory in order to incorporate the valuation of the ground field.
In this thesis we present an alternative approach and show how the valuation can be emulated by a specially introduced variable, so that no modification of the classical tools is necessary.
In the course of this, the theory of standard bases is generalized to power series over a coefficient ring. Particular care is taken that, on polynomial input data, all algorithms coincide with their classical counterparts, so that for practical purposes established software systems can be used. Moreover, the construction of the Gröbner fan and the technique of the Gröbner walk are introduced for slightly inhomogeneous ideals. This is necessary because introducing the new variable breaks the homogeneity of the initial ideal.
All algorithms have been implemented in Singular and are available as part of its official distribution. It is the first implementation capable of computing tropical varieties with \(p\)-adic valuation. A Singular package for convex geometry as well as an interface to Polymake also arose as part of this work.
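The emulation of the valuation by an extra variable can be sketched schematically (the notation is illustrative and not the thesis's exact formulation):

```latex
% Emulating the p-adic valuation by an auxiliary variable t:
% pass from the valued ground field \mathbb{Q}_p to \mathbb{Q}[t] via
\[
  \pi\colon \mathbb{Q}[t][x_1,\dots,x_n] \longrightarrow
            \mathbb{Q}_p[x_1,\dots,x_n], \qquad t \longmapsto p .
\]
% For an ideal over \mathbb{Q}_p one works with a preimage ideal containing
% t - p; the p-adic valuation of a coefficient then corresponds to its order
% in t, so that classical standard basis techniques with trivial valuation
% (and suitable weights on t) can be applied unchanged.
```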

The central theme of this thesis is the development of enhanced methods and algorithms for appraising market and credit risks and their application within the context of standard and more advanced market models. Generally, methods and algorithms for analysing the market risk of complex portfolios involve detailed knowledge of option sensitivities, the so-called "Greeks". Based on an analysis of symmetries in financial market models, relations between option sensitivities are obtained which can be used for the efficient valuation of the Greeks. The relations are mainly derived within the Black-Scholes model; however, some relations are also valid for more general models, for instance the Heston model. Portfolios are usually influenced by many underlyings, so it is necessary to characterise the dependencies of these basic instruments. Such dependencies are commonly described by correlation matrices. In practice, however, estimates of correlation matrices are disturbed by statistical noise and often suffer from rank deficiency due to missing data. A fast algorithm is presented which performs a generalized Cholesky decomposition of a perturbed correlation matrix. In contrast to the standard Cholesky algorithm, an advantage of the generalized method is that it also works for positive semidefinite, rank-deficient matrices. Moreover, it gives an approximate decomposition when the input matrix is indefinite. A comparison with known algorithms with similar features is performed, and it turns out that the new algorithm can be recommended in situations where computation time is the critical issue. The determination of a profit and loss distribution by Fourier inversion of its characteristic function is a powerful tool, but it can break down when the characteristic function is not integrable. In this thesis, methods for Fourier inversion of non-integrable characteristic functions are studied.
In this respect, two theorems are obtained which are based on a suitable approximation of the unknown distribution by one with known density and characteristic function. Further, it is shown that straightforward fast Fourier inversion works when the corresponding density lives on a bounded interval. The above techniques are of crucial importance for determining the profit and loss (P&L) distribution of large portfolios efficiently. The so-called Delta-Gamma normal approach has become the industry standard for the estimation of market risk. It is shown that the performance of the Delta-Gamma normal approach can be improved substantially by application of the developed methods. The same optimization procedure also applies to the Delta-Gamma Student model. A standard tool for computing the P&L distribution of a loan portfolio is the CreditRisk+ model. Basically, the CreditRisk+ distribution is a discrete distribution which can be computed from its probability generating function. For this, a numerically stable method is presented and, as an alternative, a new algorithm based on Fourier inversion is proposed. Finally, an extension of the CreditRisk+ model to market risk is developed, whose distribution can likewise be obtained efficiently by the presented Fourier inversion methods.
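The thesis's generalized Cholesky algorithm is not reproduced here, but the situation it must handle, a zero pivot caused by rank deficiency, can be illustrated with a standard semidefinite variant (the example matrix is hypothetical):

```python
import numpy as np

def semidefinite_cholesky(a, tol=1e-10):
    """Cholesky-type factorization A ~= L @ L.T for a symmetric positive
    semidefinite (possibly rank-deficient) matrix: columns with a
    (numerically) zero pivot are simply left at zero."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    L = np.zeros_like(a)
    for j in range(n):
        d = a[j, j] - L[j, :j] @ L[j, :j]          # pivot of the Schur complement
        if d > tol:
            L[j, j] = np.sqrt(d)
            L[j + 1:, j] = (a[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
        # d <= tol: rank deficiency, column stays zero
    return L

# Rank-2 "correlation-like" matrix: the third row of b is a mix of the first two.
b = np.array([[1.0, 0.2], [0.2, 1.0], [0.6, 0.6]])
A = b @ b.T
L = semidefinite_cholesky(A)
```

On an exactly semidefinite input the skipped columns are consistent and `L @ L.T` reproduces `A`; for an indefinite input one would only obtain an approximation, as described above.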

This thesis is devoted to two main topics (accordingly, there are two chapters). In the first chapter, we establish a tropical intersection theory with notions and tools analogous to its algebro-geometric counterpart. This includes tropical cycles, rational functions, intersection products of Cartier divisors and cycles, morphisms, their functors and the projection formula, and rational equivalence. The most important features of this theory are the following:
- It unifies and simplifies many of the existing results of tropical enumerative geometry, which often contained involved ad-hoc computations.
- It is indispensable for formulating and solving further tropical enumerative problems.
- It shows deep relations to the intersection theory of toric varieties and connected fields.
- The relationship between tropical and classical Gromov-Witten invariants found by Mikhalkin is made plausible from inside tropical geometry.
- It is interesting in its own right as a subfield of convex geometry.
In the second chapter, we study tropical gravitational descendants (i.e., Gromov-Witten invariants with incidence and "Psi-class" factors) and show that many concepts of classical Gromov-Witten theory, such as the famous WDVV equations, can be carried over to the tropical world. We use this to extend Mikhalkin's results to a certain class of gravitational descendants, i.e., we show that many of the classical gravitational descendants of P^2 and P^1 x P^1 can be computed by counting tropical curves satisfying certain incidence conditions and with prescribed valences of their vertices. Moreover, the presented theory is not restricted to plane curves and therefore provides an important tool for deriving similar results in higher dimensions. A more detailed chapter synopsis can be found at the beginning of each individual chapter.

This dissertation deals with two main subjects, both strongly related to boundary problems for the Poisson and the Laplace equation, respectively: the oblique boundary problem of potential theory, and the limit formulae and jump relations of potential theory. We start with the oblique boundary problem. Here we prove existence and uniqueness results for solutions to the outer oblique boundary problem for the Poisson equation under very weak assumptions on the boundary, the coefficients and the inhomogeneities. The main tools are the Kelvin transformation and the solution operator for the regular inner problem, provided in the author's diploma thesis. Moreover, we prove regularization results for the weak solutions of both the inner and the outer problem. We investigate the non-admissible direction for the oblique vector field, state results for stochastic inhomogeneities and provide a Ritz-Galerkin approximation. Finally, we show that the results are applicable to problems from geomathematics. Now we come to the limit formulae. There we combine the modern theory of Sobolev spaces with the classical theory of limit formulae and jump relations of potential theory. Convergence in Lebesgue spaces for integrable functions is already treated in the literature. The achievement of this dissertation is the corresponding convergence for the weak derivatives of higher orders. The layer functions are also elements of Sobolev spaces, and the surface is a suitably smooth two-dimensional submanifold of three-dimensional space. We consider the single layer potential, the double layer potential and their first-order normal derivatives. The main tool in the proof in Sobolev norm is the uniform convergence of the tangential derivatives, which is proved with the help of results from the literature.
Additionally, we need a result about the limit formulae in Lebesgue spaces, also taken from the literature, and a reduction result for normal derivatives of harmonic functions. Moreover, we prove convergence in the Hölder spaces. Finally, we give an application of the limit formulae and jump relations: based on the results proved before, we generalize a known density result for several function systems from geomathematics, from the Lebesgue spaces of square-integrable measurable functions to Sobolev spaces. To this end we prove the limit formula of the single layer potential in dual spaces of Sobolev spaces, where the layer function itself is an element of such a distribution space.

The new international capital standard for credit institutions (“Basel II”) allows banks to use internal rating systems in order to determine the risk weights that are relevant for the calculation of the capital charge. It is therefore necessary to develop a system that encompasses the main practices and methods existing in the context of credit rating. The aim of this thesis is to make a suggestion for setting up a credit rating system: the main techniques used in practice are analyzed, some alternatives are presented, and the problems that can arise from a statistical point of view are considered. Finally, we set up some guidelines on how to accomplish the challenge of credit scoring. The judgement of the quality of a credit with respect to the probability of default is called credit rating. A method based on a multi-dimensional criterion seems natural, owing to the numerous effects that can influence this rating. However, owing to governmental rules, the tendency is that one-dimensional criteria will typically be required in the future as a measure of credit worthiness or of the quality of a credit. The problem described above can be resolved by transforming a multi-dimensional data set into a one-dimensional one while keeping some monotonicity properties and keeping the loss of information (due to the loss of dimensionality) at a minimum.

Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)

We consider a continuous-time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. If, e.g., the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. In other cases, too, such as for an underlying Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed. E.g., we use statistical linearization for nonlinear drift processes, finite-state Markov chain approximations for the drift process and/or diffusion approximations for small jumps in the noise term.
In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular with a view to the performance of the corresponding utility-maximizing strategies.
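For the case the abstract names as explicitly solvable (Ornstein-Uhlenbeck drift, no jumps), the filtering step can be sketched with a scalar discrete-time Kalman filter. This is a hypothetical minimal illustration, not the thesis's model: the Euler discretization, the parameter names and the observation equation \(r_n = \mu_n \Delta t + \sigma_S \sqrt{\Delta t}\,\epsilon_n\) are assumptions of the sketch.

```python
def kalman_drift_filter(returns, dt, kappa, theta, sig_mu, sig_s, m0, p0):
    """Scalar Kalman filter for an unobservable Ornstein-Uhlenbeck drift.

    Assumed (Euler-discretized) model, an illustration only:
        state:       mu_{n+1} = mu_n + kappa*(theta - mu_n)*dt + sig_mu*sqrt(dt)*xi_n
        observation: r_n      = mu_n*dt + sig_s*sqrt(dt)*eps_n   (log-returns)
    Returns the sequence of posterior drift estimates."""
    a = 1.0 - kappa * dt        # state transition coefficient
    q = sig_mu ** 2 * dt        # state noise variance
    r = sig_s ** 2 * dt         # observation noise variance
    h = dt                      # observation coefficient: E[r_n | mu_n] = mu_n*dt
    m, p = m0, p0
    estimates = []
    for obs in returns:
        m_pred = a * m + kappa * theta * dt     # predict
        p_pred = a * a * p + q
        s = h * h * p_pred + r                  # innovation variance
        gain = p_pred * h / s
        m = m_pred + gain * (obs - h * m_pred)  # update
        p = (1.0 - gain * h) * p_pred
        estimates.append(m)
    return estimates

# Demo: no state noise and constant observed returns 0.1*dt; the drift
# estimate then converges towards 0.1.
est = kalman_drift_filter([0.1 * 0.004] * 200, dt=0.004, kappa=0.0, theta=0.0,
                          sig_mu=0.0, sig_s=0.01, m0=0.0, p0=1.0)
```

The utility-maximizing strategy is then computed as a function of this filter estimate, which is why approximation errors in the filter propagate into the strategy.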

Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to keep the main advantage of Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (the zero-variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can now use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, whether light- or heavy-tailed. The numerical study exhibits a greater variance reduction than the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as shown in the numerical study.
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, describing the corresponding transition process only briefly. Hence, we give a thorough discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.
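The cross-entropy step described above, choosing within a parametric family the importance sampling density closest in Kullback-Leibler divergence to the zero-variance one, can be sketched on the toy problem of a Gaussian tail probability. This single-stage sketch with a mean-shifted normal family and an ad-hoc deep-tail safeguard is an assumption of this example, not the thesis's algorithm:

```python
import math, random

def cross_entropy_tail_prob(c, n_pilot=5000, n_final=20000, rho=0.05, seed=0):
    """Estimate p = P(Z > c), Z ~ N(0,1), by cross-entropy importance sampling.

    Within the mean-shifted family N(mu, 1), the Kullback-Leibler-closest
    density to the zero-variance IS density has a closed-form CE update:
    mu = mean of the elite (top rho-fraction) pilot samples.  The final
    estimate averages likelihood ratios L(x) = exp(-mu*x + mu^2/2) over
    draws from N(mu, 1) that land in the tail."""
    rng = random.Random(seed)
    # Pilot stage (single-stage simplification of the multilevel CE method).
    pilot = sorted(rng.gauss(0.0, 1.0) for _ in range(n_pilot))
    elite = pilot[int((1.0 - rho) * n_pilot):]
    mu = sum(elite) / len(elite)
    mu = max(mu, c)  # ad-hoc safeguard for deep tails (assumption of this sketch)
    # Final stage: importance sampling from N(mu, 1).
    total = 0.0
    for _ in range(n_final):
        x = rng.gauss(mu, 1.0)
        if x > c:
            total += math.exp(-mu * x + 0.5 * mu * mu)
    return total / n_final

# Demo: P(Z > 3.5) is about 2.33e-4; plain Monte Carlo with 20000 draws
# would see only a handful of tail hits.
p_hat = cross_entropy_tail_prob(3.5)
```

In the portfolio setting, the indicator of a Gaussian tail is replaced by the event that an approximation of the portfolio loss function exceeds the threshold, which is exactly where the freedom in the choice of approximation pays off.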

In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. This thesis arises in the framework of the project "Robust Risk Estimation". Starting from July 2011, this project received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in Financial Mathematics (Operational and Liquidity Risk), Medicine (length of stay and cost), and Hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics. Within the project, each of these applications raises issues that can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. This thesis makes a contribution to this end.
This work consists of three main parts. The first part focuses on the basic notions and gives an overview of the existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, which are an important achievement of our project work. The second part of the thesis presents a deeper analysis of the basic models and tools used to achieve the main results of the research.
This second part is the most important part of the thesis, as it contains our personal contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The applications mentioned above use time structure (e.g. hydrology), therefore we equip the extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies. In the first, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models. More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking-neighborhood approach we need some smoothness, therefore for the second strategy we derive smoothness of the generalized linear model in terms of L2 differentiability and give sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence in these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher-dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions.
Finally, we create an exemplary implementation of the fixed-point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and retain points of particular importance. In the third part of the thesis we discuss three applications, operational risk, hospitalization times and hydrological river discharge data, apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.

Paper production is of significant importance for society and a challenging topic for scientific investigation. This study is concerned with simulations of the pressing section of a paper machine. We aim at the development of an advanced mathematical model of the pressing section which is able to recover the behavior of the fluid flow within the paper-felt sandwich observed in laboratory experiments.
From the modeling point of view, the pressing of the paper-felt sandwich is a complex process, since one has to deal with two-phase flow in moving and deformable porous media. To account for the solid deformations, we use developments from the PhD thesis of S. Rief, where the elasticity model is stated and discussed in detail. The flow model, which accounts for the movement of water within the paper-felt sandwich, is described with the help of two flow regimes: single-phase water flow and two-phase air-water flow. The model for the saturated flow is given by Darcy's law and mass conservation. The second regime is described by the Richards approach together with dynamic capillary effects. The model for the dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray is adapted to the needs of the paper manufacturing process.
We started the development of the flow model with the mathematical modeling of the one-dimensional case. The one-dimensional flow model is derived from a two-dimensional one by an averaging procedure in the vertical direction. The model is studied numerically and verified against measurements. Some theoretical investigations are performed to prove the convergence of the discrete solution to the continuous one. For completeness, the models with the static and the dynamic capillary pressure-saturation relation are both considered. Existence, compactness and convergence results are obtained for both models.
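The saturated single-phase regime mentioned above (Darcy's law plus mass conservation) reduces, in a stationary one-dimensional setting, to \(-(k\,p')' = 0\). The following is a minimal cell-centered finite-volume sketch with harmonic averaging at cell faces; the names and the test configuration are illustrative and not taken from the thesis:

```python
def darcy_1d(k, p_left, p_right):
    """Cell-centered finite-volume solution of -(k(x) p'(x))' = 0 on [0,1]
    with Dirichlet pressures at both ends.  Face transmissibilities use the
    harmonic mean of neighbouring cell permeabilities; the resulting
    tridiagonal system is solved with the Thomas algorithm."""
    n = len(k)
    h = 1.0 / n
    t = [0.0] * (n + 1)                      # face transmissibilities
    t[0] = 2.0 * k[0] / h                    # boundary faces see half a cell
    t[n] = 2.0 * k[n - 1] / h
    for i in range(1, n):
        t[i] = 2.0 * k[i - 1] * k[i] / ((k[i - 1] + k[i]) * h)
    a = [0.0] * n; b = [0.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(n):                       # assemble tridiagonal system
        b[i] = t[i] + t[i + 1]
        if i > 0:
            a[i] = -t[i]
        else:
            d[i] += t[i] * p_left
        if i < n - 1:
            c[i] = -t[i + 1]
        else:
            d[i] += t[i + 1] * p_right
    for i in range(1, n):                    # Thomas: forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    p = [0.0] * n
    p[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        p[i] = (d[i] - c[i] * p[i + 1]) / b[i]
    return p

# Demo: homogeneous permeability gives the exact linear pressure profile.
p = darcy_1d([1.0] * 10, 1.0, 0.0)
```

The two-dimensional model of the thesis additionally needs a non-orthogonal grid and the multipoint flux approximation O-method, which this one-dimensional toy does not capture.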
Then, a two-dimensional model is developed, which accounts for a multilayer computational domain and formation of the fully saturated zones. For discretization we use a non-orthogonal grid resolving the layer interfaces and the multipoint flux approximation O-method. The numerical experiments are carried out for parameters which are typical for the production process. The static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
The last part of the thesis is an investigation of the validity range of the Richards assumption for the two-dimensional flow model with the static capillary pressure-saturation relation. Numerical experiments show that the Richards assumption is not the best choice for simulating processes in the pressing section.

By using Gröbner bases of ideals in polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factors \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular, which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. In Part C we prove that the Stanley Conjecture holds in some special cases. Part D explores the General Neron Desingularization in the frame of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin approximation for Cohen-Macaulay local rings of dimension 1.

Since its invention by Sir Alastair Pilkington in 1952, the float glass process has been used to manufacture long, thin, flat sheets of glass. Today, float glass is very popular due to its high quality and relatively low production costs. When producing thinner glass, the main concern is to retain its optical quality, which can deteriorate during the manufacturing process. The most important stage of this process is the floating part, which is hence considered to be responsible for the loss in optical quality. A series of investigations performed on the finished products showed the existence of many short-wave patterns, which strongly affect the optical quality of the glass. Our work is concerned with finding the mechanism of wave development, taking into account all possible factors. In this thesis, we model the floating part of the process by a theoretical study of the stability of two superposed fluids confined between two infinite plates and subjected to a large horizontal temperature gradient. Our approach is to take into account the mixed convection effects (viscous shear and buoyancy), neglecting on the other hand the thermo-capillary effects, due to the length of our domain and the presence of a small stabilizing vertical temperature gradient. Both fluids are treated as Newtonian with constant viscosity. They are immiscible, incompressible, have very different properties and are separated by a free surface. The lower fluid is a liquid metal with a very small kinematic viscosity, whereas the upper fluid is less dense. The two fluids move with different velocities: the speed of the upper fluid is imposed, whereas the lower fluid moves as a result of buoyancy effects. We examine the problem by means of a small perturbation analysis and obtain a system of two Orr-Sommerfeld equations coupled with two energy equations, together with general interface and boundary conditions.
We solve the system analytically in the long- and short-wave limits, using asymptotic expansions with respect to the wave number. Moreover, we write the system in the form of a generalized eigenvalue problem and solve it numerically using Chebyshev spectral methods for fluid dynamics. The results (both analytical and numerical) show the existence of small-amplitude travelling waves, which move with constant velocity, for wave numbers in the intermediate range. We show that the stability of the system is ensured in the long-wave limit, which is in agreement with the real float glass process. We analyze the stability for a wide range of wave numbers and of Reynolds, Weber and Grashof numbers, and explain the physical implications for the dynamics of the problem. The consequences of the linear stability results are discussed. In the real float glass process, the temperature strongly influences the viscosity of both the molten metal and the hot glass, which has direct consequences for the stability of the system. We investigate the linear stability of two superposed fluids with temperature-dependent viscosities by considering a different model for the viscosity dependence of each fluid. Although the temperature-viscosity relationships for glass and metal are more complex than those used in our computations, our intention is to emphasize the effects of this dependence on the stability of the system. It is known from the literature that in the case of a single fluid, heating, which causes the viscosity to decrease along the domain, usually destabilizes the flow. For the two superposed fluids we investigate this behaviour and discuss the consequences for linear stability in this new case.

Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional, locally stationary case, and second derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality, zero mean), so that \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimate, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove that in the case of model misspecification the minimum of our Whittle likelihood still converges, the limit being the minimizer of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through simulations, the estimation of conditional autoregressive models, and a real-data application.
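For intuition, the one-dimensional stationary special case of the Whittle estimator can be sketched: minimize \(\sum_j \log f_\theta(\omega_j) + I(\omega_j)/f_\theta(\omega_j)\) over Fourier frequencies \(\omega_j\), where \(I\) is the periodogram. The AR(1) family, the grid search and the profiled-out innovation variance are assumptions of this toy example; the thesis's locally stationary random-field setting is not captured here.

```python
import cmath, math, random

def whittle_ar1(x, grid=None):
    """Whittle estimate of the AR(1) coefficient a: minimize the sum over
    Fourier frequencies of log f_a(w) + I(w)/f_a(w), with spectral density
    f_a(w) = sigma^2 / (2*pi*|1 - a*exp(-i*w)|^2) and sigma^2 profiled out
    in closed form."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    freqs = [2.0 * math.pi * j / n for j in range(1, (n - 1) // 2 + 1)]
    # Periodogram I(w) = |sum_t x_t exp(-i*w*t)|^2 / (2*pi*n), plain O(n^2) DFT.
    per = []
    for w in freqs:
        dft = sum(xc[t] * cmath.exp(-1j * w * t) for t in range(n))
        per.append(abs(dft) ** 2 / (2.0 * math.pi * n))
    if grid is None:
        grid = [j / 100.0 for j in range(-95, 96)]
    best_a, best_val = None, float("inf")
    for a in grid:
        g = [1.0 / abs(1.0 - a * cmath.exp(-1j * w)) ** 2 for w in freqs]
        # Profiled innovation variance: sigma2 = 2*pi * mean(I/g).
        s2 = 2.0 * math.pi * sum(i / gg for i, gg in zip(per, g)) / len(g)
        val = sum(math.log(s2 * gg / (2.0 * math.pi)) + 2.0 * math.pi * i / (s2 * gg)
                  for i, gg in zip(per, g))
        if val < best_val:
            best_a, best_val = a, val
    return best_a

# Demo: estimate the coefficient of a simulated AR(1) path.
rng = random.Random(2)
x = [0.0]
for _ in range(400):
    x.append(0.6 * x[-1] + rng.gauss(0.0, 1.0))
a_hat = whittle_ar1(x[1:])
```

The appeal of the Whittle form is visible even in this toy: the likelihood involves only the periodogram and the parametric spectral density, avoiding the inversion of the full covariance matrix \(\Sigma_\theta\).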

Interest-optimized public debt management aims to find the most efficient possible trade-off between the expected funding costs on the one hand and the risks for the government budget on the other. To approach this tension, we bridge, for the first time, the gap between the problems of debt management and the methods of continuous-time dynamic portfolio optimization.
The key element is a new metric for measuring funding costs, the perpetual costs. These reflect the average future funding costs and comprise both the already known interest payments and the as yet unknown costs of the necessary follow-up financing. The volatility of the perpetual costs therefore also represents the risk of a given strategy: the longer the term of a financing, the smaller the fluctuation of the perpetual costs.
The perpetual costs are the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. To model the present value, we draw on the concept, well known from dynamic portfolio optimization, of a self-financing bond portfolio, here based on a multi-dimensional affine-linear interest rate model. The growth of the debt portfolio is slowed or prevented by incorporating the state's primary surplus as an external inflow into the self-financing model.
Owing to the variety of possible funding instruments, we do not choose their value weights as control variables but instead control the sensitivities of the portfolio with respect to different interest rate movements. In a subsequent step, optimal value weights for a wide range of funding instruments can then be derived from the optimal sensitivities. We demonstrate this exemplarily by means of rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems with methods of stochastic control theory, in each case maximizing the expected utility of the perpetual costs. The utility functions are adapted to debt management and are characterized in particular by the fact that higher costs entail lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed debt or cost ceiling.

In this thesis the combinatorial framework of toric geometry is extended to equivariant sheaves over toric varieties. The central questions are how to extract combinatorial information from the description so developed, and whether equivariant sheaves can, like toric varieties, be considered as purely combinatorial objects. The thesis consists of three main parts. In the first part, by systematically extending the framework of toric geometry, a formalism is developed for describing equivariant sheaves by certain configurations of vector spaces. In the second part, homological properties of a certain class of equivariant sheaves are investigated, namely reflexive equivariant sheaves. Several kinds of resolutions for these sheaves are constructed which depend only on the configuration of their associated vector spaces; thus a partially positive answer to the question of combinatorial representability is given. As a particular result, a new way of computing minimal resolutions of Z^n-graded modules over polynomial rings is obtained. In the third part, a complete classification of the simplest nontrivial sheaves, equivariant vector bundles of rank two over smooth toric surfaces, is given: a combinatorial characterization is established and parameter spaces (moduli spaces) are constructed which depend only on this characterization. In the appendices, an outlook on equivariant sheaves and the relation of Chern classes to their combinatorial classification is given, focusing particularly on the case of the projective plane, and a classification of equivariant vector bundles of rank three over the projective plane is given.

This work deals with the mathematical modeling and numerical simulation of the dynamics of a curved inertial viscous Newtonian fiber, which is applicable in practice to the description of centrifugal spinning processes for glass wool. Neglecting surface tension and temperature dependence, the fiber flow is modeled as a three-dimensional free boundary value problem via the instationary incompressible Navier-Stokes equations. From regular asymptotic expansions in powers of the slenderness parameter, leading-order balance laws for mass (cross-section) and momentum are derived that combine the unrestricted motion of the fiber center-line with the inner viscous transport. The physically reasonable form of the one-dimensional fiber model results from the introduction of the intrinsic velocity that characterizes the convective terms. For the numerical simulation of the derived model a finite volume code is developed. The results of the numerical scheme for high Reynolds numbers are validated by comparison with the analytical solution of the inviscid problem. Moreover, the influence of parameters such as viscosity and rotation on the fiber dynamics is investigated. Finally, an application based on industrial data is performed.

The present work was motivated by the Feynman-Kac formulas presented in A.N. Borodin (2000) [Version of the Feynman-Kac Formula. Journal of Mathematical Sciences, 99(2):1044-1052, 2000] and in B. Simon (2000) [A Feynman-Kac Formula for Unbounded Semigroups. Canadian Math. Soc. Conf. Proc., 28:317-321, 2000]. It deals with the problem of extending the range of validity of the Feynman-Kac formula with respect to the conditions on the potentials and on the initial condition of the associated partial differential equation. It is known that the Feynman-Kac formula holds for bounded potentials, and also for initial conditions in the space \(C_{0}(\mathbb{R}^{n})\) or in \(C_{c}^{2}(\mathbb{R}^{n})\). For initial conditions in \(C_{c}^{2}(\mathbb{R}^{n})\), the Feynman-Kac representation yields the solution of the partial differential equation. It can also be viewed as a strongly continuous semigroup on the space \(C_{0}(\mathbb{R}^{n})\). These two representations are equivalent. In this work we first show that the Feynman-Kac formula also holds for unbounded potentials \(V\) satisfying \(|V(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(\varepsilon > 0\), with suitable \(C_{\varepsilon} > 0\), and all \(x \in \mathbb{R}^{n}\). Moreover, we show that it holds for all initial conditions \(f\) with \(x \mapsto e^{-\varepsilon |x|^{2}} f(x) \in H^{2,2}(\mathbb{R}^{n})\). The proof is probabilistic and uses no spectral theory. The spectral-theoretic approach, which gives a representation of the operator \(e^{-tH}\) with \(H = -\frac{1}{2} \Delta + V\), was also extended to the above class of potentials by B. Simon (2000).
In addition, we admit potentials of the form \(V = V_{1} + V_{2}\), where \(V_{1} \in L^{2}(\mathbb{R}^{3})\) and for every \(\varepsilon > 0\) there exists \(C_{\varepsilon} > 0\) such that \(|V_{2}(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(x \in \mathbb{R}^{3}\). In contrast to the classical situation, \(e^{-tH}\) is now an unbounded operator. Finally, this work also investigates the connection between the Feynman-Kac-Itô formula, the Feynman-Kac formula and the Kolmogorov backward equation.

This thesis is concerned with tropical moduli spaces, which are an important tool in tropical enumerative geometry. The main result is a construction of tropical moduli spaces of rational tropical covers of smooth tropical curves and of tropical lines in smooth tropical surfaces. The construction of a moduli space of tropical curves in a smooth tropical variety is reduced to the case of smooth fans. Furthermore, we point out relations to intersection theory on suitable moduli spaces of algebraic curves.

Structure and Construction of Instanton Bundles on P3

Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated a lot during the last two decades, and their asymptotic properties are well understood. Particularly difficult and different from scalar- or vector-valued autoregressions are the weak convergence properties which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, so that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example in which we investigate a changepoint detection problem for a stimulus-response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
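The naive residual-based bootstrap has the same shape in the scalar AR(1) case, which may serve as a sketch of the procedure. The functional version resamples residual *curves* rather than numbers; this scalar analogue is an illustration only:

```python
import random

def ar1_fit(x):
    """Least-squares estimate of psi in X_{t+1} = psi * X_t + eps_{t+1}."""
    num = sum(x[t + 1] * x[t] for t in range(len(x) - 1))
    den = sum(x[t] * x[t] for t in range(len(x) - 1))
    return num / den

def residual_bootstrap_ar1(x, n_boot=500, seed=0):
    """Naive (residual-based) bootstrap for the AR(1) coefficient: fit psi,
    centre the residuals, regenerate series by resampling residuals with
    replacement, and refit on each bootstrap series."""
    rng = random.Random(seed)
    psi = ar1_fit(x)
    res = [x[t + 1] - psi * x[t] for t in range(len(x) - 1)]
    mean_res = sum(res) / len(res)
    res = [e - mean_res for e in res]        # centre the residuals
    boot = []
    for _ in range(n_boot):
        xb = [x[0]]
        for _ in range(len(x) - 1):
            xb.append(psi * xb[-1] + rng.choice(res))
        boot.append(ar1_fit(xb))
    return psi, boot

# Demo: simulated AR(1) path with psi = 0.5; the empirical distribution of
# the refitted coefficients approximates the sampling distribution of psi_hat.
rng = random.Random(1)
x = [0.0]
for _ in range(300):
    x.append(0.5 * x[-1] + rng.gauss(0.0, 1.0))
psi_hat, boot = residual_bootstrap_ar1(x, n_boot=200)
```

Quantiles of the bootstrap sample then yield approximate confidence bounds, which is exactly the use case for small samples described above.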

In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigated the convergence behavior of the already known trees from the literature (CRR, RB, Tian and CP) for the European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to get the smooth convergence. We first show what causes these oscillations. That will also help us to improve the rate of convergence. As a result we introduce the Tian and the CP tree and we proved that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree and explain its properties. We prove its convergence and derive an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity \(T\); this is the main difference from the model in the literature. In particular, we show that the good properties of the CRR tree for \(S_{0} = K\) can be preserved even without this condition (which is typically the case in practice). We achieve convergence of order \(O \left(n^{-\frac{3}{2}} \right)\), and we typically get better results if we split the tree later.
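A minimal sketch of the CRR tree discussed above, in the standard Cox-Ross-Rubinstein parametrisation; the parameters in the demo are illustrative. For the chosen at-the-money data the Black-Scholes put value is approximately 5.5735, and the tree prices approach it as \(n\) grows.

```python
import math

def crr_put(S0, K, r, sigma, T, n):
    """European put via the Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction through the tree
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# convergence towards the Black-Scholes value (about 5.5735 here)
for n in (50, 100, 500):
    print(n, crr_put(100.0, 100.0, 0.05, 0.2, 1.0, n))
```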

We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous
short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected short-fall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus respectively. In particular we solve in the incomplete market settings the dual problem explicitly. We compare the
optimal terminal wealth in the cases mentioned using numerical examples. Without
risk constraints, we further compare the investment strategies for complete
and incomplete markets numerically.
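For orientation, the unconstrained case (c) reduces, in the textbook setting of constant market coefficients and CRRA utility, to Merton's constant-fraction rule. This sketch shows only that classical benchmark, not the stochastic-interest-rate models treated in the thesis; all parameter values are illustrative.

```python
def merton_fraction(mu, r, sigma, risk_aversion):
    """Optimal constant fraction of wealth invested in the stock in the
    classical Merton problem with CRRA utility (relative risk aversion R):
    pi* = (mu - r) / (R * sigma**2)."""
    return (mu - r) / (risk_aversion * sigma**2)

# e.g. 7% stock drift, 2% short rate, 20% volatility, R = 2
pi_star = merton_fraction(0.07, 0.02, 0.20, 2.0)   # -> 0.625
```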

This thesis deals with modeling aspects of generalized Newtonian and non-Newtonian fluids, as well as with the development and validation of algorithms used in the simulation of such fluids. The main contribution in the modeling part is the introduction and analysis of a new model for generalized Newtonian fluids whose constitutive equation is of algebraic form. Distinguishing between shear and extensional viscosities leads to an anisotropic viscosity model. It can be considered a natural extension of the well-known (isotropic-viscosity) Carreau model, which deals only with the shear viscosity properties of the fluid; the proposed model additionally takes the extensional viscosity properties into account. Numerical results show that the anisotropic viscosity model gives much better agreement with experimental observations than the isotropic one. Another contribution of the thesis consists of the development and analysis of robust and reliable algorithms for the simulation of generalized Newtonian fluids. For such fluids the momentum equations are strongly coupled through the mixed derivatives appearing in the viscous term (unlike the case of Newtonian fluids). It is shown in this thesis that a careful treatment of those derivatives is essential for deriving robust algorithms. A modification of a standard SIMPLE-like algorithm is given, in which all viscous terms of the momentum equations are discretized implicitly. Moreover, it is shown that a block-diagonal preconditioner for the viscous operator is good enough to be used in simulations. Furthermore, different solution techniques are compared, namely projection-type methods (which solve the momentum equations and a pressure correction equation) and fully coupled methods (in which the momentum and continuity equations are solved together). It is shown that an explicit discretization of the mixed derivatives leads to stability problems.
Further, analytical estimates of the eigenvalue distribution for three different preconditioners, applied to the transformed system arising after discretization and linearization of the momentum and continuity equations, are provided. We propose to apply a block Gauss-Seidel preconditioner to the transformed system. The analysis shows that this preconditioner clusters the eigenvalues around unity independently of the transformation step; this is not the case for the other preconditioners applied to the transformed system discussed in the thesis. The block Gauss-Seidel preconditioner has also shown the best behavior (among all preconditioners discussed in the thesis) in numerical experiments. A further contribution consists of the comparison and validation of numerical algorithms applied in simulations of non-Newtonian fluids modeled by time-integral constitutive equations. Numerical results from simulations of dilute polymer solutions, described by the integral Oldroyd B model, have shown very good quantitative agreement with the results obtained by the differential Oldroyd B counterpart in a 4:1 planar contraction domain at low Weissenberg numbers; in this case, the Weissenberg number is varied by changing the relaxation time. However, contrary to the differential Oldroyd B model, the integral one allows stable simulations also in the range of high Weissenberg numbers. Moreover, very good agreement with experimental observations has been achieved. Simulations of concentrated polymer solutions (polystyrene and polybutadiene solutions), modeled by the integral Doi-Edwards model supplemented by chain-length fluctuations, have shown very good qualitative agreement with the results obtained by its differential approximation in a 4:1:4 constriction domain. Again, much higher Weissenberg numbers can be reached when the integral model is used.
Moreover, very good quantitative agreement with experimental data for a polystyrene solution is obtained for the first normal stress difference and the shear viscosity, defined here as the quotient of shear stress and shear rate. Finally, the two methods used for approximating the time-integral constitutive equation, namely the Deformation Field Method (DFM) and the Backward Lagrangian Particle Method (BLPM), are compared. In the BLPM the particle paths are recalculated at every time step of the simulation, which has not been tried before. The results show that in the considered geometries both methods give similar results.
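The isotropic Carreau law mentioned above as the starting point of the anisotropic extension is a simple algebraic relation between shear rate and viscosity. The parameter names below are generic placeholders, not the calibration used in the thesis.

```python
def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """Carreau shear viscosity:
    eta = eta_inf + (eta0 - eta_inf) * (1 + (lam * shear_rate)**2)**((n - 1) / 2),
    interpolating between the zero-shear plateau eta0 and the
    infinite-shear plateau eta_inf (shear-thinning for n < 1)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

# at rest the fluid shows its zero-shear viscosity
eta_rest = carreau_viscosity(0.0, 10.0, 0.1, 1.0, 0.5)   # -> 10.0
```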

Over the last decades, mathematical modeling has reached nearly all fields of natural science. The abstraction and reduction to a mathematical model has proven to be a powerful tool to gain a deeper insight into physical and technical processes. The increasing computing power has made numerical simulations available for many industrial applications. In recent years, mathematicians and engineers have turned their attention to modeling solid materials. New challenges have been found in the simulation of solids and fluid-structure interactions. In this context, it is indispensable to study the dynamics of elastic solids. Elasticity is a main feature of solid bodies, but it places great demands on the numerical treatment. There exists a multitude of commercial tools to simulate the behavior of elastic solids; however, the majority of these software packages consider quasi-stationary problems. In the present work, we are interested in highly dynamical problems, e.g. the rotation of a solid. The applicability to free-boundary problems is a further emphasis of our considerations. In recent years, meshless or particle methods have attracted more and more attention; in many fields of numerical simulation these methods are on a par with classical methods or superior to them. In this work, we present the Finite Pointset Method (FPM), which uses a moving least squares particle approximation operator. The application of this method to various industrial problems at the Fraunhofer ITWM has shown that FPM is particularly suitable for highly dynamical problems with free surfaces and strongly changing geometries. Thereby, FPM offers exactly the features that we require for the analysis of the dynamics of solid bodies. In the present work, we provide a numerical scheme capable of simulating the behavior of elastic solids. We present the system of partial differential equations describing the dynamics of elastic solids and show its hyperbolic character.
In particular, we focus our attention to the constitutive law for the stress tensor and provide evolution equations for the deviatoric part of the stress tensor in order to circumvent limitations of the classical Hooke's law. Furthermore, we present the basic principle of the Finite Pointset Method. In particular, we provide the concept of upwinding in a given direction as a key ingredient for stabilizing hyperbolic systems. The main part of this work describes the design of a numerical scheme based on FPM and an operator splitting to take the different processes within a solid body into account. Each resulting subsystem is treated separately in an adequate way. Hereby, we introduce the notion of system-inherent directions and dimensional upwinding. Finally, a coupling strategy for the subsystems and results are presented. We close this work with some final conclusions and an outlook on future work.
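The upwinding idea named above as the key stabilisation ingredient for hyperbolic systems can be illustrated on a fixed 1D grid. FPM itself works on moving point clouds with moving least squares approximations, so this is only a schematic grid-based analogue with periodic boundaries and illustrative parameters.

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One explicit first-order upwind step for u_t + a u_x = 0 on a
    periodic grid; the advection speed a may have either sign."""
    ap = np.maximum(a, 0.0)   # information travelling to the right
    am = np.minimum(a, 0.0)   # information travelling to the left
    return (u
            - dt / dx * ap * (u - np.roll(u, 1))      # backward difference
            - dt / dx * am * (np.roll(u, -1) - u))    # forward difference

# advect a Gaussian bump; CFL number a*dt/dx = 0.5 keeps the scheme stable
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
for _ in range(40):
    u = upwind_step(u, 1.0, 0.01, 0.005)
```

Biasing the difference stencil against the flow direction is what keeps the scheme stable; a centered stencil would be unconditionally unstable for this explicit update.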

In this thesis, we combine Groebner basis techniques with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses,
and combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact input for the Groebner basis computations, we can significantly
reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition,
the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing a Boolean Groebner basis of the given ideal directly is inefficient when we want to eliminate most of the variables from a big system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection.
We also optimize the Buchberger-Moeller algorithm for the lexicographical ordering and compare it with Brickenstein's interpolation algorithm.
Finally, we apply a combination of Groebner bases and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model
and reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify global properties of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster on these benchmarks than the commercial verification tool from Onespin.
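A toy version of the Boolean Groebner computations discussed above can be reproduced in SymPy by adjoining the field equations \(v^2+v\) over GF(2), which restrict all solutions to \(\{0,1\}\). The thesis itself works with Singular and much larger systems, so this is only an illustrative sketch with a made-up polynomial system.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
# Boolean system over GF(2): the field equations v**2 + v force all
# solutions to be 0/1; 'lex' matches the elimination-friendly ordering
polys = [x*y + z, y*z + x, x**2 + x, y**2 + y, z**2 + z]
gb = groebner(polys, x, y, z, modulus=2, order='lex')
print(gb.exprs)
```

Every element of the resulting basis vanishes (mod 2) at each Boolean solution of the original system, e.g. at \((0,0,0)\) and \((1,1,1)\).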

This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.

Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres’ thickness which prevents the intersection of the fibres and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro computer tomography. A possible linking of these models to other simulation tools,
for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to advance the mathematical
modelling and computational simulation of non-woven materials.

In this thesis, the quasi-static Biot poroelasticity system in bounded multilayered domains in one and three dimensions is studied. In more detail, in the one-dimensional case, a finite volume discretization for the Biot system with discontinuous coefficients is derived. The discretization results in a difference scheme with harmonic averaging of the coefficients. A detailed theoretical analysis of the obtained discrete model is performed. Error estimates, which establish convergence rates for both the primary and the flux unknowns, are derived. In addition, modified and more accurate discretizations, which can be applied when the interface position coincides with a grid node, are obtained; these discretizations yield second order convergence of the fluxes of the problem. Finally, a solver for the produced system of linear equations is developed and extensively tested, and a number of numerical experiments, which confirm the theoretical considerations, are performed. In the three-dimensional case, the finite volume discretization of the system involves the construction of special interpolating polynomials in the dual volumes. These polynomials are derived so that they satisfy the same continuity conditions across the interface as the original system of PDEs. This technique allows us to obtain a difference scheme which provides accurate computation of the primary as well as the flux unknowns, including the points adjacent to the interface. Numerical experiments, based on the obtained discretization, show second order convergence for auxiliary problems with known analytical solutions. A multigrid solver, which incorporates the features of the discrete model, is developed in order to solve efficiently the linear system produced by the finite volume discretization of the three-dimensional problem. The crucial point is to derive problem-dependent restriction and prolongation operators.
Such operators are a well-known remedy for scalar PDEs with discontinuous coefficients. Here, these operators are derived for a system of PDEs, taking into account the interdependence of the different unknowns within the system. In the derivation, the interpolating polynomials from the finite volume discretization are employed again, thus linking the discretization and the solution processes. The developed multigrid solver is tested on several model problems. Numerical experiments show that, due to the proper problem-dependent intergrid transfer, the multigrid solver is robust with respect to the discontinuities of the coefficients of the system. In the end, the poroelasticity system with discontinuous coefficients is used to model a real problem. The Biot model describing this problem is treated numerically, i.e., discretized by the developed finite volume techniques and then solved by the constructed multigrid solver. Physical characteristics of the process, such as the displacement of the skeleton, the pressure of the fluid and the components of the stress tensor, are calculated and presented at certain cross-sections.
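The harmonic averaging of discontinuous coefficients mentioned for the one-dimensional discretization can be sketched for the scalar model problem \(-(k u')' = f\); the Biot system itself couples several unknowns, so this is only the scalar prototype with illustrative data.

```python
import numpy as np

def solve_diffusion_fv(k, f, h):
    """Cell-centred finite volumes for -(k u')' = f on (0,1), u(0)=u(1)=0.
    Interior face transmissibilities are harmonic averages of the adjacent
    cell coefficients, which enforces flux continuity at coefficient jumps."""
    n = len(k)
    kf = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # harmonic means at faces
    A = np.zeros((n, n))
    for i in range(n):
        left = 2.0 * k[i] if i == 0 else kf[i - 1]     # half-cell at boundary
        right = 2.0 * k[i] if i == n - 1 else kf[i]
        A[i, i] = (left + right) / h**2
        if i > 0:
            A[i, i - 1] = -kf[i - 1] / h**2
        if i < n - 1:
            A[i, i + 1] = -kf[i] / h**2
    return np.linalg.solve(A, f)

# coefficient jumping from 1 to 10 at the midpoint
n = 50
h = 1.0 / n
xc = (np.arange(n) + 0.5) * h
k = np.where(xc < 0.5, 1.0, 10.0)
u = solve_diffusion_fv(k, np.ones(n), h)
```

An arithmetic average at the faces would overestimate the flux across the jump; the harmonic mean is the choice consistent with flux continuity.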

The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest-rate-related products that require advanced numerical pricing methods. Examples in this respect are products with a complicated strong path-dependence such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other hand, the one-factor Hull and White (1990) type of short rate models, standard in the literature, allows only for a perfect correlation between all continuously compounded spot rates or Libor rates and is thus not suited for pricing innovative products depending on several Libor rates, such as, for example, a "steepener" option. One possible solution to this problem is delivered by two-factor short rate models, and in this thesis we consider a two-factor Hull and White (1990) type of short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. In this thesis, we often choose to use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool due to their flexibility and fast convergence and (when there is no closed-form solution) compare their results with fine-grid Monte Carlo simulations. For the purpose of pricing the already mentioned innovative short-rate-related products, we offer and examine two different lattice construction methods for the two-factor Hull-White type of short rate process, which can easily deal both with the mean reversion of the underlying process and with the strong path-dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem, typical for existing two-factor tree constructions, of obtaining negative "risk-neutral probabilities".
With a variety of numerical examples, we show that this leads to stable results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, noting that Chan et al. (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of Cheyette (1994) type. In this context, we examine the application of the two offered lattice construction methods and compare their results with Monte Carlo simulation results for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method we extend and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples we compare again the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. In their classical form, cumulative Parisian barrier options have been priced both analytically (in quasi-closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm, see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we offer an additional tree construction method, which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities. The advantages of the offered method are, on the one hand, that the conditional survival probabilities are easier to calculate than the closed-form solution itself and, on the other hand, that this tree construction is very flexible in the sense that it allows easy incorporation of additional features, such as, e.g., a forward-start feature.
The obtained results are better than the Forward Shooting Grid tree ones and very close to the analytical quasi-closed-form solution. Finally, we turn our attention to pricing another type of innovative interest-rate-related product, namely the longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that combines principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally offer new stochastic extensions of the classical (deterministic) models of mortality, such as the Gompertz and the Makeham model. Finally, we compare and analyze the results of applying all considered models to the pricing of a longevity bond on the longevity of German males.
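The classical Gompertz law mentioned above admits a closed-form survival probability, obtained by integrating the force of mortality. The parameter values in the usage example are generic placeholders, not the calibrated German-data values from the thesis.

```python
import math

def gompertz_survival(x, t, B, c):
    """Probability that an x-year-old survives t more years under the
    Gompertz force of mortality mu(age) = B * c**age; integrating mu from
    x to x+t gives S = exp(-(B / ln c) * c**x * (c**t - 1))."""
    return math.exp(-B / math.log(c) * c**x * (c**t - 1.0))

# illustrative parameters (B, c chosen for demonstration only)
p10 = gompertz_survival(65, 10, 3e-5, 1.1)   # 10-year survival of a 65-year-old
```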

The goal of this work is to develop a simulation-based algorithm allowing the prediction
of the effective mechanical properties of textiles on the basis of their microstructure
and the corresponding fiber properties. This method can later be used to optimize the
microstructure in order to obtain a better stiffness or strength of the corresponding fiber
material. An additional aspect of the thesis is that we take into account the microcontacts
between the fibers of the textile. A further aspect is the accounting for the thickness of thin fibers in the
textile: introducing an additional asymptotics with respect to a small parameter,
the ratio between the thickness and the representative length of the fibers, allows a
reduction of the local contact problems between fibers to one-dimensional problems, which
reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts
between fibers is studied. The textile is modeled by introducing small geometrical
parameters: the periodicity of the microstructure and the characteristic
diameter of fibers. The contact linear elasticity problem is considered. A two-scale
approach is used for obtaining the effective mechanical properties.
An algorithm using asymptotic two-scale homogenization for the computation of the
effective mechanical properties of textiles with periodic rod or fiber microstructure
is proposed. The algorithm is based on consecutively passing to the asymptotics
with respect to the in-plane period and the characteristic diameter of the fibers. This
allows us to arrive at the equivalent homogenized problem and to reduce the dimension
of the auxiliary problems. Subsequent numerical simulations of the cell problems give
the effective material properties of the textile.
The homogenization of the boundary conditions on the vanishing out-of-plane interface
of a textile or fiber-structured layer is also studied. By introducing additional
auxiliary functions into the formal asymptotic expansion for a heterogeneous
plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous
Neumann boundary condition are deduced; the condition is incorporated into the right-hand
side of the homogenized problem via effective out-of-plane moduli.
FiberFEM, a C++ finite element code for solving contact elasticity problems, is
developed. The code is based on the implementation of the algorithm for the contact
between fibers, proposed in the thesis.
Numerical examples of the homogenization of geotextiles and wovens are obtained in this
work by an implementation of the developed algorithm. The effective material moduli
are computed numerically using the finite element solutions of the auxiliary contact
problems obtained by FiberFEM.

This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles on the submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process since, due to the nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: a huge dominance (up to \(10^6\) times) of the flow over the diffusion, and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating the distribution functions on a compact interval, which are of the main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the Focusing stage under given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
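The thesis develops a Multilevel Monte Carlo method for distribution functions; the underlying level-coupling idea can be sketched in its classical form for an expectation, here \(E[S_T]\) of a geometric Brownian motion under Euler discretisation. All parameter values are illustrative, and this sketch is not the distribution-function variant developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_gbm_mean(L, N, S0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Multilevel Monte Carlo estimate of E[S_T] for a GBM discretised by
    Euler-Maruyama; level l uses 2**l time steps, and fine/coarse paths on
    each level share the same Brownian increments (the coupling that makes
    the level corrections have small variance)."""
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dt = T / nf
        dW = rng.standard_normal((N, nf)) * np.sqrt(dt)
        Sf = np.full(N, S0)                     # fine path
        for k in range(nf):
            Sf = Sf * (1 + mu * dt + sigma * dW[:, k])
        if l == 0:
            est += Sf.mean()                    # base level
        else:
            Sc = np.full(N, S0)                 # coarse path: paired increments
            for k in range(nf // 2):
                Sc = Sc * (1 + mu * 2 * dt + sigma * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
            est += (Sf - Sc).mean()             # telescoping correction
    return est

# the exact value is S0 * exp(mu * T), about 1.0513 here
print(mlmc_gbm_mean(L=4, N=20000))
```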

This thesis deals with the application of binomial option pricing in a single-asset Black-Scholes market and its extension to multi-dimensional situations. Although the binomial approach is, in principle, an efficient method for lower-dimensional valuation problems, there are at least two main problems regarding its application: Firstly, traded options often exhibit discontinuities, so that the Berry-Esséen inequality is in general tight; i.e. conventional tree methods converge no faster than with order \(1/\sqrt{N}\). Furthermore, they suffer from an irregular convergence behaviour that impedes achieving a higher order of convergence via extrapolation methods. Secondly, in multi-asset markets conventional tree construction methods cannot ensure well-defined transition probabilities for arbitrary correlation structures between the assets. As a major aim of this thesis, we present two approaches to get binomial trees into shape in order to overcome these problems in applications: the optimal drift model for the valuation of single-asset options and the decoupling approach to multi-dimensional option pricing. The new valuation methods are embedded into a self-contained survey of binomial option pricing, which focuses on the convergence behaviour of binomial trees. The optimal drift model is a new one-dimensional binomial scheme that can lead to convergence of order \(o(1/N)\) by exploiting the specific structure of the valuation problem under consideration. As a consequence, it has the potential to outperform benchmark algorithms. The decoupling approach is presented as a universal construction method for multi-dimensional trees. The corresponding trees are well-defined for an arbitrary correlation structure of the underlying assets. In addition, they yield a more regular convergence behaviour. In fact, the sawtooth effect can even vanish completely, so that extrapolation can be applied.

Semiparametric estimation of conditional quantiles for time series, with applications in finance
(2003)

The estimation of conditional quantiles has become an increasingly important issue in insurance and financial risk management. The stylized facts of financial time series data have rendered direct applications of extreme value theory methodologies to the estimation of extreme conditional quantiles inappropriate. On the other hand, quantile regression based procedures work well in the nonextreme parts of a given dataset but break down at extreme probability levels. In order to solve this problem, we combine nonparametric regression for time series and extreme value theory approaches in the estimation of extreme conditional quantiles for financial time series. To do so, a class of time series models that is similar to nonparametric AR-(G)ARCH models but does not depend on distributional and moment assumptions is introduced. We discuss estimation procedures for the nonextreme levels using these models and consider the estimates obtained by inverting conditional distribution estimators as well as by direct estimation using the Koenker-Bassett (1978) version for kernels. Under some regularity conditions, the asymptotic normality and uniform convergence, with rates, of the conditional quantile estimator for strongly mixing time series are established. We study the estimation of the scale function in the introduced models using similar procedures and show that under some regularity conditions the scale estimate is weakly consistent and asymptotically normal. The application of the introduced models to the estimation of extreme conditional quantiles is achieved by augmenting them with methods from extreme value theory. It is shown that the overall extreme conditional quantile estimator is consistent. A Monte Carlo study is carried out to illustrate the good performance of the estimates, and real data are used to demonstrate the estimation of Value-at-Risk and conditional expected shortfall in financial risk management; their multiperiod predictions are also discussed.
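The Koenker-Bassett check function and a local-constant kernel estimate of a conditional quantile, of the kind used for the nonextreme levels, can be sketched as follows. This is a brute-force minimiser over the observed values for illustration only; the thesis's estimators and their asymptotics are far more refined.

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def local_quantile(x0, x, y, tau, h):
    """Local-constant conditional tau-quantile at x0: minimise the
    Gaussian-kernel-weighted check loss over candidate values of y."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    candidates = np.sort(y)
    losses = [np.sum(w * check_loss(y - q, tau)) for q in candidates]
    return candidates[int(np.argmin(losses))]

# sanity check: with equal weights this reduces to the usual sample quantile
xs = np.zeros(11)
ys = np.arange(11, dtype=float)
med = local_quantile(0.0, xs, ys, 0.5, 1.0)   # -> 5.0
```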

Nonlinear diffusion filtering of images using the topological gradient approach to edges detection
(2007)

In this thesis, the problem of nonlinear diffusion filtering of gray-scale images is theoretically and numerically investigated. In the first part of the thesis, we derive the topological asymptotic expansion of a Mumford-Shah-like functional. We show that the dominant term of this expansion can be regarded as a criterion for edge detection in an image. In the numerical part, we propose a finite volume discretization for the Catté et al. and the Weickert diffusion filter models. The proposed discretization is based on the integro-interpolation method introduced by Samarskii. The numerical schemes are derived for the case of uniform and nonuniform cell-centered grids of the computational domain \(\Omega \subset \mathbb{R}^2\). In order to generate a nonuniform grid, an adaptive coarsening technique is proposed.
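A simplified explicit sketch of one Perona-Malik-type nonlinear diffusion step; the Catté et al. model additionally presmooths the gradient before evaluating the diffusivity, and the thesis uses a finite volume discretization, so this finite-difference version with periodic boundaries and generic parameters is only illustrative.

```python
import numpy as np

def diffuse_step(u, lam=0.1, kappa=10.0):
    """One explicit nonlinear-diffusion step: the diffusivity
    g(d) = 1 / (1 + (d / kappa)**2) is small across strong edges, so
    edges are preserved while flat regions are smoothed.
    Periodic boundaries via np.roll, for brevity."""
    dN = np.roll(u, 1, 0) - u
    dS = np.roll(u, -1, 0) - u
    dW = np.roll(u, 1, 1) - u
    dE = np.roll(u, -1, 1) - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + lam * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)

# a noisy step image: smoothing acts inside the two halves, the edge survives
rng = np.random.default_rng(0)
img = np.where(np.arange(32)[None, :] < 16, 0.0, 100.0) + rng.normal(0, 1, (32, 32))
out = img
for _ in range(20):
    out = diffuse_step(out)
```

The update conserves the image mean exactly (the pairwise fluxes cancel), which is the discrete analogue of the divergence form of the diffusion equation.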

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived, a proper algorithm is formulated, and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium; this plays an essential role in the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium in one and three dimensions are derived. With these conditions, the continuous fully coupled system of equations in one and three dimensions is formulated. In the one-dimensional case the extreme cases, i.e. the fluid-fluid interface and the fluid-impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions. The matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced, and corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented.
For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.
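Although far simpler than the Biot-Stokes system, a relaxed Dirichlet-Neumann iteration for a one-dimensional two-domain interface problem conveys the flavour of such iterative domain decomposition coupling. The toy problem, parameters and relaxation below are our own illustration, not the algorithm of the thesis; since the subdomain solutions of this problem are linear functions, each subdomain solve is closed-form.

```python
# Relaxed Dirichlet-Neumann iteration for -(a u')' = 0 on [0,1],
# u(0)=0, u(1)=1, with a=a1 on [0,1/2] and a=a2 on [1/2,1].
# Closed-form subdomain solves:
#   left  (Dirichlet value lam at the interface): flux  a1 * lam / 0.5
#   right (Neumann flux q at the interface, u(1)=1): trace u(1/2) = 1 - q/(2*a2)
a1, a2 = 1.0, 2.0
theta = 0.6        # relaxation parameter
lam = 0.0          # initial guess for the interface value u(1/2)
for _ in range(50):
    q = a1 * lam / 0.5             # flux from the left (Dirichlet) solve
    trace = 1.0 - q / (2.0 * a2)   # interface value from the right (Neumann) solve
    lam = theta * trace + (1.0 - theta) * lam
# flux continuity a1*s1 = a2*s2 gives the exact interface value u(1/2) = 2/3
print(lam)
```

For this choice of coefficients the error contracts by a factor |1 - 1.5*theta| = 0.1 per iteration, so the interface value converges to 2/3 to machine precision; stopping criteria and parameter choices of the kind discussed in the thesis control exactly this kind of iteration.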

This thesis is devoted to constructive module theory of polynomial graded commutative algebras over a field. It treats the theory of Groebner bases (GB), standard bases (SB) and syzygies, as well as algorithms and their implementations. Graded commutative algebras naturally unify exterior and commutative polynomial algebras. They are graded non-commutative, associative unital algebras over fields and may contain zero-divisors. In this thesis we exploit _a priori_ knowledge of their characteristic (super-commutative) structure to develop direct symbolic methods, algorithms and implementations that are intrinsic to graded commutative algebras and practically efficient. For our symbolic treatment we represent them as polynomial algebras and redefine the product rule in order to allow super-commutative structures and, in particular, zero-divisors. Using this representation we give a convenient characterization of a GB and an algorithm for its computation. We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be _local_, generalizing Mora's algorithm (in a similar fashion as G.-M. Greuel and G. Pfister, by allowing local or mixed monomial orderings) and working with SBs. In this general setting we prove a generalized Buchberger criterion, which shows that syzygies of leading terms play the central role in SB and syzygy module computations. Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm, which can be formulated particularly close to our implementation. On the implementation side we have further developed the Singular non-commutative subsystem Plural in order to allow polynomial arithmetic and more involved basic non-commutative computer algebra computations (e.g. S-polynomials, GBs) to be easily implementable for specific algebras. At the moment, graded commutative algebra-related algorithms are implemented in this framework. Benchmarks show that our new algorithms and their implementation are practically efficient. The developed framework has many applications in various branches of mathematics and theoretical physics. They include the computation of sheaf cohomology, coordinate-free verification of affine geometry theorems and the computation of cohomology rings of p-groups, which are partially described in this thesis.
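As a small illustration of the redefined product rule, the following sketch multiplies monomials in a purely exterior algebra, i.e. with anticommuting variables only (a graded commutative algebra in general also carries commuting, possibly local, variables, which are handled in the thesis and its Plural implementation; the function name and the tuple representation are ours).

```python
def wedge(m1, m2):
    """Multiply two exterior monomials, given as strictly increasing tuples of
    variable indices. Returns (sign, monomial), or (0, ()) if a variable repeats:
    anticommuting variables square to zero, so the algebra has zero-divisors."""
    if set(m1) & set(m2):
        return 0, ()
    # count the transpositions needed to merge the factors (the Koszul sign)
    sign, merged = 1, list(m1)
    for x in m2:
        pos = len(merged)
        while pos > 0 and merged[pos - 1] > x:
            pos -= 1
        sign *= (-1) ** (len(merged) - pos)
        merged.insert(pos, x)
    return sign, tuple(merged)

print(wedge((1,), (1,)))    # e1*e1 = 0: a zero-divisor
print(wedge((2,), (1,)))    # e2*e1 = -e1*e2
print(wedge((1, 3), (2,)))  # (e1 e3)*e2 = -e1 e2 e3
```

The sign bookkeeping in `wedge` is exactly what distinguishes the super-commutative product rule from the ordinary commutative one.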

This thesis introduces so-called cone scalarising functions. By construction, they are compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. In addition, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity are then analysed. Potential problems are demonstrated by a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.
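The cone scalarising functions themselves are not spelled out in this abstract. For the standard ordering cone \(\mathbb{R}^m_+\), the weighted Chebyshev function \(\max_i w_i (f_i(x) - r_i)\) with reference point \(r\) is a classical example of a scalarisation compatible with the partial order: its minimisers are weakly efficient. A hypothetical discrete illustration (the outcome set, weights and reference point are invented for the example):

```python
def chebyshev(y, w, r):
    """Weighted Chebyshev scalarising function for the ordering cone R^m_+."""
    return max(wi * (yi - ri) for wi, yi, ri in zip(w, y, r))

# hypothetical discrete outcome set, minimisation in both objectives;
# (3.0, 3.0) is dominated by (2.0, 2.0)
outcomes = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
w, r = (1.0, 1.0), (0.0, 0.0)
best = min(outcomes, key=lambda y: chebyshev(y, w, r))
print(best)  # with equal weights the balanced point (2.0, 2.0) is selected
```

Varying the weights \(w\) or the reference point \(r\) parametrises (weakly) efficient outcomes, which is the kind of parametrisation whose quality the thesis analyses.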

This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they admit minimization by efficient algorithms,
and many optimization problems, extending well beyond the classical image restoration
problems of denoising, deblurring and inpainting, can be formulated as the
minimization of a convex objective function.
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In recent decades, efficient optimization algorithms based on the Fenchel
transform and the concept of infimal convolution have been developed.
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
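The relation between the proximal mapping and the Moreau-Yosida envelope described above can be made concrete for \(f(x) = |x|\), whose proximal mapping is soft thresholding and whose envelope is the Huber function (an illustrative example of ours, not taken from the thesis):

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal mapping of f(x) = |x|: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_env_abs(x, lam):
    """Moreau-Yosida envelope of |x|: inf_y |y| + (x-y)^2/(2*lam), the Huber function."""
    return np.where(np.abs(x) <= lam, x**2 / (2 * lam), np.abs(x) - lam / 2)

x = np.linspace(-2, 2, 9)
lam = 0.5
# the envelope approximates |x| from below and shares its minimum value 0 at x = 0
assert np.all(moreau_env_abs(x, lam) <= np.abs(x) + 1e-12)
# gradient of the envelope = (x - prox(x)) / lam, the relation used in the text
grad = (x - prox_abs(x, lam)) / lam
```

For \(|x| \le \lambda\) the gradient equals \(x/\lambda\) and for \(|x| > \lambda\) it saturates at \(\pm 1\), which is exactly the derivative of the Huber function; this is the gradient-step interpretation of the proximal mapping mentioned above.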
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which incorporates a sparsity regularizer into the model.
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence,
to solve a hyperspectral unmixing model in the case
when no prior knowledge of pure spectra is given.
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds of nonpositive curvature,
and then for functions defined on Hadamard spaces.
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
Finally, we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, which have both been used in the literature and whose
equivalence had not previously been established.

The thesis is concerned with multiscale approximation by means of radial basis functions on hierarchically structured spherical grids. A new approach is proposed to construct a biorthogonal system of locally supported zonal functions. Using this biorthogonal system, a spherical fast wavelet transform (SFWT) is established. Finally, based on the wavelet analysis, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators are shown to be efficiently and economically solvable.

This thesis is concerned with methods for the classification of ovoids in quadratic spaces. The algorithms developed for this purpose are applied mainly in eight-dimensional spaces, specifically over the fields GF(7), GF(8) and GF(9). For various, mostly small, cyclic groups, the ovoids invariant under these groups are determined. All ovoids arising in this search are already known; however, restrictions are obtained on the stabilizers of possibly existing, as yet unknown ovoids.

Intersection Theory on Tropical Toric Varieties and Compactifications of Tropical Parameter Spaces
(2011)

We study toric varieties over the tropical semifield. We define tropical cycles inside these toric varieties and extend the stable intersection of tropical cycles in R^n to these toric varieties. In particular, we show that every tropical cycle can be degenerated into a sum of torus-invariant cycles. This allows us to tropicalize algebraic cycles of toric varieties over an algebraically closed field with non-Archimedean valuation. We see that the tropicalization map is a homomorphism on cycles and an isomorphism on cycle classes. Furthermore, we can use projective toric varieties to compactify known tropical varieties and study their combinatorics. We do this for the tropical Grassmannian in the Plücker embedding and compactify the tropical parameter space of rational degree d curves in tropical projective space using Chow quotients of the tropical Grassmannian.

Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that
are characteristic of the microstructure can be simulated and their effective elastic
properties then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we demonstrate the effect of the anisotropic pattern, the choice of the space of
translates, and the mixing rules, as well as the convergence of the method, on two- and
three-dimensional examples.
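The basic scheme of Moulinec and Suquet [2] is easiest to see in its scalar one-dimensional analogue (periodic conductivity instead of elasticity; this reduction is our illustration, not the discretization of the thesis). The Lippmann-Schwinger fixed point is iterated with the reference Green operator applied in Fourier space; for a two-phase laminate the effective coefficient is exactly the harmonic mean, which makes the result easy to check.

```python
import numpy as np

# 1D scalar analogue of the Moulinec-Suquet basic scheme: solve the cell problem
# d/dx ( a(x) e(x) ) = 0 with prescribed mean field <e> = E for periodic a(x),
# via the fixed point  e <- E - Gamma0 * ( a e - a0 e ),
# where Gamma0 is diagonal in Fourier space (in 1D: 1/a0 on nonzero frequencies).
n = 64
a = np.where(np.arange(n) < n // 2, 1.0, 4.0)   # two-phase laminate
a0 = 0.5 * (a.min() + a.max())                  # reference medium
E = 1.0
e = np.full(n, E)                               # start from the constant field
for _ in range(200):
    tau = (a - a0) * e                # polarization with respect to the reference medium
    tau_hat = np.fft.fft(tau)
    e_hat = -tau_hat / a0             # apply -Gamma0 frequency-wise
    e_hat[0] = n * E                  # enforce the prescribed mean (DC component of the DFT)
    e = np.fft.ifft(e_hat).real
a_eff = np.mean(a * e) / E
# exact effective coefficient of a 50/50 laminate: harmonic mean 1/(0.5/1 + 0.5/4) = 1.6
print(a_eff)
```

The iteration contracts with factor max|a - a0| / a0 = 0.6 for this reference choice, and at the fixed point the flux a(x)e(x) is constant on the grid, so the computed effective coefficient reproduces the harmonic mean.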
References
[1] S. Brisard and L. Dormieux. “FFT-based methods for the mechanics of composites: A general variational framework”. In: Computational Materials Science 49.3 (2010), pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. “A numerical method for computing the overall response of nonlinear composites with complex microstructure”. In: Computer Methods in Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.

In traditional portfolio optimization under the threat of a crash, the investment horizon or time to maturity is neglected. In developing the so-called crash hedging strategies (portfolio strategies which make an investor indifferent to the occurrence of an uncertain (downward) jump of the price of the risky asset), the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of non-linear differential equations which themselves are consequences of an equilibrium strategy. Here the situation of changing market coefficients after a possible crash is considered, both for the case of logarithmic utility and for general utility functions. A cost-benefit analysis of the crash hedging strategy is carried out, as well as a comparison of the crash hedging strategy with the optimal portfolio strategies given in traditional crash models. Moreover, it is shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is to model crash hedging strategies in situations where both the number and the heights of the crashes are uncertain but bounded. Taking into account the additional information of the probability of a possible crash leads to the development of the q-quantile crash hedging strategy.
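Schematically, the worst-case optimization problem behind such a crash hedging strategy can be written as follows (a standard formulation of the worst-case crash setting; the notation here is ours, not quoted from the thesis):

\[ \sup_{\pi}\, \inf_{(\tau,\, k)}\ \mathbb{E}\big[\, U\big(X^{\pi}(T)\big) \big], \qquad X^{\pi}(\tau) = \big(1 - \pi(\tau)\, k\big)\, X^{\pi}(\tau-), \]

where \(\tau\) is the uncertain crash time, \(k \in [0, k^{*}]\) the uncertain crash height, \(\pi\) the fraction of wealth held in the risky asset, and \(X^{\pi}\) follows the usual diffusion dynamics between crashes. The crash hedging strategy is the \(\pi\) that makes the investor indifferent to whether a crash occurs at all, which leads to the non-linear differential equations mentioned above.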

This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems, and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska transaction costs.
For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula, instead of the more restrictive, traditional approach via quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively. In addition to the analytical results, we study a novel application of the menu cost problem in management science: we consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much to invest in marketing.
For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach, leading-order optimal strategies are constructed and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established.