## Department of Mathematics (Fachbereich Mathematik)

### Filter

#### Year of Publication

#### Document Type

- Dissertation (225)

#### Keywords

- Algebraische Geometrie (6)
- Finanzmathematik (5)
- Optimization (5)
- Portfolio Selection (5)
- Stochastische dynamische Optimierung (5)
- Navier-Stokes-Gleichung (4)
- Numerische Mathematik (4)
- Portfolio-Optimierung (4)
- portfolio optimization (4)
- Computeralgebra (3)
- Elastizität (3)
- Erwarteter Nutzen (3)
- Finite-Volumen-Methode (3)
- Gröbner-Basis (3)
- Homogenisierung <Mathematik> (3)
- Inverses Problem (3)
- Numerische Strömungssimulation (3)
- Optionspreistheorie (3)
- Portfoliomanagement (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Wavelet (3)
- optimales Investment (3)
- Asymptotic Expansion (2)
- Asymptotik (2)
- Bewertung (2)
- Derivat <Wertpapier> (2)
- Elasticity (2)
- Endliche Geometrie (2)
- Erdmagnetismus (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Geometric Ergodicity (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hochskalieren (2)
- IMRT (2)
- Kreditrisiko (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Local smoothing (2)
- Mehrskalenanalyse (2)
- Mehrskalenmodell (2)
- Modulraum (2)
- Partial Differential Equations (2)
- Partielle Differentialgleichung (2)
- Portfolio Optimization (2)
- Poröser Stoff (2)
- Regularisierung (2)
- Schnitttheorie (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Transaktionskosten (2)
- Upscaling (2)
- Vektorwavelets (2)
- White Noise Analysis (2)
- curve singularity (2)
- domain decomposition (2)
- duality (2)
- finite volume method (2)
- geomagnetism (2)
- homogenization (2)
- illiquidity (2)
- interface problem (2)
- isogeometric analysis (2)
- mesh generation (2)
- optimal investment (2)
- splines (2)
- "Slender-Body"-Theorie (1)
- 3D image analysis (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- Ableitungsfreie Optimierung (1)
- Advanced Encryption Standard (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraische Abhängigkeit der kommutierenden Elemente (1)
- Algebraischer Funktionenkörper (1)
- Annulus (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Asiatische Option (1)
- Asymptotic Analysis (2)
- Asymptotische Entwicklung (1)
- Ausfallrisiko (1)
- Automorphismengruppe (1)
- Autoregressive Hilbertian model (1)
- B-Spline (1)
- Barriers (1)
- Basket Option (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Beschichtungsprozess (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bildsegmentierung (1)
- Binomialbaum (1)
- Biorthogonalisation (1)
- Biot-Poroelastizitätsgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Boltzmann Equation (1)
- Bondindizes (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFD (1)
- CHAMP (1)
- CPDO (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Charakter <Gruppentheorie> (1)
- Chi-Quadrat-Test (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Circle Location (1)
- Coarse graining (1)
- Cohen-Lenstra heuristic (1)
- Combinatorial Optimization (1)
- Commodity Index (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer algebra (1)
- Computeralgebra System (1)
- Conditional Value-at-Risk (1)
- Consistency analysis (1)
- Consistent Price Processes (1)
- Construction of hypersurfaces (1)
- Copula (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Curvature (1)
- Curved viscous fibers (1)
- DSMC (1)
- Darstellungstheorie (1)
- Das Urbild eines Ideals unter einem Morphismus von Algebren (1)
- Debt Management (1)
- Defaultable Options (1)
- Deformationstheorie (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay-Triangulierung (1)
- Differenzenverfahren (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionsprozess (1)
- Discriminatory power (1)
- Diskrete Fourier-Transformation (1)
- Dispersionsrelation (1)
- Dissertation (1)
- Druckkorrektur (1)
- Dünnfilmapproximation (1)
- EM algorithm (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Effizienter Algorithmus (1)
- Effizienz (1)
- Eikonal equation (1)
- Elastische Deformation (1)
- Elastoplastizität (1)
- Elektromagnetische Streuung (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Entscheidungsbaum (1)
- Entscheidungsunterstützung (1)
- Enumerative Geometrie (1)
- Erdöl Prospektierung (1)
- Erwartungswert-Varianz-Ansatz (1)
- Expected shortfall (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extrapolation (1)
- Extreme Events (1)
- Extreme value theory (1)
- FEM (1)
- FFT (1)
- FPM (1)
- Faden (1)
- Fatigue (1)
- Feedforward Neural Networks (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber suspension flow (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite-Elemente-Methode (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fourier-Transformation (1)
- Fredholmsche Integralgleichung (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- GARCH (1)
- GARCH Modelle (1)
- Galerkin-Methode (1)
- Gamma-Konvergenz (1)
- Garbentheorie (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Geodesie (1)
- Geometrische Ergodizität (1)
- Gewichteter Sobolev-Raum (1)
- Gittererzeugung (1)
- Gleichgewichtsstrategien (1)
- Granular flow (1)
- Granulat (1)
- Gravitationsfeld (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Große Abweichung (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner-basis (1)
- Gyroscopic (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamiltonian Path Integrals (1)
- Handelsstrategien (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Hazard Functions (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Helmholtz Type Boundary Value Problems (1)
- Heston-Modell (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- Homogenization (1)
- Homologische Algebra (1)
- Hub Location Problem (1)
- Hydrostatischer Druck (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hyperspektraler Sensor (1)
- Hysterese (1)
- ITSM (1)
- Idealklassengruppe (1)
- Illiquidität (1)
- Image restoration (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Inflation (1)
- Infrarotspektroskopie (1)
- Intensität (1)
- Internationale Diversifikation (1)
- Inverse Problem (1)
- Irreduzibler Charakter (1)
- Isogeometrische Analyse (1)
- Ito (1)
- Jacobigruppe (1)
- Kanalcodierung (1)
- Karhunen-Loève expansion (1)
- Kategorientheorie (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Kombinatorik (1)
- Kommutative Algebra (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Kreditderivate (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurvenschar (1)
- LIBOR (1)
- Lagrangian relaxation (1)
- Laplace transform (1)
- Lattice Boltzmann (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Leading-Order Optimality (1)
- Level set methods (1)
- Lie-Typ-Gruppe (1)
- Lineare partielle Differentialgleichung (1)
- Lippmann-Schwinger equation (1)
- Liquidität (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- MBS (1)
- MKS (1)
- Macaulay’s inverse system (1)
- Marangoni-Effekt (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Mathematical Finance (1)
- Mathematik (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrskalen (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikroelektronik (1)
- Mikrostruktur (1)
- Mixed integer programming (1)
- Modellbildung (1)
- Molekulardynamik (1)
- Momentum and Mass Transfer (1)
- Monte Carlo (1)
- Monte-Carlo-Simulation (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Mosco convergence (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multicriteria optimization (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiresolution Analysis (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- NURBS (1)
- Networks (1)
- Netzwerksynthese (1)
- Neural Networks (1)
- Neuronales Netz (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Niederschlag (1)
- No-Arbitrage (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Mathematik / Algorithmus (1)
- Numerisches Verfahren (1)
- Oberflächenmaße (1)
- Oberflächenspannung (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimierung (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Order (1)
- Ovoid (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Partikel Methoden (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathwise Optimality (1)
- Pedestrian Flow (1)
- Pfadintegral (1)
- Planares Polynom (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- PolyBoRi (1)
- Population Balance Equation (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Preimage of an ideal under a morphism of algebras (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Prox-Regularisierung (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- Quadratischer Raum (1)
- Quantile autoregression (1)
- Quasi-Variational Inequalities (1)
- RKHS (1)
- Radial Basis Functions (1)
- Radiotherapy (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rarefied gas (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regressionsanalyse (1)
- Regularisierung / Stoppkriterium (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Reliability (1)
- Restricted Regions (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikomanagement (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Measures (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Räumliche Statistik (1)
- SWARM (1)
- Scale function (1)
- Schaum (1)
- Schaumzerfall (1)
- Schiefe Ableitung (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösung (1)
- Second Order Conditions (1)
- Semi-Markov-Kette (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Shallow Water Equations (1)
- Shape optimization, gradient based optimization, adjoint method (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Slender body theory (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprung-Diffusions-Prozesse (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Steuer (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Prozesse (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Strukturiertes Finanzprodukt (1)
- Strukturoptimierung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Success Run (1)
- Survival Analysis (1)
- Systemidentifikation (1)
- Sägezahneffekt (1)
- Tail Dependence Koeffizient (1)
- Test for Changepoint (1)
- Thermophoresis (1)
- Thin film approximation (1)
- Tichonov-Regularisierung (1)
- Time Series (1)
- Time-Series (1)
- Time-delay-Netz (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Traffic flow (1)
- Transaction costs (1)
- Trennschärfe <Statistik> (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Two-phase flow (1)
- Unreinheitsfunktion (1)
- Untermannigfaltigkeit (1)
- Upwind-Verfahren (1)
- Utility (1)
- Value at Risk (1)
- Value-at-Risk (1)
- Variationsrechnung (1)
- Vectorfield approximation (1)
- Vektorfeldapproximation (1)
- Vektorkugelfunktionen (1)
- Verschwindungssatz (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Volatilität (1)
- Volatilitätsarbitrage (1)
- Vorkonditionierer (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weißes Rauschen (1)
- White Noise (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Yaglom limits (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zero-dimensional schemes (1)
- Zopfgruppe (1)
- Zufälliges Feld (1)
- Zweiphasenströmung (1)
- abgeleitete Kategorie (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alternating minimization (1)
- alternating optimization (1)
- analoge Mikroelektronik (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- applied mathematics (1)
- archimedean copula (1)
- asian option (1)
- basket option (1)
- benders decomposition (1)
- bending strip method (1)
- binomial tree (1)
- blackout period (1)
- bocses (1)
- boundary value problem (1)
- canonical ideal (1)
- canonical module (1)
- changing market coefficients (1)
- closure approximation (1)
- combinatorics (1)
- composites (1)
- computational finance (1)
- computer algebra (1)
- computeralgebra (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- correlated errors (1)
- coupling methods (1)
- crash (1)
- crash hedging (1)
- credit risk (1)
- curvature (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- derived category (1)
- diffusion models (1)
- discrepancy (1)
- double exponential distribution (1)
- downward continuation (1)
- efficiency loss (1)
- elastoplasticity (1)
- elliptical distribution (1)
- endomorphism ring (1)
- enumerative geometry (1)
- equilibrium strategies (1)
- equisingular families (1)
- face value (1)
- fiber reinforced silicon carbide (1)
- filtration (1)
- financial mathematics (1)
- finite difference schemes (1)
- finite element method (1)
- first hitting time (1)
- float glass (1)
- flood risk (1)
- fluid structure (1)
- fluid structure interaction (1)
- forward-shooting grid (1)
- free surface (1)
- freie Oberfläche (1)
- gebietszerlegung (1)
- gitter (1)
- good semigroup (1)
- graph p-Laplacian (1)
- gravitation (1)
- group action (1)
- großer Investor (1)
- hedging (1)
- heuristic (1)
- hierarchical matrix (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hyperspectral unmixing (1)
- ideal class group (1)
- image analysis (1)
- image denoising (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- integer programming (1)
- integral constitutive equations (1)
- intensity (1)
- inverse optimization (1)
- inverse problem (1)
- jump-diffusion process (1)
- large investor (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- level K-algebras (1)
- level set method (1)
- limit theorems (1)
- linear code (1)
- localizing basis (1)
- longevity bonds (1)
- low-rank approximation (1)
- macro derivative (1)
- market manipulation (1)
- markov model (1)
- martingale optimality principle (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matroid flows (1)
- mean-variance approach (1)
- micromechanics (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- modal derivatives (1)
- model order reduction (1)
- moduli space (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- multi scale (1)
- multi-asset option (1)
- multi-class image segmentation (1)
- multi-level Monte Carlo (1)
- multi-phase flow (1)
- multicategory (1)
- multifilament superconductor (1)
- multigrid method (1)
- multileaf collimator (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative noise (1)
- multiscale denoising (1)
- multiscale methods (1)
- multivariate chi-square-test (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektur (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- non-desarguesian plane (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal stopping (1)
- option pricing (1)
- option valuation (1)
- partial differential equation (1)
- partial information (1)
- path-dependent options (1)
- pattern (1)
- penalty methods (1)
- penalty-free formulation (1)
- petroleum exploration (1)
- planar polynomial (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- potential (1)
- preconditioners (1)
- pressure correction (1)
- primal-dual algorithm (1)
- probability distribution (1)
- projective surfaces (1)
- proximation (1)
- quadrinomial tree (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rare disasters (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- real quadratic number fields (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regression analysis (1)
- regularization methods (1)
- rheology (1)
- sampling (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- second class group (1)
- seismic tomography (1)
- semigroup of values (1)
- sheaf theory (1)
- similarity measures (1)
- singularities (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- spherical approximation (1)
- sputtering process (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastische Arbitrage (1)
- stop- and play-operator (1)
- subgradient (1)
- superposed fluids (1)
- surface measures (1)
- surrogate algorithm (1)
- syzygies (1)
- tail dependence coefficient (1)
- tax (1)
- tensions (1)
- time delays (1)
- topological asymptotic expansion (1)
- toric geometry (1)
- torische Geometrie (1)
- total variation (1)
- total variation spatial regularization (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- tropical geometry (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- value semigroup (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector spherical harmonics (1)
- vectorial wavelets (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- viscoelastic fluids (1)
- volatility arbitrage (1)
- vortex separation (1)
- well-posedness (1)
- worst-case (1)
- worst-case scenario (1)
- Äquisingularität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)

#### Department / Organizational Unit

- Fachbereich Mathematik (225)
- Fraunhofer (ITWM) (2)

The aim of this thesis is the numerical investigation of saturated, stationary, incompressible Newtonian flow in porous media when inertia is not negligible. We focus our attention on the Navier-Stokes system with two pressures derived by two-scale homogenization. The thesis is subdivided into five chapters. After introductory remarks on porous media, filtration laws and upscaling methods, the first chapter closes by stating the basic terminology and mathematical fundamentals. In Chapter 2, we start by formulating the Navier-Stokes equations on a periodic porous medium. By two-scale expansions of the velocity and pressure, we formally derive the Navier-Stokes system with two pressures. For the sake of completeness, known existence and uniqueness results are recalled and a convergence proof is given. Finally, we consider Stokes and Navier-Stokes systems with two pressures with respect to their relation to Darcy's law. Chapters 3 and 4 are devoted to the numerical solution of the nonlinear two-pressure system. To this end, we follow two approaches. The first approach, developed in Chapter 3, is based on a splitting of the Navier-Stokes system with two pressures into micro and macro problems. The splitting is achieved by Taylor expanding the permeability function or by computing the permeability function discretely. The problems to be solved are a series of Stokes and Navier-Stokes problems on the periodicity cell. The Stokes problems are solved by an Uzawa conjugate gradient method. The Navier-Stokes equations are linearized by a least-squares conjugate gradient method, which leads to the solution of a sequence of Stokes problems. The macro problem consists of solving a nonlinear, uniformly elliptic equation of second order. The least-squares linearization is applied to the macro problem, leading to a sequence of Poisson problems. All equations are discretized by finite elements. Numerical results are presented at the end of Chapter 3.
The second approach, presented in Chapter 4, relies on the variational formulation of the Navier-Stokes system with two pressures in a suitable Hilbert space setting. The nonlinear problem is again linearized by the least-squares conjugate gradient method, and we obtain a sequence of Stokes systems with two pressures. For these systems, we propose a fast solution method that relies on pre-computing Stokes problems on the periodicity cell with finite element basis functions acting as right-hand sides. Finally, numerical results are discussed. In Chapter 5 we are concerned with the modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip that takes into account elasticity and flow phenomena, and nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm, and the chapter closes with a numerical investigation of the model with special focus on inertia effects.
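The Uzawa conjugate gradient method used for the cell Stokes problems operates on the pressure Schur complement of a discrete saddle-point system. The sketch below is a hedged, generic illustration in plain NumPy, with a dense solve standing in for the finite element solver; all names and sizes are illustrative, not the thesis's implementation:

```python
import numpy as np

def uzawa_cg(A, B, f, g, tol=1e-10, maxit=200):
    """Conjugate gradients on the Schur complement S = B A^{-1} B^T of the
    saddle-point system [A, B^T; B, 0][u; p] = [f; g]: the algebraic shape
    of a discretized Stokes problem (A: viscous block, B: divergence)."""
    solve_A = lambda b: np.linalg.solve(A, b)   # stands in for an FE solver
    p = np.zeros(B.shape[0])
    u = solve_A(f - B.T @ p)
    r = B @ u - g                               # Schur residual at p
    d = r.copy()
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        Sd = B @ solve_A(B.T @ d)               # one Schur matvec = one A-solve
        alpha = (r @ r) / (d @ Sd)
        p += alpha * d
        r_new = r - alpha * Sd
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    u = solve_A(f - B.T @ p)                    # recover the velocity
    return u, p
```

Each iteration costs one solve with the symmetric positive definite velocity block; in the least-squares linearization of the Navier-Stokes cell problems, a sequence of such Stokes solves is performed.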

Nonlinear diffusion filtering of images using the topological gradient approach to edge detection
(2007)

In this thesis, the problem of nonlinear diffusion filtering of gray-scale images is investigated theoretically and numerically. In the first part of the thesis, we derive the topological asymptotic expansion of a Mumford-Shah-like functional. We show that the dominant term of this expansion can be regarded as a criterion for edge detection in an image. In the numerical part, we propose a finite volume discretization for the Catté et al. and the Weickert diffusion filter models. The proposed discretization is based on the integro-interpolation method introduced by Samarskii. The numerical schemes are derived for uniform and nonuniform cell-centered grids of the computational domain \(\Omega \subset \mathbb{R}^2\). To generate a nonuniform grid, an adaptive coarsening technique is proposed.
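The Catté et al. model is a regularized variant of Perona-Malik nonlinear diffusion. As a minimal sketch of the filtering idea itself (an explicit finite-difference scheme on a uniform grid, not the cell-centered finite volume discretization of the thesis; the contrast parameter `lam` and time step `tau` are illustrative):

```python
import numpy as np

def nonlinear_diffusion(img, lam=0.1, tau=0.2, steps=10):
    """Explicit sketch of Perona-Malik-type nonlinear diffusion: the
    diffusivity g decays where the local contrast is large, so homogeneous
    regions are smoothed while edges are preserved."""
    g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)   # Perona-Malik diffusivity
    u = img.astype(float)
    for _ in range(steps):
        p = np.pad(u, 1, mode='edge')            # Neumann boundary condition
        dN = p[:-2, 1:-1] - u                    # differences to 4 neighbours
        dS = p[2:, 1:-1] - u
        dW = p[1:-1, :-2] - u
        dE = p[1:-1, 2:] - u
        u = u + tau * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```

The time step respects the explicit stability bound for the four-neighbour stencil, and the edge-replicating padding conserves the mean gray value.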

Non-commutative polynomial algebras appear in a wide range of applications, from quantum groups and theoretical physics to linear differential and difference equations. In the thesis, we develop a framework unifying many important algebras in the classes of \(G\)- and \(GR\)-algebras and study their ring-theoretic properties. Let \(A\) be a \(G\)-algebra in \(n\) variables. We establish necessary and sufficient conditions for \(A\) to have a Poincaré-Birkhoff-Witt (PBW) basis. Further on, we show that besides the existence of a PBW basis, \(A\) shares some other properties with the commutative polynomial ring \(\mathbb{K}[x_1,\ldots,x_n]\). In particular, \(A\) is a Noetherian integral domain of Gel'fand-Kirillov dimension \(n\). Both the Krull and the global homological dimension of \(A\) are bounded by \(n\); we provide examples of \(G\)-algebras where these inequalities are strict. Finally, we prove that \(A\) is an Auslander-regular and Cohen-Macaulay algebra. In order to perform symbolic computations with modules over \(GR\)-algebras, we generalize Gröbner basis theory, developing new algorithms and enhancing existing ones. We unite the most fundamental algorithms in a suite of applications called "Gröbner basics" in the literature. Furthermore, we discuss algorithms appearing in the non-commutative case only, among them two-sided Gröbner bases for bimodules, annihilators of left modules and operations with opposite algebras. An important role in representation theory is played by various subalgebras, like the center and the Gel'fand-Zetlin subalgebra. We discuss their properties and their relations to Gröbner bases, and briefly comment on some aspects of their computation. We proceed with these subalgebras in the chapter devoted to the algorithmic study of morphisms between \(GR\)-algebras.
We provide new results and algorithms for computing the preimage of a left ideal under a morphism of \(GR\)-algebras and show both the merits and the limitations of the several methods we propose. We use this technique for the computation of the kernel of a morphism, the decomposition of a module into central characters and the algebraic dependence of pairwise commuting elements. We give an algorithm for computing the set of one-dimensional representations of a \(G\)-algebra \(A\) and prove, moreover, that if the set of finite-dimensional representations of \(A\) over a ground field \(K\) is not empty, then the homological dimension of \(A\) equals \(n\). All the algorithms are implemented in Plural, a kernel extension of the computer algebra system Singular. We discuss the efficiency of computations and provide a comparison with other computer algebra systems. We propose a collection of benchmarks for testing the performance of algorithms; the comparison of timings shows that our implementation, combining broad functionality with a fast implementation, outperforms all of the modern systems. The thesis contains many new non-trivial examples as well as solutions to various problems arising in different fields of mathematics. All of them were obtained with the developed theory and its implementation in Plural, and most of them are treated computationally in this thesis for the first time.
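The preimage and kernel computations of the thesis live in non-commutative \(GR\)-algebras and are implemented in Plural. As a hedged commutative analogue of the same idea, the kernel of a morphism of polynomial rings can be computed by Gröbner basis elimination, here with SymPy (the morphism is an illustrative textbook example, not one from the thesis):

```python
import sympy as sp

x, y1, y2 = sp.symbols('x y1 y2')

# Morphism phi: K[y1, y2] -> K[x] with y1 |-> x^2, y2 |-> x^3.
# ker(phi) is the elimination ideal I ∩ K[y1, y2] of the graph ideal
# I = <y1 - x^2, y2 - x^3>: compute a lex Groebner basis with x as the
# largest variable and keep the generators that do not involve x.
G = sp.groebner([y1 - x**2, y2 - x**3], x, y1, y2, order='lex')
kernel = [g for g in G.exprs if x not in g.free_symbols]
print(kernel)   # the cusp relation y1**3 - y2**2 generates the kernel
```

In \(GR\)-algebras, elimination orderings must additionally be compatible with the non-commutative relations, which is one source of the limitations discussed above.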

This thesis deals with some new aspects of continuous-time portfolio optimization using stochastic control methods.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and we solve this problem explicitly for two types of intensity functions.
Next, we establish the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of a Vasicek short rate on the CPPI strategy was studied by Horsky (2012); this part of the thesis extends his research by including a Markov switching parameter, with the generalization based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem both without and with a money market account.
Finally, we apply the method used in the Busch-Korn-Seifried investment problem to explicitly solve portfolio optimization with a stochastic benchmark.
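The CPPI mechanism itself, stripped of the stochastic short rate and Markov switching treated in the thesis, can be sketched in discrete time; the multiplier `m`, the floor, and the flat rate below are illustrative assumptions:

```python
import numpy as np

def cppi(S, floor0, r, m=4.0, V0=1.0):
    """Discrete-time CPPI sketch: invest m times the cushion (portfolio
    value minus the accrued floor) in the risky asset S and the remainder
    at a constant rate r; exposure is capped so there is no leverage."""
    n = len(S) - 1
    dt = 1.0 / n
    V = np.empty(n + 1)
    V[0] = V0
    F = floor0 * np.exp(r * dt * np.arange(n + 1))   # floor accrues at r
    for k in range(n):
        cushion = max(V[k] - F[k], 0.0)
        E = min(m * cushion, V[k])                   # risky exposure
        V[k + 1] = V[k] + E * (S[k + 1] / S[k] - 1.0) + (V[k] - E) * r * dt
    return V
```

In discrete time the floor can still be breached by a single-period loss exceeding \(1/m\) (gap risk), an effect absent under idealized continuous-time rebalancing.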

Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well known and widely used model, the so-called three equation new Keynesian model which is a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule.
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete time NKPC which describes the inflation dynamics and an IS curve that represents the output gaps dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously. This means that the costs resulting from the change in the interest rates are ignored. We showed also that there are approximation errors made, when one log-linearizes non linear equations, by doing the derivation of the standard discrete time NKPC.
We agreed with other researchers as mentioned in this thesis, that errors which result from ignoring such log-linear approximation errors and the costs of altering interest rates by determining interest rate rule, can lead to a suboptimal interest rate rule and hence to non-optimal paths of output gaps and inflation rate.
To overcome this problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by taking into account the costs of changing the interest rate and the approximation error terms. To formulate this problem, we first transformed the standard discrete time NKPC and the IS curve into their high-frequency versions and then into their continuous time versions, where the error terms are described by a zero mean Gaussian white noise with finite and constant variance. After formulating the problem, we used the quasi-variational inequality approach to solve analytically a special case of the central bank problem, in which the inflation rate is assumed to be on target and the central bank has to optimally control the output gap dynamics. This method yields an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal intervention size and the optimal intervention time, that keeps the process within the optimal control band.
Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of changing the interest rate, leads to an increase in the width of the optimal control band. In this case, optimal intervention requires the central bank to wait longer before undertaking another control action.
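The band-control policy described above can be illustrated with a minimal simulation. The band, target level and volatility below are illustrative placeholders, not the values derived from the quasi-variational inequalities in the thesis:

```python
import random

def simulate_band_control(lower, upper, target, sigma=0.5, dt=0.01,
                          steps=10_000, seed=1):
    """Simulate an impulse-controlled output-gap process: a driftless
    Gaussian random walk is pushed back to `target` whenever it leaves
    the control band [lower, upper].  Returns the path and the list of
    intervention times (in steps).  Illustrative only -- the thesis
    derives the band and the intervention sizes from a
    quasi-variational inequality, which is not reproduced here."""
    rng = random.Random(seed)
    x, path, interventions = target, [], []
    for t in range(steps):
        x += sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        if x < lower or x > upper:
            interventions.append(t)
            x = target          # impulse: jump back to the target level
        path.append(x)
    return path, interventions
```

Widening the band (as higher volatility or intervention costs would dictate) lowers the intervention frequency, matching the qualitative finding of the numerical example.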

In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag dependent model and present a powerful algorithm for change point detection in high dimensional, highly dynamical systems. We also present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.

The central topic of this thesis is fully coupled reflected forward-backward stochastic differential equations (FBSDEs). First a special case, partially coupled FBSDEs, is considered and their connection to the pricing of American options is shown. Solving these equations requires Monte Carlo simulation, so various variance reduction techniques are developed and compared. Subsequently the more general case of fully coupled reflected FBSDEs is treated. It is shown how the problem of solving these equations can be transformed into an optimization problem and can consequently be treated with numerical methods from that area of mathematics. Finally, the various numerical approaches are compared with existing methods.

Two central problems of modern financial mathematics are portfolio optimization and option pricing. While portfolio optimization is concerned with allocating wealth optimally among different investment opportunities, option pricing aims to determine fair prices of derivative financial instruments. This thesis treats questions from both areas. It begins with a chapter on fundamentals, in which, for example, Merton's portfolio problem is presented and the Black-Scholes option pricing formula is derived. Chapter 2 considers the portfolio problem of Morton and Pliska, who introduced fixed transaction costs into the Merton model: at every transaction the investor has to pay a fixed fraction of current wealth as costs. The asymptotic approximation of this model due to Atkinson and Wilmott is presented and the optimal portfolio strategy is derived from the market parameters. The actual transaction costs are then estimated, and a user guide for the practical application of this transaction cost model is given. Finally, the model is analyzed numerically by computing, among other things, the expected trading time and the quality of the estimate of the actual transaction costs. A portfolio problem with international markets is presented in Chapter 3. In addition to the home country, a further country is available to the investor for investments, and the price processes of the foreign securities are converted into the home currency using a stochastic exchange rate. A static analysis computes, among other things, how much less wealth the investor needs to obtain the same expected terminal wealth as in the case where no foreign investments are available. Chapter 4 treats three different portfolio problems with jump diffusion processes.
After the derivation of a verification theorem, the problem of investing in one stock and a money market account is studied, for a constant and for a stochastic interest rate. In the first case an implicit representation of the optimal portfolio process is given, together with a condition under which this representation is uniquely solvable; the optimal portfolio process is also studied for various jump size distributions. In the case of a stochastic interest rate, only a candidate for the optimal portfolio process can be given, which again has an implicit representation. The last portfolio problem is a variant of the model from Chapter 3: whereas there the exchange rate is modeled by a geometric Brownian motion, here it is a pure jump process. The optimal portfolio process is again derived, although one part of it may only be computable numerically; a sufficient condition for solvability is given. Chapter 5 presents various approaches for pricing options on bond indices. A method is presented with which the options can be priced from market prices. For the case that there are not enough market prices, a procedure is given to simulate the bond index realistically and to generate artificial market prices. These prices can then be used for calibration.

This thesis deals with generalized inverses, multivariate polynomial interpolation and approximation of scattered data. Moreover, it covers the lifting scheme, which basically links the aforementioned topics. For instance, determining filters for the lifting scheme is connected to multivariate polynomial interpolation. More precisely, sets of interpolation sites are required that can be interpolated by a unique polynomial of a certain degree. In this thesis a new class of such sets is introduced and elements from this class are used to construct new and computationally more efficient filters for the lifting scheme.
Furthermore, a method to approximate multidimensional scattered data is introduced which is based on the lifting scheme. A major task in this method is to solve an ordinary linear least squares problem which possesses a special structure. Exploiting this structure yields better approximations and therefore this particular least squares problem is analyzed in detail. This leads to a characterization of special generalized inverses with partially prescribed image spaces.
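For readers unfamiliar with the lifting scheme, a minimal one-dimensional sketch with textbook Haar-type predict and update filters (not the multivariate filters constructed in the thesis) looks as follows:

```python
def lifting_forward(signal):
    """One level of a lazy-wavelet split followed by a Haar-type
    predict and update step (even-length input assumed).  The filters
    here are the textbook 1-D choices, not the multivariate filters
    constructed in the thesis."""
    even, odd = signal[0::2], signal[1::2]
    # predict: estimate each odd sample from its even neighbour
    detail = [o - e for o, e in zip(odd, even)]
    # update: correct the even samples so the coarse mean is preserved
    coarse = [e + d / 2 for e, d in zip(even, detail)]
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo update, undo predict, re-interleave -- lifting steps are
    inverted exactly, in reverse order."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

The key structural point is that each lifting step is trivially invertible, regardless of the filters used in predict and update, which is what makes the filter construction of the thesis possible.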

Different aspects of geomagnetic field modelling from satellite data are examined in the framework of modern multiscale approximation. The thesis is mostly concerned with wavelet techniques, i.e. multiscale methods based on certain classes of kernel functions which are able to realize a multiscale analysis of the function (data) space under consideration. It is thus possible to break up complicated functions like the geomagnetic field, electric current densities or geopotentials into different pieces and study these pieces separately. Based on a general approach to scalar and vectorial multiscale methods, topics include multiscale denoising, crustal field approximation and downward continuation, wavelet parametrizations of the magnetic field in Mie representation, as well as multiscale methods for the analysis of time-dependent spherical vector fields. For each subject the necessary theoretical framework is established, and numerical applications examine and illustrate the practical aspects.

Multilevel Constructions
(2014)

The thesis consists of two chapters.
The first chapter presents a detailed investigation of the multilevel Monte Carlo (MLMC) method. In particular, we take an optimization view of the estimator: rather than fixing the numbers of discretization points \(n_i\) to be a geometric sequence, we try to find an optimal setup for the \(n_i\) such that, for a fixed error, the estimate can be computed within minimal time.
In the second chapter we propose to enhance the MLMC estimator with the weak extrapolation technique. This technique improves the order of weak convergence of a scheme and as a result reduces the computational cost of the estimator. In particular we study the high order weak extrapolation approach, which is known to be inefficient in the standard setting. However, a combination of MLMC and weak extrapolation yields an improvement of the MLMC method.
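The optimization view on the sample allocation can be sketched with a generic MLMC driver. The cost model \(C_l = 2^l\) and the classical allocation \(N_l \propto \sqrt{V_l/C_l}\) below are stand-in assumptions, not the thesis's own optimization over the \(n_i\):

```python
import math, random

def mlmc(sampler, L, eps, var_pilot=100, seed=0):
    """Generic multilevel Monte Carlo driver.  `sampler(l, rng)` must
    return one sample of the level-l correction Y_l = P_l - P_{l-1}
    (with P_{-1} = 0).  Per-level sample sizes follow the classical
    cost/variance optimisation N_l ~ sqrt(V_l / C_l), assuming the cost
    of level l is C_l = 2**l; the thesis instead optimises over the n_i
    themselves rather than fixing a geometric sequence."""
    rng = random.Random(seed)
    # pilot run to estimate the level variances V_l
    V = []
    for l in range(L + 1):
        ys = [sampler(l, rng) for _ in range(var_pilot)]
        m = sum(ys) / len(ys)
        V.append(sum((y - m) ** 2 for y in ys) / len(ys) + 1e-12)
    C = [2.0 ** l for l in range(L + 1)]
    # Lagrange multiplier for target variance eps**2
    lam = sum(math.sqrt(v * c) for v, c in zip(V, C)) / eps ** 2
    N = [max(1, math.ceil(lam * math.sqrt(v / c))) for v, c in zip(V, C)]
    est = 0.0
    for l in range(L + 1):
        est += sum(sampler(l, rng) for _ in range(N[l])) / N[l]
    return est, N
```

With this allocation the estimator variance equals \(\varepsilon^2\) while the total cost \(\sum_l N_l C_l\) is minimal, which is the starting point of the chapter's more general optimization.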

This thesis is divided into two parts. Both cope with multi-class image segmentation and utilize non-smooth optimization algorithms.
The topic of the first part, namely unsupervised segmentation, is the application of clustering to image pixels. We therefore start with an introduction of the biconvex center-based clustering algorithms c-means and fuzzy c-means, where c denotes the number of classes. We show that fuzzy c-means can be seen as an approximation of c-means in terms of power means. Since noise is omnipresent in our image data, these simple clustering models are not suitable for segmentation. To this end, we introduce a general, finite dimensional segmentation model that consists of a data term stemming from the aforementioned clustering models plus a continuous regularization term. We tackle this optimization model via an alternating minimization approach called regularized c-centers (RcC): we fix the centers and optimize the segment membership of the pixels, and vice versa. In this general setting, we prove convergence in the sense of set-valued algorithms using Zangwill's theory [172]. Further, we present a segmentation model with a total variation regularizer. While updating the cluster centers is straightforward for fixed segment memberships of the pixels, updating the segment memberships can be solved iteratively via non-smooth convex optimization. Here we do not iterate a convex optimization algorithm until convergence; instead, to increase efficiency, we stop as soon as we have a certain amount of decrease in the objective functional. This algorithm is a particular implementation of RcC and inherits the corresponding convergence theory. Moreover, we show the good performance of our method in various examples such as simulated 2d images of brain tissue and 3d volumes of two materials, namely a multi-filament composite superconductor and a carbon fiber reinforced silicon carbide ceramic. In our adapted model we exploit the property of the latter material that two components have no common boundary.
The second part of the thesis is concerned with supervised segmentation. We leave the area of center-based models and investigate convex approaches related to graph p-Laplacians and reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to construct the graph. In practical experiments we show, on the one hand, image types that are better segmented by the p-Laplacian model and, on the other hand, images that are better segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach provides smoother results, while the RKHS approach often provides more accurate and detailed segmentations. Finally, we propose a novel combination of both approaches to benefit from the advantages of both models and study the performance on challenging medical image data.
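The unregularized core of the alternating minimization in the first part is plain c-means. A minimal one-dimensional sketch of this alternation (without the regularization term and the fuzzy membership relaxation used in the thesis) reads:

```python
def c_means(points, centers, iters=20):
    """Alternating minimisation for plain c-means (only the data term
    of the segmentation model, without the regulariser): assign each
    point to its nearest centre, then recompute each centre as the mean
    of its members.  1-D for brevity; the thesis works on image pixels
    and adds a total-variation term to the assignment step."""
    centers = list(centers)
    for _ in range(iters):
        members = [[] for _ in centers]
        for p in points:
            k = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            members[k].append(p)
        # empty clusters keep their old centre
        centers = [sum(m) / len(m) if m else c
                   for m, c in zip(members, centers)]
    return centers
```

Each half-step decreases the c-means objective, which is the same monotonicity property that the set-valued convergence proof via Zangwill's theory generalizes.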

This thesis is concerned with tropical moduli spaces, which are an important tool in tropical enumerative geometry. The main result is a construction of tropical moduli spaces of rational tropical covers of smooth tropical curves and of tropical lines in smooth tropical surfaces. The construction of a moduli space of tropical curves in a smooth tropical variety is reduced to the case of smooth fans. Furthermore, we point out relations to intersection theory on suitable moduli spaces of algebraic curves.

This thesis deals with the following question: given a moduli space of coherent sheaves on a projective variety with a fixed Hilbert polynomial, find a natural construction that replaces the subvariety of the sheaves that are not locally free on their support (we call such sheaves singular) by some variety consisting of sheaves that are locally free on their support. We consider this problem for the example of the coherent sheaves on \(\mathbb P_2\) with Hilbert polynomial 3m+1.
Given a singular coherent sheaf \(\mathcal F\) with singular curve C as its support we replace \(\mathcal F\) by locally free sheaves \(\mathcal E\) supported on a reducible curve \(C_0\cup C_1\), where \(C_0\) is a partial normalization of C and \(C_1\) is an extra curve bearing the degree of \(\mathcal E\). These bundles resemble the bundles considered by Nagaraj and Seshadri. Many properties of the singular 3m+1 sheaves are inherited by the new sheaves we introduce in this thesis (we call them R-bundles). We consider R-bundles as natural replacements of the singular sheaves. R-bundles refine the information about 3m+1 sheaves on \(\mathbb P_2\). Namely, for every isomorphism class of singular 3m+1 sheaves there are \(\mathbb P_1\) many equivalence classes of R-bundles. There is a variety \(\tilde M\) of dimension 10 that may be considered as the space of all the isomorphism classes of the non-singular 3m+1 sheaves on \(\mathbb P_2\) together with all the equivalence classes of all R-bundles. This variety is obtained by blowing up the moduli space of 3m+1 sheaves on \(\mathbb P_2\) along the subvariety of singular sheaves. We modify the definition of a 3m+1 family and obtain a notion of a new family over an arbitrary variety S. In particular 3m+1 families of the non-singular sheaves on \(\mathbb P_2\) are families in this sense. New families over one point are either non-singular 3m+1 sheaves or R-bundles. For every variety S we introduce an equivalence relation on the set of all new families over S. The notion of equivalence for families over one point coincides with isomorphism for non-singular 3m+1 sheaves and with equivalence for R-bundles. We obtain a moduli functor \(\tilde{\mathcal M}:(Sch) \rightarrow (Sets)\) that assigns to every variety S the set of the equivalence classes of the new families over S. 
There is a natural transformation of functors \(\tilde{\mathcal M}\rightarrow \mathcal M\) that establishes a relation between \(\tilde{\mathcal M}\) and the moduli functor \(\mathcal M\) of the 3m+1 moduli problem on \(\mathbb P_2\). There is also a natural transformation \(\tilde{\mathcal M} \rightarrow Hom(-,\tilde M)\), inducing a bijection \(\tilde{\mathcal M}(pt)\cong \tilde M\), which means that \(\tilde M\) is a coarse moduli space of the moduli problem \(\tilde{\mathcal M}\).

The question of how to model dependence structures between financial assets has been revolutionized over the last decade, since the copula concept was introduced into financial research. Even though the concept of splitting the marginal behavior and the dependence structure (described by a copula) of multidimensional distributions goes back to Sklar (1955) and Hoeffding (1940), little empirical effort had been made to explore the potential of this approach. The aim of this thesis is to work out the possibilities of copulas for modelling, estimation and validation purposes. We extend the class of Archimedean copulas via a transformation rule to new classes and give an explicit suggestion covering the Frank and Gumbel families. We introduce a copula based mapping rule leading to joint independence, and as a result of this mapping we present an easy method for multidimensional chi-square testing and a new estimate for high dimensional parametric distribution functions. Different ways of estimating the tail dependence coefficient, which describes the asymptotic probability of joint extremes, are compared and improved. The limitations of elliptical distributions are pointed out, and a generalized form of them, preserving their applicability, is developed. We state a method to split a (generalized) elliptical distribution into its radial and angular parts; this leads to a positive definite robust estimate of the dispersion matrix (here only given as a theoretical outlook). The impact of our findings is demonstrated by modelling and testing the return distributions of stock and currency portfolios as well as of oil-related commodity and LME metal baskets. In addition we show the crash stability of real estate based firms and the existence of nonlinear dependence within the yield curve.
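As a small illustration of the tail dependence estimation mentioned above, here is the simplest nonparametric estimator; it is one of several possible choices (the thesis compares and improves such estimators), and it is biased at any finite threshold:

```python
def empirical_tail_dependence(x, y, q=0.95):
    """Naive nonparametric estimate of the upper tail dependence
    coefficient lambda_U = lim_{u->1} P(Y > F_Y^{-1}(u) | X > F_X^{-1}(u)):
    the fraction of joint exceedances above the empirical q-quantile.
    Only an illustrative baseline; the limit u -> 1 is approximated by
    the fixed level q, which introduces finite-sample bias."""
    n = len(x)
    xs, ys = sorted(x), sorted(y)
    tx, ty = xs[int(q * n)], ys[int(q * n)]
    joint = sum(1 for a, b in zip(x, y) if a > tx and b > ty)
    marg = sum(1 for a in x if a > tx)
    return joint / marg if marg else 0.0
```

Perfectly comonotone samples give an estimate of 1, countermonotone samples an estimate of 0; financial return pairs typically fall strictly in between.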

This dissertation deals with the optimization of the web formation in a spunbond process for the production of artificial fabrics. A mathematical model of the process is presented. Based on the model, two kinds of attributes to be optimized are considered: those related to the quality of the fabric and those describing the stability of the production process. The problem falls into the multicriteria optimization and decision making framework. The functions involved in the model of the process are nonlinear, nonconvex and nondifferentiable. A two-step strategy, exploration and continuation, is proposed to numerically approximate the Pareto frontier, and alternative methods are proposed to navigate this set and support the decision making process. The proposed strategy is applied to a particular production process and numerical results are presented.

A vehicle's fatigue damage is a highly relevant quantity in the complete vehicle design process.
Long term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment.
This work focuses on modeling one of the most important environmental influence factors: road roughness. The quality of the road depends strongly on several surrounding factors, which can be used to build mathematical models.
Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies.
The target quantity we focus on in this work is the discrete International Roughness Index (IRI). The class of models we use and evaluate is a discriminative classification model called the Conditional Random Field.
We develop a suitable model specification and show new variants of stochastic optimization to train the model efficiently.
The model is also applied to simulated and real world data to show the strengths of our approach.

Since its invention by Sir Alastair Pilkington in 1952, the float glass process has been used to manufacture long thin flat sheets of glass. Today, float glass is very popular due to its high quality and relatively low production costs. When producing thinner glass, the main concern is to retain its optical quality, which can deteriorate during the manufacturing process. The most important stage of this process is the floating part, which is therefore considered to be responsible for the loss in optical quality. A series of investigations performed on the finished products showed the existence of many short wave patterns, which strongly affect the optical quality of the glass. Our work is concerned with finding the mechanism of wave development, taking into account all possible factors. In this thesis, we model the floating part of the process by a theoretical study of the stability of two superposed fluids confined between two infinite plates and subjected to a large horizontal temperature gradient. Our approach takes into account the mixed convection effects (viscous shear and buoyancy) while neglecting thermo-capillarity effects, owing to the length of our domain and the presence of a small stabilizing vertical temperature gradient. Both fluids are treated as Newtonian with constant viscosity. They are immiscible, incompressible, have very different properties and are separated by a free surface. The lower fluid is a liquid metal with a very small kinematic viscosity, whereas the upper fluid is less dense. The two fluids move with different velocities: the speed of the upper fluid is imposed, whereas the lower fluid moves as a result of buoyancy effects. We examine the problem by means of a small perturbation analysis and obtain a system of two Orr-Sommerfeld equations coupled with two energy equations, together with general interface and boundary conditions.
We solve the system analytically in the long- and short-wave limits, using asymptotic expansions with respect to the wave number. Moreover, we write the system in the form of a generalized eigenvalue problem and solve it numerically using Chebyshev spectral methods for fluid dynamics. The results (both analytical and numerical) show the existence of small-amplitude travelling waves, which move with constant velocity, for wave numbers in the intermediate range. We show that the stability of the system is ensured in the long wave limit, a fact which is in agreement with the real float glass process. We analyze the stability for a wide range of wave numbers and of Reynolds, Weber and Grashof numbers, and explain the physical implications for the dynamics of the problem. The consequences of the linear stability results are discussed. In the real float glass process, the temperature strongly influences the viscosity of both the molten metal and the hot glass, which has direct consequences for the stability of the system. We therefore investigate the linear stability of two superposed fluids with temperature dependent viscosities, considering a different model for the viscosity dependence of each fluid. Although the temperature-viscosity relationships for glass and metal are more complex than those used in our computations, our intention is to emphasize the effects of this dependence on the stability of the system. It is known from the literature that in the case of one fluid, heat that causes the viscosity to decrease along the domain usually destabilizes the flow. For the two superposed fluids we investigate this behaviour and discuss the consequences of linear stability in this new case.
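The Chebyshev spectral discretization mentioned above rests on the classical collocation differentiation matrix; discretizing the coupled Orr-Sommerfeld/energy system with such matrices is what turns the stability problem into a generalized matrix eigenvalue problem. A self-contained sketch of the 1-D operator only (Trefethen's standard construction, not the thesis's full discretization) is:

```python
import math

def cheb(N):
    """Chebyshev collocation points x_i = cos(pi*i/N) on [-1, 1] and
    the (N+1)x(N+1) spectral differentiation matrix D, built with the
    classical off-diagonal formula and the negative-sum trick for the
    diagonal.  D is exact for polynomials of degree <= N."""
    x = [math.cos(math.pi * i / N) for i in range(N + 1)]
    c = [2.0] + [1.0] * (N - 1) + [2.0]      # endpoint weights
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):                   # rows sum to zero
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D
```

Applying D twice gives the second derivative operator; with boundary rows replaced by the interface and boundary conditions, the assembled matrices feed a generalized eigenvalue solver.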

We present a numerical scheme to simulate a moving rigid body with arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings, so a two-way coupling is modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains: a 1-dimensional actuator problem, a 2-dimensional driven cavity flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. In each test example we compare the results of the numerical simulations with existing theories.
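The rigid-body half of the two-way coupling can be sketched schematically. The 2-D setting and the (dp, r) impulse representation below are simplifying assumptions; in the scheme itself the momentum transfer is sampled inside the DSMC solver:

```python
def rigid_body_step(v, omega, impulses, mass, inertia):
    """Update the translational velocity v = (vx, vy) and the scalar
    angular velocity omega of a 2-D rigid body from the impulses
    transferred by colliding gas molecules during one time step.  Each
    entry of `impulses` is a pair (dp, r): the transferred momentum
    dp = (dpx, dpy) and the impact point r = (rx, ry) relative to the
    centre of mass.  Schematic stand-in for the Newton-Euler coupling
    step; the real scheme obtains dp from DSMC wall-collision sampling."""
    dvx = sum(dp[0] for dp, r in impulses) / mass
    dvy = sum(dp[1] for dp, r in impulses) / mass
    # torque contribution of an impulse: 2-D cross product r x dp
    dw = sum(r[0] * dp[1] - r[1] * dp[0] for dp, r in impulses) / inertia
    return (v[0] + dvx, v[1] + dvy), omega + dw
```

The updated body velocity then enters the gas solver as a moving-wall boundary condition, closing the two-way coupling loop.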

The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.

This thesis shows an approach to combine the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods, such as the frequently used Craig-Bampton approach, the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear wave equation, the POD method is studied by comparing two different choices of snapshot sets: set 1 consists of deformation snapshots, and set 2 additionally contains velocities and accelerations. An error analysis shows that there is no convergence guarantee for deformation snapshots alone, whereas including the derivatives yields an error bound that diminishes for small time steps. The numerical results show better behaviour for the derivative snapshot method, as long as the sum of the left-over eigenvalues is significant. For the reduction of nonlinear systems, especially when using commercial software, it is necessary to decouple the reduced surrogate system from the full model. To achieve this, a lookup table approach is presented. It makes use of the preceding computation step with the full model that is necessary to set up the POD basis (training step). The nonlinear term of inner forces and the stiffness matrix are output and stored in a lookup table for the reduced system. Numerical examples include a nonlinear string computed in Matlab and an air spring computed in Abaqus. Both examples show that effort reductions of two orders of magnitude are possible within a reasonable error tolerance. The lookup approaches perform faster than the Trajectory Piecewise Linear (TPWL) method and produce comparable errors. Furthermore, the Abaqus example shows the influence of the training excitation on the quality of the reduced model.
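The POD basis construction from a snapshot set can be sketched via the singular value decomposition. The energy-based truncation criterion below is a common choice and an assumption here, not necessarily the criterion used in the thesis:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper Orthogonal Decomposition of a snapshot matrix whose
    columns are states (deformations, and possibly velocities and
    accelerations, as in the enriched snapshot set discussed above).
    Keeps the smallest number of left singular vectors that capture
    the requested fraction of the snapshot energy, i.e. of the sum of
    squared singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s
```

The reduced model then evolves coordinates q with the state approximated as V @ q, where V is the returned basis; the neglected ("left-over") eigenvalues s[r:]**2 quantify the projection error.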

Over the last decades, mathematical modeling has reached nearly all fields of natural science. The abstraction and reduction to a mathematical model has proven to be a powerful tool for gaining deeper insight into physical and technical processes. Increasing computing power has made numerical simulations available for many industrial applications. In recent years, mathematicians and engineers have turned their attention to modeling solid materials. New challenges have been found in the simulation of solids and fluid-structure interactions. In this context, it is indispensable to study the dynamics of elastic solids. Elasticity is a main feature of solid bodies, but it demands a great deal of the numerical treatment. There exists a multitude of commercial tools to simulate the behavior of elastic solids. However, the majority of these software packages consider quasi-stationary problems. In the present work, we are interested in highly dynamical problems, e.g. the rotation of a solid. The applicability to free-boundary problems is a further emphasis of our considerations. In recent years, meshless or particle methods have attracted more and more attention; in many fields of numerical simulation these methods are on a par with classical methods or superior to them. In this work, we present the Finite Pointset Method (FPM), which uses a moving least squares particle approximation operator. The application of this method to various industrial problems at the Fraunhofer ITWM has shown that FPM is particularly suitable for highly dynamical problems with free surfaces and strongly changing geometries. Thereby, FPM offers exactly the features that we require for the analysis of the dynamics of solid bodies. In the present work, we provide a numerical scheme capable of simulating the behavior of elastic solids. We present the system of partial differential equations describing the dynamics of elastic solids and show its hyperbolic character.
In particular, we focus our attention on the constitutive law for the stress tensor and provide evolution equations for the deviatoric part of the stress tensor in order to circumvent limitations of the classical Hooke's law. Furthermore, we present the basic principles of the Finite Pointset Method. In particular, we provide the concept of upwinding in a given direction as a key ingredient for stabilizing hyperbolic systems. The main part of this work describes the design of a numerical scheme based on FPM and an operator splitting that takes the different processes within a solid body into account. Each resulting subsystem is treated separately in an adequate way. Hereby, we introduce the notion of system-inherent directions and dimensional upwinding. Finally, a coupling strategy for the subsystems and results are presented. We close this work with some final conclusions and an outlook on future work.

In this thesis we consider the problem of maximizing the growth rate under proportional and fixed transaction costs in a framework with one bond and one stock, which is modeled as a jump diffusion with compound Poisson jumps. Following the approach of [1], we prove that in this framework it is optimal for an investor to follow a CB-strategy, whose boundaries depend only on the parameters of the underlying stock and bond. It is then natural to ask, for an investor who follows a CB-strategy given by the stopping times \((\tau_i)_{i\in\mathbb N}\) and impulses \((\eta_i)_{i\in\mathbb N}\), how often he has to rebalance. In other words, we want to obtain the limit of the inter-trading times
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n(\tau_{i+1}-\tau_{i}).
\]
We obtain this limit, which is given by the expected first exit time of the risky fraction process from some interval under the invariant measure of the Markov chain \((\eta_i)_{i\in\mathbb N}\), using the ergodic theorems of von Neumann and Birkhoff. In general, it is difficult to obtain the expectation of the first exit time for a process with jumps: because of the jump part, the process may overshoot the boundaries of the interval when it crosses them, which makes it difficult to obtain the distribution. Nevertheless, if the process has only negative jumps, the first exit time can be obtained using scale functions. The main difficulty of this approach is that the scale functions are in general known only up to their Laplace transforms. In [2] and [3] a closed-form expression is obtained for the scale function of a Levy process with phase-type distributed jumps. Phase-type distributions build a rich class of positive-valued distributions, including the exponential, hyperexponential, Erlang, hyper-Erlang and Coxian distributions. Since the scale function is then given in closed form, we can differentiate it and obtain the expected first exit time explicitly from the fluctuation identities.
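As a rough numerical cross-check of such expected first exit times, one can fall back on brute-force Monte Carlo. The exponential jump distribution below is a placeholder from the phase-type class, and the parameters are illustrative:

```python
import random

def mean_first_exit(mu, sigma, jump_rate, jump_mean, lo, hi, x0,
                    dt=1e-3, n_paths=1000, t_max=50.0, seed=7):
    """Crude Monte Carlo estimate of the expected first exit time of a
    jump diffusion  dX = mu dt + sigma dW - dJ  (compound Poisson with
    exponentially distributed negative jumps) from the interval
    (lo, hi).  A brute-force stand-in for the scale-function formulas
    of [2], useful only as a numerical sanity check; the Euler grid dt
    introduces a small discrete-monitoring bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while lo < x < hi and t < t_max:
            x += mu * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
            if jump_rate > 0 and rng.random() < jump_rate * dt:
                x -= rng.expovariate(1.0 / jump_mean)  # negative jump
            t += dt
        total += t
    return total / n_paths
```

In the jump-free case the estimate should be close to the classical Brownian value \(E[\tau] = a^2\) for exit from \((-a, a)\) started at 0, while adding negative jumps shortens the expected exit time.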
[1] Irle, A. and Sass, J.: Optimal portfolio policies under fixed and proportional transaction costs, Advances in Applied Probability 38, 916-942.
[2] Egami, M. and Yamazaki, K.: On scale functions of spectrally negative Lévy processes with phase-type jumps, working paper, July 3.
[3] Egami, M. and Yamazaki, K.: Precautionary measures for credit risk management in jump models, working paper, June 17.

In the first part of this work, called "Simple node singularity", we compute matrix factorizations of all isomorphism classes, up to shifts, of rank one and two, graded, indecomposable maximal Cohen--Macaulay (MCM) modules over the affine cone of the simple node singularity. Subsection 2.2 contains a description, by their matrix factorizations, of all rank two graded MCM R-modules whose sheafification on the projective cone of R is stable. We also give a general description of such modules, of any rank, over a projective curve of arithmetic genus 1, using their matrix factorizations. The non-locally free rank two MCM modules are computed using an algorithm, presented in the introduction of this work, that gives a matrix factorization of any extension of two MCM modules over a hypersurface. In the second part, called "Fermat surface", we classify all graded, rank two MCM modules over the affine cone of the Fermat surface. For the classification of the orientable rank two graded MCM R-modules, we use a description of orientable modules (over normal rings) in terms of codimension two Gorenstein ideals, due to Herzog and Kühl. We prove (in Section 4) that they admit skew-symmetric matrix factorizations (over any normal hypersurface ring). For the classification of the non-orientable rank two MCM R-modules we use a similar idea as for the orientable ones, except that the ideal is no longer Gorenstein.

In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms.
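For the unconstrained case, the minimal decomposition time admits a simple closed form, namely the maximum over the rows of the sum of the positive jumps between consecutive entries (with an implicit zero column on the left). A minimal sketch of this known formula, assuming a nonnegative integer matrix as input:

```python
import numpy as np

def min_decomposition_time(A):
    """Minimal decomposition time of a nonnegative integer matrix into
    weighted 0-1 matrices with the (strict) consecutive ones property,
    without further constraints on the Y^k: for each row, sum the
    positive increments between consecutive entries, then take the
    maximum over the rows."""
    A = np.asarray(A)
    padded = np.column_stack([np.zeros(A.shape[0], dtype=A.dtype), A])
    jumps = np.diff(padded, axis=1)
    return int(np.maximum(jumps, 0).sum(axis=1).max())
```

For example, the matrix [[2, 3, 1], [1, 2, 2]] has row increments (2, 1, 0) and (1, 1, 0), so the minimal decomposition time is 3.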

Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)

Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when discretizing it; this is the major difficulty in its numerical solution. In the case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated as a system of integral equations of the second kind in which the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in the case of linearly anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced, and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Due to the large dimension of this matrix in practical applications, it is not feasible to assemble and store it entirely. So-called matrix compression methods circumvent the assembly of the matrix: the matrix-vector multiplications needed by iterative solvers are performed only approximately, reducing the computational complexity tremendously. The kernels of the integral equations describing radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods.
Further, a comparison with the discrete ordinate method shows that discretizing the integral equation may lead to reductions in CPU time and to an improved accuracy especially in case of small absorption and scattering coefficients or if local sources are present.
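The compressibility that such methods exploit can be seen directly: for two well-separated point clusters, the interaction block of a transport-type kernel with exponential attenuation is numerically of very low rank. A small illustration (hypothetical 1-d geometry, and a truncated SVD standing in for an actual compression scheme):

```python
import numpy as np

def kernel_block(xs, ys, kappa):
    # transport-type kernel with exponential attenuation: exp(-kappa*r)/r
    r = np.abs(xs[:, None] - ys[None, :])
    return np.exp(-kappa * r) / r

# two well-separated point clusters (hypothetical geometry)
xs = np.linspace(0.0, 1.0, 200)
ys = np.linspace(3.0, 4.0, 200)
s = np.linalg.svd(kernel_block(xs, ys, kappa=1.0), compute_uv=False)
# singular values decay exponentially; the numerical rank at a relative
# tolerance of 1e-8 is far below the block dimension of 200
numerical_rank = int(np.sum(s / s[0] > 1e-8))
```

It is precisely such far-field blocks that matrix compression methods approximate by low-rank factors instead of assembling them entry by entry.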

In this thesis we outline Kerner's 3-phase traffic flow theory, which states that vehicular traffic flow occurs in three phases: free flow, synchronized flow, and wide moving jams.
A macroscopic 3-phase traffic model of the Aw-Rascle type is derived from the microscopic Speed Adaptation 3-phase traffic model
developed by Kerner and Klenov [J. Phys. A: Math. Gen., 39 (2006), pp. 1775-1809].
We derive the same macroscopic model from the kinetic traffic flow model of Klar and Wegener [SIAM J. Appl. Math., 60 (2000), pp. 1749-1766] as well as from that of Illner, Klar and Materne [Comm. Math. Sci., 1 (2003), pp. 1-12].
In the above derivations, the 3-phase traffic theory enters the macroscopic model through a relaxation term.
This serves as an incentive to modify the relaxation term of the 'switching curve' model of Greenberg,
Klar and Rascle [SIAM J. Appl. Math., 63 (2003), pp. 818-833] to obtain another macroscopic 3-phase traffic model, which is still of the Aw-Rascle type.
By specifying the relaxation term differently we obtain three kinds of models, namely the macroscopic Speed Adaptation,
the Switching Curve and the modified Switching Curve models.
To demonstrate the capability of the derived macroscopic traffic models to reproduce the features of 3-phase traffic theory, we simulate a
multi-lane road that has a bottleneck. We consider a stationary and a moving bottleneck.
The results of the simulations for the three models are compared.

Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres’ thickness which prevents the intersection of the fibres and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro-computed tomography. A possible linking of these models to other simulation tools,
for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to further the field in mathematical
modelling and computational simulation of non–woven materials.

Paper production is a problem of significant importance for society, and it is a challenging topic for scientific investigation. This study is concerned with simulations of the pressing section of a paper machine. We aim at the development of an advanced mathematical model of the pressing section which is able to recover the behavior of the fluid flow within the paper-felt sandwich observed in laboratory experiments.
From the modeling point of view, the pressing of the paper-felt sandwich is a complex process, since one has to deal with two-phase flow in moving and deformable porous media. To account for the solid deformations, we use developments from the PhD thesis of S. Rief, where the elasticity model is stated and discussed in detail. The flow model, which accounts for the movement of water within the paper-felt sandwich, is described with the help of two flow regimes: single-phase water flow and two-phase air-water flow. The model for the saturated flow consists of Darcy's law and mass conservation. The second regime is described by Richards' approach together with dynamic capillary effects. The model for the dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray is adapted to the needs of the paper manufacturing process.
We started the development of the flow model with mathematical modeling in the one-dimensional case. The one-dimensional flow model is derived from a two-dimensional one by an averaging procedure in the vertical direction. The model is studied numerically and verified against measurements. Some theoretical investigations are performed to prove the convergence of the discrete solution to the continuous one. For completeness, the models with the static and the dynamic capillary pressure-saturation relations are both considered. Existence, compactness and convergence results are obtained for both models.
Then, a two-dimensional model is developed, which accounts for a multilayer computational domain and formation of the fully saturated zones. For discretization we use a non-orthogonal grid resolving the layer interfaces and the multipoint flux approximation O-method. The numerical experiments are carried out for parameters which are typical for the production process. The static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
The last part of the thesis is an investigation of the validity range of Richards' assumption for the two-dimensional flow model with the static capillary pressure-saturation relation. Numerical experiments show that Richards' assumption is not the best choice for simulating processes in the pressing section.

The central theme of this thesis is the development of enhanced methods and algorithms for appraising market and credit risk and their application within the context of standard and more advanced market models. Generally, methods and algorithms for analysing the market risk of complex portfolios involve detailed knowledge of option sensitivities, the so-called "Greeks". Based on an analysis of symmetries in financial market models, relations between option sensitivities are obtained which can be used for the efficient valuation of the Greeks. The relations are mainly derived within the Black-Scholes model; however, some of them are also valid for more general models, for instance the Heston model. Portfolios are usually influenced by many underlyings, so it is necessary to characterise the dependencies of these basic instruments. It is common to describe such dependencies by correlation matrices. In practice, however, estimates of correlation matrices are disturbed by statistical noise and typically suffer from rank deficiency due to missing data. A fast algorithm is presented which performs a generalized Cholesky decomposition of a perturbed correlation matrix. In contrast to the standard Cholesky algorithm, an advantage of the generalized method is that it also works for positive semidefinite, rank-deficient matrices. Moreover, it gives an approximate decomposition when the input matrix is indefinite. A comparison with known algorithms with similar features is performed, and it turns out that the new algorithm can be recommended in situations where computation time is the critical issue. The determination of a profit and loss distribution by Fourier inversion of its characteristic function is a powerful tool, but it can break down when the characteristic function is not integrable. In this thesis, methods for Fourier inversion of non-integrable characteristic functions are studied.
In this respect, two theorems are obtained which are based on a suitable approximation of the unknown distribution by one with known density and characteristic function. Further, it is shown that straightforward fast Fourier inversion works when the corresponding density lives on a bounded interval. The above techniques are of crucial importance for determining the profit and loss (P&L) distribution of large portfolios efficiently. The so-called Delta-Gamma normal approach has become the industry standard for the estimation of market risk. It is shown that the performance of the Delta-Gamma normal approach can be improved substantially by application of the developed methods. The same optimization procedure also applies to the Delta-Gamma Student model. A standard tool for computing the P&L distribution of a loan portfolio is the CreditRisk+ model. Basically, the CreditRisk+ distribution is a discrete distribution which can be computed from its probability generating function. For this, a numerically stable method is presented and, as an alternative, a new algorithm based on Fourier inversion is proposed. Finally, an extension of the CreditRisk+ model to market risk is developed, whose distribution can likewise be obtained efficiently by the presented Fourier inversion methods.
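A generalized Cholesky factorization of the kind described can be sketched via symmetric diagonal pivoting, which keeps working when the input correlation matrix is positive semidefinite and rank deficient (a sketch only; the thesis's actual algorithm may differ in its details):

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-12):
    """Cholesky factorization with symmetric diagonal pivoting.

    Returns (L, piv, r) such that A[piv][:, piv] ~= L @ L.T, where r is
    the numerical rank.  Unlike the standard algorithm, this also works
    for positive semidefinite, rank-deficient input."""
    W = np.array(A, dtype=float)    # working copy, updated in place
    n = W.shape[0]
    L = np.zeros((n, n))
    piv = np.arange(n)
    for k in range(n):
        j = k + int(np.argmax(np.diag(W)[k:]))  # largest remaining pivot
        if W[j, j] <= tol:                      # remaining block is ~ 0
            return L, piv, k
        # symmetric swap of rows/columns k and j, tracked in piv
        W[[k, j], :] = W[[j, k], :]
        W[:, [k, j]] = W[:, [j, k]]
        L[[k, j], :] = L[[j, k], :]
        piv[[k, j]] = piv[[j, k]]
        L[k, k] = np.sqrt(W[k, k])
        L[k + 1:, k] = W[k + 1:, k] / L[k, k]
        # rank-one downdate of the trailing block
        W[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return L, piv, n
```

Pivoting on the largest remaining diagonal entry defers the (numerically) zero pivots to the end, so the factorization simply stops at the numerical rank instead of failing.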

In this dissertation we present an analysis of macroscopic models for slow dense granular flow. The models are derived from plasticity theory with a yield condition and a flow rule. The cornerstone equations are the conservation of mass and the conservation of momentum with a special constitutive law. Such models belong to the class of generalised Newtonian fluids, where the viscosity depends on the pressure and on the modulus of the strain-rate tensor. We show the hyperbolic nature of the evolutionary model in 1D and its ill-posed behaviour in 2D and 3D. The steady state equations are always hyperbolic. For the 2D problem we derive a prototype nonlinear backward parabolic equation for the velocity and a similar equation for the shear rate. Analysis of the derived PDE shows blow-up in finite time, where the blow-up time depends on the initial condition. The full 2D and the antiplane 3D model are investigated numerically with the finite element method. For the 2D model we show the presence of boundary layers. The antiplane 3D model is investigated with the Runge-Kutta discontinuous Galerkin method with mesh adaptation. The numerical results confirm that this method is a good choice for simulations of slow dense granular flow.

Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegő-type formula to the multidimensional, locally stationary case, and secondly we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume the mean to be known (without loss of generality zero), so that \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimator, in particular uniform convergence of the likelihoods, and consistency and asymptotic Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove that in the case of model misspecification the minimum of our Whittle likelihood still converges, the limit being the minimizer of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through simulations, the estimation of conditional autoregressive models, and a real-data application.
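As a one-dimensional, stationary illustration of the Whittle idea (not the locally stationary random-field setting of the thesis), the following sketch fits an AR(1) parameter by minimizing the Whittle likelihood, built from the periodogram and the parametric spectral density, over a grid; the innovation variance is assumed known.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi_true = 4000, 0.5
x = np.zeros(n)
for t in range(1, n):            # simulate AR(1): x_t = phi x_{t-1} + e_t
    x[t] = phi_true * x[t - 1] + rng.normal()

freqs = 2 * np.pi * np.arange(1, n // 2) / n           # Fourier frequencies
I_per = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)  # periodogram

def whittle(phi):
    # AR(1) spectral density with unit innovation variance
    f = 1.0 / (2 * np.pi * np.abs(1.0 - phi * np.exp(-1j * freqs)) ** 2)
    # Whittle likelihood: sum over frequencies of log f + I / f
    return np.sum(np.log(f) + I_per / f)

grid = np.linspace(-0.95, 0.95, 381)
phi_hat = grid[np.argmin([whittle(p) for p in grid])]
```

The grid search stands in for a proper numerical minimizer; the point is that the estimator only needs the periodogram and the parametric spectral density, never the inverse covariance matrix.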

Mrázek et al. [14] proposed a unified approach to curve estimation which combines localization and regularization. In this thesis we use their approach to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the regularized local least squares (RLLS) estimate with correlated errors (more precisely, with stationary time series errors), and based on this approach we discuss the case where the kernel function is the Dirac delta function and compare our smoother with the spline smoother. Finally, we present a simulation study.

Mrázek et al. [25] proposed a unified approach to curve estimation which combines localization and regularization. Franke et al. [10] used that approach to discuss the case of the regularized local least-squares (RLLS) estimate. In this thesis we will use the unified approach of Mrázek et al. to study some asymptotic properties of local smoothers with regularization. In particular, we shall discuss the Huber M-estimate and its limiting cases towards the L2 and the L1 cases. For the regularization part, we will use quadratic regularization. Then, we will define a more general class of regularization functions. Finally, we will do a Monte Carlo simulation study to compare different types of estimates.
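The local M-smoothing idea behind both abstracts can be sketched in its simplest form: a local-constant Huber M-estimate computed by iteratively reweighted least squares. Kernel, bandwidth and the IRLS solver below are illustrative choices, and the regularization term of the unified approach is omitted.

```python
import numpy as np

def huber_local_smoother(x, y, grid, h=0.3, delta=1.0, iters=30):
    """Local-constant Huber M-smoother via iteratively reweighted least
    squares (IRLS).  For each grid point t it approximately solves
        argmin_a  sum_i K((x_i - t) / h) * rho_delta(y_i - a),
    where rho_delta is the Huber loss.  For delta -> infinity this
    recovers the local L2 (kernel mean) smoother; for small delta it
    approaches the local L1 (median-type) smoother."""
    est = np.empty(len(grid))
    for j, t in enumerate(grid):
        w_kern = np.exp(-0.5 * ((x - t) / h) ** 2)   # Gaussian kernel
        a = np.sum(w_kern * y) / np.sum(w_kern)      # L2 start value
        for _ in range(iters):
            r = np.abs(y - a)
            # Huber weight psi(r)/r = min(1, delta/|r|)
            w_rob = np.minimum(1.0, delta / np.maximum(r, 1e-12))
            w = w_kern * w_rob
            a = np.sum(w * y) / np.sum(w)
        est[j] = a
    return est
```

The robustness weights downweight large residuals, so a single outlier moves the estimate far less than it moves the plain kernel mean.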

We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.

This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention on Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretization. Using NURBS as basis functions, the high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step; mesh refinement is easily performed and eliminates the need for communication with the geometry representation in a Computer-Aided Design (CAD) tool.
Quite often the domain consists of several patches, each parametrized by means of NURBS, and these patches are then glued together by continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved, as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and the transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. The starting point of our work is the bending strip method, a penalty approach that adds extra stiffness to the interface between adjacent patches and is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the dependence on the penalty parameter. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling some regularity requirements.
The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction.

In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize first, discretize then setting. For the discretization we use isogeometric analysis (IGA) to solve the state equation, and we search for optimal designs in a space of admissible B-spline or NURBS combinations. Thus a quite general class of functions for representing optimal shapes is available. For the gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between IGA and the geometry representation offers new ways of refining the geometry and the analysis discretization by the same means. Therefore, our main concern is to develop the optimize-first framework for isogeometric shape optimization as ground work for both implementation and error analysis. Numerical examples show that this ansatz is practical, and case studies indicate that it allows local refinement.

In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
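The harmonic balance principle can be sketched on a single-degree-of-freedom example (an illustrative stand-in for the spatially discretized structural system, not the method of the thesis): for the Duffing oscillator x'' + x + eps*x^3 = F cos(omega t), a one-term ansatz x = A cos(omega t) together with the identity cos^3 = (3/4) cos + (1/4) cos(3*.) reduces the frequency-domain balance to a cubic in the amplitude A.

```python
import numpy as np

def duffing_hb_amplitudes(eps, omega, F):
    """Single-harmonic balance for x'' + x + eps*x^3 = F cos(omega t).

    Substituting x(t) = A cos(omega t) and collecting the cos(omega t)
    terms (discarding the generated third harmonic, as a one-term
    truncated Fourier series does) gives the amplitude equation
        (3/4) eps A^3 + (1 - omega^2) A - F = 0.
    Returns its real roots, the candidate response amplitudes."""
    coeffs = [0.75 * eps, 0.0, 1.0 - omega**2, -F]
    roots = np.roots(coeffs)
    return sorted(float(r.real) for r in roots if abs(r.imag) < 1e-9)
```

Multiple real roots correspond to the coexisting solution branches of the bent nonlinear resonance curve; for eps = 0 the single root recovers the linear response amplitude F / (1 - omega^2).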
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.

This thesis is devoted to the computational aspects of intersection theory and enumerative geometry. The first results are a Sage package Schubert3 and a Singular library schubert.lib which both provide the key functionality necessary for computations in intersection theory and enumerative geometry. In particular, we describe an alternative method for computations in Schubert calculus via equivariant intersection theory. More concretely, we propose an explicit formula for computing the degree of Fano schemes of linear subspaces on hypersurfaces. As a special case, we also obtain an explicit formula for computing the number of linear subspaces on a general hypersurface when this number is finite. This leads to a much better performance than classical Schubert calculus.
Another result of this thesis is related to the computation of Gromov-Witten invariants. The most powerful method for computing Gromov-Witten invariants is the localization of moduli spaces of stable maps. This method was introduced by Kontsevich in 1995 and allows us to compute Gromov-Witten invariants via Bott's formula. As an insightful application, we compute the numbers of rational curves on general complete intersection Calabi-Yau threefolds in projective spaces up to degree six. The results are all in agreement with predictions made by mirror symmetry.

Intersection Theory on Tropical Toric Varieties and Compactifications of Tropical Parameter Spaces
(2011)

We study toric varieties over the tropical semifield. We define tropical cycles inside these toric varieties and extend the stable intersection of tropical cycles in R^n to these toric varieties. In particular, we show that every tropical cycle can be degenerated into a sum of torus-invariant cycles. This allows us to tropicalize algebraic cycles of toric varieties over an algebraically closed field with non-Archimedean valuation. We see that the tropicalization map is a homomorphism on cycles and an isomorphism on cycle classes. Furthermore, we can use projective toric varieties to compactify known tropical varieties and study their combinatorics. We do this for the tropical Grassmannian in the Plücker embedding and compactify the tropical parameter space of rational degree d curves in tropical projective space using Chow quotients of the tropical Grassmannian.

This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.

Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that given a representation of a finite group over a number field decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.

Diese Dissertation besteht aus zwei aktuellen Themen im Bereich Finanzmathematik, die voneinander unabhängig sind.
Beim ersten Thema, "Flexible Algorithmen zur Bewertung komplexer Optionen mit mehreren Eigenschaften mittels der funktionalen Programmiersprache Haskell", handelt es sich um ein interdisziplinäres Projekt, in dem eine wissenschaftliche Brücke zwischen der Optionsbewertung und der funktionalen Programmierung geschlagen wurde.
Im diesem Projekt wurde eine funktionale Bibliothek zur Konstruktion von Optionen
entworfen, in dem es eine Reihe von grundlegenden Konstruktoren gibt, mit denen
man verschiedene Optionen kombinieren kann. Im Rahmen der funktionalen Bibliothek
wurde ein allgemeiner Algorithmus entwickelt, durch den die aus den Konstruktoren
kombinierten Optionen bewertet werden können.
Der mathematische Aspekt des Projekts besteht in der Entwicklung eines neuen Konzeptes zur Bewertung der Optionen. Dieses Konzept basiert auf dem Binomialmodell, welches in den letzten Jahren eine weite Verbreitung im Forschungsgebiet der Optionsbewertung fand. Der kerne Algorithmus des Konzeptes ist eine Kombination von mehreren
sorgfältig ausgewählten numerischen Methoden in Bezug auf den Binomialbaum. Diese
Kombination ist nicht trivial, sondern entwikelt sich nach bestimmten Regeln und ist eng mit den grundlegenden Konstruktoren verknüpft.
Ein wichtiger Charakterzug des Projekts ist die funktionale Denkweise. D. h. der Algorithmus ließ sich mithilfe einer funktionalen Programmiersprache formulieren. In unserem Projekt wurde Haskell verwendet.
The second topic, "Monte Carlo simulation of the deltas and (cross-)gammas of
Bermudan swaptions in the LIBOR market model", concerns a central problem of
financial mathematics, namely the determination of the risk parameters of complex interest-rate derivatives.
The thesis studies in detail the numerical computation of the delta vector of a Bermudan
swaption and successfully masters the new challenge of simulating the gamma matrix of a Bermudan swaption exactly. Both risk parameters play a decisive role in trading strategies in the form of delta hedging and gamma hedging. The underlying term-structure model is the LIBOR market model, which has seen remarkable development in financial mathematics in recent years; for its simulation and application, Monte Carlo simulation is essential.
For the computation of the delta vector of a Bermudan swaption, three classical and three newly developed numerical methods are presented and compared, covering almost all existing types of Monte Carlo simulation for this purpose.
In addition, the thesis develops two new methods for computing the gamma matrix of a Bermudan swaption exactly, which is entirely new in computational finance. One is a modified finite-difference method; the other is a pure pathwise method, based on pathwise differentiation, which yields a robust and unbiased simulation scheme.
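The pure pathwise method rests on differentiating the discounted payoff along each simulated path. As a hedged, much simplified illustration (a European call under geometric Brownian motion rather than a Bermudan swaption in the LIBOR market model), the pathwise delta estimator reads:

```python
import math, random

def pathwise_delta_call(s0, strike, r, sigma, maturity, n_paths, seed=0):
    """Pathwise Monte Carlo estimator of the Black-Scholes call delta:
    delta = E[ e^{-rT} * 1{S_T > K} * S_T / S_0 ],
    obtained by differentiating the discounted payoff along each path."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * maturity
    vol = sigma * math.sqrt(maturity)
    disc = math.exp(-r * maturity)
    total = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if s_t > strike:
            total += disc * s_t / s0   # d(payoff)/d(s0) along this path
    return total / n_paths

delta = pathwise_delta_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

The estimator is unbiased, which is the property the thesis exploits for the exact gamma simulation in the far more involved Bermudan setting.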

In this thesis diverse problems concerning inflation-linked products are dealt with. To start with, two models for inflation are presented: a geometric Brownian motion for the consumer price index itself and an extended Vasicek model for the inflation rate. For both suggested models the pricing formulas of inflation-linked products are derived using risk-neutral valuation techniques. As a result, Black-Scholes type closed-form solutions are calculated for a call option on the inflation index for the Brownian motion model and on the inflation evolution for the extended Vasicek model, as well as for an inflation-linked bond. These results have already been presented in Korn and Kruse (2004) [17]. In addition to these inflation-linked products, for both inflation models the pricing formulas of a European put option on inflation, an inflation cap and floor, an inflation swap and an inflation swaption are derived. Subsequently, based on the derived pricing formulas and assuming a geometric Brownian motion process for the inflation index, different continuous-time portfolio problems as well as hedging problems are studied using martingale techniques as well as stochastic optimal control methods. These utility optimization problems are continuous-time portfolio problems in different financial market setups, additionally with a positive lower bound constraint on the final wealth of the investor. Summarizing all the optimization problems studied in this work yields a complete picture of the inflation-linked market and of both counterparts among market participants, sellers as well as buyers of inflation-linked financial products. One interesting result worth mentioning here is the fact that a regular risk-averse investor would like to sell rather than buy inflation-linked products, due, for example, to the high price of inflation-linked bonds and their underperformance compared to conventional risk-free bonds.
The relevance of this observation is confirmed by investigating a simple optimization problem for the extended Vasicek process, where as a result the inflation-linked bond still underperforms the conventional bond. This situation does not change when one switches to optimizing expected utility from purchasing power, because this is in essence only a change of measure with a different deflator. The negativity of the optimal portfolio process for a normal investor is interesting in itself, but it does not affect the optimality of trading inflation-linked products compared to the situation of not including these products in the investment portfolio. Next, hedging problems are considered as a model of the other half of the inflation market, that is, the buyers of inflation-linked products. Natural buyers of these products are obviously institutions that have future payment obligations linked to inflation. That is why we consider problems of hedging inflation-indexed payment obligations with different financial assets. The important role of inflation-linked products in the hedging portfolio is shown by analyzing two alternative optimal hedging strategies: in the first one the investor is allowed to trade an inflation-linked bond, and in the second one he is not allowed to include an inflation-linked bond in his hedging portfolio. Technically this is done by restricting our original financial market, which consists of a conventional bond, the inflation index and a stock correlated with the inflation index, to one from which the inflation index is excluded. As a whole, this thesis presents a wide view on inflation-linked products: inflation modeling, pricing aspects of inflation-linked products, various continuous-time portfolio problems with inflation-linked products, as well as hedging of inflation-related payment obligations.
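The Black-Scholes type structure of the closed-form solutions can be sketched as follows. This is the standard Black-Scholes call formula with a continuous yield q; in the inflation setting the CPI plays the role of the asset, and q (hypothetically here) the role of the real rate. The thesis's exact formula from Korn and Kruse may differ in how the real and nominal rates enter; all names and parameters are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, r, q, sigma, maturity):
    """Black-Scholes call on an asset following geometric Brownian motion
    with continuous yield q (q = 0 recovers the classical formula)."""
    vol = sigma * math.sqrt(maturity)
    d1 = (math.log(spot / strike)
          + (r - q + 0.5 * sigma**2) * maturity) / vol
    d2 = d1 - vol
    return (spot * math.exp(-q * maturity) * norm_cdf(d1)
            - strike * math.exp(-r * maturity) * norm_cdf(d2))
```
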

In many industrial applications fast and accurate solutions of linear elliptic partial differential equations are needed as one of the building blocks of more complex problems. The domains are often highly complex, and meshing turns out to be expensive and difficult to accomplish with sufficient quality. In such cases methods on a regular, not boundary-adapted grid offer an attractive alternative. The Explicit Jump Immersed Interface Method (EJIIM) is one of these algorithms. The main interest of this work lies in solving the linear elasticity equations. For this purpose the existing EJIIM algorithm has been extended to three dimensions. The Poisson equation is always considered in parallel as the most typical representative of elliptic PDEs. During the work it became clear that EJIIM can have very high memory requirements; to overcome this problem an improvement, Reduced EJIIM, is proposed. The main theoretical result in this work is the proof of the smoothing property of inverses of elliptic finite difference operators in two and three space dimensions. It is an often observed phenomenon that the local truncation error may be of lower order along some lower-dimensional manifold without affecting the global convergence order of the solution.
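As a minimal sketch of the regular-grid finite-difference setting that EJIIM extends (not EJIIM itself, which additionally handles jumps across immersed interfaces), here is a second-order solver for the 1D Poisson problem using the Thomas algorithm:

```python
import math

def solve_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using second-order
    central differences on a regular grid with n interior points.
    The tridiagonal system (-1, 2, -1) is solved by the Thomas algorithm."""
    h = 1.0 / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    rhs = [f(x) * h * h for x in xs]
    cp = [0.0] * n   # modified super-diagonal coefficients
    dp = [0.0] * n   # modified right-hand side
    cp[0] = -0.5
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):          # forward elimination
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1): # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return xs, u

# manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x)
xs, u = solve_poisson_1d(lambda x: math.pi**2 * math.sin(math.pi * x), 99)
```

On a smooth problem the scheme converges with order two; EJIIM's contribution is to retain this accuracy when the solution has jumps at interfaces not aligned with the grid.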

This dissertation gives a systematic treatment of hypersurface singularities in arbitrary characteristic, providing the necessary tools, both theoretical and computational, for the purpose of classification. The thesis consists of five chapters. In Chapter 1, we introduce the background on isolated hypersurface singularities needed for our work. In Chapter 2, we formalize the notions of piecewise-homogeneous grading and discuss non-degeneracy in arbitrary characteristic thoroughly. Chapter 3 is devoted to determinacy and normal forms of isolated hypersurface singularities. In the first part, we give finite determinacy theorems in arbitrary characteristic with respect to right and contact equivalence, respectively. Furthermore, we show that being isolated and being finitely determined are equivalent properties. In the second part, we formalize Arnol'd's key ideas for the computation of normal forms and define the conditions (AA) and (AAC). The last part of Chapter 3 is devoted to the study of normal forms in the general setting of hypersurface singularities, imposing neither condition (A) nor Newton non-degeneracy. In Chapter 4, we present algorithms, implemented in Singular, for the explicit computation of regular bases and normal forms. In Chapter 5, we transfer some classical results on invariants over the field C of complex numbers to algebraically closed fields of characteristic zero, using the transfer principle known as the Lefschetz principle.

The dissertation deals with the application of hub location models in public transport planning. The author proposes new mathematical models along with different solution approaches to solve the instances. Moreover, a novel multi-period formulation is proposed as an extension of the general model. Due to its high complexity, heuristic approaches are formulated to find a good solution within a reasonable amount of time.

The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.

This thesis is divided into three main parts: the development of Gaussian and White Noise Analysis, Hamiltonian path integrals as White Noise distributions, and numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis, Donsker's delta function plays a key role. We investigate this generalized function in Chapter 2. Moreover, we show by giving a counterexample that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look at generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematically rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators related to applications of Feynman integrals, such as differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of convolution operators, and we generalize the central homomorphy theorem to regular generalized functions. We further generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives.
In Chapter 5 we discuss products of generalized functions and revisit the Wick formula, investigating under which conditions and on which spaces it can be generalized. We show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the square-integrable functions. At the end of the chapter we consider products of Donsker's delta function with a generalized function by means of a measure transformation; problems such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from Chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the T-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we show that the generalized expectation (the generating functional at zero) gives the Green's function of the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
In Chapter 8 we give an outlook on how the scaling approach, successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether the expression solves the Schrödinger equation; at least for quadratic potentials we obtain the right physics.
In the last chapter, we focus on the numerical analysis of polymer chains driven by fractional Brownian motion. Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve long-range dependence in the interaction of the monomers inside a polymer chain. A Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded volume effect into account.
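A minimal sketch of the correlation-matrix discretization of fBm (drawing a path from the Cholesky factor of its covariance matrix; the Metropolis updates and the excluded-volume interaction of the thesis are omitted):

```python
import numpy as np

def fbm_cov(s, t, hurst):
    """Covariance of fractional Brownian motion:
    E[B_H(s) B_H(t)] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    return 0.5 * (s**(2 * hurst) + t**(2 * hurst) - abs(t - s)**(2 * hurst))

def sample_fbm(n, horizon, hurst, rng):
    """Sample one fBm path on a regular grid via the Cholesky factor of
    the covariance matrix; O(n^3), fine for moderate n."""
    times = np.linspace(horizon / n, horizon, n)
    cov = np.array([[fbm_cov(s, t, hurst) for t in times] for s in times])
    chol = np.linalg.cholesky(cov)
    return times, chol @ rng.standard_normal(n)

# hurst > 1/2 gives positively correlated increments (long-range dependence)
times, path = sample_fbm(200, 1.0, 0.75, np.random.default_rng(0))
```

For H = 1/2 the covariance reduces to min(s, t) and the path is ordinary Brownian motion; a Metropolis sampler would propose local path updates and accept them with the usual ratio of (Gaussian times interaction) densities.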

Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
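A key ingredient of such modular methods is lifting coefficients from characteristic \(p\) (or from \(\mathbb{Z}/m\) after Chinese remaindering) back to \(\mathbb{Q}\) by rational reconstruction. A minimal sketch, not the SINGULAR implementation:

```python
import math

def rational_reconstruction(a, m):
    """Recover a fraction n/d with n ≡ a·d (mod m) and |n|, |d| bounded by
    sqrt(m/2), using the half-extended Euclidean algorithm.
    Returns (n, d) with d > 0, or None if no admissible pair exists."""
    bound = math.isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if abs(t1) > bound or math.gcd(r1, abs(t1)) != 1:
        return None
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1
```

For example, 1/3 is represented by 3336 modulo 10007, and the reconstruction recovers the fraction (1, 3); in the modular Gröbner basis setting this is applied coefficient by coefficient to the lifted basis before the final verification step.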
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
At the current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.

Grey-box modelling deals with models that integrate two kinds of information with equal importance: qualitative (expert) knowledge and quantitative (data) knowledge. The doctoral thesis has two aims: the improvement of an existing neuro-fuzzy approach (the LOLIMOT algorithm), and the development of a new model class with a corresponding identification algorithm, based on multiresolution analysis (wavelets) and statistical methods. The identification algorithm is able to identify both hidden differential dynamics and hysteretic components. After presenting some improvements of the LOLIMOT algorithm based on readily normalized weight functions derived from decision trees, we investigate several mathematical theories, i.e. the theory of nonlinear dynamical systems and hysteresis, statistical decision theory, and approximation theory, in view of their applicability to grey-box modelling. These theories point directly toward a new model class and its identification algorithm. The new model class is derived from local model networks through the following modifications: inclusion of non-Gaussian noise sources; allowance of internal nonlinear differential dynamics represented by multi-dimensional real functions; introduction of internal hysteresis models through two-dimensional "primitive functions"; replacement or approximation of the weight functions and of the mentioned multi-dimensional functions by wavelets; usage of the sparseness of the matrix of wavelet coefficients; and identification of the wavelet coefficients with sequential Monte Carlo methods. We also apply this modelling scheme to the identification of a shock absorber.

The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and no general characterization is known. Hence we confine ourselves to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is incorporated. In Chapter 2 the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with existing research results, and an algorithm is introduced that determines a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for determining cliques via their defining sets is presented. Several examples are included.

This thesis is devoted to constructive module theory of polynomial
graded commutative algebras over a field.
It treats the theory of Groebner bases (GB), standard bases (SB) and syzygies as well as algorithms
and their implementations.
Graded commutative algebras naturally unify exterior and commutative polynomial algebras.
They are graded non-commutative, associative unital algebras over fields and may contain zero-divisors.
In this thesis
we try to make the most use out of _a priori_ knowledge about
their characteristic (super-commutative) structure
in developing direct symbolic methods, algorithms and implementations,
which are intrinsic to graded commutative algebras and practically efficient.
For our symbolic treatment we represent them as polynomial algebras
and redefine the product rule in order to allow super-commutative structures
and, in particular, to allow zero-divisors.
Using this representation we give a nice characterization
of a GB and an algorithm for its computation.
We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be _local_,
generalizing Mora's algorithm (in a similar fashion as G.-M. Greuel and G. Pfister did by allowing local or mixed monomial orderings)
and working with SBs.
In this general setting we prove a generalized Buchberger criterion,
which shows that syzygies of leading terms play the central role
in SB and syzygy module computations.
Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm,
which we can formulate particularly close to our implementation.
On the implementation side
we have further developed the Singular non-commutative subsystem Plural
in order to allow polynomial arithmetic
and more involved non-commutative basic Computer Algebra computations (e.g. S-polynomial, GB)
to be easily implementable for specific algebras.
At the moment graded commutative algebra-related algorithms
are implemented in this framework.
Benchmarks show that our new algorithms and implementation are practically efficient.
The developed framework has many applications in various
branches of mathematics and theoretical physics,
including the computation of sheaf cohomology, coordinate-free verification of affine geometry
theorems, and the computation of cohomology rings of p-groups, which are partially described in this thesis.

We work in the setting of time series of financial returns. Our starting point is the class of GARCH models, which are very common in practice. We introduce the possibility of crashes in such GARCH models. A crash is modeled by drawing innovations from a distribution with much mass on extremely negative events, while in "normal" times the innovations are drawn from a normal distribution. The probability of a crash is modeled as time-dependent, depending on the past of the observed time series and/or exogenous variables. The aim is a splitting of risk into "normal" risk, coming mainly from the GARCH dynamic, and extreme-event risk, coming from the modeled crashes. We present several incarnations of this modeling idea and give basic properties such as the conditional first and second moments. For the special case of a pure ARCH dynamic we establish geometric ergodicity and, thus, stationarity and mixing conditions. Also in the ARCH case we formulate (quasi) maximum likelihood estimators and derive conditions for consistency and asymptotic normality of the parameter estimates. In a special case of genuine GARCH dynamic we are able to establish L_1-approximability and hence laws of large numbers for the processes themselves. We can formulate a conditional maximum likelihood estimator in this case, but cannot completely establish its consistency. On the practical side we examine the results of estimating models with genuine GARCH dynamic and compare them to classical GARCH models. We apply the models to Value at Risk estimation and see that, in comparison to the classical models, many of ours seem to work better, although we chose the crash distributions quite heuristically.
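A hedged sketch of the modeling idea: a GARCH(1,1) recursion whose innovations come from a standard normal in normal times and, with a time-dependent crash probability driven by past returns, from a distribution with much mass on extremely negative events. The specific crash distribution and probability function below are illustrative choices, not the thesis's specification.

```python
import math, random

def simulate_crash_garch(n, omega=1e-5, alpha=0.05, beta=0.9,
                         base_crash_prob=0.01, seed=0):
    """Simulate returns r_t = sigma_t * eps_t with GARCH(1,1) variance
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    where eps_t is standard normal in normal times and, with probability
    p_crash(t), drawn from a crash distribution on extreme negative values."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the stationary variance
    prev_r = 0.0
    returns, crash_flags = [], []
    for _ in range(n):
        var = omega + alpha * prev_r * prev_r + beta * var
        # crash probability rises after negative past returns (illustrative)
        p_crash = min(1.0, base_crash_prob * (1.0 + 50.0 * max(-prev_r, 0.0)))
        crash = rng.random() < p_crash
        eps = rng.gauss(-5.0, 2.0) if crash else rng.gauss(0.0, 1.0)
        prev_r = math.sqrt(var) * eps
        returns.append(prev_r)
        crash_flags.append(crash)
    return returns, crash_flags

rets, flags = simulate_crash_garch(5000)
```

The `flags` series makes the intended risk split explicit: "normal" risk lives in the GARCH variance recursion, extreme-event risk in the flagged crash draws.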

In this thesis, the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to porous media is investigated. For that purpose, the transmission conditions across the interfaces between the fluid regions and the porous domain are derived, a proper algorithm is formulated, and numerical examples are presented. First, the transmission conditions for the coupling of various physical phenomena are reviewed. For the coupling of free flow with porous media, it has to be distinguished whether the fluid flows tangentially or perpendicularly to the porous medium; this plays an essential role in the formulation of the transmission conditions. In the thesis, the transmission conditions for the coupling of the Stokes equations and the Biot poroelasticity equations for fluid flow normal to the porous medium are derived in one and three dimensions. With these conditions, the continuous fully coupled system of equations is formulated in one and three dimensions. In the one-dimensional case the extreme cases, i.e. a fluid-fluid interface and a fluid/impermeable-solid interface, are considered. Two chapters of the thesis are devoted to the discretisation of the fully coupled Biot-Stokes system for matching and non-matching grids, respectively. To this end, operators are introduced that map the internal and boundary variables to the respective domains via the Stokes equations, the Biot equations and the transmission conditions; the matrix representation of some of these operators is shown. For the non-matching case, a cell-centred grid in the fluid region and a staggered grid in the porous domain are used. Hence, the discretisation is more difficult, since an additional grid on the interface has to be introduced, and corresponding matching functions are needed to transfer the values properly from one domain to the other across the interface. In the end, the iterative solution procedure for the Biot-Stokes system on non-matching grids is presented.
For this purpose, a short review of domain decomposition methods is given, which are often the methods of choice for such coupled problems. The iterative solution algorithm is presented, including details like stopping criteria, choice and computation of parameters, formulae for non-dimensionalisation, software and so on. Finally, numerical results for steady state examples, depth filtration and cake filtration examples are presented.

In this dissertation we consider mesoscale-based models for flow-driven fibre orientation dynamics in suspensions. Models for fibre orientation dynamics are derived for two classes of suspensions. For concentrated suspensions of rigid fibres, the Folgar-Tucker model is generalized by incorporating the excluded volume effect. For dilute semi-flexible fibre suspensions, a novel moments-based description of the fibre orientation state is introduced, and a model for the flow-driven evolution of the corresponding variables is derived together with several closure approximations. The equation system describing fibre suspension flows, consisting of the incompressible Navier-Stokes equation with an orientation-state-dependent non-Newtonian constitutive relation and a linear first-order hyperbolic system for the fibre orientation variables, is analyzed, allowing rather general fibre orientation evolution models and constitutive relations. The existence and uniqueness of a solution is demonstrated locally in time for sufficiently small data. The closure relations for the semi-flexible fibre suspension model are studied numerically. A finite-volume-based discretization of the suspension flow is given, and numerical results for several two- and three-dimensional domains with different parameter values are presented and discussed.

In the present work, we investigate how to correct the questionable normality, linearity and quadraticity assumptions underlying existing Value-at-Risk methodologies. In order to take into account the skewness, heavy-tailedness and stochastic volatility of the market values of financial instruments, the constant-volatility hypothesis widely used by existing Value-at-Risk approaches is also examined and corrected, and the tails of the financial return distributions are handled via generalized Pareto or extreme value distributions. Artificial neural networks are combined with extreme value theory in order to build consistent and nonparametric Value-at-Risk measures without the need for any of the questionable assumptions specified above. To this end, either autoregressive models (AR-GARCH) are used or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we prove some new results extending White's artificial neural network denseness results to unbounded random variables and provide a generalization of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, generalized Pareto and extreme value distributions are used. The new artificial neural network denseness results enable us to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space \(L^m(\mu)\) for some \(m \geq 1\) and some probability measure \(\mu\), and hold for a certain subclass of square-integrable functions.
The Fourier transform and the new extension of the Bernstein inequality to unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], have been the main tools to establish the extension of White's neural network denseness results. To illustrate the accuracy of the new denseness results, we demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
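For the tail handling, the peaks-over-threshold Value-at-Risk formula under a fitted generalized Pareto distribution can be sketched as follows; the GPD parameters are assumed given here, rather than produced by the neural-network estimation of the thesis, and all numbers are illustrative.

```python
import math

def gpd_var(threshold, beta, xi, n_total, n_exceed, level):
    """Peaks-over-threshold Value-at-Risk: given a GPD(xi, beta) fitted to
    the n_exceed losses exceeding `threshold` out of n_total observations,
    the VaR at confidence `level` is
        u + (beta/xi) * ((n_total/n_exceed * (1 - level))**(-xi) - 1),
    with the log-form limit when xi = 0 (exponential-type tail)."""
    tail_prob = (1.0 - level) * n_total / n_exceed
    if xi == 0.0:
        return threshold - beta * math.log(tail_prob)
    return threshold + (beta / xi) * (tail_prob**(-xi) - 1.0)

var_99 = gpd_var(threshold=2.0, beta=0.5, xi=0.2,
                 n_total=1000, n_exceed=100, level=0.99)
```
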

Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)

We consider a continuous-time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or a compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. For example, if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. In other cases, too, such as an underlying Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed: e.g., we use statistical linearization for nonlinear drift processes, finite-state Markov chain approximations for the drift process, and/or diffusion approximations for small jumps in the noise term.
In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular with regard to the performance of the corresponding utility-maximizing strategies.
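For the benchmark case of an Ornstein-Uhlenbeck drift without jumps, the filter is the classical Kalman recursion. A minimal sketch for the discretized scalar model; all parameter values are illustrative:

```python
import math, random

def kalman_drift_filter(obs, dt, a, q_var, obs_var, mu0=0.0, p0=1.0):
    """Scalar Kalman filter for the discretized model
        mu_k = a * mu_{k-1} + w_k,   w_k ~ N(0, q_var)    (hidden drift)
        y_k  = mu_k * dt + v_k,      v_k ~ N(0, obs_var)  (observed return)
    Returns filtered drift estimates and posterior variances."""
    mu, p = mu0, p0
    estimates, variances = [], []
    for y in obs:
        mu, p = a * mu, a * a * p + q_var    # predict
        s = dt * dt * p + obs_var            # innovation variance
        gain = p * dt / s                    # Kalman gain
        mu = mu + gain * (y - dt * mu)       # update mean
        p = (1.0 - gain * dt) * p            # update variance
        estimates.append(mu)
        variances.append(p)
    return estimates, variances

# simulate hypothetical data: AR(1) drift observed through noisy daily returns
rng = random.Random(1)
dt, a, q_var, sigma = 1.0 / 250.0, 0.99, 1e-4, 0.2
obs_var = sigma * sigma * dt
mu_true, ys = 0.5, []
for _ in range(1000):
    mu_true = a * mu_true + math.sqrt(q_var) * rng.gauss(0.0, 1.0)
    ys.append(mu_true * dt + math.sqrt(obs_var) * rng.gauss(0.0, 1.0))
est, post_var = kalman_drift_filter(ys, dt, a, q_var, obs_var)
```

The variance recursion is deterministic and contracts toward a steady state, which is exactly the property lost once shot noise or nonlinear drift dynamics force the approximations discussed above.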

In this dissertation a model of melt spinning (by Doufas, McHugh and Miller) has been investigated. The model (DMM model) which takes into account effects of inertia, air drag, gravity and surface tension in the momentum equation and heat exchange between air and fibre surface, viscous dissipation and crystallization in the energy equation also has a complicated coupling with the microstructure. The model has two parts, before onset of crystallization (BOC) and after onset of crystallization (AOC) with the point of onset of crystallization as the unknown interface. Mathematically the model has been formulated as a Free boundary value problem. Changes have been introduced in the model with respect to the air drag and an interface condition at the free boundary. The mathematical analysis of the nonlinear, coupled free boundary value problem shows that the solution of this problem depends heavily on initial conditions and parameters which renders the global analysis impossible. But by defining a physically acceptable solution, it is shown that for a more restricted set of initial conditions if a unique solution exists for IVP BOC then it is physically acceptable. For this the important property of the positivity of the conformation tensor variables has been proved. Further it is shown that if a physically acceptable solution exists for IVP BOC then under certain conditions it also exists for IVP AOC. This gives an important relation between the initial conditions of IVP BOC and the existence of a physically acceptable solution of IVP AOC. A new investigation has been done for the melt spinning process in the framework of classical mechanics. A Hamiltonian formulation has been done for the melt spinning process for which appropriate Poisson brackets have been derived for the 1-d, elongational flow of a viscoelastic fluid. 
From the Hamiltonian, the cross-sectionally averaged mass and momentum balance equations of melt spinning can be derived along with the microstructural equations. These studies show that the complicated problem of melt spinning can also be studied in the framework of classical mechanics, and they provide the groundwork for further investigations of fibre dynamics. The free boundary value problem has been solved numerically using a shooting method, with Matlab routines for the IVPs arising in the problem. Some numerical case studies have been carried out to study the sensitivity of the ODE systems with respect to the initial guess and the parameters. These experiments support the analysis and shed more light on the stiff nature and ill-posedness of the ODE systems. To validate the model, simulations have been performed on data sets provided by the company, and the numerical axial velocity profiles have been compared with the experimental profiles. The numerical results are in excellent agreement with the experiment.
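The shooting approach used above can be illustrated on a toy two-point boundary value problem (not the DMM fibre model; the equation, interval and boundary values below are purely illustrative): a guessed initial slope is iterated until the far boundary condition is met.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = -y on [0, pi/2], y(0) = 0, y(pi/2) = 1.
# Exact solution y = sin(x), so the correct initial slope is y'(0) = 1.

def endpoint_mismatch(slope):
    """Integrate the IVP with a guessed initial slope; return y(pi/2) - 1."""
    sol = solve_ivp(lambda x, y: [y[1], -y[0]], (0.0, np.pi / 2),
                    [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Shooting: root-find the initial slope that satisfies the boundary condition.
slope = brentq(endpoint_mismatch, 0.1, 5.0)
print(slope)  # close to 1
```

The same pattern applies when the IVP is stiff or ill-posed, as reported above, except that the root-finding then becomes sensitive to the initial guess.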

The main purpose of this study was to improve the modelling of the physical properties of compressed materials, especially fibrous materials. Fibrous materials find increasing application in industry, and most of these materials are compressed for their various applications. In this situation we are interested in how the fibres are arranged, e.g. with which directional distribution. For a given material a three-dimensional image can be obtained via micro computed tomography. Since physical parameters such as the fibre lengths or the local fibre directions can be estimated from the image by other methods, it is beneficial to improve the physical properties by changing the parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere, which contains several well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, as special cases and gives a better fit for some models. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation Maximization setting. Furthermore, we introduce a non-parametric variant of the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
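As a hedged illustration of the EM setting described above, the following sketch fits a two-component von Mises-Fisher mixture on the sphere to synthetic data. The initialization, the concentration update (the Banerjee et al. approximation for dimension 3) and all numbers are illustrative assumptions, not the thesis's more general estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def vmf_logpdf(X, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on S^2,
    with normalizing constant c3(kappa) = kappa / (4*pi*sinh(kappa))."""
    log_c = (np.log(kappa) + np.log(2.0) - np.log(4 * np.pi)
             - kappa - np.log1p(-np.exp(-2 * kappa)))
    return log_c + kappa * (X @ mu)

def em_vmf_mixture(X, n_iter=50):
    """EM for a two-component von Mises-Fisher mixture on S^2."""
    n = len(X)
    mus = np.array([X[0], -X[0]])          # crude illustrative initialization
    kappas = np.array([5.0, 5.0])
    pis = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of the two components
        logp = np.stack([np.log(pis[j]) + vmf_logpdf(X, mus[j], kappas[j])
                         for j in range(2)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted mean direction, concentration, mixture weights
        for j in range(2):
            w = resp[:, j]
            s = (w[:, None] * X).sum(axis=0)
            rbar = np.linalg.norm(s) / w.sum()
            mus[j] = s / np.linalg.norm(s)
            kappas[j] = rbar * (3 - rbar**2) / (1 - rbar**2)  # approx., d = 3
            pis[j] = w.sum() / n
    return mus, kappas, pis

# synthetic data: two noisy clusters around the north and south poles
z = np.array([0.0, 0.0, 1.0])
X = np.vstack([z + 0.2 * rng.standard_normal((200, 3)),
               -z + 0.2 * rng.standard_normal((200, 3))])
X /= np.linalg.norm(X, axis=1, keepdims=True)

mus, kappas, pis = em_vmf_mixture(X)
```

For well-separated clusters like these, the estimated mean directions align with the two poles and the mixture weights come out near one half each.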

The thesis is concerned with multiscale approximation by means of radial basis functions on hierarchically structured spherical grids. A new approach is proposed to construct a biorthogonal system of locally supported zonal functions, and based on this system a spherical fast wavelet transform (SFWT) is established. Finally, based on the wavelet analysis, geophysically and geodetically relevant problems involving rotation-invariant pseudodifferential operators are shown to be efficiently and economically solvable.

In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.

The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has continuously attracted the attention of algebraic geometers since then. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is, smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer these questions by presenting numerical conditions, depending on invariants of the divisor D and of the singularity types S1,...,Sr, which ensure a positive answer. The main conditions which we derive are of the type inv(S1)+...+inv(Sr) < aD^2+bD.K+c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case where S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves, and geometrically ruled surfaces.

Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. To compute it, one uses an algorithm by Zassenhaus, which he introduced in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later Musser generalized it to the multivariate case, and subsequently the algorithm was refined and improved.
In this work every step of the algorithm is described as well as the problems that arise in these steps.
In doing so, we restrict to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory which is part of the computer algebra system Singular.
Besides, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to compute Hensel lifting and recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capability of the implementation is given, which includes benchmark examples as well as randomly generated input intended to give an impression of the average performance.
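The implementation described above lives in the C++ library factory inside Singular; as a rough stand-in, the following sketch shows the kind of input/output such a multivariate factorization routine handles, using sympy purely for illustration (not the thesis's code).

```python
import sympy as sp

x, y = sp.symbols('x y')

# Expand a product of irreducible polynomials over Q, then recover the factors.
p = sp.expand((x**2 + y + 1) * (x*y - 2) * (x + y))
factors = sp.factor(p)
print(factors)
```

The factors come back as a product of the three irreducible polynomials, up to ordering; expanding the result reproduces the input.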

Extensions of Shallow Water Equations
The subject of the thesis of Michael Hilden is the simulation of floods in urban areas. In case of strong rain events, water can flow out of the overloaded sewer system onto the street and damage the connected houses. The dependable simulation of water flowing out of a manhole ("manhole") and over a curb ("curb") is crucial for the assessment of flood risks. The incompressible 3D Navier-Stokes equations (3D-NSE) describe the free-surface flow of water accurately, but require expensive computations. Therefore, the less CPU-intensive (by a factor of about 1/100) Shallow Water Equations (SWE) are usually applied in hydrology. They can be derived from the 3D-NSE under the assumption of a hydrostatic pressure distribution via depth-integration, and are applied successfully in particular to simulations of river flow processes. However, the SWE computations of the flow problems "manhole" and "curb" differ from the 3D-NSE results. Thus, the SWE need to be extended appropriately to give reliable forecasts for flood risks in urban areas at reduced computational effort. These extensions are developed based on physical effects not considered in the classical SWE. In one extension, a vortex layer on the ground is separated from the main flow, representing its new bottom. In a further extension, the hydrostatic pressure distribution is corrected by additional terms due to approximations of the vertical velocities and their interaction with the flow. These extensions raise the quality of the SWE results for these flow problems to the quality level of the NSE results with only a moderate increase in CPU effort.
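The classical SWE mentioned above can be sketched in one space dimension. The following minimal Lax-Friedrichs dam-break example (the scheme choice, grid and data are illustrative assumptions, and none of the thesis's extensions are included) shows the conservative form that makes such solvers mass-preserving.

```python
import numpy as np

g = 9.81
N, L, T = 200, 10.0, 0.5
dx = L / N
x = (np.arange(N) + 0.5) * dx

# dam-break initial data: water depth h, discharge q = h*u
h = np.where(x < L / 2, 2.0, 1.0)
q = np.zeros(N)
mass0 = h.sum() * dx

def flux(h, q):
    """Physical fluxes of the 1D shallow water system."""
    u = q / h
    return q, q * u + 0.5 * g * h**2

t = 0.0
while t < T:
    c = np.abs(q / h) + np.sqrt(g * h)     # wave speeds
    dt = 0.4 * dx / c.max()                # CFL time step
    # Lax-Friedrichs with periodic boundaries (conservative form)
    hL, hR = np.roll(h, 1), np.roll(h, -1)
    qL, qR = np.roll(q, 1), np.roll(q, -1)
    FhL, FqL = flux(hL, qL)
    FhR, FqR = flux(hR, qR)
    h = 0.5 * (hL + hR) - 0.5 * dt / dx * (FhR - FhL)
    q = 0.5 * (qL + qR) - 0.5 * dt / dx * (FqR - FqL)
    t += dt
```

Because the update is in conservative form with periodic boundaries, the total water mass is preserved to machine precision, a property any flood solver of this kind is expected to have.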

A classical conjecture in the representation theory of finite groups, the McKay conjecture, states that for any finite group and any prime p the number of complex irreducible characters of degree prime to p equals the corresponding number for the normalizer of a Sylow p-subgroup. Recently a reduction theorem was proved by Isaacs, Malle and Navarro: if all simple groups are “good”, then the McKay conjecture holds. In this work we are concerned with the problem of goodness for finite groups of Lie type in their defining characteristic. A simple group is called “good” if certain equivariant bijections between the involved character sets exist. We present a structural approach to the construction of such a bijection by utilizing the so-called “Steinberg map”. This yields very natural bijections, and we prove most of the desired properties.

This dissertation consists of two main parts: new results in Gaussian analysis and their application to the theory of path integrals. The central result of the first part is the characterization of all regular distributions that can be multiplied by Donsker's delta, together with an explicit formula for such products, the so-called Wick formula. In the application part, a complex-scaled Feynman-Kac formula and its associated kernels are first established with the help of this Wick formula. Furthermore, Feynman integrands for new classes of potentials are constructed as White Noise distributions.

Abstract. This thesis treats problems of the numerical solution of partial differential equations by finite difference methods in an algebraic setting. Both theoretical results and a practical implementation using the systems SINGULAR and QEPCAD are presented. The algebraic methods address two different aspects of finite difference schemes: the generation of schemes by means of Gröbner bases, and the subsequent stability analysis via quantifier elimination by cylindrical algebraic decomposition. The first three chapters review the necessary notions from computer algebra, explain the basics of the numerical convergence theory of finite difference schemes, and sketch the application of the CAD algorithm for quantifier elimination. Chapter 4 develops, starting from the underlying context, the formulation of difference schemes and the conditions they must satisfy; algebraically, a scheme is by definition an ideal in a polynomial ring. Besides the practical manageability of the objects, the emphasis lies on the greatest possible generality of the definitions. Equivalent ways of generating schemes, as well as uniqueness properties under very special conditions on the approximations used, are shown. The application of the CAD algorithm to estimating the symbol of a scheme is explained. The fifth chapter describes the SINGULAR library findiff.lib, which ensures the interplay of SINGULAR and QEPCAD and allows a fully automated generation and stability analysis of a finite difference scheme.

Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)

This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to 3, which turns the time integration into a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. To make the nonlinear model reduction efficient, we approximate the nonlinearity by the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.

We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make possible the incorporation of a simulation of the casting process into the optimization: In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update. A level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
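A minimal sketch of the level set representation used for the structural update above (the combination rule and the geometry below are illustrative assumptions; the thesis combines level set functions in its own way): shapes are stored as signed distance functions on a grid, and Boolean operations reduce to pointwise min/max.

```python
import numpy as np

# grid on the unit square
n = 101
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')

def circle_sdf(cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return np.hypot(X - cx, Y - cy) - r

phi1 = circle_sdf(0.35, 0.5, 0.2)
phi2 = circle_sdf(0.65, 0.5, 0.2)

# union of the two shapes: pointwise minimum of the level set functions
phi = np.minimum(phi1, phi2)

inside = phi < 0.0   # boolean mask of the combined structure
```

The zero level set of `phi` is the boundary of the union; a mesh generator for implicit boundary descriptions, as used in the thesis, would take such a function as input.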

In this thesis, the quasi-static Biot poroelasticity system in bounded multilayered domains in one and three dimensions is studied. In more detail, in the one-dimensional case, a finite volume discretization for the Biot system with discontinuous coefficients is derived. The discretization results in a difference scheme with harmonic averaging of the coefficients. A detailed theoretical analysis of the obtained discrete model is performed, and error estimates which establish convergence rates for both the primary and the flux unknowns are derived. Besides, modified and more accurate discretizations, which can be applied when the interface position coincides with a grid node, are obtained; these discretizations yield second order convergence of the fluxes of the problem. Finally, a solver for the resulting system of linear equations is developed and extensively tested, and a number of numerical experiments confirming the theoretical considerations are performed. In the three-dimensional case, the finite volume discretization of the system involves the construction of special interpolating polynomials in the dual volumes. These polynomials are derived so that they satisfy the same continuity conditions across the interface as the original system of PDEs. This technique yields a difference scheme which provides accurate computation of the primary as well as of the flux unknowns, including at the points adjacent to the interface. Numerical experiments based on the obtained discretization show second order convergence for auxiliary problems with known analytical solutions. A multigrid solver which incorporates the features of the discrete model is developed in order to solve efficiently the linear system produced by the finite volume discretization of the three-dimensional problem. The crucial point is to derive problem-dependent restriction and prolongation operators.
Such operators are a well-known remedy for scalar PDEs with discontinuous coefficients. Here, these operators are derived for the system of PDEs, taking into account the interdependence of the different unknowns within the system. In the derivation, the interpolating polynomials from the finite volume discretization are employed again, thus linking the discretization and the solution processes. The developed multigrid solver is tested on several model problems. Numerical experiments show that, due to the proper problem-dependent intergrid transfer, the multigrid solver is robust with respect to the discontinuities of the coefficients of the system. In the end, the poroelasticity system with discontinuous coefficients is used to model a real problem. The Biot model describing this problem is treated numerically, i.e., discretized by the developed finite volume techniques and then solved by the constructed multigrid solver. Physical characteristics of the process, such as the displacement of the skeleton, the pressure of the fluid, and the components of the stress tensor, are calculated and presented at certain cross-sections.
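The effect of harmonic averaging at an interface can be sketched on a scalar 1D model problem (an illustrative toy, not the Biot system): for a piecewise constant coefficient with the interface on a cell face, a finite volume scheme with harmonic face coefficients reproduces the exact constant flux.

```python
import numpy as np

# 1D interface problem: -(k u')' = 0 on (0,1), u(0)=0, u(1)=1,
# k = 1 on (0, 0.5) and k = 2 on (0.5, 1); the interface lies on a cell face.
n = 10
h = 1.0 / n
k = np.where(np.arange(n) < n // 2, 1.0, 2.0)   # one value per cell

# harmonic averaging of the coefficient at interior faces
k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])

A = np.zeros((n, n))
b = np.zeros(n)
uL, uR = 0.0, 1.0
for i in range(n):
    if i > 0:
        A[i, i] += k_face[i - 1] / h**2
        A[i, i - 1] -= k_face[i - 1] / h**2
    else:                                        # Dirichlet via half cell
        A[i, i] += 2.0 * k[0] / h**2
        b[i] += 2.0 * k[0] * uL / h**2
    if i < n - 1:
        A[i, i] += k_face[i] / h**2
        A[i, i + 1] -= k_face[i] / h**2
    else:
        A[i, i] += 2.0 * k[-1] / h**2
        b[i] += 2.0 * k[-1] * uR / h**2

u = np.linalg.solve(A, b)

# exact constant flux magnitude: 1/(0.5/1 + 0.5/2) = 4/3
flux = 2.0 * k[0] * (u[0] - uL) / h
print(flux)
```

With an arithmetic average at the faces the computed flux would be wrong at the interface; the harmonic average recovers it exactly, which is the 1D analogue of the interface treatment above.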

Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of the physical and chemical processes occurring in the course of an accident is an interdisciplinary problem with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of this study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment, and of a software tool based on it. The numerical simulations make it possible to analyze and predict the behavior of NPP systems under different working and accident conditions, to develop proper action plans for minimizing the risks of accidents, and/or to minimize the consequences of possible accidents. A very large number of scenarios has to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and suitable algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, where possible, analysed. A fast directional-splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the method. The development of a suitable pre-processor and of customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.

This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles on the submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process because, due to its very nature, lab experiments are difficult and expensive to perform. On the other hand, the mathematical modeling faces several challenges: a huge dominance (up to 10^6 times) of the flow over the diffusion, and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the Focusing stage under the given constraints on the flow, and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
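A generic multilevel Monte Carlo estimator can be sketched as follows. This toy version estimates a plain expectation for geometric Brownian motion with Euler coupling; the model, sample sizes and levels are illustrative assumptions, not the thesis's distribution-function estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# MLMC for E[S_T], dS = mu*S dt + sigma*S dW, Euler with 2^l steps on level l.
mu, sigma, S0, T = 0.05, 0.2, 1.0, 1.0

def level_correction(n_paths, n_steps):
    """Sample mean of P_fine - P_coarse: the fine path uses n_steps Euler
    steps, the coupled coarse path n_steps/2 steps with the same increments."""
    dt = T / n_steps
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    Sf = np.full(n_paths, S0)
    Sc = np.full(n_paths, S0)
    for k in range(n_steps):
        Sf = Sf * (1 + mu * dt + sigma * dW[:, k])
    for k in range(0, n_steps, 2):
        Sc = Sc * (1 + mu * 2 * dt + sigma * (dW[:, k] + dW[:, k + 1]))
    return np.mean(Sf - Sc)

# level 0: plain Monte Carlo with a single Euler step
n0 = 200000
dW0 = rng.standard_normal(n0) * np.sqrt(T)
estimate = np.mean(S0 * (1 + mu * T + sigma * dW0))
# higher levels: telescoping corrections with decreasing sample sizes
for l in range(1, 6):
    n_paths = max(1000, 200000 // 4**l)
    estimate += level_correction(n_paths, 2**l)

print(estimate)  # close to S0*exp(mu*T)
```

The point of the telescoping sum is that the corrections have small variance, so few samples are needed on the expensive fine levels; the same cost balancing underlies the distribution-function estimator above.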

The goal of this work is to develop a simulation-based algorithm allowing the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding properties of the fibers. This method can be used for the optimization of the microstructure, in order to obtain a better stiffness or strength of the corresponding fiber material later on. An additional aspect of the thesis is that we take into account the microcontacts between fibers of the textile. A further aspect is the accounting for the thickness of the thin fibers in the textile: the introduction of an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to one-dimensional problems, which reduces the numerical computations significantly.

A fiber composite material with periodic microstructure and multiple frictional microcontacts between fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact problem of linear elasticity is considered, and a two-scale approach is used for obtaining the effective mechanical properties. An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the limits with respect to the in-plane period and the characteristic diameter of the fibers. This leads to an equivalent homogenized problem and reduces the dimension of the auxiliary problems; numerical simulations of the cell problems then give the effective material properties of the textile.

The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has also been studied. Introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced; the condition is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.

FiberFEM, a C++ finite element code for solving contact elasticity problems, has been developed. The code is based on the implementation of the algorithm for the contact between fibers proposed in the thesis. Numerical examples of the homogenization of geotextiles and wovens are obtained by means of the developed algorithm; the effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained with FiberFEM.

In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for the multidimensional setting.
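A minimal sketch of the basic lattice approach underlying the above: a plain Cox-Ross-Rubinstein tree for a European call, checked against the Black-Scholes price. The parameters are illustrative, and none of the advanced Edgeworth-based constructions are included.

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """European call price in a Cox-Ross-Rubinstein binomial tree
    with n periods, evaluated by backward induction."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # option values at maturity; node j = number of up-moves
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # roll back through the tree
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes reference price."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

p_tree = crr_call(100, 100, 0.05, 0.2, 1.0, 1000)
p_bs = bs_call(100, 100, 0.05, 0.2, 1.0)
```

Plotting the tree price against n exhibits the slow, oscillating convergence described above; the advanced models of the thesis are designed precisely to smooth and accelerate this behavior.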

The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and the supply at the source node, can be constant or time-dependent. We review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm for the static network flow problem to account for the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two, in general conflicting, objective functions and introduce a time dependency of the cost caused by a travel time on each arc. These problems have several interesting practical applications but have not attracted much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms.
We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds of the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
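The time-expanded network construction behind such dynamic flow algorithms can be sketched as follows (a toy network with illustrative travel times, capacities and horizon): each node is copied once per time step, travel times become arcs between time layers, holdover arcs model waiting, and a static max-flow computation then yields the maximum dynamic flow.

```python
from collections import deque, defaultdict

def max_flow(graph, capacity, s, t):
    """Edmonds-Karp maximum flow on an adjacency-list graph with a
    residual-capacity dictionary."""
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            v = queue.popleft()
            for w in graph[v]:
                if w not in parent and capacity[(v, w)] > 0:
                    parent[w] = v
                    queue.append(w)
        if t not in parent:
            return total
        # bottleneck on the augmenting path, then update residual capacities
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(capacity[e] for e in path)
        for (x, y) in path:
            capacity[(x, y)] -= aug
            capacity[(y, x)] += aug
        total += aug

# Tiny dynamic network: s --(travel time 1, capacity 2)--> a
#                       a --(travel time 1, capacity 1)--> t,  horizon T = 4.
T = 4
graph = defaultdict(list)
capacity = defaultdict(int)

def add_edge(u, v, c):
    graph[u].append(v)
    graph[v].append(u)           # residual arc, capacity stays 0 initially
    capacity[(u, v)] += c

for (u, v, tau, c) in [('s', 'a', 1, 2), ('a', 't', 1, 1)]:
    for theta in range(T - tau + 1):
        add_edge((u, theta), (v, theta + tau), c)
for v in ('s', 'a', 't'):        # holdover arcs: waiting at a node
    for theta in range(T):
        add_edge((v, theta), (v, theta + 1), 10**9)
for theta in range(T + 1):       # super sink collects all arrivals by time T
    add_edge(('t', theta), 'sink', 10**9)

value = max_flow(graph, capacity, ('s', 0), 'sink')
print(value)  # maximum dynamic flow: 3 units reach t by time 4
```

The time expansion makes the problem static but blows up its size with the horizon, which is why specialized algorithms, like those developed in the thesis, are preferable for large instances.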

Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape, caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role for the cell migration, it has to be included in such a model; this is done via diffusion-MRI data. We set up a multiscale model class accounting, among other things, for the integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
We start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use a parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, incorporating real diffusion tensor imaging data. To this aim, we present programmes for the data processing, taking the raw medical data and processing it into the form needed for the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
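A minimal 1D sketch of a macroscopic model of the above type is a Fisher-type reaction-diffusion equation (a toy equation without the convection term and without the DTI-based anisotropic diffusion of the thesis; all parameters are illustrative), integrated with an explicit scheme.

```python
import numpy as np

# Toy 1D analogue of the macroscopic tumour equation:
#   u_t = D u_xx + r u (1 - u)   (Fisher-type, illustrative only)
D, r = 0.01, 1.0
N, L = 200, 10.0
dx = L / N
dt = 0.4 * dx**2 / (2 * D)        # explicit stability restriction
x = np.linspace(0, L, N)
u = np.exp(-x**2)                 # initial tumour seed at the left boundary

for _ in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2        # homogeneous Neumann boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * lap + r * u * (1 - u))
```

The solution forms a travelling invasion front that saturates at the carrying capacity, the simplest caricature of the infiltrative spread the full anisotropic 3D model resolves along the fibre tracts.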

In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. Therefore, we discuss three well-known stock models, including
discrete dividend payments and evolve a model, which also takes early
announcement into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the discount factor used. Furthermore, we backtest our
results against the method based on European options and against a so-called
simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend paying stock. In the case of the logarithmic utility
function, we show that the optimal strategy is no longer constant but still
connected to the Merton strategy. Additionally, we solve a special optimal
consumption problem in which the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal
wealth problem.
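For reference, the classical Merton strategy for logarithmic utility, to which the optimal strategy is connected, invests a constant fraction \(\pi = (\mu - r)/\sigma^2\) of wealth in the stock. A minimal sketch of this benchmark quantity (names are illustrative):

```python
def merton_fraction(mu, r, sigma):
    """Classical Merton fraction for logarithmic utility in the
    standard Black-Scholes market without dividends: invest the
    constant fraction pi = (mu - r) / sigma**2 of wealth in the
    stock, the rest in the riskless asset."""
    return (mu - r) / sigma ** 2
```

With dividends, the thesis shows the optimal fraction deviates from this constant.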

This thesis deals with the characters of the normalizer and the centralizer of a Sylow torus. Every group G of Lie type is regarded as the fixed-point group of a simply connected simple algebraic group under a Frobenius map. For every Sylow torus S of the algebraic group it is shown that the irreducible characters of the centralizer of S in G extend to their inertia group in the normalizer of S. This question arises from the study of height zero characters of finite reductive groups of Lie type in connection with the McKay conjecture. Recent results of Isaacs, Malle and Navarro reduce this conjecture to a property of simple groups, which they call being good for a prime. For groups of Lie type, the above result, together with a recent paper of Malle, establishes several of the properties that are important and necessary there. Using the Steinberg presentation, more precise statements about the structure of the centralizer and the normalizer of a Sylow torus are proved, above all for the classical groups. An important role is played by the extended Weyl group introduced by Tits, which has strong connections to braid groups. The result is proved in numerous individual case considerations, which use inheritance rules for extendability properties established in this thesis.

The present work was motivated by the Feynman-Kac formulas presented in A.N. Borodin (2000) [Versions of the Feynman-Kac Formula. Journal of Mathematical Sciences, 99(2):1044-1052, 2000] and in B. Simon (2000) [A Feynman-Kac Formula for Unbounded Semigroups. Canadian Math. Soc. Conf. Proc., 28:317-321, 2000]. It deals with the problem of extending the range of validity of the Feynman-Kac formula with respect to the conditions on the potentials and on the initial condition of the associated partial differential equation. It is known that the Feynman-Kac formula holds for bounded potentials. Moreover, it also holds for initial conditions lying in the space \(C_{0}(\mathbb{R}^{n})\) or in the space \(C_{c}^{2}(\mathbb{R}^{n})\). The representation of the Feynman-Kac formula for an initial condition in \(C_{c}^{2}(\mathbb{R}^{n})\) yields the solution of the partial differential equation. We can also regard it as a strongly continuous semigroup on the space \(C_{0}(\mathbb{R}^{n})\). These two different representations are equivalent. In this work we first show that the Feynman-Kac formula also holds for unbounded potentials \(V\) with \(|V(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(\varepsilon > 0\), with \(C_{\varepsilon} > 0\) and \(x \in \mathbb{R}^{n}\). Moreover, we show that it holds for all initial conditions \(f\) with \(x \mapsto e^{-\varepsilon |x|^{2}} f(x) \in H^{2,2}(\mathbb{R}^{n})\). The proof is probabilistic and uses no spectral theory. The spectral-theoretic approach, which gives a representation of the operator \(e^{-tH}\) with \(H = -\frac{1}{2} \Delta + V\), was also extended by B. Simon (2000) to the above class of potentials.
In addition, we admit potentials of the form \(V = V_{1} + V_{2}\), where \(V_{1} \in L^{2}(\mathbb{R}^{3})\) and for every \(\varepsilon > 0\) there exists \(C_{\varepsilon} > 0\) such that \(|V_{2}(x)| \leq \varepsilon \|x\|^{2} + C_{\varepsilon}\) for all \(x \in \mathbb{R}^{3}\). In contrast to the classical situation, \(e^{-tH}\) is now an unbounded operator. Finally, this work also investigates the connection between the Feynman-Kac-Itô formula, the Feynman-Kac formula and the Kolmogorov backward equation.

In the last few years a lot of work has been done on Brownian motion with point interaction(s) in one and higher dimensions. Roughly speaking, a Brownian motion with point interaction is nothing else than a Brownian motion whose generator is perturbed by a measure supported in just one point.
The purpose of the present work is to introduce curve interactions of the two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We will understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely often continuously differentiable functions with compact support in \(\mathbb{R}^{2}\) which are constantly 0 on the closed curve. We will give a full description of all these self-adjoint extensions.
In the second chapter we will prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is just the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
We will use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes which behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case, since there we cannot move while the Brownian motion is in the point.
By establishing an approximation of a curve interaction by operators of the form Laplacian \(+V_{n}\) with "nice" potentials \(V_{n}\) we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.

In traditional portfolio optimization under the threat of a crash, the investment horizon or time to maturity is neglected. In developing so-called crash hedging strategies (portfolio strategies which make an investor indifferent to the occurrence of an uncertain (downward) jump of the price of the risky asset), the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of non-linear differential equations which themselves are consequences of an equilibrium strategy. Here, the situation of changing market coefficients after a possible crash is considered for the case of logarithmic utility as well as for general utility functions. A benefit-cost analysis of the crash hedging strategy is carried out, as well as a comparison of the crash hedging strategy with the optimal portfolio strategies given in traditional crash models. Moreover, it is shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is to model crash hedging strategies in situations where both the number and the height of the crashes are uncertain but bounded. Taking into account the additional information of the probability of a possible crash leads to the development of the q-quantile crash hedging strategy.

The topic of this thesis is the coupling of an atomistic and a coarse scale region in molecular dynamics simulations, with a focus on the reflection of waves at the interface between the two scales and on the velocity of waves in the coarse scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine scale waves at the interface, while the bridging scales method yields an approximated system that is not energy conserving. In both models, the velocity of waves in the coarse scale region is not correct. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with choosing appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations for fine and coarse scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian, the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from coarse to atomistic scale in such a way that the fine scale degrees of freedom in the coarse scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse scale equations and the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse scale region.
We give an alternative derivation of the new coupling via the Mori-Zwanzig projection operator formalism, and explain how the method can be extended to non-zero temperature simulations. For the comparison of the results of the approximated system with those of the fully atomistic system, we use a local stress tensor and the energy in the atomistic region. Examples for the numerical solution of the approximated system for harmonic potentials are given in one and two dimensions.
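The dispersion relation used for the matching is, for a 1D harmonic chain, the standard \(\omega(k) = 2\sqrt{K/m}\,|\sin(ka/2)|\). A small sketch of this comparison quantity (parameter names and defaults are illustrative, not taken from the thesis):

```python
import numpy as np

def dispersion_atomistic(k, K=1.0, m=1.0, a=1.0):
    """Dispersion relation of a 1D harmonic chain with spring
    constant K, mass m and lattice spacing a:
        omega(k) = 2 * sqrt(K/m) * |sin(k*a/2)|.
    For small k this approaches the linear acoustic branch
    omega ~ c*k with sound speed c = a*sqrt(K/m)."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))
```

Comparing this curve with the dispersion of the coarse-grained equations at long wavelengths is the criterion described above for choosing the interpolation weights.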

This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they admit minimization by efficient algorithms,
and many optimization problems can be formulated as the
minimization of a convex objective function, extending far beyond
the classical image restoration problems of denoising, deblurring and inpainting.
\(\hspace{1mm}\)
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
\(\hspace{1mm}\)
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
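For the prototypical case \(\Psi = |\cdot|\), both objects have closed forms: the proximal mapping is soft-thresholding and the Moreau-Yosida envelope is the Huber function. A minimal sketch of this standard scalar case (not code from the thesis):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal mapping of lam*|.|: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_l1(x, lam):
    """Moreau-Yosida envelope of |.| at parameter lam:
       inf_y { |y| + (x - y)**2 / (2*lam) },
    evaluated via the proximal point; this is the Huber function.
    It approximates |.| from below and keeps the minimum value 0
    and the minimizer 0."""
    p = prox_l1(x, lam)
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)
```

Note how the envelope is quadratic near the kink and linear far from it, illustrating why a gradient step on the envelope coincides with the proximal step on the original function.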
\(\hspace{1mm}\)
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which includes a sparsity regularizer in the model.
\(\hspace{1mm}\)
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence
to solve a hyperspectral unmixing model in the case
where no prior knowledge of pure spectra is given.
\(\hspace{1mm}\)
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds of nonpositive curvature,
and then for functions defined on Hadamard spaces.
\(\hspace{1mm}\)
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
\(\hspace{1mm}\)
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, both of which have been used in the literature
and whose equivalence was not previously known.

In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst's opinion, as proposed in [CLMZ06]. The best estimate given these observations is the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
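To illustrate the filtering step, a discrete-time analogue of the Kalman-Bucy estimate of a constant Gaussian drift from observed returns can be sketched as follows (a simplification of the thesis setting: one stock, constant drift, no analyst views; all names are our own):

```python
def kalman_drift_estimate(returns, dt, sigma, m0, v0):
    """Discrete-time analogue of the Kalman-Bucy filter for a
    constant, normally distributed drift mu ~ N(m0, v0) observed
    through returns r_k = mu*dt + sigma*sqrt(dt)*noise.
    Returns the posterior mean and variance of mu."""
    m, v = m0, v0
    for r in returns:
        obs_var = sigma ** 2 / dt   # variance of r/dt as an estimate of mu
        k = v / (v + obs_var)       # Kalman gain
        m = m + k * (r / dt - m)    # update mean estimate
        v = (1 - k) * v             # update variance
    return m, v
```

In the continuous-time limit this conjugate Gaussian update becomes the Kalman-Bucy filter used in the thesis; the analyst's opinion enters there as an additional observation channel.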
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real-world market than the setting we used to derive our theoretical results. We still obtain compelling results indicating that our optimal strategies can in general outperform any benchmark in a real market.

In this thesis we show that the theory of algebraic correspondences introduced by Deuring in the 1930s can be applied to construct non-trivial homomorphisms between the Jacobi groups of hyperelliptic function fields. Concretely, we deduce algorithms to add and multiply correspondences which perform in reasonable time if the degrees of the associated divisors of the double field are small. Moreover, we show how to compute the differential matrices associated to prime divisors of the double field for arbitrary genus. These matrices give a representation of the homomorphisms or endomorphisms in the additive group (ring) of matrices, which is even faithful if the ground field has characteristic zero. As first examples of non-trivial correspondences we investigate multiplication-by-\(m\) endomorphisms. Afterwards we use factorisations of certain bivariate polynomials to construct prime divisors of the double field that are not equivalent to 0 in a coarser sense. Applying the theory of Deuring, these divisors yield homomorphisms between the Jacobi groups of special classes of hyperelliptic function fields. Finally, we generalise the Richelot isogeny to higher genus and in this way derive a class of hyperelliptic function fields, given in terms of their defining polynomials, which admit non-trivial homomorphisms. These include homomorphisms between the Jacobi groups of hyperelliptic curves of different as well as of equal genus. In addition we provide an explicit method to construct genus 2 function fields whose endomorphism ring contains a multiplication by \(\sqrt{2}\), with the help of the Cholesky decomposition of a certain matrix.

Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type, which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called a generalized Appell system, is applicable to the Mittag-Leffler measures instead of Wick ordered polynomials. With the help of an Appell system, we introduce a test function space and a distribution space. Furthermore we give characterizations of the distribution space and we characterize the weakly integrable functions and the convergent sequences within the distribution space. As an application we construct Donsker's delta in a non-Gaussian setting.
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.
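The measures are named after the Mittag-Leffler function \(E_\beta(z) = \sum_{n \ge 0} z^n/\Gamma(\beta n + 1)\), which generalizes the exponential (recovered at \(\beta = 1\)). A naive truncated-series evaluator for moderate arguments (illustrative only; not a production-quality evaluator and not code from the thesis):

```python
import math

def mittag_leffler(beta, z, n_terms=50):
    """One-parameter Mittag-Leffler function
        E_beta(z) = sum_{n >= 0} z**n / Gamma(beta*n + 1),
    evaluated by truncating the power series. Adequate for
    moderate |z|; the series converges for all z but the naive
    sum loses accuracy for large |z|."""
    return sum(z ** n / math.gamma(beta * n + 1) for n in range(n_terms))
```

Sanity checks: \(E_1(z) = e^z\) and \(E_2(-z^2) = \cos z\).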

Competing Neural Networks as Models for Non Stationary Financial Time Series -Changepoint Analysis-
(2005)

The problem of structural changes (variations) plays a central role in many scientific fields. One of the most current debates is about climatic change, in which politicians, environmentalists, scientists, etc. are involved, and almost everyone is concerned with its consequences. However, in this thesis we will not move in the latter direction, i.e. the study of climatic changes. Instead, we consider models for analyzing changes in the dynamics of observed time series, assuming these changes are driven by a non-observable stochastic process. To this end, we consider a first-order stationary Markov chain as hidden process and define the Generalized Mixture of AR-ARCH models (GMAR-ARCH), an extension of the classical ARCH model suited to modeling dynamical changes. For this model we provide sufficient conditions that ensure geometric ergodicity. Further, we define a conditional likelihood given the hidden process and in turn a pseudo conditional likelihood. For the pseudo conditional likelihood we assume that at each time instant the autoregressive and volatility functions can be suitably approximated by given feedforward networks. Under this setting the consistency of the parameter estimates is derived, and versions of the well-known Expectation Maximization algorithm and the Viterbi algorithm are designed to solve the problem numerically. Moreover, taking the volatility functions to be constants, we establish the consistency of the estimates of the autoregressive functions for some parametric classes of functions in general and for some classes of single-layer feedforward networks in particular. Besides this hidden-Markov-driven model, we define as an alternative a weighted least squares approach for estimating the time of change and the autoregressive functions.
For the latter formulation, we consider a mixture of independent nonlinear autoregressive processes and assume once more that the autoregressive functions can be approximated by given single-layer feedforward networks. We derive the consistency and asymptotic normality of the parameter estimates. Further, we prove the convergence of backpropagation for this setting under some regularity assumptions. Last but not least, we consider a mixture of nonlinear autoregressive processes with only one abrupt unknown changepoint and design a statistical test that can validate such changes.
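The Viterbi step for recovering the hidden regime path can be sketched in its generic textbook form (a log-space implementation, not the thesis code; the observation log-likelihoods would come from the fitted GMAR-ARCH networks):

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Viterbi algorithm: most likely hidden state path.
    obs_loglik[t, s] = log-likelihood of observation t in state s,
    log_trans[i, j]  = log P(next state j | current state i),
    log_init[s]      = log P(initial state s)."""
    T, S = obs_loglik.shape
    delta = log_init + obs_loglik[0]        # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans             # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + obs_loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):          # backtrack
        path[t] = back[t + 1, path[t + 1]]
    return path
```

For changepoint analysis, the decoded path directly marks the times at which the hidden regime switches.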

Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows us to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements, we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.

In this thesis, we combine Groebner bases with SAT solvers in different manners.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses,
and combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact input for the Groebner basis computations, we can significantly
reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition,
the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal is inefficient when we want to eliminate most of the variables from a big system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis to the projection.
We also optimize the Buchberger-Moeller Algorithm for lexicographical ordering and compare it with Brickenstein's interpolation algorithm.
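The projection step can be mimicked by brute force on small examples (an illustrative toy, not the all-solution SAT machinery used in the thesis):

```python
from itertools import product

def project_solutions(polys, n_vars, keep):
    """Brute-force analogue of the all-solution projection step:
    enumerate all 0/1 points, keep those on which every Boolean
    polynomial (given as a function of a 0/1 tuple, evaluated
    mod 2) vanishes, and project them onto the variables listed
    in `keep`. Feasible only for small n_vars."""
    sols = set()
    for point in product((0, 1), repeat=n_vars):
        if all(p(point) % 2 == 0 for p in polys):
            sols.add(tuple(point[i] for i in keep))
    return sols
```

In the thesis, this projected solution set is then interpolated, e.g. by the Buchberger-Moeller algorithm, to obtain the reduced Groebner basis of the elimination ideal.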
Finally, we combine Groebner bases and abstraction techniques for the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system already is a Groebner basis w.r.t. the lexicographical monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify the global property of a multiplier and a FIR filter using the computer algebra system Singular. The result shows that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.

Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems
\begin{align*}
&{}(P_{1,\tau}) \qquad \mathop{\rm argmin}_{x \in \mathbb{R}^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\}, \\
&{}(P_{2,\lambda}) \qquad \mathop{\rm argmin}_{x \in \mathbb{R}^n} \{ \Phi(x) + \lambda \Psi(x) \}, \quad \lambda > 0,
\end{align*}
with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: first, to study the solution sets \({\rm SOL}(P_{1,\tau})\) and \({\rm SOL}(P_{2,\lambda})\) of the minimization problems; second, to ensure that the minimization problems have solutions. This thesis provides contributions to both tasks. Regarding the first task, for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the set-valued curves
\begin{align*}
\tau &\mapsto {\rm SOL}(P_{1,\tau}), \quad \tau \in (0,c), \\
\lambda &\mapsto {\rm SOL}(P_{2,\lambda}), \quad \lambda \in (0,d)
\end{align*}
are the same, up to an order-reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover, we show that the solution sets change all the time while \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
In the presence of lower semicontinuity, the second task is accomplished if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity.
Dropping any lower semicontinuity assumption, we also prove a theorem on the coercivity of a sum of functions.

The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, the ellipsoid or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated to Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated to spherical boundaries. Again, a tree algorithm which uses a numerical integration rule for bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.

In this thesis we propose an efficient method to compute the automorphism group of an arbitrary hyperelliptic function field over a given constant field of odd characteristic, as well as over its algebraic extensions. Besides theoretical applications, knowing the automorphism group is also useful in cryptography: The Jacobians of hyperelliptic curves have been suggested by Koblitz as groups for cryptographic purposes, because the discrete logarithm is believed to be hard in this kind of group. In order to obtain "secure" Jacobians, it is necessary to prevent attacks like Pohlig/Hellman's and Duursma/Gaudry/Morain's. The latter is only feasible if the corresponding function field has an automorphism of large order. According to a theorem by Madan, automorphisms seem to allow the Pohlig/Hellman attack, too. Hence, the function field of a secure Jacobian will most likely have trivial automorphism group. In other words: Computing the automorphism group of a hyperelliptic function field promises to be a quick test for insecure Jacobians. Let us outline our algorithm for computing the automorphism group Aut(F/k) of a hyperelliptic function field F/k. It is well known that Aut(F/k) is finite. For each possible subgroup U of Aut(F/k), Rolf Brandt has given a normal form for F if k is algebraically closed. Hence our problem reduces to deciding whether a given hyperelliptic function field F=k(x,y), y^2=D_x has a defining equation of the form given by Brandt. This question can be answered using Theorem III.18: We have F=k(t,u), u^2=D_t iff x is a fraction of linear polynomials in t and y=pu, where the factor p is a rational function w.r.t. t which can be determined explicitly from the coefficients of x. This condition can be checked efficiently using Gröbner basis techniques. With additional effort, it is also possible to compute Aut(F/k) if k is not algebraically closed.
Investigating a huge number of examples, one gets the impression that the above motivation of obtaining a quick test for insecure Jacobians is partially fulfilled: the computation of automorphism groups is quite fast using the suggested algorithm. Furthermore, fields with nontrivial automorphism groups seem to have insecure Jacobians. Only fields of small characteristic seem to have a reasonable chance of having nontrivial automorphisms. Hence, from a cryptographic point of view, computing Aut(F/k) seems to make sense whenever k has small characteristic.
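The core criterion, that a substitution given by a fraction of linear polynomials in t together with y = pu must preserve the defining equation, can be illustrated on a hypothetical example curve (my choice, not from the thesis). For y^2 = x^5 - x over characteristic 0, the substitution x -> -x, y -> i*y is an automorphism precisely because D(-x) = -D(x); the sketch below checks this with plain coefficient arithmetic, whereas the thesis handles the general fractional-linear case via Gröbner bases:

```python
# D(x) = x^5 - x, stored sparsely as {degree: coefficient}
# (hypothetical example curve; the thesis treats general D)
D = {5: 1, 1: -1}

def substitute_neg(poly):
    """Coefficients of D(-x): each degree-k term picks up a sign (-1)^k."""
    return {k: ((-1) ** k) * c for k, c in poly.items()}

# The map (x, y) -> (-x, i*y) preserves y^2 = D(x) iff D(-x) = -D(x),
# since (i*y)^2 = -y^2.  Here p = i is the constant factor of Theorem III.18.
is_automorphism = substitute_neg(D) == {k: -c for k, c in D.items()}
print(is_automorphism)  # True
```

For a general candidate x = (at + b)/(ct + d) the analogous identity becomes a polynomial system in a, b, c, d and the coefficients of p, which is exactly where the Gröbner basis techniques mentioned above come in.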

The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral regions, this is impossible: depending on the spectroscopic technique, the way the data are acquired differs, as do the characteristics that are to be detected. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, among which are particularly the security sector and the pharmaceutical industry.
Since in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the necessary influence of the user should be minimized on the one hand, while on the other hand the adaptation to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods will be analyzed and supplemented where necessary; improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, a feature reduction that represents the chemical content in about 20 channels instead of the initial hundreds will be presented. Finally, the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
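The three-stage course of action, preprocessing, feature reduction to a few channels, and hierarchical clustering, can be sketched on synthetic data. The pixel counts, noise level, and use of PCA below are my illustrative assumptions, not the thesis' actual preprocessing or feature-reduction methods:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic "hyperspectral" data: 2 materials, 100 pixels each, 300 channels
base_a = np.sin(np.linspace(0, 4 * np.pi, 300))
base_b = np.cos(np.linspace(0, 4 * np.pi, 300))
spectra = np.vstack([base_a + 0.1 * rng.standard_normal(300) for _ in range(100)]
                    + [base_b + 0.1 * rng.standard_normal(300) for _ in range(100)])

# 1) Preprocessing: make spectra comparable (zero mean, unit norm per pixel)
spectra -= spectra.mean(axis=1, keepdims=True)
spectra /= np.linalg.norm(spectra, axis=1, keepdims=True)

# 2) Feature reduction: project the hundreds of channels onto ~20 leading
#    principal components (stand-in for the thesis' chemistry-aware features)
_, _, vt = np.linalg.svd(spectra, full_matrices=False)
features = spectra @ vt[:20].T

# 3) Segmentation: hierarchical (Ward) clustering of the reduced features
labels = fcluster(linkage(features, method="ward"), t=2, criterion="maxclust")
print(len(set(labels)))  # 2 segments recovered
```

On real THz-TDS cubes, the preprocessing step would additionally have to handle acquisition artifacts before any distance-based clustering is meaningful.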

In change-point analysis the point of interest is to decide whether the observations follow one model
or whether there is at least one time point at which the model has changed. This results in two
subfields, the testing of a change and the estimation of the time of change. This thesis considers
both parts, but with the restriction of testing and estimating for at most one change-point.
A well known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test, a test statistic with an asymptotic Gumbel distribution was
derived for this model. Since it is well known that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. These tests
perform better. We focus on this class of test statistics.
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. To this end, we consider the multivariate mean-change model and focus on the effects
of the weight function. In the case of change-point estimators, we can distinguish between
the assumption of a fixed size of change (fixed alternative) and the assumption that the size
of the change converging to 0 (local alternative). In particular, the fixed case is rarely analysed
in the literature. We show how to pass from the proof for the fixed alternative to the
proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate
observations.
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a
change in the mean for independent observations but relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the one on tests and the one on the change-point estimator, we end by giving
regularity conditions on the model as well as the parameter estimator.
Finally, a simulation study for the case of the neural network based test and estimator is
given. We discuss the behaviour under correct specification and under misspecification, and apply the
neural network based test and estimator to two data sets.
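The classical starting point described above, a weighted likelihood-ratio-type statistic for one change in the mean, can be made concrete with a small sketch. This is the standard weighted CUSUM form, not the thesis' robust randomized-weight or neural-network variants, and the variance estimator is a deliberately crude illustration:

```python
import numpy as np

def cusum_change_point(x):
    """Weighted CUSUM test for a single mean change (sketch).

    Statistic: max_k |S_k - (k/n) S_n| / (sigma * sqrt(n) * w(k/n))
    with weight w(s) = sqrt(s(1-s)); the argmax is the change-point estimate.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    k = np.arange(1, n)                        # candidate change points 1..n-1
    s = np.cumsum(x)[:-1]                      # partial sums S_1..S_{n-1}
    drift = s - (k / n) * x.sum()
    weight = np.sqrt((k / n) * (1 - k / n))    # the weight function
    sigma = x.std(ddof=1)                      # crude scale estimate (inflated
                                               # under the alternative)
    stats = np.abs(drift) / (weight * sigma * np.sqrt(n))
    khat = int(np.argmax(stats)) + 1
    return stats.max(), khat

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(2, 1, 150)])
tmax, khat = cusum_change_point(x)
print(khat)  # estimated change point, close to the true value 150
```

The weight function sharpens the statistic near the boundaries; without it, changes occurring early or late in the sample are much harder to detect, which is exactly the effect studied in the first part of the thesis.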

The lattice Boltzmann method (LBM) is a numerical solver for the Navier-Stokes equations, based on an underlying molecular dynamics model. Recently, it has been extended towards the simulation of complex fluids. We use the asymptotic expansion technique to investigate the standard scheme, the initialization problem and possible developments towards moving-boundary and fluid-structure interaction problems. At the same time, it will be shown how the mathematical analysis can be used to understand and improve the algorithm. First of all, we elaborate the tool "asymptotic analysis", proposing a general formulation of the technique and explaining the methods and the strategy we use for the investigation. A first standard application to the LBM is described, which leads to the approximation of the Navier-Stokes solution starting from the lattice Boltzmann equation. Next, we extend the analysis to investigate the origin and dynamics of initial layers. A class of initialization algorithms to generate accurate initial values within the LB framework is described in detail. Starting from existing routines, we are able to improve the schemes in terms of efficiency and accuracy. Then we study the features of a simple moving-boundary LBM. In particular, we concentrate on the initialization of new fluid nodes created by the variations of the computational fluid domain. An overview of existing possible choices is presented. Performing a careful analysis of the problem, we propose a modified algorithm which produces satisfactory results. Finally, to set up an LBM for fluid-structure interaction, efficient routines to evaluate forces are required. We describe the momentum exchange algorithm (MEA). Precise accuracy estimates are derived, and the analysis leads to the construction of an improved method to evaluate the interface stresses. In conclusion, we test the resulting code and validate the results of the analysis on several simple benchmarks.
From the theoretical point of view, we have developed in this thesis a general formulation of the asymptotic expansion, which is expected to offer a more flexible tool in the investigation of numerical methods. The main practical contribution of this work is the detailed analysis of the numerical method. It allows us to understand and improve the algorithms and to construct new routines, which can be considered as starting points for future research.
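The building block that the asymptotic analysis above expands is the local collision step. As a minimal sketch (standard D2Q9 BGK collision, my illustration rather than the thesis' moving-boundary or MEA code), the example below performs one relaxation toward the local equilibrium and checks the two invariants on which the Navier-Stokes limit rests, conservation of mass and momentum:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (c_s^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Discrete Maxwellian, second order in the velocity u."""
    cu = c @ u
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def collide(f, omega=1.0):
    """BGK collision: relax the populations toward local equilibrium."""
    rho = f.sum()                 # local density (zeroth moment)
    u = (f @ c) / rho             # local velocity (first moment / density)
    return f + omega * (equilibrium(rho, u) - f)

rng = np.random.default_rng(2)
f = w + 0.01 * rng.random(9)      # slightly perturbed populations at one node
g = collide(f)
# Collision conserves mass and momentum exactly:
print(np.isclose(f.sum(), g.sum()), np.allclose(f @ c, g @ c))
```

In a full scheme this collision alternates with a streaming step that shifts each population along its lattice velocity; the asymptotic expansion then relates the moments of f to the Navier-Stokes fields.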

In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm-value) approach and the reduced-form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process at a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. Therefore, on the one hand it is possible to obtain closed-form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed-form solutions for the firm's corporate securities such as equity value, debt value and total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed-form solutions for corporate securities. As a result, we have chosen a jump-diffusion process with double-exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced-form models, we have assumed that the default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the ones in Schönbucher (2005).
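The structural mechanism, default as the first hitting time of a jump-diffusion firm value at a boundary, can be sketched by Monte Carlo. This is a generic simulation of a double-exponential (Kou-type) jump-diffusion, not the thesis' closed-form results, and every parameter value is an illustrative assumption:

```python
import numpy as np

def default_prob_mc(v0=100.0, barrier=60.0, mu=0.05, sigma=0.2,
                    lam=0.5, p_up=0.4, eta_up=10.0, eta_dn=5.0,
                    T=5.0, n_steps=500, n_paths=20000, seed=3):
    """First-passage default probability for a jump-diffusion firm value
    with double-exponential jump heights (illustrative sketch; assumes at
    most one jump per time step, valid for small dt)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logv = np.full(n_paths, np.log(v0))
    alive = np.ones(n_paths, dtype=bool)   # paths not yet defaulted
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        jump_size = np.zeros(n_paths)
        idx = rng.poisson(lam * dt, n_paths) > 0
        # Double-exponential jump: upward with prob p_up (rate eta_up),
        # otherwise downward (rate eta_dn)
        up = rng.random(idx.sum()) < p_up
        jump_size[idx] = np.where(up,
                                  rng.exponential(1 / eta_up, idx.sum()),
                                  -rng.exponential(1 / eta_dn, idx.sum()))
        logv += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z + jump_size
        alive &= logv > np.log(barrier)    # once defaulted, a path stays dead
    return 1.0 - alive.mean()

p = default_prob_mc()
print(p)  # probability of hitting the default barrier before T
```

The advantage of the double-exponential jump distribution exploited in the thesis is precisely that such first-passage quantities admit closed-form expressions, so a simulation like this serves only as a sanity check.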

The Wilkie model is a stochastic asset model, developed by A.D. Wilkie in 1984 with the purpose of exploring the behaviour of investment factors of insurers within the United Kingdom. Even so, there has thus far been no analysis that studies the Wilkie model in a portfolio-optimization framework. Originally, the Wilkie model considers a discrete time horizon, and we apply the concept of the Wilkie model to develop a suitable ARIMA model for Malaysian data by using the Box-Jenkins methodology. We obtained the estimated parameters for each sub-model within the Wilkie model that suits the case of Malaysia, which permits us to analyse the result from a statistical and economic point of view. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. The continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution to the Hamilton-Jacobi equation and finally the verification theorem. In finding the optimal portfolio, we obtained an explicit solution of the Hamilton-Jacobi equation and verified the solution via the verification theorem. For the simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
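The stochastic-control pipeline described above (maximization problem, Hamilton-Jacobi equation, explicit solution, verification) can be illustrated on its simplest classical instance, a Merton-type bond/stock problem with power utility, where the Hamilton-Jacobi equation yields the constant optimal stock fraction pi* = (mu - r) / (gamma * sigma^2). This Black-Scholes analogue and all parameter values are my assumptions; the thesis solves the analogous problem for the Wilkie-driven wealth equation:

```python
import numpy as np

# Illustrative market parameters (not estimates from the thesis)
mu, r, sigma, gamma = 0.08, 0.03, 0.2, 2.0
pi_star = (mu - r) / (gamma * sigma**2)   # HJB-optimal constant stock fraction

def terminal_wealth(pi, T=10.0, n_steps=250, n_paths=20000, seed=4):
    """Simulate constant-mix wealth dW/W = (r + pi(mu-r)) dt + pi sigma dB."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logw = np.zeros(n_paths)
    drift = r + pi * (mu - r) - 0.5 * (pi * sigma) ** 2
    for _ in range(n_steps):
        logw += drift * dt + pi * sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(logw)

def expected_utility(pi):
    """Monte Carlo estimate of E[W_T^(1-gamma) / (1-gamma)] (CRRA utility)."""
    w = terminal_wealth(pi)
    return np.mean(w ** (1 - gamma) / (1 - gamma))

print(pi_star)  # 0.625
# Expected utility is highest near the HJB-optimal fraction:
print(expected_utility(pi_star) > expected_utility(0.0))
```

The verification theorem is what licenses reading the candidate pi* off the Hamilton-Jacobi equation as genuinely optimal; the simulation merely makes that optimality visible numerically.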