Fachbereich Mathematik
Simplified ODE models describing the blood flow rate are governed by the pressure gradient.
However, if the orientation of the blood flow in the human body is taken as the positive
direction, a negative pressure gradient forces the valve to shut, which stops the flow through
the valve; the flow rate is then zero, whereas the pressure is still governed by an ODE.
The presence of ODEs together with algebraic constraints and sudden changes in the system
characterization yields systems of switched differential-algebraic equations (swDAEs). The alternating
dynamics of the heart can be modelled well by means of swDAEs. Moreover, PDE models
have been developed to study pulse wave propagation in arteries and veins. Connecting
the heart with the vessels leads to a coupling of PDEs and swDAEs. This model motivates
the study of PDEs coupled with swDAEs, where the information exchange happens at the PDE
boundaries: the swDAE provides boundary conditions to the PDE and the PDE outputs serve
as inputs to the swDAE. Such coupled systems occur, e.g., when modelling power grids using
the telegrapher’s equations with switches, water flow networks with valves and district
heating networks with rapid consumption changes. Solutions of swDAEs might
include jumps, Dirac impulses and their derivatives of arbitrarily high order. As the outputs of
the swDAE serve as boundary conditions of the PDE, a rigorous solution framework for the PDE must
be developed in which jumps, Dirac impulses and their derivatives are allowed at the PDE boundaries
and in the PDE solutions. This is a wider solution class than solutions of small bounded
variation (BV) used, for instance, in works where nonlinear hyperbolic PDEs are coupled with
ODEs. Similarly, in related work the solutions of switched linear PDEs with source terms are
restricted to the class of BV. However, in the presence of Dirac impulses and their derivatives,
BV functions cannot handle coupled systems that include DAEs of index greater than one.
Therefore, hyperbolic PDEs coupled with swDAEs of index one are studied in the BV
setting, whereas the coupling with swDAEs whose index is greater than one is investigated in the
distributional sense. To this end, the 1D space of piecewise-smooth distributions is extended to a 2D
piecewise-smooth distributional solution framework. The 2D space of piecewise-smooth distributions
allows trace evaluations at the boundaries of the PDE. Moreover, a relationship between
solutions of the coupled system and switched delay DAEs is established. The coupling structure
in this thesis forms a rather general framework. In fact, any network in which PDEs
are represented by edges and (switched) DAEs by nodes is covered by this structure. Given
a network, each PDE can be defined on the same interval by rescaling the spatial domains,
which modifies the coefficient matrices by a constant. This leads to a single PDE whose
unknown stacks the unknowns of the individual PDEs and whose coefficient matrix is block
diagonal, as sketched below. Likewise, the swDAEs are reformulated by stacking their unknowns
and assembling their coefficient matrices into block diagonal form, so that all nodes of the
network are expressed as a single swDAE.
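As an illustration of this stacking (notation ours, for a network whose \(m\) edges carry linear hyperbolic PDEs): after rescaling every edge to the same interval, the systems \(\partial_t u_i + A_i \partial_x u_i = 0\), \(i = 1,\dots,m\), can be collected into
\[
\partial_t \begin{pmatrix} u_1 \\ \vdots \\ u_m \end{pmatrix}
+ \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_m \end{pmatrix}
\partial_x \begin{pmatrix} u_1 \\ \vdots \\ u_m \end{pmatrix} = 0,
\]
so that the whole network of edges is treated as one PDE with a block diagonal coefficient matrix; the swDAEs at the nodes are stacked analogously.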
The results are illustrated by numerical simulations of a power grid and of a simplified circulatory
system. The numerical results for the power grid display the evolution of jumps
and Dirac impulses caused by initial and boundary conditions as a result of instantaneous switches.
The analysis and numerical results for the simplified circulatory system, on the other hand, do
not entail Dirac impulses, since such an impulse would destroy the entire system. Yet
jumps in the flow rate can occur in the numerical results due to the opening and closing of the
valves, which agrees with clinical and physiological findings. With physiological parameters,
the numerical results obtained in this thesis for the simplified circulatory system agree well with
medical data and findings from the literature used for validation.
In this thesis we study a variant of the quadrature problem for stochastic differential equations (SDEs), namely the approximation of expectations \(\mathrm{E}(f(X))\), where \(X = (X(t))_{t \in [0,1]}\) is the solution of an SDE and \(f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R}\) is a functional, mapping each realization of \(X\) into the real numbers. The distinctive feature in this work is that we consider randomized (Monte Carlo) algorithms with random bits as their only source of randomness, whereas the algorithms commonly studied in the literature are allowed to sample from the uniform distribution on the unit interval, i.e., they do have access to random numbers from \([0,1]\).
By assumption, all further operations, e.g., arithmetic operations, evaluations of elementary functions, and oracle calls to evaluate \(f\), are considered within the real number model of computation, i.e., they are carried out exactly.
In the following, we provide a detailed description of the quadrature problem, namely we are interested in the approximation of
\begin{align*}
S(f) = \mathrm{E}(f(X))
\end{align*}
for \(X\) being the \(r\)-dimensional solution of an autonomous SDE of the form
\begin{align*}
\mathrm{d}X(t) = a(X(t)) \, \mathrm{d}t + b(X(t)) \, \mathrm{d}W(t), \quad t \in [0,1],
\end{align*}
with deterministic initial value
\begin{align*}
X(0) = x_0 \in \mathbb{R}^r,
\end{align*}
and driven by a \(d\)-dimensional standard Brownian motion \(W\). Furthermore, the drift coefficient \(a \colon \mathbb{R}^r \to \mathbb{R}^r\) and the diffusion coefficient \(b \colon \mathbb{R}^r \to \mathbb{R}^{r \times d}\) are assumed to be globally Lipschitz continuous.
For the function classes
\begin{align*}
F_{\infty} = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{\sup}\bigr\}
\end{align*}
and
\begin{align*}
F_p = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{L_p}\bigr\}, \quad 1 \leq p < \infty,
\end{align*}
we have established the following.
\(\textit{Theorem 1.}\)
There exists a random bit multilevel Monte Carlo (MLMC) algorithm \(M\) using
\[
L = L(\varepsilon,F) = \begin{cases}
\bigl\lceil \log_2(\varepsilon^{-2}) \bigr\rceil, &\text{if} \ F = F_p,\\
\bigl\lceil \log_2(\varepsilon^{-2}) + \log_2(\log_2(\varepsilon^{-1})) \bigr\rceil, &\text{if} \ F = F_\infty
\end{cases}
\]
and replication numbers
\[
N_\ell = N_\ell(\varepsilon,F) = \begin{cases}
\bigl\lceil (L+1) \cdot 2^{-\ell} \cdot \varepsilon^{-2} \bigr\rceil, & \text{if} \ F = F_p,\\
\bigl\lceil (L+1) \cdot 2^{-\ell} \cdot \max(\ell,1) \cdot \varepsilon^{-2} \bigr\rceil, & \text{if} \ F = F_\infty
\end{cases}
\]
for \(\ell = 0,\ldots,L\), and a positive constant \(c\) such that
\begin{align*}
\mathrm{error}(M,F) = \sup_{f \in F} \bigl(\mathrm{E}\bigl((S(f) - M(f))^2\bigr)\bigr)^{1/2} \leq c \cdot \varepsilon
\end{align*}
and
\begin{align*}
\mathrm{cost}(M,F) = \sup_{f \in F} \mathrm{E}(\mathrm{cost}(M,f)) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for every \(\varepsilon \in {]0,1/2[}\).
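The following minimal sketch (ours, not from the thesis) shows how the level \(L\) and the replication numbers \(N_\ell\) of Theorem 1 translate into a multilevel estimator; the random bit simulation of the Brownian increments and the Euler scheme, which are the actual subject of the thesis, are abstracted into a user-supplied level sampler.
\begin{verbatim}
import math

def mlmc_parameters(eps, f_class="Fp"):
    """Level L and replication numbers N_0, ..., N_L from Theorem 1.
    f_class is "Fp" for F_p (1 <= p < infinity) or "Finf" for F_infinity."""
    if f_class == "Fp":
        L = math.ceil(math.log2(eps**-2))
    else:
        L = math.ceil(math.log2(eps**-2) + math.log2(math.log2(eps**-1)))
    N = []
    for ell in range(L + 1):
        n = (L + 1) * 2**(-ell) * eps**-2
        if f_class == "Finf":
            n *= max(ell, 1)
        N.append(math.ceil(n))
    return L, N

def mlmc_estimate(eps, level_sampler, f_class="Fp"):
    """Multilevel estimator: sum over levels of the sample means of the level
    corrections f(X_ell) - f(X_{ell-1}) (and f(X_0) on level 0), where
    level_sampler(ell) returns one such correction sample."""
    L, N = mlmc_parameters(eps, f_class)
    return sum(sum(level_sampler(ell) for _ in range(N[ell])) / N[ell]
               for ell in range(L + 1))
\end{verbatim}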
Hence, in terms of the \(\varepsilon\)-complexity
\begin{align*}
\mathrm{comp}(\varepsilon,F) = \inf\bigl\{\mathrm{cost}(M,F) \colon M \ \text{is a random bit MC algorithm}, \mathrm{error}(M,F) \leq \varepsilon\bigr\}
\end{align*}
we have established the upper bound
\begin{align*}
\mathrm{comp}(\varepsilon,F) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for some positive constant \(c\). That is, we have shown the same weak asymptotic upper bound as in the case of random numbers from \([0,1]\). Hence, in this sense, random bits are almost as powerful as random numbers for our computational problem.
Moreover, we present numerical results for a non-analyzed adaptive random bit MLMC Euler algorithm, in the particular cases of the Brownian motion, the geometric Brownian motion, the Ornstein-Uhlenbeck SDE and the Cox-Ingersoll-Ross SDE. We also provide a numerical comparison to the corresponding adaptive random number MLMC Euler method.
A key challenge in the analysis of the algorithm in Theorem 1 is the approximation of probability distributions by means of random bits, a problem very closely related to the quantization problem, i.e., the optimal approximation of a given probability measure (on a separable Hilbert space) by a probability measure with finite support size.
Though we have shown that the random bit approximation of the standard normal distribution is 'harder' than the corresponding quantization problem (lower weak rate of convergence), we have been able to establish the same weak rate of convergence as for the corresponding quantization problem in the case of the distribution of a Brownian bridge on \(L_2([0,1])\), the distribution of the solution of a scalar SDE on \(L_2([0,1])\), and the distribution of a centered Gaussian random element in a separable Hilbert space.
Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method which is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the assets with the best Sharpe ratios to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters, and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new portfolio with equal weights, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework by DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where, in each cluster, the assets are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
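A minimal sketch of the cluster portfolio construction described above (ours; the thesis does not prescribe this particular clustering routine, so hierarchical clustering on a correlation distance is used here purely for illustration):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_portfolio(returns, n_clusters):
    """Equal-weight portfolio on the best-Sharpe-ratio asset of each cluster.
    returns: T x N array of asset returns (rows = periods, columns = assets)."""
    mu = returns.mean(axis=0)
    sigma = returns.std(axis=0, ddof=1)
    sharpe = mu / sigma                          # per-asset Sharpe ratio (zero risk-free rate)

    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, None))   # correlation distance
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")

    weights = np.zeros(returns.shape[1])
    clusters = np.unique(labels)
    for c in clusters:
        members = np.where(labels == c)[0]
        best = members[np.argmax(sharpe[members])]   # best Sharpe ratio within the cluster
        weights[best] = 1.0 / clusters.size          # equal weight on the representatives
    return weights
\end{verbatim}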
In a recent paper, G. Malle and G. Robinson proposed a modular analogue of Brauer's famous \( k(B) \)-conjecture. If \( B \) is a \( p \)-block of a finite group with defect group \( D \), then they conjecture that \( l(B) \leq p^r \), where \( r \) is the sectional \( p \)-rank of \( D \). Since this conjecture is relatively new, there is obviously still a lot of work to do. This thesis is concerned with proving their conjecture for the finite groups of exceptional Lie type.
On the complexity and approximability of optimization problems with Minimum Quantity Constraints
(2020)
During the last couple of years, there has been a variety of publications on the topic of
minimum quantity constraints. In general, a minimum quantity constraint is a lower bound
constraint on an entity of an optimization problem that only has to be fulfilled if the entity is
“used” in the respective solution. For example, if a minimum quantity \(q_e\) is defined on an
edge \(e\) of a flow network, the edge flow on \(e\) may either be \(0\) or at least \(q_e\) units of flow.
Minimum quantity constraints have already been applied to problem classes such as flow, bin
packing, assignment, scheduling and matching problems. A result that is common to all these
problem classes is that in the majority of cases problems with minimum quantity constraints
are NP-hard, even if the problem without minimum quantity constraints but with fixed lower
bounds can be solved in polynomial time. For instance, the maximum flow problem is known
to be solvable in polynomial time, but becomes NP-hard once minimum quantity constraints
are added.
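As an illustration of such a constraint (notation ours): if \(x_e\) denotes the flow on edge \(e\) with capacity \(u_e\), the minimum quantity constraint is the disjunction
\[
x_e \in \{0\} \cup [q_e, u_e],
\]
which, for mixed-integer programming approaches, can be linearized with a binary variable \(y_e \in \{0,1\}\) via \(q_e\, y_e \le x_e \le u_e\, y_e\). This either-or structure is what makes the flow problem NP-hard once such constraints are added, as noted above.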
In this thesis we consider flow, bin packing, scheduling and matching problems with minimum
quantity constraints. For each of these problem classes we provide a summary of the
definitions and results that exist to date. In addition, we define new problems by applying
minimum quantity constraints to the maximum-weight b-matching problem and to open
shop scheduling problems. We contribute results to each of the four problem classes: We
show NP-hardness for a variety of problems with minimum quantity constraints that have
not been considered so far. If possible, we restrict NP-hard problems to special cases that
can be solved in polynomial time. In addition, we consider approximability of the problems:
For most problems it turns out that, unless P=NP, there cannot be any polynomial-time
approximation algorithm. Hence, we consider bicriteria approximation algorithms that allow
the constraints of the problem to be violated up to a certain degree. This approach proves to
be very helpful and we provide a polynomial-time bicriteria approximation algorithm for at
least one problem of each of the four problem classes we consider. For problems defined on
graphs, the class of series parallel graphs supports this approach very well.
We end the thesis with a summary of the results and several suggestions for future research
on minimum quantity constraints.
This thesis introduces a novel deformation method for computational meshes. It is based on the numerical path following for the equations of nonlinear elasticity. By employing a logarithmic variation of the neo-Hookean hyperelastic material law, the method guarantees that the mesh elements do not become inverted and remain well-shaped. In order to demonstrate the performance of the method, this thesis addresses two areas of active research in isogeometric analysis: volumetric domain parametrization and fluid-structure interaction. The former concerns itself with the construction of a parametrization for a given computational domain provided only a parametrization of the domain’s boundary. The proposed mesh deformation method gives rise to a novel solution approach to this problem. Within it, the domain parametrization is constructed as a deformed configuration of a simplified domain. In order to obtain the simplified domain, the boundary of the target domain is projected in the \(L^2\)-sense onto a coarse NURBS basis. Then, the Coons patch is applied to parametrize the simplified domain. As a range of 2D and 3D examples demonstrates, the mesh deformation approach is able to produce high-quality parametrizations for complex domains where many state-of-the-art methods either fail or become unstable and inefficient. In the context of fluid-structure interaction, the proposed mesh deformation method is applied to robustly update the computational mesh in situations when the fluid domain undergoes large deformations. In comparison to the state-of-the-art mesh update methods, it is able to handle larger deformations and does not result in an eventual reduction of mesh quality. The performance of the method is demonstrated on a classic 2D fluid-structure interaction benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
The famous Mather-Yau theorem in singularity theory yields a bijection between the isomorphism classes of germs of isolated hypersurface singularities and their respective Tjurina algebras.
This result has been generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, it is the objective of this thesis to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result by Gaffney-Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, using the theory of graded analytic algebras by G. Scheja and H. Wiebe. We use the aforementioned theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin. We deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity play an important role in the investigation of properties of the aforementioned singularities. Both notions have been introduced by K. Saito in 1980. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm which decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.
In this thesis, we present the basic concepts of isogeometric analysis (IGA) and consider Poisson's equation as model problem. Since in IGA the physical domain is parametrized via a geometry function that maps a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, which we call scaled boundary isogeometric analysis (SB-IGA). SB-IGA derives the weak form of partial differential equations in a different way from standard IGA. For the discretization, i.e. the projection
onto a finite-dimensional space, we choose in both cases Galerkin's method. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between standard IGA, when it makes use of an SB-parametrization,
and the SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
Fibre reinforced polymers (FRPs) are among the newest and most modern materials. In FRPs, a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, compared to its weight, very strong.
The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs, we need to know the fibre length distribution in the resulting material. The classic method for this is ashing, where a sample of the material is burned and thereby destroyed. Instead, we look at CT images of the material. In the first part we assume that we have a full fibre segmentation, so that a cylinder can be fitted to each individual fibre. In this setting we identify two problems: sampling bias and censoring.
Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To solve this problem we use a reweighted fibre length distribution, where the weight depends on the sampling rule used.
For the censoring we use an EM algorithm, which yields a maximum likelihood estimator in cases of missing or censored data.
For this setting we deduce conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further give conditions under which, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.
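To fix ideas, a minimal sketch of the E- and M-steps for censored length data (ours; purely for illustration we assume exponentially distributed lengths and known censoring thresholds, which is not an assumption made in the thesis):
\begin{verbatim}
import numpy as np

def em_exponential_censored(obs, cens, n_iter=100):
    """EM estimate of the rate of an exponential length distribution when some
    lengths are only known to exceed a censoring threshold.
    obs: fully observed lengths; cens: censoring thresholds."""
    obs, cens = np.asarray(obs, float), np.asarray(cens, float)
    lam = 1.0 / obs.mean()                     # crude initial value
    n = obs.size + cens.size
    for _ in range(n_iter):
        imputed = cens + 1.0 / lam             # E-step: E[X | X > c] = c + 1/lam
        lam = n / (obs.sum() + imputed.sum())  # M-step: rate MLE for completed data
    return lam
\end{verbatim}
For this toy model the iteration converges to the closed-form censored-data MLE; the thesis establishes convergence and asymptotic normality statements for the general setting.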
Since obtaining a full fibre segmentation is hard, we further look at the fibre endpoint process. The fibre endpoint process can be modelled as a Neyman-Scott cluster process. Using this model we derive a formula for the reduced second moment measure of this process and use it to obtain an estimator for the fibre length distribution.
We investigate all estimators in simulation studies and especially examine their performance in the case of non-overlapping fibres.
Operator semigroups and infinite dimensional analysis applied to problems from mathematical physics
(2020)
In this dissertation we treat several problems from mathematical physics via methods from functional analysis and probability theory and in particular operator semigroups. The thesis consists thematically of two parts.
In the first part we consider so-called generalized stochastic Hamiltonian systems. These are generalizations of Langevin dynamics which describe interacting particles moving in a surrounding medium. From a mathematical point of view these systems are stochastic differential equations with a degenerate diffusion coefficient. We construct weak solutions of these equations via the corresponding martingale problem. For this purpose, we prove essential m-dissipativity of the degenerate and non-sectorial Itô differential operator. Further, we apply results from analytic and probabilistic potential theory to obtain an associated Markov process. Afterwards we show our main result, the convergence in law of the positions of the particles in the overdamped regime, the so-called overdamped limit, to a distorted Brownian motion. To this end, we show convergence of the associated operator semigroups in the framework of Kuwae-Shioya. Further, we establish a tightness result for the approximations which, together with the convergence of the semigroups, proves weak convergence of the laws.
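For orientation (standard notation, not specific to the thesis), the classical Langevin dynamics that these systems generalize read
\[
\mathrm{d}q_t = p_t \,\mathrm{d}t, \qquad
\mathrm{d}p_t = -\nabla\Phi(q_t)\,\mathrm{d}t - \gamma p_t\,\mathrm{d}t + \sqrt{2\gamma\beta^{-1}}\,\mathrm{d}W_t,
\]
with potential \(\Phi\), friction \(\gamma\) and inverse temperature \(\beta\); the noise acts only on the momentum variables, which is the degeneracy of the diffusion coefficient referred to above.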
In the second part we deal with problems from infinite dimensional analysis. Three different issues are considered. The first one is an improvement of a characterization theorem for the so-called regular test functions and distributions of white noise analysis. As an application we analyze a stochastic transport equation in terms of the regularity of its solution in the space of regular distributions. The last two problems are from the field of relativistic quantum field theory. In the first one the \(\Phi^4_3\)-model of quantum field theory is under consideration. We show that the Schwinger functions of this model have a representation as the moments of a positive Hida distribution from white noise analysis. In the last chapter we construct a non-trivial relativistic quantum field in arbitrary space-time dimension. The field is given via Schwinger functions, for which we establish all axioms of Osterwalder and Schrader. This yields, via the reconstruction theorem of Osterwalder and Schrader, a unique relativistic quantum field. The Schwinger functions are given as the moments of a non-Gaussian measure on the space of tempered distributions. We obtain the measure as a superposition of Gaussian measures. In particular, this measure is itself non-Gaussian, which implies that the field under consideration is not a generalized free field.
We study a multi-scale model for growth of malignant gliomas in the human brain.
Interactions of individual glioma cells with their environment determine the gross tumor shape.
We connect models on different time and length scales to derive a practical description of tumor growth that takes these microscopic interactions into account.
From a simple subcellular model for haptotactic interactions of glioma cells with the white matter we derive a microscopic particle system, which leads to a meso-scale model for the distribution of particles, and finally to a macroscopic description of the cell density.
The main body of this work is dedicated to the development and study of numerical methods adequate for the meso-scale transport model and its transition to the macroscopic limit.
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and
progression of diseases, such as cancer metastasis. For the migration to occur, cellular
structures such as actomyosin cables and cell-substrate adhesion clusters must interact.
As cell trajectories exhibit a random character, so must such interactions. Furthermore,
migration often occurs in a crowded environment, where the collision outcome is determined
by altered regulation of the aforementioned structures. In this work, guided by a
few fundamental attributes of cell motility, we construct a minimal stochastic cell migration
model from the ground up. The resulting model couples a deterministic actomyosin contractility
mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined
piecewise deterministic process. The signaling pathways regulating the contractility and
adhesion are considered as well. The model is extended to include cell collectives. Numerical
simulations of single cell migration reproduce several experimentally observed results,
including anomalous diffusion, tactic migration, and contact guidance. The simulations
of colliding cells explain the observed outcomes in terms of contact induced modification
of contractility and adhesion dynamics. These explained outcomes include modulation
of collision response and group behavior in the presence of an external signal, as well as
invasive and dispersive migration. Moreover, from the single cell model we deduce a population
scale formulation for the migration of non-interacting cells. In this formulation,
the relationships concerning actomyosin contractility and adhesion clusters are maintained.
Thus, we construct a multiscale description of cell migration, whereby single, collective,
and population scale formulations are deduced from the relationships on the subcellular
level in a mathematically consistent way.
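A toy sketch of the resulting class of processes (ours; the drift, rates and one-dimensional state are invented for illustration and are much simpler than the thesis model): between exponentially distributed switches of a discrete adhesion state, the position follows a deterministic ODE whose right-hand side depends on that state.
\begin{verbatim}
import numpy as np

def simulate_pdmp(T=100.0, dt=0.01, k_on=1.0, k_off=0.5, v_free=1.0, seed=0):
    """Piecewise deterministic process: deterministic motion between random
    switches of an adhesion state (adhered -> stalled, detached -> moving)."""
    rng = np.random.default_rng(seed)
    t, x, adhered = 0.0, 0.0, False
    traj = [(t, x)]
    while t < T:
        rate = k_off if adhered else k_on
        t_next = min(t + rng.exponential(1.0 / rate), T)  # next switching time
        while t < t_next:                                  # deterministic flow
            x += (0.0 if adhered else v_free) * dt
            t += dt
            traj.append((t, x))
        adhered = not adhered                              # jump of the discrete state
    return np.array(traj)
\end{verbatim}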
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second order analysis which we have defined as Integral method, Ellipsoid method, and Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns motivated by our application to real 3D data coming from glaciology. Note that directional analysis of 3D data is not so prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and consequently to give a correct interpretation of the climate information which can be found by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
Model uncertainty is a challenge that is inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures in general take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise since a strategy which is optimal under the approximating model might perform rather badly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
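In symbols (notation ours, for orientation only), the problems studied here are of the type
\[
\sup_{\pi \in \mathcal{A}} \; \inf_{\mu \in K} \; \mathrm{E}\bigl[U\bigl(X_T^{\pi,\mu}\bigr)\bigr],
\qquad
K = \bigl\{\mu \in \mathbb{R}^d : (\mu - \hat{\mu})^\top \Gamma^{-1} (\mu - \hat{\mu}) \le \kappa^2 \bigr\},
\]
where \(U\) is the utility function, \(X_T^{\pi,\mu}\) the terminal wealth under strategy \(\pi\) and drift \(\mu\), \(\mathcal{A}\) encodes the constraint preventing a pure bond investment, and the ellipsoid \(K\) around a reference drift estimate \(\hat{\mu}\) describes the drift uncertainty.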
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.
Various physical phenomena with sudden transients resulting in structural changes can be modeled via
switched nonlinear differential-algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x), \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(g_p \colon \mathbb{R}^n \rightarrow \mathbb{R}^n\) is a mapping, \(f_p \colon \mathbb{R} \rightarrow \mathbb{R}^n\), for \(p \in \{1,\cdots,P\}\), \(P \in \mathbb{N}\), and \(\sigma \colon \mathbb{R} \rightarrow \{1,\cdots,P\}\) is the switching signal.
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between a solution of the above (DAE) and solutions of related
partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a
distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the
nonlinear case \(g(x) \not\equiv 0\); contributions to Task 2 are also given for
switched nonlinear DAEs arising while modeling sudden transients in water
distribution networks. In addition, this thesis contains the following further
contributions:
The notion of structured switched nonlinear DAEs is introduced,
allowing also non-regular distributions as solutions. This extends a previous
framework that allowed only piecewise smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of the solution within the space of piecewise smooth distributions. The main
condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for the switched nonlinear DAEs arising from water network graphs.
Another contribution is the introduction of these switched nonlinear DAEs
as a simplification of the PDE model classically used for modeling water networks. Finally, supported by numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also in case of fibrosis in the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D micro computed synchrotron radiation (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by an active vessel growing.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from material science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
It is already known that the most efficient mechanism of vessel growing forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growing. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated when we deal with explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to continuous-time models we establish a finite-difference approach. By deriving the discrete HJB equation we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved by using a viscosity solution method.
In this dissertation we apply financial mathematical modelling to electricity markets. Electricity is different from any other underlying of financial contracts: it is not storable. This means that electrical energy in one time point cannot be transferred to another. As a consequence, power contracts with disjoint delivery time spans basically have a different underlying. The main idea throughout this thesis is exactly this two-dimensionality of time: every electricity contract is not only characterized by its trading time but also by its delivery time.
The basis of this dissertation are four scientific papers corresponding to the Chapters 3 to 6, two of which have already been published in peer-reviewed journals. Throughout this thesis two model classes play a significant role: factor models and structural models. All ideas are applied to or supported by these two model classes. All empirical studies in this dissertation are conducted on electricity price data from the German market and Chapter 4 in particular studies an intraday derivative unique to the German market. Therefore, electricity market design is introduced by the example of Germany in Chapter 1. Subsequently, Chapter 2 introduces the general mathematical theory necessary for modelling electricity prices, such as Lévy processes and the Esscher transform. This chapter is the mathematical basis of the Chapters 3 to 6.
Chapter 3 studies factor models applied to the German day-ahead spot prices. We introduce a qualitative measure for seasonality functions based on three requirements. Furthermore, we introduce a relation of factor models to ARMA processes, which induces a new method to estimate the mean reversion speed.
Chapter 4 conducts a theoretical and empirical study of a pricing method for a new electricity derivative: the German intraday cap and floor futures. We introduce the general theory of derivative pricing and propose a method based on the Hull-White model of interest rate modelling, which is a one-factor model. We include week futures prices to generate a price forward curve (PFC), which is then used instead of a fixed deterministic seasonality function. The idea that we can combine all market prices, and in particular futures prices, to improve the model quality also plays the major role in Chapter 5 and Chapter 6.
In Chapter 5 we develop a Heath-Jarrow-Morton (HJM) framework that models intraday, day-ahead, and futures prices. This approach is based on two stochastic processes motivated by economic interpretations and separates the stochastic dynamics in trading and delivery time. Furthermore, this framework allows for the use of classical day-ahead spot price models such as the ones of Schwartz and Smith (2000), Lucia and Schwartz (2002) and includes many model classes such as structural models and factor models.
Chapter 6 unifies the classical theory of storage and the concept of a risk premium through the introduction of an unobservable intrinsic electricity price. Since all tradable electricity contracts are derivatives of this actual intrinsic price, their prices should all be derived as conditional expectation under the risk-neutral measure. Through the intrinsic electricity price we develop a framework, which also includes many existing modelling approaches, such as the HJM framework of Chapter 5.
Many loads acting on a vehicle depend on the condition and quality of roads
traveled as well as on the driving style of the motorist. Thus, during vehicle development,
good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics resulting from travel surveys are integrated to guarantee reasonable
trip lengths. Population figures are used to estimate the number of vehicles in
the contained administrative units. The creation of thousands of those geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters
are modeled as the main user group, conducting daily drives to work as well as
additional leisure and shopping trips during the workweek. For the latter, taxis are
considered as an example of passenger car usage. The model of light-duty commercial
vehicles is split into two types of driving patterns, stars and tours, and into
the common traffic classes of long-distance, local and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical
data are presented in detail. Examples for the evaluation of routes based
on topographical factors and speed profiles comparing the influence of the driving
style are included.
Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly observed in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than the common magnetostrictive materials. These materials have wide application areas: They are used as variable-stiffness devices, as sensors and actuators in mechanical systems or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
The presented models are supplemented with numerical simulations carried out with MATLAB for different examples including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in the state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.
In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), \(q=p^n\) with \(p\) a prime, which is a Sylow \(p\)-subgroup of \(F_4(q)\), for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Sadly, the projective characters of \(F_4(q)\) we obtained were not sufficient to give the shape of the entire decomposition matrix.
In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models that include
discrete dividend payments and develop a model which also takes early
announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
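As a minimal illustration of the put-call-parity idea (ours; the thesis additionally estimates the market-implied discount curve jointly from all available quotes, whereas here a discount rate is assumed to be known):
\begin{verbatim}
import math

def implied_dividend_pv(call, put, spot, strike, r, T):
    """Model-free present value of dividends paid before T, implied by European
    put-call parity:  C - P = S0 - PV(D) - K * exp(-r * T)."""
    return spot - strike * math.exp(-r * T) - (call - put)

# hypothetical quotes: C = 5.10, P = 6.30, S0 = 100, K = 100, r = 1%, T = 1 year
pv_dividends = implied_dividend_pv(5.10, 6.30, 100.0, 100.0, 0.01, 1.0)  # ~ 2.20
\end{verbatim}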
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called
simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend paying stock. In the case of the logarithmic utility
function, we show that the optimal strategy is not a constant anymore but
connected to the Merton strategy. Additionally, we solve a special optimal
consumption problem, where the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal wealth
problem.
Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that
are characteristic of the microstructure can be simulated and their effective elastic properties are
then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the
mixing rules on the convergence of the method in two- and three-dimensional examples.
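A minimal sketch of the underlying fixed-point iteration (ours; for brevity it treats the scalar conductivity analogue of the Lippmann-Schwinger equation on a plain pixel grid, not the elasticity problem or the anisotropic spaces of translates studied in the thesis):
\begin{verbatim}
import numpy as np

def fft_homogenize_conductivity(k, E=(1.0, 0.0), k0=None, tol=1e-8, max_iter=500):
    """Basic scheme of Moulinec-Suquet type for periodic scalar conductivity.
    k: (N, N) array of local conductivities, E: prescribed mean gradient.
    Returns the mean flux <k * e>, whose components approximate k_eff * E."""
    N = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())            # reference medium
    xi = np.stack(np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij"))
    xi2 = (xi ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                               # avoid division by zero at xi = 0
    e = np.zeros((2, N, N))
    e[0], e[1] = E[0], E[1]                       # initial guess: uniform gradient
    for _ in range(max_iter):
        tau = (k - k0) * e                        # polarization field
        tau_hat = np.fft.fft2(tau, axes=(1, 2))
        proj = (xi * tau_hat).sum(axis=0) / (k0 * xi2)
        e_hat = -xi * proj                        # apply the Green operator Gamma0
        e_hat[:, 0, 0] = np.array(E) * N * N      # enforce the prescribed mean
        e_new = np.real(np.fft.ifft2(e_hat, axes=(1, 2)))
        if np.max(np.abs(e_new - e)) < tol:
            e = e_new
            break
        e = e_new
    return (k * e).reshape(2, -1).mean(axis=1)    # effective (mean) flux

# example: circular inclusion (conductivity 10) in a matrix (conductivity 1)
N = 128
y, x = np.mgrid[0:N, 0:N]
k = np.where((x - N / 2) ** 2 + (y - N / 2) ** 2 < (N / 4) ** 2, 10.0, 1.0)
q_eff = fft_homogenize_conductivity(k)
\end{verbatim}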
References
[1] S. Brisard and L. Dormieux. “FFT-based methods for the mechanics of composites:
A general variational framework”. In: Computational Materials Science 49.3 (2010),
pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. “A numerical method for computing the overall response
of nonlinear composites with complex microstructure”. In: Computer Methods in
Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows us to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements, we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: Size, shape, orientation and spatial distribution. Based on FIB-SEM images, that yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution for the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the Aluminium-Copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how we can estimate the parameters of the volume distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimates are in acceptable agreement with the parameters estimated from three-dimensional image data
as well as with the parameters of the model.
In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator for pricing path dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique for path dependent options
under the Heston model is the main contribution of this thesis.
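For orientation, a generic control-variate skeleton (ours, shown only to fix the idea of a control variate; in the HP construction the role of the proxy is played by prices under an approximating model, e.g. the deterministic volatility process mentioned below):
\begin{verbatim}
import numpy as np

def control_variate_estimate(f_vals, g_vals, m_g):
    """Estimate E[f(X)] from samples f_vals, using a correlated proxy payoff
    g(X) (samples g_vals) whose expectation m_g is known in closed form."""
    f_vals, g_vals = np.asarray(f_vals, float), np.asarray(g_vals, float)
    cov = np.cov(f_vals, g_vals)
    beta = cov[0, 1] / cov[1, 1]          # variance-minimizing coefficient
    return f_vals.mean() - beta * (g_vals.mean() - m_g)
\end{verbatim}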
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty in the implementation of the HP
estimator lies in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present numerical results for at-the-money, in-the-money and out-of-the-money barrier
options. As the numerical results show, the HP estimator is superior to the other methods
for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator to pricing bond options is a promising contribution. Since the approximating
Vasicek process is not as close an approximation as the deterministic volatility process in the
Heston setting, the performance of the HP estimator in the CIR model is only moderately
good. However, for the FV model the variance reduction provided by the HP estimator is
again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible regarding the choice of
good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for
kernel density estimates. Cross-validation is a popular one, frequently proposed
to estimate the optimal bandwidth. However, if the sample size is very
large, it becomes computationally expensive. In materials science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidth for density and intensity estimation with a focus on 2- and 3-dimensional data. It is based on a detailed
asymptotics of the estimators of the intensity function and of its second
derivatives and integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are utilised to determine
the exact number of iteration steps and some tuning parameters. In
both the global and the local case, fewer than 10 iterations suffice. Simulation studies
show that the intensity estimate based on local bandwidths captures the variation of the
local intensity better than the estimate based on a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
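As a toy illustration of the solve-the-equation plug-in idea (in one dimension, with a Gaussian kernel and a crude pilot-bandwidth rule), the following sketch iterates the asymptotically optimal bandwidth formula a few times; it is not the 2D/3D intensity algorithm of the thesis.

```python
import numpy as np

def plugin_bandwidth(x, n_iter=5):
    """Minimal solve-the-equation plug-in sketch for a 1D Gaussian kernel
    density estimate: iterate h -> (R(K) / (n * psi4(g(h))))**(1/5), where
    psi4 estimates the integral of the squared second density derivative.
    Only a one-dimensional illustration of the plug-in idea, not the
    thesis's 2D/3D intensity algorithm with its exact pilot choices."""
    x = np.asarray(x, float)
    n = len(x)
    rk = 1.0 / (2.0 * np.sqrt(np.pi))            # roughness of the Gaussian kernel
    h = 1.06 * x.std(ddof=1) * n ** (-0.2)       # normal-reference start value
    diffs = x[:, None] - x[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    for _ in range(n_iter):
        g = 1.5 * h                               # crude pilot bandwidth choice
        u = diffs[off_diag] / g
        # psi4 = int f''(t)^2 dt, estimated via the 4th Gaussian-kernel derivative
        phi4 = (u**4 - 6*u**2 + 3) * np.exp(-0.5 * u**2) / np.sqrt(2*np.pi)
        psi4 = phi4.sum() / (n * (n - 1) * g**5)
        h = (rk / (n * psi4)) ** 0.2
    return h

rng = np.random.default_rng(0)
# Roughly the AMISE-optimal value (about 0.27) for standard normal data, n = 1000.
print(plugin_bandwidth(rng.normal(size=1000)))
```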
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. In doing so, we restrict our study to surfaces whose bicanonical system has no fixed component but four distinct base points, referred to in the following as marked numerical Godeaux surfaces.
The particular interest of this thesis lies in marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role in cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting, among other effects, for integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations for the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this end, we present programmes for data processing that take the raw medical data and convert them into the form required by the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
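As a minimal numerical illustration of the macroscopic scale, the sketch below integrates a scalar reaction-diffusion toy model with an explicit finite-difference scheme; the thesis's macroscopic equation additionally contains convection and a DTI-derived anisotropic diffusion tensor, so the constant diffusion coefficient and all parameters here are purely illustrative.

```python
import numpy as np

def simulate_tumour_density(n=100, steps=2000, dt=0.01, d=0.1, rho=0.5):
    """Toy 2D reaction-diffusion solver (explicit finite differences, periodic
    boundaries via np.roll) for u_t = d * Laplace(u) + rho * u * (1 - u):
    a scalar stand-in for the macroscopic reaction-convection-diffusion
    equation of the thesis, which uses a DTI-derived diffusion tensor
    instead of a constant coefficient d."""
    u = np.zeros((n, n))
    u[n // 2 - 2:n // 2 + 2, n // 2 - 2:n // 2 + 2] = 1.0   # initial tumour seed
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * (d * lap + rho * u * (1.0 - u))
    return u

density = simulate_tumour_density()
print(density.max(), density.mean())
```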
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves
and their fractional ideals. These are submonoids of \(\mathbb{Z}^n\) that are closed under minima, have a conductor and fulfill special compatibility properties on their elements. Subsets of \(\mathbb{Z}^n\)
fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine
good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we
show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality
on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay’s inverse system, a one-to-one correspondence
which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate
our result.
Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional and locally stationary case, and secondly we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality zero mean), and hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimate, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove in the case of model misspecification that the minimum of our Whittle likelihood still converges, where the limit is the minimum of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations, through the estimation of conditional autoregressive models, and in a real-data application.
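As a one-dimensional, stationary analogue of the approach, the following sketch computes a Whittle estimate of an AR(1) coefficient by minimizing the sum of log-spectral-density and periodogram-ratio terms over the Fourier frequencies; the locally stationary random-field likelihood of the thesis is considerably more involved, and the unit innovation variance is an assumption of this toy example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_ar1(x):
    """Whittle estimate of the AR(1) coefficient for a zero-mean stationary
    time series: minimize sum(log f + I/f) over the Fourier frequencies,
    assuming unit innovation variance. A 1D, stationary analogue of the
    locally stationary random-field likelihood studied in the thesis."""
    x = np.asarray(x, float)
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, (n - 1) // 2 + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:(n - 1) // 2 + 1]) ** 2 / (2 * np.pi * n)

    def neg_whittle(theta):
        # AR(1) spectral density with unit innovation variance
        f = 1.0 / (2 * np.pi * np.abs(1 - theta * np.exp(-1j * freqs)) ** 2)
        return np.sum(np.log(f) + periodogram / f)

    res = minimize_scalar(neg_whittle, bounds=(-0.99, 0.99), method="bounded")
    return res.x

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + eps[t]
print(whittle_ar1(x))   # should be close to the true coefficient 0.6
```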
Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres' thickness, which prevents the fibres from intersecting and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro computer tomography. A possible linking of these models to other simulation tools,
for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to further the field of mathematical
modelling and computational simulation of non–woven materials.
In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ³D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
\[\]
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as Brauer construction.
In change-point analysis the point of interest is to decide if the observations follow one model
or if there is at least one time-point where the model has changed. This results in two
subfields: the testing of a change and the estimation of the time of change. This thesis considers
both parts but with the restriction of testing and estimating for at most one change-point.
A well-known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test a test statistic with an asymptotic Gumbel distribution was
derived for this model. As it is a well-known fact that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. These tests
perform better. We focus on this class of test statistics.
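A minimal version of such a weighted test statistic for a single mean change in univariate, independent observations is sketched below; the weight function and the variance estimate are simple illustrative choices, not the multivariate or robust versions analysed in the thesis.

```python
import numpy as np

def weighted_cusum(x, gamma=0.25):
    """Weighted CUSUM statistic for at most one change in the mean of
    independent observations:
        T_n = max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n) * w(k/n)),
    with weight w(t) = (t*(1-t))**gamma, 0 <= gamma < 1/2. A sketch of the
    class of weighted statistics discussed in the text."""
    x = np.asarray(x, float)
    n = len(x)
    k = np.arange(1, n)                      # candidate change points 1..n-1
    s = np.cumsum(x)
    sigma_hat = x.std(ddof=1)
    t = k / n
    weights = (t * (1 - t)) ** gamma
    stat = np.abs(s[:-1] - t * s[-1]) / (sigma_hat * np.sqrt(n) * weights)
    k_hat = int(k[np.argmax(stat)])
    return stat.max(), k_hat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0.8, 1, 200)])
print(weighted_cusum(x))   # large statistic, estimated change point near 300
```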
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. Therefore we consider the multivariate mean change model and focus on the effects
of the weight function. In the case of change-point estimators we can distinguish between
the assumption of a fixed size of change (fixed alternative) and the assumption that the size
of the change is converging to 0 (local alternative). In particular, the fixed case is rarely analysed
in the literature. We show how to pass from the proof for the fixed alternative to the
proof of the local alternative. Finally, we give a simulation study for heavy tailed multivariate
observations.
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a
change in the mean for independent observations but relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the section on tests and the one on the change-point estimator, we end with giving
regularity conditions on the model as well as the parameter estimator.
Finally, a simulation study for the case of the neural network based test and estimator is
given. We discuss the behaviour under correct specification and misspecification and apply the neural
network based test and estimator to two data sets.
We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous
short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected short-fall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus, respectively. In particular, we explicitly solve the dual problem in the incomplete market settings. We compare the
optimal terminal wealth in the cases mentioned using numerical examples. Without
risk constraints, we further compare the investment strategies for complete
and incomplete market numerically.
This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they can be minimized by efficient algorithms,
and many optimization problems can be formulated as
the minimization of a convex objective function, extending far beyond
the classical image restoration problems of denoising, deblurring and inpainting.
\(\hspace{1mm}\)
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
\(\hspace{1mm}\)
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
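The relation between the proximal mapping and the Moreau-Yosida envelope mentioned above can be made concrete for the absolute value function, whose proximal mapping is soft thresholding and whose envelope is the Huber function; the short sketch below is a generic illustration, not code from the thesis.

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal mapping of f(x) = |x| with parameter lam (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_env_abs(x, lam):
    """Moreau-Yosida envelope of |x|: the Huber function,
    x**2 / (2*lam) for |x| <= lam and |x| - lam/2 otherwise."""
    p = prox_abs(x, lam)
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)

# The envelope lies below |x| and shares its minimum value 0 at x = 0.
# Its gradient is (x - prox(x)) / lam, so a gradient step of length lam on
# the envelope is exactly an application of the proximal mapping.
x = np.linspace(-2, 2, 5)
lam = 0.5
print(moreau_env_abs(x, lam))
print((x - prox_abs(x, lam)) / lam)   # gradient of the envelope
```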
\(\hspace{1mm}\)
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which adds a sparsity regularizer to the model.
\(\hspace{1mm}\)
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence,
to solve a hyperspectral unmixing model in the case
when no prior knowledge of pure spectra is given.
\(\hspace{1mm}\)
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds with non-positive curvature,
and then for functions defined on Hadamard spaces.
\(\hspace{1mm}\)
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
\(\hspace{1mm}\)
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, which have both been used in the literature,
and whose equivalence had not been known before.
We introduce and investigate a product pricing model in social networks where the value a possible buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm that checks the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influence we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. Algorithm FixHighest improves the run-time on these graphs by using this property.
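The sketch below simulates the synchronous selling rounds for a fixed price in a small toy instance, in order to make the revenue computation concrete; the precise update rule, the data structures and the two-node example are illustrative choices and not the thesis's exact model definitions or the algorithm Frag.

```python
def revenue_for_price(initial_values, influence, price):
    """Simulate synchronous selling rounds for a fixed price in a toy version
    of the model: a node buys once its current value reaches the price, and
    each buyer then adds its (possibly negative) influence weight to the
    values of its out-neighbours. Returns the total revenue.

    initial_values: dict node -> float
    influence: dict (u, v) -> float, additive influence of buyer u on v
    """
    values = dict(initial_values)
    buyers = set()
    while True:
        new_buyers = {v for v, val in values.items()
                      if v not in buyers and val >= price}
        if not new_buyers:
            break
        buyers |= new_buyers
        for u in new_buyers:                       # propagate influences
            for (a, b), w in influence.items():
                if a == u and b not in buyers:
                    values[b] += w
    return price * len(buyers)

# Two-node example: node 1 buys immediately and pushes node 2 over the price.
values = {1: 5.0, 2: 3.0}
influence = {(1, 2): 2.5}
print(revenue_for_price(values, influence, price=4.0))   # 8.0
```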
Furthermore, we introduce variations on this basic model. The variants with delayed propagation of influences and delayed awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, thereby raising the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterwards, the principal problem is computing the revenue that a given price can generate over some time horizon. In general, the problem is #P-hard and algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
The thesis studies change points in absolute time for censored survival data with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied. Typical examples are piecewise constant and piecewise linear hazard functions. For that kind of models, we propose a new algorithm for numerical calculation of maximum likelihood estimates based on a cross entropy approach which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
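To make the piecewise-constant-hazard change-point model concrete, the sketch below profiles the log-likelihood over a grid of candidate change points, using the closed-form hazard MLEs (events divided by time at risk) on either side of the candidate; this baseline grid search is only an illustration and not the cross-entropy algorithm proposed in the thesis.

```python
import numpy as np

def piecewise_hazard_changepoint(times, events, candidates):
    """Profile log-likelihood search for a single change point tau of a
    piecewise constant hazard with right-censored data: for fixed tau the
    MLE of each hazard level is (number of events) / (time at risk) in the
    corresponding interval."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    best = (-np.inf, None)
    for tau in candidates:
        exp1 = np.minimum(times, tau).sum()              # exposure before tau
        exp2 = np.maximum(times - tau, 0.0).sum()        # exposure after tau
        d1 = int(((times <= tau) & (events == 1)).sum()) # events before tau
        d2 = int(((times > tau) & (events == 1)).sum())  # events after tau
        if min(d1, d2) == 0 or min(exp1, exp2) <= 0:
            continue
        h1, h2 = d1 / exp1, d2 / exp2
        loglik = d1 * np.log(h1) - h1 * exp1 + d2 * np.log(h2) - h2 * exp2
        if loglik > best[0]:
            best = (loglik, tau)
    return best

# Simulated data: hazard 0.5 before time 2, hazard 1.5 afterwards, with censoring.
rng = np.random.default_rng(0)
n = 500
e = rng.exponential(1 / 0.5, n)
t = np.where(e < 2.0, e, 2.0 + rng.exponential(1 / 1.5, n))
c = rng.uniform(0.0, 6.0, n)                             # independent censoring
times, events = np.minimum(t, c), (t <= c).astype(int)
print(piecewise_hazard_changepoint(times, events, np.linspace(0.5, 4.0, 36)))
```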
In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst's opinion as proposed in [CLMZ06]. The best estimate given these observations is the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, due to fund design or due to clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be better solvable than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas distinguish between active and passive stocks, where active stocks are invested in while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real world market than the setting we used to derive our theoretical results. However, we still obtain compelling results indicating that our optimal strategies can outperform any benchmark in a real market in general.
Nonwoven materials are used as filter media which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, advanced engine technologies require innovative filter media with higher performance. A virtual microstructure of the nonwoven filter medium, which has similar filter properties to the existing material, can be used to design new filter media from existing media. Nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. Therefore, as part of this thesis, we extend the Altendorf-Jeulin individual fiber model to incorporate all of the features mentioned above. The resulting novel stochastic 3D fiber model can generate geometries with a good visual resemblance to real filter media. Furthermore, the pressure drop, one of the important physical properties of a filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the microstructure realizations generated from our model.
Generally, filter properties for the CT data and generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulation and a statistical approach are used to estimate the representative domain size of our microstructure model. Specifically, the representative domain sizes with respect to the packing density, the pore size distribution, and the pressure drop are considered. It turns out that the statistical approach can be used to estimate the representative domain size for the given property more precisely and using fewer generated microstructures than the purely simulation-based approach.
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method that way.
In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of the trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to obtain smooth convergence. We first show what causes these oscillations. This will also help us to improve the rate of convergence. As a result we introduce the Tian and the CP tree and prove that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree and explain its properties. We prove its convergence and find an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity time \(T\). This is the main difference compared to the model from the literature. Namely, we show that the good properties of the CRR tree when \(S_{0} = K\) can be preserved even without this condition (which is mainly the case). We achieve convergence of order \(O \left(n^{-\frac{3}{2}} \right)\) and typically get better results if we split our tree later.
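For reference, a plain CRR tree for a European call is sketched below; the Tian, CP and Split trees analysed in the thesis differ in the choice of the up/down factors and in the splitting construction, so this is only the baseline model, with illustrative parameters.

```python
import numpy as np

def crr_european_call(s0, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein binomial tree price of a European call with n steps.
    Illustrates the kind of trees whose convergence is analysed in the thesis;
    other trees differ only in the choice of u, d and p."""
    dt = t / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)           # risk-neutral up-probability
    j = np.arange(n + 1)
    terminal = s0 * u**j * d**(n - j)            # stock prices at maturity
    payoff = np.maximum(terminal - k, 0.0)
    # Backward induction through the tree.
    for _ in range(n):
        payoff = np.exp(-r * dt) * (p * payoff[1:] + (1 - p) * payoff[:-1])
    return payoff[0]

for n in (50, 100, 200, 400):
    print(n, crr_european_call(100, 100, 0.05, 0.2, 1.0, n))
```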
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite
time horizon with access to a bank account as well as multiple stocks: the
fully informed investor, the partially informed investor whose only source of
information are the stock prices, and an investor who does not use this
information. The drift is modeled either as following linear Gaussian dynamics
or as being a continuous time Markov chain with finite state space. The
optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques.
Conditions to ensure boundedness of the expected value of the filters are
developed, in the Markov case also for positivity. For the Markov modulated
drift, boundedness of the expected value of the filter relates strongly to
portfolio optimization: effects are studied and quantified. The derivation of an
equivalent, lower-dimensional market is presented next; this is a type of
Mutual Fund Theorem.
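For the linear Gaussian drift, the filtering step can be illustrated by a discrete-time Kalman filter applied to simulated returns, as in the sketch below; the discretization, the mean-reverting drift specification and all parameter values are illustrative assumptions, not the continuous-time Kalman-Bucy setup of the thesis.

```python
import numpy as np

def kalman_drift_filter(returns, dt, sigma, kappa, mu_bar, nu,
                        m0=0.0, p0=0.05):
    """Discrete-time Kalman filter for an unobserved mean-reverting Gaussian
    drift mu_t observed through returns r_t = mu_t*dt + sigma*sqrt(dt)*eps_t.
    State transition: mu_{t+1} = mu_t + kappa*(mu_bar - mu_t)*dt + nu*sqrt(dt)*w_t.
    All parameters below are illustrative."""
    m, p = m0, p0                 # filter mean and variance
    a = 1.0 - kappa * dt          # state transition coefficient
    q = nu**2 * dt                # state noise variance
    r_var = sigma**2 * dt         # observation noise variance
    means = []
    for r in returns:
        # predict
        m_pred = a * m + kappa * mu_bar * dt
        p_pred = a**2 * p + q
        # update with observation r = dt * mu + noise
        s = dt**2 * p_pred + r_var
        gain = dt * p_pred / s
        m = m_pred + gain * (r - dt * m_pred)
        p = (1.0 - gain * dt) * p_pred
        means.append(m)
    return np.array(means)

# Simulate a drift path and returns, then filter (illustrative parameters).
rng = np.random.default_rng(0)
dt, n, sigma, kappa, mu_bar, nu = 1 / 250, 2500, 0.2, 2.0, 0.05, 0.1
mu = np.empty(n); mu[0] = 0.05
for t in range(1, n):
    mu[t] = mu[t - 1] + kappa * (mu_bar - mu[t - 1]) * dt + nu * np.sqrt(dt) * rng.standard_normal()
rets = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
print(kalman_drift_filter(rets, dt, sigma, kappa, mu_bar, nu)[-5:])
```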
Gains and losses emanating from the use of filtering are then discussed in
detail for different market parameters: For infrequent trading we find that
both filters need to comply with the boundedness conditions to be an
advantage for the investor. Losses are minimal in case the filters are advantageous.
For an increasing number of stocks, again the boundedness conditions need to be
met. Losses in this case depend strongly on the added stocks. The relation
of boundedness and portfolio optimization in the Markov model leads here to
increasing losses for the investor if the boundedness condition is to hold for
all numbers of stocks. In the Markov case, the losses for different numbers
of states are negligible in case more states are assumed than were originally
present. Assuming fewer states leads to high losses. Again for the Markov
model, a simplification of the complex optimal trading strategy for power
utility in the partial information setting is shown to cause only minor losses.
If the market parameters are such that shortselling and borrowing constraints
are in effect, these constraints may lead to large losses depending on how
strongly the constraints bind. However, they can also be an advantage for the
investor in case the expected value of the filters does not meet the conditions
for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.
This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8], and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective with alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts more sensitively to slight changes in the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX ® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well known and widely used model, the so-called three equation new Keynesian model which is a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule.
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC which describes the inflation dynamics and an IS curve that represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously, which means that the costs resulting from changes in the interest rate are ignored. We also showed that approximation errors are made when one log-linearizes non-linear equations in the derivation of the standard discrete-time NKPC.
We agreed with other researchers mentioned in this thesis that ignoring such log-linearization errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of the output gap and the inflation rate.
To overcome this problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by taking into account the costs of changing interest rates and the approximation error terms. To formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite and constant variance. After formulating this problem, we use the quasi-variational inequality approach to solve analytically a special case of the central bank problem, where the inflation rate is supposed to be on target and the central bank has to optimally control the output gap dynamics. This method gives an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal intervention size and the optimal intervention time, that can be used to keep the process within this band.
Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of changing the interest rate, leads to an increase in the width of the optimal control band. In this case, the optimal intervention requires the central bank to wait longer before undertaking another control action.
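A simple way to see how such a control band works is to simulate a mean-reverting output gap and intervene whenever it leaves a prescribed band, as in the sketch below; the band, the target and the cost parameters are arbitrary inputs chosen for illustration, whereas in the thesis they are the output of the quasi-variational inequality solution.

```python
import numpy as np

def simulate_band_policy(lower, upper, target, kappa=1.0, sigma=0.5,
                         cost_fixed=0.1, cost_prop=0.2, dt=0.01, t_max=50.0,
                         seed=0):
    """Simulate a control-band impulse policy for a mean-reverting output gap:
    whenever the process leaves the band [lower, upper] it is pushed back to
    `target`, incurring a fixed plus proportional intervention cost."""
    rng = np.random.default_rng(seed)
    x, total_cost, interventions = 0.0, 0.0, 0
    for _ in range(int(t_max / dt)):
        x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x < lower or x > upper:
            total_cost += cost_fixed + cost_prop * abs(x - target)
            x = target
            interventions += 1
    return interventions, total_cost

print(simulate_band_policy(lower=-0.8, upper=0.8, target=0.0))
```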
The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).\[\]
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.
By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factors \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. We prove in this thesis that the Stanley Conjecture holds in some special cases. Part D explores the General Neron Desingularization in the setting of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.
Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples, our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
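The specialization step for the parametric case can be illustrated with SymPy (the actual implementation is in SINGULAR): specialize the parameter to several rational values and compute the Gröbner bases of the resulting ideals over \(\mathbb{Q}\). The toy ideal below is made up for illustration, and the interpolation step that recombines the specialized bases is omitted.

```python
from sympy import symbols, groebner, Rational

x, y, t = symbols('x y t')

# A toy ideal with a transcendental parameter t in its coefficients.
f1 = t * x**2 + y - 1
f2 = x * y - t

# Specialize the parameter to rational values and compute Groebner bases over
# QQ; in the algorithm described above, several such specializations are
# combined via sparse multivariate rational interpolation.
for t0 in (Rational(2), Rational(3), Rational(5)):
    g = groebner([f1.subs(t, t0), f2.subs(t, t0)], x, y, order='lex')
    print(t0, g)
```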
This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
A vehicle's fatigue damage is a highly relevant figure in the complete vehicle design process.
Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment.
This work focuses on modeling one of the most important environmental influence factors: road roughness. The quality of the road is highly dependent on several surrounding factors, which can be used to create mathematical models.
Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies.
The target quantity we focus on in this work is the discrete International Roughness Index, or discrete IRI. The class of models we use and evaluate is a discriminative classification model called the Conditional Random Field.
We develop a suitable model specification and show new variants of stochastic optimization to train the model efficiently.
The model is also applied to simulated and real world data to show the strengths of our approach.
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
The main theme of this thesis is the interplay between algebraic and tropical intersection
theory, especially in the context of enumerative geometry. We begin by exploiting
well-known results about tropicalizations of subvarieties of algebraic tori to give a
simple proof of Nishinou and Siebert’s correspondence theorem for rational curves
through given points in toric varieties. Afterwards, we extend this correspondence
by additionally allowing intersections with psi-classes. We do this by constructing
a tropicalization map for cycle classes on toroidal embeddings. It maps algebraic
cycle classes to elements of the Chow group of the cone complex of the toroidal
embedding, that is to weighted polyhedral complexes, which are balanced with respect
to an appropriate map to a vector space, modulo a naturally defined equivalence relation.
We then show that tropicalization respects basic intersection-theoretic operations like
intersections with boundary divisors and apply this to the appropriate moduli spaces
to obtain our correspondence theorem.
Trying to apply similar methods in higher genera inevitably confronts us with moduli
spaces which are not toroidal. This motivates the last part of this thesis, where we
construct tropicalizations of cycles on fine logarithmic schemes. The logarithmic point of
view also motivates our interpretation of tropical intersection theory as the dualization
of the intersection theory of Kato fans. This duality gives a new perspective on the
tropicalization map; namely, as the dualization of a pull-back via the characteristic
morphism of a logarithmic scheme.
Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that given a representation of a finite group over a number field decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.
Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult and different from scalar- or vector-valued autoregressions are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, so that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example where we investigate a changepoint detection problem for a stimulus response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
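To illustrate the residual-based bootstrap idea described above, the following minimal Python sketch simulates a discretized FAR(1) process, estimates the autoregressive operator by a ridge-regularized least-squares fit (a stand-in for the FPCA-based estimators studied in the thesis), and bootstraps the estimator from resampled residuals. The kernel, grid size and all parameters are illustrative assumptions, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 50, 200                      # grid points per curve, number of curves
    t = np.linspace(0, 1, m)

    # Hypothetical autoregressive operator Psi, discretized as an integral-operator matrix
    K = 0.3 * np.exp(-5.0 * (t[:, None] - t[None, :]) ** 2) / m

    def simulate_far1(n, burn=100):
        """Simulate a discretized FAR(1) process X_{k+1} = Psi(X_k) + eps_{k+1}."""
        X = np.zeros((n + burn, m))
        for k in range(1, n + burn):
            eps = np.convolve(rng.normal(size=m), np.ones(5) / 5, mode="same")  # smooth noise curve
            X[k] = K @ X[k - 1] + eps
        return X[burn:]

    def estimate_psi(X, ridge=1e-1):
        """Ridge-regularized least-squares estimate of the operator matrix."""
        Y, Z = X[1:], X[:-1]
        return Y.T @ Z @ np.linalg.inv(Z.T @ Z + ridge * np.eye(m))

    X = simulate_far1(n)
    K_hat = estimate_psi(X)
    resid = X[1:] - X[:-1] @ K_hat.T

    # Naive residual-based bootstrap of the operator estimator
    B = 200
    boot_norms = np.empty(B)
    for b in range(B):
        eps_star = resid[rng.integers(0, len(resid), size=len(resid))]
        X_star = np.zeros_like(X)
        for k in range(1, n):
            X_star[k] = K_hat @ X_star[k - 1] + eps_star[k - 1]
        K_star = estimate_psi(X_star)
        boot_norms[b] = np.linalg.norm(K_star - K_hat)   # bootstrap distribution of the operator error

    print("bootstrap 95% quantile of ||Psi* - Psi_hat||:", np.quantile(boot_norms, 0.95))

Bootstrap quantiles of this kind are the raw material from which approximate confidence bands or critical test bounds could be derived.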
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. The thesis arose in the framework of the project "Robust Risk Estimation", which, starting in July 2011, received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in Financial Mathematics (Operational and Liquidity Risk), Medicine (length of stay and cost), and Hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics.
Within the project, each of these applications raises issues that can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns the asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. This thesis contributes to overcoming these difficulties.
This work consists of three main parts. The first part focuses on the basic notions and gives an overview of existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, which are an important achievement of our project work. The second part presents a deeper analysis of the basic models and tools used to achieve the main results of the research; it is the most important part of the thesis and contains our own contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. Some of the applications mentioned above have a time structure (e.g. hydrology), therefore we equip the extreme value theory methods with time dynamics. To this end, we considered two strategies within the project. In the first, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models.
More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking-neighborhood approach we need some smoothness, therefore for the second strategy we derive smoothness of the generalized linear model in terms of L2 differentiability and give sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence into these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher-dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions.
Finally, we provide an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and point out aspects of particular importance. In the third part of the thesis we discuss three applications, operational risk, hospitalization times and hydrological river discharge data, apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.
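As a rough illustration of the robustification idea behind the reworked Kalman procedures mentioned above, the following Python sketch clips (Huberizes) the standardized innovation in the correction step of a one-dimensional Kalman filter and compares it with the classical filter under additive outliers. The model, the clipping constant and the outlier mechanism are hypothetical choices for illustration; the procedures of the thesis are considerably more elaborate.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 1D state-space model: x_k = a*x_{k-1} + v_k, y_k = x_k + w_k
    a, q, r = 0.9, 0.5, 1.0
    n = 300
    x = np.zeros(n); y = np.zeros(n)
    for k in range(1, n):
        x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(q))
        y[k] = x[k] + rng.normal(scale=np.sqrt(r))
    outliers = rng.random(n) < 0.05                    # additive outliers in the observations
    y[outliers] += rng.normal(scale=10.0, size=outliers.sum())

    def kalman(y, clip=None):
        """Kalman filter; if clip is set, the innovation is Huberized (clipped) in the correction step."""
        xf, P = 0.0, 1.0
        out = np.empty(len(y))
        for k, yk in enumerate(y):
            xp, Pp = a * xf, a * a * P + q             # prediction step
            S = Pp + r                                  # innovation variance
            innov = yk - xp
            if clip is not None:                        # robust correction step
                innov = np.clip(innov, -clip * np.sqrt(S), clip * np.sqrt(S))
            K = Pp / S
            xf, P = xp + K * innov, (1 - K) * Pp
            out[k] = xf
        return out

    classical, robust = kalman(y), kalman(y, clip=1.5)
    print("RMSE classical:", np.sqrt(np.mean((classical - x) ** 2)))
    print("RMSE robust   :", np.sqrt(np.mean((robust - x) ** 2)))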
In some processes for spinning synthetic fibers the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there is a gap between the elongation measured in experiments and the elongation obtained by numerical simulations available in the literature.
The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow and the application in turbulent spinning processes.
In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying a resulting stochastic aerodynamic drag force on the fibers then allows the simulation of the fiber dynamics modeled by a random partial differential algebraic equation system as well as a quantification of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we get an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.
The goal of this dissertation is the development and implementation of an algorithm for the computation of tropical varieties over general valued fields. The computation of tropical varieties over fields with trivial valuation is a sufficiently solved problem, for which the authors Bogart, Jensen, Speyer, Sturmfels and Thomas impressively combine classical techniques of computer algebra with constructive methods of convex geometry.
If, however, the ground field carries a non-trivial valuation, such as the field of \(p\)-adic numbers \(\mathbb{Q}_p\), conventional Groebner basis theory seemingly reaches its limits. The underlying monomial orderings are not suited to studying problems that depend on a non-trivial valuation of the coefficients. This has led to a series of works that modify standard Groebner basis theory in order to incorporate the valuation of the ground field.
In this work we present an alternative approach and show how the valuation can be emulated by means of a specially introduced variable, so that no modification of the classical tools is necessary.
In the course of this, the theory of standard bases is generalized to power series over a coefficient ring. Particular care is taken that all algorithms coincide with their classical counterparts for polynomial input data, so that established software systems can be used for practical purposes. Moreover, the construction of the Groebner fan as well as the Groebner walk technique are introduced for slightly inhomogeneous ideals. This is necessary because introducing the new variable breaks the homogeneity of the original ideal.
All algorithms have been implemented in Singular and are available as part of the official distribution. It is the first implementation that is able to compute tropical varieties with \(p\)-adic valuation. As part of this work, a Singular package for convex geometry as well as an interface to Polymake were also created.
In this dissertation, we discuss how to price American-style options. Our aim is to study and improve the regression-based Monte Carlo methods; in order to have good benchmarks to compare them with, we also study tree methods.
In the second chapter, we investigate the tree methods, first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options and collect them all in the appendix.
In the third chapter, we focus on the regression-based Monte Carlo methods, both theoretically and numerically. First, we introduce two variations, the so-called Tsitsiklis-Roy method and the Longstaff-Schwartz method. Second, we illustrate the approximation of an American option by its Bermudan counterpart. Third, we explain the sources of low and high bias. Fourth, we compare the two methods using in-the-money paths and all paths. Fifth, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present lower and upper bounds.
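A minimal Python sketch of the Longstaff-Schwartz idea referred to above: continuation values are estimated by regression on in-the-money paths and compared with the immediate payoff during backward induction. The Black-Scholes parameters, the cubic polynomial basis and the time grid are illustrative assumptions; the thesis studies these choices (basis functions, path selection, bias) in much more detail.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical Black-Scholes parameters for an American put (Bermudan approximation)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n_steps, n_paths = 50, 100_000
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate stock paths
    Z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    payoff = lambda s: np.maximum(K - s, 0.0)

    # Longstaff-Schwartz backward induction with a cubic polynomial basis on in-the-money paths
    cash = payoff(S[:, -1])
    for t in range(n_steps - 1, 0, -1):
        cash *= disc
        itm = payoff(S[:, t]) > 0
        if itm.sum() == 0:
            continue
        X = S[itm, t]
        A = np.vander(X / K, 4)                   # cubic polynomial basis in the scaled stock price
        beta = np.linalg.lstsq(A, cash[itm], rcond=None)[0]
        continuation = A @ beta
        exercise = payoff(X) > continuation       # exercise where the immediate payoff beats the estimate
        idx = np.where(itm)[0][exercise]
        cash[idx] = payoff(S[idx, t])

    price = disc * cash.mean()
    print("Longstaff-Schwartz American put estimate:", round(price, 4))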
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy and Longstaff-Schwartz methods. The support vector machine further improves the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. Tests of these formulas within the Black-Scholes model show that both the data storage and the corresponding energy consumption are reduced.
Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*} &{}(P_{1,\tau}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\} \\ &{}(P_{2,\lambda}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \{ \Phi(x) + \lambda \Psi(x) \}, \; \lambda > 0 \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: first, study the solver sets \({\rm SOL}(P_{1,\tau})\) and \({\rm SOL}(P_{2,\lambda})\) of the minimization problems; second, ensure that the minimization problems have solutions. This thesis provides contributions to both tasks. Regarding the first task, for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the set-valued curves \begin{align*} \tau \mapsto {}& {\rm SOL}(P_{1,\tau}), \; \tau \in (0,c) \\ \lambda \mapsto {}& {\rm SOL}(P_{2,\lambda}), \; \lambda \in (0,d) \end{align*} are the same up to an order-reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover, we show that the solver sets change constantly while \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
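The parameter correspondence between the two problem families can be illustrated in a smooth special case. The following Python sketch uses a quadratic data fidelity term and \(\Psi(x) = \|x\|_2^2\), for which \((P_{2,\lambda})\) has the Tikhonov closed-form solution; sweeping \(\lambda\) and recording \(\tau = \Psi(x(\lambda))\) exhibits the monotone, order-reversing link between the two parameters. This is only a toy instance and not the general setting treated in the thesis.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical quadratic example: Phi(x) = ||Ax - b||^2, Psi(x) = ||x||^2
    A = rng.normal(size=(30, 10))
    b = rng.normal(size=30)

    def solve_penalized(lam):
        """Solution of (P_{2,lambda}): argmin ||Ax - b||^2 + lam * ||x||^2 (Tikhonov closed form)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Sweep lambda and record tau = Psi(x(lambda)); the map lambda -> tau is monotone decreasing,
    # so its inverse plays the role of the order-reversing parameter change g in this smooth case.
    for lam in np.logspace(-3, 3, 13):
        x = solve_penalized(lam)
        tau = x @ x
        # By the KKT conditions, the same x solves (P_{1,tau}) with tau = Psi(x) and multiplier lam.
        print(f"lambda = {lam:9.3f}   tau = Psi(x) = {tau:8.4f}   Phi(x) = {np.sum((A @ x - b) ** 2):8.4f}")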
In the presence of lower semicontinuity, the second task is accomplished if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity.
Dropping any lower semicontinuity assumption, we also prove a theorem on the coercivity of a sum of functions.
The Wilkie model is a stochastic asset model developed by A.D. Wilkie in 1984 with the purpose of exploring the behaviour of investment factors of insurers within the United Kingdom. Even so, there has been no analysis that studies the Wilkie model in a portfolio optimization framework thus far. The Wilkie model was originally formulated for a discrete-time horizon, and we apply its concept to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtain the estimated parameters for each sub-model within the Wilkie model that suit the case of Malaysia, which permits us to analyse the results from a statistical and economic point of view. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. This continuous-time version of the Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique, consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution to the Hamilton-Jacobi equation and finally the verification theorem. In finding the optimal portfolio, we obtain the specific solution of the Hamilton-Jacobi equation and prove the solution via the verification theorem. For the simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
In this thesis, we investigate several upcoming issues occurring in the context of conceiving and building a decision support system. We elaborate new algorithms for computing representative systems with special quality guarantees, provide concepts for supporting the decision makers after a representative system was computed, and consider a methodology of combining two optimization problems.
We review the original Box-Algorithm for two objectives by Hamacher et al. (2007) and discuss several extensions regarding coverage, uniformity, the enumeration of the whole nondominated set, and necessary modifications if the underlying scalarization problem cannot be solved to optimality. In a next step, the original Box-Algorithm is extended to the case of three objective functions to compute a representative system with desired coverage error. Besides the investigation of several theoretical properties, we prove the correctness of the algorithm, derive a bound on the number of iterations needed by the algorithm to meet the desired coverage error, and propose some ideas for possible extensions.
Furthermore, we investigate the problem of selecting a subset with desired cardinality from the computed representative system, the Hypervolume Subset Selection Problem (HSSP). We provide two new formulations for the bicriteria HSSP, a linear programming formulation and a \(k\)-link shortest path formulation. For the latter formulation, we propose an algorithm for which we obtain the currently best known complexity bound for solving the bicriteria HSSP. For the tricriteria HSSP, we propose an integer programming formulation with a corresponding branch-and-bound scheme.
Moreover, we address the issue of how to present the whole set of computed representative points to the decision makers. Based on common illustration methods, we elaborate an algorithm guiding the decision makers in choosing their preferred solution.
Finally, we step back and look from a meta-level on the issue of how to combine two given optimization problems and how the resulting combinations can be related to each other. We come up with several different combined formulations and give some ideas for the practical approach.
The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).
The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.
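For reference, the Capstan (Euler-Eytelwein) relation used for the yarn friction model links the tensions on the two sides of a wrapped yarn through the wrap angle and the friction coefficient; a tiny numerical illustration with hypothetical values follows.

    import numpy as np

    # Capstan (Euler-Eytelwein) relation: T_out = T_in * exp(mu * theta)
    mu = 0.3                        # hypothetical friction coefficient
    theta = np.pi                   # wrap angle of half a turn
    T_in = 1.0                      # incoming yarn tension (arbitrary units)
    T_out = T_in * np.exp(mu * theta)
    print(f"tension amplification over a {np.degrees(theta):.0f} degree wrap: {T_out / T_in:.3f}")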
Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called generalized Appell system, is applicable to the Mittag-Leffler measures, instead of using Wick ordered polynomials. With the help of an Appell system, we introduce a test function and a distribution space. Furthermore we give characterizations of the distribution space and we characterize the weak integrable functions and the convergent sequences within the distribution space. We construct Donsker's delta in a non-Gaussian setting as an application.
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.
The overall goal of the work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized by the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\), continuum flow) or large (\(Kn > 1\), molecular flow). The region in between (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
Continuum flows are mainly simulated using commercial CFD methods, which solve the Euler equations. In the case of molecular flows one uses statistical methods, such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region the Euler equations are not adequate to model gas flows, and because of the rapid increase of particle collisions the DSMC method tends to fail as well.
Therefore, we develop a deterministic method which is suitable for simulating rarefied gas problems for any Knudsen number and appropriate for flows inside geometries with moving boundaries. The method is based on the Finite Pointset Method (FPM), a mesh-free numerical method developed at the ITWM Kaiserslautern and mainly used to solve fluid dynamical problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation and is widely used to describe rarefied flows.
The FPM-based method is implemented for one- and two-dimensional physical and velocity spaces and different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions obtained with CFD and DSMC methods, respectively.
We present a numerical scheme to simulate a moving rigid body of arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings, so that a two-way coupling is modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains: a 1-dimensional actuator problem, a 2-dimensional cavity-driven flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. In each test example we compare the numerical results with existing theory.
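The two-way coupling idea can be sketched in a strongly simplified one-dimensional setting: gas particles reflect specularly off a moving piston, and the momentum they transfer drives Newton's equation for the piston, whose updated velocity in turn changes the subsequent particle reflections. The following Python sketch uses a collisionless gas and hypothetical parameters; it is not the DSMC/Newton-Euler scheme of the thesis, only an illustration of the coupling mechanism.

    import numpy as np

    rng = np.random.default_rng(6)

    # Collisionless gas particles between a fixed wall at x = 0 and a movable piston at x = X
    n_particles, m_gas, M_body = 50_000, 1e-3, 50.0
    dt, n_steps = 1e-3, 2000
    x = rng.uniform(0.0, 1.0, n_particles)           # particle positions
    v = rng.normal(0.0, 1.0, n_particles)            # thermal velocities
    X, V = 1.0, 0.0                                   # piston position and velocity
    for _ in range(n_steps):
        x += v * dt
        X += V * dt
        # reflect at the fixed wall x = 0
        hit0 = x < 0.0
        x[hit0], v[hit0] = -x[hit0], -v[hit0]
        # collide with the moving piston: specular reflection in the piston frame
        hit = x > X
        dp = 2.0 * m_gas * (v[hit] - V)               # momentum transferred to the piston per collision
        v[hit] = 2.0 * V - v[hit]
        x[hit] = 2.0 * X - x[hit]
        V += dp.sum() / M_body                        # Newton's second law in impulse form
    print("piston position and velocity after coupling:", round(X, 4), round(V, 4))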
This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska
transaction costs.
For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula instead of the more restrictive, traditional approach via the use of quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively. In addition to the analytical results, we study a novel application of the menu cost problem in management science. We consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much
to invest into marketing.
For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach leading-order optimal strategies are constructed
and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established.
This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention on Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-Splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretization. Using NURBS as basis functions, the high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step; mesh refinement is also easily performed, since there is no need for communication with the geometry representation in a Computer-Aided Design (CAD) tool.
Quite often the domain consists of several patches where each patch is parametrized by means of NURBS, and these patches are then glued together by means of continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. Starting point of our work is the bending strip method which is a penalty approach that adds extra stiffness to the interface between adjacent patches and which is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the penalty parameter dependence. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling some regularity requirements.
The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction.
In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.
Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life - they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects - heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as to observe the individual heat sources and assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions in the electrode materials. We study two approaches - small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior on the specific model used. We also consider a softening of the active material dependent on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress known from physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities where we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method. The nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
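The time-stepping strategy described above (implicit backward Euler combined with Newton linearization of the nonlinear system) can be illustrated on a small model problem. The following Python sketch applies it to a one-dimensional nonlinear diffusion equation with a concentration-dependent diffusivity and zero-flux boundaries; the equation, grid and the finite-difference Jacobian are illustrative simplifications, not the battery models or the C++ framework of the thesis.

    import numpy as np

    # Model problem: c_t = (D(c) c_x)_x with D(c) = 1 + c^2, homogeneous Neumann boundaries
    n, dt, n_steps = 50, 1e-3, 100
    h = 1.0 / (n - 1)
    c0 = np.exp(-50 * (np.linspace(0, 1, n) - 0.5) ** 2)   # initial concentration bump
    c = c0.copy()

    D = lambda c: 1.0 + c ** 2

    def residual(c_new, c_old):
        """Backward Euler residual F(c_new) = (c_new - c_old)/dt - div(D(c_new) grad c_new)."""
        F = (c_new - c_old) / dt
        Dm = D(0.5 * (c_new[1:] + c_new[:-1]))              # face diffusivities
        flux = Dm * (c_new[1:] - c_new[:-1]) / h
        F[1:-1] -= (flux[1:] - flux[:-1]) / h
        F[0] -= flux[0] / h                                  # zero-flux boundary at the left end
        F[-1] += flux[-1] / h                                # zero-flux boundary at the right end
        return F

    for _ in range(n_steps):
        c_old, c_new = c, c.copy()
        for _ in range(20):                                  # Newton iterations with a finite-difference Jacobian
            F = residual(c_new, c_old)
            if np.linalg.norm(F) < 1e-10:
                break
            J = np.empty((n, n))
            for j in range(n):
                e = np.zeros(n); e[j] = 1e-7
                J[:, j] = (residual(c_new + e, c_old) - F) / 1e-7
            c_new -= np.linalg.solve(J, F)
        c = c_new

    print("mass conserved up to tolerance:", abs(c.sum() - c0.sum()) < 1e-6)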
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, digital cameras, etc. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight and high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode - as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases - solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which we can distinguish the complex porous structure of the electrodes the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of a periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions. We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of a random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize-first-discretize-then setting. For the discretization we use isogeometric analysis (IGA) to solve the state equation, and we search for optimal designs in a space of admissible B-spline or NURBS combinations. Thus a quite general class of functions for representing optimal shapes is available. For the gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between IGA and the geometry representation offers new ways of refining the geometry and the analysis discretization by the same means. Therefore, our main concern is to develop the optimize-first framework for isogeometric shape optimization as groundwork for both the implementation and an error analysis. Numerical examples show that this ansatz is practical, and case studies indicate that it allows local refinement.
Die Dissertation "Portfoliooptimierung im Binomialmodell" befasst sich mit der Frage, inwieweit
das Problem der optimalen Portfolioauswahl im Binomialmodell lösbar ist bzw. inwieweit
die Ergebnisse auf das stetige Modell übertragbar sind. Dabei werden neben dem
klassischen Modell ohne Kosten und ohne Veränderung der Marktsituation auch Modellerweiterungen
untersucht.
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
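The irregular convergence mentioned above is easy to reproduce. The following Python sketch prices a European call with the Cox-Ross-Rubinstein binomial tree for increasing step numbers and prints the error against the Black-Scholes reference; the parameters are arbitrary illustrative choices. The error typically oscillates and does not decrease monotonically, which is exactly the behaviour the Edgeworth-expansion-based advanced models are designed to remove.

    import numpy as np
    from math import erf, exp, log, sqrt, comb

    S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0

    def bs_call(S0, K, r, sigma, T):
        """Black-Scholes reference price of a European call."""
        d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
        return S0 * N(d1) - K * exp(-r * T) * N(d2)

    def crr_call(S0, K, r, sigma, T, n):
        """Cox-Ross-Rubinstein binomial price of a European call with n steps."""
        dt = T / n
        u = exp(sigma * sqrt(dt)); d = 1.0 / u
        p = (exp(r * dt) - d) / (u - d)               # risk-neutral up probability
        j = np.arange(n + 1)
        ST = S0 * u ** j * d ** (n - j)               # terminal stock prices
        probs = np.array([comb(n, k) for k in j], dtype=float) * p ** j * (1 - p) ** (n - j)
        return exp(-r * T) * np.sum(probs * np.maximum(ST - K, 0.0))

    ref = bs_call(S0, K, r, sigma, T)
    for n in (10, 25, 50, 100, 200, 400):
        err = crr_call(S0, K, r, sigma, T, n) - ref
        print(f"n = {n:4d}   error = {err:+.6f}")     # note the oscillating, non-monotone error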
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid for the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third-order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second-order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for a multidimensional setting.
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles of submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process since, due to the nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance (up to \(10^6\) times) of the flow over the diffusion and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the Focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of the physical and chemical processes occurring in the course of an accident is an interdisciplinary problem with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and a software tool based on it. The numerical simulations allow one to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or minimizing the consequences of possible accidents. A very large number of scenarios have to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation, and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, where possible, analysed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the approach. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.
This thesis focuses on some new aspects of continuous-time portfolio optimization using the stochastic control method.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and the resulting problem is solved explicitly for two types of intensity functions.
Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of the Vasicek short rate on the CPPI strategy has been studied by Horsky (2012). This part of the thesis extends his research by including a Markov switching parameter, and the generalization is based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem without the Money Market Account as well as for the portfolio problem with the Money Market Account.
Finally, we apply the method used in Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization with a stochastic benchmark.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
In this thesis, we combine Groebner basis techniques with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses, and combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin. However, in our experiments, about 80 percent of the Groebner basis computations give no new binary clauses. By selecting smaller and more compact input for the Groebner basis computations, we can significantly reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition, the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing a Boolean Groebner basis of the given ideal directly is inefficient when we want to eliminate most of the variables from a large system of Boolean polynomials. Therefore, we propose a more efficient approach to handle such cases. In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection. We also optimize the Buchberger-Moeller algorithm for the lexicographical ordering and compare it with Brickenstein's interpolation algorithm.
Finally, we apply Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths. For a given design, we construct an abstract model and reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\). The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering. Finally, the normal form is employed to prove the desired properties. To evaluate our approach, we verify a global property of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.
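A toy version of this verification flow, using SymPy over the rationals instead of Singular over \({\mathbb Z}_{2^k}\): each gate contributes one polynomial, the signal variables are ordered topologically so that every gate polynomial is led by its own output variable (hence the set is already a lex Groebner basis without any computation), and the specification is checked by a normal form computation. The two-gate circuit below is a made-up example, not one of the thesis benchmarks.

    from sympy import symbols, reduced

    # Variables ordered topologically: outputs first, then inputs
    y, x, a, b, c = symbols('y x a b c')

    gates = [x - a * b,        # first gate:  x = a * b   (leading term x under lex y > x > a > b > c)
             y - x * c]        # second gate: y = x * c   (leading term y)

    spec = y - a * b * c       # desired global property: y == a * b * c

    # Normal form of the specification w.r.t. the gate polynomials (multivariate division)
    _, remainder = reduced(spec, gates, y, x, a, b, c, order='lex')
    print("normal form of the specification:", remainder)   # 0 means the property holds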
Multilevel Constructions
(2014)
The thesis consists of two chapters.
The first chapter is devoted to a deep investigation of the MLMC method. In particular, we take an optimisation view of the estimate: rather than fixing the number of discretisation points \(n_i\) to be a geometric sequence, we try to find an optimal setup for \(n_i\) such that, for a fixed error, the estimate can be computed within minimal time.
In the second chapter we propose to enhance the MLMC estimate with the weak extrapolation technique. This technique helps to improve the order of weak convergence of a scheme and, as a result, reduces the computational cost of an estimate. In particular, we study the high-order weak extrapolation approach, which is known to be inefficient in the standard setting. However, a combination of MLMC and weak extrapolation yields an improvement of the MLMC estimate.
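A minimal Python sketch of a standard MLMC estimator, for orientation: coupled fine and coarse Euler-Maruyama paths of a geometric Brownian motion share the same Brownian increments, and the level estimates telescope to the fine-level expectation. The geometric choice of level sizes and all parameters are illustrative baseline assumptions; the thesis is precisely about optimizing such choices and combining them with weak extrapolation.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical SDE: dS = mu*S dt + sigma*S dW, quantity of interest E[max(S_T - K, 0)]
    S0, mu, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
    payoff = lambda s: np.maximum(s - K, 0.0)

    def euler_paths(n_paths, n_steps, dW=None):
        """Euler-Maruyama terminal values; dW may be supplied to reuse the same Brownian increments."""
        dt = T / n_steps
        if dW is None:
            dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
        S = np.full(n_paths, S0)
        for k in range(n_steps):
            S = S + mu * S * dt + sigma * S * dW[:, k]
        return S, dW

    def mlmc_level(level, n_paths, M=2):
        """One MLMC level: samples of f(fine) - f(coarse) with coupled Brownian increments."""
        n_fine = M ** (level + 1)
        fine, dW = euler_paths(n_paths, n_fine)
        if level == 0:
            return payoff(fine)
        dW_coarse = dW.reshape(n_paths, n_fine // M, M).sum(axis=2)   # aggregate consecutive increments
        coarse, _ = euler_paths(n_paths, n_fine // M, dW=dW_coarse)
        return payoff(fine) - payoff(coarse)

    # Geometric level sizes as a simple baseline (the thesis optimizes this choice)
    L_max, N0 = 5, 200_000
    estimate = sum(mlmc_level(l, N0 // 2 ** l).mean() for l in range(L_max + 1))
    print("MLMC estimate of E[f(S_T)]:", round(estimate, 4))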
Das zinsoptimierte Schuldenmanagement hat zum Ziel, eine möglichst effiziente Abwägung zwischen den erwarteten Finanzierungskosten einerseits und den Risiken für den Staatshaushalt andererseits zu finden. Um sich diesem Spannungsfeld zu nähern, schlagen wir erstmals die Brücke zwischen den Problemstellungen des Schuldenmanagements und den Methoden der zeitkontinuierlichen, dynamischen Portfoliooptimierung.
Das Schlüsselelement ist dabei eine neue Metrik zur Messung der Finanzierungskosten, die Perpetualkosten. Diese spiegeln die durchschnittlichen zukünftigen Finanzierungskosten wider und beinhalten sowohl die bereits bekannten Zinszahlungen als auch die noch unbekannten Kosten für notwendige Anschlussfinanzierungen. Daher repräsentiert die Volatilität der Perpetualkosten auch das Risiko einer bestimmten Strategie; je langfristiger eine Finanzierung ist, desto kleiner ist die Schwankungsbreite der Perpetualkosten.
Die Perpetualkosten ergeben sich als Produkt aus dem Barwert eines Schuldenportfolios und aus der vom Portfolio unabhängigen Perpetualrate. Für die Modellierung des Barwertes greifen wir auf das aus der dynamischen Portfoliooptimierung bekannte Konzept eines selbstfinanzierenden Bondportfolios zurück, das hier auf einem mehrdimensionalen affin-linearen Zinsmodell basiert. Das Wachstum des Schuldenportfolios wird dabei durch die Einbeziehung des Primärüberschusses des Staates gebremst bzw. verhindert, indem wir diesen als externen Zufluss in das selbstfinanzierende Modell aufnehmen.
Because of the variety of possible funding instruments, we do not choose their value weights as control variables but instead control the sensitivities of the portfolio with respect to different interest rate movements. In a subsequent step, optimal value weights for a wide range of funding instruments can then be derived from the optimal sensitivities. We demonstrate this exemplarily by means of rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems with methods of stochastic control theory. In both cases the expected utility of the perpetual costs is maximized. The utility functions are tailored to debt management and are, in particular, characterized by the fact that higher costs come with lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed debt or cost ceiling.
Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to retain the main advantage of Monte Carlo simulation, namely its ability to perform simulations under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the cross-entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (the zero-variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter if light- or heavy-tailed. The numerical study exhibits a greater variance reduction than the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as shown in the numerical study.
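A compact way to state the cross-entropy step described here (generic notation, assumptions ours): for the rare event \(\{L(Z) > x\}\) with risk-factor density \(f\), the zero-variance density is \(g^*(z) \propto 1_{\{L(z)>x\}} f(z)\), and the importance sampling density is taken as the member \(f_\theta\) of the chosen parametric family that minimizes the Kullback-Leibler cross-entropy to \(g^*\),
\begin{align}
\theta^* = \arg\min_{\theta} D_{\mathrm{KL}}\bigl(g^* \,\|\, f_{\theta}\bigr)
        = \arg\max_{\theta} \mathbb{E}_{f}\bigl[ 1_{\{L(Z) > x\}} \ln f_{\theta}(Z) \bigr],
\end{align}
where the expectation is estimated from pilot samples and \(L\) may be replaced by an approximation of the portfolio loss function.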
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to algorithms for estimating the probability of high portfolio losses, describing the corresponding transition only briefly. Hence, we give a detailed discussion of this transition. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.
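For reference, the two risk measures in the form typically used in this context (for a portfolio loss \(L\) with continuous distribution and confidence level \(\alpha\); notation ours):
\begin{align}
\mathrm{VaR}_{\alpha}(L) = \inf\{\, x \in \mathbb{R} : P(L > x) \le 1-\alpha \,\}, \qquad
\mathrm{CVaR}_{\alpha}(L) = \mathbb{E}\bigl[ L \mid L \ge \mathrm{VaR}_{\alpha}(L) \bigr],
\end{align}
so that estimators of \(P(L > x)\) translate into estimators of both quantities by inversion and conditional averaging.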
In 2006 Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields, and we derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case where \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what is predicted by the well-known conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and establish theoretical support for these statements.
This thesis is devoted to the computational aspects of intersection theory and enumerative geometry. The first results are a Sage package Schubert3 and a Singular library schubert.lib which both provide the key functionality necessary for computations in intersection theory and enumerative geometry. In particular, we describe an alternative method for computations in Schubert calculus via equivariant intersection theory. More concretely, we propose an explicit formula for computing the degree of Fano schemes of linear subspaces on hypersurfaces. As a special case, we also obtain an explicit formula for computing the number of linear subspaces on a general hypersurface when this number is finite. This leads to a much better performance than classical Schubert calculus.
Another result of this thesis is related to the computation of Gromov-Witten invariants. The most powerful method for computing Gromov-Witten invariants is the localization of moduli spaces of stable maps. This method was introduced by Kontsevich in 1995. It allows us to compute Gromov-Witten invariants via Bott's formula. As an insightful application, we computed the numbers of rational curves on general complete intersection Calabi-Yau threefolds in projective spaces up to degree six. The results are all in agreement with predictions made from mirror symmetry.
In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time require less testing time. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important aspect; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. The use of block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that the damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
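For context, the damage notion behind this can be sketched in standard fatigue notation (not necessarily the thesis's): under Palmgren-Miner accumulation, a loading made up of blocks with \(n_i\) cycles at stress amplitude \(S_i\) contributes
\begin{align}
D = \sum_{i} \frac{n_i}{N(S_i)},
\end{align}
where \(N(S)\) is the number of cycles to failure taken from an S-N (Woehler) curve. With block loads the cycle counts are given directly, so no cycle counting is needed, and for a smooth S-N curve \(D\) depends smoothly on the load amplitudes.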
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the interval of plane angles to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters of the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a sum of Gaussian functions to have a single maximum, similar to the unimodality conditions for GMMs given by Aprausheva et al. in [1].
Using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained in this way has a single maximum in the absence of the other Gaussian functions of the sum. The approximate location of the maximum of each cluster is used as the starting point of a fixed-point iteration on the original damage function to obtain the actual maximum total damage at each iteration.
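A minimal sketch of this fixed-point step in Python, under assumptions of ours (illustrative amplitudes, centres and widths; a single plane-angle variable theta; the merging step itself is not shown):

import numpy as np

# Gaussian approximation of the total damage, D(theta) = sum_i a_i * exp(-(theta - mu_i)^2 / (2 * s_i^2)).
# The parameters below are purely illustrative, not taken from the thesis.
a  = np.array([1.0, 0.8, 1.2])      # amplitudes
mu = np.array([0.3, 0.9, 2.0])      # centres
s  = np.array([0.20, 0.25, 0.30])   # widths

def damage(theta):
    return np.sum(a * np.exp(-(theta - mu) ** 2 / (2 * s ** 2)))

def fixed_point_max(theta, tol=1e-10, max_iter=200):
    # Stationary points of the Gaussian sum satisfy theta = sum_i(w_i * mu_i) / sum_i(w_i)
    # with weights w_i = (a_i / s_i^2) * exp(-(theta - mu_i)^2 / (2 * s_i^2)).
    for _ in range(max_iter):
        w = (a / s ** 2) * np.exp(-(theta - mu) ** 2 / (2 * s ** 2))
        theta_new = np.dot(w, mu) / np.sum(w)
        if abs(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta

# Start one iteration per cluster (the cluster centres here are assumed) and keep the best maximiser.
starts = [0.35, 2.0]
candidates = [fixed_point_max(t) for t in starts]
theta_star = max(candidates, key=damage)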
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) with the clustering approach for finding the maximum total damage requires fewer iterations and is more reliable than the discretization approach.
Pedestrian Flow Models
(2014)
There have been many crowd disasters caused by poor planning of events. Pedestrian models are useful for analysing the behavior of pedestrians in advance of an event so that no pedestrians are harmed during the event. This thesis deals with pedestrian flow models on the microscopic, hydrodynamic and scalar scales. Following Hughes' approach, which describes the crowd as a thinking fluid, we use the solution of the Eikonal equation to compute the optimal path for pedestrians. We start with the microscopic model for pedestrian flow and then derive the hydrodynamic and scalar models from it. We use particle methods to solve the governing equations. Moreover, we have coupled a mesh-free particle method to the fixed grid used for solving the Eikonal equation. We consider an example with a large number of pedestrians to investigate our models for different settings of obstacles and for different parameters. We also consider the pedestrian flow in a straight corridor and through a T-junction and compare our numerical results with experiments. A part of this work is devoted to finding a mesh-free method to solve the Eikonal equation. Most of the available methods for solving the Eikonal equation are restricted to either Cartesian or triangulated grids. In this context, we propose a mesh-free method to solve the Eikonal equation which is applicable to arbitrary grids and useful for complex geometries.
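For orientation, the Eikonal equation appearing in Hughes-type models can be written (in generic notation, not necessarily the thesis's) as
\begin{align}
\lvert \nabla \phi(x) \rvert = \frac{1}{f\bigl(\rho(x)\bigr)}, \qquad \phi = 0 \ \text{on the target boundary},
\end{align}
where \(\rho\) is the pedestrian density, \(f\) a density-dependent walking speed, and pedestrians move in the direction of \(-\nabla \phi\).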
Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. To compute it, one uses an algorithm by Zassenhaus, who introduced it in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later, Musser generalized it to the multivariate case. Subsequently, the algorithm was refined and improved.
In this work every step of the algorithm is described as well as the problems that arise in these steps.
In doing so, we restrict ourselves to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory, which is part of the computer algebra system Singular.
Besides, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to compute Hensel lifting and recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capability of the implementation is given, which includes benchmark examples as well as randomly generated input that is supposed to give an impression of the average performance.
The application behind the subject of this thesis is multiscale simulations on highly heterogeneous particle-reinforced composites with large jumps in their material coefficients. Such simulations are used, e.g., for the prediction of elastic properties. As the underlying microstructures have very complex geometries, a discretization by means of finite elements typically involves very finely resolved meshes. The latter results in discretized linear systems of more than \(10^8\) unknowns which need to be solved efficiently. However, the variation of the material coefficients even on very small scales causes most available methods to fail when solving the arising linear systems. While robust domain decomposition methods have been developed for scalar elliptic problems of multiscale character, their extension and application to 3D elasticity problems still needs to be established.
The focus of the thesis lies in the development and analysis of robust overlapping domain decomposition methods for multiscale problems in linear elasticity. The method combines corrections on local subdomains with a global correction on a coarser grid. As the robustness of the overall method is mainly determined by how well small scale features of the solution can be captured on the coarser grid levels, robust multiscale coarsening strategies need to be developed which properly transfer information between fine and coarse grids.
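A prototypical form of such a two-level overlapping preconditioner (standard additive Schwarz notation; the variants analysed in the thesis may differ) is
\begin{align}
M^{-1} = R_0^{\top} A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i,
\end{align}
where \(R_i\) restricts to the \(i\)-th overlapping subdomain, \(R_0\) to the coarse space, and \(A_i = R_i A R_i^{\top}\); robustness with respect to coefficient jumps is governed by the choice of the coarse space behind \(R_0\).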
We carry out a detailed and novel analysis of two-level overlapping domain decomposition methods for elasticity problems. The study also provides a concept for the construction of multiscale coarsening strategies to robustly solve the discretized linear systems, i.e. with iteration numbers independent of variations in Young's modulus and Poisson's ratio of the underlying composite. The theory also covers anisotropic elasticity problems and allows applications to multi-phase elastic materials with non-isotropic constituents in two and three spatial dimensions.
Moreover, we develop and construct new multiscale coarsening strategies and show why they should be preferred over standard ones on several model problems. In a parallel implementation (MPI) of the developed methods, we present applications to real composites and robustly solve discretized systems of more than \(200\) million unknowns.
This thesis deals with generalized inverses, multivariate polynomial interpolation and approximation of scattered data. Moreover, it covers the lifting scheme, which basically links the aforementioned topics. For instance, determining filters for the lifting scheme is connected to multivariate polynomial interpolation. More precisely, sets of interpolation sites are required that can be interpolated by a unique polynomial of a certain degree. In this thesis a new class of such sets is introduced and elements from this class are used to construct new and computationally more efficient filters for the lifting scheme.
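For orientation, one lifting step in its usual predict-update form (generic notation, not necessarily the thesis's):
\begin{align}
(x_{e}, x_{o}) = \operatorname{split}(x), \qquad d = x_{o} - P(x_{e}), \qquad s = x_{e} + U(d),
\end{align}
where the prediction filter \(P\) is built from polynomial interpolation at the chosen interpolation sites and \(U\) is the corresponding update filter; the new class of interpolation sets enters through the construction of \(P\).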
Furthermore, a method to approximate multidimensional scattered data is introduced which is based on the lifting scheme. A major task in this method is to solve an ordinary linear least squares problem which possesses a special structure. Exploiting this structure yields better approximations and therefore this particular least squares problem is analyzed in detail. This leads to a characterization of special generalized inverses with partially prescribed image spaces.
Many real-life problems involve multiple spatial scales. In addition to their multiscale nature, one has to take uncertainty into account. In this work we consider multiscale problems with stochastic coefficients.
We combine multiscale methods, e.g., mixed multiscale finite elements or homogenization, which are used for deterministic problems with stochastic methods, such as multi-level Monte Carlo or polynomial chaos methods.
The work is divided into three parts.
In the first two parts we study homogenization with different stochastic methods. To this end, we consider stationary elliptic diffusion equations with stochastic coefficients.
The last part is devoted to the study of mixed multiscale finite elements in combination with multi-level Monte Carlo methods; there we consider multi-phase flow and transport equations.
This thesis is separated into three main parts: Development of Gaussian and White Noise Analysis, Hamiltonian Path Integrals as White Noise Distributions, Numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis Donsker's delta function plays a key role. We investigate this generalized function in Chapter 2. Moreover, by giving a counterexample, we show that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look at generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematically rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators which are relevant for applications to Feynman integrals, such as differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of so-called convolution operators. We generalize the central homomorphy theorem to regular generalized functions.
We generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives. With the help of this, we consider products of generalized functions in Chapter 5. We show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the space of square-integrable functions.
In Chapter 5 we discuss products of generalized functions. Moreover, the Wick formula is revisited. We investigate under which conditions and to which spaces the Wick formula can be generalized. At the end of the chapter we consider the product of Donsker's delta function with a generalized function with the help of a measure transformation. Here, issues such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from Chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the T-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we show that the generalized expectation (the generating functional at zero) gives the Green's function of the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we can show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
In Chapter 8 we give an outlook on how the scaling approach, which is successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether this expression solves the Schrödinger equation; at least for quadratic potentials we obtain the correct physics.
In the last chapter, we focus on the numerical analysis of polymer chains driven by fractional Brownian motion (fBm). Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve a long-range dependence of the interactions of the monomers inside a polymer chain. A Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded volume effect into account.
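A minimal sketch of such a covariance-based discretization of fBm in Python (the parameters H, n, T are chosen for illustration only; the Metropolis step and the excluded volume interaction are omitted):

import numpy as np

# Sample one fractional Brownian motion path on a uniform grid from its covariance matrix.
H, n, T = 0.7, 200, 1.0                      # Hurst parameter, grid size, horizon (illustrative)
t = np.linspace(T / n, T, n)
# Covariance of fBm: C(s, u) = 0.5 * (s^(2H) + u^(2H) - |u - s|^(2H))
C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H) - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(C)                    # factorize the correlation structure once
path = L @ np.random.standard_normal(n)      # fBm values at the grid points; monomer positions would be built from such paths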
This thesis is concerned with tropical moduli spaces, which are an important tool in tropical enumerative geometry. The main result is a construction of tropical moduli spaces of rational tropical covers of smooth tropical curves and of tropical lines in smooth tropical surfaces. The construction of a moduli space of tropical curves in a smooth tropical variety is reduced to the case of smooth fans. Furthermore, we point out relations to intersection theory on suitable moduli spaces on algebraic curves.
Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)
This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to index 3, which makes the time integration a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. For a nonlinear model reduction to be efficient we approximate the nonlinearity by following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
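A minimal sketch of the proper orthogonal decomposition step under our own assumptions (random placeholder snapshots; a standard snapshot-POD computation, not necessarily the exact variant used in the thesis):

import numpy as np

S = np.random.rand(3000, 120)                    # snapshot matrix (placeholder data; columns would be displacement snapshots)
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # smallest r capturing 99.99% of the snapshot energy
V = U[:, :r]                                     # POD basis: the reduced model approximates u by V q with reduced coordinates q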
The main purpose of this study was to improve the physical modelling of compressed materials, especially fibrous materials. Fibrous materials are finding increasing application in industry, and most of these materials are compressed for different applications. In such situations we are interested in how the fibres are arranged, e.g. with which distribution. For given materials it is possible to obtain a three-dimensional image via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the fibre directions at given points, can be extracted from the image by other methods, it is beneficial to improve the physical properties by changing the parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere, which is as flexible as some well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, and fits some models better. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation Maximization setting. Furthermore, we introduce a non-parametric estimation step within the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
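As a brief illustration of the EM setting (generic mixture notation, ours): for a mixture \(\sum_{j=1}^{m} \pi_j f(x \mid \theta_j)\) of such directional densities, the E-step computes the responsibilities
\begin{align}
\gamma_{ij} = \frac{\pi_j\, f(x_i \mid \theta_j)}{\sum_{l=1}^{m} \pi_l\, f(x_i \mid \theta_l)},
\end{align}
and the M-step updates the weights \(\pi_j = \frac{1}{n}\sum_{i} \gamma_{ij}\) and the component parameters \(\theta_j\) by (numerically approximated) weighted maximum-likelihood estimates.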
The use of trading stops is a common practice in financial markets for a variety of reasons: it provides a simple way to control losses on a given trade, while also ensuring that profit-taking is not deferred indefinitely; and it allows opportunities to consider reallocating resources to other investments. In this thesis, it is explained why the use of stops may be desirable in certain cases.
This is done by proposing a simple objective to be optimized. Some simple and commonly used rules for the placement and use of stops are investigated, consisting of fixed or moving barriers with fixed transaction costs. It is shown how to identify optimal levels at which to set stops, and the performances of different rules and strategies are compared. Uncertainty about, and changes in, the drift parameter of the investment are also incorporated.
In the last few years a lot of work has been done on the investigation of Brownian motion with point interaction(s) in one and higher dimensions. Roughly speaking, a Brownian motion with point interaction is nothing other than a Brownian motion whose generator is perturbed by a measure supported at a single point.
The purpose of the present work is to introduce curve interactions of two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We will understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely often continuously differentiable functions with compact support in \(\mathbb{R}^{2}\) which vanish on the closed curve. We will give a full description of all these self-adjoint extensions.
In the second chapter we will prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
We will use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes which behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case, since there the process cannot move while the Brownian motion is at the point.
By establishing an approximation of a curve interaction by operators of the form Laplacian \(+V_{n}\) with "nice" potentials \(V_{n}\) we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.