In simplified ODE models of blood flow, the flow rate is governed by the pressure gradient. Taking the natural orientation of blood flow in the human body as the positive direction, a negative pressure gradient forces the valve to shut, which stops the flow through the valve: the flow rate is then zero, while the pressure is still governed by an ODE. The presence of ODEs together with algebraic constraints and sudden changes in the system description yields switched differential-algebraic equations (swDAEs). The alternating dynamics of the heart can be modelled well by means of swDAEs. Moreover, PDE models have been developed to study pulse wave propagation in arteries and veins. Connecting the heart to the vessels leads to a coupling of PDEs and swDAEs. This motivates the study of PDEs coupled with swDAEs, where the information exchange happens at the PDE boundaries: the swDAE provides boundary conditions to the PDE, and PDE outputs serve
as inputs to the swDAE. Such coupled systems occur, e.g., when modelling power grids via the telegrapher's equations with switches, water flow networks with valves, and district heating networks with rapid consumption changes. Solutions of swDAEs may contain jumps, Dirac impulses and derivatives thereof of arbitrarily high order. Since the swDAE outputs enter as boundary conditions of the PDE, a rigorous solution framework for the PDE must be developed in which jumps, Dirac impulses and their derivatives are allowed both at the PDE boundaries and in the PDE solutions. This is a wider solution class than, for instance, the functions of (small) bounded variation (BV) used in earlier work where nonlinear hyperbolic PDEs are coupled with ODEs. Similarly, solutions to switched linear PDEs with source terms have been restricted to the BV class. However, in the presence of Dirac impulses and their derivatives, BV functions cannot handle coupled systems involving DAEs with index greater than one. Therefore, hyperbolic PDEs coupled with swDAEs of index one are studied in the BV setting, whereas couplings with swDAEs of index greater than one are investigated in the distributional sense. To this end, the 1D space of piecewise-smooth distributions is extended to a 2D piecewise-smooth distributional solution framework, which allows trace evaluations at the PDE boundaries. Moreover, a relationship between solutions of the coupled system and switched delay DAEs is established. The coupling structure
in this thesis forms a rather general framework: any network in which PDEs are represented by edges and (switched) DAEs by nodes is covered by this structure. Given such a network, rescaling the spatial domains, which only modifies the coefficient matrices by constants, allows every PDE to be defined on the same interval. This leads to a single PDE whose unknown stacks the unknowns of the individual PDEs and whose coefficient matrix is block diagonal; a schematic form is sketched after this abstract. Likewise, the swDAEs are reformulated by stacking their unknowns and assembling their coefficient matrices into block diagonal form, so that all nodes of the network are expressed as a single swDAE.
The results are illustrated by numerical simulations of the power grid and simplified circulatory
system examples. Numerical results for the power grid display the evolution of jumps
and Dirac impulses caused by initial and boundary conditions as a result of instant switches.
On the other hand, the analysis and numerical results for the simplified circulatory system do not involve Dirac impulses, since such an impulse would destroy the entire system. Jumps in the flow rate do occur in the numerical results, caused by the opening and closing of valves, which is consistent with clinical and physiological findings. Using physiological parameters, the numerical results obtained in this thesis for the simplified circulatory system agree well with medical data and findings from the literature used for validation.
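The following display is a schematic illustration (not taken from the thesis) of the stacking construction described above, written for the simplified case of first-order hyperbolic systems \(\partial_t u_i + A_i \partial_x u_i = 0\) on a common interval: the \(k\) PDE unknowns are collected into one vector and the coefficient matrices into one block diagonal matrix,
\[
\partial_t u + A\,\partial_x u = 0, \qquad
u = \begin{pmatrix} u_1 \\ \vdots \\ u_k \end{pmatrix}, \qquad
A = \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_k \end{pmatrix},
\]
and the nodes' swDAEs are stacked analogously into a single swDAE with block diagonal matrices \(E_\sigma\) and \(A_\sigma\).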
Lecture notes for the course "Character Theory of finite groups".
In this thesis we study a variant of the quadrature problem for stochastic differential equations (SDEs), namely the approximation of expectations \(\mathrm{E}(f(X))\), where \(X = (X(t))_{t \in [0,1]}\) is the solution of an SDE and \(f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R}\) is a functional, mapping each realization of \(X\) into the real numbers. The distinctive feature in this work is that we consider randomized (Monte Carlo) algorithms with random bits as their only source of randomness, whereas the algorithms commonly studied in the literature are allowed to sample from the uniform distribution on the unit interval, i.e., they do have access to random numbers from \([0,1]\).
By assumption, all further operations like, e.g., arithmetic operations, evaluations of elementary functions, and oracle calls to evaluate \(f\) are considered within the real number model of computation, i.e., they are carried out exactly.
In the following, we provide a detailed description of the quadrature problem, namely we are interested in the approximation of
\begin{align*}
S(f) = \mathrm{E}(f(X))
\end{align*}
for \(X\) being the \(r\)-dimensional solution of an autonomous SDE of the form
\begin{align*}
\mathrm{d}X(t) = a(X(t)) \, \mathrm{d}t + b(X(t)) \, \mathrm{d}W(t), \quad t \in [0,1],
\end{align*}
with deterministic initial value
\begin{align*}
X(0) = x_0 \in \mathbb{R}^r,
\end{align*}
and driven by a \(d\)-dimensional standard Brownian motion \(W\). Furthermore, the drift coefficient \(a \colon \mathbb{R}^r \to \mathbb{R}^r\) and the diffusion coefficient \(b \colon \mathbb{R}^r \to \mathbb{R}^{r \times d}\) are assumed to be globally Lipschitz continuous.
For the function classes
\begin{align*}
F_{\infty} = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{\sup}\bigr\}
\end{align*}
and
\begin{align*}
F_p = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{L_p}\bigr\}, \quad 1 \leq p < \infty,
\end{align*}
we have established the following.
\(\textit{Theorem 1.}\)
There exists a random bit multilevel Monte Carlo (MLMC) algorithm \(M\) using
\[
L = L(\varepsilon,F) = \begin{cases}\lceil\log_2(\varepsilon^{-2})\rceil, &\text{if} \ F = F_p,\\
\lceil\log_2(\varepsilon^{-2}) + \log_2(\log_2(\varepsilon^{-1}))\rceil, &\text{if} \ F = F_\infty
\end{cases}
\]
and replication numbers
\[
N_\ell = N_\ell(\varepsilon,F) = \begin{cases}
\lceil{(L+1) \cdot 2^{-\ell} \cdot \varepsilon^{-2}}\rceil, & \text{if} \ F = F_p,\\
\lceil{(L+1) \cdot 2^{-\ell} \cdot \max(\ell,1) \cdot \varepsilon^{-2}}\rceil, & \text{if} \ F=F_\infty
\end{cases}
\]
for \(\ell = 0,\ldots,L\), and there exists a positive constant \(c\) such that
\begin{align*}
\mathrm{error}(M,F) = \sup_{f \in F} \bigl(\mathrm{E}\bigl[(S(f) - M(f))^2\bigr]\bigr)^{1/2} \leq c \cdot \varepsilon
\end{align*}
and
\begin{align*}
\mathrm{cost}(M,F) = \sup_{f \in F} \mathrm{E}(\mathrm{cost}(M,f)) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for every \(\varepsilon \in {]0,1/2[}\).
Hence, in terms of the \(\varepsilon\)-complexity
\begin{align*}
\mathrm{comp}(\varepsilon,F) = \inf\bigl\{\mathrm{cost}(M,F) \colon M \ \text{is a random bit MC algorithm}, \mathrm{error}(M,F) \leq \varepsilon\bigr\}
\end{align*}
we have established the upper bound
\begin{align*}
\mathrm{comp}(\varepsilon,F) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for some positive constant \(c\). That is, we have shown the same weak asymptotic upper bound as in the case of random numbers from \([0,1]\). Hence, in this sense, random bits are almost as powerful as random numbers for our computational problem.
Moreover, we present numerical results for a non-analyzed adaptive random bit MLMC Euler algorithm, in the particular cases of the Brownian motion, the geometric Brownian motion, the Ornstein-Uhlenbeck SDE and the Cox-Ingersoll-Ross SDE. We also provide a numerical comparison to the corresponding adaptive random number MLMC Euler method.
A key challenge in the analysis of the algorithm in Theorem 1 is the approximation of probability distributions by means of random bits. This problem is very closely related to the quantization problem, i.e., the optimal approximation of a given probability measure (on a separable Hilbert space) by a probability measure of finite support size.
Though we have shown that the random bit approximation of the standard normal distribution is 'harder' than the corresponding quantization problem (lower weak rate of convergence), we have been able to establish the same weak rate of convergence as for the corresponding quantization problem in the case of the distribution of a Brownian bridge on \(L_2([0,1])\), the distribution of the solution of a scalar SDE on \(L_2([0,1])\), and the distribution of a centered Gaussian random element in a separable Hilbert space.
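To make the setting more tangible, the following Python sketch illustrates a random bit multilevel Monte Carlo Euler estimator in a strongly simplified form: the functional is assumed to depend only on the terminal value \(X(1)\) (instead of the whole path), the scalar case \(r = d = 1\) is considered, and each Brownian increment is approximated by applying the inverse normal CDF to a dyadic uniform built from \(m\) random bits. This is only one possible random bit construction and is not claimed to be the algorithm analyzed in the thesis; all function and parameter names are hypothetical.

import numpy as np
from scipy.stats import norm

def random_bit_normal(m, size, rng):
    # Dyadic uniform from m random bits, shifted away from 0, then inverse normal CDF.
    bits = rng.integers(0, 2, size=(m, size))
    u = (bits * (0.5 ** np.arange(1, m + 1))[:, None]).sum(axis=0) + 0.5 ** (m + 1)
    return norm.ppf(u)

def random_bit_mlmc_euler(f, a, b, x0, L, N, m, seed=0):
    # MLMC estimate of E[f(X(1))] for dX = a(X)dt + b(X)dW with Euler steps,
    # all Brownian increments built from random bits only.
    rng = np.random.default_rng(seed)
    est = 0.0
    for level in range(L + 1):
        n_fine = 2 ** level
        dt = 1.0 / n_fine
        x_f = np.full(N[level], float(x0))
        if level == 0:
            dw = np.sqrt(dt) * random_bit_normal(m, N[level], rng)
            x_f += a(x_f) * dt + b(x_f) * dw
            est += f(x_f).mean()
        else:
            x_c = np.full(N[level], float(x0))
            for _ in range(n_fine // 2):          # one coarse step = two fine steps
                dw1 = np.sqrt(dt) * random_bit_normal(m, N[level], rng)
                dw2 = np.sqrt(dt) * random_bit_normal(m, N[level], rng)
                x_f += a(x_f) * dt + b(x_f) * dw1
                x_f += a(x_f) * dt + b(x_f) * dw2
                x_c += a(x_c) * (2 * dt) + b(x_c) * (dw1 + dw2)
            est += (f(x_f) - f(x_c)).mean()
    return est

# Example: geometric Brownian motion, payoff max(X(1) - 1, 0)
drift = lambda x: 0.05 * x
vol = lambda x: 0.2 * x
L = 6
N = [2 ** (L - level) * 100 for level in range(L + 1)]
print(random_bit_mlmc_euler(lambda x: np.maximum(x - 1.0, 0.0), drift, vol, 1.0, L, N, m=20))

In the spirit of Theorem 1, the level number L and the replication numbers N would be chosen depending on the target accuracy \(\varepsilon\) and the function class.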
Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method which is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the best Sharpe ratio assets to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters, and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new portfolio with equal weights, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework by DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where, in each cluster, the assets are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
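As an illustration of the cluster portfolio idea described above, the following Python sketch builds such a portfolio using hierarchical clustering on a correlation-based distance. The thesis considers several clustering methods, so this particular choice, as well as the function name and parameters, is only a hypothetical example (risk-free rate assumed zero).

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_portfolio_weights(returns, n_clusters):
    # returns: (T, N) array of historical asset returns.
    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(0.5 * (1.0 - corr))                  # correlation-based distance
    condensed = dist[np.triu_indices_from(dist, k=1)]   # condensed form for linkage
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_clusters, criterion="maxclust")
    # Sharpe ratio per asset (zero risk-free rate for simplicity).
    sharpe = returns.mean(axis=0) / returns.std(axis=0, ddof=1)
    picks = [int(np.argmax(np.where(labels == c, sharpe, -np.inf)))
             for c in np.unique(labels)]
    weights = np.zeros(returns.shape[1])
    weights[picks] = 1.0 / len(picks)                   # equal weights on the picked assets
    return weights

The resulting weight vector puts weight 1/k on the best-Sharpe-ratio asset of each of the k clusters and zero weight elsewhere.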
Synapses are connections between different nerve cells that form an essential link in neural signal transmission. It is generally distinguished between electrical and chemical synapses, where chemical synapses are more common in the human brain and are also the type we deal with in this work.
In chemical synapses, small container-like objects called vesicles fill with neurotransmitters and expel them from the cell during synaptic transmission. This process is vital for communication between neurons. However, to the best of our knowledge, no mathematical models that take different filling states of the vesicles into account had been developed before this thesis was written.
In this thesis we propose a novel mathematical model of synaptic transmission at chemical synapses which includes a description of vesicles in different filling states. The model consists of a transport equation (for the vesicle growth process) plus three ordinary differential equations (ODEs) and focuses on the presynapse and synaptic cleft.
The well-posedness is proved in detail for this partial differential equation (PDE) system. We also propose a few different variations and related models. In particular, an ODE system is derived and a delay differential equation (DDE) system is formulated. We then use nonlinear optimization methods for data fitting to test some of the models on data made available to us by the Animal Physiology group at TU Kaiserslautern.
Cohomology of Groups
(2020)
In a recent paper, G. Malle and G. Robinson proposed a modular analogue to Brauer's famous \( k(B) \)-conjecture. If \( B \) is a \( p \)-block of a finite group with defect group \( D \), then they conjecture that \( l(B) \leq p^r \), where \( r \) is the sectional \( p \)-rank of \( D \). Since this conjecture is relatively new, there is obviously still a lot of work to do. This thesis is concerned with proving their conjecture for the finite groups of exceptional Lie type.
Elementare Zahlentheorie
(2020)
LinTim is a scientific software toolbox that has been under development since 2007, giving the possibility to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope.
This document is the documentation for version 2020.02.
For more information, see https://www.lintim.net
Einführung in die Algebra
(2020)
On the complexity and approximability of optimization problems with Minimum Quantity Constraints
(2020)
During the last couple of years, there has been a variety of publications on the topic of
minimum quantity constraints. In general, a minimum quantity constraint is a lower bound
constraint on an entity of an optimization problem that only has to be fulfilled if the entity is
“used” in the respective solution. For example, if a minimum quantity \(q_e\) is defined on an
edge \(e\) of a flow network, the edge flow on \(e\) may either be \(0\) or at least \(q_e\) units of flow.
Minimum quantity constraints have already been applied to problem classes such as flow, bin
packing, assignment, scheduling and matching problems. A result that is common to all these
problem classes is that in the majority of cases problems with minimum quantity constraints
are NP-hard, even if the problem without minimum quantity constraints but with fixed lower
bounds can be solved in polynomial time. For instance, the maximum flow problem is known
to be solvable in polynomial time, but becomes NP-hard once minimum quantity constraints
are added.
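To make the flow example concrete, a minimum quantity constraint on an edge \(e\) with capacity \(u_e\) is commonly formalized via a binary activation variable (this is a standard mixed-integer formulation, not necessarily the one used in the thesis):
\[
q_e\, y_e \;\leq\; x_e \;\leq\; u_e\, y_e, \qquad y_e \in \{0,1\},
\]
so that the edge flow \(x_e\) is either \(0\) (if \(y_e = 0\)) or at least \(q_e\) (if \(y_e = 1\)).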
In this thesis we consider flow, bin packing, scheduling and matching problems with minimum
quantity constraints. For each of these problem classes we provide a summary of the
definitions and results that exist to date. In addition, we define new problems by applying
minimum quantity constraints to the maximum-weight b-matching problem and to open
shop scheduling problems. We contribute results to each of the four problem classes: We
show NP-hardness for a variety of problems with minimum quantity constraints that have
not been considered so far. If possible, we restrict NP-hard problems to special cases that
can be solved in polynomial time. In addition, we consider approximability of the problems:
For most problems it turns out that, unless P=NP, there cannot be any polynomial-time
approximation algorithm. Hence, we consider bicriteria approximation algorithms that allow
the constraints of the problem to be violated up to a certain degree. This approach proves to
be very helpful and we provide a polynomial-time bicriteria approximation algorithm for at
least one problem of each of the four problem classes we consider. For problems defined on
graphs, the class of series parallel graphs supports this approach very well.
We end the thesis with a summary of the results and several suggestions for future research
on minimum quantity constraints.
This thesis introduces a novel deformation method for computational meshes. It is based on the numerical path following for the equations of nonlinear elasticity. By employing a logarithmic variation of the neo-Hookean hyperelastic material law, the method guarantees that the mesh elements do not become inverted and remain well-shaped. In order to demonstrate the performance of the method, this thesis addresses two areas of active research in isogeometric analysis: volumetric domain parametrization and fluid-structure interaction. The former concerns itself with the construction of a parametrization for a given computational domain provided only a parametrization of the domain’s boundary. The proposed mesh deformation method gives rise to a novel solution approach to this problem. Within it, the domain parametrization is constructed as a deformed configuration of a simplified domain. In order to obtain the simplified domain, the boundary of the target domain is projected in the \(L^2\)-sense onto a coarse NURBS basis. Then, the Coons patch is applied to parametrize the simplified domain. As a range of 2D and 3D examples demonstrates, the mesh deformation approach is able to produce high-quality parametrizations for complex domains where many state-of-the-art methods either fail or become unstable and inefficient. In the context of fluid-structure interaction, the proposed mesh deformation method is applied to robustly update the computational mesh in situations when the fluid domain undergoes large deformations. In comparison to the state-of-the-art mesh update methods, it is able to handle larger deformations and does not result in an eventual reduction of mesh quality. The performance of the method is demonstrated on a classic 2D fluid-structure interaction benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
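For orientation, a commonly used logarithmic variant of the neo-Hookean stored energy density, which prevents element inversion because the energy blows up as \(J = \det F \to 0^+\), is
\[
W(F) = \frac{\mu}{2}\bigl(\operatorname{tr}(F^{\mathsf{T}} F) - d\bigr) - \mu \ln J + \frac{\lambda}{2}(\ln J)^2,
\]
with deformation gradient \(F\), spatial dimension \(d\) and Lamé parameters \(\mu, \lambda\); this is a standard form from the hyperelasticity literature, and the exact variant employed in the thesis may differ.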
The famous Mather-Yau theorem in singularity theory yields a bijection between isomorphism classes of germs of isolated hypersurface singularities and their respective Tjurina algebras.
This result has been generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, the objective of this thesis is to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result of Gaffney and Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, using the theory of graded analytic algebras by G. Scheja and H. Wiebe. We apply this theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin. We deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity play an important role in the investigation of properties of the aforementioned singularities. Both notions were introduced by K. Saito in 1980. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm which decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.
In this thesis, we present the basic concepts of isogeometric analysis (IGA) and we consider Poisson's equation as model problem. Since in IGA the physical domain is parametrized via a geometry function that goes from a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as the scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, that we call scaled boundary isogeometric analysis (SB-IGA). The SB-IGA derives the weak form of partial differential equations in a different way from the standard IGA. For the discretization, i.e., the projection onto a finite-dimensional space, we choose Galerkin's method in both cases. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between the standard IGA, when it makes use of an SB-parametrization,
and the SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
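As a minimal example of such a parametrization (assuming a star-shaped planar domain; this sketch is not taken from the thesis), a scaled boundary parametrization with scaling center \(x_0 \in \Omega\) and boundary curve \(\gamma \colon [0,1] \to \partial\Omega\) can be written as
\[
\mathbf{F}(\xi, s) = x_0 + \xi\,\bigl(\gamma(s) - x_0\bigr), \qquad (\xi, s) \in [0,1]^2,
\]
which reduces to polar coordinates when \(x_0\) is the origin and \(\gamma\) parametrizes a circle.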
Fibre reinforced polymers (FRPs) are among the newest and most modern materials. In FRPs, a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, compared to its weight, very strong.
The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs we need to know the fibre length distribution in the resulting material. The classic method for this is ashing, where a sample of the material is burned and thereby destroyed. Instead, we look at CT images of the material. In the first part we assume that a full fibre segmentation is available, so that a cylinder can be fitted to each individual fibre. In this setting we identified two problems: sampling bias and censoring.
Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To solve this problem we used a reweighted fibre length distribution; the weights depend on the sampling rule used.
For the censoring we used an EM algorithm. The EM algorithm is a standard tool to obtain a maximum likelihood estimator in the presence of missing or censored data.
For this setting we deduced conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further found conditions such that, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.
Since obtaining a full fibre segmentation is hard, we further looked at the fibre endpoint process. The fibre endpoint process can be modelled as a Neyman-Scott cluster process. Using this model we derived a formula for the reduced second moment measure of this process and used it to obtain an estimator for the fibre length distribution.
We investigated all estimators using simulation studies, especially their performance in the case of non-overlapping fibres.
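One standard way to write the sampling-bias correction mentioned above is a Horvitz-Thompson type reweighting, stated here in generic form rather than as the thesis' exact estimator: if fibres are observed with inclusion probability proportional to a known weight \(w(\ell)\) determined by the sampling rule, the fibre length distribution can be estimated from the observed lengths \(\ell_1,\ldots,\ell_n\) by
\[
\hat{F}(t) \;=\; \frac{\sum_{i=1}^{n} \mathbf{1}\{\ell_i \leq t\} \,/\, w(\ell_i)}{\sum_{i=1}^{n} 1 / w(\ell_i)},
\]
i.e., each observed fibre is down-weighted according to how likely it was to be sampled.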
Operator semigroups and infinite dimensional analysis applied to problems from mathematical physics
(2020)
In this dissertation we treat several problems from mathematical physics via methods from functional analysis and probability theory and in particular operator semigroups. The thesis consists thematically of two parts.
In the first part we consider so-called generalized stochastic Hamiltonian systems. These are generalizations of Langevin dynamics which describe interacting particles moving in a surrounding medium. From a mathematical point of view these systems are stochastic differential equations with a degenerate diffusion coefficient. We construct weak solutions of these equations via the corresponding martingale problem. To this end, we prove essential m-dissipativity of the degenerate and non-sectorial Itô differential operator. Further, we apply results from analytic and probabilistic potential theory to obtain an associated Markov process. Afterwards we show our main result, the convergence in law of the positions of the particles in the overdamped regime, the so-called overdamped limit, to a distorted Brownian motion. To this end, we show convergence of the associated operator semigroups in the framework of Kuwae-Shioya. Furthermore, we establish a tightness result for the approximations which, together with the convergence of the semigroups, proves weak convergence of the laws.
In the second part we deal with problems from infinite-dimensional analysis. Three different issues are considered. The first one is an improvement of a characterization theorem for the so-called regular test functions and distributions of white noise analysis. As an application we analyze a stochastic transport equation in terms of the regularity of its solution in the space of regular distributions. The last two problems are from the field of relativistic quantum field theory. In the first one the \(\Phi^4_3\)-model of quantum field theory is under consideration. We show that the Schwinger functions of this model have a representation as the moments of a positive Hida distribution from white noise analysis. In the last chapter we construct a non-trivial relativistic quantum field in arbitrary space-time dimension. The field is given via Schwinger functions, for which we establish all axioms of Osterwalder and Schrader. Via the reconstruction theorem of Osterwalder and Schrader, this yields a unique relativistic quantum field. The Schwinger functions are given as the moments of a non-Gaussian measure on the space of tempered distributions. We obtain the measure as a superposition of Gaussian measures. In particular, this measure is itself non-Gaussian, which implies that the field under consideration is not a generalized free field.
We study a multi-scale model for growth of malignant gliomas in the human brain.
Interactions of individual glioma cells with their environment determine the gross tumor shape.
We connect models on different time and length scales to derive a practical description of tumor growth that takes these microscopic interactions into account.
From a simple subcellular model for haptotactic interactions of glioma cells with the white matter we derive a microscopic particle system, which leads to a meso-scale model for the distribution of particles, and finally to a macroscopic description of the cell density.
The main body of this work is dedicated to the development and study of numerical methods adequate for the meso-scale transport model and its transition to the macroscopic limit.
We propose a model for glioma patterns in a microlocal tumor environment under
the influence of acidity, angiogenesis, and tissue anisotropy. The bottom-up model deduction
eventually leads to a system of reaction–diffusion–taxis equations for glioma and endothelial cell
population densities, of which the former involves flux limitation both in the self-diffusion and taxis
terms. The model extends a recently introduced (Kumar, Li and Surulescu, 2020) description of
glioma pseudopalisade formation with the aim of studying the effect of hypoxia-induced tumor
vascularization on the establishment and maintenance of these histological patterns which are typical
for high-grade brain cancer. Numerical simulations of the population level dynamics are performed
to investigate several model scenarios containing this and further effects.
Building a step counter from an Arduino microcontroller and a motion sensor is an exciting technology project. We explain the basic idea behind product-oriented modelling and the manifold ways in which the problem can be tackled. In addition, the technical details of the hardware used are discussed in order to enable a quick start into the topic.
LinTim is a scientific software toolbox that has been under development since 2007, giving the possibility to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2020.12. For more information, see https://www.lintim.net
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases, such as cancer metastasis. For the migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating the contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second order analysis which we have defined as Integral method, Ellipsoid method, and Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns motivated by our application to real 3D data coming from glaciology. Note that directional analysis of 3D data is not so prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and consequently to give a correct interpretation of the climate information that can be found by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
Model uncertainty is a challenge that is inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures in general take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise since a strategy which is optimal under the approximating model might perform rather poorly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.
Various physical phenomena with sudden transients that result in structural changes can be modeled via
switched nonlinear differential-algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x), \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(x \mapsto g_p(x)\) is a nonlinear mapping, \(f_p \colon \mathbb{R} \to \mathbb{R}^n\), \(p \in \{1,\ldots,P\}\), \(P \in \mathbb{N}\), and \(\sigma \colon \mathbb{R} \to \{1,\ldots,P\}\) is the switching signal.
Two common related tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between solutions of the above (DAE) and solutions of related
partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a
distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the
nonlinear case \(g(x) \not \equiv 0\); contributions to Task 2 are also given for
switched nonlinear DAEs arising when modeling sudden transients in water
distribution networks. In addition, this thesis contains the following further
contributions:
The notion of structured switched nonlinear DAEs is introduced,
allowing also non-regular distributions as solutions. This extends a previous
framework that allowed only piecewise-smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of the solution within the space of piecewise-smooth distributions. The main
condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
Another contribution is the introduction of these switched nonlinear DAEs
as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also in the case of fibrosis of the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by an active vessel growing.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
It is already known that the most efficient mechanism of vessel growing forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growing. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to that in continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved by using a viscosity solution method.
In this dissertation we apply financial mathematical modelling to electricity markets. Electricity is different from any other underlying of financial contracts: it is not storable. This means that electrical energy in one time point cannot be transferred to another. As a consequence, power contracts with disjoint delivery time spans basically have a different underlying. The main idea throughout this thesis is exactly this two-dimensionality of time: every electricity contract is not only characterized by its trading time but also by its delivery time.
The basis of this dissertation are four scientific papers corresponding to the Chapters 3 to 6, two of which have already been published in peer-reviewed journals. Throughout this thesis two model classes play a significant role: factor models and structural models. All ideas are applied to or supported by these two model classes. All empirical studies in this dissertation are conducted on electricity price data from the German market and Chapter 4 in particular studies an intraday derivative unique to the German market. Therefore, electricity market design is introduced by the example of Germany in Chapter 1. Subsequently, Chapter 2 introduces the general mathematical theory necessary for modelling electricity prices, such as Lévy processes and the Esscher transform. This chapter is the mathematical basis of the Chapters 3 to 6.
Chapter 3 studies factor models applied to the German day-ahead spot prices. We introduce a qualitative measure for seasonality functions based on three requirements. Furthermore, we introduce a relation of factor models to ARMA processes, which induces a new method to estimate the mean reversion speed.
Chapter 4 conducts a theoretical and empirical study of a pricing method for a new electricity derivative: the German intraday cap and floor futures. We introduce the general theory of derivative pricing and propose a method based on the Hull-White model of interest rate modelling, which is a one-factor model. We include week futures prices to generate a price forward curve (PFC), which is then used instead of a fixed deterministic seasonality function. The idea that we can combine all market prices, and in particular futures prices, to improve the model quality also plays the major role in Chapter 5 and Chapter 6.
In Chapter 5 we develop a Heath-Jarrow-Morton (HJM) framework that models intraday, day-ahead, and futures prices. This approach is based on two stochastic processes motivated by economic interpretations and separates the stochastic dynamics in trading and delivery time. Furthermore, this framework allows for the use of classical day-ahead spot price models such as the ones of Schwartz and Smith (2000), Lucia and Schwartz (2002) and includes many model classes such as structural models and factor models.
Chapter 6 unifies the classical theory of storage and the concept of a risk premium through the introduction of an unobservable intrinsic electricity price. Since all tradable electricity contracts are derivatives of this actual intrinsic price, their prices should all be derived as conditional expectation under the risk-neutral measure. Through the intrinsic electricity price we develop a framework, which also includes many existing modelling approaches, such as the HJM framework of Chapter 5.
Many loads acting on a vehicle depend on the condition and quality of the roads
traveled as well as on the driving style of the motorist. Thus, during vehicle development,
good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics from travel surveys are integrated to guarantee reasonable
trip lengths. Population figures are used to estimate the number of vehicles in
the contained administrative units. The creation of thousands of such geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters
are modeled as the main user group, conducting daily drives to work as well as
additional leisure and shopping trips during the workweek. For the latter, taxis are
considered as an example of passenger car usage. The model of light-duty commercial
vehicles is split into two types of driving patterns, stars and tours, and into
the common traffic classes of long-distance, local and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical
data are presented in detail. Examples for the evaluation of routes based
on topographical factors and speed profiles comparing the influence of the driving
style are included.
The MINT-EC-Girls-Camp: Math-Talent-School is an event initiated by the Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM) and regularly carried out as a cooperation between the Felix-Klein-Zentrum für Mathematik and the Verein mathematisch-naturwissenschaftlicher Excellence-Center an Schulen e.V. (Verein MINT-EC). The methodological and didactic design of the Math-Talent-Schools is provided by the Kompetenzzentrum für Mathematische Modellierung in MINT-Projekten in der Schule (KOMMS), a scientific institution of the Department of Mathematics at Technische Universität Kaiserslautern. The content-related and organisational execution is handled by the Fraunhofer ITWM in close coordination and cooperation between scientists of the university and of the Fraunhofer ITWM. The MINT-EC-Girls-Camp: Math-Talent-School aims to give female students interested in mathematics an insight into the working world of mathematicians. In this article we present the Math-Talent-School. To this end, the mathematical and didactic backgrounds of the projects are examined, the course of the event is described, and a conclusion is drawn.
Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly observed in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than the common magnetostrictive materials. These materials have wide application areas: They are used as variable-stiffness devices, as sensors and actuators in mechanical systems or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
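In the linear, reversible regime assumed above, a commonly used form of the coupled constitutive equations is (conventions vary; this is a generic textbook form, not necessarily the one derived in the thesis):
\[
\boldsymbol{\varepsilon} = \mathbf{s}^{H} \boldsymbol{\sigma} + \mathbf{d}^{\mathsf{T}} \mathbf{H}, \qquad
\mathbf{B} = \mathbf{d}\, \boldsymbol{\sigma} + \boldsymbol{\mu}^{\sigma} \mathbf{H},
\]
where \(\boldsymbol{\varepsilon}\) is the strain, \(\boldsymbol{\sigma}\) the stress, \(\mathbf{H}\) the magnetic field, \(\mathbf{B}\) the magnetic flux density, \(\mathbf{s}^{H}\) the compliance at constant field, \(\boldsymbol{\mu}^{\sigma}\) the permeability at constant stress and \(\mathbf{d}\) the magnetostrictive coupling matrix. Whether one works with these variables or with their duals corresponds to the distinction between the energy and coenergy formulations mentioned above and changes the structure of the coupled problem.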
The presented models are supplemented with numerical simulations carried out with MATLAB for different examples, including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in the state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.
A distributional solution framework is developed for systems consisting of linear hyperbolic partial differential equations (PDEs) and switched differential-algebraic equations (DAEs) which are coupled via boundary conditions. The unique solvability is then characterized in terms of a switched delay DAE. The theory is illustrated with an example of electric power lines modeled by the telegraph equations coupled via a switching transformer, where simulations confirm the predicted impulsive solutions.
We show by means of several examples how numerical simulations can be created in spreadsheet programs (here specifically in Excel). These can be used, for example, in the context of mathematical modelling.
The examples comprise a model for the spread of diseases, the trajectory of a football taking air resistance into account, a Monte Carlo simulation for the experimental determination of pi, a Monte Carlo simulation of a shuffled deck of cards, and the modelling of fuel prices with a price trend and noise.
This article describes a learning environment for lower and middle secondary school students with a focus on mathematics. The topic of this learning environment is the simulation of escape processes in the context of building evacuations. The concept of a cellular automaton is conveyed without requiring or applying any programming skills. Using this particular simulation tool of the cellular automaton, properties, characteristic quantities as well as advantages and disadvantages of simulations in general are discussed. These include, among other things, experimental data acquisition, the choice of model parameters, the discretization of the temporal and spatial domain together with the (discretization) errors that inevitably arise, the algorithmic steps of a simulation in the form of elementary instructions, the storage and visualization of simulation data, and the interpretation and critical discussion of simulation results. The presented learning environment allows numerous variations on further aspects of the topic of "evacuation simulation" and thus also offers manifold possibilities for differentiation.
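As a rough illustration of the cellular automaton concept described above (this is a hypothetical Python sketch and not part of the classroom material; it assumes a precomputed distance-to-exit field, a wall border around the grid, and sequential updates), a single simulation step could look as follows:

import numpy as np

# Grid codes: 0 = free cell, 1 = wall, 2 = exit; the grid is assumed to have a wall border.
def evacuation_step(grid, pedestrians, dist_to_exit, rng):
    # One sequential update: each pedestrian moves to the free neighbouring cell
    # with the smallest (precomputed) distance to an exit; ties are broken randomly.
    occupied = set(pedestrians)
    remaining = []
    for (r, c) in pedestrians:
        if grid[r, c] == 2:                       # pedestrian has reached an exit
            occupied.discard((r, c))
            continue
        candidates = [(r, c)]                     # staying put is always allowed
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if grid[nr, nc] != 1 and (nr, nc) not in occupied:
                candidates.append((nr, nc))
        best = min(dist_to_exit[cell] for cell in candidates)
        best_cells = [cell for cell in candidates if dist_to_exit[cell] == best]
        target = best_cells[rng.integers(len(best_cells))]
        occupied.discard((r, c))
        occupied.add(target)
        remaining.append(target)
    return remaining

Repeating such steps until no pedestrians remain and counting the steps gives a simple estimate of the evacuation time, and it directly exposes the discretization of time and space discussed above.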
In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), a Sylow \(p\)-subgroup of \(F_4(q)\), \(q = p^n\), \(p\) a prime, for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Sadly, the projective characters of \(F_4(q)\) we obtained were not sufficient to give the shape of the entire decomposition matrix.
In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models that include
discrete dividend payments, and develop a model which also takes early
announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
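To sketch the idea behind this approach (stated here in a generic textbook form, not necessarily with the thesis' exact notation): for European calls and puts with strike \(K\) and maturity \(T\) on a stock with spot price \(S_0\), put-call parity in the presence of dividends reads
\[
C(K,T) - P(K,T) \;=\; \bigl(S_0 - D_{0,T}\bigr) - K\, B(0,T),
\]
where \(D_{0,T}\) is the present value of the dividends paid until \(T\) and \(B(0,T)\) is the discount factor. For each maturity, a linear regression of the observed price differences \(C(K,T) - P(K,T)\) on the strikes \(K\) therefore recovers the market-implied discount factor from the slope and the dividend present value from the intercept.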
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called
simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend-paying stock. In the case of the logarithmic utility function, we show that the optimal strategy is no longer constant but is connected to the Merton strategy. Additionally, we solve a special optimal
consumption problem, where the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal wealth problem.
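For reference, the classical Merton strategy referred to above invests, in the case of logarithmic utility in a standard Black-Scholes market without dividends, the constant wealth fraction
\[
\pi^{*} = \frac{\mu - r}{\sigma^{2}}
\]
in the stock (\(\mu\) drift, \(r\) interest rate, \(\sigma\) volatility); the thesis shows how discrete dividends make the optimal fraction non-constant while keeping it connected to this expression.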
Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that are characteristic of the microstructure can be simulated, and their effective elastic properties are then used as a homogeneous material model on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the mixing rules, as well as the convergence of the method, on two- and three-dimensional examples.
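In common notation (not necessarily that of the thesis), the periodic Lippmann-Schwinger equation solved by this class of methods reads
\[
\varepsilon(x) \;=\; E \;-\; \bigl(\Gamma^{0} \ast (\mathbb{C}-\mathbb{C}^{0}):\varepsilon\bigr)(x),
\]
where \(E\) is the prescribed macroscopic strain, \(\mathbb{C}\) the heterogeneous stiffness, \(\mathbb{C}^{0}\) a homogeneous reference stiffness and \(\Gamma^{0}\) the associated periodic Green operator; Moulinec and Suquet solve this fixed-point equation iteratively, applying \(\Gamma^{0}\) in Fourier space via the FFT.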
References
[1] S. Brisard and L. Dormieux. “FFT-based methods for the mechanics of composites:
A general variational framework”. In: Computational Materials Science 49.3 (2010),
pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. “A numerical method for computing the overall response
of nonlinear composites with complex microstructure”. In: Computer Methods in
Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.
In the present master’s thesis we investigate the connection between derivations and
homogeneities of complete analytic algebras. We prove a theorem which describes a specific set of generators for the module of those derivations of an analytic algebra \(R\) that map the maximal ideal of \(R\) into itself. It turns out that this set has a structure similar to a Cartan subalgebra and contains information regarding multi-homogeneity. In order to prove
this theorem, we extend the notion of grading by Scheja and Wiebe to projective systems and state the connection between multi-gradings and pairwise
commuting diagonalizable derivations. We prove a theorem similar to Cartan’s Conjugacy Theorem in the setup of infinite-dimensional Lie algebras, which arise as projective limits of finite-dimensional Lie algebras. Using this result, we can show that the structure of the aforementioned set of generators is an intrinsic property of the analytic algebra. At the end we state an algorithm, which is theoretically able to compute the maximal multi-homogeneity of a complete analytic algebra.
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution for the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the Aluminium-Copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how we can estimate the parameters of the volume-distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimations are within acceptable agreement with the parameters estimated from three-dimensional image data
as well as with the parameters of the model.
Multifacility location problems arise in many real-world applications. Often, the facilities can only be placed in feasible regions such as development or industrial areas. In this paper we show the existence of a finite dominating set (FDS) for the planar multifacility location problem with polyhedral gauges as distance functions, and polyhedral feasible regions, if the interacting facilities form a tree. As an application we show how to solve the planar 2-hub location problem in polynomial time. This approach will yield an ε-approximation for the Euclidean norm case that is polynomial in the input data and 1/ε.
In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator for pricing path dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique for path dependent options
under the Heston model is the main contribution of this thesis.
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty in the implementation of the HP estimator lies in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among the crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present the numerical results for at-the-money, in-the-money and out-of-the-money barrier options. As the numerical results show, the HP estimator is superior to the other methods for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator for pricing bond options is a promising contribution. As the approximating
Vasicek process does not seem to be as good as the deterministic volatility process in the
Heston setting, the performance of the HP estimator in the CIR model is only relatively
good. However, for the FV model the variance reduction provided by the HP estimator is
again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
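The HP estimator itself evaluates a closed-form price of an approximating model along the simulated paths; the following minimal Python sketch only illustrates the general control-variate principle it is interpreted as, here for a European call in a plain Black-Scholes setting (all parameter values are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)
    s0, k, r, sigma, t, n = 100.0, 100.0, 0.03, 0.2, 1.0, 100_000

    # simulate terminal stock prices under geometric Brownian motion
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)

    # control variate: the discounted terminal stock price, whose expectation is s0
    control = np.exp(-r * t) * st
    beta = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
    cv_estimate = payoff.mean() - beta * (control.mean() - s0)

    print("crude Monte Carlo:", payoff.mean())
    print("with control variate:", cv_estimate)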
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely related to probability density estimation. As a by-product of our study, we show that the difference is asymptotically negligible regarding the choice of good bandwidths, and hence we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for kernel density estimates. Cross-validation is a popular one and is frequently proposed to estimate the optimal bandwidth. However, if the sample size is very large, it becomes computationally expensive. In material science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
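As a reminder (in the classical one-dimensional form; the thesis works with the 2- and 3-dimensional analogues), the asymptotically optimal global bandwidth of a kernel density estimate with kernel \(K\), sample size \(n\) and density \(f\) is
\[
h_{\mathrm{opt}} \;=\; \left( \frac{R(K)}{\mu_2(K)^{2}\, R(f'')\, n} \right)^{1/5},
\qquad R(g)=\int g(x)^{2}\,dx,\quad \mu_2(K)=\int x^{2}K(x)\,dx,
\]
and a plug-in method replaces the unknown curvature term \(R(f'')\) by an estimate.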
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidths for density and intensity estimation, with a focus on 2- and 3-dimensional data. It is based on a detailed asymptotic analysis of the estimators of the intensity function and of its second
derivatives and integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are utilised to determine
the exact number of iteration steps and some tuning parameters. For
both the global and the local case, fewer than 10 iterations suffice. Simulation studies show that the intensity estimated with local bandwidths indicates the variation of the local intensity better than the estimate with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
In this paper, we demonstrate the power of functional data models for a statistical analysis of stimulus-response experiments which is a quite natural way to look at this kind of data and which makes use of the full information available. In particular, we focus on the detection of a change in the mean of the response in a series of stimulus-response curves where we also take into account dependence in time.
In this article a new numerical solver for simulations of district heating networks is presented. The numerical method applies the local time stepping introduced in [11] to networks of linear advection equations. In combination with the high order approach of [4] an accurate and very efficient scheme is developed. In several numerical test cases the advantages for simulations of district heating networks are shown.
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role for the cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting among others for integrin-mediated movement of cancer cells in the brain tissue, and the integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three dimensional numerical simulations for the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this aim, we present programmes for the data processing taking the raw medical data and processing it to the form to be included in the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. It leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization representing the involved filtrations by weight vectors.
Optimal control of partial differential equations is an important task in applied mathematics where it is used in order to optimize, for example, industrial or medical processes. In this thesis we investigate an optimal control problem with tracking type cost functional for the Cattaneo equation with distributed control, that is, \(\tau y_{tt} + y_t - \Delta y = u\). Our focus is on the theoretical and numerical analysis of the limit process \(\tau \to 0\) where we prove the convergence of solutions of the Cattaneo equation to solutions of the heat equation.
We start by deriving both the Cattaneo and the classical heat equation as well as introducing our notation and some functional analytic background. Afterwards, we prove the well-posedness of the Cattaneo equation for homogeneous Dirichlet boundary conditions, that is, we show the existence and uniqueness of a weak solution together with its continuous dependence on the data. We need this in the following, where we investigate the optimal control problem for the Cattaneo equation: We show the existence and uniqueness of a global minimizer for an optimal control problem with tracking type cost functional and the Cattaneo equation as a constraint. Subsequently, we do an asymptotic analysis for \(\tau \to 0\) for both the forward equation and the aforementioned optimal control problem and show that the solutions of these problems for the Cattaneo equation converge strongly to the ones for the heat equation. Finally, we investigate these problems numerically, where we examine the different behaviour of the models and also consider the limit \(\tau \to 0\), suggesting a linear convergence rate.
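In a common notation (the regularization parameter \(\lambda\) and the target \(y_d\) are generic symbols, not necessarily those used in the thesis), the tracking-type problem studied here has the form
\[
\min_{u}\; \frac{1}{2}\int_0^T\!\!\int_\Omega \bigl(y - y_d\bigr)^2 \,dx\,dt \;+\; \frac{\lambda}{2}\int_0^T\!\!\int_\Omega u^2 \,dx\,dt
\quad\text{subject to}\quad \tau y_{tt} + y_t - \Delta y = u \ \text{in } (0,T)\times\Omega,\qquad y = 0 \ \text{on } (0,T)\times\partial\Omega,
\]
and the limit \(\tau \to 0\) replaces the constraint by the heat equation \(y_t - \Delta y = u\).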
Cutting-edge cancer therapy involves producing individualized medicine for many patients at the same time. Within this process, most steps can be completed for a certain number of patients simultaneously. Using these resources efficiently may significantly reduce waiting times for the patients and is therefore crucial for saving human lives. However, this involves solving a complex scheduling problem, which can mathematically be modeled as a proportionate flow shop of batching machines (PFB). In this thesis we investigate exact and approximate algorithms for tackling many variants of this problem. Related mathematical models have been studied before in the context of semiconductor manufacturing.
SDE-driven modeling of phenotypically heterogeneous tumors: The influence of cancer cell stemness
(2018)
We deduce cell population models describing the evolution of a tumor (possibly interacting with its
environment of healthy cells) with the aid of differential equations. Thereby, different subpopulations
of cancer cells allow accounting for the tumor heterogeneity. In our settings these include cancer
stem cells known to be less sensitive to treatment and differentiated cancer cells having a higher
sensitivity towards chemo- and radiotherapy. Our approach relies on stochastic differential equations
in order to account for randomness in the system, arising, e.g., from the therapy-induced decrease in the number of clonogens, which renders a purely deterministic model questionable. The equations are deduced
relying on transition probabilities characterizing innovations of the two cancer cell subpopulations,
and similarly extended to also account for the evolution of normal tissue. Several therapy approaches
are introduced and compared by way of tumor control probability (TCP) and uncomplicated tumor
control probability (UTCP). A PDE approach allows us to assess the evolution of tumor and normal
tissue with respect to time and to cell population densities which can vary continuously in a given set
of states. Analytical approximations of solutions to the obtained PDE system are provided as well.
In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of \(\mathbb{Z}^n\) that are closed under minima, have a conductor and fulfill special compatibility properties on their elements. Subsets of \(\mathbb{Z}^n\) fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality
on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay’s inverse system, a one-to-one correspondence
which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate
our result.
We continue in this paper the study of k-adaptable robust solutions for combinatorial optimization problems with bounded uncertainty sets. In this concept not a single solution needs to be chosen to hedge against the uncertainty. Instead one is allowed to choose a set of k different solutions from which one can be chosen after the uncertain scenario has been revealed. We first show how the problem can be decomposed into polynomially many subproblems if k is fixed. In the remaining part of the paper we consider the special case where k=2, i.e., one is allowed to choose two different solutions to hedge against the uncertainty. We decompose this problem into so called coordination problems. The study of these coordination problems turns out to be interesting on its own. We prove positive results for the unconstrained combinatorial optimization problem, the matroid maximization problem, the selection problem, and the shortest path problem on series parallel graphs. The shortest path problem on general graphs turns out to be NP-complete. Further, we present for minimization problems how to transform approximation algorithms for the coordination problem to approximation algorithms for the original problem. We study the knapsack problem to show that this relation does not hold for maximization problems in general. We present a PTAS for the corresponding coordination problem and prove that the 2-adaptable knapsack problem is not at all approximable.
This paper presents a case study of duty rostering for physicians at a department of orthopedics and trauma surgery. We provide a detailed description of the rostering problem faced and present an integer programming model that has been used in practice for creating duty rosters at the department for more than a year. Using real world data, we compare the model output to a manually generated roster as used previously by the department and analyze the quality of the rosters generated by the model over a longer time span. Moreover, we demonstrate how unforeseen events such as absences of scheduled physicians are handled.
Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional and locally stationary case, and second we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality zero mean), and hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimate, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias which is considerably more difficult for random fields than for time series. Furthermore, we prove in case of model misspecification that the minimum of our Whittle likelihood still converges, where the limit is the minimum of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations and estimation of conditional autoregressive models, and a real data application.
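In its classical stationary form (written here only for orientation; the thesis uses a localized version for locally stationary random fields), the Whittle likelihood compares the periodogram \(I_{T}\) with the model spectral density \(f_{\theta}\):
\[
\mathcal{L}_{W}(\theta) \;=\; \frac{1}{(2\pi)^{d}} \int_{[-\pi,\pi]^{d}} \left( \log f_{\theta}(\lambda) + \frac{I_{T}(\lambda)}{f_{\theta}(\lambda)} \right) \mathrm{d}\lambda ,
\]
and \(\widehat{\theta}_{T}\) is obtained by minimizing \(\mathcal{L}_{W}\).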
Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres’ thickness which prevents the intersection of the fibres and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro computer tomography. A possible linking of these models to other simulation tools,
for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to further the field in mathematical
modelling and computational simulation of non–woven materials.
In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ³D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as Brauer construction.
Acoustics provides an interesting background for interdisciplinary and cross-subject teaching connecting mathematics, physics and music. Students can, for example, work experimentally by making their own audio recordings and having computer software generate the corresponding frequency spectra. Conversely, they can also prescribe frequency spectra and generate sounds from them. This can serve, for instance, to make the concept of overtones physically or mathematically tangible in music lessons, or to study the frequency ratios of intervals and triads in harmony theory.
The computer is a very useful tool here, since the mathematical background of this task, namely switching between an audio recording and its frequency representation, lies in Fourier analysis, which is extremely demanding for school students. However, by introducing the Fourier transform as a numerical tool that does not need to be understood in detail, interesting mathematics can be done elsewhere and the connections between acoustics and music can be experienced in a playful way.
The following contribution describes an approach as we have already implemented it at the Felix-Klein-Modellierungswoche (Felix Klein modelling week): the students were given the task of developing a synthesizer with which different musical instruments can be imitated. As aids, they received a short introduction to the properties of the Fourier transform as well as audio recordings of various instruments.
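As an illustration of the kind of computation the students perform with software, the following Python sketch computes the frequency spectrum of a recording; the file name is a placeholder and a mono WAV file is assumed.

    import numpy as np
    from scipy.io import wavfile

    # load a mono audio recording (file name is a placeholder)
    rate, signal = wavfile.read("instrument.wav")
    signal = signal.astype(float)

    # discrete Fourier transform of the real-valued signal
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)

    # amplitude spectrum; peaks correspond to the fundamental and its overtones
    amplitude = np.abs(spectrum) / len(signal)
    dominant = freqs[1:][np.argmax(amplitude[1:])]  # ignore the DC component
    print(f"strongest frequency: {dominant:.1f} Hz")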
This article deals with the realization of a simple motion capturing method in MATLAB as a proposal for an implementation in school. The underlying mathematics can easily be taught from middle school onwards. Depending on the technical equipment available, coloured markers can be tracked in videos or webcam streams with simple means. The necessary concepts and algorithms are discussed in the article.
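The article realizes the method in MATLAB; purely as an illustration of the same idea, a colour marker can be followed in a webcam stream in Python roughly as follows (the HSV colour bounds are illustrative and have to be tuned to the marker used).

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # webcam stream
    # HSV bounds for a red marker; illustrative values that must be tuned
    lower, upper = np.array([0, 120, 70]), np.array([10, 255, 255])

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)   # binary image of marker-coloured pixels
        m = cv2.moments(mask)
        if m["m00"] > 0:                        # centroid of the marker region
            x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
            cv2.circle(frame, (int(x), int(y)), 8, (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:                # Esc key ends the loop
            break

    cap.release()
    cv2.destroyAllWindows()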
We extend the standard concept of robust optimization by the introduction of an alternative solution. In contrast to the classic concept, one is allowed to choose two solutions from which the best can be picked after the uncertain scenario has been revealed. We focus in this paper on the resulting robust problem for combinatorial problems with bounded uncertainty sets. We present a reformulation of the robust problem which decomposes it into polynomially many subproblems. In each subproblem one needs to find two solutions which are connected by a cost function which penalizes if the same element is part of both solutions. Using this reformulation, we show how the robust problem can be solved efficiently for the unconstrained combinatorial problem, the selection problem, and the minimum spanning tree problem. The robust problem corresponding to the shortest path problem turns out to be NP-complete on general graphs. However, for series-parallel graphs, the robust shortest path problem can be solved efficiently. Further, we show how approximation algorithms for the subproblem can be used to compute approximate solutions for the original problem.
In change-point analysis the point of interest is to decide if the observations follow one model
or if there is at least one time-point where the model has changed. This results in two subfields: the testing of a change and the estimation of the time of change. This thesis considers
both parts but with the restriction of testing and estimating for at most one change-point.
A well known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test a test statistic with an asymptotic Gumbel distribution was
derived for this model. As it is a well-known fact that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. Those tests have
a better performance. We focus on this class of test statistics.
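A typical representative of this class (written in a generic form; the precise statistics and weight functions studied in the thesis may differ) is the weighted CUSUM statistic
\[
T_n \;=\; \max_{1\le k < n} \frac{1}{\sqrt{n}\, w(k/n)} \left| \sum_{i=1}^{k} X_i - \frac{k}{n}\sum_{i=1}^{n} X_i \right|,
\qquad w(t)=\sqrt{t(1-t)},
\]
which becomes large if the mean of the observations \(X_1,\dots,X_n\) changes at some unknown time \(k\).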
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. Therefore we consider the multivariate mean change model and focus on the effects
of the weight function. In the case of change-point estimators we can distinguish between
the assumption of a fixed size of change (fixed alternative) and the assumption that the size
of the change is converging to 0 (local alternative). In particular, the fixed case is rarely analysed in the literature. We show how to pass from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate
observations.
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a change in the mean for independent observations while relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the section on tests and the one on the change-point estimator, we end by giving regularity conditions on the model as well as on the parameter estimator.
Finally, a simulation study for the case of the neural network based test and estimator is
given. We discuss the behaviour under correct specification and under misspecification and apply the neural network based test and estimator to two data sets.
We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one and two-factor Vasicek models for the instantaneous
short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected short-fall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus, respectively. In particular, we solve the dual problem explicitly in the incomplete market settings. We compare the optimal terminal wealth in the cases mentioned using numerical examples. Without risk constraints, we further compare the investment strategies for complete and incomplete markets numerically.
This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they admit minimization by efficient algorithms
and the solution of many optimization problems can be formulated as
minimization of a convex objective function, extending much beyond
the classical image restoration problems of denoising, deblurring and inpainting.
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
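In the standard Hilbert space setting, for a proper convex lower semicontinuous function \(f\) and scale \(\lambda>0\), the objects just mentioned are
\[
f_{\lambda}(x) = \inf_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|x-y\|^{2} \Bigr\},
\qquad
\operatorname{prox}_{\lambda f}(x) = \arg\min_{y}\Bigl\{ f(y) + \tfrac{1}{2\lambda}\|x-y\|^{2} \Bigr\},
\qquad
\nabla f_{\lambda}(x) = \frac{x - \operatorname{prox}_{\lambda f}(x)}{\lambda};
\]
the second part of the thesis extends such expressions to Hadamard manifolds and Hadamard spaces.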
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which includes a sparsity regularizer in the model.
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence,
to solve a hyperspectral unmixing model in the case
when no prior knowledge of pure spectra is given.
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional Riemannian manifolds with non-positive sectional curvature, and then for functions defined on Hadamard spaces.
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence on Hadamard spaces, both of which have been used in the literature and whose equivalence had not been known before.
We introduce and investigate a product pricing model in social networks where the value a possible buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With algorithm Frag we provide a pseudo-polynomial time algorithm checking the range of prices in intervals of common buying behavior we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influence we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. Algorithm FixHighest improves the run-time on these graphs by using this property.
Furthermore, we introduce variations on this basic model. The version of delaying the propagation of influences and the awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, thereby raising the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterward, the principal problem is computing the revenue that a given price can generate in some time horizon. In general, the problem is #P-hard and algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity to find the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
Manifolds
(2017)
The thesis studies change points in absolute time for censored survival data with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied. Typical examples are piecewise constant and piecewise linear hazard functions. For that kind of models, we propose a new algorithm for numerical calculation of maximum likelihood estimates based on a cross entropy approach which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst’s opinion as proposed in [CLMZ06]. The best estimate given these observations is the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real world market than the setting we used to derive our theoretical results. However, we still obtain compelling results indicating that our optimal strategies can outperform any benchmark in a real market in general.
Nonwoven materials are used as filter media which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, the advanced engine technologies require innovative filter media with higher performances. A virtual microstructure of the nonwoven filter medium, which has similar filter properties as the existing material, can be used to design new filter media from existing media. Nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. To this end, as part of this thesis, we extend the Altendorf-Jeulin individual fiber model to incorporate all the above mentioned features. The resulting novel stochastic 3D fiber model can generate geometries with good visual resemblance of real filter media. Furthermore, pressure drop, which is one of the important physical properties of the filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the generated microstructure realizations from our model.
Generally, filter properties for the CT data and generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulation and a statistical approach are used to estimate the representative domain size of our microstructure model. Precisely, the representative domain size with respect to the packing density, the pore size distribution, and the pressure drop are considered. It turns out that the statistical approach can be used to estimate the representative domain size for the given property more precisely and using less generated microstructures than the purely simulation based approach.
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method that way.
In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of the trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to obtain smooth convergence. We first show what causes these oscillations. This also helps us to improve the rate of convergence. As a result we introduce the Tian and the CP tree and prove that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree and explain its properties. We prove its convergence and find an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity time \(T\). This is the main difference compared to the model from the literature. Namely, we show that the good properties of the CRR tree when \(S_{0} = K\) can be preserved even without this condition (which is mainly the case). We achieve convergence of order \(O \left(n^{-\frac{3}{2}} \right)\) and typically get better results if we split our tree later.
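For concreteness, here is a minimal Python sketch of the standard CRR backward induction (not the Split tree developed in the thesis; all parameter values are illustrative).

    import math

    def crr_put(s0, k, r, sigma, t, n, american=False):
        """Price a put with an n-step Cox-Ross-Rubinstein binomial tree."""
        dt = t / n
        u = math.exp(sigma * math.sqrt(dt))   # up factor
        d = 1.0 / u                           # down factor
        q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
        disc = math.exp(-r * dt)
        # terminal payoffs, indexed by the number of up moves j
        values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
        # backward induction through the tree
        for i in range(n - 1, -1, -1):
            for j in range(i + 1):
                cont = disc * (q * values[j + 1] + (1 - q) * values[j])
                if american:
                    exercise = max(k - s0 * u**j * d**(i - j), 0.0)
                    cont = max(cont, exercise)
                values[j] = cont
        return values[0]

    # example: European put, converges to the Black-Scholes price as n grows
    print(crr_put(s0=100, k=100, r=0.03, sigma=0.2, t=1.0, n=1000))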
In this paper a modified version of dynamic network flows is discussed. Whereas dynamic network flows are widely analyzed already, we consider a dynamic flow problem with aggregate arc capacities called Bridge
Problem which was introduced by Melkonian [Mel07]. We extend his research to integer flows and show that this problem is strongly NP-hard. For practical relevance we also introduce and analyze the hybrid bridge problem, i.e. with underlying networks whose arc capacity can limit aggregate flow (bridge problem) or the flow entering an arc at each time (general dynamic flow). For this kind of problem we present efficient procedures for
special cases that run in polynomial time. Moreover, we present a heuristic for general hybrid graphs with restriction on the number of bridge arcs.
Computational experiments show that the heuristic works well, both on random graphs and on graphs modeling realistic scenarios.
We propose a multiscale model for tumor cell migration in a tissue network. The system of equations involves a structured population model for the tumor cell density, which besides time and
position depends on a further variable characterizing the cellular state with respect to the amount
of receptors bound to soluble and insoluble ligands. Moreover, this equation features pH-taxis and
adhesion, along with an integral term describing proliferation conditioned by receptor binding. The
interaction of tumor cells with their surroundings calls for two more equations for the evolution of
tissue fibers and acidity (expressed via concentration of extracellular protons), respectively. The
resulting ODE-PDE system is highly nonlinear. We prove the global existence of a solution and
perform numerical simulations to illustrate its behavior, paying particular attention to the influence
of the supplementary structure and of the adhesion.
Buses not arriving on time and then arriving all at once - this phenomenon is known from
busy bus routes and is called bus bunching.
This thesis combines the well-studied but so far separate areas of bus-bunching prediction
and dynamic holding strategies, which modulate buses' dwell times at stops in order to
eliminate bus bunching. We look at real data of the Dublin Bus route 46A and present
a headway-based predictive-control framework covering all components such as data
acquisition, prediction and control strategies. We formulate time headways as time series
and compare several prediction methods for them. Furthermore, we present an analytical
model of an artificial bus route and discuss stability properties and dynamic holding
strategies using both data available at the time and predicted headway data. In a numerical
simulation we illustrate the advantages of the presented predictive-control framework
compared to classical approaches that only use directly available data.
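As a hedged, minimal illustration of treating time headways as a time series, the sketch below fits an AR(1)-type one-step-ahead predictor to a synthetic headway series by least squares; the data, the target headway and the predictor are assumptions for illustration, not the prediction methods actually compared in the thesis.

```python
# Minimal sketch: treat successive time headways as a time series and fit an
# AR(1)-type one-step-ahead predictor by least squares. Synthetic data only;
# the thesis compares several prediction methods on real Dublin Bus data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
target = 10.0                      # scheduled headway in minutes (assumed)
headways = np.empty(n)
headways[0] = target
for t in range(1, n):              # synthetic mean-reverting headway process
    headways[t] = target + 0.7 * (headways[t - 1] - target) + rng.normal(0, 1)

# least-squares fit of h_{t+1} = a + b * h_t
X = np.column_stack([np.ones(n - 1), headways[:-1]])
a, b = np.linalg.lstsq(X, headways[1:], rcond=None)[0]
prediction = a + b * headways[-1]
print(f"fitted a={a:.2f}, b={b:.2f}, next-headway prediction={prediction:.2f}")
```

More elaborate predictors can be plugged in at exactly this point, which is the role the prediction component plays inside the predictive-control framework described above.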
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information are the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as being a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth. The case of partial information is based on the use of filtering techniques. Conditions to ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters: For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous. For an increasing number of stocks, the boundedness conditions again need to be met; losses in this case depend strongly on the added stocks. The relation of boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible if more states are assumed than were originally present; assuming fewer states leads to high losses. Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to large losses depending on how binding they are. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical findings.
This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it into perspective with alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviews inflation models, in particular the Phillips curve models of inflation dynamics. We focus on a well-known and widely used model, the so-called three-equation New Keynesian model, which is a system of equations consisting of a New Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule.
We give a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously; the costs resulting from changes in the interest rate are thus ignored. We also show that approximation errors are made when one log-linearizes nonlinear equations in the derivation of the standard discrete-time NKPC.
We agree with other researchers, as discussed in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of output gaps and inflation.
To overcome this problem, we propose a stochastic optimal impulse control method. We formulate the problem as a stochastic optimal impulse control problem by taking into account the costs of changing interest rates and the approximation error terms. To formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite, constant variance. We then use the quasi-variational inequality approach to solve analytically a special case of the central bank's problem, in which the inflation rate is supposed to be on target and the central bank has to optimally control the output gap dynamics. This method yields an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal size of intervention and the optimal intervention time, that can be used to keep the process inside this band.
Finally, using a numerical example, we examine the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of changing the interest rate, leads to an increase in the width of the optimal control band. In this case, optimal intervention requires the central bank to wait longer before undertaking another control action.
We propose and study a strongly coupled PDE-ODE-ODE system modeling cancer cell invasion through a tissue network
under the go-or-grow hypothesis asserting that cancer cells can either move or proliferate. Hence our setting features
two interacting cell populations with their mutual transitions and involves tissue-dependent degenerate diffusion and
haptotaxis for the moving subpopulation. The proliferating cells and the tissue evolution are characterized by way of ODEs
for the respective densities. We prove the global existence of weak solutions and illustrate the model behaviour by
numerical simulations in a two-dimensional setting.
The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.
By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factor rings, that is, to \(\mathbb{Z}[x]\) and \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture; in Part C we prove that the Stanley Conjecture nevertheless holds in some special cases. Part D explores General Neron Desingularization in the framework of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.
Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner basis computations over two types of coefficient fields. First, we consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
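The specialization step of the algorithm over \(K'\) can be illustrated with a small sketch, using SymPy in Python as a stand-in (the thesis implementation is in SINGULAR, and the example ideal and parameter values below are assumptions): for rational values of the parameter, the specialized ideals live over \(\mathbb{Q}\) and their Gröbner bases can be computed directly; the sparse rational interpolation that recombines them is omitted here.

```python
# Minimal sketch of the specialization step for Groebner bases over Q(t1):
# substitute rational values for the parameter t1 and compute Groebner bases
# of the specialized ideals over Q. (Illustrative only; the recombination via
# sparse rational interpolation described in the text is not shown.)
from sympy import symbols, Rational, groebner

x, y, t1 = symbols('x y t1')

# An ideal in Q(t1)[x, y] given by generators depending on t1 (example data).
generators = [x**2 + t1 * y, x * y - t1]

for value in (Rational(1), Rational(2), Rational(5, 2)):
    specialized = [g.subs(t1, value) for g in generators]
    gb = groebner(specialized, x, y, order='lex')
    print(f"t1 = {value}: {list(gb.exprs)}")
```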
Planning bus stops in city centres is an authentic topic that lends itself to use in reality-oriented teaching at different grade levels. Various interests and circumstances have to be combined in one model and one solution strategy. Since the question is posed very openly, different approaches and models are possible. Mathematical modeling is thus practised, and an interesting project allows students to go through a complete modeling process. In the following, the mathematical background as well as the wide range of solutions produced by pupils of different grade levels for the same question are presented.
There are different techniques for shuffling playing cards, which differ both in the time they require and in how well they mix the deck. The following article shows how the question of a particularly good shuffling technique can be used to bring mathematical modeling into the classroom by means of an everyday question. Various aspects of probability can be addressed, and there is broad potential for using computers at different levels to generate random experiments.
This contribution deals with the question of whether turtles can be uniquely identified solely from the pattern and structure of their plastron and carapace. Sensible identification features are to be developed that can be evaluated on the basis of photographs. What is special about this problem is that it can be worked on with learners of very different ages and that there is an enormous variety of mathematical methods that help on the way to a solution: these range from simple geometric considerations and calculus (integration, curve sketching) to mathematical image processing and questions of robustness. The age range for which such a project is feasible is just as broad as the spectrum of applicable mathematical tools: it can be tackled from primary school up to a Master's thesis, and the time required ranges from a few hours to several months. The contribution illustrates this variety with examples, so that readers can ideally adapt the project exactly to the needs of their own learning group.
In this paper, we discuss the problem of approximating ellipsoid uncertainty sets with bounded (gamma) uncertainty sets. Robust linear programs with ellipsoid uncertainty lead to quadratically constrained programs, whereas robust linear programs with bounded uncertainty sets remain linear programs which are generally easier to solve.
We call a bounded uncertainty set an inner approximation of an ellipsoid if it is contained in it. We consider two different inner approximation problems. The first problem is to find a bounded uncertainty set which stays close to the ellipsoid in the sense that a shrunken version of the ellipsoid is contained in it; the approximation is optimal if the required shrinking is minimal. In the second problem, we search for a bounded uncertainty set of maximum volume within the ellipsoid. We show how both problems can be solved analytically by stating explicit formulas for their optimal solutions.
Further, we present in a computational experiment how the derived approximation techniques can be used to approximate shortest path and network flow problems which are affected by ellipsoidal uncertainty.
This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
In retail, assortment planning refers to selecting a subset of products to offer that maximizes profit. Assortments can be planned for a single store or for a retailer with multiple chain stores where demand varies between stores. In this paper, we assume that a retailer with a multitude of stores wants to specify her offered assortment. To suit all local preferences, regionalization and store-level assortment optimization are widely used in practice and lead to competitive advantages. When selecting regionalized assortments, a tradeoff has to be struck between expensive, customized assortments in every store and inexpensive, identical assortments in all stores that neglect demand variation.
We formulate a stylized model for the regionalized assortment planning problem (APP) with capacity constraints and given demand. In our approach, a 'common assortment' that is supplemented by regionalized products is selected. While products in the common assortment are offered in all stores, products in the local assortments are customized and vary from store to store.
Concerning the computational complexity, we show that the APP is strongly NP-complete. The core of this hardness result lies in the selection of the common assortment. We formulate the APP as an integer program and provide algorithms and methods for obtaining approximate solutions and solving large-scale instances.
Lastly, we perform computational experiments to analyze the benefits of regionalized assortment planning depending on the variation in customer demands between stores.
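The structure of the stylized model can be illustrated with a small brute-force sketch on an assumed toy instance (not the integer-programming formulation or the data used in the paper): a common assortment offered in every store is combined with store-specific local products under a per-store shelf capacity.

```python
# Tiny brute-force sketch of the stylized regionalized assortment model:
# choose a common assortment offered in every store plus store-specific local
# assortments under a per-store shelf capacity, maximizing total profit.
# (Illustrative toy instance only; the paper formulates this as an integer
# program and solves much larger instances.)
from itertools import combinations

products = ["A", "B", "C", "D"]
stores = ["s1", "s2"]
capacity = 3  # products per store (common + local), assumed
# profit[store][product]: profit contribution if the product is offered there
profit = {
    "s1": {"A": 5, "B": 1, "C": 4, "D": 2},
    "s2": {"A": 5, "B": 4, "C": 1, "D": 3},
}

best = None
for k in range(capacity + 1):
    for common in combinations(products, k):
        total = 0
        for s in stores:
            # fill the remaining shelf space with the locally best products
            rest = sorted(
                (profit[s][p] for p in products if p not in common), reverse=True
            )
            total += sum(profit[s][p] for p in common) + sum(rest[: capacity - k])
        if best is None or total > best[0]:
            best = (total, common)

print("best profit:", best[0], "common assortment:", best[1])
```

For this toy instance the greedy inner step (filling the remaining shelf space with the locally most profitable products) is optimal once the common assortment is fixed; it is the selection of the common assortment that carries the hardness of the APP, as stated above.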
A vehicle's fatigue damage is a highly relevant figure in the complete vehicle design process. Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment. This work focuses on modeling one of the most important influence factors of the environment: road roughness. The quality of the road depends heavily on several surrounding factors, which can be used to create mathematical models. Such models can be used for the extrapolation of information and for an estimation of the environment for statistical studies. The target quantity we focus on in this work is the discrete International Roughness Index (discrete IRI). The class of models we use and evaluate is a discriminative classification model called Conditional Random Field. We develop a suitable model specification and present new variants of stochastic optimization methods to train the model efficiently. The model is also applied to simulated and real-world data to show the strengths of our approach.
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
Der unmögliche Freistoß
(2016)
The authors deal with the derivation and treatment of a modeling project from the popular sport of football: a free kick is mathematically modeled and simulated, taking the relevant physical effects into account. The focus is on how this modeling project can be carried out with upper secondary school (Sekundarstufe II) students.
We investigate a PDE-ODE system describing cancer cell invasion in a tissue network. The model is an extension of the multiscale setting in [28,40], by considering two subpopulations of tumor cells interacting mutually and with the surrounding tissue. According to the go-or-grow hypothesis, these subpopulations consist of moving and proliferating cells, respectively. The mathematical setting also accommodates the effects of some therapy approaches. We prove the global existence of weak solutions to this model and perform numerical simulations to illustrate its behavior for different therapy strategies.
We present a new approach to handle uncertain combinatorial optimization problems that uses solution ranking procedures to determine the degree of robustness of a solution. Unlike classic concepts for robust optimization, our approach is not purely based on absolute quantitative performance, but also includes qualitative aspects that are of major importance for the decision maker.
We discuss the two variants, solution ranking and objective ranking robustness, in more detail, presenting problem complexities and solution approaches. Using an uncertain shortest path problem as a computational example, the potential of our approach is demonstrated in the context of evacuation planning due to river flooding.
We consider the problem to evacuate several regions due to river flooding, where sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decision which regions to assign to which shelter, and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in weather forecast, several possible scenarios are simultaneously considered in a robust optimization framework. To solve the resulting integer program, we apply a Tabu search algorithm based on decomposing the problem into better tractable subproblems. Computational experiments on random instances and an instance based on Kulmbach, Germany, data show considerable improvement compared to an MIP solver provided with a strong starting solution.
The main theme of this thesis is the interplay between algebraic and tropical intersection
theory, especially in the context of enumerative geometry. We begin by exploiting
well-known results about tropicalizations of subvarieties of algebraic tori to give a
simple proof of Nishinou and Siebert’s correspondence theorem for rational curves
through given points in toric varieties. Afterwards, we extend this correspondence
by additionally allowing intersections with psi-classes. We do this by constructing
a tropicalization map for cycle classes on toroidal embeddings. It maps algebraic
cycle classes to elements of the Chow group of the cone complex of the toroidal
embedding, that is to weighted polyhedral complexes, which are balanced with respect
to an appropriate map to a vector space, modulo a naturally defined equivalence relation.
We then show that tropicalization respects basic intersection-theoretic operations like
intersections with boundary divisors and apply this to the appropriate moduli spaces
to obtain our correspondence theorem.
Trying to apply similar methods in higher genera inevitably confronts us with moduli
spaces which are not toroidal. This motivates the last part of this thesis, where we
construct tropicalizations of cycles on fine logarithmic schemes. The logarithmic point of
view also motivates our interpretation of tropical intersection theory as the dualization
of the intersection theory of Kato fans. This duality gives a new perspective on the
tropicalization map; namely, as the dualization of a pull-back via the characteristic
morphism of a logarithmic scheme.
Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that given a representation of a finite group over a number field decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.
Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult and different from scalar- or vector-valued autoregressions are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, such that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example where we investigate a changepoint detection problem for a stimulus-response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
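A minimal sketch of the residual-based bootstrap idea follows, with curves represented by a few basis coefficients so that the FAR(1) operator \(\Psi\) acts as a matrix; the dimensions, the operator and the least-squares estimator below are simplifying assumptions for illustration, not the functional estimators studied in the thesis.

```python
# Minimal sketch of a residual-based (naive) bootstrap for a FAR(1) process.
# Curves are represented by a small number of basis coefficients, so that the
# operator Psi acts as a matrix; this is a simplified illustration only.
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 100                       # basis dimension and sample size (assumed)
Psi = np.array([[0.5, 0.1, 0.0],
                [0.0, 0.4, 0.1],
                [0.0, 0.0, 0.3]])   # "true" operator in coefficient form

# simulate X_{t+1} = Psi X_t + eps_{t+1}
X = np.zeros((n, d))
for t in range(1, n):
    X[t] = Psi @ X[t - 1] + rng.normal(0, 0.5, d)

def estimate(data):
    # least-squares estimate of Psi from consecutive pairs of observations
    return np.linalg.lstsq(data[:-1], data[1:], rcond=None)[0].T

Psi_hat = estimate(X)
residuals = X[1:] - X[:-1] @ Psi_hat.T

# residual-based bootstrap replications of the estimator
boot = []
for _ in range(500):
    eps_star = residuals[rng.integers(0, len(residuals), size=n - 1)]
    X_star = np.zeros((n, d))
    for t in range(1, n):
        X_star[t] = Psi_hat @ X_star[t - 1] + eps_star[t - 1]
    boot.append(estimate(X_star))

print("bootstrap std. dev. of the entries of Psi_hat:\n", np.array(boot).std(axis=0))
```

The bootstrap spread of the re-estimated operator can then be used to build approximate confidence bands or critical bounds of the kind mentioned above.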
For some optimization problems on a graph \(G=(V,E)\), one can give a general formulation: Let \(c\colon E \to \mathbb{R}_{\geq 0}\) be a cost function on the edges and \(X \subseteq 2^E\) be a set of (so-called feasible) subsets of \(E\), one aims to minimize \(\sum_{e\in S} c(e)\) among all feasible \(S\in X\). This formulation covers, for instance, the shortest path problem by choosing \(X\) as the set of all paths between two vertices, or the minimum spanning tree problem by choosing \(X\) to be the set of all spanning trees. This bachelor thesis deals with a parametric version of this formulation, where the edge costs \(c_\lambda\colon E \to \mathbb{R}_{\geq 0}\) depend on a parameter \(\lambda\in\mathbb{R}_{\geq 0}\) in a concave and piecewise linear manner. The goal is to investigate the worst case minimum size of a so-called representation system \(R\subseteq X\), which contains for each scenario \(\lambda\in\mathbb{R}_{\geq 0}\) an optimal solution \(S(\lambda)\in R\). It turns out that only a pseudo-polynomial size can be ensured in general, but smaller systems have to exist in special cases. Moreover, methods are presented to find such small systems algorithmically. Finally, the notion of a representation system is relaxed in order to get smaller (i.e. polynomial) systems ensuring a certain approximation ratio.
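As a rough, assumed-data illustration of the objects involved (not an algorithm from the thesis), the following sketch sweeps \(\lambda\) over a grid for concave piecewise-linear edge costs, computes a minimum spanning tree for each scenario, and counts the distinct optimal trees; the resulting set is a crude, grid-based stand-in for a representation system \(R\).

```python
# Sketch: for concave piecewise-linear edge costs c_e(lambda) = min of affine
# pieces, sweep lambda over a grid, compute a minimum spanning tree for each
# scenario with Kruskal's algorithm, and collect the distinct optimal trees.
# The toy instance below is assumed data, not taken from the thesis.
import numpy as np

edges = {  # edge -> list of affine pieces (a, b), meaning a + b*lambda
    ("u", "v"): [(1.0, 2.0), (4.0, 0.5)],
    ("v", "w"): [(3.0, 0.0)],
    ("u", "w"): [(0.5, 3.0), (6.0, 0.0)],
    ("w", "x"): [(2.0, 1.0)],
    ("v", "x"): [(1.0, 1.5)],
}
nodes = {node for edge in edges for node in edge}

def cost(edge, lam):
    # minimum of affine pieces: concave and piecewise linear in lambda
    return min(a + b * lam for a, b in edges[edge])

def mst(lam):
    parent = {node: node for node in nodes}
    def find(node):                       # union-find with path compression
        while parent[node] != node:
            parent[node] = parent[parent[node]]
            node = parent[node]
        return node
    tree = []
    for edge in sorted(edges, key=lambda e: cost(e, lam)):   # Kruskal
        ru, rv = find(edge[0]), find(edge[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(edge)
    return frozenset(tree)

distinct_trees = {mst(lam) for lam in np.linspace(0.0, 5.0, 101)}
print("distinct optimal spanning trees on the grid:", len(distinct_trees))
```

The grid sweep only gives an upper bound on how many solutions are needed for the sampled scenarios; the thesis studies the worst-case size of such systems over all \(\lambda\in\mathbb{R}_{\geq 0}\) and methods to construct them exactly.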