Destructive lung diseases such as lung cancer or fibrosis are still often lethal. Likewise, in the case of fibrosis of the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation-based micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in the case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, both the process of fibrosis and compensatory lung growth can be assessed by considering the capillary architecture.
As the preparation of 2D microscopic images is faster, easier, and cheaper than that of SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like the direction and shape of objects can only be analyzed properly using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
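As an illustration of the first of these morphological tools, the sketch below (Python with NumPy/SciPy, assuming an isotropic voxel grid; it is a minimal illustration, not the pipeline used in the thesis) computes a granulometry curve of a binary 3D image by successive openings with growing spherical structuring elements.

import numpy as np
from scipy import ndimage

def ball(radius):
    """Binary spherical structuring element with the given voxel radius."""
    r = int(radius)
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= r**2

def granulometry(binary_volume, radii):
    """Fraction of foreground voxels surviving an opening with a ball of each radius."""
    total = binary_volume.sum()
    curve = []
    for r in radii:
        opened = ndimage.binary_opening(binary_volume, structure=ball(r))
        curve.append(opened.sum() / total)
    return np.array(curve)

# toy example: random dilated blobs standing in for a segmented vessel system
rng = np.random.default_rng(0)
volume = ndimage.binary_dilation(rng.random((60, 60, 60)) > 0.995, structure=ball(3))
print(granulometry(volume, radii=[1, 2, 3, 4, 5]))

The decay of this curve with increasing radius reflects the size (diameter) distribution of the structure.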
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state of the art in medical studies. Although certain 3D features cannot be replaced by 2D features without losing information, our results could be used to identify 2D features that approximate the 3D findings reasonably well.
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Here, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, referred to in the following as marked numerical Godeaux surfaces.
The particular interest of this thesis lies in marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the genus-4 fibration over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role in cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting, among other things, for integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
We start on the cellular scale in order to describe migration and then deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this end, we present programs that process the raw medical data into the form required by the numerical simulations. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
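For orientation, a macroscopic equation of the type just mentioned has the generic reaction-convection-diffusion form (a schematic prototype with a logistic source term; the symbols below are illustrative placeholders, not the specific equation derived in the thesis)
\[
\partial_t M \;=\; \nabla\cdot\big(\mathbb{D}_T(x)\,\nabla M\big) \;-\; \nabla\cdot\big(M\,u(x)\big) \;+\; \mu(x)\,M\,(1-M),
\]
where \(M\) denotes the (rescaled) tumour cell density, \(\mathbb{D}_T\) a tumour diffusion tensor built from the DTI data, \(u\) a drift field and \(\mu\) a proliferation rate.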
Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that
are characteristic for the microstructure can be simulated and their elastic properties are
then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
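In its standard textbook form (stated here for orientation), the periodic Lippmann-Schwinger equation for the local strain field \(\varepsilon\) reads
\[
\varepsilon(x) \;=\; E \;-\; \Big(\Gamma^{0} \ast \big[(C(x)-C^{0}):\varepsilon\big]\Big)(x),
\]
where \(E\) is the prescribed mean strain, \(C\) the heterogeneous stiffness field, \(C^{0}\) a homogeneous reference stiffness and \(\Gamma^{0}\) the periodic Green operator of the reference medium, which acts as a multiplication in Fourier space; Moulinec and Suquet [2] solve this fixed-point equation by alternating between real and Fourier space using the FFT.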
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the mixing rules, as well as the convergence of the method, on two- and three-dimensional examples.
References
[1] S. Brisard and L. Dormieux. "FFT-based methods for the mechanics of composites: A general variational framework". In: Computational Materials Science 49.3 (2010), pp. 663-671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. "A numerical method for computing the overall response of nonlinear composites with complex microstructure". In: Computer Methods in Applied Mechanics and Engineering 157.1-2 (1998), pp. 69-94. doi: 10.1016/s0045-7825(97)00218-1.
Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution to the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the aluminium-copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how we can estimate the parameters of the volume distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimates are in acceptable agreement with the parameters estimated from three-dimensional image data as well as with the parameters of the model.
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows us to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements, we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models that include discrete dividend payments and develop a model which also takes early announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
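For orientation, the put-call parity relation underlying this approach can be rearranged as follows (a standard textbook manipulation, not the full estimation procedure of the thesis): for European calls \(C(K,T)\) and puts \(P(K,T)\) on a stock with spot price \(S_0\),
\[
C(K,T) - P(K,T) \;=\; S_0 - D(T) - K\,B(0,T),
\]
where \(D(T)\) is the present value of all dividends paid up to \(T\) and \(B(0,T)\) is the discount factor. Comparing two strikes \(K_1 \neq K_2\) with the same maturity gives \(B(0,T) = \big[(C(K_1,T)-P(K_1,T)) - (C(K_2,T)-P(K_2,T))\big]/(K_2-K_1)\), and substituting back yields \(D(T) = S_0 - K\,B(0,T) - \big(C(K,T)-P(K,T)\big)\); using all available strikes simultaneously then produces the market-implied dividends and discount curve referred to above.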
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization problem for a dividend-paying stock. In the case of the logarithmic utility function, we show that the optimal strategy is no longer constant but connected to the Merton strategy. Additionally, we solve a special optimal consumption problem in which the investor is only allowed to consume dividends. We show that this problem can be reduced to the previously solved terminal wealth problem.
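For reference, the classical Merton benchmark mentioned above: in a market with constant riskless rate \(r\) and a single stock with drift \(\mu\) and volatility \(\sigma\) and without dividends, the optimal fraction of wealth invested in the stock under logarithmic utility is the constant
\[
\pi^{*}_{\mathrm{Merton}} \;=\; \frac{\mu - r}{\sigma^{2}},
\]
and it is this constant proportion that is no longer optimal once discrete dividend payments enter the stock model.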
In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), \(q = p^n\) with \(p\) a prime, which is a Sylow \(p\)-subgroup of \(F_4(q)\), for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Unfortunately, the projective characters of \(F_4(q)\) we obtained were not sufficient to determine the shape of the entire decomposition matrix.
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible regarding the choice of
good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for kernel density estimates. Cross-validation is a popular one, frequently proposed to estimate the optimal bandwidth. However, if the sample size is very large, it becomes computationally expensive. In materials science, in particular, it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating the optimal global and local bandwidths for density and intensity estimation, with a focus on 2- and 3-dimensional data. It is based on a detailed asymptotic analysis of the estimators of the intensity function and of its second derivatives and of the integrals of second derivatives which appear in the formulae for asymptotically optimal bandwidths. These asymptotics are utilised to determine the exact number of iteration steps and some tuning parameters. For both the global and the local case, fewer than 10 iterations suffice. Simulation studies show that the intensity estimated with a local bandwidth indicates the variation of the local intensity better than the estimate obtained with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
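The plug-in principle itself is easiest to see in one dimension: the sketch below (Python/NumPy) implements a classical two-stage direct plug-in for a Gaussian kernel, where the unknown curvature functional \(\psi_4=\int f''(t)^2\,dt\) in the optimal-bandwidth formula is replaced by a kernel estimate computed with a normal-reference pilot bandwidth. It is only a one-dimensional caricature; the iterative 2D/3D algorithm of the thesis chooses pilots, iteration numbers and local bandwidths far more carefully.

import numpy as np

def psi4_hat(x, g):
    """Kernel estimate of psi_4 = int f''(t)^2 dt with a Gaussian kernel and pilot bandwidth g."""
    u = (x[:, None] - x[None, :]) / g
    k4 = (u**4 - 6 * u**2 + 3) * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # 4th derivative of the Gaussian kernel
    return k4.sum() / (len(x) ** 2 * g**5)

def plugin_bandwidth(x):
    """Two-stage direct plug-in bandwidth for a 1D Gaussian kernel density estimate."""
    n, sigma = len(x), np.std(x, ddof=1)
    psi6 = -15.0 / (16.0 * np.sqrt(np.pi) * sigma**7)                  # normal-reference value of psi_6
    g = (-2.0 * (3.0 / np.sqrt(2 * np.pi)) / (psi6 * n)) ** (1.0 / 7)  # pilot bandwidth for estimating psi_4
    rk = 1.0 / (2.0 * np.sqrt(np.pi))                                  # roughness R(K) of the Gaussian kernel
    return (rk / (psi4_hat(x, g) * n)) ** (1.0 / 5)                    # AMISE-optimal bandwidth with estimated psi_4

x = np.random.default_rng(1).normal(size=1000)
print(plugin_bandwidth(x))   # close to the normal-reference bandwidth 1.06 * sigma * n**(-1/5) for this sample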
In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator for pricing path dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique for path dependent options
under the Heston model is the main contribution of this thesis.
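To fix ideas, the sketch below (Python/NumPy) shows the plain control variate principle that the HP estimator refines, here for a European call under Black-Scholes with the discounted terminal stock price as control; this is the generic textbook technique, not the Heath-Platen construction itself.

import numpy as np

def call_cv_estimate(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo call price with the discounted terminal stock price as control variate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)   # quantity of interest
    control = np.exp(-r * t) * st                       # control variate with known mean s0
    beta = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
    cv_estimate = payoff.mean() - beta * (control.mean() - s0)
    return payoff.mean(), cv_estimate

plain, with_cv = call_cv_estimate(s0=100, k=100, r=0.03, sigma=0.2, t=1.0, n_paths=100_000)
print(plain, with_cv)   # both approximate the Black-Scholes price; the control variate estimate has lower variance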
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty in the implementation of the HP estimator lies in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among the crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present the numerical results for at-the-money, in-the-money and out-of-the-money barrier options. As the numerical results imply, the HP estimator performs best among the considered methods for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator for pricing bond options is a promising contribution. As the approximating Vasicek process does not approximate as well as the deterministic volatility process does in the Heston setting, the performance of the HP estimator in the CIR model is only moderately good. However, for the FV model the variance reduction provided by the HP estimator is again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional and locally stationary case and, secondly, derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality zero mean), and hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimator, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove in the case of model misspecification that the minimum of our Whittle likelihood still converges, where the limit is the minimum of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations and estimation of conditional autoregressive models, and a real data application.
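For orientation, in the stationary case the Whittle criterion being generalized here has, up to normalizing constants, the form
\[
\mathcal{L}_{W}(\theta) \;=\; \frac{1}{(2\pi)^{d}}\int_{[-\pi,\pi]^{d}} \left( \log f_{\theta}(\lambda) + \frac{I_{T}(\lambda)}{f_{\theta}(\lambda)} \right) d\lambda ,
\]
where \(f_{\theta}\) is the model spectral density of the \(d\)-dimensional random field and \(I_{T}\) the periodogram of the observations; roughly speaking, the locally stationary version studied here replaces \(f_{\theta}(\lambda)\) by a location-dependent spectral density and \(I_{T}\) by localized periodograms.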
In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of \(\mathbb{Z}^n\) that are closed under minima, have a conductor, and fulfill special compatibility properties on their elements. Subsets of \(\mathbb{Z}^n\) fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation with their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we
show that minimal good systems of generators are unique. In relation with the algebra side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality
on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay’s inverse system, a one-to-one correspondence
which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate
our result.
Nonwoven materials are used as filter media which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, the advanced engine technologies require innovative filter media with higher performances. A virtual microstructure of the nonwoven filter medium, which has similar filter properties as the existing material, can be used to design new filter media from existing media. Nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. To this end, as part of this thesis, we extend the Altendorf-Jeulin individual fiber model to incorporate all the above mentioned features. The resulting novel stochastic 3D fiber model can generate geometries with good visual resemblance of real filter media. Furthermore, pressure drop, which is one of the important physical properties of the filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the generated microstructure realizations from our model.
Generally, filter properties for the CT data and generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulation and a statistical approach are used to estimate the representative domain size of our microstructure model. Precisely, the representative domain size with respect to the packing density, the pore size distribution, and the pressure drop are considered. It turns out that the statistical approach can be used to estimate the representative domain size for the given property more precisely and using less generated microstructures than the purely simulation based approach.
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method that way.
The thesis studies change points in absolute time for censored survival data with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied. Typical examples are piecewise constant and piecewise linear hazard functions. For that kind of models, we propose a new algorithm for numerical calculation of maximum likelihood estimates based on a cross entropy approach which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
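For reference, the basic objects used throughout are the hazard function, its cumulative version and the Nelson-Aalen estimator: for a survival time with density \(f\) and survival function \(S\),
\[
h(t) \;=\; \frac{f(t)}{S(t)}, \qquad H(t) \;=\; \int_{0}^{t} h(s)\,ds, \qquad \widehat{H}(t) \;=\; \sum_{t_{i}\leq t} \frac{d_{i}}{n_{i}},
\]
where \(d_i\) is the number of observed events and \(n_i\) the number of individuals at risk at the ordered event time \(t_i\); kernel smoothing of the increments of such cumulative estimates is, roughly, what underlies the nonparametric hazard estimates on which the proposed change point tests are based.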
The main theme of this thesis is the interplay between algebraic and tropical intersection
theory, especially in the context of enumerative geometry. We begin by exploiting
well-known results about tropicalizations of subvarieties of algebraic tori to give a
simple proof of Nishinou and Siebert’s correspondence theorem for rational curves
through given points in toric varieties. Afterwards, we extend this correspondence
by additionally allowing intersections with psi-classes. We do this by constructing
a tropicalization map for cycle classes on toroidal embeddings. It maps algebraic
cycle classes to elements of the Chow group of the cone complex of the toroidal
embedding, that is to weighted polyhedral complexes, which are balanced with respect
to an appropriate map to a vector space, modulo a naturally defined equivalence relation.
We then show that tropicalization respects basic intersection-theoretic operations like
intersections with boundary divisors and apply this to the appropriate moduli spaces
to obtain our correspondence theorem.
Trying to apply similar methods in higher genera inevitably confronts us with moduli
spaces which are not toroidal. This motivates the last part of this thesis, where we
construct tropicalizations of cycles on fine logarithmic schemes. The logarithmic point of
view also motivates our interpretation of tropical intersection theory as the dualization
of the intersection theory of Kato fans. This duality gives a new perspective on the
tropicalization map; namely, as the dualization of a pull-back via the characteristic
morphism of a logarithmic scheme.
In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst’s opinion as proposed in [CLMZ06]. The best estimate given these observations is provided by the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
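In the simplest one-dimensional version of this filtering step (a single stock and no expert opinion, stated here only for orientation), with return dynamics \(dR_t = \mu\,dt + \sigma\,dW_t\) and Gaussian prior \(\mu \sim \mathcal{N}(m_0,\gamma_0)\), the Kalman-Bucy filter reduces to
\[
dm_t \;=\; \frac{\gamma_t}{\sigma^{2}}\big(dR_t - m_t\,dt\big), \qquad \gamma_t \;=\; \frac{\gamma_0\,\sigma^{2}}{\sigma^{2}+\gamma_0\,t},
\]
where \(m_t\) and \(\gamma_t\) are the conditional mean and variance of the unknown drift; the setting of the thesis is the multidimensional analogue in which the analyst's opinion enters as an additional observation.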
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, due to fund design or due to clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real world market than the setting we used to derive our theoretical results. However, we still obtain compelling results indicating that our optimal strategies can outperform any benchmark in a real market in general.
In this dissertation convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For that purpose variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of the trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and the RB tree suffer from irregular convergence, so our first aim is to find a way to obtain smooth convergence. We first show what causes these oscillations. This also helps us to improve the rate of convergence. As a result we introduce the Tian and the CP tree, and we prove that the order of convergence for these trees is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree and explain its properties. We prove its convergence and derive an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity time \(T\). This is the main difference compared to the model from the literature. Namely, we show that the good properties of the CRR tree when \(S_{0} = K\) can be preserved even without this condition (which is mainly the case). We achieve a convergence rate of \(O \left(n^{-\frac{3}{2}} \right)\) and typically obtain better results if we split the tree later.
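As a point of reference for the trees discussed above, a minimal CRR pricer in Python (standard textbook parametrization; the Split tree and the error analysis of the thesis are not reproduced here) may look as follows.

import numpy as np

def crr_price(s0, k, r, sigma, t, n, option="put", american=False):
    """Cox-Ross-Rubinstein binomial tree price of a European or American call/put."""
    dt = t / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up-probability
    disc = np.exp(-r * dt)
    s = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)   # terminal stock prices
    payoff = np.maximum(k - s, 0.0) if option == "put" else np.maximum(s - k, 0.0)
    for i in range(n, 0, -1):                   # backward induction through the tree
        payoff = disc * (p * payoff[:-1] + (1 - p) * payoff[1:])
        if american:                            # allow early exercise at every node
            s = s0 * u ** np.arange(i - 1, -1, -1) * d ** np.arange(0, i)
            exercise = np.maximum(k - s, 0.0) if option == "put" else np.maximum(s - k, 0.0)
            payoff = np.maximum(payoff, exercise)
    return payoff[0]

print(crr_price(100, 100, 0.03, 0.2, 1.0, n=500))                  # European put
print(crr_price(100, 100, 0.03, 0.2, 1.0, n=500, american=True))   # American put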
Non–woven materials consist of many thousands of fibres laid down on a conveyor belt
under the influence of a turbulent air stream. To improve industrial processes for the
production of non–woven materials, we develop and explore novel mathematical fibre and
material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the
belt in the meltspinning process. In contrast to existing models, we include the fibre–fibre
interaction caused by the fibres’ thickness which prevents the intersection of the fibres and,
hence, results in a more accurate mathematical description. We start from a microscopic
characterisation, where each fibre is described by a stochastic functional differential
equation and include the interaction along the whole fibre path, which is described by a
delay term. As many fibres are required for the production of a non–woven material, we
consider the corresponding mean–field equation, which describes the evolution of the fibre
distribution with respect to fibre position and orientation. To analyse the particular case of
large turbulences in the air stream, we develop the diffusion approximation which yields a
distribution describing the fibre position. Considering the convergence to equilibrium on
an analytical level, as well as performing numerical experiments, gives an insight into the
influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, which is a production method
whereby many short fibres build a three–dimensional non–woven material. We focus on
the development of a material model based on original fibre properties, machine data and
micro computed tomography. A possible linking of these models to other simulation tools, for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to advance the mathematical modelling and computational simulation of non-woven materials.
We introduce and investigate a product pricing model in social networks where the value a possible buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial time algorithm that checks the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influences we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. The algorithm FixHighest improves the run-time on these graphs by using this property.
Furthermore, we introduce variations on this basic model. The variant with delayed propagation of influences and delayed product awareness can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, thereby raising the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterward, the principal problem is computing the revenue that a given price can generate over some time horizon. In general, this problem is #P-hard, and the algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.
This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they can be minimized by efficient algorithms, and many optimization problems, reaching far beyond the classical image restoration problems of denoising, deblurring and inpainting, can be formulated as the minimization of a convex objective function.
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
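For the reader's convenience, for a proper convex lower semicontinuous function \(f\) and a parameter \(\lambda>0\), the two objects just mentioned are
\[
e_{\lambda}f(x) \;=\; \inf_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^{2} \Big\}, \qquad \operatorname{prox}_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|x-y\|^{2} \Big\},
\]
and the gradient-step interpretation rests on the identity \(\nabla e_{\lambda}f(x) = \frac{1}{\lambda}\big(x - \operatorname{prox}_{\lambda f}(x)\big)\).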
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis a hyperspectral imaging system was taken into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which incorporates a sparsity regularizer into the model.
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence,
to solve a hyperspectral unmixing model in the case
when no prior knowledge of pure spectra is given.
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds with nonpositive sectional curvature,
and then for functions defined on Hadamard spaces.
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco convergence
on Hadamard spaces, which have both been used in the literature,
and whose equivalence was not previously known.
In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ³D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as Brauer construction.
The main theme of this thesis is Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining set for maximal (maximum) clique
Furthermore, some algorithms to find and construct defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with existing research results, and an algorithm is introduced which enables the determination of a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous
short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of an investor is to find a portfolio which maximizes expected utility
from terminal wealth under budget and present expected short-fall (PESF) risk
constraints. We analyze this portfolio optimization problem in both complete and
incomplete financial markets in three different cases: (a) when the PESF risk is
minimum, (b) when the PESF risk is between minimum and maximum and (c) without risk constraints. (a) corresponds to the portfolio insurer problem, in (b) the risk constraint is binding, i.e., it is satisfied with equality, and (c) corresponds
to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the
martingale method and Malliavin calculus respectively. In particular we solve in the incomplete market settings the dual problem explicitly. We compare the
optimal terminal wealth in the cases mentioned using numerical examples. Without
risk constraints, we further compare the investment strategies for complete
and incomplete markets numerically.
In change-point analysis the point of interest is to decide if the observations follow one model
or if there is at least one time-point, where the model has changed. This results in two sub-
fields, the testing of a change and the estimation of the time of change. This thesis considers
both parts but with the restriction of testing and estimating for at most one change-point.
A well known example is based on independent observations having one change in the mean.
Based on the likelihood ratio test a test statistic with an asymptotic Gumbel distribution was
derived for this model. As it is a well-known fact that the corresponding convergence rate is
very slow, modifications of the test using a weight function were considered. Those tests have
a better performance. We focus on this class of test statistics.
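A typical representative of this class, stated here in its simplest univariate form for orientation only, is the weighted CUSUM statistic
\[
T_{n} \;=\; \max_{1\le k<n} \; \frac{1}{\sqrt{n}\,\hat{\sigma}\,w(k/n)} \,\Bigg| \sum_{i=1}^{k} X_{i} - \frac{k}{n}\sum_{i=1}^{n} X_{i} \Bigg|, \qquad w(t)=\sqrt{t(1-t)},
\]
where large values of \(T_n\) indicate a change in the mean; the weight function \(w\) emphasizes potential changes close to the boundary of the observation period, and the effect of such weight functions is precisely what is analysed in detail in the following.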
The first part gives a detailed introduction to the techniques for analysing test statistics and
estimators. To this end, we consider the multivariate mean change model and focus on the effects of the weight function. In the case of change-point estimators we can distinguish between the assumption of a fixed size of change (fixed alternative) and the assumption that the size of the change converges to 0 (local alternative). In particular, the fixed case is rarely analysed in the literature. We show how to pass from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate
observations.
The main part of this thesis focuses on two points. First, analysing test statistics and, secondly,
analysing the corresponding change-point estimators. In both cases, we first consider a
change in the mean for independent observations while relaxing the moment condition. Based on
a robust estimator for the mean, we derive a new type of change-point test having a randomized
weight function. Secondly, we analyse non-linear autoregressive models with unknown
regression function. Based on neural networks, test statistics and estimators are derived for
correctly specified as well as for misspecified situations. This part extends the literature as
we analyse test statistics and estimators not only based on the sample residuals. In both
sections, the section on tests and the one on the change-point estimator, we end by giving regularity conditions on the model as well as on the parameter estimator.
Finally, a simulation study for the case of the neural network based test and estimator is
given. We discuss the behaviour under correct specification and under misspecification and apply the neural network based test and estimator to two data sets.
A vehicle's fatigue damage is a highly relevant figure in the complete vehicle design process.
Long-term observations and statistical experiments help to determine the influence of different parts of the vehicle, the driver and the surrounding environment.
This work focuses on modeling one of the most important influence factors of the environment: road roughness. The quality of the road is highly dependent on several surrounding factors which can be used to create mathematical models.
Such models can be used for the extrapolation of information and an estimation of the environment for statistical studies.
The target quantity we focus on in this work is the discrete International Roughness Index, or discrete IRI. The class of models we use and evaluate is a discriminative classification model called Conditional Random Field.
We develop a suitable model specification and present new variants of stochastic optimization to train the model efficiently.
The model is also applied to simulated and real-world data to show the strengths of our approach.
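For orientation, a linear-chain conditional random field (one common instance of this model class; the exact specification used in the thesis may differ) models the conditional distribution of a label sequence \(y=(y_1,\dots,y_T)\) given observed features \(x\) as
\[
p(y\mid x; w) \;=\; \frac{1}{Z(x,w)} \exp\Big( \sum_{t=1}^{T}\sum_{k} w_{k}\, f_{k}(y_{t-1}, y_{t}, x, t) \Big),
\]
with feature functions \(f_k\), weights \(w_k\) and partition function \(Z(x,w)\); training amounts to maximizing the conditional likelihood in \(w\), which is where the stochastic optimization variants mentioned above enter.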
By using Gröbner bases of ideals of polynomial algebras over a field, many implemented algorithms manage to give exciting examples and counterexamples in Commutative Algebra and Algebraic Geometry. Part A of this thesis focuses on extending the concept of Gröbner bases and standard bases to polynomial algebras over the ring of integers and its factors \(\mathbb{Z}_m[x]\). Moreover, we implemented two algorithms for this case in Singular, which use different approaches to detecting useless computations: the classical Buchberger algorithm and an F5 signature-based algorithm. Part B includes two algorithms that compute the graded Hilbert depth of a graded module over a polynomial algebra \(R\) over a field, as well as the depth and the multigraded Stanley depth of a factor of monomial ideals of \(R\). The two algorithms provide faster computations and examples that led B. Ichim and A. Zarojanu to a counterexample to a question of J. Herzog. A. Duval, B. Goeckner, C. Klivans and J. Martin have recently discovered a counterexample to the Stanley Conjecture. We prove in this thesis that the Stanley Conjecture holds in some special cases. Part D explores the General Néron Desingularization in the setting of Noetherian local domains of dimension 1. We have constructed and implemented in Singular an algorithm that computes a strong Artin Approximation for Cohen-Macaulay local rings of dimension 1.
Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
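The basic reduction to computations over \(\mathbb{Q}\) can be illustrated with a toy example, shown here in SymPy rather than SINGULAR and purely for illustration: to work over \(K=\mathbb{Q}(\sqrt{2})\), the minimal polynomial \(t^2-2\) is added to the ideal and \(t\) is treated as an extra variable.

from sympy import symbols, groebner

x, y, t = symbols("x y t")

# ideal over Q(sqrt(2)) generated by x^2 - sqrt(2)*y and y^2 - sqrt(2)*x,
# encoded over Q by replacing sqrt(2) with t and adjoining its minimal polynomial t^2 - 2
polys = [x**2 - t * y, y**2 - t * x, t**2 - 2]

# lexicographic Groebner basis over Q; substituting t -> sqrt(2) in the basis
# gives a generating set of the original ideal over Q(sqrt(2))
gb = groebner(polys, x, y, t, order="lex")
for g in gb.exprs:
    print(g)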
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models to the term structure of interest rates which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.
Functional data analysis is a branch of statistics that deals with observations \(X_1,..., X_n\) which are curves. We are particularly interested in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) for the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult, and different from scalar- or vector-valued autoregressions, are the weak convergence properties, which also form the basis of the bootstrap theory.
Although the asymptotics for \(\hat{\Psi}{(X_{n})}\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available such that an alternative method for approximating the distribution of \(\hat{\Psi}{(X_{n})}\) is welcome. As a motivation, we discuss a real-data example where we investigate a changepoint detection problem for a stimulus response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern.
To get an alternative for asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
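For illustration, a minimal residual-based bootstrap sketch for a FAR(1) series with curves discretized on a grid; the ridge-regularized least-squares estimator and all parameters below are simplifying assumptions and not the estimators analyzed in the thesis.

```python
# Minimal sketch of a residual-based (naive) bootstrap for a FAR(1) series.
# Curves are discretized on a grid; Psi is estimated by ridge-regularized
# least squares, a simplification of functional-PCA-based estimators.
import numpy as np

rng = np.random.default_rng(0)
grid, n = 50, 200                          # grid points per curve, sample size

# simulate a toy FAR(1): X_{k+1} = Psi X_k + eps
u = np.linspace(0, 1, grid)
Psi_true = 0.5 * np.exp(-(u[:, None] - u[None, :])**2 / 0.1) / grid
X = np.zeros((n, grid))
for k in range(n - 1):
    X[k + 1] = Psi_true @ X[k] + rng.normal(0, 0.1, grid)

def estimate_psi(X, lam=1e-3):
    """Ridge-regularized least-squares estimate of the AR operator."""
    X0, X1 = X[:-1], X[1:]
    C = X0.T @ X0 / len(X0) + lam * np.eye(grid)   # regularized covariance
    D = X0.T @ X1 / len(X0)                        # cross-covariance
    return np.linalg.solve(C, D).T

Psi_hat = estimate_psi(X)
resid = X[1:] - X[:-1] @ Psi_hat.T
resid -= resid.mean(axis=0)                        # centre residuals

def bootstrap_once():
    """One replication: resample residuals, rebuild the series, re-estimate."""
    eps = resid[rng.integers(0, len(resid), size=n - 1)]
    Xb = np.zeros_like(X)
    Xb[0] = X[0]
    for k in range(n - 1):
        Xb[k + 1] = Psi_hat @ Xb[k] + eps[k]
    return estimate_psi(Xb)

boot = [np.linalg.norm(bootstrap_once() - Psi_hat) for _ in range(200)]
print("bootstrap quantiles of ||Psi* - Psi_hat||:", np.quantile(boot, [0.5, 0.95]))
```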
Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers.
While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation.
In this thesis we show that there exists an algorithm that, given a representation of a finite group over a number field, decides whether this representation can be made integral.
Moreover, we provide theoretical and numerical evidence for a conjecture, which predicts the existence of splitting fields of irreducible characters with integrality properties.
In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers.
Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time.
Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including computation of genera, testing local isomorphism, computation of various homomorphism rings and computation of Solomon zeta functions.
In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals.
By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality.
After running the algorithm on a large set of examples we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre we prove the conjecture for characters with rational character field and Schur index two.
The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.
Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well-known and widely used model, the so-called three-equation New Keynesian model, which is a system of equations consisting of a New Keynesian Phillips curve (NKPC), an investment and saving (IS) curve, and an interest rate rule.
We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously, which means that the costs resulting from changes in the interest rate are ignored. We also showed that approximation errors are made when non-linear equations are log-linearized in the derivation of the standard discrete-time NKPC.
We agreed with other researchers mentioned in this thesis that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of the output gap and the inflation rate.
To overcome this problem, we proposed a stochastic optimal impulse control method. We formulated the problem as a stochastic optimal impulse control problem by taking into account the costs of changes in the interest rate and the approximation error terms. To formulate this problem, we first transformed the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite and constant variance. After formulating this problem, we used the quasi-variational inequality approach to solve analytically a special case of the central bank problem, where the inflation rate is assumed to be on target and the central bank has to optimally control the output gap dynamics. This method gives an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal size of intervention and the optimal intervention time, that can be used to keep the process within the optimal control band.
Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility as well as in the fixed and proportional costs of changing the interest rate leads to an increase in the width of the optimal control band. In this case, the optimal intervention requires the central bank to wait longer before undertaking another control action.
In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored.
First, a class of backward equations under nonlinear expectations is investigated: Existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations.
Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established.
Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
This thesis deals with risk measures based on utility functions and with time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective with alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: A product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX ® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information are the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as being a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth.
The case of partial information is based on the use of filtering techniques. Conditions to ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: the effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters. For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous. For an increasing number of stocks, again the boundedness conditions need to be met. Losses in this case depend strongly on the added stocks. The relation of boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible in case more states are assumed than were originally present. Assuming fewer states leads to high losses. Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that shortselling and borrowing constraints are in effect, these constraints may lead to big losses depending on how much effect the constraints have. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical findings.
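As a rough illustration of the filtering step in the linear Gaussian drift case, a discrete-time Kalman filter applied to simulated returns; the Euler discretization and all parameter values are illustrative assumptions, not the setup of the thesis.

```python
# Minimal sketch: Kalman filter for an unobserved Ornstein-Uhlenbeck drift
# observed through noisy returns, dR_t = mu_t dt + sigma dW_t.
# Euler-discretized; all parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1 / 250, 1000
kappa, mu_bar, nu, sigma = 2.0, 0.05, 0.3, 0.2   # OU drift and return volatility

# simulate hidden drift and observed returns
mu = np.zeros(n); R = np.zeros(n)
for k in range(1, n):
    mu[k] = mu[k-1] + kappa*(mu_bar - mu[k-1])*dt + nu*np.sqrt(dt)*rng.normal()
    R[k] = mu[k]*dt + sigma*np.sqrt(dt)*rng.normal()

# Kalman recursion for m_k = E[mu_k | returns], p_k = conditional variance
m, p = mu_bar, nu**2 / (2*kappa)                  # stationary prior
m_hat = np.zeros(n)
for k in range(1, n):
    # prediction step (OU dynamics)
    m_pred = m + kappa*(mu_bar - m)*dt
    p_pred = (1 - kappa*dt)**2 * p + nu**2 * dt
    # update step with observation R_k ~ N(m_pred*dt, sigma^2*dt + p_pred*dt^2)
    S = sigma**2*dt + p_pred*dt**2
    K = p_pred*dt / S
    m = m_pred + K*(R[k] - m_pred*dt)
    p = p_pred - K*dt*p_pred
    m_hat[k] = m

print("RMSE of the drift filter:", np.sqrt(np.mean((m_hat - mu)**2)))
```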
In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize first–discretize then setting. For the discretization we use isogeometric analysis (IGA) to solve the state equation, and we search for optimal designs in a space of admissible B-spline or NURBS combinations. Thus a quite general class of functions for representing optimal shapes is available. For the gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between IGA and the geometry representation offers new ways of refining the geometry and the analysis discretization by the same means. Therefore, our main concern is to develop the optimize first framework for isogeometric shape optimization as groundwork for both implementation and an error analysis. Numerical examples show that this ansatz is practical, and case studies indicate that it allows local refinement.
The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).
In this thesis, we investigate several issues arising in the context of conceiving and building a decision support system. We elaborate new algorithms for computing representative systems with special quality guarantees, provide concepts for supporting the decision makers after a representative system has been computed, and consider a methodology for combining two optimization problems.
We review the original Box-Algorithm for two objectives by Hamacher et al. (2007) and discuss several extensions regarding coverage, uniformity, the enumeration of the whole nondominated set, and necessary modifications if the underlying scalarization problem cannot be solved to optimality. In a next step, the original Box-Algorithm is extended to the case of three objective functions to compute a representative system with desired coverage error. Besides the investigation of several theoretical properties, we prove the correctness of the algorithm, derive a bound on the number of iterations needed by the algorithm to meet the desired coverage error, and propose some ideas for possible extensions.
Furthermore, we investigate the problem of selecting a subset with desired cardinality from the computed representative system, the Hypervolume Subset Selection Problem (HSSP). We provide two new formulations for the bicriteria HSSP, a linear programming formulation and a \(k\)-link shortest path formulation. For the latter formulation, we propose an algorithm for which we obtain the currently best known complexity bound for solving the bicriteria HSSP. For the tricriteria HSSP, we propose an integer programming formulation with a corresponding branch-and-bound scheme.
Moreover, we address the issue of how to present the whole set of computed representative points to the decision makers. Based on common illustration methods, we elaborate an algorithm guiding the decision makers in choosing their preferred solution.
Finally, we step back and look from a meta-level on the issue of how to combine two given optimization problems and how the resulting combinations can be related to each other. We come up with several different combined formulations and give some ideas for the practical approach.
The overall goal of this work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized by the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\), continuum flow) or large (\(Kn > 1\), molecular flow). The transition region (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
Continuum flows are mainly simulated using commercial CFD methods, which solve the Euler equations. In the case of molecular flows one uses statistical methods, such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region the Euler equations are not adequate to model gas flows, and because of the rapid increase of particle collisions the DSMC method tends to fail as well.
Therefore, we develop a deterministic method, which is suitable to simulate problems of rarefied gases for any Knudsen number and is appropriate to simulate flows inside geometries with moving boundaries. Thus, the method we use is the Finite Pointset Method (FPM), which is a mesh-free numerical method developed at the ITWM Kaiserslautern and is mainly used to solve fluid dynamical problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation. This equation is mainly used to describe rarefied flows.
The FPM-based method is implemented for one- and two-dimensional physical and velocity space and different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions obtained with CFD and DSMC methods, respectively.
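For orientation, a minimal grid-based discrete-velocity BGK sketch in one space and one velocity dimension (explicit upwind transport plus relaxation toward the local Maxwellian); this is deliberately not the mesh-free FPM scheme of the thesis, and all parameters are illustrative.

```python
# Minimal 1D1V BGK sketch: df/dt + v df/dx = (M[f] - f)/tau
# Upwind transport plus explicit relaxation toward the local Maxwellian.
# Grid-based toy code, NOT the mesh-free FPM method used in the thesis.
import numpy as np

nx, nv = 200, 40
x = np.linspace(0.0, 1.0, nx)
v = np.linspace(-5.0, 5.0, nv)
dx, dv = x[1] - x[0], v[1] - v[0]
tau, dt, steps = 0.05, 2e-4, 500              # relaxation time ~ Knudsen number

def maxwellian(rho, u, T):
    """Local Maxwellian on the velocity grid for given moments."""
    return rho[:, None] / np.sqrt(2*np.pi*T[:, None]) \
        * np.exp(-(v[None, :] - u[:, None])**2 / (2*T[:, None]))

# initial condition: density jump, uniform temperature, zero mean velocity
rho0 = np.where(x < 0.5, 1.0, 0.125)
f = maxwellian(rho0, np.zeros(nx), np.ones(nx))

for _ in range(steps):
    # moments of f
    rho = f.sum(axis=1) * dv
    u = (f @ v) * dv / rho
    T = (f @ v**2) * dv / rho - u**2
    # first-order upwind transport in x (simple outflow boundaries)
    fp = np.where(v > 0, f, 0.0)              # right-moving part
    fm = np.where(v < 0, f, 0.0)              # left-moving part
    dfdx = np.empty_like(f)
    dfdx[1:] = (fp[1:] - fp[:-1]) / dx
    dfdx[0] = 0.0
    dfdx[:-1] += (fm[1:] - fm[:-1]) / dx
    # BGK update
    f = f - dt * v[None, :] * dfdx + dt / tau * (maxwellian(rho, u, T) - f)

print("final density range:", f.sum(axis=1).min()*dv, f.sum(axis=1).max()*dv)
```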
In this dissertation, we discuss how to price American-style options. Our aim is to study and improve the regression-based Monte Carlo methods. In order to have good benchmarks to compare with them, we also study the tree methods.
In the second chapter, we investigate the tree methods specifically. We work first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options, and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we have good benchmarks for various American-style options and collect them all in the appendix.
In the third chapter, we focus on the regression-based Monte Carlo methods theoretically and numerically. First, we introduce two variations, the so-called Tsitsiklis-Roy method and the Longstaff-Schwartz method. Second, we illustrate the approximation of an American option by its Bermudan counterpart. Third, we explain the sources of low and high bias. Fourth, we compare these two methods using in-the-money paths and all paths. Fifth, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the lower and upper bounds.
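For reference, a minimal Longstaff-Schwartz sketch for a Bermudan put in the Black-Scholes model (polynomial basis, regression on in-the-money paths only); the parameter values are illustrative.

```python
# Minimal Longstaff-Schwartz sketch: Bermudan put under Black-Scholes,
# polynomial basis functions, regression on in-the-money paths only.
import numpy as np

rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths, degree = 50, 100_000, 3
dt = T / steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
Z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*Z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                       # value if held to maturity

# backward induction with regression of continuation values
for t in range(steps - 1, 0, -1):
    cash *= disc
    itm = payoff(S[:, t]) > 0
    if itm.sum() > degree + 1:
        coeff = np.polyfit(S[itm, t], cash[itm], degree)
        cont = np.polyval(coeff, S[itm, t])   # estimated continuation value
        exercise = payoff(S[itm, t]) > cont
        cash[itm] = np.where(exercise, payoff(S[itm, t]), cash[itm])

price = disc * cash.mean()
print("LS price of the Bermudan put:", round(price, 4))
```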
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum, and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy method and the Longstaff-Schwartz method. The support vector machine can further improve the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. The test of this formula within the Black-Scholes model shows that the storage of data is reduced and also the corresponding energy consumption.
The aim of this dissertation is the development and implementation of an algorithm for computing tropical varieties over general valued fields. Computing tropical varieties over fields with trivial valuation is a sufficiently solved problem. For this, the authors Bogart, Jensen, Speyer, Sturmfels, and Thomas impressively combine classical techniques of computer algebra with constructive methods of convex geometry.
If, however, the ground field carries a non-trivial valuation, such as the field of \(p\)-adic numbers \(\mathbb{Q}_p\), conventional Gröbner basis theory seemingly reaches its limits. The underlying monomial orderings are not suited for studying problems that depend on a non-trivial valuation on the coefficients. This has led to a series of works that modify standard Gröbner basis theory in order to incorporate the valuation of the ground field.
In this thesis we present an alternative approach and show how the valuation can be emulated by means of a specially introduced variable, so that no modification of the classical tools is necessary.
In the course of this, the theory of standard bases is generalized to power series over a coefficient ring. Particular care is taken that all algorithms coincide with their classical counterparts for polynomial input data, so that established software systems can be used for practical purposes. Furthermore, the construction of the Gröbner fan and the technique of the Gröbner walk are introduced for slightly inhomogeneous ideals. This is necessary because introducing the new variable breaks the homogeneity of the initial ideal.
All algorithms have been implemented in Singular and are available as part of the official distribution. It is the first implementation capable of computing tropical varieties with \(p\)-adic valuation. In the course of this work, a Singular package for convex geometry as well as an interface to Polymake were also developed.
In some processes for spinning synthetic fibers the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there is a gap between the elongation measured in experiments and the elongation obtained by numerical simulations available in the literature.
The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow and the application in turbulent spinning processes.
In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying the resulting stochastic aerodynamic drag force to the fibers then allows the simulation of the fiber dynamics, modeled by a random partial differential algebraic equation system, as well as a quantification of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we get an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.
Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called generalized Appell system, is applicable to the Mittag-Leffler measures, instead of using Wick ordered polynomials. With the help of an Appell system, we introduce a test function and a distribution space. Furthermore we give characterizations of the distribution space and we characterize the weak integrable functions and the convergent sequences within the distribution space. We construct Donsker's delta in a non-Gaussian setting as an application.
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.
The Wilkie model is a stochastic asset model developed by A.D. Wilkie in 1984 with the purpose of exploring the behaviour of investment factors of insurers within the United Kingdom. Even so, there is so far no analysis that studies the Wilkie model in a portfolio optimization framework. The Wilkie model was originally formulated over a discrete-time horizon, and we apply its concept to develop a suitable ARIMA model for Malaysian data by using the Box-Jenkins methodology. We obtain the estimated parameters for each sub-model within the Wilkie model that suits the case of Malaysia, which permits us to analyse the results from both a statistical and an economic point of view. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. The continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution to the Hamilton-Jacobi equation, and finally the verification theorem. In finding the optimal portfolio, we obtain the specific solution of the Hamilton-Jacobi equation and prove the solution via the verification theorem. For a simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
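As a small illustration of the Box-Jenkins step, an ARIMA(1,0,0) fit to a simulated inflation-like series with statsmodels; the series and the model order are made up and do not correspond to the Malaysian data.

```python
# Minimal Box-Jenkins sketch: fit an ARIMA(1,0,0) model to a simulated
# inflation-like series, in the spirit of the Wilkie sub-models.
# The data and the model order are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n, mu, phi, sig = 300, 0.03, 0.6, 0.01        # AR(1) around a mean inflation level
y = np.empty(n); y[0] = mu
for t in range(1, n):
    y[t] = mu + phi*(y[t-1] - mu) + sig*rng.normal()

model = ARIMA(y, order=(1, 0, 0)).fit()
print(model.params)                            # estimated constant, AR(1) coefficient, noise variance
```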
Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*} (P_{1,\tau}) \qquad & \mathop{\rm argmin}_{x \in \mathbb{R}^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\}, \\ (P_{2,\lambda}) \qquad & \mathop{\rm argmin}_{x \in \mathbb{R}^n} \big\{ \Phi(x) + \lambda \Psi(x) \big\}, \quad \lambda > 0, \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: first, study the solver sets \({\rm SOL}(P_{1,\tau})\) and \({\rm SOL}(P_{2,\lambda})\) of the minimization problems; second, ensure that the minimization problems have solutions. This thesis provides contributions to both tasks. Regarding the first task, for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the set-valued curves \begin{align*} \tau &\mapsto {\rm SOL}(P_{1,\tau}), \quad \tau \in (0,c), \\ \lambda &\mapsto {\rm SOL}(P_{2,\lambda}), \quad \lambda \in (0,d), \end{align*} are the same, up to an order-reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover we show that the solver sets are changing all the time while \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
In the presence of lower semicontinuity the second task is accomplished if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity.
Dropping any lower semicontinuity assumption we also prove a theorem on the coercivity of a sum of functions.
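A small numerical illustration of the relation between \((P_{1,\tau})\) and \((P_{2,\lambda})\) for a smooth convex toy choice \(\Phi(x)=\|Ax-b\|^2\), \(\Psi(x)=\|x\|^2\); this example is far more special than the setting of the thesis and only meant to make the parameter correspondence tangible.

```python
# Toy illustration of (P_{1,tau}) vs (P_{2,lambda}) for
# Phi(x) = ||Ax - b||^2 and Psi(x) = ||x||^2 (ridge-type setting).
# Solving the penalized problem and then using tau = Psi(x*) as the
# constraint level recovers (numerically) the same solution.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
Phi = lambda x: np.sum((A @ x - b)**2)
Psi = lambda x: np.sum(x**2)

# (P_{2,lambda}): penalized problem for some lambda > 0
lam = 5.0
x_pen = minimize(lambda x: Phi(x) + lam*Psi(x), np.zeros(10)).x

# (P_{1,tau}) with tau = Psi(x_pen): constrained problem
tau = Psi(x_pen)
x_con = minimize(Phi, np.zeros(10), method='SLSQP',
                 constraints={'type': 'ineq', 'fun': lambda x: tau - Psi(x)}).x

print("||x_pen - x_con|| =", np.linalg.norm(x_pen - x_con))
```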
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, digital cameras, etc. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight and high energy density, no memory effect, and a big number of charge/discharge cycles. The high demand and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have porous structure composed of two phases - solid and electrolyte. We call macroscale the lengthscale of the whole electrode and microscale - the lengthscale at which we can distinguish the complex porous structure of the electrodes. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion type of equations for the transport of Lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode so that we do not have to resolve all the small-scale features of the porous microstructure thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted only to periodic media and periodically oscillating solutions while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions. We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
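As a pointer to the interface kinetics, a sketch of a Butler-Volmer current density in a commonly used form; the concentration dependence of the exchange current density and all parameter values are illustrative assumptions, not the exact parametrization of the thesis.

```python
# Minimal sketch of Butler-Volmer interface kinetics in a common form:
#   j = j0 * (exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T))),
# with an exchange current density j0 depending on local concentrations.
# Parameter values and the concentration dependence are illustrative.
import numpy as np

F, R = 96485.0, 8.314          # Faraday constant [C/mol], gas constant [J/(mol K)]

def butler_volmer(eta, c_e, c_s, c_s_max, T=298.15, k=2e-6,
                  alpha_a=0.5, alpha_c=0.5):
    """Interface current density [A/m^2] for overpotential eta [V]."""
    j0 = k * F * np.sqrt(c_e) * np.sqrt(c_s) * np.sqrt(c_s_max - c_s)
    return j0 * (np.exp(alpha_a*F*eta/(R*T)) - np.exp(-alpha_c*F*eta/(R*T)))

eta = np.linspace(-0.05, 0.05, 5)            # overpotentials in volt
print(butler_volmer(eta, c_e=1000.0, c_s=10000.0, c_s_max=30000.0))
```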
Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life - they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects - heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as the individual heat sources, in order to assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches - small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and with Neumann boundary conditions for the diffusion equation. We also run tests for cylindrical particles, but with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material depending on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress from known physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities where we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method. The nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
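To fix ideas, a minimal single-harmonic balance sketch for a Duffing oscillator (Galerkin projection of the residual onto one harmonic and a root solve for the Fourier coefficients); this toy ODE merely stands in for the large discretized systems treated in the thesis.

```python
# Minimal harmonic balance sketch for a Duffing oscillator
#   x'' + c x' + k x + g x^3 = A cos(w t),
# with a one-harmonic ansatz x(t) = a cos(w t) + b sin(w t).
# The residual is projected onto cos and sin by numerical quadrature.
import numpy as np
from scipy.optimize import fsolve

c, k, g, A = 0.1, 1.0, 0.5, 0.3                        # illustrative parameters
ts = np.linspace(0.0, 2*np.pi, 256, endpoint=False)    # one period in phase tau = w t

def residual(coeffs, w):
    a, b = coeffs
    x = a*np.cos(ts) + b*np.sin(ts)
    dx = w * (-a*np.sin(ts) + b*np.cos(ts))
    ddx = w**2 * (-a*np.cos(ts) - b*np.sin(ts))
    r = ddx + c*dx + k*x + g*x**3 - A*np.cos(ts)
    # Galerkin projection of the residual onto the retained harmonics
    return [np.mean(r*np.cos(ts)), np.mean(r*np.sin(ts))]

# sweep the excitation frequency and track the response amplitude
for w in (0.6, 0.8, 1.0, 1.2, 1.4):
    a, b = fsolve(residual, x0=[0.1, 0.0], args=(w,))
    print(f"w = {w:.1f}  amplitude = {np.hypot(a, b):.3f}")
```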
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.
This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention on Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-Splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretization. Using NURBS as basis functions, high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step, and mesh refinement is easily performed by eliminating the need for communication with the geometry representation in a Computer-Aided Design (CAD) tool.
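As a small aside on the NURBS machinery, a sketch that evaluates a rational B-spline curve as a ratio of two ordinary B-splines built with scipy; the quarter-circle data are a standard textbook example, and scipy stands in for the isogeometric tools actually used.

```python
# Minimal sketch: evaluating a NURBS curve as a ratio of two B-splines.
# Example: exact quarter circle, degree 2, weights (1, 1/sqrt(2), 1).
import numpy as np
from scipy.interpolate import BSpline

degree = 2
knots = np.array([0, 0, 0, 1, 1, 1], dtype=float)       # clamped knot vector
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # control points
w = np.array([1.0, 1/np.sqrt(2), 1.0])                  # weights

num = BSpline(knots, w[:, None] * ctrl, degree)   # spline of weighted control points
den = BSpline(knots, w, degree)                   # weight function

u = np.linspace(0.0, 1.0, 200, endpoint=False)
curve = num(u) / den(u)[:, None]

radii = np.linalg.norm(curve, axis=1)
print("max deviation from the unit circle:", np.abs(radii - 1.0).max())
```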
Quite often the domain consists of several patches where each patch is parametrized by means of NURBS, and these patches are then glued together by means of continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. Starting point of our work is the bending strip method which is a penalty approach that adds extra stiffness to the interface between adjacent patches and which is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the penalty parameter dependence. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling some regularity requirements.
The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction.
This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems, and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska transaction costs.
For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula instead of the more restrictive, traditional approach via quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively. In addition to the analytical results, we study a novel application of the menu cost problem in management science. We consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much to invest in marketing.
For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach, leading-order optimal strategies are constructed and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established.
In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. This thesis arises in the framework of the project "Robust Risk Estimation". Starting from July 2011, this project received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in Financial Mathematics (operational and liquidity risk), Medicine (length of stay and cost), and Hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics.
Within the project, issues arise in each of these applications which can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. To this end, this thesis makes a contribution.
This work consists of three main parts. The first part is focused on the basic notions and gives an overview of the existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, which is an important achievement of our project work. The second part of the thesis presents a deeper analysis of the basic models and tools used to achieve the main results of the research.
The second part is the most important part of the thesis and contains our personal contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The mentioned applications use time structure (e.g. hydrology), therefore we provide extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies. In the first one, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second one, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models. More precisely, since the classical procedures are not appropriate in the case of outlier presence, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking neighborhood approach we need some smoothness, therefore for the second strategy we derive smoothness of the generalized linear model in terms of L2 differentiability and establish sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we set up the time dependence in these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher-dimensional parameter and to the case of regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions. Finally, we create an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and retain points of particular importance.
In the third part of the thesis we discuss the three applications, operational risk, hospitalization times, and hydrological river discharge data, apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.
We present a numerical scheme to simulate a moving rigid body with arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings, so that a two-way coupling is modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains. We present a 1-dimensional actuator problem, a 2-dimensional cavity driven flow problem, the Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. We compare the numerical results obtained from the simulations with existing theories in each test example.
In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time for testing. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important topic; instead the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that the damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the interval of plane angles to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters for the new model are derived.
When we model the damage by Gaussian function, the total damage is computed as a sum of Gaussian functions. The plane with the maximum damage is similar to the modes of the Gaussian Mixture Models (GMM), the difference being that the Gaussian functions used in GMM are probability density functions which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum for Gaussian functions, similar to the ones given for the unimodality of GMM by Aprausheva et al. in [1].
By using the conditions for a single maximum we give a clustering algorithm that merges the Gaussian functions in the sum as clusters. Each cluster obtained through clustering is such that it has a single maximum in the absence of other Gaussian functions of the sum. The approximate point of the maximum of each cluster is used as the starting point for a fixed point equation on the original damage function to get the actual maximum total damage at each iteration.
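For illustration, a sketch of the fixed-point idea for locating a maximum of a sum of one-dimensional Gaussian functions, started from a cluster's approximate mode; the weights, centres and spreads below are made up and not the damage model of the thesis.

```python
# Minimal sketch: fixed-point (mean-shift type) iteration for a maximum of
#   D(x) = sum_i w_i * exp(-(x - m_i)^2 / (2 s_i^2)),
# started from the approximate mode of one cluster. Toy data only.
import numpy as np

w = np.array([1.0, 0.8, 0.5])        # cluster weights (illustrative)
m = np.array([0.1, 0.25, 1.3])       # centres, e.g. plane angles in radians
s = np.array([0.15, 0.2, 0.1])       # spreads

def D(x):
    return np.sum(w * np.exp(-(x - m)**2 / (2 * s**2)))

def fixed_point_max(x0, tol=1e-10, max_iter=200):
    """Iterate the stationarity condition D'(x) = 0 rewritten as x = g(x)."""
    x = x0
    for _ in range(max_iter):
        g = w * np.exp(-(x - m)**2 / (2 * s**2)) / s**2
        x_new = np.sum(g * m) / np.sum(g)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

x_star = fixed_point_max(x0=0.2)     # start near the first cluster of modes
print("local maximum at", round(x_star, 4), "with value", round(D(x_star), 4))
```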
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) with the clustering approach for finding the maximum total damage requires fewer iterations and is more reliable than the discretization approach.
In this thesis, we combine Groebner basis techniques with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses.
Combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact input for the Groebner basis computations, we can significantly reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition, the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal directly is inefficient when we want to eliminate most of the variables from a large system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to compute the reduced Groebner basis associated with the projection.
We also optimize the Buchberger-Moeller algorithm for the lexicographical ordering and compare it with Brickenstein's interpolation algorithm.
Finally, we apply Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify global properties of a multiplier and an FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.
Multilevel Constructions
(2014)
The thesis consists of two chapters.
The first chapter is devoted to a detailed investigation of the multilevel Monte Carlo (MLMC) method. In particular, we take an optimization view of the estimator: rather than fixing the numbers of discretization points \(n_i\) to be a geometric sequence, we try to find an optimal choice of the \(n_i\) such that, for a fixed error, the estimate can be computed within minimal time.
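For orientation, a minimal generic multilevel Monte Carlo sketch in Python is given below; it uses the classical geometric levels and a coupled Euler scheme for a geometric Brownian motion, whereas the thesis optimizes the discretization points \(n_i\) themselves. All model parameters and sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def coupled_level(l, n_samples, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """One MLMC correction batch Y_l = P_fine - P_coarse on a shared Brownian path."""
    n_fine = 2 ** (l + 1)
    dt = T / n_fine
    dw = np.sqrt(dt) * rng.standard_normal((n_samples, n_fine))
    s_f = np.full(n_samples, s0)
    for k in range(n_fine):                              # fine Euler scheme
        s_f = s_f + mu * s_f * dt + sigma * s_f * dw[:, k]
    if l == 0:
        return s_f
    s_c = np.full(n_samples, s0)
    dw_c = dw[:, 0::2] + dw[:, 1::2]                     # coarse increments from the same path
    for k in range(n_fine // 2):                         # coarse Euler scheme
        s_c = s_c + mu * s_c * (2 * dt) + sigma * s_c * dw_c[:, k]
    return s_f - s_c

def mlmc_estimate(levels=5, n_per_level=(4000, 2000, 1000, 500, 250)):
    # telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
    return sum(np.mean(coupled_level(l, n)) for l, n in zip(range(levels), n_per_level))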
In the second chapter we propose to enhance the MLMC estimator with the weak extrapolation technique. This technique improves the order of weak convergence of a scheme and, as a result, reduces the computational cost of the estimator. In particular, we study the high order weak extrapolation approach, which is known to be inefficient in the standard setting. However, a combination of MLMC with weak extrapolation yields an improvement of the MLMC estimator.
Interest-rate-optimized debt management aims to find the most efficient trade-off between the expected financing costs on the one hand and the risks for the public budget on the other. To approach this tension, we build, for the first time, a bridge between the problems of debt management and the methods of continuous-time dynamic portfolio optimization.
The key element is a new metric for measuring financing costs, the perpetual costs. These reflect the average future financing costs and comprise both the already known interest payments and the still unknown costs of necessary refinancing. The volatility of the perpetual costs therefore also represents the risk of a given strategy; the longer the term of a financing, the smaller the fluctuation of the perpetual costs.
The perpetual costs are given as the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. To model the present value, we use the concept of a self-financing bond portfolio, known from dynamic portfolio optimization, based here on a multidimensional affine-linear interest rate model. The growth of the debt portfolio is slowed down or prevented by including the government's primary surplus as an external inflow into the self-financing model.
Because of the variety of possible financing instruments, we do not choose their portfolio weights as control variables but instead control the sensitivities of the portfolio with respect to different interest rate movements. In a subsequent step, optimal portfolio weights for a wide range of financing instruments can then be derived from the optimal sensitivities. We demonstrate this exemplarily using rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems using methods from stochastic control theory. In both cases the expected utility of the perpetual costs is maximized. The utility functions are tailored to debt management and are characterized in particular by the property that higher costs correspond to lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed debt or cost ceiling.
Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to keep the main advantage of the Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (the zero variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter if light- or heavy-tailed. The numerical study exhibits a greater variance reduction than the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as shown in the numerical study.
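The following toy Python sketch shows the cross-entropy idea in its simplest form: a one-parameter Gaussian family is tilted towards the rare event by a weighted moment match, and the tail probability is then estimated with the corresponding likelihood ratio. The loss function, threshold and single-stage update are illustrative simplifications, not the algorithms developed in the thesis.

import numpy as np

rng = np.random.default_rng(1)

def loss(z):
    # toy stand-in for a portfolio loss in one risk factor
    return 3.0 * z + 0.5 * z ** 2

def ce_tail_probability(threshold=15.0, n_pilot=20000, n_final=20000):
    # pilot stage: cross-entropy update for a N(mu, 1) family under the indicator
    z = rng.standard_normal(n_pilot)
    hit = loss(z) > threshold
    if not hit.any():
        raise RuntimeError("threshold too extreme for the pilot stage")
    mu = z[hit].mean()
    # final stage: sample from N(mu, 1) and reweight with the likelihood ratio dN(0,1)/dN(mu,1)
    y = mu + rng.standard_normal(n_final)
    lr = np.exp(-mu * y + 0.5 * mu ** 2)
    return np.mean((loss(y) > threshold) * lr)

print(ce_tail_probability())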
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses but describes the corresponding transition only briefly. Hence, we give a coherent discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.
This thesis deals with some new aspects of continuous-time portfolio optimization using the stochastic control method.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and the resulting problem is solved explicitly for two types of intensity functions.
Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of a Vasicek short rate on the CPPI strategy has been studied by Horsky (2012). This part of the thesis extends his research by including a Markov switching parameter, and the generalization is based on the B\"{a}uerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem both with and without the money market account.
Finally, we apply the method used in the Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization problem with a stochastic benchmark.
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid for the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for a multidimensional setting.
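As a point of reference for the lattice approach discussed above, the following is a plain Cox-Ross-Rubinstein binomial tree for a European call in Python; the thesis constructs refined trees with better convergence behavior, while this is only the textbook scheme with illustrative parameters.

import numpy as np

def crr_call(s0, strike, r, sigma, T, n):
    """Price a European call on a CRR binomial tree with n steps."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)                   # risk-neutral up-probability
    disc = np.exp(-r * dt)
    prices = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    values = np.maximum(prices - strike, 0.0)            # payoff at maturity
    for _ in range(n):                                    # backward induction
        values = disc * (q * values[:-1] + (1 - q) * values[1:])
    return values[0]

print(crr_call(100.0, 100.0, 0.05, 0.2, 1.0, 500))       # close to the Black-Scholes price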
The dissertation "Portfoliooptimierung im Binomialmodell" (portfolio optimization in the binomial model) addresses the question to what extent the problem of optimal portfolio selection is solvable in the binomial model and to what extent the results carry over to the continuous model. Besides the classical model without costs and without changes of the market situation, model extensions are also investigated.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. The probably most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of physical and chemical processes occurring in the course of an accident is an interdisciplinary problem with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and a software tool based on it. The numerical simulations allow to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or minimizing the consequences of possible accidents. A very large number of scenarios have to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, where possible, analysed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the algorithm. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, which is a technique for separating particles at the submicron scale. This process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process, as, due to the nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: a huge dominance (up to \(10^6\) times) of the flow over the diffusion and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
Pedestrian Flow Models
(2014)
There have been many crowd disasters caused by poor planning of events. Pedestrian models are useful for analysing the behavior of pedestrians in advance of an event so that no pedestrians are harmed during the event. This thesis deals with pedestrian flow models on the microscopic, hydrodynamic and scalar scales. Following Hughes' approach, which describes the crowd as a thinking fluid, we use the solution of the Eikonal equation to compute the optimal path for pedestrians. We start with the microscopic model for pedestrian flow and then derive the hydrodynamic and scalar models from it. We use particle methods to solve the governing equations. Moreover, we have coupled a mesh-free particle method to a fixed grid for solving the Eikonal equation. We consider an example with a large number of pedestrians to investigate our models for different settings of obstacles and different parameters. We also consider the pedestrian flow in a straight corridor and through a T-junction and compare our numerical results with experiments. A part of this work is devoted to finding a mesh-free method to solve the Eikonal equation. Most of the available methods for solving the Eikonal equation are restricted to either Cartesian or triangulated grids. In this context, we propose a mesh-free method to solve the Eikonal equation which is applicable to arbitrary grids and useful for complex geometries.
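For illustration, a minimal grid-based Eikonal solver (Godunov upwind update with Gauss-Seidel sweeps, i.e. fast sweeping) is sketched below in Python; the thesis proposes a mesh-free solver, so this Cartesian-grid version only shows what the computed walking-time field looks like. The speed field and exit mask are assumptions of the example.

import numpy as np

def fast_sweeping(speed, h, exits, n_sweeps=8):
    """Solve |grad T| = 1/speed on a 2D Cartesian grid with spacing h.
    speed: 2D array of positive walking speeds; exits: boolean mask where T = 0."""
    T = np.full(speed.shape, np.inf)
    T[exits] = 0.0
    ny, nx = speed.shape
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    if exits[i, j]:
                        continue
                    a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])  # upwind y-neighbour
                    b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])  # upwind x-neighbour
                    f = h / speed[i, j]
                    if abs(a - b) >= f:
                        t_new = min(a, b) + f                                # one-sided update
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))  # Godunov update
                    T[i, j] = min(T[i, j], t_new)
    return T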
This thesis is devoted to the computational aspects of intersection theory and enumerative geometry. The first results are a Sage package Schubert3 and a Singular library schubert.lib which both provide the key functionality necessary for computations in intersection theory and enumerative geometry. In particular, we describe an alternative method for computations in Schubert calculus via equivariant intersection theory. More concretely, we propose an explicit formula for computing the degree of Fano schemes of linear subspaces on hypersurfaces. As a special case, we also obtain an explicit formula for computing the number of linear subspaces on a general hypersurface when this number is finite. This leads to a much better performance than classical Schubert calculus.
Another result of this thesis is related to the computation of Gromov-Witten invariants. The most powerful method for computing Gromov-Witten invariants is the localization of moduli spaces of stable maps. This method was introduced by Kontsevich in 1995. It allows us to compute Gromov-Witten invariants via Bott's formula. As an insightful application, we computed the numbers of rational curves on general complete intersection Calabi-Yau threefolds in projective spaces up to degree six. The results are all in agreement with predictions made from mirror symmetry.
In 2006 Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what is predicted by the popular conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Using our investigations of the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and establish a theoretical backup for these statements.
This thesis is divided into two parts. Both deal with multi-class image segmentation and utilize non-smooth optimization algorithms.
The topic of the first part, namely unsupervised segmentation, is the application of clustering to image pixels. We start with an introduction to the biconvex center-based clustering algorithms c-means and fuzzy c-means, where c denotes the number of classes. We show that fuzzy c-means can be seen as an approximation of c-means in terms of power means. Since noise is omnipresent in our image data, these simple clustering models are not suitable for its segmentation. Therefore, we introduce a general and finite dimensional segmentation model that consists of a data term stemming from the aforementioned clustering models plus a continuous regularization term. We tackle this optimization model via an alternating minimization approach called regularized c-centers (RcC): we fix the centers and optimize the segment membership of the pixels and vice versa. In this general setting, we prove convergence in the sense of set-valued algorithms using Zangwill's theory [172].
Further, we present a segmentation model with a total variation regularizer. While updating the cluster centers is straightforward for fixed segment memberships of the pixels, updating the segment membership can be solved iteratively via non-smooth, convex optimization. Here, we do not iterate a convex optimization algorithm until convergence; instead, we stop as soon as we have a certain amount of decrease in the objective functional in order to increase efficiency. This algorithm is a particular implementation of RcC, providing also the corresponding convergence theory. Moreover, we show the good performance of our method in various examples such as simulated 2d images of brain tissue and 3d volumes of two materials, namely a multi-filament composite superconductor and a carbon fiber reinforced silicon carbide ceramics. Here, we exploit the property of the latter material that two components have no common boundary in our adapted model.
The second part of the thesis is concerned with supervised segmentation. We leave the area of center-based models and investigate convex approaches related to graph p-Laplacians and reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to construct the graph. In practical experiments we show, on the one hand, image types that are better segmented by the p-Laplacian model and, on the other hand, images that are better segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach provides smoother results, while the RKHS approach often provides more accurate and detailed segmentations. Finally, we propose a novel combination of both approaches to benefit from the advantages of both models and study the performance on challenging medical image data.
In the last few years a lot of work has been done on the investigation of Brownian motion with point interaction(s) in one and higher dimensions. Roughly speaking, a Brownian motion with point interaction is nothing else than a Brownian motion whose generator is perturbed by a measure supported at a single point.
The purpose of the present work is to introduce curve interactions of the two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely often continuously differentiable functions with compact support in \(\mathbb{R}^{2}\) which are constantly 0 on the closed curve. We give a full description of all these self-adjoint extensions.
In the second chapter we prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
We use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes which behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case since there we cannot move while the Brownian motion is at the point.
By establishing an approximation of a curve interaction by operators of the form Laplacian \(+V_{n}\) with "nice" potentials \(V_{n}\), we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.
Constructing accurate earth models from seismic data is a challenging task. Traditional methods rely on ray based approximations of the wave equation and reach their limit in geologically complex areas. Full waveform inversion (FWI) on the other side seeks to minimize the misfit between modeled and observed data without such approximation.
While superior in accuracy, FWI uses a gradient-based iterative scheme that also makes it very computationally expensive. In this thesis we analyse and test an Alternating Direction Implicit (ADI) scheme in order to reduce the costs of the two-dimensional time domain algorithm for solving the acoustic wave equation. The ADI scheme can be seen as an intermediate between explicit and implicit finite difference modeling schemes. Compared to fully implicit schemes, the ADI scheme only requires the solution of much smaller matrices and is thus less computationally demanding. Using ADI we can handle coarser discretizations compared to an explicit method. Although the order of convergence and the CFL conditions for the examined explicit method and the ADI scheme are comparable, we observe that the ADI scheme is less prone to dispersion. Further, our algorithm is efficiently parallelized with vectorization and threading techniques. In a numerical comparison, we demonstrate a runtime advantage of the ADI scheme over an explicit method of the same accuracy.
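To make the splitting idea concrete, the following hedged Python sketch performs one Peaceman-Rachford ADI step for a 2D heat equation with zero Dirichlet boundaries, so that only tridiagonal systems are solved per direction; the thesis applies ADI to the acoustic wave equation, which this simplified example does not reproduce.

import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, alpha, dt, h):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy);
    u holds the interior grid values, zero Dirichlet boundary assumed."""
    n0, n1 = u.shape
    r = alpha * dt / (2.0 * h * h)

    def banded(n):                         # tridiagonal (I - r*D2) in banded storage
        ab = np.zeros((3, n))
        ab[0, 1:] = -r
        ab[1, :] = 1.0 + 2.0 * r
        ab[2, :-1] = -r
        return ab

    def d2(v, axis):                       # second difference with zero padding (Dirichlet)
        pad = ((1, 1), (0, 0)) if axis == 0 else ((0, 0), (1, 1))
        vp = np.pad(v, pad)
        return (vp[2:, :] - 2 * v + vp[:-2, :]) if axis == 0 else (vp[:, 2:] - 2 * v + vp[:, :-2])

    # half-step 1: implicit in x (axis 0), explicit in y (axis 1)
    u_half = solve_banded((1, 1), banded(n0), u + r * d2(u, axis=1))
    # half-step 2: implicit in y (axis 1), explicit in x (axis 0)
    return solve_banded((1, 1), banded(n1), (u_half + r * d2(u_half, axis=0)).T).T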
With the modeling in place, we test and compare several inverse schemes in the second part of the thesis. With the goal of avoiding local minima and improving the speed of convergence, we use different minimization functions and hierarchical approaches. In several tests, we demonstrate superior results of the L1 norm compared to the L2 norm, especially in the presence of noise. Furthermore, we show positive effects of applying three different multiscale approaches to the inverse problem. These methods focus on low frequencies, early recording times, or far offsets during early iterations of the minimization and then proceed iteratively towards the full problem. We achieve the best results with the frequency-based multiscale scheme, for which we also provide a heuristic method of choosing iteratively increasing frequency bands.
Finally, we demonstrate the effectiveness of the different methods first on the Marmousi model and then on an extract of the 2004 BP model, where we are able to recover both high contrast top salt structures and lower contrast inclusions accurately.
The use of trading stops is a common practice in financial markets for a variety of reasons: it provides a simple way to control losses on a given trade, while also ensuring that profit-taking is not deferred indefinitely; and it allows opportunities to consider reallocating resources to other investments. In this thesis, it is explained why the use of stops may be desirable in certain cases.
This is done by proposing a simple objective to be optimized. Some simple and commonly used rules for the placing and use of stops are investigated, consisting of fixed or moving barriers with fixed transaction costs. It is shown how to identify optimal levels at which to set stops, and the performances of different rules and strategies are compared. Uncertainty and changes of the drift parameter of the investment are also incorporated.
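A hedged toy illustration of comparing such rules by simulation is sketched below in Python: a position in a geometric Brownian motion is closed at the first hit of a fixed lower or upper barrier, with a fixed transaction cost. The objective (expected log return) and all parameters are illustrative assumptions, not the criterion proposed in the thesis.

import numpy as np

rng = np.random.default_rng(2)

def expected_log_return(stop, target, mu=0.08, sigma=0.25, cost=0.001,
                        T=1.0, n_steps=252, n_paths=20000):
    """Monte Carlo value of a fixed stop-loss / take-profit rule on a GBM asset."""
    dt = T / n_steps
    s = np.ones(n_paths)
    active = np.ones(n_paths, dtype=bool)
    exit_price = np.ones(n_paths)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s = s * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        hit = active & ((s <= stop) | (s >= target))     # barrier hit: close the position
        exit_price[hit] = s[hit]
        active &= ~hit
    exit_price[active] = s[active]                       # positions still open at T
    return np.mean(np.log(exit_price * (1 - cost)))

# compare two candidate barrier configurations
print(expected_log_return(0.90, 1.20), expected_log_return(0.95, 1.10))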
This thesis deals with generalized inverses, multivariate polynomial interpolation and approximation of scattered data. Moreover, it covers the lifting scheme, which basically links the aforementioned topics. For instance, determining filters for the lifting scheme is connected to multivariate polynomial interpolation. More precisely, sets of interpolation sites are required that can be interpolated by a unique polynomial of a certain degree. In this thesis a new class of such sets is introduced and elements from this class are used to construct new and computationally more efficient filters for the lifting scheme.
Furthermore, a method to approximate multidimensional scattered data is introduced which is based on the lifting scheme. A major task in this method is to solve an ordinary linear least squares problem which possesses a special structure. Exploiting this structure yields better approximations and therefore this particular least squares problem is analyzed in detail. This leads to a characterization of special generalized inverses with partially prescribed image spaces.
The application behind the subject of this thesis is multiscale simulation of highly heterogeneous particle-reinforced composites with large jumps in their material coefficients. Such simulations are used, e.g., for the prediction of elastic properties. As the underlying microstructures have very complex geometries, a discretization by means of finite elements typically involves very finely resolved meshes. The latter results in discretized linear systems of more than \(10^8\) unknowns which need to be solved efficiently. However, the variation of the material coefficients even on very small scales leads to the failure of most available methods when solving the arising linear systems. While robust domain decomposition methods have been developed for scalar elliptic problems of multiscale character, their extension and application to 3D elasticity problems needs to be further established.
The focus of the thesis lies in the development and analysis of robust overlapping domain decomposition methods for multiscale problems in linear elasticity. The method combines corrections on local subdomains with a global correction on a coarser grid. As the robustness of the overall method is mainly determined by how well small scale features of the solution can be captured on the coarser grid levels, robust multiscale coarsening strategies need to be developed which properly transfer information between fine and coarse grids.
We carry out a detailed and novel analysis of two-level overlapping domain decomposition methods for the elasticity problems. The study also provides a concept for the construction of multiscale coarsening strategies to robustly solve the discretized linear systems, i.e. with iteration numbers independent of variations in the Young's modulus and the Poisson ratio of the underlying composite. The theory also captures anisotropic elasticity problems and allows applications to multi-phase elastic materials with non-isotropic constituents in two and three spatial dimensions.
Moreover, we develop and construct new multiscale coarsening strategies and show why they should be preferred over standard ones on several model problems. In a parallel implementation (MPI) of the developed methods, we present applications to real composites and robustly solve discretized systems of more than \(200\) million unknowns.
Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. To compute it, one uses an algorithm by Zassenhaus, who introduced it in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later, Musser generalized it to the multivariate case. Subsequently, the algorithm was refined and improved.
In this work every step of the algorithm is described as well as the problems that arise in these steps.
In doing so, we restrict to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory which is part of the computer algebra system Singular.
Besides, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to compute Hensel lifting and recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capabilities of the implementation is given, which includes benchmark examples as well as randomly generated input that is supposed to give an impression of the average performance.
This thesis is concerned with tropical moduli spaces, which are an important tool in tropical enumerative geometry. The main result is a construction of tropical moduli spaces of rational tropical covers of smooth tropical curves and of tropical lines in smooth tropical surfaces. The construction of a moduli space of tropical curves in a smooth tropical variety is reduced to the case of smooth fans. Furthermore, we point out relations to intersection theory on suitable moduli spaces on algebraic curves.
The main purpose of the study was to improve the physical properties in the modelling of compressed materials, especially fibrous materials. Fibrous materials are finding increasing application in industry, and most of these materials are compressed for their different applications. In such situations, we are interested in how the fibres are arranged, e.g. which distribution they follow. For given materials it is possible to obtain a three-dimensional image via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the local fibre directions, can be estimated from the image by other methods, it is beneficial to improve the physical properties by changing these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere, which is as versatile as well-known distributions such as the von Mises-Fisher distribution or the Watson distribution and fits some models better. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation Maximization setting. Furthermore, we introduce a non-parametric estimation within the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
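As a hedged aside, the following Python snippet fits one of the classical spherical models mentioned above, a von Mises-Fisher distribution, by maximum likelihood using a common approximation for the concentration parameter; the thesis works with a more general parametric family and a full EM scheme, which this textbook special case does not cover.

import numpy as np

def fit_vmf(x):
    """x: (n, d) array of unit direction vectors.
    Returns the mean direction mu and an approximate ML estimate of the concentration kappa."""
    n, d = x.shape
    resultant = x.sum(axis=0)
    r_norm = np.linalg.norm(resultant)
    mu = resultant / r_norm                              # ML mean direction
    rbar = r_norm / n
    kappa = rbar * (d - rbar ** 2) / (1.0 - rbar ** 2)   # standard approximation for kappa
    return mu, kappa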
Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)
This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to 3, which renders the time integration a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. For the nonlinear model reduction to be efficient, we approximate the nonlinearity following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
This thesis is separated into three main parts: Development of Gaussian and White Noise Analysis, Hamiltonian Path Integrals as White Noise Distributions, Numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis, Donsker's delta function plays a key role. We investigate this generalized function in Chapter 2. Moreover, we show by giving a counterexample that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look at generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematically rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators which are related to applications for Feynman integrals, such as differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of so-called convolution operators. We generalize the central homomorphy theorem to regular generalized functions.
We generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives. With the help of this, we consider products of generalized functions in Chapter 5. We show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the square-integrable functions.
In Chapter 5 we discuss products of generalized functions and revisit the Wick formula. We investigate under which conditions and on which spaces the Wick formula can be generalized. At the end of the chapter we consider products of Donsker's delta function with a generalized function with the help of a measure transformation. Here, problems such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from Chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the T-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we show that the generalized expectation (the generating functional at zero) gives the Green's function of the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
In Chapter 8 we give an outlook on how the scaling approach, which is successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether the expression solves the Schrödinger equation. At least for quadratic potentials we obtain the right physics.
In the last chapter, we focus on the numerical analysis of polymer chains driven by fractional Brownian motion (fBm). Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve a long-range dependence of the interaction of the monomers inside a polymer chain. A Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded volume effect into account.
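The correlation-matrix discretization mentioned above can be illustrated by the following Python sketch, which samples a fractional Brownian motion path via the Cholesky factor of its covariance matrix; the Metropolis step and the excluded volume effect are omitted, and the parameters are illustrative.

import numpy as np

def fbm_path(n, hurst, T=1.0, rng=None):
    """Sample B_H(t_1), ..., B_H(t_n) at t_i = i*T/n via a Cholesky factorisation."""
    rng = rng or np.random.default_rng()
    t = T * np.arange(1, n + 1) / n
    s, tt = np.meshgrid(t, t)
    # fBm covariance: Cov(B_H(s), B_H(t)) = 0.5*(s^{2H} + t^{2H} - |t - s|^{2H})
    cov = 0.5 * (s ** (2 * hurst) + tt ** (2 * hurst) - np.abs(tt - s) ** (2 * hurst))
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(n)

path = fbm_path(256, hurst=0.7)   # H > 0.5 gives positively correlated increments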
Many real life problems have multiple spatial scales. In addition to the multiscale nature one has to take uncertainty into account. In this work we consider multiscale problems with stochastic coefficients.
We combine multiscale methods, e.g., mixed multiscale finite elements or homogenization, which are used for deterministic problems with stochastic methods, such as multi-level Monte Carlo or polynomial chaos methods.
The work is divided into three parts.
In the first two parts we study homogenization with different stochastic methods. To this end, we consider elliptic stationary diffusion equations with stochastic coefficients.
The last part is devoted to the study of mixed multiscale finite elements in combination with multi-level Monte Carlo methods. In the third part we consider multi-phase flow and transport equations.
The main topic of this thesis is to define and analyze a multilevel Monte Carlo algorithm for path-dependent functionals of the solution of a stochastic differential equation (SDE) which is driven by a square integrable, \(d_X\)-dimensional Lévy process \(X\). We work with standard Lipschitz assumptions and denote by \(Y=(Y_t)_{t\in[0,1]}\) the \(d_Y\)-dimensional strong solution of the SDE.
We investigate the computation of expectations \(S(f) = \mathrm{E}[f(Y)]\) using randomized algorithms \(\widehat S\). Thereby, we are interested in the relation of the error and the computational cost of \(\widehat S\), where \(f:D[0,1] \to \mathbb{R}\) ranges in the class \(F\) of measurable functionals on the space of càdlàg functions on \([0,1]\), that are Lipschitz continuous with respect to the supremum norm.
We consider as error \(e(\widehat S)\) the worst case of the root mean square error over the class of functionals \(F\). The computational cost of an algorithm \(\widehat S\), denoted \(\mathrm{cost}(\widehat S)\), should represent the runtime of the algorithm on a computer. We work in the real number model of computation and further suppose that evaluations of \(f\) are possible for piecewise constant functions in time units according to its number of breakpoints.
We state strong error estimates for an approximate Euler scheme on a random time discretization. With these strong error estimates, the multilevel algorithm leads to upper bounds for the convergence order of the error with respect to the computational cost. The main results can be summarized in terms of the Blumenthal-Getoor index of the driving Lévy process, denoted by \(\beta\in[0,2]\). For \(\beta <1\) and no Brownian component present, we almost reach convergence order \(1/2\), which means that there exists a sequence of multilevel algorithms \((\widehat S_n)_{n\in \mathbb{N}}\) with \(\mathrm{cost}(\widehat S_n) \leq n\) such that \( e(\widehat S_n) \precsim n^{-1/2}\). Here, by \( \precsim\), we denote a weak asymptotic upper bound, i.e. the inequality holds up to an unspecified positive constant. If \(X\) has a Brownian component, the order has an additional logarithmic term, in which case we reach \( e(\widehat S_n) \precsim n^{-1/2} \, (\log(n))^{3/2}\).
For the special subclass of \(Y\) being the Lévy process itself, we also provide a lower bound which, up to a logarithmic term, recovers the order \(1/2\), i.e., neglecting logarithmic terms, the multilevel algorithm is order optimal for \( \beta <1\).
An empirical error analysis via numerical experiments matches the theoretical results and completes the analysis.
This thesis is devoted to furthering tropical intersection theory as well as to applying the developed theory to gain new insights about tropical moduli spaces.
We use piecewise polynomials to define tropical cocycles that generalise the notion of tropical Cartier divisors to higher codimensions, introduce an intersection product of cocycles with tropical cycles and use the connection to toric geometry to prove a Poincaré duality for certain cases. Our main application of this Poincaré duality is the construction of intersection-theoretic fibres under a large class of tropical morphisms.
We construct an intersection product of cycles on matroid varieties, which are a natural generalisation of tropicalisations of classical linear spaces and the local blocks of smooth tropical varieties. The key ingredient is the ability to express a matroid variety contained in another matroid variety by a piecewise polynomial that is given in terms of the rank functions of the corresponding matroids. In particular, this enables us to intersect cycles on the moduli spaces of n-marked abstract rational curves. We also construct a pull-back of cycles along morphisms of smooth varieties, relate pull-backs to tropical modifications and show that every cycle on a matroid variety is rationally equivalent to its recession cycle and can be cut out by a cocycle.
Finally, we define families of smooth rational tropical curves over smooth varieties and construct a tropical fibre product in order to show that every morphism of a smooth variety to the moduli space of abstract rational tropical curves induces a family of curves over the domain of the morphism. This leads to an alternative, inductive way of constructing moduli spaces of rational curves.
Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)
We consider a continuous time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. E.g., if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. Also in other cases, like for an underlying Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible or finite dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed. E.g., we use statistical linearization for non-linear drift processes, finite-state Markov chain approximations for the drift process and/or diffusion approximations for small jumps in the noise term. In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular in view of the performance of the corresponding utility maximizing strategies.
In this thesis we consider the problem of maximizing the growth rate with proportional and fixed costs in a framework with one bond and one stock, which is modeled as a jump diffusion with compound Poisson jumps. Following the approach from [1], we prove that in this framework it is optimal for an investor to follow a CB-strategy. The boundaries depend only on the parameters of the underlying stock and bond. It is then natural to ask how often an investor who follows a CB-strategy, given by the stopping times \((\tau_i)_{i\in\mathbb N}\) and impulses \((\eta_i)_{i\in\mathbb N}\), has to rebalance. In other words, we want to obtain the limit of the inter-trading times
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n(\tau_{i+1}-\tau_{i}).
\]
We are able to obtain this limit, which is given by the expected first exit time of the risky fraction process from some interval under the invariant measure of the Markov chain \((\eta_i)_{i\in\mathbb N}\), using the ergodic theorems of von Neumann and Birkhoff. In general, it is difficult to obtain the expectation of the first exit time for a process with jumps. Because of the jump part, an overshoot may occur when the process crosses the boundaries of the interval, which makes it difficult to obtain the distribution. Nevertheless, we can obtain the first exit time if the process has only negative jumps by using scale functions. The main difficulty of this approach is that the scale functions are known only up to their Laplace transforms. In [2] and [3] a closed-form expression for the scale function of a Lévy process with phase-type distributed jumps is obtained. Phase-type distributions form a rich class of positive-valued distributions: the exponential, hyperexponential, Erlang, hyper-Erlang and Coxian distributions. Since the scale function is then given in closed form, we can differentiate it to obtain the expected first exit time explicitly using the fluctuation identities.
[1] Irle, A. and Sass, J.: Optimal portfolio policies under fixed and proportional transaction costs, Advances in Applied Probability 38, 916-942.
[2] Egami, M. and Yamazaki, K.: On scale functions of spectrally negative Lévy processes with phase-type jumps, working paper, July 3.
[3] Egami, M. and Yamazaki, K.: Precautionary measures for credit risk management in jump models, working paper, June 17.
The goal of this work is to develop a simulation-based algorithm allowing the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding properties of the fibers. This method can be used for the optimization of the microstructure, in order to obtain a better stiffness or strength of the corresponding fiber material later on. An additional aspect of the thesis is that we want to take into account the microcontacts between the fibers of the textile. A further aspect is accounting for the thickness of thin fibers in the textile. The introduction of an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to 1-dimensional problems, which reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts between the fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact linear elasticity problem is considered. A two-scale approach is used to obtain the effective mechanical properties.
An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotics with respect to the in-plane period and the characteristic diameter of the fibers. This allows us to arrive at the equivalent homogenized problem and to reduce the dimension of the auxiliary problems. Subsequent numerical simulations of the cell problems give the effective material properties of the textile.
The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer is also studied. By introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition are deduced. The boundary condition is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.
FiberFEM, a C++ finite element code for solving contact elasticity problems, is developed. It implements the algorithm for the contact between fibers proposed in the thesis.
Numerical examples for the homogenization of geotextiles and woven fabrics are obtained by applying the developed algorithm. The effective material moduli are computed numerically from the finite element solutions of the auxiliary contact problems obtained with FiberFEM.
This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive, utility-based approach. It relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff.
However, in the presence of proportional transaction costs such a utility-based approach to the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to prove the FTAP under proportional transaction costs.
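For comparison, the frictionless utility-based construction mentioned above can be sketched as follows (a standard textbook argument, stated under suitable integrability assumptions): if \(\hat X_T\) denotes the optimal terminal wealth of an investor with utility function \(U\), then
\[
\frac{dQ}{dP}=\frac{U'(\hat X_T)}{\mathbb{E}\bigl[U'(\hat X_T)\bigr]}
\]
defines an equivalent martingale measure, since the first-order conditions of the utility maximization problem make the discounted price process a \(Q\)-martingale. The thesis adapts this idea to markets with proportional transaction costs, where the role of \(Q\) is played by a (strictly) consistent price process.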
To ensure the existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process. The second component is given by the bid-ask prices, depending on the investor's optimal action. Even more is true: close to this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximization argument in a multi-period model. In a backward induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found by maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which may be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the second kind has so far been investigated only in models with strict transaction costs in every state; in our model transaction costs need not be present in every state.
For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the case with only one risky asset, mainly for two reasons. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished, we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices.
To make the results applicable to the multi-period case we assume that the prices are given by compact and convex random sets. The multi-period case is then similar to the case with only one risky asset, but technically more demanding.
Image restoration and enhancement methods that respect important features such as edges play a fundamental role in digital image processing. In the last decades a large
variety of methods have been proposed. Nevertheless, the correct restoration and
preservation of, e.g., sharp corners, crossings or texture in images is still a challenge, in particular in the presence of severe distortions. Moreover, in the context of image denoising many methods are designed for the removal of additive Gaussian noise, and their adaptation to other types of noise occurring in practice usually requires additional effort.
The aim of this thesis is to contribute to these topics and to develop and analyze new
methods for restoring images corrupted by different types of noise:
First, we present variational models and diffusion methods which are particularly well
suited for the restoration of sharp corners and X junctions in images corrupted by
strong additive Gaussian noise. For their derivation we present and analyze different tensor-based methods for locally estimating orientations in images and show how to successfully incorporate the obtained information into the denoising process. The advantageous
properties of the obtained methods are shown theoretically as well as by
numerical experiments. Moreover, the potential of the proposed methods is demonstrated
for applications beyond image denoising.
Afterwards, we focus on variational methods for the restoration of images corrupted
by Poisson and multiplicative Gamma noise. Here, different methods from the literature
are compared and the surprising equivalence between a standard model for
the removal of Poisson noise and a recently introduced approach for multiplicative
Gamma noise is proven. Since this Poisson model has not been considered for multiplicative
Gamma noise before, we investigate its properties further for more general
regularizers including also nonlocal ones. Moreover, an efficient algorithm for solving
the involved minimization problems is proposed, which can also handle an additional
linear transformation of the data. The good performance of this algorithm is demonstrated
experimentally and different examples with images corrupted by Poisson and
multiplicative Gamma noise are presented.
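For orientation, the standard variational model for the removal of Poisson noise referred to above can be written, in a hedged generic form with total variation regularization, as
\[
\min_{u>0}\;\int_\Omega\bigl(u-f\log u\bigr)\,dx+\lambda\,\mathrm{TV}(u),
\]
where \(f\) is the noisy image and \(\lambda>0\) a regularization parameter; the data term is, up to constants, the negative log-likelihood of the Poisson model. The equivalence result of the thesis relates a model of this type to a recent approach for multiplicative Gamma noise; the precise regularizers and transformations considered there may differ from this sketch.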
In the final part of this thesis new nonlocal filters for images corrupted by multiplicative
noise are presented. These filters are deduced in a weighted maximum likelihood
estimation framework and for the definition of the involved weights a new similarity measure for the comparison of data corrupted by multiplicative noise is applied. The
advantageous properties of the new measure are demonstrated theoretically and by
numerical examples. In addition, denoising results for images corrupted by multiplicative Gamma and Rayleigh noise demonstrate the very good performance of the new filters.
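The following is a minimal sketch of a nonlocal weighted maximum likelihood filter for multiplicative Gamma noise, intended only as an illustration: the patch similarity used here is a simple log-ratio distance and is not necessarily the similarity measure proposed in the thesis, and all parameter values are arbitrary. Under Gamma noise, the weighted maximum likelihood estimate of the noise-free value reduces to a weighted arithmetic mean of the noisy pixels.

import numpy as np

def nonlocal_ml_gamma(f, patch=3, search=7, h=0.5):
    """Toy nonlocal filter for an image f (strictly positive intensities)
    corrupted by multiplicative Gamma noise. Weights come from a log-ratio
    patch distance (illustrative choice); the weighted maximum likelihood
    estimate under Gamma noise is a weighted arithmetic mean."""
    f = np.asarray(f, dtype=float)
    p, s = patch // 2, search // 2
    logf = np.log(np.pad(f, p, mode='reflect'))   # log-intensities, mirrored at the border
    out = np.zeros_like(f)
    rows, cols = f.shape
    for i in range(rows):
        for j in range(cols):
            ref = logf[i:i + patch, j:j + patch]   # patch centered at (i, j)
            wsum, vsum = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    cand = logf[ii:ii + patch, jj:jj + patch]
                    d2 = np.mean((ref - cand) ** 2)    # log-ratio patch distance
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    vsum += w * f[ii, jj]
            out[i, j] = vsum / wsum                     # weighted ML estimate
    return out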
This dissertation treats two current topics in financial mathematics that are independent of each other.
The first topic, "Flexible Algorithmen zur Bewertung komplexer Optionen mit mehreren Eigenschaften mittels der funktionalen Programmiersprache Haskell" (flexible algorithms for pricing complex options with multiple features using the functional programming language Haskell), is an interdisciplinary project that builds a scientific bridge between option pricing and functional programming.
In this project, a functional library for constructing options was designed, providing a set of basic constructors with which various options can be combined. Within this library, a general algorithm was developed by which the options built from these constructors can be priced.
The mathematical aspect of the project consists in the development of a new concept for pricing these options. The concept is based on the binomial model, which has become widely used in option pricing research in recent years. The core algorithm of the concept is a combination of several carefully chosen numerical methods on the binomial tree. This combination is not trivial; it is developed according to specific rules and is closely tied to the basic constructors.
An important characteristic of the project is the functional way of thinking: the algorithm can be formulated in a functional programming language, and in our project Haskell was used.
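As a hedged illustration of the binomial recursion underlying such a pricing algorithm (the standard Cox-Ross-Rubinstein step, not the specific combination of methods developed in the thesis): with up and down factors \(u,d\), risk-free rate \(r\) and step size \(\Delta t\), the option value at time step \(n\) and node \(i\) is obtained backwards from the payoff at maturity via
\[
V_{n,i}=e^{-r\Delta t}\bigl(p\,V_{n+1,i+1}+(1-p)\,V_{n+1,i}\bigr),\qquad p=\frac{e^{r\Delta t}-d}{u-d};
\]
for options with early exercise features the continuation value is compared with the immediate exercise value at each node. The contribution of the thesis lies in combining such numerical steps according to rules tied to the basic option constructors.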
The second topic, Monte Carlo simulation of the deltas and (cross-)gammas of Bermudan swaptions in the LIBOR market model, addresses a central problem of financial mathematics, namely the computation of the risk parameters of complex interest rate derivatives.
In this work, the numerical computation of the delta vector of a Bermudan swaption is studied in detail, and the new challenge of simulating the gamma matrix of a Bermudan swaption exactly is met successfully. Both risk parameters play a decisive role in trading strategies based on delta and gamma hedging. The underlying term structure model is the LIBOR market model, which has undergone a remarkable development in financial mathematics in recent years; Monte Carlo simulation is central to its implementation and application.
For the computation of the delta vector of a Bermudan swaption, three classical numerical methods and three methods developed by us are presented and compared; together they cover almost all existing types of Monte Carlo simulation for this task.
In addition, the thesis develops two new methods for computing the gamma matrix of a Bermudan swaption exactly, which is entirely new in the field of computational finance. One is a modified finite difference method; the other is a pure pathwise method, based on pathwise differentiation, which yields a robust and unbiased simulation procedure.
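For orientation, the generic pathwise estimator underlying such methods (a standard identity, stated here under suitable smoothness assumptions and not in the specific LIBOR market model notation of the thesis) is
\[
\frac{\partial}{\partial S_0}\,\mathbb{E}\!\left[e^{-rT}g(S_T)\right]
=\mathbb{E}\!\left[e^{-rT}\,g'(S_T)\,\frac{\partial S_T}{\partial S_0}\right],
\]
i.e., differentiating along each simulated path yields an unbiased estimator of the sensitivity. The rigorous extension of this idea to the delta vector and to second-order (gamma) sensitivities of Bermudan swaptions is the subject of the thesis.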
The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral ranges, this is impossible: depending on the spectroscopic technique, both the way the data is acquired and the characteristics to be detected differ. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, which include in particular the security sector and the pharmaceutical industry.
Since in many applications the volume of spectra to be organized is large, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the amount of user interaction required should be minimized on the one hand, while on the other hand the adaptation to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods are analyzed and supplemented where necessary, and improvements and new methods are proposed. This course of action includes preprocessing methods to make the data comparable, feature reduction that represents the chemical content in about 20 channels instead of the initial hundreds, and finally segmentation of the data by efficient hierarchical clustering schemes. Various application examples are shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
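A minimal sketch of such a processing chain, using generic off-the-shelf components rather than the specific preprocessing, feature reduction and clustering schemes developed in the thesis, might look as follows; the cube shape, the use of PCA and Ward linkage, and all parameter values are assumptions made only for illustration.

import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def segment_hyperspectral(cube, n_features=20, n_segments=5):
    """Cluster a hyperspectral cube of shape (rows, cols, channels):
    1) normalize each spectrum, 2) reduce the hundreds of channels to a
    small number of features, 3) segment by hierarchical clustering."""
    rows, cols, channels = cube.shape
    spectra = cube.reshape(-1, channels).astype(float)
    # simple preprocessing: remove per-spectrum offset and rescale
    spectra -= spectra.mean(axis=1, keepdims=True)
    spectra /= np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-12
    # feature reduction to roughly 20 channels
    features = PCA(n_components=n_features).fit_transform(spectra)
    # hierarchical clustering (Ward linkage) and label extraction
    Z = linkage(features, method='ward')
    labels = fcluster(Z, t=n_segments, criterion='maxclust')
    return labels.reshape(rows, cols)

For realistic image sizes the pairwise linkage step becomes expensive, which is one reason why efficient hierarchical clustering schemes are needed in practice.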
Paper production is a problem of significant importance for society and a challenging topic for scientific investigation. This study is concerned with the simulation of the pressing section of a paper machine. We aim at the development of an advanced mathematical model of the pressing section which is able to reproduce the fluid flow behavior within the paper-felt sandwich observed in laboratory experiments.
From the modeling point of view, the pressing of the paper-felt sandwich is a complex process, since one has to deal with two-phase flow in moving and deformable porous media. To account for the solid deformations, we use developments from the PhD thesis of S. Rief, where the elasticity model is stated and discussed in detail. The flow model, which accounts for the movement of water within the paper-felt sandwich, is described with the help of two flow regimes: single-phase water flow and two-phase air-water flow. The model for the saturated flow is given by Darcy's law and mass conservation. The second regime is described by Richards' approach together with dynamic capillary effects. The model for the dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray is adapted to the needs of the paper manufacturing process.
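In hedged, generic form (the precise formulation, coefficients and coupling used in the thesis may differ), the two regimes combine Darcy's law and mass conservation with a dynamic capillary pressure relation of Hassanizadeh-Gray type:
\[
\mathbf q=-\frac{k\,k_r(S)}{\mu}\,\nabla p,\qquad
\partial_t(\phi S)+\nabla\cdot\mathbf q=0,\qquad
p_n-p_w=p_c^{\mathrm{eq}}(S)-\tau\,\partial_t S,
\]
where \(S\) is the water saturation, \(\phi\) the porosity, \(k\) and \(k_r\) the absolute and relative permeability, \(\mu\) the viscosity, and \(\tau\ge 0\) the dynamic capillary coefficient; \(\tau=0\) recovers the static capillary pressure-saturation relation.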
We start the development of the flow model with the mathematical modeling of the one-dimensional case. The one-dimensional flow model is derived from a two-dimensional one by an averaging procedure in the vertical direction. The model is studied numerically and verified against measurements. Some theoretical investigations are performed to prove the convergence of the discrete solution to the continuous one. For completeness, the models with the static and the dynamic capillary pressure-saturation relation are both considered, and existence, compactness and convergence results are obtained for both.
Then, a two-dimensional model is developed which accounts for a multilayer computational domain and the formation of fully saturated zones. For the discretization we use a non-orthogonal grid resolving the layer interfaces and the multipoint flux approximation O-method. The numerical experiments are carried out for parameters which are typical for the production process, and the static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
The last part of the thesis investigates the validity range of Richards' assumption for the two-dimensional flow model with the static capillary pressure-saturation relation. Numerical experiments show that Richards' assumption is not the best choice for simulating processes in the pressing section.
Standard bases are one of the main tools in computational commutative algebra. In 1965, Buchberger presented a criterion for such bases and thus introduced a first approach for their computation. Since the basic version of this algorithm is rather inefficient, as it processes a lot of useless data during its execution, research on improvements of such algorithms is quite important.
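For the reader's orientation, Buchberger's criterion can be phrased via s-polynomials: for two polynomials \(f,g\) with leading monomials \(\operatorname{lm}(f),\operatorname{lm}(g)\) and leading terms \(\operatorname{lt}(f),\operatorname{lt}(g)\),
\[
\operatorname{spoly}(f,g)=\frac{\operatorname{lcm}\bigl(\operatorname{lm}(f),\operatorname{lm}(g)\bigr)}{\operatorname{lt}(f)}\,f-\frac{\operatorname{lcm}\bigl(\operatorname{lm}(f),\operatorname{lm}(g)\bigr)}{\operatorname{lt}(g)}\,g,
\]
and a finite generating set \(G\) is a standard basis if and only if every such s-polynomial of elements of \(G\) reduces to zero with respect to \(G\). The useless data mentioned above are precisely the many s-polynomials that reduce to zero without contributing new basis elements.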
In this thesis we introduce the reader to the area of computational commutative algebra with a focus on so-called signature-based standard basis algorithms. We not only present the basic version of Buchberger's algorithm, but also give an extensive discussion of different attempts at optimizing standard basis computations, from several sorting algorithms for internal data up to different reduction processes. Afterwards the reader receives a complete introduction to the origin of signature-based algorithms in general, explaining the underlying ideas in detail. Furthermore, we give an extensive discussion of correctness, termination, and efficiency, presenting various variants of signature-based standard basis algorithms.
Whereas Buchberger and others found criteria for discarding useless computations that are based entirely on the polynomial structure of the elements considered, Faugère presented the first signature-based algorithm in 2002, the F5 algorithm, which is known for generating much less computational overhead during its execution. Within this thesis we not only present Faugère's ideas, but also generalize them, ending up with several different, optimized variants of his criteria for detecting redundant data.
Since the thesis is not focused entirely on theory, we also present information about practical aspects, comparing the performance of various implementations of these algorithms in the computer algebra system Singular over a wide range of example sets.
In the end we give a rather extensive overview of recent research in this area of computational commutative algebra.
On Gyroscopic Stabilization
(2012)
This thesis deals with systems of the form
\(
M\ddot x+D\dot x+Kx=0\;, \; x \in \mathbb R^n\;,
\)
with a positive definite mass matrix \(M\), a symmetric damping matrix \(D\) and a positive definite stiffness
matrix \(K\).
If the equilibrium in the system is unstable, a small disturbance is enough to set the system in motion again. The motion of the system sustains itself, an effect which is called self-excitation or self-induced vibration. The reason behind this effect is the presence of negative damping, which results for example from dry friction.
Negative damping implies that the damping matrix \(D\) is indefinite or negative definite. Throughout our work, we assume \(D\) to be indefinite, and that the system possesses both stable and unstable modes and thus is unstable.
It is now the idea of gyroscopic stabilization to mix the modes of a system with indefinite damping such
that the system is stabilized without introducing further
dissipation. This is done by adding gyroscopic forces \(G\dot x\) with a suitable
skew-symmetric matrix \(G\) to the left-hand side. We call \(G=-G^T\in\mathbb R^{n\times n}\) a gyroscopic stabilizer for
the unstable system, if
\(
M\ddot x+(D+ G)\dot x+Kx=0
\)
is asymptotically stable. We show the existence of \(G\) in space dimensions three and four.
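A small numerical sketch of how the asymptotic stability of such a system can be checked is given below; the matrices are hypothetical example data, not taken from the thesis. The second-order system is rewritten in first-order form, and stability is read off from the eigenvalues of the system matrix.

import numpy as np

def is_asymptotically_stable(M, D, G, K):
    """Check asymptotic stability of M x'' + (D + G) x' + K x = 0 by
    rewriting it as z' = A z with z = (x, x'); the system is asymptotically
    stable iff all eigenvalues of A have negative real part."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ (D + G)]])
    return np.max(np.linalg.eigvals(A).real) < 0

# hypothetical 3x3 example with indefinite damping and a skew-symmetric G
M = np.eye(3)
K = np.diag([1.0, 2.0, 3.0])
D = np.diag([0.1, -0.05, 0.1])
G = np.array([[0.0,  2.0, 0.0],
              [-2.0, 0.0, 1.0],
              [0.0, -1.0, 0.0]])
print(is_asymptotically_stable(M, D, G, K))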
In this thesis we outline Kerner's 3-phase traffic flow theory, which states that vehicular traffic flow occurs in three phases, namely free flow, synchronized flow, and wide moving jams.
A macroscopic 3-phase traffic model of the Aw-Rascle type is derived from the microscopic Speed Adaptation 3-phase traffic model developed by Kerner and Klenov [J. Phys. A: Math. Gen., 39 (2006), pp. 1775-1809].
We derive the same macroscopic model from the kinetic traffic flow model of Klar and Wegener [SIAM J. Appl. Math., 60 (2000), pp. 1749-1766] as well as from that of Illner, Klar and Materne [Comm. Math. Sci., 1 (2003), pp. 1-12].
In the derivations stated above, the 3-phase traffic theory enters the macroscopic model through a relaxation term. This serves as an incentive to modify the relaxation term of the `switching curve' model of Greenberg, Klar and Rascle [SIAM J. Appl. Math., 63 (2003), pp. 818-833] to obtain another macroscopic 3-phase traffic model, which is still of the Aw-Rascle type.
By specifying the relaxation term differently we obtain three kinds of models, namely the macroscopic Speed Adaptation,
the Switching Curve and the modified Switching Curve models.
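In hedged, generic form, a macroscopic model of Aw-Rascle type with relaxation reads
\[
\partial_t\rho+\partial_x(\rho v)=0,\qquad
\partial_t\bigl(v+p(\rho)\bigr)+v\,\partial_x\bigl(v+p(\rho)\bigr)=\frac{V(\rho,v)-v}{\tau},
\]
with density \(\rho\), velocity \(v\), a pressure-like function \(p(\rho)\), relaxation time \(\tau\), and an equilibrium or adaptation speed \(V\). The three models above differ precisely in the choice of this relaxation term, which encodes the 3-phase structure.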
To demonstrate the capability of the derived macroscopic traffic models to reproduce the features of 3-phase traffic theory, we simulate a
multi-lane road that has a bottleneck. We consider a stationary and a moving bottleneck.
The results of the simulations for the three models are compared.
This thesis generalizes the Cohen-Lenstra heuristic for the class groups of real quadratic number fields to higher class groups. A "good part" of the second class group is defined; in general, this is a nonabelian proper factor group of the second class group. Properties of these groups are described, and a probability distribution on the set of these groups is introduced and proposed as a generalization of the Cohen-Lenstra heuristic for real quadratic number fields. The computation of number field tables containing information about higher class groups is explained, and the tables are compared with the heuristic; the agreement is close. A program which can create an internet database for number field tables is presented.
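For orientation, the original Cohen-Lenstra heuristic for real quadratic number fields assigns, roughly speaking, to a finite abelian group \(G\) of odd order a probability of occurring as the odd part of the class group proportional to
\[
\frac{1}{|G|\cdot|\mathrm{Aut}(G)|}.
\]
The generalization developed in the thesis replaces the class group by the "good part" of the second class group and adapts this weighting accordingly; the exact distribution used there is not reproduced here.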
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of "virtual material design". New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure.
The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches by allowing different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values; several approaches of this kind are developed and evaluated in this work. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability of belonging to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions. These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring that they belong to the same fiber.
In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps: firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers; secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators for all parameters needed to fit this model to a real microstructure.
Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer confirms the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis covers all steps needed to successfully perform virtual material design on various data sets and, with novel and efficient algorithms, contributes to the analysis and modeling of fiber-reinforced materials.
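A minimal sketch of the first modeling step, a random walk whose segment directions follow a von Mises-Fisher distribution in 3D, is given below. The closed-form 3D sampler is standard; the step length, the concentration parameter and all other values are hypothetical, and the subsequent force-biased packing step is not shown.

import numpy as np

def sample_vmf3(mu, kappa, rng):
    """Draw one direction from the 3D von Mises-Fisher distribution with
    mean direction mu (unit vector) and concentration kappa > 0."""
    xi = rng.random()
    # cosine of the angle to mu, sampled by inverse transform
    w = 1.0 + np.log(xi + (1.0 - xi) * np.exp(-2.0 * kappa)) / kappa
    # random unit vector orthogonal to mu
    v = rng.normal(size=3)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(max(1.0 - w * w, 0.0)) * v

def vmf_random_walk(n_steps=100, kappa=50.0, step=1.0, seed=0):
    """Polygonal fiber as a random walk whose segment directions follow a
    von Mises-Fisher distribution around the previous direction; larger
    kappa yields straighter (less bent) fibers."""
    rng = np.random.default_rng(seed)
    direction = np.array([0.0, 0.0, 1.0])
    points = [np.zeros(3)]
    for _ in range(n_steps):
        direction = sample_vmf3(direction, kappa, rng)
        points.append(points[-1] + step * direction)
    return np.array(points)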
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solution sets, called algebraic varieties, which have several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finitely many disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept using, in part, Gröbner bases, triangular sets, the local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Building on the modified algorithm mentioned above, we present in the third chapter an algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains the procedures of the algorithms in the last chapter to test inclusion and equality of two algebraic varieties, and to compute the degree of a pure i-dimensional component and the local dimension.
For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior.
We first give an overview of the results achieved so far on approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials, and their characterization in the interior and the exterior space of the Earth. We also present the \(L^2\)-decomposition for regions in \(\mathbb R^3\) in terms of distributions as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions.
After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth. Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation involving functions harmonic down to the Runge sphere.
The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of a spherical Earth this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression for the kernel; however, at this point it remains unknown to us. Therefore, in Chapter 7, we consider certain discretization methods for integrals over regions in \(\mathbb R^3\), in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for the approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.
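For orientation, the Newton (volume) potentials underlying this construction are of the generic form
\[
P(x)=\int_{\Sigma^{\mathrm{int}}}\frac{F(y)}{|x-y|}\,dy,\qquad x\in\overline{\Sigma^{\mathrm{ext}}},
\]
i.e., integrals of a density \(F\) over the interior \(\Sigma^{\mathrm{int}}\) of the regular surface, evaluated on and outside the surface. Such potentials are harmonic in the exterior, and the reproducing kernel of the solution space is built from integrals of this type; the notation here is generic and may differ from that of the thesis.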