## Fachbereich Mathematik

- Maximizing the Asymptotic Growth Rate under Fixed and Proportional Transaction Costs in a Financial Market with Jumps (2012)
- In this thesis we consider the problem of maximizing the growth rate under proportional and fixed transaction costs in a framework with one bond and one stock, which is modeled as a jump diffusion with compound Poisson jumps. Following the approach of [1], we prove that in this framework it is optimal for an investor to follow a CB-strategy. The boundaries depend only on the parameters of the underlying stock and bond. It is then natural to ask how often an investor who follows a CB-strategy, given by the stopping times \((\tau_i)_{i\in\mathbb N}\) and impulses \((\eta_i)_{i\in\mathbb N}\), has to rebalance. In other words, we want to obtain the limit of the inter-trading times \[ \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n(\tau_{i+1}-\tau_{i}). \] Using the ergodic theorems of von Neumann and Birkhoff, we obtain this limit as the expected first exit time of the risky fraction process from some interval under the invariant measure of the Markov chain \((\eta_i)_{i\in\mathbb N}\). In general, it is difficult to obtain the expectation of the first exit time for a process with jumps: because of the jump part, an overshoot may occur when the process crosses the boundaries of the interval, which makes it difficult to obtain the distribution. Nevertheless, if the process has only negative jumps, we can obtain the first exit time using scale functions. The main difficulty of this approach is that the scale functions are known only up to their Laplace transforms. In [2] and [3] a closed-form expression for the scale function of a Lévy process with phase-type distributed jumps is obtained. Phase-type distributions form a rich class of positive-valued distributions, including the exponential, hyperexponential, Erlang, hyper-Erlang and Coxian distributions. Since the scale function is given in closed form, we can differentiate it to obtain the expected first exit time explicitly via the fluctuation identities. [1] Irle, A. and Sass, J.: Optimal portfolio policies under fixed and proportional transaction costs, Advances in Applied Probability 38, 916-942. [2] Egami, M., Yamazaki, K.: On scale functions of spectrally negative Lévy processes with phase-type jumps, working paper, July 3. [3] Egami, M., Yamazaki, K.: Precautionary measures for credit risk management in jump models, working paper, June 17.
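The ergodic limit of the inter-trading times can be illustrated numerically. The following sketch is only a crude stand-in for the thesis' exact analysis via scale functions: all parameters, the interval and the restart point are hypothetical, and a simple Euler scheme with spectrally negative compound Poisson jumps replaces the true risky fraction dynamics.

```python
import math
import random

random.seed(0)

def first_exit_time(eta, a, b, mu=0.05, sigma=0.2, lam=1.0, jump_mean=0.05,
                    dt=1e-3, t_max=50.0):
    """Euler simulation (hypothetical parameters) of a jump diffusion with
    spectrally negative compound Poisson jumps, started at eta; returns the
    first time the path leaves the interval (a, b), capped at t_max."""
    x, t = eta, 0.0
    while a < x < b and t < t_max:
        x += mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        if random.random() < lam * dt:              # a jump arrives
            x -= random.expovariate(1 / jump_mean)  # negative jumps only
        t += dt
    return t

# A CB-type strategy restarts the process at eta = 0.5 whenever it leaves
# (0.3, 0.7); by the ergodic theorem the sample mean of the inter-trading
# times approximates the expected first exit time.
times = [first_exit_time(0.5, 0.3, 0.7) for _ in range(200)]
avg = sum(times) / len(times)
print(f"estimated mean inter-trading time: {avg:.3f}")
```

The closed-form scale-function approach of [2] and [3] replaces this Monte Carlo average by an explicit expression.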

- Utility-based proof for the existence of strictly consistent price processes under proportional transaction costs (2012)
- This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive utility-based approach. It relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff. However, in the presence of proportional transaction costs such a utility-based approach for the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to show the FTAP under proportional transaction costs. For the sake of existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process. The second component is given by the bid-ask prices, depending on the investor's optimal action. Even more is true: close to this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backwards induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found from maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which may be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the second kind has so far been investigated only in models with strictly positive transaction costs in every state; in our model transaction costs need not be present in every state. For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the case with only one risky asset. There are mainly two reasons for this. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished, we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices. To make the results applicable to the multi-period case we assume that the prices are given by compact and convex random sets. The multi-period case is then similar to the case with only one risky asset, but more demanding with regard to technical questions.
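The single-period utility-based argument can be made concrete in a toy two-state market. This is only a sketch under strong assumptions: all prices and probabilities are hypothetical, log utility is chosen for convenience, and a grid search replaces the general existence argument. The marginal utility at the optimal payoff induces a pricing density whose implied price lies in the bid-ask interval.

```python
import math

# Hypothetical two-state single-period market with a bid-ask spread.
w0 = 1.0                   # initial wealth
ask0, bid0 = 1.02, 0.98    # time-0 ask and bid price of the risky asset
payoff_bid = [1.3, 0.8]    # time-1 liquidation (bid) prices in the two states
prob = [0.5, 0.5]

def expected_log_utility(theta):
    """Buy theta >= 0 shares at the ask, liquidate at the time-1 bid."""
    return sum(p * math.log(w0 - theta * ask0 + theta * pb)
               for p, pb in zip(prob, payoff_bid))

# Grid search for the utility-maximising position.
theta_star = max((i / 1000 for i in range(900)), key=expected_log_utility)

# Marginal utility U'(w) = 1/w at the optimal terminal wealth gives a
# pricing density; the implied time-0 price lies (up to grid error) in the
# bid-ask interval -- here at the ask, since the optimal action is to buy.
wealth = [w0 - theta_star * ask0 + theta_star * pb for pb in payoff_bid]
density = [p / w for p, w in zip(prob, wealth)]
price0 = sum(d * pb for d, pb in zip(density, payoff_bid)) / sum(density)
print(f"theta* = {theta_star:.3f}, implied price = {price0:.4f}")
```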

- Effective mechanical properties of technical textile materials via asymptotic homogenization (2012)
- The goal of this work is to develop a simulation-based algorithm that allows the prediction of the effective mechanical properties of textiles on the basis of their microstructure and the corresponding properties of the fibers. This method can later be used to optimize the microstructure in order to obtain a better stiffness or strength of the corresponding fiber material. A further aspect of the thesis is that we take into account the microcontacts between the fibers of the textile, as well as the thickness of the thin fibers. Introducing an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to one-dimensional problems, which reduces the numerical computations significantly. A fiber composite material with periodic microstructure and multiple frictional microcontacts between fibers is studied. The textile is modeled by introducing small geometrical parameters: the periodicity of the microstructure and the characteristic diameter of the fibers. The contact problem of linear elasticity is considered, and a two-scale approach is used for obtaining the effective mechanical properties. An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotics with respect to the in-plane period and the characteristic diameter of the fibers. This allows us to arrive at the equivalent homogenized problem and to reduce the dimension of the auxiliary problems. Subsequent numerical simulations of the cell problems give the effective material properties of the textile. The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has been studied. By introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced; the Neumann data are incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli. FiberFEM, a C++ finite element code for solving contact elasticity problems, is developed. The code is based on the implementation of the algorithm for the contact between fibers proposed in the thesis. Numerical examples of the homogenization of geotextiles and wovens are obtained by applying the developed algorithm: the effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained with FiberFEM.
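The basic homogenization idea can be illustrated in the simplest possible setting, far from the fiber-contact problems of the thesis: for a one-dimensional periodic two-phase laminate (hypothetical moduli and volume fractions), asymptotic homogenization of \(-(E(x/\varepsilon)u')' = g\) yields the harmonic (Reuss) mean of the phase moduli as the effective modulus, while loading along the layers gives the arithmetic (Voigt) mean.

```python
# Effective Young's modulus of a periodic 1D two-phase laminate
# (volume fractions f1, f2 and phase moduli E1, E2 are hypothetical).
f1, f2 = 0.4, 0.6
E1, E2 = 200.0, 5.0   # e.g. stiff fiber vs. soft matrix, in GPa

# Loading along the layers: Voigt (arithmetic) mean.
E_parallel = f1 * E1 + f2 * E2

# Loading across the layers: Reuss (harmonic) mean -- this is what 1D
# asymptotic homogenization of the periodic elliptic problem yields.
E_series = 1.0 / (f1 / E1 + f2 / E2)

print(E_parallel, E_series)
```

The soft phase dominates the series (through-thickness) response, which is why the harmonic mean is far below the arithmetic one.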

- A limitation of the estimation of intrinsic volumes via pixel configuration counts (2012)
- It is often helpful to compute the intrinsic volumes of a set of which only a pixel image is observed. A computationally efficient approach, which has been suggested by several authors and is used in practice, is to approximate the intrinsic volumes by a linear functional of the pixel configuration histogram. Here we examine whether there is an optimal way of choosing this linear functional, using a quite natural optimality criterion that has already been applied successfully to the estimation of the surface area. We will see that for intrinsic volumes other than the volume or the surface area this optimality criterion cannot be used, since estimators which ignore the data and return constant values are optimal w.r.t. this criterion. This shows that one has to be very careful when intrinsic volumes are approximated by a linear functional of the pixel configuration histogram.
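A minimal sketch of the estimation scheme on a toy binary image: count the sixteen possible 2x2 pixel configurations and apply a linear functional to the histogram. The illustrative area weights used here (number of foreground corners divided by four) recover the pixel count exactly on a background-padded image; the paper's point is that no analogous optimal weights exist for the other intrinsic volumes.

```python
# Count 2x2 pixel configurations of a binary image and estimate the
# area (2D volume) by a linear functional of the configuration histogram.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]   # toy binary image, padded with background

H, W = len(img), len(img[0])
hist = [0] * 16   # one bin per 2x2 configuration, encoded as 4 bits
for i in range(H - 1):
    for j in range(W - 1):
        code = (img[i][j] | img[i][j + 1] << 1 |
                img[i + 1][j] << 2 | img[i + 1][j + 1] << 3)
        hist[code] += 1

# Area weights: each configuration contributes (#foreground corners)/4;
# on a background-padded image every pixel lies in exactly 4 windows,
# so this linear functional returns the pixel count exactly.
weights = [bin(c).count("1") / 4 for c in range(16)]
area = sum(w * h for w, h in zip(weights, hist))
print(area)   # 6.0 foreground pixels
```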

- Intersection Theory of the Tropical Moduli Spaces of Curves (2007)
- Tropical geometry is a rather new mathematical domain. Its appearance was motivated by its deep relations to other mathematical branches, including algebraic geometry, symplectic geometry, complex analysis, combinatorics and mathematical biology. In this work we establish some further relations between algebraic and tropical geometry. Our aim is to prove a one-to-one correspondence between the divisor classes on the moduli space of n-pointed rational stable curves and the divisors of the moduli space of n-pointed abstract tropical curves. Thus we state some results of the algebraic case first. In algebraic geometry these moduli spaces are well understood; in particular, the group of divisor classes was calculated by S. Keel. We recall the needed results in chapter one. For the proof of the correspondence we use some results of toric geometry. Further, we want to show an equality of the Chow groups of a special toric variety and the algebraic moduli space, so we state some results of toric geometry as well. Since this thesis tries to discover connections between algebraic and tropical geometry, we also need the tropical objects corresponding to the algebraic ones. Therefore we give the necessary definitions, such as fans, tropical fans, morphisms between tropical fans, divisors, and the tropical moduli space of all n-marked tropical curves. Since we need it, we show that the tropical moduli space can be embedded as a tropical fan. After this preparatory work we prove that the group of divisor classes in classical algebraic geometry has its equivalent in tropical geometry. For this it is useful to give a map from the group of divisor classes of the algebraic moduli space to the group of divisors of the tropical moduli space. Our aim is to prove the bijectivity of this map in chapter three. On the way we discover a deep connection between the algebraic moduli space and the toric variety given by the tropical fan of the tropical moduli space.

- The Generalized Assignment Problem with Minimum Quantities (2012)
- We consider a variant of the generalized assignment problem (GAP) where the amount of space used in each bin is restricted to be either zero (if the bin is not opened) or above a given lower bound (a minimum quantity). We provide several complexity results for different versions of the problem and give polynomial time exact algorithms and approximation algorithms for restricted cases. For the most general version of the problem, we show that it does not admit a polynomial time approximation algorithm (unless P=NP), even for the case of a single bin. This motivates the study of dual approximation algorithms that compute solutions violating the bin capacities and minimum quantities by a constant factor. When the number of bins is fixed and the minimum quantity of each bin is at least a factor \(\delta>1\) larger than the largest size of an item in the bin, we show how to obtain a polynomial time dual approximation algorithm that computes a solution violating the minimum quantities and bin capacities by at most a factor \(1-\frac{1}{\delta}\) and \(1+\frac{1}{\delta}\), respectively, and whose profit is at least as large as the profit of the best solution that satisfies the minimum quantities and bin capacities strictly. In particular, for \(\delta=2\), we obtain a polynomial time (1,2)-approximation algorithm.
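A brute-force check on a toy instance (all data hypothetical) makes the minimum-quantity constraint concrete: an item may go to a bin or stay unassigned, and every opened bin must carry at least its minimum quantity while respecting its capacity.

```python
from itertools import product

# Toy GAP-MQ instance (hypothetical data).
sizes   = [3, 4, 5, 6]
profits = [3, 4, 5, 6]   # profit of assigning item i (bin-independent here)
cap     = [8, 10]        # bin capacities
minq    = [6, 7]         # minimum quantities of the bins

best = 0
# Enumerate all assignments: -1 means the item stays unassigned.
for assign in product([-1, 0, 1], repeat=len(sizes)):
    load = [0, 0]
    profit = 0
    for i, b in enumerate(assign):
        if b >= 0:
            load[b] += sizes[i]
            profit += profits[i]
    # An opened bin (load > 0) must satisfy minq <= load <= cap.
    feasible = all(l == 0 or minq[b] <= l <= cap[b]
                   for b, l in enumerate(load))
    if feasible:
        best = max(best, profit)

print(best)   # 18: pack {3, 5} into bin 0 and {4, 6} into bin 1
```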

- Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model (2012)
- We consider a continuous time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor only observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. For example, if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. In other cases, too, such as for an underlying Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters no longer exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed. For example, we use statistical linearization for nonlinear drift processes, finite-state Markov chain approximations for the drift process, and/or diffusion approximations for small jumps in the noise term. In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular in view of the performance of the corresponding utility maximizing strategies.
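As a sketch of the filtering step in the jump-free Ornstein-Uhlenbeck case, the following toy example (hypothetical parameters, fully discretized dynamics) runs a scalar Kalman filter for an unobserved mean-reverting drift based on discrete return observations:

```python
import math
import random

random.seed(1)

# Discretized toy model (hypothetical parameters): unobserved OU drift mu,
# observed returns r_k = mu_k * dt + sigma * sqrt(dt) * noise (no jumps).
dt, n = 1 / 250, 2000
kappa, theta, sig_mu, sigma = 2.0, 0.05, 0.3, 0.2

mu, mus, rets = 0.05, [], []
for _ in range(n):
    mus.append(mu)
    rets.append(mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    mu += kappa * (theta - mu) * dt + sig_mu * math.sqrt(dt) * random.gauss(0, 1)

# Scalar Kalman filter for the drift from the discrete return observations.
m, P = 0.0, 1.0   # prior mean and variance
est = []
for r in rets:
    # Update with observation r = m*dt + noise, noise variance sigma^2 * dt.
    K = P * dt / (P * dt**2 + sigma**2 * dt)
    m = m + K * (r - m * dt)
    P = (1 - K * dt) * P
    est.append(m)
    # Predict one step of the OU dynamics.
    m = m + kappa * (theta - m) * dt
    P = (1 - kappa * dt)**2 * P + sig_mu**2 * dt

err = sum((a - b)**2 for a, b in zip(est, mus)) / n
print(f"mean squared filter error: {err:.4f}")
```

With shot noise in the returns, no such finite-dimensional recursion is available, which is what motivates the approximations studied in the thesis.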

- Homogeneous Penalizers and Constraints in Convex Image Restoration (2012)
- Recently, convex optimization models have been successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form \({\rm argmin} \{ \Phi(x)\) subject to \(\Psi(x) \le \tau \}\) and their penalized counterparts \({\rm argmin} \{\Phi(x) + \lambda \Psi(x)\}\). We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting \(\Psi := \| L \cdot\|\) with \(L \in \mathbb{R}^{m,n}\) and \(\Phi := \varphi(H \cdot)\), where \(H \in \mathbb{R}^{n,n}\) and \(\varphi: \mathbb R^n \rightarrow \mathbb{R} \cup \{+\infty\} \) meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if \(\tau\) and \(\lambda\) are in the graph of this function. We illustrate the relation between \(\tau\) and \(\lambda\) for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks for images corrupted by Poisson noise, with the \(I\)-divergence as data fitting term \(\varphi\), and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge of the image rather than of the noise level.
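The correspondence between constrained and penalized problems can already be seen in one dimension, in a far simpler setting than the paper's: with \(\Phi(x) = (x-a)^2/2\) and \(\Psi(x) = |x|\) (the data point \(a = 3\) is hypothetical), the constrained solution is a projection, the penalized solution is a soft thresholding, and the map between \(\tau\) and \(\lambda\) is explicit.

```python
import math

# 1D illustration: Phi(x) = (x - a)^2 / 2 and Psi(x) = |x|, with
# hypothetical data a = 3.
a = 3.0

def constrained(tau):
    """argmin (x-a)^2/2 subject to |x| <= tau: project a onto [-tau, tau]."""
    return max(-tau, min(tau, a))

def penalized(lam):
    """argmin (x-a)^2/2 + lam*|x|: soft thresholding of a."""
    return math.copysign(max(abs(a) - lam, 0.0), a)

# While the constraint is active (0 < tau < |a|) the two problems have the
# same solution exactly when lambda = |a| - tau: a bijective tau <-> lambda map.
for tau in [0.5, 1.0, 2.5]:
    lam = abs(a) - tau
    assert abs(constrained(tau) - penalized(lam)) < 1e-12
print("solutions coincide along the map tau -> |a| - tau")
```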

- Tropical Intersection Products and Families of Tropical Curves (2012)
- This thesis is devoted to furthering tropical intersection theory as well as to applying the developed theory to gain new insights into tropical moduli spaces. We use piecewise polynomials to define tropical cocycles that generalise the notion of tropical Cartier divisors to higher codimensions, introduce an intersection product of cocycles with tropical cycles and use the connection to toric geometry to prove a Poincaré duality for certain cases. Our main application of this Poincaré duality is the construction of intersection-theoretic fibres under a large class of tropical morphisms. We construct an intersection product of cycles on matroid varieties which are a natural generalisation of tropicalisations of classical linear spaces and the local blocks of smooth tropical varieties. The key ingredient is the ability to express a matroid variety contained in another matroid variety by a piecewise polynomial that is given in terms of the rank functions of the corresponding matroids. In particular, this enables us to intersect cycles on the moduli spaces of n-marked abstract rational curves. We also construct a pull-back of cycles along morphisms of smooth varieties, relate pull-backs to tropical modifications and show that every cycle on a matroid variety is rationally equivalent to its recession cycle and can be cut out by a cocycle. Finally, we define families of smooth rational tropical curves over smooth varieties and construct a tropical fibre product in order to show that every morphism of a smooth variety to the moduli space of abstract rational tropical curves induces a family of curves over the domain of the morphism. This leads to an alternative, inductive way of constructing moduli spaces of rational curves.

- Quadrature for Path-dependent Functionals of Lévy-driven Stochastic Differential Equations (2012)
- The main topic of this thesis is to define and analyze a multilevel Monte Carlo algorithm for path-dependent functionals of the solution of a stochastic differential equation (SDE) which is driven by a square integrable, \(d_X\)-dimensional Lévy process \(X\). We work with standard Lipschitz assumptions and denote by \(Y=(Y_t)_{t\in[0,1]}\) the \(d_Y\)-dimensional strong solution of the SDE. We investigate the computation of expectations \(S(f) = \mathrm{E}[f(Y)]\) using randomized algorithms \(\widehat S\). Thereby, we are interested in the relation of the error and the computational cost of \(\widehat S\), where \(f:D[0,1] \to \mathbb{R}\) ranges in the class \(F\) of measurable functionals on the space of càdlàg functions on \([0,1]\) that are Lipschitz continuous with respect to the supremum norm. We consider as error \(e(\widehat S)\) the worst case of the root mean square error over the class of functionals \(F\). The computational cost of an algorithm \(\widehat S\), denoted \(\mathrm{cost}(\widehat S)\), should represent the runtime of the algorithm on a computer. We work in the real number model of computation and further suppose that evaluations of \(f\) are possible for piecewise constant functions in time units according to their number of breakpoints. We state strong error estimates for an approximate Euler scheme on a random time discretization. With these strong error estimates, the multilevel algorithm leads to upper bounds for the convergence order of the error with respect to the computational cost. The main results can be summarized in terms of the Blumenthal-Getoor index of the driving Lévy process, denoted by \(\beta\in[0,2]\). For \(\beta <1\) and no Brownian component present, we almost reach convergence order \(1/2\), which means that there exists a sequence of multilevel algorithms \((\widehat S_n)_{n\in \mathbb{N}}\) with \(\mathrm{cost}(\widehat S_n) \leq n\) such that \( e(\widehat S_n) \precsim n^{-1/2}\). Here, by \( \precsim\), we denote a weak asymptotic upper bound, i.e. the inequality holds up to an unspecified positive constant. If \(X\) has a Brownian component, the order has an additional logarithmic term, in which case we reach \( e(\widehat S_n) \precsim n^{-1/2} \, (\log(n))^{3/2}\). For the special subclass of \(Y\) being the Lévy process itself, we also provide a lower bound which, up to a logarithmic term, recovers the order \(1/2\); i.e., neglecting logarithmic terms, the multilevel algorithm is order optimal for \( \beta <1\). An empirical error analysis via numerical experiments matches the theoretical results and completes the analysis.
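The multilevel construction can be sketched in a toy case: here \(Y\) is just a Brownian motion with drift (a stand-in for the Lévy-driven SDE, with hypothetical parameters) and \(f(Y) = \max_t Y_t\) is a path-dependent, Lipschitz functional; fine and coarse paths on each level are coupled through the same Brownian increments, and the estimator telescopes over the levels.

```python
import math
import random

random.seed(2)

mu, sigma, T = 0.1, 1.0, 1.0   # hypothetical drift, volatility, horizon

def running_max(increments):
    """f(Y) = max_t Y_t evaluated on a piecewise constant Euler path."""
    x, m = 0.0, 0.0
    for d in increments:
        x += d
        m = max(m, x)
    return m

def coupled_pair(level):
    """f on a fine path with 2^level steps and on the coarse path built
    from the same Brownian increments (2^(level-1) steps)."""
    n = 2 ** level
    dt = T / n
    incs = [mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
            for _ in range(n)]
    fine = running_max(incs)
    if level == 0:
        return fine, 0.0
    coarse = running_max([incs[2 * i] + incs[2 * i + 1] for i in range(n // 2)])
    return fine, coarse

# Telescoping sum E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], with fewer
# samples on the expensive fine levels.
estimate = 0.0
for level in range(6):
    n_samples = max(4000 // 4 ** level, 50)
    diffs = [f - c for f, c in (coupled_pair(level) for _ in range(n_samples))]
    estimate += sum(diffs) / n_samples
print(f"MLMC estimate of E[max_t Y_t]: {estimate:.3f}")
```

Because the coupled differences shrink with the level, most samples can be spent on the cheap coarse levels, which is the source of the cost bounds stated above.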