Doctoral theses, Fachbereich Mathematik, in English, published 2010 (8 results).

Tropical intersection theory (2010)

This thesis consists of five chapters. Chapter 1 contains the basics of the theory and is essential for the rest of the thesis; Chapters 2-5 are to a large extent independent of each other and can be read separately.

- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry, such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that, in the special case that the ambient cycle is R^n, every cycle is rationally equivalent to some affine cycle. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definition of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if the ambient cycle is a fan, then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. We then use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that, under some further assumptions, a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular, we prove that on any smooth tropical variety there is a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphism.
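The tropical Bézout theorem invoked in Chapter 1 admits a compact statement; the following is the standard planar formulation (the notation here is generic, not necessarily the thesis' own):

```latex
% Tropical Bézout in the plane: if C and D are tropical curves in
% \mathbb{R}^2 of degrees d and e, their stable intersection, counted
% with multiplicities, has total degree
\deg(C \cdot D) = d \cdot e .
```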

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimation of topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data, and statistics of Betti numbers can provide further connectivity information. In this work we present a method for topological feature analysis of exploration data based on so-called persistent homology. Loosely speaking, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology allow us to transform the data, to distill it into its persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information that can point to areas of interest. As part of this work, the algorithm for computing persistent Betti numbers via barcodes is implemented in the computer algebra system "Singular".
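To illustrate the barcode idea in its simplest case, the following is a minimal sketch (not the Singular implementation described above) of the 0-dimensional barcode of a point cloud: connected components are born at filtration value 0 and die when edges of the growing Vietoris-Rips complex merge them, which a union-find structure tracks.

```python
# Minimal sketch of 0-dimensional persistent homology (H0 barcode).
# Components die when the shortest edge joining them enters the complex.
from itertools import combinations
from math import dist, inf

def barcode_h0(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    # Edges sorted by length give the order of merge events.
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    bars = []
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                # two components merge: one bar dies at r
            parent[ri] = rj
            bars.append((0.0, r))
    bars.append((0.0, inf))         # the final component never dies
    return sorted(bars, key=lambda b: b[1])
```

For two well-separated pairs of points, the barcode shows two short bars (noise within each cluster), one long finite bar (the merge of the two clusters) and one infinite bar, reflecting the two-cluster structure.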

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor's portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is 'large' in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor's strategy, which could ultimately influence the asset prices. Since the investor's strategy only impacts the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model.
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions could change. The fact that the investor's influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities: constant, step and affine intensity functions. In each case we derive the large investor's optimal strategy in explicit form, depending only on the solution of a system of coupled ODEs, which we show admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Further, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For three kinds of intensity functions (constant, step and affine) we give the optimal solution and verify that the corresponding ODE system admits a unique global solution. In the case of strategy-dependent intensity functions we distinguish three particular kinds of this dependency: portfolio dependency, consumption dependency, and combined portfolio and consumption dependency. The corresponding results for an investor having logarithmic utility are shown in Section 5.
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor's optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also dwell on the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.
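Schematically, the distinctive feature described above, strategy-dependent regime shift intensities, enters the HJB system through a coupling term between the two regime value functions; the following generic form is illustrative notation, not the thesis' own:

```latex
% Two coupled HJB equations for the regime value functions V_1, V_2:
% the switching intensity \lambda_{ij}(\pi, c) depends on the portfolio
% proportions \pi and the consumption rate c, coupling the regimes;
% \mathcal{L}^{\pi,c}_i denotes the generator of the wealth process in
% regime i, and U_1 the utility of consumption.
0 = \sup_{(\pi, c)} \Big\{ U_1(c x) + \mathcal{L}^{\pi, c}_i V_i(t, x)
    + \lambda_{ij}(\pi, c)\,\bigl( V_j(t, x) - V_i(t, x) \bigr) \Big\},
\qquad (i, j) \in \{(1, 2), (2, 1)\} .
```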

In this work, we develop a framework for analyzing an executive's own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company, whose value he can directly influence with work effort. The executive's utility-maximizing personal investment and work effort strategy is derived in closed form for logarithmic and power utility, and for exponential utility in the case of zero interest rates. Additionally, a utility indifference rationale is applied to determine his fair compensation. Being unconstrained by performance contracting, the executive's work effort strategy establishes a base case for theoretical or empirical assessment of the benefits or otherwise of constraining executives with performance contracting. Further, we consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company's share price. She invests in the financial market, including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility and power utility. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary; the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries, as well as the IT sector.
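The utility indifference rationale for the fair compensation can be sketched as follows (the notation is illustrative, not the thesis' own): the fair compensation w* makes the executive indifferent between taking the position and staying outside,

```latex
% V^{\mathrm{exec}}: maximal expected utility with own-company
% investment and costly work effort; V^{\mathrm{out}}: maximal expected
% utility from pure financial investment; x_0: initial wealth.
V^{\mathrm{exec}}(x_0 + w^*) = V^{\mathrm{out}}(x_0) .
```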

This thesis deals with the solution of special problems arising in financial engineering or financial mathematics. The main focus lies on commodity indices. Chapter 1 addresses an issue important for financial engineering practice: developing well-suited models for certain assets (here: commodity indices). A descriptive analysis of the Dow Jones-UBS commodity index compared to the Standard & Poor's 500 stock index provides first insights into some features of the corresponding distributions. Statistical tests of normality and mean reversion then help us in setting up a model for commodity indices. Additionally, chapter 1 encompasses a thorough introduction to commodity investment, the history of commodities trading and the most important derivatives, namely futures and European options on futures. Chapter 2 proposes a model for commodity indices and derives fair prices for the most important derivatives in the commodity markets. It is a Heston model supplemented with a stochastic convenience yield. The Heston model belongs to the class of stochastic volatility models and is currently widely used in stock markets. For the application to commodity markets, the stochastic convenience yield is included in the drift of the instantaneous spot return process. Motivated by the results of chapter 1, it seems reasonable to model the convenience yield by a mean-reverting Ornstein-Uhlenbeck process. Since trading desks only apply and consider models with closed-form solutions for options, I derive such formulas for commodity futures by solving the corresponding partial differential equation. Additionally, semi-closed-form formulas for European options on futures are determined. The Cauchy problem with respect to these options is more challenging than the first one, but a solution can be provided.
Unlike equities, which typically entitle the holder to a continuing stake in a corporation, commodity futures contracts normally specify a certain date for the delivery of the underlying physical commodity. In order to avoid the delivery process and maintain a futures position, nearby contracts must be sold and contracts that have not yet reached the delivery period must be purchased (so-called rolling). Optimal trading days for selling and buying futures are determined by applying statistical tests for stochastic dominance. Besides the optimization of the rolling procedure for commodity futures, chapter 3 is also devoted to the optimization of the weightings of the commodity futures that make up the index. To this end, I apply the Markowitz approach, or mean-variance optimization. The mean-variance optimization penalizes upside and downside risk equally, whereas most investors do not mind upside risk. To overcome this, I consider in the next step other risk measures, namely Value-at-Risk and Conditional Value-at-Risk. The Conditional Value-at-Risk is generalized to discontinuous cumulative distribution functions of the loss. For continuous loss distributions, the Conditional Value-at-Risk at a given confidence level is defined as the expected loss exceeding the Value-at-Risk. Loss distributions associated with finite sampling or scenario modeling are, however, discontinuous. Various risk measures involving discontinuous loss distributions are introduced and compared. I then apply the theoretical results to the field of portfolio optimization with commodity indices. Furthermore, I illustrate graphically the behavior of these risk measures by considering them as functions of the confidence level. Based on a special discrete loss distribution, the graphs demonstrate the different properties of these risk measures.
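For a finite, equally weighted loss sample, the discontinuous-distribution case discussed above can be illustrated with a generic Rockafellar-Uryasev-style estimator (a sketch for illustration, not the thesis' own code): the Conditional Value-at-Risk averages the probability atom sitting at the Value-at-Risk together with the losses strictly beyond it.

```python
# Empirical VaR and CVaR for an equally weighted discrete loss sample.
import math

def var_cvar(losses, alpha):
    """VaR_alpha and CVaR_alpha of an empirical loss distribution."""
    xs = sorted(losses)
    n = len(xs)
    k = math.ceil(alpha * n)        # scenarios at or below the VaR level
    var = xs[k - 1]                 # VaR: the alpha-quantile of the loss
    # CVaR for a discrete distribution: weight (k/n - alpha) on the VaR
    # atom plus the mean contribution of the losses beyond it.
    tail = sum(xs[k:])
    cvar = ((k / n - alpha) * var + tail / n) / (1.0 - alpha)
    return var, cvar
```

For ten equally likely losses 0, 1, ..., 9 and alpha = 0.8, the VaR is 7 and the CVaR is the mean of the worst 20% of scenarios, 8.5; for alpha = 0.75 the atom at 7 contributes fractionally and the CVaR is 8.2.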
The goal of the first section of chapter 4 is to apply the mathematical concept of excursions to the creation of optimal highly automated or algorithmic trading strategies. The idea is to consider the gain of the strategy and the excursion time it takes to realize the gain. In this section I calculate the relevant formulas for the Ornstein-Uhlenbeck process. I show that these formulas can be evaluated quite fast, since the only special function appearing in them is the so-called imaginary error function, which is already implemented in many programs, such as Maple. My main contribution to this topic is the optimization of the trading strategy for Ornstein-Uhlenbeck processes via the Banach fixed-point theorem. The second section of chapter 4 deals with statistical arbitrage strategies: long-horizon trading opportunities that generate a riskless profit. The results of this section provide an investor with a tool to investigate empirically whether some strategies (for example momentum strategies) constitute statistical arbitrage opportunities or not.
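The mean-reverting process underlying these excursion-based strategies can be simulated exactly; the following minimal sketch (illustrative only, not the thesis' strategy optimizer) uses the exact one-step transition of the Ornstein-Uhlenbeck process dX_t = theta (mu - X_t) dt + sigma dW_t, so no discretization bias is introduced.

```python
# Exact-discretization simulation of an Ornstein-Uhlenbeck process.
import math
import random

def simulate_ou(x0, theta, mu, sigma, dt, n_steps, seed=0):
    rng = random.Random(seed)
    a = math.exp(-theta * dt)                          # one-step decay factor
    b = sigma * math.sqrt((1 - a * a) / (2 * theta))   # exact step std. dev.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] * a + mu * (1 - a) + b * rng.gauss(0.0, 1.0))
    return xs
```

With sigma = 0 the path decays deterministically toward the mean level mu at rate theta, which gives a quick sanity check of the transition coefficients.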

Mrázek et al. [25] proposed a unified approach to curve estimation which combines localization and regularization. Franke et al. [10] used that approach to discuss the case of the regularized local least-squares (RLLS) estimate. In this thesis we use the unified approach of Mrázek et al. to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the Huber M-estimate and its limiting cases towards the L2 and the L1 estimates. For the regularization part, we use quadratic regularization. Then, we define a more general class of regularization functions. Finally, we carry out a Monte Carlo simulation study to compare the different types of estimates.
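The local M-smoothing idea can be sketched as follows (a generic local Huber M-smoother for illustration, not the RLLS estimator of Franke et al. and without the regularization term): the fit m(x) minimizes a kernel-weighted sum of Huber losses of the residuals, solved here by iteratively reweighted means, where the weights interpolate between the L2 case (small residuals) and the L1 case (large residuals).

```python
# Local constant Huber M-smoother via iteratively reweighted means.
import math

def huber_local_smoother(xs, ys, x, h=0.5, delta=1.0, n_iter=50):
    # Gaussian kernel localizes the fit around x with bandwidth h.
    kern = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    m = sum(k * y for k, y in zip(kern, ys)) / sum(kern)   # L2 start value
    for _ in range(n_iter):
        # Huber psi(r)/r weights: 1 for |r| <= delta, delta/|r| beyond,
        # so large residuals (outliers) are downweighted as in L1.
        w = [k * (1.0 if abs(y - m) <= delta else delta / abs(y - m))
             for k, y in zip(kern, ys)]
        m = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return m
```

On data with a single gross outlier the Huber fit stays close to the bulk of the observations, whereas the initial least-squares value is pulled toward the outlier.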

This thesis deals with the numerical study of multiscale problems arising in the modelling of fluid flow in plain and porous media. Many of these processes, governed by partial differential equations, are relevant in engineering, industry, and environmental studies. The overall task of modelling and simulating the filtration-related multiscale processes is interdisciplinary, as it employs physics, mathematics and computer programming. Keeping the challenges in mind, the main focus is to overcome the limitations of accuracy, speed and memory and to develop novel, efficient numerical algorithms which could, in part or in whole, be utilized by those working in the field of porous media. This work has essentially four parts. A basic single-grid algorithm and a corresponding parallel algorithm to solve the macroscopic Navier-Stokes-Brinkmann model are discussed. An upscaling subgrid algorithm is derived and numerically tested for the same model. Moving a step further in the line of multiscale methods, an iterative Multiscale Finite Volume (iMSFV) method is developed for the Stokes-Darcy system. Additionally, the last part of the thesis deals with ways to incorporate changes occurring at a different (meso) scale level. The flow equations are coupled with the Convection-Diffusion-Reaction (CDR) equation, which models the transport and capturing of particle concentrations. By employing the numerical method for the coupled flow and transport problem, we understand the interplay between the flow velocity and filtration.
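The macroscopic Navier-Stokes-Brinkmann model referred to above is commonly written as follows (one standard form; the thesis' notation may differ):

```latex
% Velocity \vec u, pressure p, density \rho, viscosity \mu, permeability
% tensor K: the Brinkmann drag term \mu K^{-1} \vec u is active in the
% porous region and vanishes in the plain (free-flow) medium.
\rho \left( \partial_t \vec u + (\vec u \cdot \nabla)\vec u \right)
  - \nabla \cdot (\mu \nabla \vec u) + \mu K^{-1} \vec u + \nabla p = \vec f,
\qquad \nabla \cdot \vec u = 0 .
```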

A classical conjecture in the representation theory of finite groups, the McKay conjecture, states that for any finite group G and prime number p, the number of complex irreducible characters of G of degree prime to p is equal to the number of complex irreducible characters of degree prime to p of the normalizer of a Sylow p-subgroup. Recently, a reduction theorem was proved by Isaacs, Malle and Navarro: if all simple groups are "good", then the McKay conjecture holds. In this work we are concerned with the problem of goodness for finite groups of Lie type in their defining characteristic. A simple group is called "good" if certain equivariant bijections between the involved character sets exist. We present a structural approach to the construction of such a bijection by utilizing the so-called "Steinberg map". This yields very natural bijections, and we prove most of the desired properties.
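In symbols, the McKay conjecture stated above reads:

```latex
% G a finite group, p a prime, P \in \mathrm{Syl}_p(G), and
% \mathrm{Irr}_{p'}(H) the set of complex irreducible characters of H
% whose degree is prime to p:
\bigl| \mathrm{Irr}_{p'}(G) \bigr| = \bigl| \mathrm{Irr}_{p'}(N_G(P)) \bigr| .
```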