## Fachbereich Mathematik

#### Dissertations (2016)

- Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information (2016)
- In a financial market, we consider three types of investors trading over a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information is the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth. The case of partial information is based on the use of filtering techniques. Conditions that ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: the effects are studied and quantified. Next, the derivation of an equivalent, lower-dimensional market is presented, a result in the spirit of a Mutual Fund Theorem. Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters: for infrequent trading, we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal when the filters are advantageous. For an increasing number of stocks, the boundedness conditions again need to be met. Losses in this case depend strongly on the added stocks. The relation between boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible if more states are assumed than were originally present. Assuming fewer states leads to high losses.
Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to large losses depending on how binding they are. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the boundedness conditions. All results are implemented and illustrated with the corresponding numerical findings.
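The linear Gaussian case of the filtering step described in this abstract can be illustrated with a one-dimensional Kalman filter that tracks an unobserved mean-reverting drift through noisy log-returns. This is a minimal sketch, not the thesis's implementation; the Ornstein-Uhlenbeck drift specification and all parameter names (`kappa`, `mu_bar`, `sigma_mu`, `sigma_s`) are illustrative assumptions.

```python
import numpy as np

def kalman_drift_filter(returns, dt, kappa, mu_bar, sigma_mu, sigma_s, m0, p0):
    """Kalman filter for an unobserved Ornstein-Uhlenbeck drift mu_t seen
    only through log-returns r_t = mu_t*dt + sigma_s*sqrt(dt)*eps_t.
    Returns the filtered means and conditional variances."""
    m, p = m0, p0
    means, variances = [], []
    for r in returns:
        # prediction step for d(mu) = kappa*(mu_bar - mu) dt + sigma_mu dB
        m = m + kappa * (mu_bar - m) * dt
        p = p + (-2.0 * kappa * p + sigma_mu**2) * dt
        # update step: observation matrix H = dt, noise variance sigma_s^2 * dt
        s = p * dt**2 + sigma_s**2 * dt   # innovation variance
        k = p * dt / s                    # Kalman gain
        m = m + k * (r - m * dt)
        p = (1.0 - k * dt) * p
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)
```

The partially informed investor would plug the filtered means into the fully-informed optimal strategy in place of the true drift.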

- Linear diffusions conditioned on long-term survival (2016)
- We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we work with are almost surely killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time \(t\) and let \(t\) tend to infinity, looking for a limiting behaviour.
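A crude Monte Carlo picture of such conditioning can be obtained by simulating the SDE with an Euler-Maruyama scheme, killing paths at zero and at a state-dependent rate, and keeping only the survivors. This is an illustrative sketch with toy coefficients chosen here, not a method from the thesis, which studies the limit analytically.

```python
import numpy as np

def surviving_endpoints(mu, sigma, kill_rate, x0, t, dt, n_paths, seed=0):
    """Euler-Maruyama simulation of dX = mu(X) dt + sigma(X) dW on [0, inf),
    with killing at zero and at state-dependent rate kill_rate(x).
    Returns the time-t values of the paths that survive."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t / dt))
    survivors = []
    for _ in range(n_paths):
        x = x0
        alive = True
        for _ in range(n_steps):
            x += mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
            if x <= 0.0 or rng.random() < kill_rate(x) * dt:
                alive = False   # killed at zero or by the soft killing rate
                break
        if alive:
            survivors.append(x)
    return np.array(survivors)
```

The empirical distribution of the surviving endpoints, for large \(t\), is a stand-in for the conditioned law whose limit the thesis characterizes.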

- Utility-Based Risk Measures and Time Consistency of Dynamic Risk Measures (2016)
- This thesis deals with risk measures based on utility functions and with time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134]. A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective to alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is non-trivial and coherent if the utility function \(u\) has constant relative risk aversion. We present several different risk measures that can be derived with special choices of \(u\) and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk. Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: a product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on the DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is time consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions, and slightly generalize Weber's [137] findings on the relation of time-consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
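The benchmark risk measures against which OEU is compared can be made concrete on an empirical loss sample. The snippet below implements only the standard V@R and average value at risk (losses counted as positive numbers, a common but not universal sign convention); OEU itself requires solving the underlying expected-utility optimization and is not reproduced here.

```python
import numpy as np

def value_at_risk(losses, alpha):
    """Empirical V@R at level alpha: the alpha-quantile of the loss sample
    (losses counted as positive numbers)."""
    return np.quantile(losses, alpha)

def average_value_at_risk(losses, alpha):
    """Empirical AV@R at level alpha: the mean loss beyond the alpha-quantile."""
    var = value_at_risk(losses, alpha)
    return losses[losses >= var].mean()
```

Two samples that agree up to the quantile but differ in the far tail share the same V@R while AV@R separates them, which is the kind of tail sensitivity the abstract alludes to.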

- Recursive Utility and Stochastic Differential Utility: From Discrete to Continuous Time (2016)
- In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored. First, a class of backward equations under nonlinear expectations is investigated: existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations. Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established. Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed-point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
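The discrete-time Epstein-Zin recursion whose continuous-time limit the abstract refers to can be written down in a few lines. The following sketch implements one backward step in the textbook parametrization (\(\beta\) discount factor, \(\gamma\) relative risk aversion, \(\psi\) elasticity of intertemporal substitution); this is standard notation, not necessarily the thesis's.

```python
import numpy as np

def epstein_zin_step(c, cont_values, probs, beta, gamma, psi):
    """One backward step of the discrete-time Epstein-Zin recursion
    V_t = [(1-beta)*c^rho + beta*(E[V_{t+1}^(1-gamma)])^(rho/(1-gamma))]^(1/rho)
    with rho = 1 - 1/psi; requires gamma != 1 and psi != 1."""
    rho = 1.0 - 1.0 / psi
    # certainty equivalent of next-period utility under risk aversion gamma
    ce = (probs @ cont_values**(1.0 - gamma))**(1.0 / (1.0 - gamma))
    return ((1.0 - beta) * c**rho + beta * ce**rho)**(1.0 / rho)
```

Iterating this step backwards over a scenario tree yields the recursive utility of a consumption stream; the separation of \(\gamma\) and \(\psi\) is exactly what distinguishes Epstein-Zin from time-additive CRRA utility.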

- New Aspects of Inflation Modeling (2016)
- Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviews inflation models, in particular the Phillips curve models of inflation dynamics. We focus on a well-known and widely used model, the so-called three-equation New Keynesian model, which is a system of equations consisting of a New Keynesian Phillips curve (NKPC), an investment and saving (IS) curve, and an interest rate rule. We give a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously, meaning that the costs resulting from changes in the interest rate are ignored. We also show that approximation errors are made when the non-linear equations are log-linearized in the derivation of the standard discrete-time NKPC. We agree with other researchers, as mentioned in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of the output gap and the inflation rate. To overcome this problem, we propose a stochastic optimal impulse control method. We formulate the problem as a stochastic optimal impulse control problem by taking into account the costs of changes in the interest rate and the approximation error terms. To formulate this problem, we first transform the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite, constant variance.
After formulating this problem, we use the quasi-variational inequality approach to solve analytically a special case of the central bank's problem, where the inflation rate is assumed to be on target and the central bank has to optimally control the output gap dynamics. This method yields an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal intervention size and the optimal intervention times, that keeps the process within the band. Finally, using a numerical example, we examine the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of changing the interest rate, leads to a wider optimal control band. In this case, optimal intervention requires the central bank to wait longer before undertaking another control action.
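The flavour of such a band policy can be conveyed by simulation: a driftless toy output-gap process is reset towards the target whenever it leaves a band. The band width and reset point below are arbitrary placeholders, not the optimal values derived via the quasi-variational inequalities in the thesis.

```python
import numpy as np

def simulate_band_policy(x0, band, reset, sigma, dt, n_steps, seed=0):
    """Simulate a driftless output-gap process under an impulse policy:
    whenever the process leaves [-band, band], it is reset to +/-reset.
    Returns the controlled path and the number of interventions."""
    rng = np.random.default_rng(seed)
    x = x0
    path = [x0]
    interventions = 0
    for _ in range(n_steps):
        x += sigma * np.sqrt(dt) * rng.standard_normal()
        if abs(x) > band:
            x = np.copysign(reset, x)   # impulse: jump back towards the target
            interventions += 1
        path.append(x)
    return np.array(path), interventions
```

Widening `band` in this toy model lowers the intervention count, mirroring the qualitative finding that higher intervention costs make the central bank wait longer between control actions.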

- Hecke algebras of type A: Auslander--Reiten quivers and branching rules (2016)
- The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\).
In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.

- Integrality of representations of finite groups (2016)
- Since the early days of representation theory of finite groups in the 19th century, it was known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers. While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation. In this thesis we show that there exists an algorithm that, given a representation of a finite group over a number field, decides whether this representation can be made integral. Moreover, we provide theoretical and numerical evidence for a conjecture which predicts the existence of splitting fields of irreducible characters with integrality properties. In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers. Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time. Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including the computation of genera, testing local isomorphism, the computation of various homomorphism rings and the computation of Solomon zeta functions. In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals. By employing class field theory and tools from representation theory, we solve this problem and eventually describe an algorithm for testing integrality. After running the algorithm on a large set of examples, we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters.
By extending techniques of Serre, we prove the conjecture for characters with rational character field and Schur index two.
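The Hermite normal form underlying the pseudo-HNF algorithms mentioned in this abstract is the workhorse for lattice and module computations. As a toy illustration, here is a naive row-style HNF over \(\mathbb{Z}\) via Euclidean row operations; the thesis's pseudo-HNF over rings of integers of number fields, with its polynomial running-time analysis, is considerably more involved.

```python
def hnf(rows):
    """Naive row-style Hermite normal form of an integer matrix: upper
    triangular with positive pivots and entries above each pivot reduced
    modulo that pivot, computed by Euclidean row operations."""
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        nonzero = [i for i in range(r, m) if A[i][c] != 0]
        if not nonzero:
            continue
        A[r], A[nonzero[0]] = A[nonzero[0]], A[r]
        for i in range(r + 1, m):
            # Euclidean algorithm on the pivot A[r][c] and A[i][c]
            while A[i][c] != 0:
                q = A[r][c] // A[i][c]
                A[r] = [a - q * b for a, b in zip(A[r], A[i])]
                A[r], A[i] = A[i], A[r]
        if A[r][c] < 0:
            A[r] = [-a for a in A[r]]
        for i in range(r):
            # reduce the entry above the pivot modulo the pivot
            q = A[i][c] // A[r][c]
            A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        r += 1
    return A
```

Since only invertible integer row operations are used, the output generates the same row lattice as the input, which is what makes the HNF a canonical form for modules given by generators.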

- The Bootstrap for the Functional Autoregressive Model FAR(1) (2016)
- Functional data analysis is a branch of statistics that deals with observations \(X_1,\ldots,X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1}=\Psi(X_{n})+\epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) of the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult, and different from scalar- or vector-valued autoregressions, are the weak convergence properties, which also form the basis of the bootstrap theory. Although the asymptotics for \(\hat{\Psi}(X_{n})\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, such that an alternative method for approximating the distribution of \(\hat{\Psi}(X_{n})\) is welcome. As a motivation, we discuss a real-data example where we investigate a changepoint detection problem for a stimulus response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern. To get an alternative to asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
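The residual-based bootstrap is easiest to see in the scalar special case, where the FAR(1) reduces to an ordinary AR(1). The sketch below follows the generic naive-bootstrap recipe (estimate, resample centred residuals, rebuild the series, re-estimate); it is a finite-dimensional analogue for intuition, not the functional procedure analysed in the thesis.

```python
import numpy as np

def ar1_residual_bootstrap(x, n_boot=200, seed=0):
    """Residual-based (naive) bootstrap for a scalar AR(1)
    x_{t+1} = psi * x_t + eps_{t+1}: estimate psi by least squares,
    resample the centred residuals with replacement, rebuild the series,
    and re-estimate psi on each bootstrap series."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    psi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])   # least-squares estimate
    resid = x[1:] - psi_hat * x[:-1]
    resid -= resid.mean()                            # centre the residuals
    psi_boot = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice(resid, size=len(x) - 1, replace=True)
        xb = np.empty_like(x)
        xb[0] = x[0]
        for t in range(1, len(x)):
            xb[t] = psi_hat * xb[t - 1] + eps[t - 1]
        psi_boot[b] = (xb[:-1] @ xb[1:]) / (xb[:-1] @ xb[:-1])
    return psi_boot
```

The empirical quantiles of `psi_boot - psi_hat` then serve as small-sample approximations to the estimator's distribution, from which confidence intervals or critical values can be read off.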

- Interest Rate Modeling - The Potential Approach and Multi-Curve Potential Models (2016)
- This thesis is concerned with interest rate modeling by means of the potential approach. The contribution of this work is twofold. First, by making use of the potential approach and the theory of affine Markov processes, we develop a general class of rational models of the term structure of interest rates, which we refer to as "the affine rational potential model". These models feature positive interest rates and analytical pricing formulae for zero-coupon bonds, caps, swaptions, and European currency options. We present some concrete models to illustrate the scope of the affine rational potential model and calibrate a model specification to real-world market data. Second, we develop a general family of "multi-curve potential models" for post-crisis interest rates. Our models feature positive stochastic basis spreads, positive term structures, and analytic pricing formulae for interest rate derivatives. This modeling framework is also flexible enough to accommodate negative interest rates and positive basis spreads.

- Gröbner Bases over Extension Fields of \(\mathbb{Q}\) (2016)
- Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field. In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\). To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by joining \(f\) to the ideal to be considered, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\). 
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation. At the current state of development, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.
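A key lifting step in such modular methods is rational reconstruction: recovering a rational coefficient in characteristic zero from its image modulo a prime (or product of primes). Below is a standard half-extended Euclidean sketch with Wang's bound \(|p|, q \le \sqrt{m/2}\); it illustrates the principle only and is unrelated to the SINGULAR implementation.

```python
from math import gcd, isqrt

def rational_reconstruction(a, m):
    """Recover a signed rational p/q with |p|, q <= sqrt(m/2) from its
    residue a modulo m (half-extended Euclidean algorithm, Wang's bound).
    Returns (p, q) or None if no such fraction exists."""
    bound = isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = a % m, 1
    while r1 > bound:                 # run Euclid until the remainder is small
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None                   # no admissible fraction below the bound
    if t1 < 0:
        r1, t1 = -r1, -t1             # normalize to a positive denominator
    return r1, t1
```

Applied coefficient-wise to a Gröbner basis computed modulo \(m\), this recovers the candidate basis over \(\mathbb{Q}\), which then still has to pass the final verification test mentioned above.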