## Fachbereich Mathematik

### Filter

#### Department / Organisational unit

- Fachbereich Mathematik (204)
- Fraunhofer (ITWM) (2)

#### Document type

- Dissertation (204)

- Portfolio Optimization with Risk Constraints in the View of Stochastic Interest Rates (2017)
- We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous short rate. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds. The goal of the investor is to find a portfolio which maximizes expected utility from terminal wealth under budget and present expected short-fall (PESF) risk constraints. We analyze this portfolio optimization problem in both complete and incomplete financial markets in three different cases: (a) when the PESF risk is minimal, (b) when the PESF risk is between minimum and maximum, and (c) without risk constraints. Case (a) corresponds to the portfolio insurer problem; in (b) the risk constraint is binding, i.e., it is satisfied with equality; and (c) corresponds to the unconstrained Merton investment. In all cases we find the optimal terminal wealth and portfolio process using the martingale method and Malliavin calculus, respectively. In particular, we solve the dual problem explicitly in the incomplete market settings. We compare the optimal terminal wealth in these cases using numerical examples. Without risk constraints, we further compare the investment strategies for the complete and incomplete markets numerically.
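
The one-factor Vasicek short rate mentioned above follows \(dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t\). As a hedged illustration (the parameter values below are invented for the example, not taken from the thesis), it can be simulated with a plain Euler scheme:

```python
import math
import random

def simulate_vasicek(r0, kappa, theta, sigma, T, n_steps, rng=random):
    """Euler scheme for the one-factor Vasicek short rate
    dr_t = kappa * (theta - r_t) dt + sigma dW_t."""
    dt = T / n_steps
    r, path = r0, [r0]
    for _ in range(n_steps):
        r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# Illustrative parameters: short rate reverting from 1% towards theta = 3%.
path = simulate_vasicek(r0=0.01, kappa=0.8, theta=0.03, sigma=0.01, T=10.0, n_steps=1000)
```

The two-factor version adds a second correlated Ornstein-Uhlenbeck factor; in either model, zero-coupon bond prices are exponential-affine in the factors, which is what makes the bond market tractable.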

- Asymptotics for change-point tests and change-point estimators (2017)
- In change-point analysis the question of interest is whether the observations follow one model or whether there is at least one time point where the model has changed. This leads to two subfields: testing for a change and estimating the time of change. This thesis considers both parts, restricted to testing and estimating for at most one change-point. A well-known example is based on independent observations having one change in the mean. For this model, a test statistic with an asymptotic Gumbel distribution was derived from the likelihood ratio test. As the corresponding convergence rate is well known to be very slow, modifications of the test using a weight function were considered; these tests perform better. We focus on this class of test statistics. The first part gives a detailed introduction to the techniques for analysing test statistics and estimators. To this end, we consider the multivariate mean-change model and focus on the effects of the weight function. For change-point estimators we distinguish between the assumption of a fixed size of change (fixed alternative) and the assumption that the size of the change converges to 0 (local alternative). The fixed case in particular is rarely analysed in the literature. We show how to pass from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate observations. The main part of this thesis focuses on two points: first, analysing test statistics and, second, analysing the corresponding change-point estimators. In both cases, we first consider a change in the mean for independent observations while relaxing the moment condition. Based on a robust estimator for the mean, we derive a new type of change-point test with a randomized weight function. Secondly, we analyse non-linear autoregressive models with unknown regression function. Based on neural networks, test statistics and estimators are derived for correctly specified as well as misspecified situations. This part extends the literature, as we analyse test statistics and estimators that are not based only on the sample residuals. In both sections, the one on tests and the one on change-point estimators, we conclude by giving regularity conditions on the model as well as on the parameter estimator. Finally, a simulation study for the neural-network-based test and estimator is given. We discuss the behaviour under correct specification and misspecification and apply the neural-network-based test and estimator to two data sets.

- Small self-centralizing subgroups in defect groups of finite classical groups (2017)
- In this thesis, we consider a problem from the modular representation theory of finite groups. Lluís Puig asked whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \). In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic no such examples are known. For instance, it is known by work of Danz, Külshammer, and Puig that the answer to Puig's question is positive in the case that \( G \) is a symmetric group. Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( {}^3D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question. In addition, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as the Brauer construction.

- The Bootstrap for the Functional Autoregressive Model FAR(1) (2016)
- Functional data analysis is a branch of statistics that deals with observations \(X_1, \ldots, X_n\) which are curves. We are interested in particular in time series of dependent curves and, specifically, consider the functional autoregressive process of order one (FAR(1)), which is defined as \(X_{n+1} = \Psi(X_n) + \epsilon_{n+1}\) with independent innovations \(\epsilon_t\). Estimates \(\hat{\Psi}\) of the autoregressive operator \(\Psi\) have been investigated extensively during the last two decades, and their asymptotic properties are well understood. Particularly difficult, and different from scalar- or vector-valued autoregressions, are the weak convergence properties, which also form the basis of the bootstrap theory. Although the asymptotics for \(\hat{\Psi}(X_n)\) are still tractable, they are only useful for large enough samples. In applications, however, frequently only small samples of data are available, so that an alternative method for approximating the distribution of \(\hat{\Psi}(X_n)\) is welcome. As a motivation, we discuss a real-data example in which we investigate a change-point detection problem for a stimulus-response dataset obtained from the animal physiology group at the Technical University of Kaiserslautern. As an alternative to asymptotic approximations, we employ the naive or residual-based bootstrap procedure. In this thesis, we prove theoretically and show via simulations that the bootstrap provides asymptotically valid and practically useful approximations of the distributions of certain functions of the data. Such results may be used to calculate approximate confidence bands or critical bounds for tests.
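
In the scalar special case, the residual-based bootstrap studied here for FAR(1) reduces to the familiar bootstrap for AR(1). The following sketch (my own illustration, not code from the thesis) shows the scheme: estimate \(\Psi\), resample the centred residuals, rebuild bootstrap series, and re-estimate:

```python
import random

def fit_ar1(x):
    """Least-squares estimate of psi in X_{n+1} = psi * X_n + eps_{n+1}."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den

def residual_bootstrap_ar1(x, B=200, rng=random):
    """Naive residual-based bootstrap for the AR(1) coefficient."""
    psi_hat = fit_ar1(x)
    resid = [x[i + 1] - psi_hat * x[i] for i in range(len(x) - 1)]
    mean_resid = sum(resid) / len(resid)
    centred = [e - mean_resid for e in resid]
    boot = []
    for _ in range(B):
        xb = [x[0]]
        for _ in range(len(x) - 1):
            xb.append(psi_hat * xb[-1] + rng.choice(centred))
        boot.append(fit_ar1(xb))
    return psi_hat, boot   # point estimate and bootstrap distribution
```

In the functional setting, the scalar product and division are replaced by estimation of the operator \(\Psi\) from curves, but the resampling logic is the same.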

- Integrality of representations of finite groups (2016)
- Since the early days of representation theory of finite groups in the 19th century, it has been known that complex linear representations of finite groups live over number fields, that is, over finite extensions of the field of rational numbers. While the related question of integrality of representations was answered negatively by the work of Cliff, Ritter and Weiss as well as by Serre and Feit, it was not known how to decide integrality of a given representation. In this thesis we show that there exists an algorithm that, given a representation of a finite group over a number field, decides whether this representation can be made integral. Moreover, we provide theoretical and numerical evidence for a conjecture which predicts the existence of splitting fields of irreducible characters with integrality properties. In the first part, we describe two algorithms for the pseudo-Hermite normal form, which is crucial when handling modules over rings of integers. Using a newly developed computational model for ideal and element arithmetic in number fields, we show that our pseudo-Hermite normal form algorithms have polynomial running time. Furthermore, we address a range of algorithmic questions related to orders and lattices over Dedekind domains, including the computation of genera, testing local isomorphism, the computation of various homomorphism rings and the computation of Solomon zeta functions. In the second part we turn to the integrality of representations of finite groups and show that an important ingredient is a thorough understanding of the reduction of lattices at almost all prime ideals. By employing class field theory and tools from representation theory we solve this problem and eventually describe an algorithm for testing integrality. After running the algorithm on a large set of examples, we are led to a conjecture on the existence of integral and nonintegral splitting fields of characters. By extending techniques of Serre, we prove the conjecture for characters with rational character field and Schur index two.
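
Over \(\mathbb{Z}\), where the class group is trivial, the pseudo-Hermite normal form specializes to the classical Hermite normal form. A naive sketch of the latter (my own illustration; coefficient growth is unchecked, so it has none of the polynomial-running-time guarantees established in the thesis):

```python
def hnf(mat):
    """Row-style Hermite normal form of an integer matrix via
    Euclidean row operations (naive; intermediate entries may blow up)."""
    A = [row[:] for row in mat]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        # clear the column below the pivot with gcd (Euclidean) steps
        for i in range(r + 1, m):
            while A[i][c] != 0:
                q = A[r][c] // A[i][c]
                A[r], A[i] = A[i], [a - q * b for a, b in zip(A[r], A[i])]
        if A[r][c] < 0:
            A[r] = [-x for x in A[r]]
        # reduce the entries above the pivot into [0, pivot)
        for i in range(r):
            q = A[i][c] // A[r][c]
            A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        r += 1
    return A
```

The pseudo-HNF replaces the single generator of each elementary divisor by a fractional ideal, which is what makes the Dedekind-domain case workable.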

- Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information (2016)
- In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor whose only source of information are the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth. The case of partial information is based on the use of filtering techniques. Conditions ensuring boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: the effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here. Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters. For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor; losses are minimal in case the filters are advantageous. For an increasing number of stocks, the boundedness conditions again need to be met, and the losses in this case depend strongly on the added stocks. The relation of boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible if more states are assumed than were originally present; assuming fewer states leads to high losses. Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to big losses, depending on how much effect the constraints have. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness. All results are implemented and illustrated with the corresponding numerical findings.
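
For the linear Gaussian drift, the filter in question is of Kalman(-Bucy) type. A minimal discrete-time sketch for the special case of an unobserved *constant* drift estimated from observed returns (my own simplification; the thesis treats richer drift dynamics and the Markov-modulated case via the Wonham filter):

```python
def drift_filter(returns, dt, sigma, mu0=0.0, p0=1.0):
    """Kalman filter for the observation model y_k = mu * dt + sigma * sqrt(dt) * eps_k,
    where the drift mu is an unobserved constant (static state)."""
    mu, p = mu0, p0                   # prior mean and variance of mu
    obs_var = sigma * sigma * dt      # observation noise variance R
    estimates = []
    for y in returns:
        gain = p * dt / (p * dt * dt + obs_var)   # K = P H / (H P H + R), H = dt
        mu = mu + gain * (y - mu * dt)            # update with the innovation
        p = (1.0 - gain * dt) * p                 # posterior variance shrinks
        estimates.append(mu)
    return estimates

# Noise-free illustration: returns generated by a true drift of 5%.
dt = 0.01
est = drift_filter([0.05 * dt] * 500, dt=dt, sigma=0.1)
```

The filtered drift replaces the true (unobservable) drift in the investor's optimization, which is exactly where the boundedness of the filter's expected value starts to matter.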

- Linear diffusions conditioned on long-term survival (2016)
- We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we work with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity, looking for a limiting behaviour.
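
A crude Monte Carlo sketch of this conditioning (my own illustration: it only kills at zero and uses a plain Euler scheme, whereas the thesis also allows killing in the interior and works analytically):

```python
import math
import random

def conditioned_mean(b, x0, t, dt=0.01, n_paths=2000, rng=None):
    """Monte Carlo estimate of E[X_t | survival up to t] for the diffusion
    dX_t = b(X_t) dt + dW_t on the positive half-line, killed at zero."""
    rng = rng or random.Random(0)
    survivors = []
    for _ in range(n_paths):
        x, alive = x0, True
        for _ in range(int(t / dt)):
            x += b(x) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if x <= 0.0:          # path is killed at zero
                alive = False
                break
        if alive:
            survivors.append(x)
    return sum(survivors) / len(survivors) if survivors else float("nan")

# Driftless Brownian motion started at 1: conditioning on survival
# pushes the mean of the surviving paths above the starting point.
m = conditioned_mean(lambda x: 0.0, x0=1.0, t=1.0)
```

Letting t grow in such a simulation gives a numerical impression of the limiting behaviour that the thesis establishes rigorously.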

- Utility-Based Risk Measures and Time Consistency of Dynamic Risk Measures (2016)
- This thesis deals with risk measures based on utility functions and with time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134]. A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective to alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is non-trivial and coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes in the probability of a financial loss than value at risk (V@R) and average value at risk. Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: a product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on the DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is the time consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time-consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establishing this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
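
The two benchmark measures named above, value at risk and average value at risk, have simple empirical versions (sign conventions vary; here losses are positive numbers). Even this toy sketch shows why V@R can be insensitive to changes far in the tail while AV@R reacts:

```python
def value_at_risk(losses, alpha):
    """Empirical V@R at level alpha: the alpha-quantile of the losses."""
    xs = sorted(losses)
    k = min(len(xs) - 1, int(alpha * len(xs)))
    return xs[k]

def average_value_at_risk(losses, alpha):
    """Empirical AV@R (expected shortfall): average loss beyond the alpha-quantile."""
    xs = sorted(losses)
    tail = xs[int(alpha * len(xs)):]
    return sum(tail) / len(tail)
```

Blowing up a single extreme loss leaves the empirical V@R unchanged but moves AV@R; the thesis's point is that OEU is more sensitive still to slight changes in the loss probability.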

- Recursive Utility and Stochastic Differential Utility: From Discrete to Continuous Time (2016)
- In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored. First, a class of backward equations under nonlinear expectations is investigated: existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations. Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established. Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed-point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
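
The discrete-time Epstein-Zin recursion behind SDU can be written out directly. A sketch on an i.i.d. binomial consumption tree (my own illustration; \(\rho\) is the inverse elasticity of intertemporal substitution and \(\gamma\) the relative risk aversion, so the thesis's focus corresponds to \(\gamma > 1\) and \(\rho < 1\)):

```python
def epstein_zin_utility(c0, beta, rho, gamma, up, down, p=0.5, horizon=20):
    """Finite-horizon Epstein-Zin recursion on a binomial consumption tree:
    V_t = [(1-beta) c_t^(1-rho)
           + beta (E_t[V_{t+1}^(1-gamma)])^((1-rho)/(1-gamma))]^(1/(1-rho)),
    with terminal value V_T = c_T.  Requires rho != 1 and gamma != 1."""
    T = horizon
    v = [c0 * up ** k * down ** (T - k) for k in range(T + 1)]  # V_T = c_T
    for t in range(T - 1, -1, -1):
        new_v = []
        for k in range(t + 1):                       # k = number of up-moves
            c = c0 * up ** k * down ** (t - k)
            ce = (p * v[k + 1] ** (1 - gamma)
                  + (1 - p) * v[k] ** (1 - gamma)) ** (1.0 / (1 - gamma))
            new_v.append(((1 - beta) * c ** (1 - rho)
                          + beta * ce ** (1 - rho)) ** (1.0 / (1 - rho)))
        v = new_v
    return v[0]
```

With constant consumption the recursion returns the consumption level itself, a useful sanity check; with \(\gamma = \rho\) it collapses to time-additive CRRA utility, and the continuous-time limit of such recursions is the SDU studied in the thesis.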

- New Aspects of Inflation Modeling (2016)
- Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular the Phillips curve models of inflation dynamics. We focused on a well-known and widely used model, the so-called three-equation new Keynesian model, which is a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve and an interest rate rule. We gave a detailed derivation of these equations. The interest rate rule used in this model is normally determined by using a Lagrangian method to solve an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that the policy makers intervene continuously, which means that the costs resulting from changes in the interest rate are ignored. We also showed that approximation errors are made when one log-linearizes nonlinear equations in the derivation of the standard discrete-time NKPC. We agreed with other researchers, as mentioned in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal interest rate rule and hence to non-optimal paths of output gaps and inflation. To overcome this problem, we proposed a stochastic optimal impulse control method: we formulated the problem as a stochastic optimal impulse control problem by taking into account the costs of changing interest rates and the approximation error terms. To formulate this problem, we first transformed the standard discrete-time NKPC and the IS curve into their high-frequency versions and hence into their continuous-time versions, where the error terms are described by a zero-mean Gaussian white noise with finite and constant variance. After formulating this problem, we used the quasi-variational inequality approach to solve analytically a special case of the central bank problem, in which the inflation rate is supposed to be on target and the central bank has to optimally control the output gap dynamics. This method gives an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, consisting of the optimal size of intervention and the optimal intervention time, that can be used to keep the process within the optimal control band. Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of changing the interest rate, leads to an increase in the width of the optimal control band. In this case, optimal intervention requires the central bank to wait longer before undertaking another control action.
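
The policy that the quasi-variational inequality approach yields is a control band: intervene only when the process leaves the band, paying a fixed plus a proportional cost. A crude simulation sketch of such a band policy (driftless dynamics and invented cost parameters, not the thesis's calibration):

```python
import math
import random

def simulate_band_policy(a, b, reset, sigma, T=10.0, dt=0.01,
                         fixed=1.0, prop=0.5, rng=None):
    """Simulate an output gap following dX = sigma dW (illustrative), with
    impulse control: whenever X leaves [a, b], reset it to `reset`, paying
    a fixed cost plus a cost proportional to the intervention size."""
    rng = rng or random.Random(0)
    x, cost, n_interventions = reset, 0.0, 0
    for _ in range(int(T / dt)):
        x += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x < a or x > b:
            cost += fixed + prop * abs(x - reset)   # fixed + proportional cost
            x = reset
            n_interventions += 1
    return cost, n_interventions
```

Widening the band trades fewer, larger interventions against letting the process drift further from target, which is exactly the trade-off behind the comparative statics reported above.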