Kaiserslautern - Fachbereich Mathematik
Interest-rate-optimized public debt management aims to strike the most efficient trade-off between expected financing costs on the one hand and risks to the public budget on the other. To approach this tension, we build, for the first time, a bridge between the problems of public debt management and the methods of continuous-time dynamic portfolio optimization.
The key element is a new metric for measuring financing costs: the perpetual costs. They reflect the average future financing costs and comprise both the interest payments that are already known and the still unknown costs of necessary refinancing. The volatility of the perpetual costs therefore also represents the risk of a given strategy; the longer the term of the financing, the smaller the fluctuation range of the perpetual costs.
The perpetual costs arise as the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. To model the present value, we draw on the concept of a self-financing bond portfolio known from dynamic portfolio optimization, based here on a multi-dimensional affine-linear interest rate model. The growth of the debt portfolio is slowed or prevented by incorporating the government's primary surplus as an external inflow into the self-financing model.
Because of the variety of possible financing instruments, we do not choose their value fractions as control variables but instead control the portfolio's sensitivities to different interest rate movements. In a subsequent step, optimal value fractions for a wide range of financing instruments can then be derived from the optimal sensitivities. We demonstrate this exemplarily using rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems with methods of stochastic control theory. In both cases, the expected utility of the perpetual costs is maximized. The utility functions are tailored to debt management and are characterized in particular by the property that higher costs entail lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed debt or cost ceiling.
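As a toy numerical illustration of the cost metric described above (hypothetical numbers and a flat discount curve, not the thesis's multi-dimensional affine model), the perpetual costs can be computed as the product of the portfolio's present value and a portfolio-independent perpetual rate:

```python
import math

def present_value(cashflows, rate):
    """Present value of a debt portfolio given as (time, amount) pairs,
    discounted with a flat continuously compounded rate."""
    return sum(amount * math.exp(-rate * t) for t, amount in cashflows)

def perpetual_costs(cashflows, rate, perpetual_rate):
    """Perpetual costs = present value of the debt portfolio times the
    (portfolio-independent) perpetual rate."""
    return perpetual_rate * present_value(cashflows, rate)

# Hypothetical portfolio: a single zero-coupon debt of 100 due in 5 years,
# discounted and perpetualized at a flat 2% rate.
portfolio = [(5.0, 100.0)]
pc = perpetual_costs(portfolio, rate=0.02, perpetual_rate=0.02)
```
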
Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios
(2021)
This dissertation consists of three independent parts: the yield curve shapes generated by interest rate models, yield curve forecasting, and the application of the chance-risk classification to a portfolio of pension products. As a component of the capital market model, the yield curve influences the chance-risk classification, which was introduced to improve the comparability of pension products and to strengthen consumer protection. Consequently, all three topics have a major impact on this essential safeguard.
Firstly, we focus on the yield curve shapes attainable in Vasicek interest rate models. We extend the existing studies of the attainable yield curve shapes in the one-factor Vasicek model by an analysis of the curvature. Further, we show that the two-factor Vasicek model can explain significantly more effects observed in the market than its one-factor variant, among them the occurrence of dipped yield curves.
We further introduce a general change-of-measure framework for the Monte Carlo simulation of the Vasicek model under a subjective measure. This can be used to avoid an excessively high frequency of inverse yield curves as the simulation time grows.
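The yield curve shapes in question can be sketched with the standard closed-form bond price of the one-factor Vasicek model (illustrative parameters, not the thesis's calibration): a short rate started well below its mean-reversion level produces a normally shaped, increasing curve that converges to the model's long-run yield.

```python
import numpy as np

def vasicek_yield(r0, a, theta, sigma, tau):
    """Zero-coupon yield R(0, tau) in the one-factor Vasicek model
    dr = a (theta - r) dt + sigma dW, via the closed-form bond price."""
    B = (1.0 - np.exp(-a * tau)) / a
    lnA = (B - tau) * (a**2 * theta - sigma**2 / 2.0) / a**2 \
        - sigma**2 * B**2 / (4.0 * a)
    return (B * r0 - lnA) / tau

# Illustrative parameters: short rate 1%, reversion level 4%.
curve = [vasicek_yield(0.01, 0.5, 0.04, 0.01, tau) for tau in (1.0, 10.0, 30.0)]
```

As a sanity check, the yield for very long maturities approaches the well-known Vasicek long-run level theta - sigma^2 / (2 a^2).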
Secondly, we examine different time series models, including machine learning algorithms, for forecasting the yield curve. For this, we consider statistical time series models such as autoregression and vector autoregression. Their performance is compared with that of a multilayer perceptron, a fully connected feed-forward neural network. For this purpose, we develop an extended approach to the hyperparameter optimization of the perceptron which is based on standard procedures like grid and random search but allows searching a larger hyperparameter space. Our investigation shows that multilayer perceptrons outperform statistical models for long forecast horizons.
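A minimal least-squares autoregressive forecaster of the statistical kind compared here might look as follows (a sketch on synthetic data with a known coefficient, not the thesis's models or yield data):

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of y_t = c + phi_1 y_(t-1) + ... + phi_p y_(t-p)."""
    y = series[p:]
    lags = np.column_stack([series[p - k : len(series) - k]
                            for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(y)), lags])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0], beta[1:]

def forecast_ar(series, p, horizon):
    """Iterated multi-step forecast with a fitted AR(p) model."""
    c, phi = fit_ar(series, p)
    hist = list(series)
    for _ in range(horizon):
        hist.append(c + sum(phi[k] * hist[-k - 1] for k in range(p)))
    return np.array(hist[len(series):])

# Synthetic AR(1) "yield" series with true coefficient 0.8 and mean 2%.
rng = np.random.default_rng(0)
y = np.empty(2000)
y[0] = 0.02
for t in range(1, len(y)):
    y[t] = 0.004 + 0.8 * y[t - 1] + 0.001 * rng.standard_normal()
c_hat, phi_hat = fit_ar(y, 1)
```

The fitted coefficient recovers the true value up to sampling error, and the iterated forecast reverts to the estimated long-run mean c / (1 - phi).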
The third part deals with the chance-risk classification of state-subsidized pension products in Germany and its relevance for customer consulting. To optimize the use of the chance-risk classes assigned by Produktinformationsstelle Altersvorsorge gGmbH, we develop a procedure for determining the chance-risk class of different portfolios of state-subsidized pension products under the constraint that the portfolio chance-risk class does not exceed the customer's risk preference. For this, we consider a portfolio consisting of two new pension products as well as a second one containing a product already owned by the customer together with the offer of a new one. This is of particular interest for customer consulting and can include other assets of the customer. We examine the properties of various chance and risk parameters as well as their corresponding mappings and show that a diversification effect exists. Based on these properties, we conclude that the average final contract values have to be used to obtain the upper bound of the portfolio chance-risk class. Furthermore, since the chance-risk class is only assigned at the beginning of the accumulation phase, we develop an approach for determining it over the contract term. On the one hand, we apply the current legal situation; on the other hand, we suggest an approach that requires further simulations. Finally, we translate our results into recommendations for customer consultation.
Wreath product groups \(C_\ell \wr \mathfrak{S}_n\) have a rich combinatorial representation theory coming from the symmetric group case and involving partitions, Young tableaux, and Specht modules. To such a wreath product group \(W\), one can associate various algebras and geometric objects: Hecke algebras, quantum groups, Hilbert schemes, Calogero--Moser spaces, and (restricted) rational Cherednik algebras. Over the years, surprising connections have been made between a lot of these objects, with many of these connections having been traced back to combinatorial constructions and properties of the group \(W\) itself.
In this thesis, we study one of these algebras, namely the restricted rational Cherednik algebra \(\overline{\mathsf{H}}_\mathbf{c}(W)\), in order to find combinatorial models describing certain representation-theoretic phenomena around \(\overline{\mathsf{H}}_\mathbf{c}(W)\). In particular, we generalize a result by Gordon and describe the graded \(W\)-characters of the simple modules of \(\overline{\mathsf{H}}_\mathbf{c}(W)\) for generic parameter \(\mathbf{c}\) using Haiman's wreath Macdonald polynomials: these graded \(W\)-characters turn out to be specializations of the wreath Macdonald polynomials. In the non-generic parameter case, we use recent results by Maksimau to combinatorially express an inductive rule for \(\overline{\mathsf{H}}_\mathbf{c}(W)\)-modules first described by Bellamy. We use our results in type \(B\) to describe the (ungraded) \(B_n\)-character of simple \(\overline{\mathsf{H}}_\mathbf{c}(B_n)\)-modules associated to bipartitions with one empty part. Afterwards, we relate this combinatorial induction to various other algebras and families of \(W\)-characters found in the literature, such as Lusztig's constructible characters, and detail some connections between generic and non-generic parameters using wreath Macdonald polynomials.
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
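Restricted to constant strategies and logarithmic utility, the worst-case idea can be sketched numerically (illustrative parameters; the thesis treats general strategies, transaction costs and regime switching, none of which appear here): for an investment fraction pi > 0, the worst admissible crash is the largest one, so the robust strategy maximizes the crash-adjusted growth rate and ends up below the crash-free Merton fraction.

```python
import numpy as np

def worst_case_log_growth(pi, mu_excess, sigma, T, k_max):
    """Log-utility growth of a constant stock fraction pi over horizon T
    when the worst admissible crash (relative size k_max) hits once."""
    crash = np.log(1.0 - pi * k_max) if pi > 0 else 0.0
    return (mu_excess * pi - 0.5 * sigma**2 * pi**2) * T + crash

# Hypothetical market: 6% excess drift, 20% vol, 10y horizon, crashes up to 20%.
grid = np.linspace(0.0, 1.5, 1501)
values = [worst_case_log_growth(p, 0.06, 0.2, 10.0, 0.2) for p in grid]
pi_star = grid[int(np.argmax(values))]
merton = 0.06 / 0.2**2  # crash-free log-utility optimum, for comparison
```

With these numbers the first-order condition gives a robust fraction of roughly 0.89, strictly below the Merton fraction of 1.5, reflecting the insurance against the crash scenario.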
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach. They extend a Black-Scholes-type security market to include the possibility of a crash. For the modeling of the possible stock price crash they use a Knightian uncertainty approach and thus make no probabilistic assumptions about the crash size or the crash time distribution. Based on an indifference argument they determine the optimal portfolio process for an investor who wants to maximize the expected utility from final wealth. In this thesis, the worst-case scenario approach is extended in various directions to enable the consideration of stress scenarios, to include the possibility of asset defaults, and to allow for parameter uncertainty.
Insurance companies and banks regularly have to face stress tests imposed by regulatory authorities. In the first part we model their investment decision problem in a way that includes stress scenarios, which leads to optimal portfolios that are, by construction, prepared for stress tests. The solution of this portfolio problem uses the newly introduced concept of minimum constant portfolio processes.
In the second part we formulate an extended worst-case portfolio approach in which asset defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor does not know which asset is affected by the worst-case scenario. We solve this problem by introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous-time portfolio optimization problem that includes the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine the worst-case scenario approach with a model-ambiguity approach that is also based on Knightian uncertainty. We solve this portfolio problem and consider two concrete examples with box uncertainty and ellipsoidal drift ambiguity.
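For the box-uncertainty case, the standard worst-case reasoning can be illustrated with a toy computation (hypothetical numbers; the thesis combines drift ambiguity with the crash model, which is omitted here): a long investor faces the lowest admissible drift, so the robust log-utility fraction uses the lower drift bound.

```python
def merton_fraction(mu, r, sigma):
    """Merton fraction for log utility: (mu - r) / sigma^2."""
    return (mu - r) / sigma**2

def robust_merton_box(mu_low, mu_high, r, sigma):
    """Under box drift uncertainty mu in [mu_low, mu_high], the worst case
    for a long position is the lowest drift, so the robust long-only
    log-utility fraction uses mu_low (clipped at zero)."""
    return max(merton_fraction(mu_low, r, sigma), 0.0)

nominal = merton_fraction(0.08, 0.02, 0.2)         # drift known exactly
robust = robust_merton_box(0.05, 0.11, 0.02, 0.2)  # drift only known in a box
```

The robust fraction is strictly smaller than the nominal one whenever the drift box extends below the point estimate, which is the qualitative effect of ambiguity aversion.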
The thesis is concerned with the modelling of ionospheric current systems and induced magnetic fields in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered operator equation. First of all, a general multiscale concept for vectorial operator equations between two separable Hilbert spaces is developed in terms of vector kernel functions. The equivalence to the canonical tensorial ansatz is proven and the theory is transferred to the case of multiscale regularization of vectorial inverse problems. As a first application, a special multiresolution analysis of the space of square-integrable vector fields on the sphere, e.g. the Earth's magnetic field measured on a spherical satellite orbit, is presented. By this, a multiscale separation of spherical vector-valued functions with respect to their sources can be established. The vector field is split up into a part induced by sources inside the sphere, a part which is due to sources outside the sphere, and a part which is generated by sources on the sphere, i.e. currents crossing the sphere. The multiscale technique is tested on a magnetic field data set of the satellite CHAMP, and it is shown that crustal field determination can be improved by previously applying our method. In order to reconstruct ionospheric current systems from magnetic field data, an inversion of the Biot-Savart law in terms of multiscale regularization is defined. The corresponding operator is formulated and its singular values are calculated. Based on the knowledge of the singular system, a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. The method is tested on different simulations and on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
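For reference, the operator equation inverted here is based on the Biot-Savart law, which in its classical volume form reads

```latex
\mathbf{B}(\mathbf{x}) \;=\; \frac{\mu_0}{4\pi}
\int_{V} \mathbf{j}(\mathbf{y}) \times
\frac{\mathbf{x}-\mathbf{y}}{\lvert \mathbf{x}-\mathbf{y}\rvert^{3}}
\, \mathrm{d}\mathbf{y},
```

where \(\mathbf{B}\) is the magnetic field, \(\mathbf{j}\) the current density and \(\mu_0\) the vacuum permeability; the thesis formulates the corresponding spherical operator and its singular system.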
This doctoral thesis deals with volatility arbitrage for European call options and with the modeling of collateralized debt obligations (CDOs). First, based on an idea of Carr, we show that stochastic arbitrage can exist in a Black-Scholes-like model. We then optimize the arbitrage profit using the mean-variance approach of Markowitz and martingale theory. Stochastic arbitrage in the stochastic volatility model of Heston is investigated as well. Furthermore, we present a Markov model for CDOs. We then show that this model quickly reaches its limits: after the default of a firm, the default intensities of the surviving firms increase and never return to their initial level. This behavior, however, does not agree with market observations: after a period of turbulence the market stabilizes again, and one would therefore expect the default intensities of the surviving firms to flatten out again as well. We therefore replace the Markov model with a semi-Markov model, which reproduces the market much more accurately.
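The qualitative difference between the Markov and semi-Markov intensities described above can be sketched with toy intensity paths (illustrative functional forms and parameters, not the thesis's calibrated model): a Markov intensity that has jumped after a default stays at its new level, while a semi-Markov intensity may depend on the time elapsed since the default and revert toward its baseline.

```python
import math

def markov_intensity(base, jump, t, default_time):
    """Markov toy model: the intensity depends only on the current state
    (number of defaults so far), so after a default it stays elevated."""
    return base if t < default_time else base + jump

def semi_markov_intensity(base, jump, decay, t, default_time):
    """Semi-Markov toy model: the intensity additionally depends on the
    time elapsed since the last default and decays back to the baseline."""
    if t < default_time:
        return base
    return base + jump * math.exp(-decay * (t - default_time))

# One firm defaults at t = 1; observe a survivor's intensity long afterwards.
m_late = markov_intensity(0.02, 0.05, t=20.0, default_time=1.0)
sm_late = semi_markov_intensity(0.02, 0.05, decay=0.5, t=20.0, default_time=1.0)
```

Long after the default, the Markov intensity is still at the elevated level, whereas the semi-Markov intensity has almost returned to its baseline, matching the stabilization observed in markets.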
The present work deals with the (global and local) modelling of the wind field on the real topography of Rheinland-Pfalz. The focus is on the construction of a vectorial wind field from sparse, irregularly distributed data given on a topographical surface. The developed spline procedure works by means of vectorial (homogeneous, harmonic) polynomials (outer harmonics) which control the oscillation behaviour of the spline interpolant. The characteristic of the spline curvature defining the energy norm is hereby assumed on a sphere inside the Earth's interior rather than on the Earth's surface. The numerical advantage of this method arises from the maximum-minimum principle for harmonic functions.
In this thesis we classify simple coherent sheaves on Kodaira fibers of types II, III and IV (cuspidal and tacnode cubic curves and a plane configuration of three concurrent lines). Indecomposable vector bundles on smooth elliptic curves were classified in 1957 by Atiyah. In works of Burban, Drozd and Greuel it was shown that the categories of vector bundles and coherent sheaves on cycles of projective lines are tame. It turns out that all other degenerations of elliptic curves are vector-bundle-wild. Nevertheless, we prove that the category of coherent sheaves on an arbitrary reduced plane cubic curve (including the mentioned Kodaira fibers) is brick-tame. The main technical tool of our approach is the representation theory of bocses. Although this technique was mainly used for purely theoretical purposes, we illustrate its computational potential for investigating tame behavior in wild categories. In particular, it allows us to prove that a simple vector bundle on a reduced cubic curve is determined by its rank, multidegree and determinant, generalizing Atiyah's classification. Our approach leads to an interesting class of bocses, which can be wild but are brick-tame.
Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential-twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density, and it also plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to retain the main advantage of Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the cross-entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (the zero-variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter whether light- or heavy-tailed. The numerical study exhibits a greater variance reduction than that of the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as shown in the numerical study.
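The cross-entropy idea can be sketched in a one-dimensional toy setting (standard normal risk factor, loss L(Z) = Z, rare event {L > 3}; the thesis works with general portfolio loss approximations and adds stratification, neither of which appears here): a pilot run picks the elite samples with the largest losses, the importance sampling density is the member of the mean-shifted normal family closest to them in cross-entropy (simply the elite mean), and the final run reweights with the likelihood ratio.

```python
import numpy as np

def ce_rare_event_prob(loss, threshold, n_pilot=10_000, n_main=50_000,
                       elite_frac=0.01, seed=0):
    """Cross-entropy importance sampling for p = P(loss(Z) > threshold),
    Z standard normal, with N(mu, 1) as the parametric IS family."""
    rng = np.random.default_rng(seed)
    # Pilot run: for the N(mu, 1) family the CE update is the elite mean.
    z = rng.standard_normal(n_pilot)
    losses = loss(z)
    elite = z[losses >= np.quantile(losses, 1.0 - elite_frac)]
    mu = elite.mean()
    # Main run under N(mu, 1), reweighted with the likelihood ratio
    # f_0(x) / f_mu(x) = exp(-mu * x + mu^2 / 2).
    x = mu + rng.standard_normal(n_main)
    w = np.exp(-mu * x + 0.5 * mu**2)
    return float(np.mean(w * (loss(x) > threshold)))

p_hat = ce_rare_event_prob(lambda z: z, threshold=3.0)
```

For this toy event the true probability is 1 - Phi(3), roughly 1.35e-3, and the shifted sampler hits the rare region with a large fraction of its samples instead of almost never.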
The literature on the estimation of the popular market risk measures VaR and CVaR often refers to algorithms for estimating the probability of high portfolio losses, describing the corresponding transition only briefly. Hence, we give a step-by-step discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.
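For completeness, the plain Monte Carlo transition from simulated losses to VaR and CVaR estimates (without any variance reduction, and omitting the confidence-interval machinery discussed here) can be sketched as follows:

```python
import numpy as np

def var_cvar(losses, alpha=0.99):
    """Empirical VaR and CVaR at level alpha from simulated losses.
    VaR is the empirical alpha-quantile; CVaR is the mean loss at or
    beyond the VaR."""
    losses = np.sort(np.asarray(losses))
    var = losses[int(np.ceil(alpha * len(losses))) - 1]
    tail = losses[losses >= var]
    return float(var), float(tail.mean())

# Toy check with standard normal losses, where VaR_0.99 ~ 2.326 and
# CVaR_0.99 = phi(2.326) / 0.01 ~ 2.665 are known in closed form.
rng = np.random.default_rng(1)
var99, cvar99 = var_cvar(rng.standard_normal(500_000), alpha=0.99)
```
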