This diploma thesis gives a short introduction to the field of diffusion processes (described as solutions of stochastic differential equations) and of large deviations. Using methods from the theory of large deviations, the asymptotic behaviour of the Bayes risk for distinguishing between two diffusion processes is then investigated.
Based on the data provided by the appraisal committee (Gutachterausschuss) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. With these insights, a formula as simple as possible is to be derived that yields an estimate of the market value and takes into account the purchase prices realized in the past. Multiple linear regression is a natural method for this task. The theoretical foundations are not discussed in detail here; they can be found in any book on mathematical statistics, or in [1]. The data analysis largely follows the approach described by Angelika Schwarz in [1]. Her results, however, cannot be transferred directly, since the properties considered there were undeveloped. Since the statistical evaluation of large data sets involves an immense computational effort, the use of professional statistical software is indispensable. The program S-Plus 2.0 (PC version for Windows) was available. All computations and all graphics in this report were produced in S-Plus.
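For illustration only, a regression of this kind can be sketched as follows in Python with made-up predictors and prices (the actual analysis in the report was carried out in S-Plus 2.0):

    import numpy as np

    # Hypothetical predictors for a developed property: living area (m^2),
    # plot area (m^2), year of construction. Values and columns are invented.
    X_raw = np.array([
        [120.0, 450.0, 1978],
        [ 95.0, 300.0, 1965],
        [150.0, 600.0, 1990],
        [110.0, 420.0, 1982],
        [135.0, 520.0, 1995],
    ])
    y = np.array([310_000.0, 215_000.0, 420_000.0, 295_000.0, 390_000.0])  # past sale prices

    # Design matrix with intercept; coefficients via ordinary least squares.
    X = np.column_stack([np.ones(len(X_raw)), X_raw])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Estimated market value of a new property with these (hypothetical) features.
    x_new = np.array([1.0, 130.0, 500.0, 1985])
    print("coefficients:", beta)
    print("estimated market value:", x_new @ beta)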
We consider the problem of evacuating several regions threatened by river flooding, where sufficient time is given to plan ahead. To ensure a smooth evacuation procedure, our model includes the decision of which regions to assign to which shelter, and when evacuation orders should be issued, such that roads do not become congested.
Due to uncertainty in the weather forecast, several possible scenarios are considered simultaneously in a robust optimization framework. To solve the resulting integer program, we apply a Tabu search algorithm based on decomposing the problem into more tractable subproblems. Computational experiments on random instances and an instance based on data from Kulmbach, Germany, show considerable improvement compared to an MIP solver provided with a strong starting solution.
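A generic Tabu search skeleton of the kind used here might look as follows (a sketch only; the neighbourhood, the congestion cost and the shelter-assignment encoding below are invented for illustration and are not the decomposition of the paper):

    def tabu_search(initial, neighbors, cost, max_iter=200, tabu_tenure=10):
        # Keep a short-term memory of recent solutions and always move to the
        # best non-tabu neighbour, even if it is worse than the current one.
        current = best = initial
        best_cost = cost(best)
        tabu = []
        for _ in range(max_iter):
            candidates = [n for n in neighbors(current) if n not in tabu]
            if not candidates:
                break
            current = min(candidates, key=cost)
            tabu.append(current)
            if len(tabu) > tabu_tenure:
                tabu.pop(0)
            if cost(current) < best_cost:
                best, best_cost = current, cost(current)
        return best, best_cost

    # Toy usage: assign 5 regions to one of 2 shelters (tuple of 0/1 labels),
    # with a made-up cost that penalises congestion at an overloaded shelter.
    def cost(assignment):
        return max(assignment.count(0), assignment.count(1)) ** 2

    def neighbors(assignment):
        flips = []
        for i in range(len(assignment)):
            a = list(assignment)
            a[i] = 1 - a[i]
            flips.append(tuple(a))
        return flips

    print(tabu_search((0, 0, 0, 0, 0), neighbors, cost))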
Interest-rate-optimized public debt management aims to find the most efficient possible trade-off between expected financing costs on the one hand and the risks for the public budget on the other. To approach this tension, we are the first to build a bridge between the problems of debt management and the methods of continuous-time dynamic portfolio optimization.
The key element is a new metric for measuring financing costs, the perpetual costs. They reflect the average future financing costs and comprise both the interest payments that are already known and the still unknown costs of necessary refinancing. The volatility of the perpetual costs therefore also represents the risk of a given strategy; the longer the term of the financing, the smaller the fluctuation of the perpetual costs.
The perpetual costs are obtained as the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. To model the present value, we draw on the concept of a self-financing bond portfolio known from dynamic portfolio optimization, based here on a multi-dimensional affine-linear interest rate model. The growth of the debt portfolio is slowed down or prevented by including the government's primary surplus as an external inflow into the self-financing model.
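In symbols (notation assumed here for illustration), the perpetual costs \(P_t\) of a debt portfolio with present value \(V_t\) are

\[ P_t = c_\infty \, V_t, \]

where the perpetual rate \(c_\infty\) does not depend on the composition of the portfolio.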
Because of the variety of possible financing instruments, we do not choose their value weights as control variables but instead control the sensitivities of the portfolio with respect to different interest rate movements. In a subsequent step, optimal value weights for a wide range of financing instruments can then be derived from the optimal sensitivities. We demonstrate this exemplarily using rolling-horizon bonds of different maturities.
Finally, we solve two optimization problems with methods from stochastic control theory. In both cases the expected utility of the perpetual costs is maximized. The utility functions are tailored to debt management and are characterized in particular by the fact that higher costs correspond to lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function that guarantees compliance with a prescribed upper bound on debt or costs.
Zeitreihen und Modalanalyse
(1987)
This thesis is to be understood as part of the larger project at the University of Kaiserslautern that, under the name Technomathematik, promotes the urgently needed dialogue between engineering and mathematics. The main guide was Natke's book "Einführung in Theorie und Praxis der Zeitreihen- und Modalanalyse"; the thesis presents the essential ideas of indirect system identification used there as well as the probabilistic and physical-engineering background.
Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios
(2021)
This dissertation consists of three independent parts: the yield curve shapes generated by interest rate models, yield curve forecasting, and the application of the chance-risk classification to a portfolio of pension products. As a component of the capital market model, the yield curve influences the chance-risk classification, which was introduced to improve the comparability of pension products and strengthen consumer protection. Consequently, all three topics have a major impact on this essential safeguard.
Firstly, we focus on the yield curve shapes obtained from the Vasicek interest rate models. We extend the existing studies on the attainable yield curve shapes in the one-factor Vasicek model by an analysis of the curvature. Further, we show that the two-factor Vasicek model can explain significantly more effects observed in the market than its one-factor variant. Among them is the occurrence of dipped yield curves.
We further introduce a general change of measure framework for the Monte Carlo simulation of the Vasicek model under a subjective measure. This can be used to avoid an unrealistically high frequency of inverse yield curves as time grows.
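As a small illustration of this first part, the yield curve of the one-factor Vasicek model \(dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t\) can be computed from the standard closed-form bond prices (the parameter values below are made up):

    import numpy as np

    def vasicek_yields(r0, kappa, theta, sigma, taus):
        # Zero-coupon yields y(tau) = -ln P(tau) / tau in the one-factor Vasicek model.
        taus = np.asarray(taus, dtype=float)
        B = (1.0 - np.exp(-kappa * taus)) / kappa
        lnA = (theta - sigma**2 / (2.0 * kappa**2)) * (B - taus) - sigma**2 * B**2 / (4.0 * kappa)
        return -(lnA - B * r0) / taus

    taus = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
    # Different short rates r0 produce normal, inverse or humped curve shapes.
    for r0 in (0.01, 0.03, 0.06):
        print(r0, np.round(vasicek_yields(r0, kappa=0.3, theta=0.03, sigma=0.01, taus=taus), 4))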
Secondly, we examine different time series models, including machine learning algorithms, for forecasting the yield curve. For this, we consider statistical time series models such as autoregression and vector autoregression. Their performance is compared with the performance of a multilayer perceptron, a fully connected feed-forward neural network. For this purpose, we develop an extended approach for the hyperparameter optimization of the perceptron which is based on standard procedures like Grid and Random Search but allows a larger hyperparameter space to be searched. Our investigation shows that multilayer perceptrons outperform statistical models for long forecast horizons.
The third part deals with the chance-risk classification of state-subsidized pension products in Germany as well as its relevance for customer consulting. To optimize the use of the chance-risk classes assigned by Produktinformationsstelle Altersvorsorge gGmbH, we develop a procedure for determining the chance-risk class of different portfolios of state-subsidized pension products under the constraint that the portfolio chance-risk class does not exceed the customer's risk preference. For this, we consider a portfolio consisting of two new pension products as well as a second one containing a product already owned by the customer together with the offer of a new one. This is of particular interest for customer consulting and can include other assets of the customer. We examine the properties of various chance and risk parameters as well as their corresponding mappings and show that a diversification effect exists. Based on these properties, we conclude that the average final contract values have to be used to obtain the upper bound of the portfolio chance-risk class. Furthermore, we develop an approach for determining the chance-risk class over the contract term, since the chance-risk class is only assigned at the beginning of the accumulation phase. On the one hand, we apply the current legal situation; on the other hand, we suggest an approach that requires further simulations. Finally, we translate our results into recommendations for customer consultation.
This thesis is concerned with the development of a heat transport model for deep geothermal (hydrothermal) reservoirs. Existence and uniqueness statements for a weak solution of the presented model are established. Furthermore, a method for approximating this solution based on a linear Galerkin scheme is presented, for which both convergence is proven and a convergence rate is derived.
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
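Schematically (with notation assumed here), the worst-case portfolio problem reads

\[ \sup_{\pi} \; \inf_{0 \le k \le k^*, \; \tau} \; \mathbb{E}\left[ U\!\left( X_T^{\pi}(k, \tau) \right) \right], \]

where \(k\) is the crash size with known upper bound \(k^*\), \(\tau\) the unknown crash time, and \(X_T^{\pi}(k,\tau)\) the terminal wealth of strategy \(\pi\) in that crash scenario.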
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach.
They extend a Black-Scholes type security market to include the possibility of a
crash. For the modeling of the possible stock price crash they use a Knightian uncertainty
approach and thus make no probabilistic assumption on the crash size or the crash time distribution.
Based on an indifference argument they determine the optimal portfolio process
for an investor who wants to maximize the expected utility from final wealth. In this thesis,
the worst-case scenario approach is extended in various directions to enable the consideration
of stress scenarios, to include the possibility of asset defaults and to allow for parameter
uncertainty.
Insurance companies and banks regularly have to face stress tests performed by regulatory
authorities. In the first part we model their investment decision problem including stress
scenarios. This leads to optimal portfolios that already take stress tests into account by construction.
The solution to this portfolio problem uses the newly introduced concept of minimum constant
portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset
defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor
does not know which asset is affected by the worst-case scenario. We solve this problem by
introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous time portfolio optimization problem that includes
the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine
the worst-case scenario approach with a model ambiguity approach that is also based on
Knightian uncertainty. We solve this portfolio problem and consider two concrete examples
with box uncertainty and ellipsoidal drift ambiguity.
Because of the networked structures and interdependencies of dynamic systems, the underlying mathematical models usually become very complex and require considerable mathematical understanding and skill. With suitable software, however, complex interaction networks of dynamic systems can be built interactively even without deep expertise in mathematics or computer science. As an example, we design step by step the model of a mini-world and draw conclusions about its population development.
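A minimal sketch of such a mini-world, reduced to a single population with density-dependent births and deaths, could look as follows (all parameter values are invented; the article builds a richer interaction network with dedicated software):

    # Mini-world population with a logistic-type feedback: the death rate rises
    # with crowding, so the population levels off below the carrying capacity.
    birth_rate = 0.03          # births per person per year
    base_death_rate = 0.01     # deaths per person per year at low density
    capacity = 10_000_000      # carrying capacity of the mini-world

    population = 1_000_000.0
    for year in range(201):
        if year % 25 == 0:
            print(f"year {year:3d}: population = {population:,.0f}")
        crowding = population / capacity
        death_rate = base_death_rate + 0.04 * crowding
        population += (birth_rate - death_rate) * population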
Acoustics provides an interesting background for interdisciplinary, cross-curricular teaching between mathematics, physics and music. Students can, for example, work experimentally by producing their own audio recordings and having computer software generate frequency spectra. Conversely, they can also prescribe frequency spectra and generate sounds from them. This can serve, for instance, to make the notion of overtones physically or mathematically tangible in music lessons, or to examine the frequency ratios of intervals and triads more closely in harmony theory.
The computer is a very useful tool here, because the mathematical background of this task -- switching between an audio recording and its frequency representation -- lies in Fourier analysis, which is extremely demanding for students. By introducing the Fourier transform as a numerical tool that does not have to be understood in detail, interesting mathematics can be done elsewhere, and the connections between acoustics and music can be experienced in a playful way.
The following contribution describes an approach that we have already implemented at the Felix-Klein-Modellierungswoche: the students were given the task of developing a synthesizer with which various musical instruments can be imitated. As tools, they received a short introduction to the properties of the Fourier transform as well as audio recordings of different instruments.
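A minimal sketch of both directions (synthesis from a prescribed overtone spectrum and analysis via the FFT) in Python; the overtone amplitudes below are invented and would in practice be read off from the recording of a real instrument:

    import numpy as np

    fs = 44100                          # sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)     # one second of audio

    # Synthesis: build a tone from a fundamental and a few overtones.
    f0 = 220.0                          # fundamental frequency (A3)
    amplitudes = [1.0, 0.5, 0.25, 0.125]
    signal = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
                 for k, a in enumerate(amplitudes))

    # Analysis: the FFT recovers the overtone structure from the signal.
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    print("dominant frequencies (Hz):", freqs[spectrum > 0.05])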
When particle methods are used to solve the Boltzmann equation for rarefied gases numerically, huge differences in the total number of particles per cell arise in realistic streaming problems. In order to overcome the resulting numerical difficulties, the application of a weighted particle concept is well-suited. The underlying idea is to use different particle masses in different cells depending on the macroscopic density of the gas. Discrepancy estimates and numerical results are given.
Weighted k-cardinality trees
(1992)
We consider the k-CARD TREE problem, i.e., the problem of finding in a given undirected graph G a subtree with k edges having minimum weight. Applications of this problem arise in oil-field leasing and facility layout. While the general problem is shown to be strongly NP-hard, it can be solved in polynomial time if G is itself a tree. We give an integer programming formulation of k-CARD TREE and an efficient exact separation routine for a set of generalized subtour elimination constraints. The polyhedral structure of the convex hull of the integer solutions is studied.
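For the polynomially solvable case in which G itself is a tree, the following dynamic program over rooted subtrees illustrates the idea (a sketch of the tree case only, not the separation-based integer programming approach of the paper):

    import sys

    def min_weight_k_subtree(n, edges, k):
        # dp[v][j] = minimum weight of a subtree with exactly j edges that contains
        # node v and lies entirely inside the subtree rooted at v.
        sys.setrecursionlimit(10_000)
        INF = float("inf")
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w))
            adj[v].append((u, w))
        dp = [[INF] * (k + 1) for _ in range(n)]

        def dfs(v, parent):
            dp[v][0] = 0.0
            for c, w in adj[v]:
                if c == parent:
                    continue
                dfs(c, v)
                # Knapsack-style merge: attach a subtree containing c via edge (v, c).
                for j in range(k, 0, -1):
                    for t in range(1, j + 1):
                        if dp[v][j - t] < INF and dp[c][t - 1] < INF:
                            dp[v][j] = min(dp[v][j], dp[v][j - t] + dp[c][t - 1] + w)

        dfs(0, -1)
        return min(dp[v][k] for v in range(n))

    # Toy instance: path 0-1-2-3 with edge weights 4, 1, 2; the cheapest subtree
    # with k = 2 edges is the path 1-2-3 of weight 3.
    print(min_weight_k_subtree(4, [(0, 1, 4.0), (1, 2, 1.0), (2, 3, 2.0)], k=2))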
In an undirected graph G we associate costs and weights with each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and minimum total cost under this restriction. In this thesis a literature overview of this NP-hard problem and theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the method of approximation through decomposition to our problem and also design a new branch-and-bound scheme. The numerical results show that this new solution approach performs better than the existing algorithms.
Given a finite set of points in the plane and a forbidden region R, we want to find a point X not an element of int(R) such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber Problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings - such as asymmetry - to be modeled. Each given point is allowed to have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of negative and positive weights allows us to model the level of attraction or dislike of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set) based on combinatorial and geometrical methods are presented.
We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a role model for a similar treatment of integer-valued time series models based on other types of thinning operations.
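As a concrete special case covered by this framework, the Poisson INGARCH(1,1) model reads (a standard parametrization, not necessarily the exact notation of the paper):

\[ X_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(\lambda_t), \qquad \lambda_t = \omega + \alpha \lambda_{t-1} + \beta X_{t-1}, \]

with \(\omega > 0\) and \(\alpha, \beta \ge 0\); stationarity then requires \(\alpha + \beta < 1\).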
By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.
A wavelet technique, the wavelet-Mie-representation, is introduced for the analysis and modelling of the Earth's magnetic field and corresponding electric current distributions from geomagnetic data obtained within the ionosphere. The considerations are essentially based on two well-known geomathematical keystones, (i) the Helmholtz-decomposition of spherical vector fields and (ii) the Mie-representation of solenoidal vector fields in terms of poloidal and toroidal parts. The wavelet-Mie-representation is shown to provide an adequate tool for geomagnetic modelling in the case of ionospheric magnetic contributions and currents which exhibit spatially localized features. An important example is given by ionospheric currents flowing radially onto or away from the Earth. To demonstrate the functionality of the approach, such radial currents are calculated from vectorial data of the MAGSAT and CHAMP satellite missions.
- naive examples which show drawbacks of the discrete wavelet transform and the windowed Fourier transform;
- adaptive partition (with a 'best basis' approach) of speech-like signals by means of local trigonometric bases with orthonormal windows;
- extraction of formant-like features from the cosine transform;
- further steps towards the classification of vowels or voiced speech are suggested at the end.
With this article we would first like to give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. Then we describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques. In the second part, we apply these non-linear adaptive shrinking schemes to spectral estimation problems for both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus on the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity for the different directions. The resulting fully-adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model up to a logarithmic factor.
We derive minimax rates for estimation in anisotropic smoothness classes. This rate is attained by a coordinatewise thresholded wavelet estimator based on a tensor product basis with a separate scale parameter for every dimension. It is shown that this basis is superior to its one-scale multiresolution analog if different degrees of smoothness in different directions are present. As an important application we introduce a new adaptive wavelet estimator of the time-dependent spectrum of a locally stationary time series. Using this model, which was recently developed by Dahlhaus, we show that the resulting estimator attains nearly the rate which is optimal in Gaussian white noise, simultaneously over a wide range of smoothness classes. Moreover, by our new approach we overcome the difficulty of how to choose the right amount of smoothing, i.e. how to adapt to the appropriate resolution, for reconstructing the local structure of the evolutionary spectrum in the time-frequency plane.
We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow for both adaptively reconstructing the local structure in the time-frequency plane and denoising the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties w.r.t. the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated from mobile radio propagation.
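A minimal one-dimensional sketch of nonlinear wavelet shrinkage with the universal threshold (a single Haar level in Python; the estimators above use far more refined, locally adaptive 2-d schemes in the time-frequency plane):

    import numpy as np

    def soft_threshold(c, lam):
        # Soft shrinkage of wavelet coefficients.
        return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

    def haar_denoise(y, lam):
        # One-level orthonormal Haar transform, thresholding of the detail
        # coefficients, and reconstruction (len(y) assumed even).
        approx = (y[0::2] + y[1::2]) / np.sqrt(2.0)
        detail = soft_threshold((y[0::2] - y[1::2]) / np.sqrt(2.0), lam)
        out = np.empty_like(y)
        out[0::2] = (approx + detail) / np.sqrt(2.0)
        out[1::2] = (approx - detail) / np.sqrt(2.0)
        return out

    rng = np.random.default_rng(0)
    n, sigma = 1024, 0.3
    truth = np.where(np.arange(n) < n // 2, 1.0, 3.0)   # piecewise constant "spectrum"
    noisy = truth + sigma * rng.standard_normal(n)
    lam = sigma * np.sqrt(2.0 * np.log(n))              # universal threshold
    print("MSE denoised:", np.mean((haar_denoise(noisy, lam) - truth) ** 2),
          "MSE noisy:", np.mean((noisy - truth) ** 2))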
The article is concerned with the modelling of ionospheric current systems from induced magnetic fields measured by satellites in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered vectorial operator equation. Based on the knowledge of the singular system a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. In order to reconstruct ionospheric current systems from satellite magnetic field data, an inversion of the Biot-Savart's law in terms of multiscale regularization is derived. The corresponding operator is formulated and the singular values are calculated. The method is tested on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
This work is dedicated to the wavelet modelling of regional and temporal variations of the Earth's gravitational potential observed by GRACE. In the first part, all required mathematical tools and methods involving spherical wavelets are introduced. Then we apply our method to monthly GRACE gravity fields. A strong seasonal signal can be identified, which is restricted to areas where large-scale redistributions of continental water mass are expected. This assumption is analyzed and verified by comparing the time series of regionally obtained wavelet coefficients of the gravitational signal originating from hydrology models and the gravitational potential observed by GRACE. The results are in good agreement with previous studies and illustrate that wavelets are an appropriate tool to investigate regional time-variable effects in the gravitational field.
The thesis is concerned with the modelling of ionospheric current systems and induced magnetic fields in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered operator equation. First of all a general multiscale concept for vectorial operator equations between two separable Hilbert spaces is developed in terms of vector kernel functions. The equivalence to the canonical tensorial ansatz is proven and the theory is transferred to the case of multiscale regularization of vectorial inverse problems. As a first application, a special multiresolution analysis of the space of square-integrable vector fields on the sphere, e.g. the Earth’s magnetic field measured on a spherical satellite’s orbit, is presented. By this, a multiscale separation of spherical vector-valued functions with respect to their sources can be established. The vector field is split up into a part induced by sources inside the sphere, a part which is due to sources outside the sphere and a part which is generated by sources on the sphere, i.e. currents crossing the sphere. The multiscale technique is tested on a magnetic field data set of the satellite CHAMP and it is shown that crustal field determination can be improved by previously applying our method. In order to reconstruct ionospheric current systems from magnetic field data, an inversion of the Biot-Savart’s law in terms of multiscale regularization is defined. The corresponding operator is formulated and the singular values are calculated. Based on the knowledge of the singular system a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. The method is tested on different simulations and on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
In this paper we introduce a multiscale technique for the analysis of deformation phenomena of the Earth. Classically, the basis functions in use are globally defined and of polynomial character. In consequence, only a global analysis of deformations is possible, so that, for example, the water load of an artificial reservoir can hardly be modelled in that way. Up to now, the alternative of a local analysis could only be established by assuming the investigated region to be flat. In what follows we propose a local analysis based on tools (Navier scaling functions and wavelets) taking the (spherical) surface of the Earth into account. Our approach, in particular, enables us to perform a zooming-in procedure. In fact, the concept of Navier wavelets is formulated in such a way that subregions with larger or smaller data density can accordingly be modelled with a higher or lower resolution of the model, respectively.
Wavelets on closed surfaces in Euclidean space R3 are introduced starting from a scale discrete wavelet transform for potentials harmonic down to a spherical boundary. Essential tools for approximation are integration formulas relating an integral over the sphere to suitable linear combinations of functional values (resp. normal derivatives) on the closed surface under consideration. A scale discrete version of multiresolution is described for potential functions harmonic outside the closed surface and regular at infinity. Furthermore, an exact fully discrete wavelet approximation is developed in case of band-limited wavelets. Finally, the role of wavelets is discussed in three problems, namely (i) the representation of a function on a closed surface from discretely given data, (ii) the (discrete) solution of the exterior Dirichlet problem, and (iii) the (discrete) solution of the exterior Neumann problem.
A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique is tested with real satellite data.
In this paper, the reflection and refraction of a plane wave at an interface between two half-spaces composed of triclinic crystalline material is considered. It is shown that due to the incidence of a plane wave, three types of waves, namely quasi-P (qP), quasi-SV (qSV) and quasi-SH (qSH), will be generated, governed by the propagation condition involving the acoustic tensor. A simple procedure is presented for the calculation of all three phase velocities of the quasi waves. It is assumed that the direction of particle motion is neither parallel nor perpendicular to the direction of propagation. Relations are established between the directions of motion and propagation, respectively. The expressions for the reflection and refraction coefficients of qP, qSV and qSH waves are obtained. Numerical results for the reflection and refraction coefficients are presented for different types of anisotropic media and for different types of incident waves. Graphical representations are given for incident qP waves, while for incident qSV and qSH waves numerical data are presented in two tables.
Vorlesung Logik
(2000)
This doctoral thesis deals with volatility arbitrage for European call options and with the modelling of collateralized debt obligations (CDOs). First, based on an idea of Carr, it is shown that stochastic arbitrage can exist in a Black-Scholes-like model. We then optimize the arbitrage profit using the mean-variance approach of Markowitz and martingale theory. Stochastic arbitrage in the Heston stochastic volatility model is also investigated. Furthermore, we present a Markov model for CDOs. We then show that the limits of this model are reached rather quickly: after the default of one firm, the default intensities of the surviving firms increase and never return to their initial level. This behaviour, however, does not match observations in the market: after turbulences the market stabilizes again, and one would therefore expect the default intensities of the surviving firms to flatten out again as well. We therefore replace the Markov model by a semi-Markov model, which reproduces the market much more closely.
Vigenere-Verschlüsselung
(1999)
The present work deals with the (global and local) modeling of the windfield on the real topography of Rheinland-Pfalz. The focus is on the construction of a vectorial windfield from low, irregularly distributed data given on a topographical surface. The developed spline procedure works by means of vectorial (homogeneous, harmonic) polynomials (outer harmonics) which control the oscillation behaviour of the spline interpolant. In the process, the characteristic of the spline curvature which defines the energy norm is assumed to be located on a sphere inside the Earth's interior and not on the Earth's surface. The numerical advantage of this method arises from the maximum-minimum principle for harmonic functions.
In this thesis we classify simple coherent sheaves on Kodaira fibers of types II, III and IV (cuspidal and tacnode cubic curves and a plane configuration of three concurrent lines). Indecomposable vector bundles on smooth elliptic curves were classified in 1957 by Atiyah. In works of Burban, Drozd and Greuel it was shown that the categories of vector bundles and coherent sheaves on cycles of projective lines are tame. It turns out that all other degenerations of elliptic curves are vector-bundle-wild. Nevertheless, we prove that the category of coherent sheaves of an arbitrary reduced plane cubic curve (including the mentioned Kodaira fibers) is brick-tame. The main technical tool of our approach is the representation theory of bocses. Although this technique has mainly been used for purely theoretical purposes, we illustrate its computational potential for investigating tame behavior in wild categories. In particular, it allows us to prove that a simple vector bundle on a reduced cubic curve is determined by its rank, multidegree and determinant, generalizing Atiyah's classification. Our approach leads to an interesting class of bocses, which can be wild but are brick-tame.
The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power) or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in a variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be done. Examples, especially from elasticity theory, demonstrate the methods.
The shortest path problem in which the \((s,t)\)-paths \(P\) of a given digraph \(G = (V,E)\) are compared with respect to the sum of their edge costs is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1, k_2)\)-max, \((k_1, k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey of existing algorithms and propose a general model for those problems not yet treated in the literature. The latter is based on the solution of resource-constrained shortest path problems with equality constraints, which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
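The key subproblem mentioned above, a resource-constrained shortest path with an equality constraint on an acyclic digraph, admits a simple pseudo-polynomial dynamic program; a sketch with an invented toy instance:

    from math import inf

    def dag_exact_resource_sp(n, edges, s, t, R):
        # dp[v][r] = minimum cost of an s-v path consuming exactly r resource units.
        # edges: (u, v, cost, resource) with integer resource >= 0, listed so that
        # tails appear in a topological order of the acyclic digraph.
        dp = [[inf] * (R + 1) for _ in range(n)]
        dp[s][0] = 0.0
        for u, v, c, r in edges:
            for used in range(R - r + 1):
                if dp[u][used] + c < dp[v][used + r]:
                    dp[v][used + r] = dp[u][used] + c
        return dp[t][R]

    # Toy DAG with two s-t paths: a cheap but resource-heavy one (cost 2, resource 4)
    # and an expensive but light one (cost 6, resource 2). Requiring R = 2 exactly
    # forces the second path.
    edges = [(0, 1, 1.0, 2), (0, 2, 3.0, 1), (1, 3, 1.0, 2), (2, 3, 3.0, 1)]
    print(dag_exact_resource_sp(4, edges, s=0, t=3, R=2))   # -> 6.0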
Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to keep the main advantage of the Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (zero variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on the special choices of the parametric family and can now use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter if light- or heavy-tailed. The numerical study exhibits a greater variance reduction than that of the algorithm from Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as is shown in the numerical study.
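A generic cross-entropy importance-sampling sketch for a rare loss event, with a toy linear loss function and a mean-shifted Gaussian family (illustrative only; it is not the stratified, Delta-Gamma-guided algorithm discussed above):

    import math
    import numpy as np

    rng = np.random.default_rng(1)

    def loss(x):
        # Toy "portfolio loss": a linear function of the d risk-factor changes.
        return x.sum(axis=1)

    d, c = 10, 12.0            # dimension and loss threshold (rare under N(0, I))
    N, rho = 10_000, 0.1       # sample size and elite fraction

    # Cross-entropy iterations: within the mean-shifted Gaussian family, move the
    # mean towards the (weighted) elite samples until the threshold c is reached.
    mu = np.zeros(d)
    for _ in range(20):
        x = rng.standard_normal((N, d)) + mu
        l = loss(x)
        gamma = min(c, np.quantile(l, 1.0 - rho))
        elite = x[l >= gamma]
        w = np.exp(-elite @ mu + 0.5 * mu @ mu)   # likelihood ratio N(0,I) / N(mu,I)
        mu = (w[:, None] * elite).sum(axis=0) / w.sum()
        if gamma >= c:
            break

    # Final importance-sampling estimate of P(loss > c) under the original N(0, I).
    x = rng.standard_normal((N, d)) + mu
    lr = np.exp(-x @ mu + 0.5 * mu @ mu)
    estimate = np.mean(lr * (loss(x) > c))
    exact = 0.5 * math.erfc(c / np.sqrt(d) / math.sqrt(2.0))
    print("IS estimate:", estimate, " exact:", exact)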
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, describing the corresponding transition process only briefly. Hence, we give a consecutive discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.
Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)
We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.
In this work two main approaches for the evaluation of credit derivatives are analyzed: the copula-based approach and the Markov Chain-based approach. This work gives the opportunity to use the advantages and avoid the disadvantages of both approaches. For example, modeling of contagion effects, i.e. modeling dependencies between counterparty defaults, is complicated under the copula approach. One remedy is to use a Markov Chain, where this can be done directly. The work consists of five chapters. The first chapter of this work extends the model for the pricing of CDS contracts presented in the paper by Kraft and Steffensen (2007). In the widely used models for CDS pricing it is assumed that only the borrower can default. In our model we assume that each of the counterparties involved in the contract may default. Calculated contract prices are compared with those calculated under the usual assumptions. All results are summarized in the form of numerical examples and plots. In the second chapter the copula and its main properties are described. The methods of constructing copulas as well as the most common copula families and their properties are introduced. In the third chapter the method of constructing a copula for an existing Markov Chain is introduced. The cases with two and three counterparties are considered. Necessary relations between the transition intensities are derived to directly find some copula functions. The formulae for default dependencies like Spearman's rho and Kendall's tau for the defined copulas are derived. Several numerical examples are presented in which the copulas are built for given Markov Chains. The fourth chapter deals with the approximation of copulas if a copula cannot be provided explicitly for a given Markov Chain. The fifth chapter concludes this thesis.
This thesis deals with risk measures based on utility functions and time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8], and the theory of preferences in the tradition of von Neumann and Morgenstern [134].
A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective with alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is (non-trivial and) coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts in a more sensitive way to slight changes of the probability of a financial loss than value at risk (V@R) and average value at risk.
Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: a product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs
in a discrete time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive utility-based approach: it relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff.
However, in the presence of proportional transaction costs such a utility-based approach for the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to show the FTAP under proportional transaction costs.
For the sake of existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process. The second component is given by the bid-ask prices depending on the investor's optimal action. Even more is true: near this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backwards induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found from maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the 2nd kind, which can be present for the original bid-ask process, vanish. The notion of arbitrage opportunities of the 2nd kind has so far been investigated only in models with strict costs in every state. In our model transaction costs need not be present in every state.
For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared
to the single-period case with only one risky asset. There are mainly two reasons
for that. Firstly, it is not at all obvious how to get a consistent price process
from the utility-maximizing payoff, since the consistent price process has to be
found for all assets simultaneously. Secondly, we need to show directly that the
so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished we can a priori use prices with a
smaller spread than the original ones so that the consistent price process found
from the utility-maximizing payoff is strictly consistent for the original prices.
To make the results applicable for the multi-period case we assume that the prices are given by compact and convex random sets. Then the multi-period case is similar to the case with only one risky asset but more demanding with regard to technical questions.
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, digital cameras, etc. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight and high energy density, no memory effect, and a big number of charge/discharge cycles. The high demand and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have porous structure composed of two phases - solid and electrolyte. We call macroscale the lengthscale of the whole electrode and microscale - the lengthscale at which we can distinguish the complex porous structure of the electrodes. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion type of equations for the transport of Lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode so that we do not have to resolve all the small-scale features of the porous microstructure thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted only to periodic media and periodically oscillating solutions while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions. We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
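For reference, the Butler-Volmer interface condition mentioned above typically takes the form (a standard parametrization; the precise form used in the thesis may differ):

\[ i = i_0 \left( \exp\!\left( \frac{\alpha_a F \eta}{R T} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right), \]

where \(i_0\) is the exchange current density, \(\eta\) the surface overpotential, \(\alpha_a\) and \(\alpha_c\) the anodic and cathodic transfer coefficients, \(F\) Faraday's constant, \(R\) the gas constant and \(T\) the temperature.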
Universal Shortest Paths
(2010)
We introduce the universal shortest path problem (Univ-SPP) which generalizes both classical and new shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP, one based on a sequence of cardinality constrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard and they can be formulated as quadratic integer or mixed integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as a classical sum shortest path problem.
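Schematically (notation assumed here), the universal objective evaluates a path \(P\) with \(\ell\) edges by sorting its edge costs in non-increasing order and weighting them with fixed multipliers:

\[ f_\lambda(P) = \sum_{i=1}^{\ell} \lambda_i \, c_{(i)}(P), \qquad c_{(1)}(P) \ge c_{(2)}(P) \ge \dots \ge c_{(\ell)}(P), \]

so that, for instance, \(\lambda = (1,\dots,1)\) recovers the classical sum objective and \(\lambda = (1,0,\dots,0)\) the bottleneck objective.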
Universal Algebra
(2004)
An asymptotic preserving numerical scheme (with respect to diffusion scalings) for a linear transport equation is investigated. The scheme is adopted from a class of recently developed schemes. Stability is proven uniformly in the mean free path under a CFL-type condition which turns into a parabolic CFL condition in the diffusion limit.