An increasing number of everyday tasks, such as speech recognition, image generation, translation, classification, or prediction, are performed with the help of machine learning. In particular, artificial neural networks (ANNs) provide convincing results for these tasks. The reasons for this success story are the drastic increase in available data sources in our increasingly digitalized world as well as the development of remarkable ANN architectures. This development has led to an increasing number of model parameters together with more and more complex models. Unfortunately, this comes at the cost of the interpretability of the deployed models. However, there is a natural desire to explain the deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding of these models. As the name suggests, VAEs are based on standard autoencoders (AEs) and are therefore used to perform dimensionality reduction of data. This is achieved by a bottleneck structure within the hidden layers of the ANN. From a data input, the encoder, i.e., the part up to the bottleneck, produces a low-dimensional representation. The decoder, the part from the bottleneck to the output, uses this representation to reconstruct the input. The model is trained by minimizing the reconstruction error.
From our point of view, the most remarkable property, and hence also a central topic of this thesis, is the auto-pruning property of VAEs. Simply speaking, auto-pruning prevents the VAE, a model with thousands of parameters, from overfitting. However, such a desirable property comes with the risk that the model learns nothing at all. In this thesis, we look at VAEs and auto-pruning from two different angles, and our main contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so by leveraging the framework of generalized linear models (GLMs). As a result, we are able to explain training results of VAEs before conducting the actual training.
(ii) We construct a time-dependent VAE and show the effects of the auto-pruning in this model. As a result, we are able to model financial data sequences and estimate the value-at-risk (VaR) of associated portfolios. Our results show that we surpass the standard benchmarks for VaR estimation.
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach. They extend a Black-Scholes-type security market to include the possibility of a crash. To model the possible stock price crash, they use a Knightian uncertainty approach and thus make no probabilistic assumptions about the crash size or the crash time distribution. Based on an indifference argument, they determine the optimal portfolio process for an investor who wants to maximize the expected utility from final wealth. In this thesis, the worst-case scenario approach is extended in various directions to enable the consideration of stress scenarios, to include the possibility of asset defaults, and to allow for parameter uncertainty.
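In the notation commonly used for this approach (our own summary, not a formula quoted from the abstract), the investor solves a max-min problem of the form
\[
\sup_{\pi} \; \inf_{0 \le k \le k^*,\, \tau} \; \mathbb{E}\big[ U\big(X^{\pi}(T)\big) \big],
\]
where \(X^{\pi}(T)\) denotes the final wealth under the portfolio process \(\pi\), \(k\) the crash size, which is only assumed to be bounded by a known maximal size \(k^*\), \(\tau\) the crash time, and \(U\) the investor's utility function.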
Insurance companies and banks regularly have to face stress tests imposed by regulatory authorities. In the first part, we model their investment decision problem including stress scenarios. This leads to optimal portfolios that already pass the stress tests by construction. The solution to this portfolio problem uses the newly introduced concept of minimum constant portfolio processes.
In the second part, we formulate an extended worst-case portfolio approach in which asset defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor does not know which asset is affected by the worst-case scenario. We solve this problem by introducing the so-called worst-case crash/default loss.
In the third part, we set up a continuous-time portfolio optimization problem that includes the possibility of a crash scenario as well as parameter uncertainty. To do so, we combine the worst-case scenario approach with a model ambiguity approach that is also based on Knightian uncertainty. We solve this portfolio problem and consider two concrete examples with box uncertainty and ellipsoidal drift ambiguity.
Risk management is an indispensable component of the financial system. In this context, financial institutions build up capital requirements to avoid future bankruptcy. Their calculation is based on a specific kind of map, so-called risk measures, which exist in several forms and definitions. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European-type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of risk measure, which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure, which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk measures. Then, each of the aforementioned contributions is covered in a separate chapter. In the following, the content of these three chapters is explained in more detail:
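For orientation, a multi-asset risk measure as sketched above is commonly written in the form (our paraphrase of the standard definition, not a formula quoted from the dissertation)
\[
\rho(X) \;=\; \inf\big\{ \pi(Z) \;:\; Z \in \mathcal{M},\; X + Z \in \mathcal{A} \big\},
\]
where \(\mathcal{M}\) denotes the space of payoffs attainable with the eligible assets, \(\pi\) the corresponding pricing functional, and \(\mathcal{A}\) the acceptance set induced by the capital adequacy test.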
Risk measures can be used to calculate pricing bounds for financial derivatives. In Chapter 2, we deal with the pricing of European options in an incomplete financial market model. We use the common risk measures Value-at-Risk and Expected Shortfall to define good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may behave non-smoothly under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and continuity of multi-asset risk measures.
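To fix ideas, both risk measures used in Chapter 2 can be estimated by plain Monte Carlo simulation; the following sketch is our own illustration with arbitrary parameters and is not code from the dissertation:

import numpy as np

rng = np.random.default_rng(42)
alpha = 0.95                                              # confidence level (arbitrary)
loss = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # log-normally distributed loss

var = np.quantile(loss, alpha)        # Value-at-Risk: alpha-quantile of the loss distribution
es = loss[loss >= var].mean()         # Expected Shortfall: mean loss beyond the Value-at-Risk
print(f"VaR: {var:.4f}, ES: {es:.4f}")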
In Chapter 3, we construct the MAI risk measure. To this end, recall that a multi-asset risk measure describes the minimal external capital that has to be raised and invested into multiple eligible assets in order to make a future financial position acceptable, i.e., such that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call the resulting new type of risk measure an MAI risk measure. It allows securing the financial position by external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem, we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., under the Basel Accords for banks or under Solvency II for insurers. If the financial situation of this institution is tight, it can happen that no reallocation of the initial endowment would pass the capital adequacy test. The classical portfolio optimization approach then breaks down, and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model, we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right; they also have numerous applications in various areas such as algebraic geometry, number theory, and information security. In fact, they are among the best choices in public-key cryptography and, more recently, in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure, and additional steps were required to unconditionally prove the correctness of the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all the cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks - a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol that is also capable of resisting subexponential quantum attacks, the first of its kind.
The knowledge of structural properties in microscopic materials contributes to a deeper understanding of macroscopic properties. For the study of such materials, several imaging techniques reaching scales on the order of nanometers have been developed. One of the most powerful and sophisticated imaging methods is focused-ion-beam scanning electron microscopy (FIB-SEM), which combines serial sectioning by an ion beam with imaging by a scanning electron microscope.
FIB-SEM imaging reaches extraordinary scales below 5 nm with large representative volumes. However, the complexity of the imaging process introduces artificial distortions and artifacts that degrade image quality. We introduce a method for the quality evaluation of images by analyzing general image characteristics as well as artifacts specific to FIB-SEM, namely curtaining and charging. For the evaluation, we propose quality indexes, which are tested on several data sets of porous and non-porous materials with different characteristics and distortions. The quality indexes provide objective evaluations in accordance with visual judgment.
Moreover, the acquisition of large volumes at high resolution can be time-consuming. One approach to speed up the imaging is to decrease the resolution and to consider cuboidal voxel configurations. However, non-isotropic resolutions may lead to errors in the reconstructions, and even if the reconstruction is correct, effects are visible in the analysis. We study the effects of different voxel settings on the prediction of material and flow properties of reconstructed structures. The results show good agreement between highly resolved cases and ground truths, as expected. Structural anisotropy is observed as the resolution decreases, especially for anisotropic grids. Nevertheless, gray-value image interpolation remedies the induced anisotropy. These benefits are visible in the flow properties as well.
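The interpolation step mentioned above can be illustrated as follows; the sketch is our own, assumes a gray-value volume stored as a NumPy array with a hypothetical voxel size of 15 nm x 5 nm x 5 nm, and is not code from the thesis:

import numpy as np
from scipy.ndimage import zoom

voxel_size = np.array([15.0, 5.0, 5.0])      # assumed voxel edge lengths in nm (z, y, x)
volume = np.random.rand(40, 120, 120)        # placeholder for a FIB-SEM gray-value stack

target = voxel_size.min()                    # finest edge length becomes the isotropic voxel size
factors = voxel_size / target                # per-axis resampling factors
isotropic = zoom(volume, zoom=factors, order=1)   # order=1: (tri)linear gray-value interpolation

print(volume.shape, "->", isotropic.shape)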
For highly porous structures, the structural reconstruction is even more difficult because deeper parts of the material are visible through the pores. As an application example, we show the reconstruction of two highly porous structures of optical layers, for which a typical workflow from image acquisition, preprocessing, and reconstruction to a spatial analysis is performed. The case study shows the advantages of 3D imaging for optical porous layers. The analysis reveals geometric structural properties related to the manufacturing processes.
The great interest in robust covering problems is manifold, especially due to the plenitude of real-world applications and the additional incorporation of uncertainties that are inherent in many practical issues. In this thesis, for a fixed positive integer \(q\), we introduce and elaborate on a new robust covering problem, called Robust Min-\(q\)-Multiset-Multicover, and related problems. The common idea of these problems is, given a collection of subsets of a ground set, to decide on the frequency of choosing each subset so as to satisfy the uncertain demand of every occurring element. Yet, in contrast to general covering problems, the subsets may only cover at most \(q\) of their elements. Varying the properties of the occurring elements leads to a selection of four interesting robust covering problems, which we investigate.
We extensively analyze the complexity of the arising problems, also under various restrictions to particular classes of uncertainty sets. For a given problem, we either provide a polynomial-time algorithm or show that, unless \(\text{P}=\text{NP}\), such an algorithm cannot exist. Furthermore, in the majority of cases, we even give evidence that a polynomial-time approximation scheme is most likely not possible for the hard problem variants. Moreover, we aim for approximations and approximation algorithms for these hard variants, where we focus on Robust Min-\(q\)-Multiset-Multicover. For a wide class of uncertainty sets, we present the first known polynomial-time approximation algorithm for Robust Min-\(q\)-Multiset-Multicover with a provable worst-case performance guarantee.
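To make the covering idea concrete, a possible integer programming formulation of the underlying non-robust problem for a fixed demand vector \(d\) reads as follows; this is our own illustrative formulation and not one taken from the thesis:
\[
\min \sum_{S \in \mathcal{S}} x_S
\quad \text{s.t.} \quad
\sum_{S \ni e} y_{S,e} \ge d_e \;\; \forall e \in E, \qquad
\sum_{e \in S} y_{S,e} \le q\, x_S \;\; \forall S \in \mathcal{S}, \qquad
x_S,\, y_{S,e} \in \mathbb{Z}_{\ge 0},
\]
where \(x_S\) is the frequency with which subset \(S\) is chosen and \(y_{S,e}\) is the amount of demand of element \(e\) covered by the copies of \(S\), so that each chosen copy of a subset covers at most \(q\) units in total. In the robust version, the covering constraints have to hold for every demand vector \(d\) in the uncertainty set.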
Index Insurance for Farmers
(2021)
In this thesis, we focus on weather index insurance for agricultural risk. Even though such an index insurance is easily applicable and reduces information asymmetries, the demand for it is quite low. This is in particular due to the basis risk and the lack of knowledge about its effectiveness. The basis risk is the difference between the index insurance payout and the actual loss of the insured. We evaluate the performance of weather index insurance in different contexts, because proper knowledge about index insurance will help to establish it as a successful alternative to traditional crop insurance. In addition, we propose and discuss methods to reduce the basis risk.
We also analyze the performance of an agricultural loan that is interlinked with a weather index insurance. We show that an index insurance with an actuarially fair or subsidized premium helps to reduce the loan default probability. While we first consider an index insurance with a commonly used linear payout function for this analysis, we later design an index insurance payout function that maximizes the expected utility of the insured. We then show that an index insurance with this optimal payout function is more appropriate for bundling with an agricultural loan. The optimal payout function also helps to reduce the basis risk. In addition, we show that a lender who issues agricultural loans can, in some circumstances, be better off by purchasing a weather index insurance.
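As an illustration of the linear payout structure mentioned above, the following sketch, written by us with hypothetical strike, tick, and cap values, computes the payout of a weather index insurance and the resulting basis risk against a realized loss:

def index_payout(index, strike=100.0, tick=2.0, cap=150.0):
    # Pay `tick` per unit the weather index falls below `strike`, capped at `cap`.
    return min(max(strike - index, 0.0) * tick, cap)

def basis_risk(index, actual_loss):
    # Basis risk: difference between the index payout and the actual loss of the insured.
    return index_payout(index) - actual_loss

print(index_payout(80.0))       # 40.0: index 20 units below the strike, paid at tick 2.0
print(basis_risk(80.0, 55.0))   # -15.0: the payout falls short of the realized loss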
We investigate the market equilibrium for weather index insurance by assuming risk-averse farmers and a risk-averse insurer. When we consider two groups of farmers with different risks, we show that the low-risk group subsidizes the high-risk group when both have to pay the same premium for the index insurance. Further, analyzing an index insurance in an informal risk-sharing environment, we observe that the demand for the index insurance can be increased by selling it to a group of farmers who informally share the risk based on the insurance payout, because this reduces the adverse effect of the basis risk. Besides that, we analyze the combination of an index insurance with a gap insurance. Such a combination can increase the demand for and reduce the basis risk of the index insurance if we choose the correct levels of the premium and of the gap insurance cover. Moreover, our work shows that index insurance can be a good alternative to proportional and excess-of-loss reinsurance when it is issued at a sufficiently low price.
In the project MAFoaM - Modular Algorithms for closed Foam Mechanics - of the Fraunhofer ITWM, in cooperation with the Fraunhofer IMWS, a method for the analysis and simulation of closed-cell PMI rigid foams was developed. The cell structure of the rigid foams was modeled on the basis of CT images in order to simulate their deformation and failure behavior, i.e., how the foams behave under loads up to total failure.
In this diploma thesis, the image-analytic cell reconstruction for PMI rigid foams is automated. The cell reconstruction serves to determine microstructural quantities, i.e., geometric properties of the foam cells, such as the mean and variance of the cell volume or of the cell surface area.
LinTim is a scientific software toolbox that has been under development since 2007 and makes it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2021.10. For more information, see https://www.lintim.net