Since 1993, the Department of Mathematics at TU Kaiserslautern has organized the annual mathematical modeling weeks. The event grew in parallel with the rising relevance of applied mathematical research areas such as industrial mathematics (Technomathematik) and business mathematics (Wirtschaftsmathematik). It is intended to make the importance of mathematical ways of working in today's professional world, in particular in industry and business, tangible for school students. In addition, the modeling week offers the participating teachers insight into project work on open-ended questions within the framework of mathematical modeling. In this report we describe the projects carried out during the modeling week in December 2021. The main theme of the event was "Weather and Civil Protection" ("Wetter und Katastrophenschutz").
Adjoint-Based Shape Optimization and Optimal Control with Applications to Microchannel Systems
(2021)
Optimization problems constrained by partial differential equations (PDEs) play an important role in many areas of science and engineering. They often arise in the optimization of technological applications, where the underlying physical effects are modeled by PDEs. This thesis investigates such problems in the context of shape optimization and optimal control with microchannel systems as novel applications. Such systems are used, e.g., as cooling systems, heat exchangers, or chemical reactors as their high surface-to-volume ratio, which results in beneficial heat and mass transfer characteristics, allows them to excel in these settings. Additionally, this thesis considers general PDE constrained optimization problems with particular regard to their efficient solution.
As our first application, we study a shape optimization problem for a microchannel cooling system: We rigorously analyze this problem, prove its shape differentiability, and calculate the corresponding shape derivative. Afterwards, we consider the numerical optimization of the cooling system for which we employ a hierarchy of reduced models derived via porous medium modeling and a dimension reduction technique. A comparison of the models in this context shows that the reduced models approximate the original one very accurately while requiring substantially less computational resources.
Our second application is the optimization of a chemical microchannel reactor for the Sabatier process using techniques from PDE constrained optimal control. To treat this problem, we introduce two models for the reactor and solve a parameter identification problem to determine the necessary kinetic reaction parameters for our models. Thereafter, we consider the optimization of the reactor's operating conditions with the objective of improving its product yield, which shows considerable potential for enhancing the design of the reactor.
To provide efficient solution techniques for general shape optimization problems, we introduce novel nonlinear conjugate gradient methods for PDE constrained shape optimization and analyze their performance on several well-established benchmark problems. Our results show that the proposed methods perform very well, making them efficient and appealing gradient-based shape optimization algorithms.
Finally, we continue recent software-based developments for PDE constrained optimization and present our novel open-source software package cashocs. Our software implements and automates the adjoint approach and, thus, facilitates the solution of general PDE constrained shape optimization and optimal control problems. Particularly, we highlight our software's user-friendly interface, straightforward applicability, and mesh independent behavior.
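As a schematic reminder (our own summary of the standard adjoint calculus, not a description of the cashocs interface), for a problem of the form minimize \(J(u,c)\) subject to the PDE constraint \(e(u,c) = 0\), the adjoint approach computes the reduced gradient from one state and one adjoint solve:
\[
e(u,c) = 0, \qquad e_u(u,c)^{*}\, p = -J_u(u,c), \qquad \frac{\mathrm{d}J}{\mathrm{d}c} = J_c(u,c) + e_c(u,c)^{*}\, p.
\]
The cost of a gradient evaluation is thus independent of the number of design variables; the same structure underlies shape derivatives, with the control \(c\) replaced by the domain.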
An increasing number of today's tasks, such as speech recognition, image generation, translation, classification, or prediction, are performed with the help of machine learning. Artificial neural networks (ANNs) in particular provide convincing results for these tasks. The reasons for this success story are the drastic increase of available data sources in our increasingly digitalized world as well as the development of remarkable ANN architectures. This development has led to an increasing number of model parameters together with more and more complex models. Unfortunately, this comes at the cost of the interpretability of deployed models. However, there is a natural desire to explain the deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding of these models. As the name suggests, VAEs are based on standard autoencoders (AEs) and are therefore used to perform dimensionality reduction of data. This is achieved by a bottleneck structure within the hidden layers of the ANN. From a data input, the encoder, that is, the part up to the bottleneck, produces a low-dimensional representation. The decoder, the part from the bottleneck to the output, uses this representation to reconstruct the input. The model is learned by minimizing the reconstruction error.
From our point of view, the most remarkable property, and hence also a central topic of this thesis, is the auto-pruning property of VAEs. Simply speaking, auto-pruning prevents a VAE with thousands of parameters from overfitting. However, such a desirable property comes with the risk that the model learns nothing at all. In this thesis, we look at VAEs and auto-pruning from two different angles, and our main contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so by leveraging the framework of generalized linear models (GLMs). As a result, we are able to explain training results of VAEs before conducting the actual training.
(ii) We construct a time-dependent VAE and show the effects of the auto-pruning in this model. As a result, we are able to model financial data sequences and estimate the value-at-risk (VaR) of associated portfolios. Our results show that we surpass the standard benchmarks for VaR estimation.
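To make the bottleneck structure and the training objective described above concrete, the following is a minimal sketch of a fully connected VAE in PyTorch. The layer sizes and the standard ELBO loss (reconstruction error plus a KL term, which is commonly associated with the auto-pruning effect) are illustrative assumptions, not the architectures studied in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully connected VAE with a low-dimensional bottleneck."""

    def __init__(self, data_dim=784, hidden_dim=200, latent_dim=20):
        super().__init__()
        # Encoder: data -> hidden -> (mean, log-variance) of the latent code.
        self.enc = nn.Linear(data_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent code -> hidden -> reconstruction of the input.
        self.dec = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, data_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def decode(self, z):
        return torch.sigmoid(self.dec_out(F.relu(self.dec(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decode(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage sketch: one optimization step on a random placeholder mini-batch.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)  # placeholder batch with values in [0, 1]
opt.zero_grad()
x_hat, mu, logvar = model(x)
loss = elbo_loss(x, x_hat, mu, logvar)
loss.backward()
opt.step()
```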
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach. They extend a Black-Scholes-type security market to include the possibility of a crash. For the modeling of the possible stock price crash they use a Knightian uncertainty approach and thus make no probabilistic assumptions about the crash size or the crash time distribution. Based on an indifference argument they determine the optimal portfolio process for an investor who wants to maximize the expected utility from final wealth. In this thesis, the worst-case scenario approach is extended in various directions: to enable the consideration of stress scenarios, to include the possibility of asset defaults, and to allow for parameter uncertainty.
Insurance companies and banks regularly have to face stress tests performed by regulatory authorities. In the first part we model their investment decision problem so that it includes stress scenarios. This leads to optimal portfolios that already account for the stress tests by construction. The solution to this portfolio problem uses the newly introduced concept of minimum constant portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset
defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor
does not know which asset is affected by the worst-case scenario. We solve this problem by
introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous time portfolio optimization problem that includes
the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine
the worst-case scenario approach with a model ambiguity approach that is also based on
Knightian uncertainty. We solve this portfolio problem and consider two concrete examples
with box uncertainty and ellipsoidal drift ambiguity.
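Schematically, and under the simplifying assumption of a single stock and at most one crash of relative size \(k \in [0, k^*]\) at an unknown time \(\tau\), the worst-case problem of Korn and Wilmott can be written as
\[
\sup_{\pi} \; \inf_{0 \le k \le k^{*},\; 0 \le \tau \le T} \; \mathbb{E}\!\left[ U\!\big(X_T^{\pi,k,\tau}\big)\right],
\qquad
X_{\tau}^{\pi,k,\tau} = \big(1 - \pi_{\tau} k\big)\, X_{\tau-}^{\pi,k,\tau},
\]
where the wealth process follows Black-Scholes dynamics between crashes and drops at the crash time by the fraction of wealth invested in the stock times the crash size. The extensions developed in this thesis modify the inner infimum (stress scenarios, defaults) or add ambiguity about the market parameters.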
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific class of maps, so-called risk measures. There exist several forms and definitions of them. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of risk measure, which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure, which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk measures. Then, each of the aforementioned contributions is covered in a separate chapter. In the following, the content of these three chapters is explained in more detail:
Risk measures can be used to calculate pricing bounds for financial derivatives. In Chapter 2, we deal with the pricing of European options in an incomplete financial market model. We use the common risk measures Value-at-Risk and Expected Shortfall to define good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may exhibit non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.
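As a small, self-contained illustration of the two risk measures used in Chapter 2 (not of the pricing bounds themselves), the following sketch estimates Value-at-Risk and Expected Shortfall of the loss of a long position under a log-normally distributed rate of return; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normal terminal price: S_T = S_0 * exp((mu - 0.5*sigma^2)*T + sigma*sqrt(T)*Z).
s0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
z = rng.standard_normal(1_000_000)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Loss of a long position in the asset (positive values are losses).
loss = s0 - s_t

alpha = 0.99
var = np.quantile(loss, alpha)       # Value-at-Risk at level alpha
es = loss[loss >= var].mean()        # Expected Shortfall: average loss beyond the VaR

print(f"VaR_{alpha:.0%} = {var:.2f}, ES_{alpha:.0%} = {es:.2f}")
```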
In Chapter 3, we construct the MAI risk measure. To this end, recall that a multi-asset risk measure describes the minimal external capital that has to be raised into multiple eligible assets to make a future financial position acceptable, i.e., such that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows securing the financial position by external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as algebraic geometry, number theory and information security. In fact, they are among the best choices in public-key cryptography and, more recently, in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure, and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition to that, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all the cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks, a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol, which is also capable of resisting subexponential quantum attacks and will be the first of its kind.
The knowledge of structural properties in microscopic materials contributes to a deeper understanding of macroscopic properties. For the study of such materials, several imaging techniques reaching scales in the order of nanometers have been developed. One of the most powerful and sophisticated imaging methods is focused-ion-beam scanning electron
microscopy (FIB-SEM), which combines serial sectioning by an ion beam and imaging by
a scanning electron microscope.
FIB-SEM imaging reaches extraordinary scales below 5 nm with large representative
volumes. However, the complexity of the imaging process results in artificial distortions and artifacts that degrade image quality. We introduce a method for evaluating image quality by analyzing general characteristics of the images as well as artifacts specific to FIB-SEM, namely curtaining and charging. For the evaluation, we propose quality indexes, which are tested on several data sets of porous and non-porous materials with different characteristics and distortions. The quality indexes provide objective evaluations in accordance with visual judgment.
Moreover, the acquisition of large volumes at high resolution can be time-consuming. One approach to speed up the imaging is to decrease the resolution and to consider cuboidal voxel configurations. However, non-isotropic resolutions may lead to errors in the reconstructions. Even if the reconstruction is correct, effects are visible in the analysis. We study the effects of different voxel settings on the prediction of material and flow properties of reconstructed structures. Results show good agreement between highly resolved cases and ground truths, as expected. Structural anisotropy is reported as the resolution decreases, especially for anisotropic grids. Nevertheless, gray-value image interpolation remedies the induced anisotropy, as illustrated in the sketch below. These benefits are visible in the flow properties as well.
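As a hypothetical example of the gray-value interpolation mentioned above, a cuboidal voxel grid (coarser along the sectioning direction) can be resampled to isotropic voxels before segmentation; the array and the spacings below are made up for illustration.

```python
import numpy as np
from scipy import ndimage

# Gray-value volume with anisotropic voxel size, e.g. 5 nm x 5 nm x 20 nm (x, y, z).
vol = np.random.rand(256, 256, 64).astype(np.float32)  # placeholder FIB-SEM stack
spacing = np.array([5.0, 5.0, 20.0])

# Resample to isotropic 5 nm voxels with trilinear interpolation of gray values.
zoom_factors = spacing / spacing.min()
vol_iso = ndimage.zoom(vol, zoom=zoom_factors, order=1)

print(vol.shape, "->", vol_iso.shape)  # (256, 256, 64) -> (256, 256, 256)
```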
For highly porous structures, the structural reconstruction is even more difficult because deeper parts of the material are visible through the pores. As an application example, we show the reconstruction of two highly porous optical layer structures, where a typical workflow from image acquisition and preprocessing through reconstruction to spatial analysis is performed. The case study shows the advantages of 3D imaging for porous optical layers. The analysis reveals geometric structural properties related to the manufacturing processes.
The great interest in robust covering problems has many sources, most notably the plenitude of real-world applications and the incorporation of uncertainties that are inherent in many practical settings.
In this thesis, for a fixed positive integer \(q\), we introduce and elaborate on a new robust covering problem, called Robust Min-\(q\)-Multiset-Multicover, and related problems.
The common idea of these problems is, given a collection of subsets of a ground set, to decide how often each subset is chosen so as to satisfy the uncertain demand of each occurring element.
Yet, in contrast to general covering problems, each chosen subset may only cover at most \(q\) of its elements.
Varying the properties of the occurring elements leads to a selection of four interesting robust covering problems, which are investigated.
We extensively analyze the complexity of the arising problems, also for various restrictions to particular classes of uncertainty sets.
For a given problem, we either provide a polynomial time algorithm or show that, unless \(\text{P}=\text{NP}\), such an algorithm cannot exist.
Furthermore, in the majority of cases, we even give evidence that a polynomial time approximation scheme is most likely not possible for the hard problem variants.
Moreover, we aim for approximations and approximation algorithms for these hard variants, where we focus on Robust Min-\(q\)-Multiset-Multicover.
For a wide class of uncertainty sets, we present the first known polynomial time approximation algorithm for Robust Min-\(q\)-Multiset-Multicover having a provable worst-case performance guarantee.
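To fix ideas, one possible reading of the underlying non-robust problem is the following schematic integer program, where \(x_S\) counts how often subset \(S\) is chosen, \(y_{S,e}\) how many demand units of element \(e\) are covered by copies of \(S\), and \(d_e\) is a (here known) demand. This formulation is our own illustration and deliberately ignores the uncertainty sets treated in the thesis:
\[
\begin{aligned}
\min \;& \sum_{S} x_S \\
\text{s.t.}\;& \sum_{S \ni e} y_{S,e} \ge d_e && \text{for all elements } e,\\
& \sum_{e \in S} y_{S,e} \le q\, x_S && \text{for all subsets } S,\\
& x_S,\, y_{S,e} \in \mathbb{Z}_{\ge 0}.
\end{aligned}
\]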
Index Insurance for Farmers
(2021)
In this thesis we focus on weather index insurance for agricultural risk. Even though such an index insurance is easily applicable and reduces information asymmetries, the demand for it is quite low. This is in particular due to the basis risk and the lack of knowledge about its effectiveness. The basis risk is the difference between the index insurance payout and the actual loss of the insured. We evaluate the performance of weather index insurance in different contexts, because proper knowledge about index insurance will help to use it as a successful alternative to traditional crop insurance. In addition to that, we also propose and discuss methods to reduce the basis risk.
We also analyze the performance of an agricultural loan which is interlinked with a weather index insurance. We show that an index insurance with an actuarially fair or subsidized premium helps to reduce the loan default probability. While we first consider an index insurance with a commonly used linear payout function for this analysis, we later design an index insurance payout function which maximizes the expected utility of the insured. Then we show that an index insurance with this optimal payout function is more appropriate for bundling with an agricultural loan. The optimal payout function also helps to reduce the basis risk. In addition, we show that a lender who issues agricultural loans can be better off by purchasing a weather index insurance in some circumstances.
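For illustration only, a commonly used linear payout function of a rainfall-based index insurance can be sketched as follows; strike, tick and cap are hypothetical contract parameters and not taken from the thesis.

```python
def linear_index_payout(rainfall_mm, strike=100.0, tick=5.0, cap=500.0):
    """Linear weather-index payout: pay `tick` per mm of rainfall deficit
    below the strike, capped at `cap` (all parameters hypothetical)."""
    deficit = max(strike - rainfall_mm, 0.0)
    return min(tick * deficit, cap)

# Usage: payouts for a dry, a moderate and a wet season.
for r in (40.0, 90.0, 130.0):
    print(r, "mm ->", linear_index_payout(r))
```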
We investigate the market equilibrium for weather index insurance by assuming risk-averse farmers and a risk-averse insurer. When we consider two groups of farmers with different risks, we show that the low-risk group subsidizes the high-risk group when both have to pay the same premium for the index insurance. Further, according to the analysis of an index insurance in an informal risk-sharing environment, we observe that the demand for the index insurance can be increased by selling it to a group of farmers who informally share the risk based on the insurance payout, because this reduces the adverse effect of the basis risk. Besides that, we analyze the combination of an index insurance with a gap insurance. Such a combination can increase the demand and reduce the basis risk of the index insurance if we choose the correct levels of premium and of gap insurance cover. Moreover, our work shows that index insurance can be a good alternative to proportional and excess-of-loss reinsurance when it is issued at a low enough price.
In the MAFoaM project (Modular Algorithms for closed Foam Mechanics) of the Fraunhofer ITWM, in cooperation with the Fraunhofer IMWS, a method for the analysis and simulation of closed-cell PMI rigid foams was developed. The cell structure of the rigid foams was modeled on the basis of CT images in order to simulate their deformation and failure behavior, i.e., how the foams behave under loads up to complete failure.
In this diploma thesis, the image-analytic cell reconstruction for PMI rigid foams is automated. The cell reconstruction serves to determine microstructural quantities, that is, geometric properties of the foam cells, such as the mean and variance of the cell volume or of the cell surface area.
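A minimal sketch of how such microstructural quantities could be extracted from a labeled cell image with scikit-image, assuming the cell reconstruction already yields a label image; the placeholder segmentation and the voxel size below are made up.

```python
import numpy as np
from skimage.measure import label, regionprops

# Assume `cells` is a 3D label image of the reconstructed foam cells,
# here replaced by a random placeholder segmentation.
cells = label(np.random.rand(64, 64, 64) > 0.6)

voxel_volume = 1.0  # e.g. in µm^3, determined by the CT resolution
volumes = np.array([p.area * voxel_volume for p in regionprops(cells)])

print("number of cells:", volumes.size)
print("mean cell volume:", volumes.mean())
print("variance of cell volume:", volumes.var())
```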
LinTim is a scientific software toolbox that has been under development since 2007 and makes it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2021.10. For more information, see https://www.lintim.net
The high complexity of civil engineering structures makes it difficult to satisfactorily evaluate their reliability. However, a good risk assessment of such structures is incredibly important to avert dangers and possible disasters for public life. For this purpose, we need algorithms that reliably deliver estimates for their failure probabilities with high efficiency and whose results enable a better understanding of their reliability. This is a major challenge, especially when dynamics, for example due to uncertainties or time-dependent states, must be included in the model.
The contributions are centered around Subset Simulation, a very popular adaptive Monte Carlo method for reliability analysis in the engineering sciences. It estimates small failure probabilities in high dimensions particularly well and is therefore tailored to the demands of many complex problems. We modify Subset Simulation and couple it with interpolation methods in order to keep its remarkable properties and obtain all conditional failure probabilities with respect to one variable of the structural reliability model. This covers many sorts of model dynamics with several model constellations, such as time-dependent modeling, sensitivity and uncertainty, in an efficient way, requiring computational demands similar to those of a static reliability analysis of one model constellation by Subset Simulation. The algorithm offers many new opportunities for reliability evaluation and can even be used to verify results of Subset Simulation by artificially manipulating the geometry of the underlying limit state in numerous ways, allowing it to provide correct results where Subset Simulation systematically fails. To improve understanding and further account for model uncertainties, we present a new visualization technique that matches the extensive reliability information obtained from the novel algorithm.
In addition to these extensions, we also dedicate ourselves to the fundamental analysis of Subset Simulation, partially bridging the gap between theory and simulation results where inconsistencies exist. Based on these findings, we extend practical recommendations on the selection of the intermediate probability with respect to the implementation of the algorithm and derive a formula for the correction of the bias. For a better understanding, we also provide another stochastic interpretation of the algorithm and offer alternative implementations which stick to the theoretical assumptions typically made in its analysis.
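For orientation, the following is a bare-bones Subset Simulation sketch in standard normal space (failure defined as the limit state \(g \le 0\)) with a conditional-sampling MCMC move; it is the generic algorithm, not the modified or coupled variants developed in this thesis, and the limit state function and tuning parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # Illustrative limit state function: failure event is {g(x) <= 0}.
    return 9.0 - x.sum(axis=-1)

def subset_simulation(g, dim, n=2000, p0=0.1, rho=0.8, max_levels=20):
    """Estimate P(g(X) <= 0) for X ~ N(0, I_dim) with basic Subset Simulation."""
    x = rng.standard_normal((n, dim))
    gx = g(x)
    p_f = 1.0
    for _ in range(max_levels):
        b = np.quantile(gx, p0)            # adaptive intermediate threshold
        if b <= 0.0:                       # final level: count actual failures
            return p_f * np.mean(gx <= 0.0)
        p_f *= p0
        cur = x[gx <= b][: int(p0 * n)]    # seeds distributed as X | g(X) <= b
        chains = [cur]
        for _ in range(int(round(1.0 / p0)) - 1):
            # Conditional-sampling proposal, reversible w.r.t. the standard normal.
            cand = rho * cur + np.sqrt(1.0 - rho**2) * rng.standard_normal(cur.shape)
            keep = g(cand) <= b
            cur = np.where(keep[:, None], cand, cur)
            chains.append(cur)
        x = np.concatenate(chains)
        gx = g(x)
    return p_f * np.mean(gx <= 0.0)

# Reference value: P(sum of 10 i.i.d. N(0,1) > 9) = Phi(-9/sqrt(10)) ~ 2.2e-3.
print(subset_simulation(g, dim=10))
```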
A significant step to engineering design is to take into account uncertainties and to
develop optimal designs that are robust with respect to perturbations. Furthermore, it
is often of interest to optimize for different conflicting objective functions describing the
quality of a design, leading to a multi-objective optimization problem. In this context,
generating methods for solving multi-objective optimization problems seek to find a
representative set of solutions fulfilling the concept of Pareto optimality. When multiple
uncertain objective functions are involved, it is essential to define suitable measures for
robustness that account for a combined effect of uncertainties in objective space. Many
tasks in engineering design include the solution of an underlying partial differential
equation that can be computationally expensive. Thus, it is of interest to use efficient
strategies for finding optimal designs. This research aims to present suitable measures for robustness in a multi-objective context, as well as optimization strategies for multi-objective robust design.
This work introduces new ideas for robustness measures in the context of multi-objective robust design. Losses and expected losses based on distances in objective space are used to describe robustness. A direct formulation and a two-phase formulation based on expected losses are proposed for finding a set of robust optimal solutions.
Furthermore, suitable optimization strategies for solving the resulting multi-objective
robust design problem are formulated and analyzed. The multi-objective optimization
problem is solved with a constraint-based approach that is based on solving several
constrained single-objective optimization problems with a hybrid optimization strategy.
The hybrid method combines a global search method on a surrogate model with adjoint-based optimization methods. In the context of optimization with an underlying partial differential equation, a one-shot approach is extended to handle additional constraints.
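In the spirit of the constraint-based approach mentioned above, a representative scalarization (our own schematic notation for two objectives \(f_1, f_2\)) solves a family of constrained single-objective problems
\[
\min_{x} \; f_1(x) \quad \text{subject to} \quad f_2(x) \le \varepsilon_i, \qquad i = 1, \dots, m,
\]
where varying the bounds \(\varepsilon_i\) traces out an approximation of the Pareto front; each subproblem is then attacked with the hybrid surrogate/adjoint strategy.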
The developed concepts for multi-objective robust design and the proposed optimization strategies are applied to an aerodynamic shape optimization problem. The drag coefficient and the lift coefficient are optimized under consideration of uncertainties in the operational conditions and geometrical uncertainties. The uncertainties are propagated with the help of a non-intrusive polynomial chaos approach. To increase the efficiency when considering a higher-dimensional random space, use is made of a Karhunen-Loève expansion and a dimension-adaptive sparse grid quadrature.
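To illustrate the non-intrusive polynomial chaos idea in its simplest form (one standard normal input, Hermite basis, regression for the coefficients), the quantity of interest below is a toy function, not the aerodynamic model:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(2)

def qoi(xi):
    # Toy quantity of interest depending on a standard normal parameter xi.
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Non-intrusive step: evaluate the model at sampled parameter values.
xi = rng.standard_normal(200)
u = qoi(xi)

# Least-squares regression onto probabilists' Hermite polynomials He_0..He_p.
p = 6
Psi = He.hermevander(xi, p)                 # design matrix Psi[i, n] = He_n(xi_i)
c, *_ = np.linalg.lstsq(Psi, u, rcond=None)

# Post-processing: mean and variance follow directly from the coefficients,
# since E[He_m He_n] = n! * delta_mn under the standard normal measure.
mean = c[0]
var = sum(c[n]**2 * factorial(n) for n in range(1, p + 1))
print("PCE mean:", mean, " PCE variance:", var)
```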
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around one or several sites of vaso-occlusion are typical for GBM and hint at a poor prognosis of patient survival.
This thesis focuses on studying the establishment and maintenance of these histological patterns specific to GBM with the aim of modeling the microlocal tumor environment under the influence of acidity, tissue anisotropy and hypoxia-induced angiogenesis. This aim is reached with two classes of models: multiscale and multiphase. Each of them features a reaction-diffusion equation (RDE) for the acidity acting as a chemorepellent and inhibitor of growth, coupled in a nonlinear way to a reaction-diffusion-taxis equation (RDTE) for the glioma dynamics. The numerical simulations of the resulting systems are able to reproduce pseudopalisade-like patterns. The effect of tumor vascularization on these patterns is studied through a flux-limited model belonging to the multiscale class. Thereby, PDEs of reaction-diffusion-taxis type are deduced for the glioma and endothelial cell (EC) densities, with flux-limited pH-taxis for the tumor and chemotaxis towards vascular endothelial growth factor (VEGF) for the ECs. These, in turn, are coupled to RDEs for the acidity and the VEGF produced by the tumor. The numerical simulations of the obtained system show pattern disruption and transient behavior due to hypoxia-induced angiogenesis. Moreover, comparing two upscaling techniques through numerical simulations, we observe that the macroscopic PDEs obtained via parabolic scaling (undirected tissue) are able to reproduce glioma patterns, while no such patterns are observed for the PDEs arising from a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected, at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and the way they are related.
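As a rough caricature of this model class (our simplified notation, omitting the multiscale derivation, the tissue anisotropy and the flux limitation), the glioma density \(m\) and the acidity \(h\) are coupled through a reaction-diffusion-taxis and a reaction-diffusion equation of the form
\[
\partial_t m = \nabla \cdot \big( D_T(x)\, \nabla m \big) + \nabla \cdot \big( \chi(h)\, m\, \nabla h \big) + \mu(h)\, m\,(1 - m),
\qquad
\partial_t h = D_h\, \Delta h + \alpha\, m - \beta\, h,
\]
where the positive sign of the taxis term encodes repellent pH-taxis (movement away from acidic regions) and \(\mu(h)\) models acidity-inhibited growth.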
Life insurance companies are required by the Solvency II regime to hold capital against economically adverse developments. This ensures that they are continuously able to meet their payment obligations towards the policyholders. When relying on an internal model approach, an insurer's solvency capital requirement is defined as the 99.5% value-at-risk of its full loss probability distribution over the coming year. In the introductory part of this thesis, we provide the actuarial modeling tools and risk aggregation methods by which the companies can accomplish the derivation of these forecasts. Since the industry still lacks the computational capacities to fully simulate these distributions, insurers have to resort to suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss.
We dedicate the first part of this thesis to establishing a theoretical framework of the LSMC method. We start with how LSMC for calculating capital requirements is related to its original use in American option pricing. Then we decompose LSMC into four steps. In the first one, the Monte Carlo simulation setting is defined. The second and third steps serve the calibration and validation of the proxy function, and the fourth step yields the loss distribution forecast by evaluating the proxy model. When guiding through the steps, we address practical challenges and propose an adaptive calibration algorithm. We conclude with a slightly disguised real-world application.
The second part builds upon the first one by taking up the LSMC framework and diving deeper into its calibration step. After a literature review and a basic recapitulation, various adaptive machine learning approaches relying on least-squares regression and model selection criteria are presented as solutions to the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of the regression ingredients mathematically and compare their approximation quality in slightly altered real-world experiments. Thereby, we perform sensitivity analyses, discuss numerical stability and run comprehensive out-of-sample tests. The scope of the analyzed regression variants extends to other high-dimensional variable selection applications.
Life insurance contracts with early exercise features can be priced by LSMC as well due to their analogies to American options. In the third part of this thesis, equity-linked contracts with American-style surrender options and minimum interest rate guarantees payable upon contract termination are valued. We allow randomness and jumps in the movements of the interest rate, stochastic volatility, stock market and mortality. For the simultaneous valuation of numerous insurance contracts, a hybrid probability measure and an additional regression function are introduced. Furthermore, an efficient seed-related simulation procedure accounting for the forward discretization bias and a validation concept are proposed. An extensive numerical example rounds off the last part.
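A stripped-down sketch of the LSMC proxy idea (steps two to four of the decomposition above): fit a polynomial proxy of the loss as a function of a single risk factor from a few noisy fitting scenarios, validate it informally, then evaluate it on many outer scenarios to read off the 99.5% quantile. The model below is a synthetic stand-in, not an insurance cash-flow projection.

```python
import numpy as np

rng = np.random.default_rng(3)

def inner_loss(r, n_inner=50):
    # Stand-in for a stochastic valuation: noisy Monte Carlo estimate of the
    # "true" loss curve loss(r) = 10*r + 4*r**2 for risk-factor value r.
    return 10 * r + 4 * r**2 + rng.normal(0.0, 2.0, size=n_inner).mean()

# Calibration: few, wisely spread fitting scenarios with noisy loss evaluations.
r_fit = np.linspace(-3, 3, 25)
y_fit = np.array([inner_loss(r) for r in r_fit])

# Least-squares fit of a low-order polynomial proxy function.
coeffs = np.polyfit(r_fit, y_fit, deg=2)
proxy = np.poly1d(coeffs)

# Validation on a few out-of-sample scenarios (kept informal here).
r_val = np.array([-2.5, 0.0, 2.5])
print("validation errors:", proxy(r_val) - np.array([inner_loss(r) for r in r_val]))

# Forecast: evaluate the proxy on many real-world scenarios and take the
# 99.5% quantile of the proxy loss distribution as the capital requirement.
r_outer = rng.standard_normal(500_000)
scr = np.quantile(proxy(r_outer), 0.995)
print("SCR estimate (99.5% VaR of proxy loss):", scr)
```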
Linear algebra, together with polynomial arithmetic, is the foundation of computer algebra. The algorithms have improved over the last 20 years, and the current state-of-the-art algorithms for matrix inversion, the solution of linear systems and determinants have a theoretical sub-cubic complexity. This thesis presents fast and practical algorithms for some classical problems in linear algebra over number fields and polynomial rings. Here, a number field is a finite extension of the field of rational numbers, and the polynomial rings we consider in this thesis are over finite fields.
One of the key problems of symbolic computation is intermediate coefficient swell: the bit length of intermediate results can grow during the computation compared to those in the input and output. The standard strategy to overcome this is not to compute the number directly but to compute it modulo some other numbers, using either the Chinese remainder theorem (CRT) or a variation of Newton-Hensel lifting. Often, the final step of these algorithms is combined with reconstruction methods such as rational reconstruction to convert the integral result into the rational solution. Here, we present reconstruction methods over number fields with a fast and simple vector-reconstruction algorithm.
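To make the reconstruction step concrete, here is a standard rational-reconstruction routine over the integers (the scalar building block only; the thesis develops a vector version over number fields), recovering \(p/q\) from its image \(a \bmod m\) whenever \(|p|, q \le \sqrt{m/2}\):

```python
from math import isqrt

def rational_reconstruction(a, m):
    """Return (p, q) with p/q = a (mod m) and |p|, q <= sqrt(m/2), or None."""
    bound = isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = a % m, 1
    # Half-extended Euclidean algorithm, stopped once the remainder is small.
    while r1 > bound:
        qq = r0 // r1
        r0, r1 = r1, r0 - qq * r1
        t0, t1 = t1, t0 - qq * t1
    if t1 == 0 or abs(t1) > bound:
        return None
    if t1 < 0:
        r1, t1 = -r1, -t1
    # Verify q*a = p (mod m); this fails when no small representative exists.
    return (r1, t1) if (t1 * a - r1) % m == 0 else None

# Example: 2/3 reduced modulo 101 equals 68; the fraction is recovered exactly.
print(rational_reconstruction(68, 101))  # (2, 3)
```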
The state-of-the-art method for computing the determinant over the integers is due to Storjohann. When generalizing his method to number fields, we encountered the problem that modules generated by the rows of a matrix over a number field are in general not free, so Storjohann's method cannot be used directly. Therefore, we have used the theory of pseudo-matrices to overcome this problem. As a sub-problem of this application, we generalized a unimodular certification method to pseudo-matrices: similar to the integer case, we check whether the determinant of the given pseudo-matrix is a unit by testing the integrality of the corresponding dual module using higher-order lifting.
One of the main algorithms in linear algebra is Dixon's p-adic lifting solver for linear systems. Traditionally this algorithm is used only for square systems having a unique solution. Here we generalize Dixon's algorithm to non-square linear systems. As the solution is not unique, we use a basis of the kernel to normalize the solution. The implementation is accompanied by a fast kernel computation algorithm that also extends to compute the reduced row echelon form of a matrix over the integers and number fields.
The fast implementations for computing the characteristic polynomial and the minimal polynomial over number fields use the CRT-based modular approach. Finally, we extended Storjohann's determinant computation algorithm to polynomial rings over finite fields, together with its sub-algorithms for reconstruction and unimodular certification. In this case, we face the problem of intermediate degree swell. To avoid this phenomenon, we used higher-order lifting techniques in the unimodular certification algorithm. We have successfully used the half-gcd approach to optimize the rational polynomial reconstruction.
Dealing with uncertain structures or data has lately been getting much attention in discrete optimization. This thesis addresses two different areas in discrete optimization: Connectivity and covering.
When discussing uncertain structures in networks, it is often of interest to determine how many vertices or edges may fail without the network becoming disconnected.
Connectivity is a broad, well-studied topic in graph theory. One of the most important results in this area is Menger's Theorem, which states that the minimum number of vertices needed to separate two non-adjacent vertices equals the maximum number of internally vertex-disjoint paths between these vertices. Here, we discuss mixed forms of connectivity in which both vertices and edges are removed from a graph at the same time. The Beineke-Harary Conjecture states that for any two distinct vertices that can be separated with k vertices and l edges, but not with k-1 vertices and l edges or k vertices and l-1 edges, there exist k+l edge-disjoint paths between them of which k+1 are internally vertex-disjoint. In contrast to Menger's Theorem, the existence of the paths is not sufficient for the connectivity statement to hold. Our main contribution is the proof of the Beineke-Harary Conjecture for the case that l equals 2.
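A tiny sanity check of the classical Menger statement (not of the mixed vertex/edge version studied in the thesis) using networkx: for two non-adjacent vertices, the size of a minimum vertex separator equals the maximum number of internally vertex-disjoint paths.

```python
import networkx as nx

# Two non-adjacent vertices s, t joined by three internally vertex-disjoint paths.
G = nx.Graph()
s, t = "s", "t"
for mid in ("a", "b", "c"):
    G.add_edge(s, mid)
    G.add_edge(mid, t)

min_cut = nx.minimum_node_cut(G, s, t)   # smallest vertex set separating s and t
kappa = nx.node_connectivity(G, s, t)    # max number of internally disjoint s-t paths

print(len(min_cut), kappa)  # both equal 3, as Menger's Theorem asserts
```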
We also consider different problems from the area of facility location and covering. We regard problems in which we are given sets of locations and regions, where each region has an assigned number of clients. We are then looking for an allocation of suppliers to the locations such that each client is served by some supplier. The notable difference to other covering problems is that we assume that each supplier may only serve a fixed number of clients, which is not part of the input. We discuss the complexity and solution approaches of three such problems, which vary in the way the clients are assigned to the suppliers.
In this thesis we consider the periodic homogenization of a linearly coupled magneto-elastic model problem and focus on the derivation of spectral methods to solve the obtained unit cell problem afterwards. In the beginning, the equations of linear elasticity and magnetism are presented together with the physical quantities used within them. After specifying the model assumptions, the system of partial differential equations is rewritten in a weak form for which the existence and uniqueness of solutions is discussed. The model problem then undergoes a homogenization process where the original problem is approximated by a substitute problem with a repeating micro-structural geometry that was generated from a representative volume element (RVE). The following separation of scales, which can be achieved either by an asymptotic expansion or through a two-scale limit process, yields the homogenized problem on the macroscopic scale and the periodic unit cell problem. The latter is further analyzed using Fourier series, leading to periodic Lippmann-Schwinger type equations that allow for the development of matrix-free solvers. It is shown that, while it is possible to craft a scheme for the coupled problem from the purely elastic and magnetic Lippmann-Schwinger equations alone without much additional effort, a more general setting is obtained when deriving a Lippmann-Schwinger equation for the coupled system directly. These numerical approaches are then validated with some analytically solvable test problems, before their performance is compared on some more complex examples.
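As a minimal illustration of the matrix-free, FFT-based solvers that such periodic Lippmann-Schwinger equations enable, the following sketch implements the basic fixed-point scheme for a scalar analogue (a 2D periodic conductivity/magnetostatic cell problem) rather than the coupled magneto-elastic system; the material layout and reference medium are arbitrary.

```python
import numpy as np

# Periodic unit cell problem for a scalar coefficient a(x):
# find e = E + grad(phi), phi periodic, with div(a e) = 0 and mean(e) = E.
N = 128
dist = np.hypot(*np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2))
a = np.where(dist < N / 4, 10.0, 1.0)    # circular inclusion in a matrix
a0 = 0.5 * (a.min() + a.max())           # reference medium
E = np.array([1.0, 0.0])                 # prescribed mean field

xi = np.array(np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij"))
xi2 = (xi**2).sum(axis=0)
xi2[0, 0] = 1.0                          # avoid division by zero at the mean mode

e = np.tile(E[:, None, None], (1, N, N))
for it in range(500):
    tau = (a - a0) * e                   # polarization field
    tau_hat = np.fft.fftn(tau, axes=(1, 2))
    # Green operator of the reference medium: Gamma0 tau = xi (xi . tau_hat) / (a0 |xi|^2).
    gamma_tau_hat = xi * (xi * tau_hat).sum(axis=0) / (a0 * xi2)
    gamma_tau_hat[:, 0, 0] = 0.0
    e_new = E[:, None, None] - np.fft.ifftn(gamma_tau_hat, axes=(1, 2)).real
    if np.abs(e_new - e).max() < 1e-8:
        e = e_new
        break
    e = e_new

a_eff = (a * e).mean(axis=(1, 2))        # first column of the effective tensor
print("iterations:", it + 1, " effective coefficient:", a_eff[0])
```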