
Robust covering problems attract great interest, owing especially to the abundance of real-world applications and to the uncertainties that are inherent in many practical settings.
In this thesis, for a fixed positive integer \(q\), we introduce and elaborate on a new robust covering problem, called Robust Min-\(q\)-Multiset-Multicover, and related problems.
The common idea of these problems is, given a collection of subsets of a ground set, to decide how often to choose each subset so as to satisfy the uncertain demand of every occurring element.
In contrast to general covering problems, however, each chosen subset may cover at most \(q\) of its elements.
Varying the properties of the occurring elements leads to four interesting robust covering problems, which we investigate.
We extensively analyze the complexity of the arising problems, also for various restrictions to particular classes of uncertainty sets.
For a given problem, we either provide a polynomial time algorithm or show that, unless \(\text{P}=\text{NP}\), such an algorithm cannot exist.
Furthermore, in the majority of cases, we even give evidence that a polynomial time approximation scheme is most likely not possible for the hard problem variants.
Moreover, we aim for approximations and approximation algorithms for these hard variants, where we focus on Robust Min-\(q\)-Multiset-Multicover.
For a wide class of uncertainty sets, we present the first known polynomial time approximation algorithm for Robust Min-\(q\)-Multiset-Multicover having a provable worst-case performance guarantee.
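The capacitated covering idea behind Min-\(q\)-Multiset-Multicover can be illustrated, in its deterministic (non-robust) form, by a simple greedy heuristic. The sketch below is illustrative only and is not the approximation algorithm analyzed in the thesis; the instance encoding (lists of elements, a demand dictionary) is an assumption made for the example.

```python
def greedy_q_multicover(subsets, demand, q):
    """Greedy heuristic for the deterministic variant: repeatedly buy one
    copy of the subset that can cover the most remaining demand, where each
    copy covers at most q of its elements (one demand unit each)."""
    remaining = dict(demand)
    counts = [0] * len(subsets)
    while any(v > 0 for v in remaining.values()):
        gains = [min(q, sum(1 for e in s if remaining.get(e, 0) > 0))
                 for s in subsets]
        best = max(range(len(subsets)), key=lambda i: gains[i])
        if gains[best] == 0:
            raise ValueError("remaining demand cannot be covered")
        counts[best] += 1
        covered = 0
        for e in subsets[best]:          # spend this copy's q covering units
            if covered == q:
                break
            if remaining.get(e, 0) > 0:
                remaining[e] -= 1
                covered += 1
    return counts
```

The returned list gives the chosen frequency of each subset; on a small instance one can check by hand that every element's demand is met while no copy covers more than \(q\) elements.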

Index Insurance for Farmers
(2021)

In this thesis we focus on weather index insurance for agricultural risk. Even though such an index insurance is easily applicable and reduces information asymmetries, the demand for it is quite low. This is in particular due to the basis risk and to the lack of knowledge about its effectiveness. The basis risk is the difference between the index insurance payout and the actual loss of the insured. We evaluate the performance of weather index insurance in different contexts, because proper knowledge about index insurance will help to establish it as a successful alternative to traditional crop insurance. In addition, we propose and discuss methods to reduce the basis risk.
We also analyze the performance of an agricultural loan that is interlinked with a weather index insurance. We show that an index insurance with an actuarially fair or subsidized premium helps to reduce the loan default probability. While we first consider an index insurance with a commonly used linear payout function for this analysis, we later design an index insurance payout function that maximizes the expected utility of the insured. We then show that an index insurance with this optimal payout function is more appropriate for bundling with an agricultural loan. The optimal payout function also helps to reduce the basis risk. In addition, we show that a lender who issues agricultural loans can, in some circumstances, be better off by purchasing a weather index insurance.
We investigate the market equilibrium for weather index insurance by assuming risk-averse farmers and a risk-averse insurer. Considering two groups of farmers with different risks, we show that the low-risk group subsidizes the high-risk group when both pay the same premium for the index insurance. Further, analyzing an index insurance in an informal risk-sharing environment, we observe that the demand for the index insurance can be increased by selling it to a group of farmers who informally share the risk based on the insurance payout, because this reduces the adverse effect of the basis risk. Besides that, we analyze the combination of an index insurance with a gap insurance. Such a combination can increase the demand for, and reduce the basis risk of, the index insurance if the levels of premium and of gap insurance cover are chosen correctly. Moreover, our work shows that index insurance can be a good alternative to proportional and excess-of-loss reinsurance when it is issued at a low enough price.
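The commonly used linear payout function mentioned above, and the basis risk as a payout-loss gap, can be sketched as follows. The parameter names (strike, tick, cap) and the mean-absolute-gap measure are illustrative assumptions for a rainfall-deficit style design, not the thesis's exact definitions.

```python
def linear_payout(index_value, strike, tick, max_payout):
    """Commonly used linear payout: pays `tick` per unit the index falls
    below `strike`, capped at `max_payout` (rainfall-deficit style design)."""
    return max(0.0, min(max_payout, tick * (strike - index_value)))

def mean_basis_risk(indices, losses, strike, tick, max_payout):
    """Mean absolute gap between payout and actual loss: one simple proxy
    for basis risk (the thesis's exact measures may differ)."""
    gaps = [abs(linear_payout(i, strike, tick, max_payout) - loss)
            for i, loss in zip(indices, losses)]
    return sum(gaps) / len(gaps)
```

For example, with a strike of 100 mm of rainfall, a tick of 2 per missing mm and a cap of 30, an observed index of 95 pays 10; comparing such payouts with realized losses over historical seasons gives an empirical basis-risk estimate.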

In the project MAFoaM (Modular Algorithms for closed Foam Mechanics) of the Fraunhofer ITWM, in cooperation with the Fraunhofer IMWS, a method for analyzing and simulating closed-cell PMI rigid foams was developed. The cell structure of the rigid foams was modeled on the basis of CT images in order to simulate their deformation and failure behavior, i.e., how the foams behave under loads up to total failure.
In this diploma thesis, the image-analytic cell reconstruction for PMI rigid foams is automated. The cell reconstruction serves to determine microstructural quantities, i.e., geometric properties of the foam cells such as the mean and variance of the cell volume or of the cell surface area.

LinTim is a scientific software toolbox that has been under development since 2007 and makes it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2021.10. For more information, see https://www.lintim.net

The high complexity of civil engineering structures makes it difficult to evaluate their reliability satisfactorily. However, a good risk assessment of such structures is extremely important for averting dangers and possible disasters to public life. For this purpose, we need algorithms that reliably and efficiently deliver estimates of failure probabilities and whose results enable a better understanding of structural reliability. This is a major challenge, especially when dynamics, for example due to uncertainties or time-dependent states, must be included in the model.
The contributions are centered around Subset Simulation, a very popular adaptive Monte Carlo method for reliability analysis in the engineering sciences. It estimates small failure probabilities in high dimensions particularly well and is therefore tailored to the demands of many complex problems. We modify Subset Simulation and couple it with interpolation methods in order to keep its remarkable properties while obtaining all conditional failure probabilities with respect to one variable of the structural reliability model. This covers many sorts of model dynamics with several model constellations, such as time-dependent modeling, sensitivity and uncertainty, in an efficient way, requiring computational effort similar to a static reliability analysis of one model constellation by Subset Simulation. The algorithm offers many new opportunities for reliability evaluation and can even be used to verify results of Subset Simulation by artificially manipulating the geometry of the underlying limit state in numerous ways, providing correct results where Subset Simulation systematically fails. To improve understanding and further account for model uncertainties, we present a new visualization technique that matches the extensive information on reliability delivered by the novel algorithm.
In addition to these extensions, we are also dedicated to the fundamental analysis of Subset Simulation, partially bridging the gap between theory and simulation results where inconsistencies exist. Based on these findings, we extend practical recommendations on the selection of the intermediate probability with respect to the implementation of the algorithm and derive a formula for correcting the bias. For a better understanding, we also provide another stochastic interpretation of the algorithm and offer alternative implementations that stick to the theoretical assumptions typically made in its analysis.
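For orientation, the standard (unmodified) Subset Simulation scheme that such contributions build on can be sketched as follows, for a standard-normal input space where failure means \(g(x) \le 0\). The step size, level cap and the simple random-walk Metropolis move are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0):
    """Minimal Subset Simulation sketch for P(g(X) <= 0), X standard normal:
    intermediate thresholds at the p0-quantile of g, conditional sampling by
    a random-walk Metropolis step restricted to the current level set."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(g, 1, x)
    prob = 1.0
    for _ in range(30):                              # safety cap on levels
        c = np.quantile(y, p0)                       # intermediate threshold
        if c <= 0.0:                                 # failure region reached
            return prob * np.mean(y <= 0.0)
        prob *= p0
        keep = np.argsort(y)[: max(1, int(p0 * n))]  # seeds: best p0*n samples
        x, y = x[keep], y[keep]
        while len(x) < n:                            # regrow the population
            cand = x + 0.5 * rng.standard_normal(x.shape)
            ratio = np.exp(0.5 * (np.sum(x ** 2, axis=1)
                                  - np.sum(cand ** 2, axis=1)))
            acc = rng.uniform(size=len(x)) < ratio   # standard-normal target
            cand_y = np.apply_along_axis(g, 1, cand)
            acc &= cand_y <= c                       # stay inside {g <= c}
            new_x = np.where(acc[:, None], cand, x)
            new_y = np.where(acc, cand_y, y)
            x = np.vstack([x, new_x])
            y = np.concatenate([y, new_y])
        x, y = x[:n], y[:n]
    return prob * np.mean(y <= 0.0)
```

With a limit state such as \(g(x) = 3 - x_1\), the true failure probability is \(1 - \Phi(3) \approx 1.35 \cdot 10^{-3}\), which plain Monte Carlo with a thousand samples can barely resolve; the level-wise product of conditional probabilities is what makes the small-probability regime tractable.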

A significant step in engineering design is to take uncertainties into account and to develop optimal designs that are robust with respect to perturbations. Furthermore, it is often of interest to optimize for different conflicting objective functions describing the quality of a design, leading to a multi-objective optimization problem. In this context, generating methods for solving multi-objective optimization problems seek to find a representative set of solutions fulfilling the concept of Pareto optimality. When multiple uncertain objective functions are involved, it is essential to define suitable measures of robustness that account for the combined effect of uncertainties in objective space. Many tasks in engineering design include the solution of an underlying partial differential equation that can be computationally expensive. Thus, it is of interest to use efficient strategies for finding optimal designs. This research aims to present suitable measures of robustness in a multi-objective context, as well as optimization strategies for multi-objective robust design.
This work introduces new ideas for robustness measures in the context of multi-objective robust design. Losses and expected losses based on distances in objective space are used to describe robustness. A direct formulation and a two-phase formulation based on expected losses are proposed for finding a set of robust optimal solutions.
Furthermore, suitable optimization strategies for solving the resulting multi-objective robust design problem are formulated and analyzed. The multi-objective optimization problem is solved with a constraint-based approach that rests on solving several constrained single-objective optimization problems with a hybrid optimization strategy. The hybrid method combines a global search method on a surrogate model with adjoint-based optimization methods. In the context of optimization with an underlying partial differential equation, a one-shot approach is extended to handle additional constraints.
The developed concepts for multi-objective robust design and the proposed optimization strategies are applied to an aerodynamic shape optimization problem. The drag coefficient and the lift coefficient are optimized under consideration of uncertainties in the operational conditions and geometrical uncertainties. The uncertainties are propagated with the help of a non-intrusive polynomial chaos approach. To increase efficiency when considering a higher-dimensional random space, use is made of a Karhunen-Loève expansion and a dimension-adaptive sparse grid quadrature.
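A constraint-based approach of the kind described above, reducing the multi-objective problem to a sequence of constrained single-objective problems, can be sketched with a standard epsilon-constraint scalarization on a toy bi-objective problem. The solver choice (SciPy's SLSQP via `minimize`) and the test functions are illustrative assumptions; the thesis's hybrid surrogate/adjoint strategy is far more elaborate.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def epsilon_constraint_front(f1, f2, x0, eps_values):
    """Epsilon-constraint scalarization: minimize f1 subject to f2(x) <= eps
    for a sweep of eps values, tracing representative Pareto-optimal points."""
    front = []
    for eps in eps_values:
        res = minimize(f1, x0,
                       constraints=[NonlinearConstraint(f2, -np.inf, eps)])
        if res.success:
            front.append((float(f1(res.x)), float(f2(res.x))))
    return front

# toy bi-objective: squared distances to two anchor points a and b,
# whose Pareto set is the segment between the anchors
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))
front = epsilon_constraint_front(f1, f2, np.array([0.5, 0.0]),
                                 [0.04, 0.25, 0.64])
```

Tightening the bound on the second objective forces the minimizer of the first objective along the Pareto front, which is how a representative solution set is generated one constrained subproblem at a time.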

Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around one or several sites of vaso-occlusion are typical for GBM and hint at a poor prognosis for patient survival.
This thesis focuses on studying the establishment and maintenance of these histological patterns specific to GBM, with the aim of modeling the microlocal tumor environment under the influence of acidity, tissue anisotropy and hypoxia-induced angiogenesis. This aim is reached with two classes of models: multiscale and multiphase. Each of them features a reaction-diffusion equation (RDE) for the acidity acting as a chemorepellent and inhibitor of growth, coupled in a nonlinear way to a reaction-diffusion-taxis equation (RDTE) for glioma dynamics. The numerical simulations of the resulting systems are able to reproduce pseudopalisade-like patterns. The effect of tumor vascularization on these patterns is studied through a flux-limited model belonging to the multiscale class. Thereby, PDEs of reaction-diffusion-taxis type are deduced for glioma and endothelial cell (EC) densities, with flux-limited pH-taxis for the tumor and chemotaxis towards vascular endothelial growth factor (VEGF) for ECs. These, in turn, are coupled to RDEs for acidity and for VEGF produced by the tumor. The numerical simulations of the obtained system show pattern disruption and transient behavior due to hypoxia-induced angiogenesis. Moreover, comparing two upscaling techniques through numerical simulations, we observe that the macroscopic PDEs obtained via parabolic scaling (undirected tissue) are able to reproduce glioma patterns, while no such patterns are observed for the PDEs arising from a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and how they are related.
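The core coupling, a diffusing, logistically growing glioma density repelled by an acidity field that it itself produces, can be caricatured by an explicit 1D finite-difference scheme. All coefficients, the scaling and the boundary handling below are illustrative assumptions, far simpler than the multiscale and multiphase models of the thesis.

```python
import numpy as np

def glioma_acidity_1d(nx=100, nt=2000, dt=1e-4, Du=0.01, Ds=0.1,
                      chi=0.05, r=1.0, alpha=1.0, beta=0.5):
    """Explicit finite-difference caricature of the coupling: glioma density
    u diffuses, grows logistically and drifts away from acidity s
    (repellent pH-taxis), while s diffuses, is produced by u and decays."""
    dx = 1.0 / nx
    xs = np.linspace(0.0, 1.0, nx)
    u = np.exp(-200.0 * (xs - 0.5) ** 2)      # initial tumor bulk
    s = np.zeros(nx)                          # no acidity at the start
    for _ in range(nt):
        sx = np.gradient(s, dx)
        lap_u = np.gradient(np.gradient(u, dx), dx)
        lap_s = np.gradient(sx, dx)
        taxis = np.gradient(u * sx, dx)       # +chi*div(u grad s): repellent
        u = u + dt * (Du * lap_u + chi * taxis + r * u * (1.0 - u))
        s = s + dt * (Ds * lap_s + alpha * u - beta * s)
    return u, s
```

Even this crude sketch shows the qualitative mechanism behind pseudopalisade-like patterns: the tumor acidifies its own center and the repellent taxis pushes cells outward, away from the acidic core.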

Life insurance companies are required by the Solvency II regime to hold capital against economically adverse developments. This ensures that they are continuously able to meet their payment obligations towards the policyholders. When relying on an internal model approach, an insurer's solvency capital requirement is defined as the 99.5% value-at-risk of its full loss probability distribution over the coming year. In the introductory part of this thesis, we provide the actuarial modeling tools and risk aggregation methods by which companies can derive these forecasts. Since the industry still lacks the computational capacity to fully simulate these distributions, insurers have to resort to suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. We dedicate the first part of this thesis to establishing a theoretical framework for the LSMC method. We start with how LSMC for calculating capital requirements is related to its original use in American option pricing. Then we decompose LSMC into four steps. In the first, the Monte Carlo simulation setting is defined. The second and third steps serve the calibration and validation of the proxy function, and the fourth step yields the loss distribution forecast by evaluating the proxy model. When guiding through the steps, we address practical challenges and propose an adaptive calibration algorithm. We conclude with a slightly disguised real-world application. The second part builds upon the first by taking up the LSMC framework and diving deeper into its calibration step.
After a literature review and a basic recapitulation, various adaptive machine learning approaches relying on least-squares regression and model selection criteria are presented as solutions to the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants through GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of the regression ingredients mathematically and compare their approximation quality in slightly altered real-world experiments. Thereby, we perform sensitivity analyses, discuss numerical stability and run comprehensive out-of-sample tests. The scope of the analyzed regression variants extends to other high-dimensional variable selection applications. Life insurance contracts with early exercise features can be priced by LSMC as well, due to their analogies to American options. In the third part of this thesis, equity-linked contracts with American-style surrender options and minimum interest rate guarantees payable upon contract termination are valued. We allow randomness and jumps in the movements of the interest rate, stochastic volatility, the stock market and mortality. For the simultaneous valuation of numerous insurance contracts, a hybrid probability measure and an additional regression function are introduced. Furthermore, an efficient seed-related simulation procedure accounting for the forward discretization bias and a validation concept are proposed. An extensive numerical example rounds off the last part.
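The LSMC recipe, few noisy fitting scenarios, a least-squares proxy, then a cheap evaluation of the proxy on many outer scenarios to read off the 99.5% quantile, can be sketched in a one-risk-factor toy setting. The polynomial proxy, the scenario ranges and the synthetic inner-loss function are illustrative assumptions, not the calibration machinery studied in the thesis.

```python
import numpy as np

def lsmc_var(inner_loss, n_fit=200, n_inner=2, n_outer=100_000,
             degree=2, seed=0):
    """Toy one-risk-factor LSMC: average a few inner simulations per fitting
    scenario, fit a polynomial loss proxy by least squares, then evaluate
    the proxy on many outer scenarios and read off the 99.5% quantile."""
    rng = np.random.default_rng(seed)
    x_fit = rng.uniform(-3.0, 3.0, n_fit)          # fitting scenarios
    y_fit = np.mean([inner_loss(x_fit, rng) for _ in range(n_inner)], axis=0)
    coeffs = np.polyfit(x_fit, y_fit, degree)      # least-squares proxy
    x_outer = rng.standard_normal(n_outer)         # outer real-world scenarios
    return float(np.quantile(np.polyval(coeffs, x_outer), 0.995))
```

The point of the construction is that only `n_fit * n_inner` expensive inner simulations are needed, while the quantile is taken over a hundred thousand cheap proxy evaluations.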

Linear algebra, together with polynomial arithmetic, is the foundation of computer algebra. The algorithms have improved over the last 20 years, and the current state-of-the-art algorithms for the matrix inverse, the solution of a linear system and determinants have a theoretical sub-cubic complexity. This thesis presents fast and practical algorithms for some classical problems in linear algebra over number fields and polynomial rings. Here, a number field is a finite extension of the field of rational numbers, and the polynomial rings considered in this thesis are over finite fields.
One of the key problems of symbolic computation is intermediate coefficient swell: the bit length of intermediate results can grow during the computation compared to those in the input and output. The standard strategy to overcome this is not to compute the result directly but to compute it modulo some other numbers, using either the Chinese remainder theorem (CRT) or a variation of Newton-Hensel lifting. Often, the final step of these algorithms is combined with reconstruction methods such as rational reconstruction to convert the integral result into the rational solution. Here, we present reconstruction methods over number fields with a fast and simple vector-reconstruction algorithm.
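Over the rationals, the rational-reconstruction step mentioned above can be sketched as follows; this is the classical extended-Euclidean version for integers, not the number-field generalization developed in the thesis.

```python
from math import gcd, isqrt

def rational_reconstruction(a, m):
    """Recover n/d with n = a*d (mod m) and |n|, d <= sqrt(m/2), by running
    the extended Euclidean algorithm on (m, a) and stopping halfway.
    The invariant r_i = t_i * a (mod m) makes r1/t1 the candidate fraction."""
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        raise ValueError("no rational reconstruction exists")
    return (r1, t1) if t1 > 0 else (-r1, -t1)
```

For example, 2/3 reduces to 33 modulo 97 (since the inverse of 3 is 65 and 2 * 65 = 33 mod 97), and the algorithm recovers the pair (2, 3) from (33, 97).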
The state-of-the-art method for computing the determinant over the integers is due to Storjohann. When generalizing his method to number fields, we encountered the problem that modules generated by the rows of a matrix over a number field are in general not free, so Storjohann's method cannot be used directly. Therefore, we used the theory of pseudo-matrices to overcome this problem. As a sub-problem of this application, we generalized a unimodular certification method to pseudo-matrices: similarly to the integer case, we check whether the determinant of the given pseudo-matrix is a unit by testing the integrality of the corresponding dual module using higher-order lifting.
One of the main algorithms in linear algebra is Dixon's algorithm for linear system solving. Traditionally this algorithm is used only for square systems having a unique solution. Here we generalize the Dixon algorithm to non-square linear system solving. As the solution is not unique, we use a basis of the kernel to normalize the solution. The implementation is accompanied by a fast kernel computation algorithm that also extends to computing the reduced row echelon form of a matrix over the integers and number fields.
The fast implementations for computing the characteristic polynomial and the minimal polynomial over number fields use the CRT-based modular approach. Finally, we extended Storjohann's determinant computation algorithm to polynomial rings over finite fields, along with its sub-algorithms for reconstruction and unimodular certification. In this case, we face the problem of intermediate degree swell. To avoid this phenomenon, we used higher-order lifting techniques in the unimodular certification algorithm. We successfully used the half-gcd approach to optimize the rational polynomial reconstruction.

Dealing with uncertain structures or data has lately received much attention in discrete optimization. This thesis addresses two different areas of discrete optimization: connectivity and covering.
When discussing uncertain structures in networks, it is often of interest to determine how many vertices or edges may fail while the network stays connected.
Connectivity is a broad, well-studied topic in graph theory. One of the most important results in this area is Menger's Theorem, which states that the minimum number of vertices needed to separate two non-adjacent vertices equals the maximum number of internally vertex-disjoint paths between these vertices. Here, we discuss mixed forms of connectivity in which both vertices and edges are removed from a graph at the same time. The Beineke-Harary Conjecture states that for any two distinct vertices that can be separated with k vertices and l edges, but not with k-1 vertices and l edges or with k vertices and l-1 edges, there exist k+l edge-disjoint paths between them of which k+1 are internally vertex-disjoint. In contrast to Menger's Theorem, the existence of the paths is not sufficient for the connectivity statement to hold. Our main contribution is a proof of the Beineke-Harary Conjecture for the case l=2.
We also consider different problems from the area of facility location and covering. We regard problems in which we are given sets of locations and regions, where each region has an assigned number of clients. We then look for an allocation of suppliers to the locations such that each client is served by some supplier. The notable difference to other covering problems is that each supplier may only serve a fixed number of clients, which is not part of the input. We discuss the complexity of, and solution approaches for, three such problems, which vary in the way the clients are assigned to the suppliers.
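Menger's Theorem has a well-known computational counterpart: the maximum number of internally vertex-disjoint paths between two non-adjacent vertices equals a unit-capacity maximum flow after node splitting. The sketch below (plain BFS augmentation, illustrative and unoptimized) demonstrates this classical construction, which the mixed-connectivity results above generalize; the adjacency-dict input format is an assumption for the example.

```python
from collections import defaultdict, deque

def max_vertex_disjoint_paths(adj, s, t):
    """Count internally vertex-disjoint s-t paths via max-flow: split each
    vertex v into (v,'in')->(v,'out') with capacity 1 (unbounded for s, t),
    give every original edge capacity 1, and augment along BFS paths."""
    cap, nbrs = defaultdict(int), defaultdict(set)
    def add_edge(u, v, c):
        cap[(u, v)] += c
        nbrs[u].add(v)
        nbrs[v].add(u)                       # residual arc (capacity 0)
    n = len(adj)
    for v in adj:
        add_edge((v, 'in'), (v, 'out'), 1 if v not in (s, t) else n)
        for w in adj[v]:
            add_edge((v, 'out'), (w, 'in'), 1)
    source, sink = (s, 'in'), (t, 'out')
    flow = 0
    while True:
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v in nbrs[u]:
                if cap[(u, v)] > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        v = sink
        while parent[v] is not None:         # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

By Menger's Theorem, the returned flow value is simultaneously the minimum number of vertices whose removal separates s from t, which is exactly the kind of equality that fails to transfer directly to the mixed vertex-and-edge setting discussed above.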