## The 10 most recently published documents

Life insurance companies are required under the Solvency II regime to hold capital against economically adverse developments. This ensures that they are continuously able to meet their payment obligations to policyholders. When relying on an internal model approach, an insurer's solvency capital requirement is defined as the 99.5% value-at-risk of its full loss probability distribution over the coming year. In the introductory part of this thesis, we provide the actuarial modeling tools and risk aggregation methods by which companies can derive these forecasts. Since the industry still lacks the computational capacity to fully simulate these distributions, insurers must resort to suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss. We dedicate the first part of this thesis to establishing a theoretical framework for the LSMC method. We start by showing how LSMC for calculating capital requirements is related to its original use in American option pricing. Then we decompose LSMC into four steps. In the first step, the Monte Carlo simulation setting is defined. The second and third steps serve the calibration and validation of the proxy function, and the fourth step yields the loss distribution forecast by evaluating the proxy model. While guiding through the steps, we address practical challenges and propose an adaptive calibration algorithm. We conclude with a slightly disguised real-world application. The second part builds upon the first by taking up the LSMC framework and diving deeper into its calibration step.
After a literature review and a basic recapitulation, various adaptive machine learning approaches relying on least-squares regression and model selection criteria are presented as solutions to the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants through GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of the regression ingredients mathematically and compare their approximation quality in slightly altered real-world experiments. In doing so, we perform sensitivity analyses, discuss numerical stability and run comprehensive out-of-sample tests. The scope of the analyzed regression variants extends to other high-dimensional variable selection applications. Life insurance contracts with early exercise features can be priced by LSMC as well, owing to their analogy to American options. In the third part of this thesis, equity-linked contracts with American-style surrender options and minimum interest rate guarantees payable upon contract termination are valued. We allow for randomness and jumps in the movements of the interest rate, stochastic volatility, the stock market and mortality. For the simultaneous valuation of numerous insurance contracts, a hybrid probability measure and an additional regression function are introduced. Furthermore, an efficient seed-related simulation procedure accounting for the forward discretization bias and a validation concept are proposed. An extensive numerical example rounds off the last part.
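The core of the calibration and evaluation steps described above can be sketched in a few lines. The following toy example is an illustration only, not the thesis's actual model: the single risk factor, the "true" loss function, the noise level and the polynomial degree are all invented for the sketch. It fits a least-squares polynomial proxy to noisy fictitious valuations and then evaluates the cheap proxy on many scenarios to estimate the 99.5% value-at-risk.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def true_loss(s):
    """Fictitious 'true' loss in one invented risk factor s.
    In practice this function is unknown; only expensive, noisy
    inner simulations of it are available."""
    return 0.5 * s**2 + 0.3 * s

# Fitting points: a few thousand outer scenarios, each valued by a
# noisy inner simulation of the loss.
x = rng.uniform(-3.0, 3.0, size=2000)
y = true_loss(x) + rng.normal(0.0, 1.0, size=x.size)

# Calibration step: ordinary least-squares fit of a polynomial proxy.
degree = 3
coeffs = np.polynomial.polynomial.polyfit(x, y, degree)

def proxy(s):
    """Cheap risk-dependent proxy function of the loss."""
    return np.polynomial.polynomial.polyval(s, coeffs)

# Evaluation step: run the proxy on many real-world scenarios and
# read off the 99.5% value-at-risk of the forecast loss distribution.
scenarios = rng.normal(0.0, 1.0, size=100_000)
var_995 = np.quantile(proxy(scenarios), 0.995)
print(f"99.5% VaR estimate: {var_995:.2f}")
```

In the full method, the proxy depends on many risk factors and the basis functions are chosen adaptively, but the principle of replacing nested simulation by a regression surrogate is the same.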

High-performance fiber-reinforced concrete (HPFRC) has been investigated frequently in recent years. Many studies have focused on different materials and types of fibers in combination with the concrete matrix. Experimental tests show that fiber dosage improves the energy absorption capacity of concrete and enhances the robustness of concrete elements. Fiber-reinforced concrete has also been shown to be a material for improving the sustainability of reinforced-concrete (RC) elements such as façade plates, columns, beams, or walls. Owing to the increasing cost of fiber-reinforced concrete, and to ensure the serviceability limit state of construction elements, there is a need to determine the necessary fiber dosage in the concrete composition. The surface and length of the fibers used, in combination with their dosage, are expected to influence the structure of fresh and hardened concrete. This work investigates the mechanical parameters of HPFRC with different polymer fiber dosages. Tests were carried out on mixtures with polypropylene and polyvinyl alcohol fibers at dosages of 15, 25, and 35 kg/m³, as well as on a control concrete without fibers. Differences were observed in the compressive strength and modulus of elasticity, as well as in the flexural and splitting tensile strength. The flexural tensile strength test was conducted on two different element shapes: square panels and beams. These mechanical properties could lead to recommendations for designers of façade elements made of HPFRC.

Small embedded devices are highly specialized platforms that integrate several peripherals alongside the CPU core. Embedded devices rely extensively on firmware (FW) to control and access the peripherals as well as other important functionality. Customizing embedded computing platforms to specific application domains often necessitates optimizing the firmware and/or the HW/SW interface under tight resource constraints. Such optimizations frequently alter the communication between the firmware and the peripheral devices, possibly compromising the functional correctness of the input/output behavior of the embedded system. This poses challenges to the development and verification of such systems: the system must be adapted to and verified for each specific device configuration.
This thesis presents a formal approach to formulate these verification tasks at several levels of abstraction, along with corresponding HW/SW co-equivalence checking techniques for verifying correct I/O behavior of peripherals under a modified firmware. The feasibility of the approach is shown on several case studies, including industrial driver software as well as open-source peripherals. In addition, a subtle bug in one of the peripherals and several undocumented preconditions for correct device behavior were detected by the verification method.
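The underlying notion of I/O equivalence can be illustrated in a deliberately simplified way, not reflecting the thesis's formal techniques: two firmware variants are compared by checking whether they produce the same observable register-write trace on a peripheral model. The peripheral, the register names and both firmware routines below are invented for the example; a real co-equivalence check explores the state space symbolically with formal methods rather than by enumerating inputs.

```python
class UartModel:
    """Toy peripheral model that records observable register writes."""
    def __init__(self):
        self.trace = []

    def write(self, reg, val):
        self.trace.append((reg, val))

def fw_original(dev, byte):
    """Hypothetical baseline firmware: enable, then transmit."""
    dev.write("CTRL", 0x01)
    dev.write("DATA", byte)

def fw_optimized(dev, byte):
    """Hypothetical optimized firmware that merges both accesses into a
    single combined write -- observably different at the bus level."""
    dev.write("CTRL_DATA", (0x01 << 8) | byte)

def io_equivalent(fw_a, fw_b, stimuli):
    """Compare observable I/O traces of two firmware variants over a
    set of stimuli; return (True, None) or (False, counterexample)."""
    for byte in stimuli:
        a, b = UartModel(), UartModel()
        fw_a(a, byte)
        fw_b(b, byte)
        if a.trace != b.trace:
            return False, byte
    return True, None

ok, cex = io_equivalent(fw_original, fw_optimized, range(256))
print("equivalent" if ok else f"counterexample input: {cex}")
```

The check correctly flags the optimized variant as observably different, which mirrors the kind of altered HW/SW communication the verification approach is designed to catch.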

To render membrane proteins amenable to in vitro functional and structural studies, they need to be extracted from cellular membranes and stabilised using membrane-mimetic systems. Amphiphilic copolymers have gained considerable interest because they are able to coextract membrane proteins and their surrounding lipids from complex cellular membranes to form polymer-bounded nanodiscs. The latter harbour a native-like lipid-bilayer core stabilised by a copolymer rim. Accordingly, these membrane mimics are expected to provide superior stability to embedded membrane proteins as compared with conventional detergent micelles. Herein, the formation of nanodiscs by the most commonly used styrene/maleic acid (SMA) copolymer, termed SMA(2:1), was elucidated in detail. To this end, the equilibrium solubilisation efficiencies towards model and cellular membranes were quantified and compared with those of the more hydrophobic SMA(3:1) and the more hydrophilic diisobutylene/maleic acid (DIBMA) copolymers. It was shown that, from a thermodynamic viewpoint, SMA(2:1) is the most efficient membrane solubiliser in terms of lipid- and protein-extraction yields. Solvent properties (pH, ionic strength) and membrane characteristics (lateral pressure, charge, or thickness) can affect the polymers' solubilisation efficiency to a certain extent. In addition, the lipid-transfer behaviour of SMA(2:1) nanodiscs was studied. Notwithstanding their high effective negative charge, SMA(2:1) nanodiscs exchange phospholipids among each other more rapidly than vesicles or protein-bounded nanodiscs, rendering them highly dynamic nano-assemblies. Two alternative electroneutral polymers, namely SMA(2:1)-SB and DIBMA-SB, were introduced in this thesis. They were generated by backbone modifications of SMA(2:1) and DIBMA, respectively. The derivatised polymers were shown to quantitatively solubilise model and biological membranes and, like DIBMA, had only a mild effect on lipid-bilayer integrity. Along these lines, DIBMA-SB preserved membrane-protein complexes of distinct structural classes and extracted them from various cellular membranes. Importantly, the electroneutral polymers were amenable to protein/lipid interaction studies otherwise masked by unspecific interactions of their anionic counterparts with target lipids or proteins. Taken together, the in-depth characterisation of nanodiscs formed by anionic and electroneutral polymers allows the nanodisc properties to be adjusted to specifically suit experimental requirements or address membrane-protein research questions.

Linear algebra, together with polynomial arithmetic, is the foundation of computer algebra. The algorithms have improved over the last 20 years, and the current state-of-the-art algorithms for matrix inversion, linear system solving and determinants have a theoretical sub-cubic complexity. This thesis presents fast and practical algorithms for some classical problems in linear algebra over number fields and polynomial rings. Here, a number field is a finite extension of the field of rational numbers, and the polynomial rings considered in this thesis are over finite fields.
One of the key problems of symbolic computation is intermediate coefficient swell: the bit length of intermediate results can grow during the computation compared to those in the input and output. The standard strategy to overcome this is not to compute the number directly but to compute it modulo some other numbers, using either the Chinese remainder theorem (CRT) or a variation of Newton-Hensel lifting. Often, the final step of these algorithms is combined with reconstruction methods such as rational reconstruction to convert the integral result into the rational solution. Here, we present reconstruction methods over number fields with a fast and simple vector-reconstruction algorithm.
The state-of-the-art method for computing the determinant over the integers is due to Storjohann. When generalizing his method to number fields, we encountered the problem that modules generated by the rows of a matrix over a number field are in general not free, so Storjohann's method cannot be used directly. Therefore, we used the theory of pseudo-matrices to overcome this problem. As a sub-problem of this application, we generalized a unimodular certification method to pseudo-matrices: similar to the integer case, we check whether the determinant of a given pseudo-matrix is a unit by testing the integrality of the corresponding dual module using higher-order lifting.
One of the main algorithms in linear algebra is Dixon's algorithm for linear system solving. Traditionally, this algorithm is used only for square systems with a unique solution. Here, we generalize the Dixon algorithm to non-square linear systems. As the solution is not unique, we use a basis of the kernel to normalize the solution. The implementation is accompanied by a fast kernel computation algorithm that also extends to computing the reduced row echelon form of a matrix over the integers and number fields.
The fast implementations for computing the characteristic and minimal polynomials over number fields use the CRT-based modular approach. Finally, we extended Storjohann's determinant computation algorithm to polynomial rings over finite fields, along with its sub-algorithms for reconstruction and unimodular certification. In this case, we face the problem of intermediate degree swell. To avoid this phenomenon, we used higher-order lifting techniques in the unimodular certification algorithm. We also successfully used the half-gcd approach to optimize rational polynomial reconstruction.
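The CRT-based modular strategy against coefficient swell can be illustrated in its simplest setting, an integer determinant: compute the determinant modulo several primes, recombine with the Chinese remainder theorem, and lift symmetrically. The matrix, the primes and the plain Gaussian elimination below are illustrative choices only; the thesis's algorithms work in the far more involved settings of number fields and polynomial rings.

```python
from math import prod

def det_mod_p(a, p):
    """Determinant of a square integer matrix modulo a prime p,
    via Gaussian elimination over GF(p)."""
    n = len(a)
    m = [[x % p for x in row] for row in a]
    det = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col]), None)
        if piv is None:
            return 0
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det % p          # row swap flips the sign
        det = det * m[col][col] % p
        inv = pow(m[col][col], -1, p)
        for r in range(col + 1, n):
            f = m[r][col] * inv % p
            for c in range(col, n):
                m[r][c] = (m[r][c] - f * m[col][c]) % p
    return det

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def det_via_crt(a, primes):
    """Recover the integer determinant from its modular images.
    The product of the primes must exceed twice a bound on |det|."""
    M = prod(primes)
    x = crt([det_mod_p(a, p) for p in primes], primes)
    return x if x <= M // 2 else x - M   # symmetric lift

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
print(det_via_crt(A, [101, 103, 107]))   # -90
```

In practice the number of primes is driven by a Hadamard-style bound, and the final step may instead use rational reconstruction when the target is a rational rather than an integral result.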

Amino acids, apart from being building blocks of proteins, serve various cellular and metabolic functions1,2. Changes in amino acid handling have been observed in a wide range of human pathologies, including diabetes and various metabolic disorders (aminoacidopathies)3–5. Saccharomyces cerevisiae is used as a model to investigate how an increase in amino acid content (in the form of an amino acid dropout mix, AAM) in the growth medium influences cell growth. Intriguingly, it was observed that increasing the concentration of AAM in the medium (doubled or tripled; 2X AAM and 3X AAM, respectively) severely affects the growth of auxotrophic but not of prototrophic yeast strains in the presence of glucose as carbon substrate. An increased concentration of Ehrlich amino acids, which are degraded to fusel acidic/alcoholic compounds, induced the observed slow-growth phenotype of BY4742. These phenotypes can be rescued either by re-establishing the functional leucine biosynthetic pathway in BY4742 (a leucine auxotroph) or by increasing leucine in proportion to the increased AAM. Interestingly, the amino acid-dependent growth phenotypes are absent when cells grow in media containing non-fermentable carbon sources. Furthermore, the deletion of ILV2 or ILV3 (genes encoding enzymes involved in the leucine biosynthetic pathway) also rescues the growth phenotype of BY4742 on 2X AAM and 3X AAM growth media. It was found that Ilv3 is the potential switching point and links cellular growth to redox homeostasis. Leucine limitation per se, as well as transport competition between different Ehrlich amino acids and leucine, is ruled out as a cause of the observed phenotypes. Upregulation of the branched-chain amino acid pathway inhibits cell growth of BY4742 on 2X AAM. Although we could not detect KIV, the α-keto acid intermediate formed by Ilv3, it is proposed that KIV itself (or an unknown downstream product) leads to the onset of the observed phenotypes.
Different studies suggest that oxidative stress (due to accumulation of branched-chain amino acids (BCaa) and their α-keto acids) contributes to the neurological damage of MSUD patients6–9. It was also observed that triggering the BCaa biosynthesis pathway under 2X AAM growth conditions contributes to significant oxidative stress in the cell. In conclusion, we propose that yeast can be used as a suitable model system to study how the accumulation of BCaa and their α-keto acids leads to oxidative stress that is potentially toxic to cells. Furthermore, this knowledge and the underlying molecular mechanisms will enhance our understanding of MSUD in humans.

The emerging field of magnonics uses spin waves and their quanta, magnons, to implement wave-based computing on the micro- and nanoscale. Multifrequency magnon networks would allow for parallel data processing within single logic elements, which is not possible with conventional transistor-based electronic logic. However, a lack of experimentally proven solutions to efficiently combine and separate magnons of different frequencies has impeded the intensive use of this concept. Herein, the experimental realization of a spin-wave demultiplexer enabling frequency-dependent separation of magnonic signals in the gigahertz range is demonstrated. The device is based on 2D magnon transport in the form of spin-wave beams in unpatterned magnetic films. The intrinsic frequency dependence of the beam direction is exploited to realize passive operation that obviates external control and additional power consumption. This approach paves the way to magnonic multiplexing circuits enabling simultaneous information transport and processing.

This work presents a visual analytics-driven workflow for an interpretable and understandable machine learning model. The model is motivated by a reverse engineering task in automotive assembly processes: it aims to predict the assembly parameters that lead to a given displacement field on the geometry's surface. The derived model can work on both measurement and simulation data. The proposed approach is driven by scientific goals from visual analytics and interpretable artificial intelligence alike. First, a concept for systematic uncertainty monitoring, an object-oriented virtual reference scheme (OOVRS), is developed. Afterward, the prediction task is solved via a regression machine learning model using adversarial neural networks. A profound model parameter study is conducted and assisted by an interactive visual analytics pipeline. Further, the effects of the learned variance in displacement fields are analyzed in detail. To this end, a visual analytics pipeline is developed, resulting in a sensitivity benchmarking tool that allows testing various segmentation approaches to lower the machine learning input dimensions. The effects of the assembly parameters are investigated in domain space to find a suitable segmentation of the training data set's geometry, and a sensitivity matrix visualization is developed for this purpose. Further, it is shown how this concept can directly compare results from various segmentation methods, e.g., topological segmentation, with respect to the assembly parameters and their impact on the displacement field variance. The resulting databases are still of substantial size for complex simulations with large and high-dimensional parameter spaces. Finally, the applicability of video compression techniques to compressing visualization image databases is studied.

Dealing with uncertain structures or data has recently attracted much attention in discrete optimization. This thesis addresses two different areas of discrete optimization: connectivity and covering.
When discussing uncertain structures in networks, it is often of interest to determine how many vertices or edges may fail while the network remains connected.
Connectivity is a broad, well-studied topic in graph theory. One of the most important results in this area is Menger's Theorem, which states that the minimum number of vertices needed to separate two non-adjacent vertices equals the maximum number of internally vertex-disjoint paths between these vertices. Here, we discuss mixed forms of connectivity in which both vertices and edges are removed from a graph at the same time. The Beineke–Harary Conjecture states that for any two distinct vertices that can be separated with k vertices and l edges, but not with k-1 vertices and l edges or with k vertices and l-1 edges, there exist k+l edge-disjoint paths between them, of which k are internally vertex-disjoint. In contrast to Menger's Theorem, the existence of the paths is not sufficient for the connectivity statement to hold. Our main contribution is the proof of the Beineke–Harary Conjecture for the case that l equals 2.
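Menger's Theorem is constructive via network flows: splitting every internal vertex into an in/out pair joined by a unit-capacity edge turns the maximum number of internally vertex-disjoint paths into a maximum-flow computation. The small example graph and the plain Edmonds–Karp routine below are illustrative only, not taken from the thesis.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a capacity dict-of-dicts,
    augmenting one unit at a time (all finite capacities are 1)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # push one unit along the path, adding residual edges
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] = cap[v].get(u, 0) + 1
            v = u
        flow += 1

def vertex_disjoint_paths(edges, s, t):
    """Maximum number of internally vertex-disjoint s-t paths in an
    undirected graph, via node splitting: v becomes v_in -> v_out with
    capacity 1 (unbounded for the terminals s and t)."""
    cap = defaultdict(dict)
    verts = {u for e in edges for u in e}
    for v in verts:
        cap[(v, 'in')][(v, 'out')] = float('inf') if v in (s, t) else 1
    for u, v in edges:
        cap[(u, 'out')][(v, 'in')] = 1
        cap[(v, 'out')][(u, 'in')] = 1
    return max_flow(cap, (s, 'in'), (t, 'out'))

# K4 minus the edge {0,3}: vertices 0 and 3 are non-adjacent and are
# separated by the vertex set {1, 2}, so Menger predicts 2 paths.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(vertex_disjoint_paths(edges, 0, 3))   # 2
```

The mixed (k, l)-connectivity of the conjecture has no such clean flow formulation, which is precisely what makes the statement hard: the required paths can exist without the separation bound holding.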
We also consider various problems from the area of facility location and covering. In these problems, we are given sets of locations and regions, where each region has an assigned number of clients. We then look for an allocation of suppliers to the locations such that each client is served by some supplier. The notable difference from other covering problems is that each supplier may only serve a fixed number of clients, which is not part of the input. We discuss the complexity of, and solution approaches to, three such problems, which vary in the way clients are assigned to suppliers.

Prompted by the current discussion about the continued existence of the Berlin Neutrality Act, which has so far prohibited, among others, Muslim women from wearing the headscarf while teaching in public schools, this bachelor's thesis examines the question of to what extent the Islamic headscarf can influence a woman's qualification as a teacher. To this end, the so-called headscarf debate in Germany is recapitulated by means of discourse analysis and critically discussed with regard to its pedagogical dimensions. The discussion about the compatibility of the teaching profession with the public profession of faith through the wearing of visible religious symbols, first triggered by the case of Fereshta Ludin, has split habitual coalitions of opinion and developed into a controversial and far-reaching debate. Owing to the ambiguity and conflict potential of the headscarf, the question arises of how to deal appropriately with Muslim teachers who do not wish to remove the headscarf at school. The analysis of the religious-ideological strand of the discourse shows that there is persistent disagreement about the interpretation of the state's understanding of neutrality and about the differentiation between symbols of Christian origin and the Islamic headscarf, owing to the latter's contested implications. These dissonances also appear in the feminist discourse over whether the headscarf is a symbol of the oppression of women or can also be a way of expressing self-determination and autonomy. Several studies (see Karakasoglu-Aydin 2000; Jessen, von Wilamowitz-Moellendorff 2006) make clear that women have manifold motives for following the precept of covering, so that blanket suspicion and general headscarf bans fall short. The thesis aims to show that, given the headscarf's diverse possible interpretations, teachers who wear it must be dealt with in a differentiated manner. Especially in the school context, introducing pupils to dealing with differences can make an important contribution to integration.