In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach.
They extend a Black-Scholes-type security market to include the possibility of a
crash. For the modeling of a possible stock price crash they use a Knightian uncertainty
approach and thus make no probabilistic assumptions on the crash size or the crash time distribution.
Based on an indifference argument they determine the optimal portfolio process
for an investor who wants to maximize the expected utility from final wealth. In this thesis,
the worst-case scenario approach is extended in various directions to enable the consideration
of stress scenarios, to include the possibility of asset defaults and to allow for parameter
uncertainty.
Insurance companies and banks regularly have to face stress tests performed by regulatory
authorities. In the first part we model their investment decision problem including such stress
scenarios. This leads to optimal portfolios that already account for the stress tests by construction.
The solution to this portfolio problem uses the newly introduced concept of minimum constant
portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset
defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor
does not know which asset is affected by the worst-case scenario. We solve this problem by
introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous time portfolio optimization problem that includes
the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine
the worst-case scenario approach with a model ambiguity approach that is also based on
Knightian uncertainty. We solve this portfolio problem and consider two concrete examples
with box uncertainty and ellipsoidal drift ambiguity.
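For orientation, the type of optimization problem behind this approach can be sketched in generic notation (a schematic only, not the exact formulation used in the thesis): the investor maximizes the worst-case expected utility of terminal wealth over all admissible crash scenarios,
\begin{align*}
\sup_{\pi} \, \inf_{(k,\tau)} \, \mathrm{E}\bigl[U\bigl(X^{\pi,k,\tau}(T)\bigr)\bigr],
\end{align*}
where \(\pi\) denotes the portfolio process, \(X^{\pi,k,\tau}(T)\) the terminal wealth if a crash of size \(k \in [0,k^*]\) occurs at time \(\tau\), and \(U\) the utility function; no probability distribution is placed on \((k,\tau)\), which reflects the Knightian uncertainty mentioned above.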
The knowledge of structural properties in microscopic materials contributes to a deeper understanding of macroscopic properties. For the study of such materials, several imaging techniques reaching scales in the order of nanometers have been developed. One of the most powerful and sophisticated imaging methods is focused-ion-beam scanning electron
microscopy (FIB-SEM), which combines serial sectioning by an ion beam and imaging by
a scanning electron microscope.
FIB-SEM imaging reaches extraordinary scales below 5 nm with large representative volumes. However, the complexity of the imaging process introduces artificial distortions and artifacts that degrade image quality. We introduce a method for the quality evaluation of images that analyzes general image characteristics as well as artifacts specific to FIB-SEM, namely curtaining and charging. For the evaluation, we propose quality indexes, which are tested on several data sets of porous and non-porous materials with different characteristics and distortions. The quality indexes report objective evaluations in accordance with visual judgment.
Moreover, the acquisition of large volumes at high resolution can be time-consuming. One approach to speed up the imaging is to decrease the resolution and to consider cuboidal voxel configurations. However, non-isotropic resolutions may lead to errors in the reconstructions; even if the reconstruction is correct, effects are visible in the analysis. We study the effects of different voxel settings on the prediction of material and flow properties of reconstructed structures. As expected, the results show good agreement between highly resolved cases and the ground truths. Structural anisotropy appears as the resolution decreases, especially on anisotropic grids. Nevertheless, gray-image interpolation remedies the induced anisotropy, and these benefits are visible in the flow properties as well.
For highly porous structures, the structural reconstruction is even more difficult because deeper parts of the material are visible through the pores. As an application example, we show the reconstruction of two highly porous optical-layer structures, following a typical workflow from image acquisition and preprocessing through reconstruction to spatial analysis. The case study shows the advantages of 3D imaging for optical porous layers, and the analysis reveals geometrical structural properties related to the manufacturing processes.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as in algebraic geometry, number theory and information security. In fact, they make up one of the best choices in public key cryptography and more recently in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition to that, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all the cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks - a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol, which is also capable of resisting subexponential quantum attacks and will be the first of its kind.
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific kind of map, so-called risk measures. There exist several forms and definitions of them. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European-type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of risk measure, which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure, which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
In the introductory chapter, we recall the definition and properties of multi-asset risk
measures. Then, each of the aforementioned contributions covers a separate chapter. In
the following, the content of these three chapters is explained in more detail:
Risk measures can be used to calculate pricing bounds for financial derivatives. In
Chapter 2, we deal with the pricing of European options in an incomplete financial market
model. We use the common risk measures Value-at-Risk and Expected Shortfall to define
good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may have a non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.
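For reference, one common convention for the two risk measures used here is (sign conventions vary in the literature, and the precise definitions used in the thesis may differ)
\begin{align*}
\mathrm{VaR}_{\alpha}(X) = \inf\bigl\{m \in \mathbb{R} \colon \mathrm{P}(X + m < 0) \leq \alpha\bigr\}, \qquad
\mathrm{ES}_{\alpha}(X) = \frac{1}{\alpha}\int_0^{\alpha} \mathrm{VaR}_u(X)\,\mathrm{d}u,
\end{align*}
so that Expected Shortfall averages the Value-at-Risk over all levels below \(\alpha\) and, in contrast to Value-at-Risk, is convex.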
In Chapter 3, we construct the MAI risk measure. To this end, recall that a multi-asset risk measure describes the minimal external capital that has to be raised and invested into multiple eligible assets to make a future financial position acceptable, i.e., such that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows one to secure the financial position with external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial
endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
An increasing number of today's tasks, such as speech recognition, image generation,
translation, classification or prediction, are performed with the help of machine learning.
Especially artificial neural networks (ANNs) provide convincing results for these tasks.
The reasons for this success story are the drastic increase of available data sources in
our more and more digitalized world as well as the development of remarkable ANN
architectures. This development has led to an increasing number of model parameters
together with more and more complex models. Unfortunately, this yields a loss in the
interpretability of deployed models. However, there exists a natural desire to explain the
deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding
of these models. As the name suggests, VAEs are based on standard autoencoders (AEs)
and are therefore used to perform dimensionality reduction of data. This is achieved by a
bottleneck structure within the hidden layers of the ANN. From a data input, the encoder,
that is, the part up to the bottleneck, produces a low-dimensional representation. The
decoder, the part from the bottleneck to the output, uses this representation to reconstruct
the input. The model is learned by minimizing the error from the reconstruction.
From our point of view, the most remarkable property and, hence, also a central topic
in this thesis is the auto-pruning property of VAEs. Simply speaking, the auto-pruning
prevents a VAE with thousands of parameters from overfitting. However, such a
desirable property comes with the risk of the model learning nothing at all. In this
thesis, we look at VAEs and the auto-pruning from two different angles and our main
contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so by leveraging the
framework of generalized linear models (GLMs). As a result, we are able to explain
training results of VAEs before conducting the actual training.
(ii) We construct a time dependent VAE and show the effects of the auto-pruning in
this model. As a result, we are able to model financial data sequences and estimate
the value-at-risk (VaR) of associated portfolios. Our results show that we surpass
the standard benchmarks for VaR estimation.
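To make the encoder-bottleneck-decoder structure and the origin of the auto-pruning concrete, the following is a minimal sketch in PyTorch; the layer sizes, the standard normal prior and the squared reconstruction error are illustrative choices and not the setup used in the thesis.

import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: the encoder compresses the input to a low-dimensional
    bottleneck, the decoder reconstructs the input from it."""
    def __init__(self, d_in=784, d_hidden=128, d_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)      # mean of q(z|x)
        self.logvar = nn.Linear(d_hidden, d_latent)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparametrization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # reconstruction error plus KL(q(z|x) || N(0, I)); latent dimensions whose
    # KL term collapses to zero carry no information and are effectively pruned
    rec = ((x - x_hat) ** 2).sum(dim=1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return (rec + kl).mean()

x = torch.rand(16, 784)
x_hat, mu, logvar = VAE()(x)
print(elbo_loss(x, x_hat, mu, logvar))

The KL term in this sketch is the source of the auto-pruning discussed above: unused latent dimensions are pushed towards the prior, which regularizes the heavily over-parametrized model but also carries the risk of pruning everything.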
Index Insurance for Farmers
(2021)
In this thesis we focus on weather index insurance for agricultural risk. Even though such an index insurance is easily applicable and reduces information asymmetries, the demand for it is quite low. This is in particular due to the basis risk and the lack of knowledge about its effectiveness. The basis risk is the difference between the index insurance payout and the actual loss of the insured. We evaluate the performance of weather index insurance in different contexts, because proper knowledge about index insurance will help to use it as a successful alternative to traditional crop insurance. In addition to that, we also propose and discuss methods to reduce the basis risk.
We also analyze the performance of an agriculture loan which is interlinked with a weather index insurance. We show that an index insurance with an actuarially fair or subsidized premium helps to reduce the loan default probability. While we first consider an index insurance with a commonly used linear payout function for this analysis, we later design an index insurance payout function which maximizes the expected utility of the insured. Then we show that an index insurance with that optimal payout function is more appropriate for bundling with an agriculture loan. The optimal payout function also helps to reduce the basis risk. In addition, we show that a lender who issues agriculture loans can be better off by purchasing a weather index insurance in some circumstances.
We investigate the market equilibrium for weather index insurance by assuming risk-averse farmers and a risk-averse insurer. When we consider two groups of farmers with different risks, we show that the low-risk group subsidizes the high-risk group when both pay the same premium for the index insurance. Further, according to the analysis of an index insurance in an informal risk-sharing environment, we observe that the demand for the index insurance can be increased by selling it to a group of farmers who informally share the risk based on the insurance payout, because this reduces the adverse effect of the basis risk. Besides that, we analyze the combination of an index insurance with a gap insurance. Such a combination can increase the demand and reduce the basis risk of the index insurance if we choose the correct levels of premium and of gap insurance cover. Moreover, our work shows that index insurance can be a good alternative to proportional and excess-of-loss reinsurance when it is issued at a low enough price.
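As an illustration of the commonly used linear payout function mentioned above, consider a drought-type contract written on a cumulative rainfall index; the strike, exit level and maximal payout below are purely hypothetical numbers.

def linear_index_payout(index, strike, exit_level, max_payout):
    """Linear weather-index payout: zero above the strike, maximal at or below
    the exit level, and linearly interpolated in between."""
    if index >= strike:
        return 0.0
    if index <= exit_level:
        return max_payout
    return max_payout * (strike - index) / (strike - exit_level)

# basis risk: the payout is driven by the index, not by the insured's actual loss
actual_loss = 900.0
payout = linear_index_payout(index=210.0, strike=300.0, exit_level=100.0, max_payout=1000.0)
basis_risk = actual_loss - payout  # positive: the loss is under-compensated
print(payout, basis_risk)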
LinTim is a scientific software toolbox that has been under development since 2007, giving the possibility to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2021.10. For more information, see https://www.lintim.net
The high complexity of civil engineering structures makes it difficult to satisfactorily evaluate their reliability. However, a good risk assessment of such structures is incredibly important to avert dangers and possible disasters for public life. For this purpose, we need algorithms that reliably deliver estimates for their failure probabilities with high efficiency and whose results enable a better understanding of their reliability. This is a major challenge, especially when dynamics, for example due to uncertainties or time-dependent states, must be included in the model.
The contributions are centered around Subset Simulation, a very popular adaptive Monte Carlo method for reliability analysis in the engineering sciences. It estimates small failure probabilities in high dimensions particularly well and is therefore tailored to the demands of many complex problems. We modify Subset Simulation and couple it with interpolation methods in order to keep its remarkable properties and obtain all conditional failure probabilities with respect to one variable of the structural reliability model. This covers many sorts of model dynamics with several model constellations, such as time-dependent modeling, sensitivity and uncertainty, in an efficient way, requiring computational demands similar to those of a static reliability analysis for one model constellation by Subset Simulation. The algorithm offers many new opportunities for reliability evaluation and can even be used to verify results of Subset Simulation by artificially manipulating the geometry of the underlying limit state in numerous ways, allowing it to provide correct results where Subset Simulation systematically fails. To improve understanding and further account for model uncertainties, we present a new visualization technique that matches the extensive information on reliability provided by the novel algorithm.
In addition to these extensions, we are also dedicated to the fundamental analysis of Subset Simulation, partially bridging the gap between theory and simulation results where inconsistencies exist. Based on these findings, we extend practical recommendations on the selection of the intermediate probability with respect to the implementation of the algorithm and derive a formula for the correction of the bias. For a better understanding, we also provide another stochastic interpretation of the algorithm and offer alternative implementations which stick to the theoretical assumptions typically made in the analysis.
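A crude sketch of plain Subset Simulation, the starting point of these modifications, may help to fix ideas; the standard normal input space, the component-wise Metropolis random walk for the conditional sampling and all numerical values are illustrative choices only.

import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=None):
    """Estimate P[g(X) <= 0] for X ~ N(0, I_dim): intermediate failure levels are
    the p0-quantiles of g, conditional samples are regrown by a component-wise
    Metropolis random walk restricted to the current intermediate failure domain."""
    rng = np.random.default_rng(seed)
    n_seed = int(p0 * n)
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    p_f = 1.0
    for _ in range(50):                      # cap on the number of levels
        idx = np.argsort(y)[:n_seed]         # the p0*n "most failed" samples
        level = y[idx[-1]]                   # intermediate threshold
        if level <= 0.0:                     # final failure level reached
            return p_f * np.mean(y <= 0.0)
        p_f *= p0
        chains_x, chains_y = [], []
        for xs, ys in zip(x[idx], y[idx]):
            for _ in range(int(1 / p0)):
                cand = xs + 0.5 * rng.standard_normal(dim)
                # component-wise Metropolis acceptance for the standard normal target
                accept = rng.random(dim) < np.exp(0.5 * (xs ** 2 - cand ** 2))
                prop = np.where(accept, cand, xs)
                yp = g(prop)
                if yp <= level:              # stay inside the intermediate failure domain
                    xs, ys = prop, yp
                chains_x.append(xs.copy())
                chains_y.append(ys)
        x, y = np.array(chains_x), np.array(chains_y)
    return p_f * np.mean(y <= 0.0)

# toy example: failure when the sum of 10 standard normals exceeds 9
print(subset_simulation(lambda x: 9.0 - x.sum(), dim=10, seed=0))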
The great interest in robust covering problems has many reasons, especially the plenitude of real-world applications and the additional incorporation of uncertainties, which are inherent in many practical problems.
In this thesis, for a fixed positive integer \(q\), we introduce and elaborate on a new robust covering problem, called Robust Min-\(q\)-Multiset-Multicover, and related problems.
The common idea of these problems is, given a collection of subsets of a ground set, to decide on the frequency of choosing each subset so as to satisfy the uncertain demand of each overall occurring element.
Yet, in contrast to general covering problems, the subsets may only cover at most \(q\) of their elements.
Varying the properties of the occurring elements leads to a selection of four interesting robust covering problems which are investigated.
We extensively analyze the complexity of the arising problems, also for various restrictions to particular classes of uncertainty sets.
For a given problem, we either provide a polynomial time algorithm or show that, unless \(\text{P}=\text{NP}\), such an algorithm cannot exist.
Furthermore, in the majority of cases, we even give evidence that a polynomial time approximation scheme is most likely not possible for the hard problem variants.
Moreover, we aim for approximations and approximation algorithms for these hard variants, where we focus on Robust Min-\(q\)-Multiset-Multicover.
For a wide class of uncertainty sets, we present the first known polynomial time approximation algorithm for Robust Min-\(q\)-Multiset-Multicover having a provable worst-case performance guarantee.
Dealing with uncertain structures or data has lately been getting much attention in discrete optimization. This thesis addresses two different areas in discrete optimization: Connectivity and covering.
When discussing uncertain structures in networks, it is often of interest to determine how many vertices or edges may fail while the network still stays connected.
Connectivity is a broad, well studied topic in graph theory. One of the most important results in this area is Menger's Theorem which states that the minimum number of vertices needed to separate two non-adjacent vertices equals the maximum number of internally vertex-disjoint paths between these vertices. Here, we discuss mixed forms of connectivity in which both vertices and edges are removed from a graph at the same time. The Beineke Harary Conjecture states that for any two distinct vertices that can be separated with k vertices and l edges but not with k-1 vertices and l edges or k vertices and l-1 edges there exist k+l edge-disjoint paths between them of which k+1 are internally vertex-disjoint. In contrast to Menger's Theorem, the existence of the paths is not sufficient for the connectivity statement to hold. Our main contribution is the proof of the Beineke Harary Conjecture for the case that l equals 2.
We also consider different problems from the area of facility location and covering. We regard problems in which we are given a set of locations and a set of regions, where each region has an assigned number of clients. We then look for an allocation of suppliers to the locations such that each client is served by some supplier. The notable difference to other covering problems is that we assume that each supplier may only serve a fixed number of clients, which is not part of the input. We discuss the complexity of and solution approaches for three such problems, which vary in the way the clients are assigned to the suppliers.
Linear algebra, together with polynomial arithmetic, is the foundation of computer algebra. The algorithms have improved over the last 20 years, and the current state-of-the-art algorithms for matrix inversion, linear system solving and determinants have theoretical sub-cubic complexity. This thesis presents fast and practical algorithms for some classical problems in linear algebra over number fields and polynomial rings. Here, a number field is a finite extension of the field of rational numbers, and the polynomial rings we consider in this thesis are over finite fields.
One of the key problems of symbolic computation is intermediate coefficient swell: the bit length of intermediate results can grow during the computation compared to those in the input and output. The standard strategy to overcome this is not to compute the number directly but to compute it modulo some other numbers, using either the Chinese remainder theorem (CRT) or a variation of Newton-Hensel lifting. Often, the final step of these algorithms is combined with reconstruction methods such as rational reconstruction to convert the integral result into the rational solution. Here, we present reconstruction methods over number fields with a fast and simple vector-reconstruction algorithm.
The state-of-the-art method for computing the determinant over the integers is due to Storjohann. When generalizing his method to number fields, we encountered the problem that modules generated by the rows of a matrix over a number field are in general not free, so Storjohann's method cannot be used directly. Therefore, we have used the theory of pseudo-matrices to overcome this problem. As a sub-problem of this application, we generalized a unimodular certification method to pseudo-matrices: similar to the integer case, we check whether the determinant of the given pseudo-matrix is a unit by testing the integrality of the corresponding dual module using higher-order lifting.
One of the main algorithms in linear algebra is Dixon's solver for linear systems. Traditionally this algorithm is used only for square systems having a unique solution. Here we generalize the Dixon algorithm to non-square linear systems. As the solution is not unique, we use a basis of the kernel to normalize the solution. The implementation is accompanied by a fast kernel computation algorithm that also extends to computing the reduced row echelon form of a matrix over the integers and number fields.
The fast implementations for computing the characteristic and minimal polynomial over number fields use the CRT-based modular approach. Finally, we extend Storjohann's determinant computation algorithm to polynomial rings over finite fields, together with its sub-algorithms for reconstruction and unimodular certification. In this case, we face the problem of intermediate degree swell. To avoid this phenomenon, we use higher-order lifting techniques in the unimodular certification algorithm. We successfully use the half-gcd approach to optimize the rational polynomial reconstruction.
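The flavor of the CRT-plus-reconstruction strategy can be illustrated over the integers; this is a simplified sketch (the thesis works over number fields and polynomial rings), and the moduli in the example are hypothetical.

from math import gcd, isqrt

def crt(residues, moduli):
    """Chinese remainder theorem: combine x = r_i (mod m_i) into one residue modulo prod(m_i)."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, mi)) % mi
        x, m = x + m * t, m * mi
    return x % m, m

def rational_reconstruction(a, m):
    """Recover n/d with |n|, d <= sqrt(m/2) from a = n * d^{-1} (mod m)
    via the extended Euclidean algorithm."""
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    s0, s1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if abs(s1) > bound or gcd(r1, abs(s1)) != 1:
        raise ValueError("reconstruction failed")
    return (r1, s1) if s1 > 0 else (-r1, -s1)

# example: the rational 22/7 known only modulo the primes 1009 and 1013
moduli = [1009, 1013]
residues = [(22 * pow(7, -1, p)) % p for p in moduli]
a, m = crt(residues, moduli)
print(rational_reconstruction(a, m))  # -> (22, 7)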
Life insurance companies are asked by the Solvency II regime to set up capital requirements against economically adverse developments. This ensures that they are continuously able to meet their payment obligations towards the policyholders. When relying on an internal model approach, an insurer's solvency capital requirement is defined as the 99.5% value-at-risk of its full loss probability distribution over the coming year. In the introductory part of this thesis, we provide the actuarial modeling tools and risk aggregation methods by which the companies can accomplish the derivation of these forecasts. Since the industry still lacks the computational capacities to fully simulate these distributions, the insurers have to refer to suitable approximation techniques such as the least-squares Monte Carlo (LSMC) method. The key idea of LSMC is to run only a few wisely selected simulations and to process their output further to obtain a risk-dependent proxy function of the loss.
We dedicate the first part of this thesis to establishing a theoretical framework of the LSMC method. We start with how LSMC for calculating capital requirements is related to its original use in American option pricing. Then we decompose LSMC into four steps. In the first one, the Monte Carlo simulation setting is defined. The second and third steps serve the calibration and validation of the proxy function, and the fourth step yields the loss distribution forecast by evaluating the proxy model. While guiding through the steps, we address practical challenges and propose an adaptive calibration algorithm. We conclude with a slightly disguised real-world application.
The second part builds upon the first one by taking up the LSMC framework and diving deeper into its calibration step. After a literature review and a basic recapitulation, various adaptive machine learning approaches relying on least-squares regression and model selection criteria are presented as solutions to the proxy modeling task. The studied approaches range from ordinary and generalized least-squares regression variants over GLM and GAM methods to MARS and kernel regression routines. We justify the combinability of the regression ingredients mathematically and compare their approximation quality in slightly altered real-world experiments. Thereby, we perform sensitivity analyses, discuss numerical stability and run comprehensive out-of-sample tests. The scope of the analyzed regression variants extends to other high-dimensional variable selection applications.
Life insurance contracts with early exercise features can be priced by LSMC as well due to their analogies to American options. In the third part of this thesis, equity-linked contracts with American-style surrender options and minimum interest rate guarantees payable upon contract termination are valued. We allow randomness and jumps in the movements of the interest rate, stochastic volatility, stock market and mortality. For the simultaneous valuation of numerous insurance contracts, a hybrid probability measure and an additional regression function are introduced. Furthermore, an efficient seed-related simulation procedure accounting for the forward discretization bias and a validation concept are proposed. An extensive numerical example rounds off the last part.
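A toy version of the four LSMC steps, with a synthetic two-factor loss function standing in for the full simulation model and a plain quadratic proxy instead of the adaptively calibrated one; every numerical choice here is illustrative.

import numpy as np

rng = np.random.default_rng(0)

# step 1: fitting scenarios (outer risk-factor draws, few inner valuations -> noisy losses)
def true_loss(x):                     # unknown in practice; stands in for the full model
    return 0.5 * x[:, 0] ** 2 + x[:, 0] * x[:, 1] - x[:, 1]

n_fit = 200
x_fit = rng.uniform(-3.0, 3.0, size=(n_fit, 2))
y_fit = true_loss(x_fit) + rng.normal(0.0, 2.0, n_fit)

# steps 2/3: calibrate (and, in practice, validate) a polynomial proxy by least squares
def poly_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

beta, *_ = np.linalg.lstsq(poly_features(x_fit), y_fit, rcond=None)

# step 4: evaluate the proxy on many real-world scenarios and read off the 99.5% quantile
x_rw = rng.normal(0.0, 1.0, size=(100_000, 2))
losses = poly_features(x_rw) @ beta
print(np.quantile(losses, 0.995))     # proxy for the solvency capital requirement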
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around one or several sites of vaso-occlusion are typical for GBM and hint at a poor prognosis for patient survival.
This thesis focuses on studying the establishment and maintenance of these histological patterns specific to GBM with the aim of modeling the microlocal tumor environment under the influence of acidity, tissue anisotropy and hypoxia-induced angiogenesis. This aim is reached with two classes of models: multiscale and multiphase. Each of them features a reaction-diffusion equation (RDE) for the acidity acting as a chemorepellent and inhibitor of growth, coupled in a nonlinear way to a reaction-diffusion-taxis equation (RDTE) for glioma dynamics. The numerical simulations of the resulting systems are able to reproduce pseudopalisade-like patterns. The effect of tumor vascularization on these patterns is studied through a flux-limited model belonging to the multiscale class. Thereby, PDEs of reaction-diffusion-taxis type are deduced for glioma and endothelial cell (EC) densities with flux-limited pH-taxis for the tumor and chemotaxis towards vascular endothelial growth factor (VEGF) for ECs. These, in turn, are coupled to RDEs for acidity and VEGF produced by tumor. The numerical simulations of the obtained system show pattern disruption and transient behavior due to hypoxia-induced angiogenesis. Moreover, comparing two upscaling techniques through numerical simulations, we observe that the macroscopic PDEs obtained via parabolic scaling (directed tissue) are able to reproduce glioma patterns, while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected - at least as far as glioma migration is concerned. We also investigate two different ways of including cell level descriptions of response to hypoxia and the way they are related.
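Schematically, the coupling structure described above is of reaction-diffusion-taxis type (a generic sketch; the multiscale and multiphase models of the thesis contain further, model-specific terms):
\begin{align*}
\partial_t u &= \nabla\cdot\bigl(\mathbb{D}(x)\,\nabla u\bigr) + \nabla\cdot\bigl(u\,\chi(u,h)\,\nabla h\bigr) + f(u,h),\\
\partial_t h &= D_h\,\Delta h + g(u,h),
\end{align*}
where \(u\) denotes the glioma cell density, \(h\) the acidity acting as a chemorepellent (so the taxis term drives cells down the acidity gradient), \(\mathbb{D}(x)\) a tissue-dependent diffusion tensor and \(f\), \(g\) the reaction terms.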
Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios
(2021)
This dissertation consists of three independent parts: the yield curve shapes generated by interest rate models, yield curve forecasting, and the application of the chance-risk classification to a portfolio of pension products. As a component of the capital market model, the yield curve influences the chance-risk classification, which was introduced to improve the comparability of pension products and strengthen consumer protection. Consequently, all three topics have a major impact on this essential safeguard.
Firstly, we focus on the yield curve shapes obtained in the Vasicek interest rate models. We extend the existing studies on the attainable yield curve shapes in the one-factor Vasicek model by an analysis of the curvature. Further, we show that the two-factor Vasicek model can explain significantly more effects observed in the market than its one-factor variant. Among them is the occurrence of dipped yield curves.
We further introduce a general change of measure framework for the Monte Carlo simulation of the Vasicek model under a subjective measure. This can be used to avoid a far too high frequency of inverse yield curves as time grows.
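For orientation, the short rate in the one-factor Vasicek model follows (in one common parametrization; the notation in the thesis may differ)
\begin{align*}
\mathrm{d}r(t) = \kappa\bigl(\theta - r(t)\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W(t),
\end{align*}
leading to affine zero-coupon bond prices \(P(t,T) = \exp\bigl(A(t,T) - B(t,T)\,r(t)\bigr)\) and hence to yield curves \(y(t,T) = -\ln P(t,T)/(T-t)\). The attainable curve shapes depend on \(\kappa\), \(\theta\), \(\sigma\) and the current short rate; the two-factor model adds a second, possibly correlated, factor and thereby, as stated above, enlarges the set of attainable shapes.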
Secondly, we examine different time series models, including machine learning algorithms, for forecasting the yield curve. For this, we consider statistical time series models such as autoregression and vector autoregression. Their performances are compared with the performance of a multilayer perceptron, a fully connected feed-forward neural network. For this purpose, we develop an extended approach for the hyperparameter optimization of the perceptron which is based on standard procedures like grid search and random search but allows searching a larger hyperparameter space. Our investigation shows that multilayer perceptrons outperform the statistical models for long forecast horizons.
The third part deals with the chance-risk classification of state-subsidized pension products in Germany as well as its relevance for customer consulting. To optimize the use of the chance-risk classes assigned by Produktinformationsstelle Altersvorsorge gGmbH, we develop a procedure for determining the chance-risk class of different portfolios of state-subsidized pension products under the constraint that the portfolio chance-risk class does not exceed the customer's risk preference. For this, we consider a portfolio consisting of two new pension products and a second one containing a product already owned by the customer together with the offer of a new one. This is of particular interest for customer consulting and can include other assets of the customer. We examine the properties of various chance and risk parameters as well as their corresponding mappings and show that a diversification effect exists. Based on these properties, we conclude that the average final contract values have to be used to obtain the upper bound of the portfolio chance-risk class. Furthermore, we develop an approach for determining the chance-risk class over the contract term, since the chance-risk class is only assigned at the beginning of the accumulation phase. On the one hand, we follow the current legal situation, but on the other hand, we suggest an approach that requires further simulations. Finally, we translate our results into recommendations for customer consultation.
This thesis considers the periodic homogenization of a linearly coupled magneto-elastic model problem and focuses on the derivation of spectral methods for subsequently solving the obtained unit cell problem. In the beginning, the equations of linear elasticity and magnetism are presented together with the physical quantities used therein. After specifying the model assumptions, the system of partial differential equations is rewritten in a weak form for which the existence and uniqueness of solutions is discussed. The model problem then undergoes a homogenization process where the original problem is approximated by a substitute problem with a repeating micro-structural geometry that was generated from a representative volume element (RVE). The following separation of scales, which can be achieved either by an asymptotic expansion or through a two-scale limit process, yields the homogenized problem on the macroscopic scale and the periodic unit cell problem. The latter is further analyzed using Fourier series, leading to periodic Lippmann-Schwinger type equations allowing for the development of matrix-free solvers. It is shown that, while it is possible to craft a scheme for the coupled problem from the purely elastic and magnetic Lippmann-Schwinger equations alone without much additional effort, a more general setting is provided when deriving a Lippmann-Schwinger equation for the coupled system directly. These numerical approaches are then validated with some analytically solvable test problems, before their performance is tested against each other for some more complex examples.
This text summarizes some important basics that enable a quick start into working with the Arduino and the Raspberry Pi. We do not discuss the basic functions of the devices, because numerous resources for that are available on the internet. Instead, we concentrate mainly on the control of sensors and actuators and discuss several project ideas that can enrich interdisciplinary STEM (MINT) project lessons.
Simplified ODE models describing the blood flow rate are governed by the pressure gradient.
However, assuming that the orientation of the blood flow in a human body corresponds to the positive
direction, a negative pressure gradient forces the valve to shut, which stops the flow through
the valve; hence the flow rate is zero, whereas the pressure rate is still described by an ODE.
The presence of ODEs together with algebraic constraints and sudden changes of the system characteristics
yields systems of switched differential-algebraic equations (swDAEs). The alternating
dynamics of the heart can thus be modelled well by means of swDAEs. Moreover, to study pulse
wave propagation in arteries and veins, PDE models have been developed. The connection between
the heart and the vessels leads to a coupling of PDEs and swDAEs. This model motivates
the study of PDEs coupled with swDAEs, where the information exchange happens at the PDE
boundaries: the swDAE provides boundary conditions to the PDE and the PDE outputs serve
as inputs to the swDAE. Such coupled systems occur, e.g., when modelling power grids using
the telegrapher's equations with switches, water flow networks with valves and district
heating networks with rapid consumption changes. Solutions of swDAEs might
include jumps, Dirac impulses and their derivatives of arbitrarily high order. As the outputs of
the swDAE act as boundary conditions of the PDE, a rigorous solution framework for the PDE must
be developed in which jumps, Dirac impulses and their derivatives are allowed at the PDE boundaries
and in the PDE solutions. This is a wider solution class than that of functions of small bounded
variation (BV), which is used, for instance, when nonlinear hyperbolic PDEs are coupled with
ODEs; similarly, solutions to switched linear PDEs with source terms have been
restricted to the class of BV. However, in the presence of Dirac impulses and their derivatives,
BV functions cannot handle coupled systems including DAEs with index greater than one.
Therefore, hyperbolic PDEs coupled with swDAEs of index one are studied in the BV
setting, while couplings with swDAEs whose index is greater than one are investigated in the distributional
sense. To this end, the 1D space of piecewise-smooth distributions is extended to a 2D
piecewise-smooth distributional solution framework. The 2D space of piecewise-smooth distributions
allows trace evaluations at the boundaries of the PDE. Moreover, a relationship between
solutions of the coupled system and switched delay DAEs is established. The coupling structure
in this thesis forms a rather general framework. In fact, any network where the PDEs
are represented by edges and the (switched) DAEs by nodes is covered by this structure. Given
a network, by rescaling the spatial domains, which modifies the coefficient matrices by a constant,
each PDE can be defined on the same interval. This leads to the formulation of a single
PDE whose unknown is made up of the unknowns of the individual PDEs stacked on top of each
other, with a block diagonal coefficient matrix. Likewise, every swDAE is reformulated such
that the unknowns are collected on top of each other and the coefficient matrices form a block
diagonal coefficient matrix, so that the nodes of the network are expressed as a single swDAE.
The results are illustrated by numerical simulations of the power grid and simplified circulatory
system examples. Numerical results for the power grid display the evolution of jumps
and Dirac impulses caused by initial and boundary conditions as a result of instant switches.
On the other hand, the analysis and numerical results for the simplified circulatory system do
not entail a Dirac impulse, for otherwise such an entity would destroy the entire system. Yet
jumps in the flow rate can come about in the numerical results due to the opening and closing of
valves, which agrees with clinical and physiological findings. Regarding physiological parameters,
the numerical results obtained in this thesis for the simplified circulatory system agree well with
medical data and findings from the literature when compared for validation.
Lecture notes for the course "Character Theory of finite groups".
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problem of parameter estimation and
portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we specifically aim at maximizing the utility of
terminal wealth and focus on the logarithmic and power utility functions. We consider expert opinions as an additional observation besides the stock returns, in order to improve the estimation of the drift and volatility parameters at different times and for the purpose of portfolio optimization.
In the first part, we assume that the drift term has a fixed distribution, and
the volatility term is constant. We use the Kalman filter to combine the two
types of observations. Moreover, we discuss how to transform this problem
into a non-linear problem with Gaussian noise when the expert opinion is uniformly distributed. The generalized Kalman filter is then used to estimate the parameters in this problem.
In the second part, we assume that drift and volatility of asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate various values required by the EM algorithm. In addition,
we focus on different ways to combine the two observations, expert opinions and asset returns. First, we use the linear combination method. At the same time, we discuss how to use a logistic regression model to quantify expert
opinions. Second, we consider that expert opinions follow a mixed Dirichlet distribution. Under this assumption, we use another probability measure to
estimate the unnormalized filters needed for the EM algorithm.
In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how we can obtain approximate optimal portfolio
strategies in different observation settings. We obtain the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different
observation settings in a simulation study.
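A stripped-down sketch of the filtering idea from the first part, for a single asset with a static Gaussian drift and with returns and occasional expert opinions both observed under known noise variances; all of these are simplifying assumptions for illustration.

import numpy as np

def kalman_drift_filter(returns, experts, var_r, var_e, mu0, v0):
    """Filter a constant but unknown drift mu from two observation streams:
    noisy asset returns and (occasionally available) noisy expert opinions.
    The state equation is mu_k = mu_{k-1}; var_r and var_e are the observation
    noise variances of the returns and the expert opinions."""
    mu, v = mu0, v0                    # prior mean and variance of the drift
    for r, e in zip(returns, experts):
        for obs, s2 in ((r, var_r), (e, var_e)):
            if obs is None:            # expert opinions arrive only now and then
                continue
            k = v / (v + s2)           # Kalman gain for a scalar observation of mu
            mu = mu + k * (obs - mu)
            v = (1.0 - k) * v
    return mu, v

# toy data: true drift 0.05, noisy return proxies at every step, an expert view every 25 steps
rng = np.random.default_rng(1)
rets = 0.05 + rng.normal(0.0, 0.2, 250)
views = [0.05 + rng.normal(0.0, 0.05) if i % 25 == 0 else None for i in range(250)]
print(kalman_drift_filter(rets, views, var_r=0.04, var_e=0.0025, mu0=0.0, v0=1.0))

In this toy setting the low-noise expert observations tighten the posterior variance considerably faster than the returns alone, which illustrates why they are included as an additional observation.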
Simulating the flow of water in district heating networks requires numerical methods which are independent of the CFL condition. We develop a high-order scheme for networks of advection equations allowing large time steps. With the MOOD technique, unphysical oscillations of non-smooth solutions are avoided. In numerical tests the applicability to real networks is shown.
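For a single pipe modelled by the linear advection equation, the time step restriction that the scheme is designed to avoid is the classical CFL condition of explicit discretizations,
\begin{align*}
\partial_t u + v\,\partial_x u = 0, \qquad \Delta t \leq \frac{\Delta x}{|v|},
\end{align*}
which can become prohibitive when flow velocities and pipe lengths vary strongly across the network.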
This thesis consists of two parts, i.e., the theoretical background of (R)ABSDEs including basic theorems, theoretical proofs and properties (Chapters 2-4), as well as numerical algorithms and simulations for (R)ABSDEs (Chapter 5). For the theoretical part, we study ABSDEs (Chapter 2), RABSDEs with one obstacle (Chapter 3) and RABSDEs with two obstacles (Chapter 4) in the defaultable setting respectively, including the existence and uniqueness theorems, applications, the comparison theorem for ABSDEs, and their relations with PDEs and stochastic differential delay equations (SDDEs). The numerical algorithm part (Chapter 5) introduces two main algorithms, a discrete penalization scheme and a discrete reflected scheme based on a random walk approximation of the Brownian motion as well as a discrete approximation of the default martingale; we give the convergence results of the algorithms, provide a numerical example and an application to American game options in order to illustrate the performance of the algorithms.
Laser-induced interstitial thermotherapy (LITT) is a minimally invasive procedure to destroy liver
tumors through thermal ablation. Mathematical models are the basis for computer simulations
of LITT, which support the practitioner in planning and monitoring the therapy.
In this thesis, we propose three potential extensions of an established mathematical model of
LITT, which is based on two nonlinearly coupled partial differential equations (PDEs) modeling
the distribution of the temperature and the laser radiation in the liver.
First, we introduce the Cattaneo–LITT model for delayed heat transfer in this context, prove its
well-posedness and study the effect of an inherent delay parameter numerically.
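In its simplest form, a Cattaneo-type (delayed) heat equation reads
\begin{align*}
\tau\,\partial_{tt} T + \partial_t T = \nabla\cdot\bigl(k\,\nabla T\bigr) + q,
\end{align*}
where \(\tau\) is the delay (relaxation) parameter; for \(\tau \to 0\) the classical parabolic heat equation is recovered. This is only a generic sketch: the Cattaneo–LITT model couples such an equation to the radiation model and contains additional terms.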
Second, we model the influence of large blood vessels in the heat-transfer model by means
of a spatially varying blood-perfusion rate. This parameter is unknown at the beginning of
each therapy because it depends on the individual patient and the placement of the LITT
applicator relative to the liver. We propose a PDE-constrained optimal-control problem for the
identification of the blood-perfusion rate, prove the existence of an optimal control and prove
necessary first-order optimality conditions. Furthermore, we introduce a numerical example
based on which we demonstrate the algorithmic solution of this problem.
Third, we propose a reformulation of the well-known \(P_N\) model hierarchy with Marshak
boundary conditions as a coupled system of second-order PDEs to approximate the radiative-transfer
equation. The new model hierarchy is derived in a general context and is applicable
to a wide range of applications other than LITT. It can be generated in an automated way by
means of algebraic transformations and allows the solution with standard finite-element tools.
We validate our formulation in a general context by means of various numerical experiments.
Finally, we investigate the coupling of this new model hierarchy with the LITT model numerically.
Deligne-Lusztig theory allows the parametrization of generic character tables of finite groups of Lie type in terms of families of conjugacy classes and families of irreducible characters "independently" of \(q\).
Only in small cases does the theory also give all the values of the table.
For most of the groups the completion of the table must be carried out with ad-hoc methods.
The aim of the present work is to describe one possible computation which avoids Lusztig's theory of "character sheaves".
In particular, the theory of Gel'fand-Graev characters and Clifford theory is used to complete the generic character table of \(G={\rm Spin}_8^+(q)\) for \(q\) odd.
As an example of the computations, we also determine the character table of \({\rm SL}_4(q)\), for \(q\) odd.
In the process of finding character values, the following tools are developed.
By explicit use of the Bruhat decomposition of elements, the fusion of the unipotent classes of \(G\) is determined.
Among others, this is used to compute the 2-parameter Green functions of every Levi subgroup with disconnected centre of \(G\).
Furthermore, thanks to a certain action of the centre \(Z(G)\) on the characters of \(G\), it is shown how, in principle, the values of any character depend on its values at the unipotent elements.
It is important to consider \({\rm Spin}_8^+(q)\) as it is one of the "smallest" interesting examples for which Deligne--Lusztig theory is not sufficient to construct the whole character table.
The reason is related to the structure of \({\mathbf G}={\rm Spin}_8\), from which \(G\) is constructed.
Firstly, \({\mathbf G}\) has disconnected centre.
Secondly, \({\mathbf G}\) is the only simple algebraic group which has an outer group automorphism of order 3.
And finally, \(G\) can be realized as a subgroup of bigger groups, like \(E_6(q)\), \(E_7(q)\) or \(E_8(q)\).
The computation on \({\rm Spin}_8^+(q)\) serves as preparation for those cases.
The construction of number fields with given Galois group fits into the framework of the inverse Galois problem. This problem remains unsolved, although many partial results have been obtained over the last century.
Shafarevich proved in 1954 that every solvable group is realizable as the Galois group of a number field. Unfortunately, the proof does not provide a method to explicitly find such a field.
This work aims at producing a constructive version of the theorem by solving the following task: given a solvable group $G$ and a bound $B\in \mathbf N$, construct all normal number fields with Galois group $G$ and absolute discriminant bounded by $B$.
Since a field with solvable Galois group can be realized as a tower of abelian extensions, the main role in our algorithm is played by class field theory, which is the subject of the first part of this work.
The second half is devoted to the study of the relation between the group structure and the field through Galois correspondence.
In particular, we study the existence of obstructions to embedding problems and some criteria to predict the Galois group of an extension.
This thesis introduces a novel deformation method for computational meshes. It is based on the numerical path following for the equations of nonlinear elasticity. By employing a logarithmic variation of the neo-Hookean hyperelastic material law, the method guarantees that the mesh elements do not become inverted and remain well-shaped. In order to demonstrate the performance of the method, this thesis addresses two areas of active research in isogeometric analysis: volumetric domain parametrization and fluid-structure interaction. The former concerns itself with the construction of a parametrization for a given computational domain provided only a parametrization of the domain’s boundary. The proposed mesh deformation method gives rise to a novel solution approach to this problem. Within it, the domain parametrization is constructed as a deformed configuration of a simplified domain. In order to obtain the simplified domain, the boundary of the target domain is projected in the \(L^2\)-sense onto a coarse NURBS basis. Then, the Coons patch is applied to parametrize the simplified domain. As a range of 2D and 3D examples demonstrates, the mesh deformation approach is able to produce high-quality parametrizations for complex domains where many state-of-the-art methods either fail or become unstable and inefficient. In the context of fluid-structure interaction, the proposed mesh deformation method is applied to robustly update the computational mesh in situations when the fluid domain undergoes large deformations. In comparison to the state-of-the-art mesh update methods, it is able to handle larger deformations and does not result in an eventual reduction of mesh quality. The performance of the method is demonstrated on a classic 2D fluid-structure interaction benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
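One common logarithmic variant of the compressible neo-Hookean strain-energy density, given here for orientation (the exact material law used in the thesis may differ), is
\begin{align*}
W(F) = \frac{\mu}{2}\bigl(\operatorname{tr}(F^{\mathsf T}F) - 3\bigr) - \mu\,\ln J + \frac{\lambda}{2}\,(\ln J)^2, \qquad J = \det F,
\end{align*}
whose \(\ln J\) terms blow up as \(J \to 0\); this is precisely what penalizes degenerating elements and prevents mesh inversion during the deformation.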
We study a multi-scale model for growth of malignant gliomas in the human brain.
Interactions of individual glioma cells with their environment determine the gross tumor shape.
We connect models on different time and length scales to derive a practical description of tumor growth that takes these microscopic interactions into account.
From a simple subcellular model for haptotactic interactions of glioma cells with the white matter we derive a microscopic particle system, which leads to a meso-scale model for the distribution of particles, and finally to a macroscopic description of the cell density.
The main body of this work is dedicated to the development and study of numerical methods adequate for the meso-scale transport model and its transition to the macroscopic limit.
Operator semigroups and infinite dimensional analysis applied to problems from mathematical physics
(2020)
In this dissertation we treat several problems from mathematical physics via methods from functional analysis and probability theory and in particular operator semigroups. The thesis consists thematically of two parts.
In the first part we consider so-called generalized stochastic Hamiltonian systems. These are generalizations of Langevin dynamics, which describe interacting particles moving in a surrounding medium. From a mathematical point of view these systems are stochastic differential equations with a degenerate diffusion coefficient. We construct weak solutions of these equations via the corresponding martingale problem. To this end, we prove essential m-dissipativity of the degenerate and non-sectorial Itô differential operator. Further, we apply results from analytic and probabilistic potential theory to obtain an associated Markov process. Afterwards we show our main result, the convergence in law of the positions of the particles in the overdamped regime, the so-called overdamped limit, to a distorted Brownian motion. To this end, we show convergence of the associated operator semigroups in the framework of Kuwae-Shioya. Further, we establish a tightness result for the approximations which, together with the convergence of the semigroups, proves weak convergence of the laws.
In the second part we deal with problems from infinite-dimensional analysis. Three different issues are considered. The first one is an improvement of a characterization theorem for the so-called regular test functions and distributions of white noise analysis. As an application we analyze a stochastic transport equation in terms of the regularity of its solution in the space of regular distributions. The last two problems are from the field of relativistic quantum field theory. In the first one the $\Phi_3^4$-model of quantum field theory is under consideration. We show that the Schwinger functions of this model have a representation as the moments of a positive Hida distribution from white noise analysis. In the last chapter we construct a non-trivial relativistic quantum field in arbitrary space-time dimension. The field is given via Schwinger functions, for which we establish all axioms of Osterwalder and Schrader. This yields, via the reconstruction theorem of Osterwalder and Schrader, a unique relativistic quantum field. The Schwinger functions are given as the moments of a non-Gaussian measure on the space of tempered distributions. We obtain the measure as a superposition of Gaussian measures. In particular, this measure is itself non-Gaussian, which implies that the field under consideration is not a generalized free field.
Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method which is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the best Sharpe ratio assets to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new portfolio with equal weights, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework by DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where in each cluster the assets are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
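A compact sketch of the cluster-portfolio construction described above; hierarchical clustering on a correlation distance and the plain sample Sharpe ratio are illustrative choices, not necessarily those used in the thesis.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_portfolio(returns, n_clusters):
    """Cluster assets by their correlation structure, keep only the asset with
    the best Sharpe ratio in each cluster, and weight the picks equally."""
    sharpe = returns.mean(axis=0) / returns.std(axis=0, ddof=1)
    corr = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(0.5 * (1.0 - corr), 0.0, None))   # correlation distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    labels = fcluster(linkage(condensed, method="average"), n_clusters, criterion="maxclust")
    picks = [np.where(labels == c)[0][np.argmax(sharpe[labels == c])]
             for c in np.unique(labels)]
    weights = np.zeros(returns.shape[1])
    weights[picks] = 1.0 / len(picks)                         # 1/N on the representatives
    return weights

# toy example: 20 assets, 500 return observations
rng = np.random.default_rng(2)
rets = rng.normal(0.0005, 0.01, size=(500, 20))
print(cluster_portfolio(rets, n_clusters=5))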
LinTim is a scientific software toolbox that has been under development since 2007, giving the possibility to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope.
This document is the documentation for version 2020.02.
For more information, see https://www.lintim.net
In this thesis we study a variant of the quadrature problem for stochastic differential equations (SDEs), namely the approximation of expectations \(\mathrm{E}(f(X))\), where \(X = (X(t))_{t \in [0,1]}\) is the solution of an SDE and \(f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R}\) is a functional, mapping each realization of \(X\) into the real numbers. The distinctive feature in this work is that we consider randomized (Monte Carlo) algorithms with random bits as their only source of randomness, whereas the algorithms commonly studied in the literature are allowed to sample from the uniform distribution on the unit interval, i.e., they do have access to random numbers from \([0,1]\).
By assumption, all further operations, such as arithmetic operations, evaluations of elementary functions, and oracle calls to evaluate \(f\), are considered within the real number model of computation, i.e., they are carried out exactly.
In the following, we provide a detailed description of the quadrature problem, namely we are interested in the approximation of
\begin{align*}
S(f) = \mathrm{E}(f(X))
\end{align*}
for \(X\) being the \(r\)-dimensional solution of an autonomous SDE of the form
\begin{align*}
\mathrm{d}X(t) = a(X(t)) \, \mathrm{d}t + b(X(t)) \, \mathrm{d}W(t), \quad t \in [0,1],
\end{align*}
with deterministic initial value
\begin{align*}
X(0) = x_0 \in \mathbb{R}^r,
\end{align*}
and driven by a \(d\)-dimensional standard Brownian motion \(W\). Furthermore, the drift coefficient \(a \colon \mathbb{R}^r \to \mathbb{R}^r\) and the diffusion coefficient \(b \colon \mathbb{R}^r \to \mathbb{R}^{r \times d}\) are assumed to be globally Lipschitz continuous.
For the function classes
\begin{align*}
F_{\infty} = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{\sup}\bigr\}
\end{align*}
and
\begin{align*}
F_p = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{L_p}\bigr\}, \quad 1 \leq p < \infty,
\end{align*}
we have established the following.
\(\textit{Theorem 1.}\)
There exists a random bit multilevel Monte Carlo (MLMC) algorithm \(M\) using
\[
L = L(\varepsilon,F) = \begin{cases}
\lceil \log_2(\varepsilon^{-2}) \rceil, &\text{if } F = F_p,\\
\lceil \log_2(\varepsilon^{-2}) + \log_2(\log_2(\varepsilon^{-1})) \rceil, &\text{if } F = F_\infty
\end{cases}
\]
and replication numbers
\[
N_\ell = N_\ell(\varepsilon,F) = \begin{cases}
\lceil (L+1) \cdot 2^{-\ell} \cdot \varepsilon^{-2} \rceil, & \text{if } F = F_p,\\
\lceil (L+1) \cdot 2^{-\ell} \cdot \max(\ell,1) \cdot \varepsilon^{-2} \rceil, & \text{if } F = F_\infty
\end{cases}
\]
for \(\ell = 0,\ldots,L\), for which there exists a positive constant \(c\) such that
\begin{align*}
\mathrm{error}(M,F) = \sup_{f \in F} \bigl(\mathrm{E}\bigl((S(f) - M(f))^2\bigr)\bigr)^{1/2} \leq c \cdot \varepsilon
\end{align*}
and
\begin{align*}
\mathrm{cost}(M,F) = \sup_{f \in F} \mathrm{E}(\mathrm{cost}(M,f)) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for every \(\varepsilon \in {]0,1/2[}\).
Hence, in terms of the \(\varepsilon\)-complexity
\begin{align*}
\mathrm{comp}(\varepsilon,F) = \inf\bigl\{\mathrm{cost}(M,F) \colon M \ \text{is a random bit MC algorithm}, \mathrm{error}(M,F) \leq \varepsilon\bigr\}
\end{align*}
we have established the upper bound
\begin{align*}
\mathrm{comp}(\varepsilon,F) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for some positive constant \(c\). That is, we have shown the same weak asymptotic upper bound as in the case of random numbers from \([0,1]\). Hence, in this sense, random bits are almost as powerful as random numbers for our computational problem.
Moreover, we present numerical results for a (not yet analyzed) adaptive random bit MLMC Euler algorithm in the particular cases of Brownian motion, geometric Brownian motion, the Ornstein-Uhlenbeck SDE and the Cox-Ingersoll-Ross SDE. We also provide a numerical comparison to the corresponding adaptive random number MLMC Euler method.
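For illustration, the following sketch implements a plain (non-adaptive) MLMC Euler estimator with the level count and replication numbers of Theorem 1 for the class \(F_p\). The Brownian increments are drawn with a standard normal generator as a stand-in for the random bit approximation studied in the thesis, and the SDE and the functional are toy choices.

```python
# Minimal MLMC Euler sketch (standard normals stand in for the random-bit increments).
import numpy as np

def euler_path(x0, a, b, n_steps, dW):
    """Euler-Maruyama paths of a scalar SDE dX = a(X) dt + b(X) dW on [0,1]."""
    dt = 1.0 / n_steps
    x = np.full(dW.shape[0], x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps):
        x = x + a(x) * dt + b(x) * dW[:, k]
        path.append(x.copy())
    return np.stack(path, axis=1)           # shape (number of paths, n_steps + 1)

def mlmc_euler(f, x0, a, b, eps):
    """MLMC estimate of E(f(X)) with L and N_ell chosen as in Theorem 1 (case F = F_p)."""
    L = int(np.ceil(np.log2(eps ** -2)))
    estimate = 0.0
    for ell in range(L + 1):
        N_ell = int(np.ceil((L + 1) * 2.0 ** -ell * eps ** -2))
        n_fine = 2 ** ell
        dW = np.sqrt(1.0 / n_fine) * np.random.randn(N_ell, n_fine)
        fine = f(euler_path(x0, a, b, n_fine, dW))
        if ell == 0:
            estimate += fine.mean()
        else:
            # the coarse path reuses the same Brownian increments, summed pairwise
            coarse = f(euler_path(x0, a, b, n_fine // 2, dW[:, 0::2] + dW[:, 1::2]))
            estimate += (fine - coarse).mean()
    return estimate

# toy example: geometric Brownian motion and the time average of the path,
# which is Lipschitz with respect to the L_1-norm (class F_p with p = 1)
est = mlmc_euler(lambda p: p.mean(axis=1), 1.0,
                 lambda x: 0.05 * x, lambda x: 0.2 * x, eps=0.1)
```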
A key challenge in the analysis of the algorithm in Theorem 1 is the approximation of probability distributions by means of random bits. This is a problem very closely related to the quantization problem, i.e., the optimal approximation of a given probability measure (on a separable Hilbert space) by a probability measure with finite support size.
Though we have shown that the random bit approximation of the standard normal distribution is 'harder' than the corresponding quantization problem (lower weak rate of convergence), we have been able to establish the same weak rate of convergence as for the corresponding quantization problem in the case of the distribution of a Brownian bridge on \(L_2([0,1])\), the distribution of the solution of a scalar SDE on \(L_2([0,1])\), and the distribution of a centered Gaussian random element in a separable Hilbert space.
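As an elementary illustration of random bit approximation (not necessarily the construction analyzed in the thesis), a standard normal random variable can be approximated by a normalized sum of \(2^k\) Rademacher variables, each obtained from a single random bit; by the Berry-Esseen theorem the Kolmogorov distance to the standard normal distribution is then of order \(2^{-k/2}\).

```python
# Toy construction: an approximately standard normal sample from 2**k random bits.
import random

def approx_standard_normal(k):
    n = 2 ** k
    s = sum(1 if random.getrandbits(1) else -1 for _ in range(n))
    return s / n ** 0.5          # CLT: approximately N(0,1) for large n
```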
Fibre reinforced polymers (FRPs) are among the newest and most modern materials. In FRPs a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, compared to its weight, very strong.
The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs we need to know the fibre length distribution in the resulting material. The classic method for this is ashing, where a sample of the material is burned and thereby destroyed. Instead, we look at CT images of the material. In the first part we assumed that we have a full fibre segmentation, so that we can fit a cylinder to each individual fibre. In this setting we identified two problems, sampling bias and censoring.
Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To solve this problem we used a reweighted fibre length distribution. The weight depends on the sampling rule used.
For the censoring we used an EM algorithm. The EM algorithm is a standard tool to obtain a maximum likelihood estimator in cases of missing or censored data.
For this setting we deduced conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further found conditions ensuring that, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.
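As a toy illustration of the EM principle for censored data (much simpler than the fibre setting above), consider exponentially distributed lengths that are right-censored at a fixed threshold: the E-step completes the censored observations using the memorylessness property, and the M-step refits the rate.

```python
# Toy EM for right-censored exponential lengths (not the thesis setting).
import numpy as np

def em_exponential_censored(observed, censored, n_iter=100):
    """observed: fully measured lengths; censored: censoring thresholds (length > threshold)."""
    lam = 1.0 / np.concatenate([observed, censored]).mean()   # crude initial guess
    n = len(observed) + len(censored)
    for _ in range(n_iter):
        # E-step: expected length of each censored fibre, E[X | X > c] = c + 1/lambda
        expected_censored = censored + 1.0 / lam
        # M-step: maximum likelihood update of the rate on the completed data
        lam = n / (observed.sum() + expected_censored.sum())
    return lam

rng = np.random.default_rng(0)
true_lengths = rng.exponential(scale=2.0, size=1000)
cut = 3.0                                       # lengths beyond the window are censored
observed = true_lengths[true_lengths <= cut]
censored = np.full((true_lengths > cut).sum(), cut)
print(em_exponential_censored(observed, censored))   # close to the true rate 1/2.0 = 0.5
```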
Since obtaining a full fibre segmentation is hard, we further looked at the fibre endpoint process. The fibre endpoint process can be modelled as a Neyman-Scott cluster process. Using this model we can derive a formula for the reduced second moment measure of this process. We use this formula to obtain an estimator for the fibre length distribution.
We investigated all estimators in simulation studies. In particular, we investigated their performance in the case of non-overlapping fibres.
Cohomology of Groups
(2020)
In this thesis, we present the basic concepts of isogeometric analysis (IGA) and consider Poisson's equation as a model problem. Since in IGA the physical domain is parametrized via a geometry function that maps a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, which we call scaled boundary isogeometric analysis (SB-IGA). SB-IGA derives the weak form of partial differential equations in a different way than standard IGA. For the discretization, i.e. the projection onto a finite-dimensional space, we choose Galerkin's method in both cases. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between standard IGA, when it makes use of an SB-parametrization, and SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
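As a minimal illustration of Galerkin's method (not of the SB-IGA discretization itself), the following sketch solves the 1D Poisson problem with homogeneous Dirichlet conditions, using piecewise linear hat functions instead of the B-spline bases employed in isogeometric analysis.

```python
# Galerkin discretization of -u'' = f on (0,1), u(0) = u(1) = 0, with hat functions.
import numpy as np

def poisson_1d_galerkin(f, n):
    """Return interior nodes and nodal values of the Galerkin solution (n interior nodes)."""
    h = 1.0 / (n + 1)
    nodes = np.linspace(h, 1.0 - h, n)
    # stiffness matrix of the hat functions: tridiagonal with entries (2, -1, -1) / h
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    # load vector: integral of f against each hat function, approximated by h * f(x_i)
    b = h * f(nodes)
    return nodes, np.linalg.solve(A, b)

# example: f = pi^2 sin(pi x), exact solution u = sin(pi x)
x, u = poisson_1d_galerkin(lambda x: np.pi ** 2 * np.sin(np.pi * x), 99)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error at the nodes
```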
On the complexity and approximability of optimization problems with Minimum Quantity Constraints
(2020)
During the last couple of years, there has been a variety of publications on the topic of
minimum quantity constraints. In general, a minimum quantity constraint is a lower bound
constraint on an entity of an optimization problem that only has to be fulfilled if the entity is
“used” in the respective solution. For example, if a minimum quantity \(q_e\) is defined on an
edge \(e\) of a flow network, the edge flow on \(e\) may either be \(0\) or at least \(q_e\) units of flow.
Minimum quantity constraints have already been applied to problem classes such as flow, bin
packing, assignment, scheduling and matching problems. A result that is common to all these
problem classes is that in the majority of cases problems with minimum quantity constraints
are NP-hard, even if the problem without minimum quantity constraints but with fixed lower
bounds can be solved in polynomial time. For instance, the maximum flow problem is known
to be solvable in polynomial time, but becomes NP-hard once minimum quantity constraints
are added.
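To make the constraint structure concrete: with an auxiliary binary variable \(y_e\) and an upper bound \(u_e\) on the flow on edge \(e\) (e.g. its capacity), the condition "either \(x_e = 0\) or \(x_e \geq q_e\)" can be written in the standard mixed-integer form
\begin{align*}
q_e \cdot y_e \;\leq\; x_e \;\leq\; u_e \cdot y_e, \qquad y_e \in \{0,1\},
\end{align*}
so that \(y_e = 0\) forces \(x_e = 0\) and \(y_e = 1\) enforces the minimum quantity. This reformulation only makes the disjunctive structure of the constraint explicit; it does not, of course, remove the NP-hardness discussed above.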
In this thesis we consider flow, bin packing, scheduling and matching problems with minimum
quantity constraints. For each of these problem classes we provide a summary of the
definitions and results that exist to date. In addition, we define new problems by applying
minimum quantity constraints to the maximum-weight b-matching problem and to open
shop scheduling problems. We contribute results to each of the four problem classes: We
show NP-hardness for a variety of problems with minimum quantity constraints that have
not been considered so far. If possible, we restrict NP-hard problems to special cases that
can be solved in polynomial time. In addition, we consider approximability of the problems:
For most problems it turns out that, unless P=NP, there cannot be any polynomial-time
approximation algorithm. Hence, we consider bicriteria approximation algorithms that allow
the constraints of the problem to be violated up to a certain degree. This approach proves to
be very helpful and we provide a polynomial-time bicriteria approximation algorithm for at
least one problem of each of the four problem classes we consider. For problems defined on
graphs, the class of series parallel graphs supports this approach very well.
We end the thesis with a summary of the results and several suggestions for future research
on minimum quantity constraints.
A significant step in engineering design is to take into account uncertainties and to develop optimal designs that are robust with respect to perturbations. Furthermore, it is often of interest to optimize for different conflicting objective functions describing the quality of a design, leading to a multi-objective optimization problem. In this context, generating methods for solving multi-objective optimization problems seek to find a representative set of solutions fulfilling the concept of Pareto optimality. When multiple uncertain objective functions are involved, it is essential to define suitable measures for robustness that account for a combined effect of uncertainties in objective space. Many tasks in engineering design include the solution of an underlying partial differential equation that can be computationally expensive. Thus, it is of interest to use efficient strategies for finding optimal designs. This research aims to present suitable measures for robustness in a multi-objective context, as well as optimization strategies for multi-objective robust design.
This work introduces new ideas for robustness measures in the context of multi-objective robust design. Losses and expected losses based on distances in objective space are used to describe robustness. A direct formulation and a two-phase formulation based on expected losses are proposed for finding a set of robust optimal solutions.
Furthermore, suitable optimization strategies for solving the resulting multi-objective robust design problem are formulated and analyzed. The multi-objective optimization problem is solved with a constraint-based approach that is based on solving several constrained single-objective optimization problems with a hybrid optimization strategy. The hybrid method combines a global search method on a surrogate model with adjoint-based optimization methods. In the context of optimization with an underlying partial differential equation, a one-shot approach is extended to handle additional constraints.
The developed concepts for multi-objective robust design and the proposed optimization strategies are applied to an aerodynamic shape optimization problem. The drag coefficient and the lift coefficient are optimized under consideration of uncertainties in the operational conditions and geometrical uncertainties. The uncertainties are propagated with the help of a non-intrusive polynomial chaos approach. For increasing the efficiency when considering a higher-dimensional random space, use is made of a Karhunen-Loève expansion and a dimension-adaptive sparse grid quadrature.
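A minimal sketch of non-intrusive polynomial chaos propagation for a single standard normal input is given below; the setting above involves a higher-dimensional random space treated with a Karhunen-Loève expansion and dimension-adaptive sparse grids, so the sketch only illustrates the basic mechanism, with a toy model replacing the expensive aerodynamic quantities of interest.

```python
# Non-intrusive polynomial chaos for one standard normal input xi (illustrative only):
# project the model output onto probabilists' Hermite polynomials via Gauss quadrature
# and read off mean and variance from the chaos coefficients.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_mean_var(g, order=5, n_quad=20):
    x, w = hermegauss(n_quad)              # nodes/weights for the weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)           # renormalize to the standard normal density
    gx = g(x)
    # k-th chaos coefficient: E[g(xi) He_k(xi)] / k!
    coeffs = np.array([np.sum(w * gx * hermeval(x, np.eye(order + 1)[k])) / factorial(k)
                       for k in range(order + 1)])
    mean = coeffs[0]
    var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
    return mean, var

# toy stand-in for an expensive quantity of interest; E[exp(0.3*xi)] = exp(0.045)
mean, var = pce_mean_var(lambda xi: np.exp(0.3 * xi))
```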
We propose a model for glioma patterns in a microlocal tumor environment under
the influence of acidity, angiogenesis, and tissue anisotropy. The bottom-up model deduction
eventually leads to a system of reaction–diffusion–taxis equations for glioma and endothelial cell
population densities, of which the former infers flux limitation both in the self-diffusion and taxis
terms. The model extends a recently introduced (Kumar, Li and Surulescu, 2020) description of
glioma pseudopalisade formation with the aim of studying the effect of hypoxia-induced tumor
vascularization on the establishment and maintenance of these histological patterns which are typical
for high-grade brain cancer. Numerical simulations of the population level dynamics are performed
to investigate several model scenarios containing this and further effects.
Building a step counter with an Arduino microcontroller and a motion sensor is an exciting technology project. We explain the basic idea behind product-oriented modelling and the many different ways in which the task can be approached. In addition, the technical details of the hardware used are discussed in order to enable a quick start into the topic.
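A minimal sketch of the step-detection idea is shown below (illustrative only; the threshold, sampling rate and detection rule are assumptions, and the actual project runs on the Arduino itself).

```python
# Count steps from a stream of 3-axis accelerometer samples by thresholding the
# acceleration magnitude, with a simple debounce time between detected steps.
import math

def count_steps(samples, dt=0.02, threshold=11.5, min_step_interval=0.3):
    """samples: list of (ax, ay, az) in m/s^2, sampled every dt seconds."""
    steps = 0
    time_since_last_step = min_step_interval
    above = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        time_since_last_step += dt
        # count a step on a rising threshold crossing, at most once per debounce interval
        if magnitude > threshold and not above and time_since_last_step >= min_step_interval:
            steps += 1
            time_since_last_step = 0.0
        above = magnitude > threshold
    return steps
```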
The famous Mather-Yau theorem in singularity theory yields a bijection between isomorphism classes of germs of isolated hypersurface singularities and their respective Tjurina algebras.
This result has been generalized by T. Gaffney and H. Hauser to singularities of isolated singularity type. Since neither result has a constructive proof, it is the objective of this thesis to extract explicit information about hypersurface singularities from their Tjurina algebras.
First we generalize the result of Gaffney and Hauser to germs of hypersurface singularities which are strongly Euler-homogeneous at the origin. Afterwards we investigate the Lie algebra structure of the module of logarithmic derivations of the Tjurina algebra, drawing on the theory of graded analytic algebras by G. Scheja and H. Wiebe. We use the aforementioned theory to show that germs of hypersurface singularities with positively graded Tjurina algebras are strongly Euler-homogeneous at the origin. We deduce the classification of hypersurface singularities with Stanley-Reisner Tjurina ideals.
The notions of freeness and holonomicity play an important role in the investigation of properties of the aforementioned singularities. Both notions were introduced by K. Saito in 1980. We show that hypersurface singularities with Stanley-Reisner Tjurina ideals are holonomic and have a free singular locus. Furthermore, we present a Las Vegas algorithm which decides whether a given zero-dimensional \(\mathbb{C}\)-algebra is the Tjurina algebra of a quasi-homogeneous isolated hypersurface singularity. The algorithm is implemented in the computer algebra system OSCAR.
In a recent paper, G. Malle and G. Robinson proposed a modular analogue of Brauer's famous \( k(B) \)-conjecture. If \( B \) is a \( p \)-block of a finite group with defect group \( D \), then they conjecture that \( l(B) \leq p^r \), where \( r \) is the sectional \( p \)-rank of \( D \). Since this conjecture is relatively new, there is still a lot of work to do. This thesis is concerned with proving their conjecture for the finite groups of exceptional Lie type.
Elementare Zahlentheorie
(2020)
Synapses are connections between different nerve cells that form an essential link in neural signal transmission. One generally distinguishes between electrical and chemical synapses; chemical synapses are more common in the human brain and are also the type we deal with in this work.
In chemical synapses, small container-like objects called vesicles fill with neurotransmitter and expel it from the cell during synaptic transmission. This process is vital for communication between neurons. However, to the best of our knowledge, no mathematical models that take different filling states of the vesicles into account had been developed before this thesis was written.
In this thesis we propose a novel mathematical model for modeling synaptic transmission at chemical synapses which includes the description of vesicles of different filling states. The model consists of a transport equation (for the vesicle growth process) plus three ordinary differential equations (ODEs) and focuses on the presynapse and synaptic cleft.
The well-posedness is proved in detail for this partial differential equation (PDE) system. We also propose a few different variations and related models. In particular, an ODE system is derived and a delay differential equation (DDE) system is formulated. We then use nonlinear optimization methods for data fitting to test some of the models on data made available to us by the Animal Physiology group at TU Kaiserslautern.
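The data-fitting step can be illustrated with a generic sketch that combines an ODE solver with nonlinear least squares; the two-state toy model and its parameters below are illustrative assumptions, not the model developed in the thesis.

```python
# Generic parameter fitting for a small ODE system via nonlinear least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, k_fill, k_release):
    """Toy two-state system: empty vesicles fill, filled vesicles release and empty again."""
    empty, filled = y
    return [-k_fill * empty + k_release * filled,
            k_fill * empty - k_release * filled]

def residuals(params, t_data, filled_data):
    sol = solve_ivp(model, (t_data[0], t_data[-1]), [1.0, 0.0],
                    t_eval=t_data, args=tuple(params))
    return sol.y[1] - filled_data

# synthetic "measurements" generated with known parameters plus a little noise
t_data = np.linspace(0.0, 10.0, 50)
true = (0.8, 0.3)
rng = np.random.default_rng(1)
data = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], t_eval=t_data, args=true).y[1]
data += rng.normal(scale=0.01, size=data.shape)

fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_data, data))
print(fit.x)   # should be close to (0.8, 0.3)
```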
Einführung in die Algebra
(2020)
LinTim is a scientific software toolbox that has been under development since 2007, making it possible to solve the various planning steps in public transportation. Although the name originally derives from "Lineplanning and Timetabling", the available functions have grown far beyond this scope. This document is the documentation for version 2020.12. For more information, see https://www.lintim.net
In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy via the indifference principle in the case of logarithmic utility. After that we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to that in continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved by using a viscosity solution method.