Kaiserslautern - Fachbereich Mathematik
Scaled boundary isogeometric analysis (SB-IGA) describes the computational domain by proper boundary NURBS together with a well-defined scaling center; see [5]. More precisely, we consider star-convex domains whose boundaries correspond to a sequence of NURBS curves and whose interior is determined by a scaling of the boundary segments with respect to a chosen scaling center. Given a decomposition into star-shaped blocks, SB-IGA can also be utilized for more general shapes. Even though several geometries can be described by a single patch, multipatch structures frequently appear in applications. Whereas a C0-continuous patch coupling can be achieved relatively easily, the situation becomes more complicated if higher regularity is required. Consequently, a suitable coupling method is indispensable for analyses that require global C1 continuity.
In this contribution we apply the concept of analysis-suitable G1 parametrizations [2] to the framework of SB-IGA for the C1 coupling of planar domains, with special consideration of the scaling center. We obtain globally C1-regular basis functions, which enables us to handle problems such as the Kirchhoff-Love plate and shell, where smooth coupling is an issue. Furthermore, the boundary representation within SB-IGA makes the method suitable for the concept of trimming. In particular, we see the possibility to extend the coupling procedure to study trimmed plates and shells.
The approach was implemented using the GeoPDEs package [1] and its performance was tested on several numerical examples. Finally, we discuss the advantages and disadvantages of the proposed method and outline future perspectives.
We compute three-dimensional displacement vector fields to estimate the deformation of microstructural data sets in mechanical tests. For this, we extend the well-known optical flow approach of Brox et al. to three dimensions, with special focus on the discretization of the nonlinear terms. We evaluate our method first by synthetically deforming foams and comparing against this ground truth, and second on data sets of samples that underwent real mechanical tests. Our results are compared to those from state-of-the-art algorithms in materials science and medical image registration. Through a thorough evaluation, we show that our proposed method resolves the displacement best among all chosen comparison methods.
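As a rough, self-contained illustration of variational optical flow in three dimensions, the following sketch implements the classical linear Horn–Schunck model on a pair of volumes; this is a deliberate simplification for illustration, not the nonlinear Brox-type energy used in the paper, and all parameter values are arbitrary.

```python
import numpy as np

def horn_schunck_3d(I0, I1, alpha=10.0, iters=100):
    """Minimal 3D Horn-Schunck sketch (illustrative only): estimate a
    displacement field (u, v, w) between two volumes I0 and I1 from the
    linearized brightness-constancy assumption."""
    Iz, Iy, Ix = np.gradient(I0.astype(float))   # gradients along axes z, y, x
    It = I1.astype(float) - I0.astype(float)     # temporal difference
    u = np.zeros_like(It); v = np.zeros_like(It); w = np.zeros_like(It)

    def avg(f):  # 6-neighbour average, a crude smoother for the regularizer
        return sum(np.roll(f, s, a) for a in range(3) for s in (1, -1)) / 6.0

    for _ in range(iters):
        ub, vb, wb = avg(u), avg(v), avg(w)
        # Jacobi update derived from the Euler-Lagrange equations.
        num = Ix * ub + Iy * vb + Iz * wb + It
        den = alpha**2 + Ix**2 + Iy**2 + Iz**2
        u, v, w = ub - Ix * num / den, vb - Iy * num / den, wb - Iz * num / den
    return u, v, w  # components along the x-, y- and z-axis, respectively
```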
We show that every convergent power series with monomial extended Jacobian ideal is right equivalent to a Thom–Sebastiani polynomial. This solves a problem posed by Hauser and Schicho. On the combinatorial side, we introduce a notion of Jacobian semigroup ideal involving a transversal matroid. For any such ideal, we construct a defining Thom–Sebastiani polynomial. On the analytic side, we show that power series with a quasihomogeneous extended Jacobian ideal are strongly Euler homogeneous. Due to a Mather–Yau-type theorem, such power series are determined by their Jacobian ideal up to right equivalence.
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day-ahead market allows smaller companies without their own trading department to participate, since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day-ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. We present existing algorithms for scenario tree generation as well as our own extensions and adaptations. These are based on the nested distance, which measures the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context each method is appropriate.
A characterisation of the spaces \({\mathcal {G}}_K\) and \({\mathcal {G}}_K'\) introduced in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997) and Potthoff and Timpel (Potential Anal 4(6):637–654, 1995) is given. A first characterisation of these spaces, provided in Grothaus et al. (Methods Funct Anal Topol 3(2):46–64, 1997), uses concepts of holomorphy on infinite-dimensional spaces. We, instead, give a characterisation in terms of U-functionals, i.e., classical holomorphic functions on the field of complex numbers. We apply our new characterisation to derive new results concerning a stochastic transport equation and the stochastic heat equation with multiplicative noise.
Covering edges in networks
(2019)
In this paper we consider the covering problem on a network G=(V,E) with edge demands. The task is to cover a subset J⊆E of the edges with a minimum number of facilities within a predefined coverage radius. We focus on both the nodal and the absolute version of this problem. In the latter, facilities may be placed everywhere in the network. While there already exist polynomial time algorithms to solve the problem on trees, we establish a finite dominating set (i.e., a finite subset of points provably containing an optimal solution) for the absolute version in general graphs. Complexity and approximability results are given and a greedy strategy is proved to be a (1+ln(|J|))-approximate algorithm. Finally, the different approaches are compared in a computational study.
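The greedy strategy mentioned above follows the classical set-cover pattern. A minimal sketch, assuming the candidate facility points P (e.g., a finite dominating set) and their coverage sets have already been computed; `covers[p]` is a hypothetical mapping from a candidate point to the set of demand edges within its coverage radius:

```python
def greedy_edge_cover(J, P, covers):
    """Greedily pick the candidate facility covering the most
    still-uncovered demand edges; as for classical set cover, this
    yields a (1 + ln|J|)-approximation."""
    uncovered, chosen = set(J), []
    while uncovered:
        best = max(P, key=lambda p: len(covers[p] & uncovered))
        if not covers[best] & uncovered:
            raise ValueError("infeasible: some demand edge cannot be covered")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen
```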
Hajós' conjecture asserts that a simple Eulerian graph on n vertices can be decomposed into at most ⌊(n-1)/2⌋ cycles. The conjecture has only been proved for graph classes in which every element contains vertices of degree 2 or 4. We develop new techniques to construct cycle decompositions that work on the common neighborhood of two degree-6 vertices. With these techniques, we find structures that cannot occur in a minimal counterexample to Hajós' conjecture and verify the conjecture for Eulerian graphs of pathwidth at most 6. This implies that these graphs satisfy the small cycle double cover conjecture.
Insurance companies and banks regularly have to face stress tests performed by regulatory instances. To model their investment decision problems including stress scenarios, we propose the worst-case portfolio approach; the resulting optimal portfolios are thus prepared for stress tests by construction. A central issue of the worst-case portfolio approach is that neither the time nor the order of occurrence of the stress scenarios is known. Moreover, there are no probabilistic assumptions regarding the occurrence of the stresses. By defining the relative worst-case loss and introducing the concept of minimum constant portfolio processes, we generalize the traditional concepts of the indifference frontier and the indifference-optimality principle. We prove the existence of a minimum constant portfolio process that is optimal for the multi-stress worst-case problem. As a main result, we derive a verification theorem providing conditions on Lagrange multipliers and nonlinear ordinary differential equations that support the construction of optimal worst-case portfolio strategies. The practical applicability of the verification theorem is demonstrated via the numerical solution of various worst-case problems with stresses. In particular, it is shown that an investor who chooses the worst-case optimal portfolio process may have a preference regarding the order of stresses, but there may also be stress scenarios where he or she is indifferent regarding the order and time of occurrence.
We present new results on standard basis computations of a 0-dimensional ideal I in a power series ring or in the localization of a polynomial ring over a computable field K. We prove the semicontinuity of the “highest corner” in a family of ideals, parametrized by the spectrum of a Noetherian domain A. This semicontinuity is used to design a new modular algorithm for computing a standard basis of I if K is the quotient field of A. It uses the computation over the residue field of a “good” prime ideal of A to truncate high order terms in the subsequent computation over K. We prove that almost all prime ideals are good, so a random choice is very likely to be good, and whether it is good is detected a posteriori by the algorithm. The algorithm yields a significant speed advantage over the non-modular version and works for arbitrary Noetherian domains. The most important special cases are perhaps A = ℤ and A = k[t], k any field and t a set of parameters. Besides its generality, the method differs substantially from previously known modular algorithms for A = ℤ, since it does not manipulate the coefficients. It is also usually faster and can be combined with other modular methods for computations in local rings. The algorithm is implemented in the computer algebra system SINGULAR and we present several examples illustrating its power.
In this paper we consider the stochastic primitive equations for geophysical flows subject to transport noise and turbulent pressure. Admitting very rough noise terms, the global existence and uniqueness of solutions to this stochastic partial differential equation are proven using stochastic maximal \(L^p\)-regularity, the theory of critical spaces for stochastic evolution equations, and global a priori bounds. Compared to other results in this direction, we do not need any smallness assumption on the transport noise which acts directly on the velocity field, and we also allow rougher noise terms. The adaptation to Stratonovich-type noise and, more generally, to variable viscosity and/or conductivity is discussed as well.
As a consequence of the real estate market crash after 2008, large investors invested a significant amount of wealth into single-family houses to construct portfolios of rental dwellings whose income is securitized on the capital market. In some local housing markets, these investors own remarkable numbers of single-family houses. Furthermore, their trading activities have resulted in a new investment strategy, which exacerbates property wealth concentration and polarization. This new investment strategy and its portfolio optimization raise the question of their influence on housing markets. This paper first aims to find an optimal portfolio strategy by maximizing the expected utility of terminal wealth, adopting a stochastic model that includes a variety of economic states to estimate house prices. Second, it aims to analyze the effect of large investors on the housing market. The results show that the investment strategies of large investors depend on the balance among the economic state, maintenance costs, rental income, the interest rate and the investors' willingness to invest in housing, and that their effect on the market depends on the state of the economy.
Continuous-time regime-switching models are a very popular class of models for financial applications. In this work the so-called signal-to-noise matrix is introduced for hidden Markov models where the switching is driven by an unobservable Markov chain. Its relations to filtering, i.e. state estimation of the chain given the available observations, and portfolio optimization are investigated. A convergence result for the filter is derived: The filter converges to its invariant distribution if the eigenvalues of the signal-to-noise matrix converge to zero. This matrix is then also used to prove a mutual fund representation for regime-switching models and a corresponding market reduction which is consistent with filtering and portfolio optimization. Two canonical cases for the reduction are analyzed in more detail, the first based on the market regimes and the second depending on the eigenvalues. These considerations are presented both for observable and unobservable Markov chains. The results are illustrated by numerical simulations.
In this paper we investigate a utility maximization problem with drift uncertainty in a multivariate continuous-time Black–Scholes type financial market which may be incomplete. We impose a constraint on the admissible strategies that prevents a pure bond investment and we include uncertainty by means of ellipsoidal uncertainty sets for the drift. Our main results consist firstly in finding an explicit representation of the optimal strategy and the worst-case parameter, secondly in proving a minimax theorem that connects our robust utility maximization problem with the corresponding dual problem. Thirdly, we show that, as the degree of model uncertainty increases, the optimal strategy converges to a generalized uniform diversification strategy.
This contribution defends two claims. The first is about why thought experiments are so relevant and powerful in mathematics. Heuristics and proof are not strictly separated and, therefore, the relevance of thought experiments is not confined to heuristics. The main argument is based on a semiotic analysis of how mathematics works with signs. Seen in this way, formal symbols do not eliminate thought experiments (replacing them by something rigorous), but rather provide a new stage for them. The formal world resembles the empirical world in that it calls for exploration and offers surprises. This presents a major reason why thought experiments occur both in empirical sciences and in mathematics. The second claim is about a looming aporia that signals the limitation of thought experiments. This aporia arises when mathematical arguments cease to be fully accessible, thus violating a precondition for experimenting in thought. The contribution focuses on the work of Vladimir Voevodsky (1966–2017, Fields medalist in 2002), who argued that even very pure branches of mathematics cannot avoid inaccessibility of proof. Furthermore, he suggested that computer verification is a feasible path forward, but only if proof is not modeled in terms of formal logic.
We consider the optimization problem of a large insurance company that wants to maximize the expected utility of its surplus through the optimal control of the proportional reinsurance. In addition, the insurer is exposed to the risk of default of its reinsurer at the worst possible time, a setting that is closely related to a scenario of the Swiss Solvency Test.
Many real-world optimization and decision-making problems comprise several, partly conflicting objective functions. The English saying "Quality has its price" is just as true on a large scale as it is in the private sphere; hence, quality and price are a typical pair of conflicting objective functions that is very common in applications. Yet, in industrial applications, both quality and cost must be understood in their specific context and differ depending on whether a transportation, a production, or a planning problem is considered. Other objective functions that are receiving increasing attention in real-world decision-making situations are, for example, robustness, time, sustainability, adaptability, and longevity.
In this paper, we devise a stochastic asset–liability management (ALM) model for a life insurance company and analyze its influence on the balance sheet within a low-interest-rate environment. In particular, a flexible procedure for the generation of insurers' compressed contract portfolios that respects the given biometric structure is presented, extending the existing literature on stochastic ALM modeling. The introduced balance sheet model is in line with the principles of double-entry bookkeeping as required in accounting. We further focus on the incorporation of new business, i.e. the addition of newly concluded contracts, and thus of newly insured persons, in each period. Efficient simulations are obtained by integrating new policies into existing cohorts according to contract-related criteria. We provide new results on the consistency of the balance sheet equations and, in extensive simulation studies for different scenarios regarding the business form of today's life insurers, utilize these results to analyze the long-term behavior and the stability of the components of the balance sheet for different asset–liability approaches. Finally, we investigate the robustness of two prominent investment strategies against crashes in the capital markets, which lead to extreme liquidity shocks and thus threaten the insurer's financial health.
First, essential m-dissipativity of an infinite-dimensional Ornstein-Uhlenbeck operator N, perturbed by the gradient of a potential, on a domain \(FC_b^\infty\) of finitely based, smooth and bounded functions is shown. Our considerations allow unbounded diffusion operators as coefficients. We derive corresponding second-order regularity estimates for solutions f of the Kolmogorov equation \(\alpha f - Nf = g\), \(\alpha \in (0,\infty)\), generalizing some results of Da Prato and Lunardi. Second, we prove essential m-dissipativity for generators \((L_\Phi, FC_b^\infty)\) of infinite-dimensional degenerate diffusion processes. We emphasize that the essential m-dissipativity of \((L_\Phi, FC_b^\infty)\) is useful to apply general resolvent methods developed by Beznea, Boboc and Röckner in order to construct martingale/weak solutions to infinite-dimensional non-linear degenerate stochastic differential equations. Furthermore, the essential m-dissipativity of \((L_\Phi, FC_b^\infty)\) and \((N, FC_b^\infty)\), as well as the regularity estimates, are essential to apply the general abstract Hilbert space hypocoercivity method from Dolbeault, Mouhot, Schmeiser and from Grothaus, Stilgenbauer, respectively, to the corresponding diffusions.
We provide a complete elaboration of the L2-Hilbert space hypocoercivity theorem for the degenerate Langevin dynamics with multiplicative noise, studying the longtime behavior of the strongly continuous contraction semigroup solving the abstract Cauchy problem for the associated backward Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in the corresponding Fokker–Planck framework and made rigorous in the Kolmogorov backwards setting by Grothaus and Stilgenbauer. We extend these results to weakly differentiable diffusion coefficient matrices, introducing multiplicative noise for the corresponding stochastic differential equation. The rate of convergence is explicitly computed depending on the choice of these coefficients and the potential giving the outer force. In order to obtain a solution to the abstract Cauchy problem, we first prove essential self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert spaces, using prior elliptic regularity results and techniques from Bogachev, Krylov and Röckner. We apply operator perturbation theory to obtain essential m-dissipativity of the Kolmogorov operator, extending the m-dissipativity results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov approach is natural, as the theory of generalized Dirichlet forms implies a stochastic representation of the Langevin semigroup as the transition kernel of a diffusion process which provides a martingale solution to the Langevin equation with multiplicative noise. Moreover, we show that even a weak solution is obtained this way.
This article presents a methodology whereby adjoint solutions for partitioned multiphysics problems can be computed efficiently, in a way that is completely independent of the underlying physical sub-problems, the associated numerical solution methods, and the number and type of couplings between them. By applying the reverse mode of algorithmic differentiation to each discipline, and by using a specialized recording strategy, diagonal and cross terms can be evaluated individually, thereby allowing different solution methods for the generic coupled problem (for example block-Jacobi or block-Gauss-Seidel). Based on an implementation in the open-source multiphysics simulation and design software SU2, we demonstrate how the same algorithm can be applied for shape sensitivity analysis on a heat exchanger (conjugate heat transfer), a deforming wing (fluid–structure interaction), and a cooled turbine blade where both effects are simultaneously taken into account.
In this note, we define one more way of quantizing classical systems. The quantization we consider is an analogue of the classical Jordan–Schwinger map, which has been known and used for a long time by physicists. The difference, compared to the Jordan–Schwinger map, is that we use generators of the Cuntz algebra O∞ (i.e., a countable family of mutually orthogonal partial isometries of a separable Hilbert space) as "building blocks" instead of creation–annihilation operators. The resulting scheme satisfies properties similar to Van Hove prequantization, i.e., exact conservation of Lie brackets and linearity.
In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. The goal is then to provide an optimal solution set, i.e., a set containing an optimal solution for each non-parametric problem obtained by fixing a parameter vector. For many multi-parametric optimization problems, however, an optimal solution set of minimum cardinality can contain super-polynomially many solutions. Consequently, no polynomial-time exact algorithms can exist for these problems even if P=NP. We propose an approximation method that is applicable to a general class of multi-parametric optimization problems and outputs a set of solutions with cardinality polynomial in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric versions and provides an approximation guarantee that is arbitrarily close to the approximation guarantee of the approximation algorithm for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, our algorithm is an FPTAS. Further, we show that, for any given approximation guarantee, the minimum cardinality of an approximation set is, in general, not ℓ-approximable for any natural number ℓ less than or equal to the number of parameters, and we discuss applications of our results to classical multi-parametric combinatorial optimization problems. In particular, we obtain an FPTAS for the multi-parametric minimum s-t-cut problem, an FPTAS for the multi-parametric knapsack problem, as well as an approximation algorithm for the multi-parametric maximization of independence systems problem.
Over the past 2 decades, there has been much progress on the classification of symplectic linear quotient singularities V/G admitting a symplectic (equivalently, crepant) resolution of singularities. The classification is almost complete but there is an infinite series of groups in dimension 4—the symplectically primitive but complex imprimitive groups—and 10 exceptional groups up to dimension 10, for which it is still open. In this paper, we treat the remaining infinite series and prove that for all but possibly 39 cases there is no symplectic resolution. We thereby reduce the classification problem to finitely many open cases. We furthermore prove non-existence of a symplectic resolution for one exceptional group, leaving 39+9=48 open cases in total. We do not expect any of the remaining cases to admit a symplectic resolution.
Consider the primitive equations on \(\mathbb{R}^2\times(z_0,z_1)\) with initial data \(a\) of the form \(a=a_1+a_2\), where \(a_1\in \mathrm{BUC}_\sigma(\mathbb{R}^2;L^1(z_0,z_1))\) and \(a_2\in L^\infty_\sigma(\mathbb{R}^2;L^1(z_0,z_1))\). These spaces are scaling-invariant and represent the anisotropic character of these equations. It is shown that for \(a_1\) arbitrarily large and \(a_2\) sufficiently small, this set of equations admits a unique strong solution which extends to a global one, and is thus strongly globally well posed for these data provided \(a\) is periodic in the horizontal variables. The approach presented depends crucially on mapping properties of the hydrostatic Stokes semigroup in the \(L^\infty(L^1)\)-setting. It can be seen as the counterpart of the classical iteration schemes for the Navier–Stokes equations, now for the primitive equations in the \(L^\infty(L^1)\)-setting.
The possibility of a premium adjustment in German private health insurance (PKV) depends on the value of the so-called triggering factor, which is computed by a linear extrapolation of the loss ratios of the past three years. Its early and reliable prediction is of great importance from a risk-management perspective. We therefore investigate a variety of forecasting approaches, ranging from classical time series methods and regression to neural networks and hybrid models. While regression with ARIMA errors performs best among the classical methods, a neural network that is combined with time series forecasting or trained on deseasonalized and detrended data shows the best overall performance.
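As a minimal illustration of the extrapolation step (the exact regulatory formula for the triggering factor is not reproduced here; the data and threshold remark are hypothetical):

```python
import numpy as np

def extrapolated_triggering_factor(loss_ratios):
    """Fit a line through the loss ratios of the past three years and
    extrapolate one year ahead -- a sketch of the idea behind the
    triggering factor, not the precise regulatory computation."""
    assert len(loss_ratios) == 3
    slope, intercept = np.polyfit(np.arange(3), loss_ratios, deg=1)
    return slope * 3 + intercept  # extrapolated value for the next year

# Hypothetical example: rising loss ratios of 102 %, 106 % and 111 %.
print(extrapolated_triggering_factor([1.02, 1.06, 1.11]))
```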
Gliomas are primary brain tumors with a high invasive potential and infiltrative spread. Among them, glioblastoma multiforme (GBM) exhibits microvascular hyperplasia and pronounced necrosis triggered by hypoxia. Histological samples showing garland-like hypercellular structures (so-called pseudopalisades) centered around the occlusion site of a capillary are typical for GBM and hint at a poor prognosis for patient survival. We propose a multiscale modeling approach in the kinetic theory of active particles framework and deduce by an upscaling process a reaction-diffusion model with repellent pH-taxis. We prove existence of a unique global bounded classical solution for a version of the obtained macroscopic system and investigate the asymptotic behavior of the solution. Moreover, we study two different types of scaling and compare the behavior of the obtained macroscopic PDEs by way of simulations. These show that patterns (not necessarily of Turing type), including pseudopalisades, can be formed for some parameter ranges, in accordance with the tumor grade. This is true when the PDEs are obtained via parabolic scaling (undirected tissue), while no such patterns are observed for the PDEs arising by a hyperbolic limit (directed tissue). This suggests that brain tissue might be undirected, at least as far as glioma migration is concerned. We also investigate two different ways of including cell-level descriptions of the response to hypoxia and the way they are related.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study different realistic scenarios beyond the limitations of studies via controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
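A minimal sketch of the microscopic building block, in the spirit of the classical Helbing–Molnár social force model on which such systems are based (force shapes and parameters chosen purely for illustration; the thesis model additionally couples the desired direction to the Eikonal-based optimal path):

```python
import numpy as np

def social_force_step(x, v, goals, dt=0.05, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """One explicit Euler step for n pedestrians with positions x and
    velocities v (arrays of shape (n, 2)) heading towards `goals`."""
    acc = np.zeros_like(x)
    for i in range(len(x)):
        e = goals[i] - x[i]
        e /= np.linalg.norm(e) + 1e-12
        acc[i] = (v0 * e - v[i]) / tau          # driving force towards the goal
        for j in range(len(x)):
            if j != i:
                d = x[i] - x[j]
                r = np.linalg.norm(d) + 1e-12
                acc[i] += A * np.exp(-r / B) * d / r  # repulsion from pedestrian j
    return x + dt * v, v + dt * acc
```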
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered - "passive", which move in their predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic", which have a feedback interaction with pedestrians and have their trajectories changing dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We appropriately model the interactions of vehicles, following lane traffic, based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a rational-behaviour pedestrian model based on variable speed. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a meshfree particle method based on least squares approximations. The meshfree numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
Consider a linear realization of a matroid over a field. One associates with it a configuration polynomial and a symmetric bilinear form with linear homogeneous coefficients. The corresponding configuration hypersurface and its non-smooth locus support the respective first and second degeneracy scheme of the bilinear form. We show that these schemes are reduced and describe the effect of matroid connectivity: for (2-)connected matroids, the configuration hypersurface is integral, and the second degeneracy scheme is reduced Cohen–Macaulay of codimension 3. If the matroid is 3-connected, then also the second degeneracy scheme is integral. In the process, we describe the behavior of configuration polynomials, forms and schemes with respect to various matroid constructions.
This article investigates a network interdiction problem on a tree network: given a subset of nodes chosen as facilities, an interdictor may dissect the network by removing a size-constrained set of edges, striving to worsen the service of the established facilities as much as possible. Here, we consider a reachability objective function, which is closely related to the covering objective function: the interdictor aims to minimize the number of customers that are still connected to any facility after interdiction. For the covering objective on general graphs, this problem is known to be NP-complete (Fröhlich and Ruzika, "On the hardness of covering-interdiction problems", Theor. Comput. Sci., 2021). In contrast to this, we propose a polynomial-time algorithm to solve the problem on trees. The algorithm is based on dynamic programming and reveals the relation of this location-interdiction problem to knapsack-type problems. However, the input data for the dynamic program must be elaborately generated and relies on the theoretical results presented in this article. As a result, trees are the first known graph class that admits a polynomial-time algorithm for edge interdiction problems in the context of facility location planning.
Linear evolution equations are usually considered for a time variable defined on an interval, where typically initial conditions or time periodicity of solutions is required to single out certain solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network, where coupling conditions are imposed at the branching points such that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems, questions of well-posedness and regularity of solutions for parabolic problems are studied, along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches (an application of the Kalton–Weis theorem on the sum of closed operators, and an explicit computation of a Green's function), we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time, akin to periodicity.
Introducing parallelism and exploring its use is still a fundamental challenge for the computer algebra community. In high-performance numerical simulation, on the other hand, transparent environments for distributed computing which follow the principle of separating coordination and computation have been a success story for many years. In this paper, we explore the potential of using this principle in the context of computer algebra. More precisely, we combine two well-established systems: The mathematics we are interested in is implemented in the computer algebra system SINGULAR, whose focus is on polynomial computations, while the coordination is left to the workflow management system GPI-Space, which relies on Petri nets as its mathematical modeling language and has been successfully used for coordinating the parallel execution (autoparallelization) of academic codes as well as for commercial software in application areas such as seismic data processing. The result of our efforts is a major step towards a framework for massively parallel computations in the application areas of SINGULAR, specifically in commutative algebra and algebraic geometry. As a first test case for this framework, we have modeled and implemented a hybrid smoothness test for algebraic varieties which combines ideas from Hironaka’s celebrated desingularization proof with the classical Jacobian criterion. Applying our implementation to two examples originating from current research in algebraic geometry, one of which cannot be handled by other means, we illustrate the behavior of the smoothness test within our framework and investigate how the computations scale up to 256 cores.
Mechanistic models for the spread of various vector-borne diseases have been studied since the 19th century. The relevance of mathematical modeling and numerical simulation of disease spread is increasing nowadays. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans. An example of an arboviral disease that falls into this category is the Zika virus disease. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed. The analytical results that are mathematically proven for this model are numerically verified. Another SIRUV model is presented by considering a different formulation of the model parameters; the newly obtained model is shown to clearly incorporate the dependence of the disease spread on the ratio of mosquito population size to human population size. In order to incorporate the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model is developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities like administrative areas, districts, provinces, cities, states or even countries. The research focuses only on short-term movements, i.e., the commuting behavior of humans across the patches. This is incorporated in the multi-patch meta-population model using a matrix of residence time fractions of humans in each patch. Mathematically simplified analytical results are deduced, by which it is shown that, for an exemplary scenario that is numerically studied, the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of the commuting behavior of humans for the disease spread is presented using numerical results from this model. Local and non-local commuting are incorporated into the meta-population model in a numerical example. Finally, a PDE model is developed from the multi-patch model.
In this paper we present the comparison of experiments and numerical simulations for bubble cutting by a wire. The air bubble is surrounded by water. In the experimental setup an air bubble is injected on the bottom of a water column. When the bubble rises and contacts the wire, it is separated into two daughter bubbles. The flow is modeled by the incompressible Navier–Stokes equations. A meshfree method is used to simulate the bubble cutting. We have observed that the experimental and numerical results are in very good agreement. Moreover, we have further presented simulation results for liquid with higher viscosity. In this case the numerical results are close to previously published results.
Papadimitriou and Yannakakis (Proceedings of the 41st Annual IEEE Symposium on the Foundations of Computer Science (FOCS), pp 86–92, 2000) show that the polynomial-time solvability of a certain auxiliary problem determines the class of multiobjective optimization problems that admit a polynomial-time computable (1+ε, ..., 1+ε)-approximate Pareto set (also called an ε-Pareto set). Similarly, in this article, we characterize the class of multiobjective optimization problems having a polynomial-time computable approximate ε-Pareto set that is exact in one objective by the efficient solvability of an appropriate auxiliary problem. This class includes important problems such as multiobjective shortest path and spanning tree, and the approximation guarantee we provide is, in general, best possible. Furthermore, for biobjective optimization problems from this class, we provide an algorithm that computes a one-exact ε-Pareto set of cardinality at most twice the cardinality of a smallest such set and show that this factor of 2 is best possible. For three or more objective functions, however, we prove that no constant-factor approximation on the cardinality of the set can be obtained efficiently.
An important ingredient of any moving-mesh method for fluid-structure interaction (FSI) problems is the mesh moving technique (MMT) used to adapt the computational mesh in the moving fluid domain. An ideal MMT is computationally inexpensive, can handle large mesh motions without inverting mesh elements and can sustain an FSI simulation for extensive periods of time without irreversibly distorting the mesh. Here we compare several commonly used MMTs which are based on the solution of elliptic partial differential equations, including harmonic extension, bi-harmonic extension and techniques based on the equations of linear elasticity. Moreover, we propose a novel MMT which utilizes ideas from continuation methods to efficiently solve the equations of nonlinear elasticity and proves to be robust even when the mesh undergoes extreme motions. In addition to that, we study how each MMT behaves when combined with the mesh-Jacobian-based stiffening. Finally, we evaluate the performance of different MMTs on a popular two-dimensional FSI benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
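As a small illustration of the simplest technique in this comparison, harmonic extension of (one component of) a prescribed boundary displacement into the interior can be sketched on a structured grid; this toy Jacobi solver on the unit square is illustrative only and not the isogeometric setup used in the paper:

```python
import numpy as np

def harmonic_extension(g_bottom, g_top, g_left, g_right, n=50, iters=2000):
    """Extend Dirichlet boundary data (arrays of length n) harmonically
    into the interior of an n-by-n grid by Jacobi iteration on the
    5-point Laplacian; apply once per displacement component."""
    d = np.zeros((n, n))
    d[0, :], d[-1, :], d[:, 0], d[:, -1] = g_bottom, g_top, g_left, g_right
    for _ in range(iters):
        d[1:-1, 1:-1] = 0.25 * (d[:-2, 1:-1] + d[2:, 1:-1] +
                                d[1:-1, :-2] + d[1:-1, 2:])
    return d
```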
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study about the use of neural networks in the regression of equilibrium insurance premiums it is shown that this regression is quite robust and the risk of overfitting can almost be excluded -- as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
This article is dedicated to the weight set decomposition of a multiobjective (mixed-)integer linear problem with three objectives. We propose an algorithm that returns a decomposition of the parameter set of the weighted sum scalarization by solving biobjective subproblems via Dichotomic Search which corresponds to a line exploration in the weight set. Additionally, we present theoretical results regarding the boundary of the weight set components that direct the line exploration. The resulting algorithm runs in output polynomial time, i.e. its running time is polynomial in the encoding length of both the input and output. Also, the proposed approach can be used for each weight set component individually and is able to give intermediate results, which can be seen as an “approximation” of the weight set component. We compare the running time of our method with the one of an existing algorithm and conduct a computational study that shows the competitiveness of our algorithm. Further, we give a state-of-the-art survey of algorithms in the literature.
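For reference, a minimal sketch of the classical Dichotomic Search routine for biobjective minimization subproblems; `solve_weighted` is a hypothetical oracle returning the image of an optimal solution of the weighted-sum scalarization, and the initial lexicographic solves and tolerance handling are simplified:

```python
def dichotomic_search(solve_weighted, z_left, z_right, found):
    """Classical dichotomic search (Aneja-Nair style) for biobjective
    minimization: recursively explore the weight defined by two known
    extreme nondominated images z_left and z_right.
    `solve_weighted(w1, w2)` returns the image (z1, z2) of an optimal
    solution of min w1*z1 + w2*z2."""
    # Weight vector normal to the segment joining the two images.
    w1 = z_left[1] - z_right[1]
    w2 = z_right[0] - z_left[0]
    z = solve_weighted(w1, w2)
    # Recurse only if a strictly better weighted value was found.
    if w1 * z[0] + w2 * z[1] < w1 * z_left[0] + w2 * z_left[1] - 1e-9:
        found.add((z[0], z[1]))
        dichotomic_search(solve_weighted, z_left, z, found)
        dichotomic_search(solve_weighted, z, z_right, found)

# Usage: initialize `found` with the two lexicographically optimal images
# before the first call.
```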
In a (linear) parametric optimization problem, the objective value of each feasible solution is an affine function of a real-valued parameter and one is interested in computing a solution for each possible value of the parameter. For many important parametric optimization problems including the parametric versions of the shortest path problem, the assignment problem, and the minimum cost flow problem, however, the piecewise linear function mapping the parameter to the optimal objective value of the corresponding non-parametric instance (the optimal value function) can have super-polynomially many breakpoints (points of slope change). This implies that any optimal algorithm for such a problem must output a super-polynomial number of solutions. We provide a method for lifting approximation algorithms for non-parametric optimization problems to their parametric counterparts that is applicable to a general class of parametric optimization problems. The approximation guarantee achieved by this method for a parametric problem is arbitrarily close to the approximation guarantee of the algorithm for the corresponding non-parametric problem. It outputs polynomially many solutions and has polynomial running time if the non-parametric algorithm has polynomial running time. In the case that the non-parametric problem can be solved exactly in polynomial time or that an FPTAS is available, the method yields an FPTAS. In particular, under mild assumptions, we obtain the first parametric FPTAS for each of the specific problems mentioned above and a (3/2 + ε) -approximation algorithm for the parametric metric traveling salesman problem. Moreover, we describe a post-processing procedure that, if the non-parametric problem can be solved exactly in polynomial time, further decreases the number of returned solutions such that the method outputs at most twice as many solutions as needed at minimum for achieving the desired approximation guarantee.
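A rough sketch of the lifting idea under simplifying assumptions: a single parameter λ in [λ_min, λ_max] with λ_min > 0, objectives of the form c(x) + λ·d(x) with nonnegative c and d, and a hypothetical oracle `approx(instance, lam)` running the non-parametric approximation algorithm for fixed λ; the actual grid construction in the paper is more refined.

```python
def lift_to_parametric(approx, instance, lam_min, lam_max, eps):
    """Query the non-parametric approximation algorithm on a geometric
    grid of parameter values.  For objectives c(x) + lam*d(x) with
    c, d >= 0, a solution computed at a grid point stays near-optimal
    for all parameters up to the next grid point, so the output size is
    polynomial in the encoding length and 1/eps."""
    solutions, lam = [], lam_min            # requires lam_min > 0
    while lam < lam_max:
        solutions.append(approx(instance, lam))
        lam *= 1 + eps                      # geometric step
    solutions.append(approx(instance, lam_max))
    return solutions
```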
Various regulatory initiatives (such as the pan-European PRIIP-regulation or the German chance-risk classification for state subsidized pension products) have been introduced that require product providers to assess and disclose the risk-return profile of their issued products by means of a key information document. We will in this context outline a concept for a (forward-looking) simulation-based approach and highlight its application and advantages. For reasons of comparison, we further illustrate the performance of approximation methods based on a projection of observed returns into the future such as the Cornish–Fisher expansion or bootstrap methods.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is that of a well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific paradigm for discretization, combining geometric representations with Non-Uniform Rational B-Splines (NURBS) and Finite Element discretizations. Secondly, we primarily concentrate on mixed formulations exhibiting a saddle-point structure and generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes.
The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, for which we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
Epidemiological models have gained much interest during the COVID-19 pandemic. As the pandemic is now driven by newly emerging variants of SARS-CoV-2, the question arises how to model multiple virus variants in a single model.
In this thesis, we have extended an established model for COVID-19 forecasts to multiple virus variants. We analyzed the model mathematically and showed the global existence and uniqueness of the solution as well as important invariance properties for a meaningful model. The implementation into an existing framework which allows us to identify model parameters based on surveillance data is described briefly.
When applying our model to actual transitions between SARS-CoV-2 variants, we found that forecasts would have been significantly improved by our model extension. In most cases, we were able to precisely predict peak dates and heights in case incidences of waves caused by newly emerging variants during early transition phases. More severe outcomes, like hospitalizations, are found to be harder to predict because of very limited observational data regarding these outcomes for newly emerging variants.
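As a toy illustration of how two variants can share one compartmental model (this is a generic two-variant SIR system for illustration, not the extended forecasting model developed in the thesis):

```python
from scipy.integrate import solve_ivp

def two_variant_sir(t, y, beta1, beta2, gamma):
    """Two variants with transmission rates beta1, beta2 compete for the
    same susceptible pool S; I1, I2 are the variant-specific infectives."""
    S, I1, I2, R = y
    new1, new2 = beta1 * S * I1, beta2 * S * I2
    return [-new1 - new2, new1 - gamma * I1, new2 - gamma * I2,
            gamma * (I1 + I2)]

# A fitter variant (beta2 > beta1) seeded at low incidence eventually
# displaces the resident variant and drives a second wave.
sol = solve_ivp(two_variant_sir, (0, 300), [0.99, 0.009, 0.001, 0.0],
                args=(0.20, 0.30, 0.1), max_step=1.0)
```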
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities tend, in certain cases, to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform a spectral analysis of the pressure Schur complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for the non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the usage of the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
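For reference, the basic Uzawa iteration for the discretized Stokes saddle-point system; a minimal dense-algebra sketch whose efficiency hinges exactly on the conditioning of the pressure Schur complement discussed above:

```python
import numpy as np

def uzawa(A, B, f, omega=1.0, iters=200):
    """Solve  A u + B^T p = f,  B u = 0  by the basic Uzawa iteration:
    a velocity solve followed by a Richardson update of the pressure.
    Convergence is fast when the Schur complement B A^{-1} B^T is close
    to the identity and degrades when it is ill-conditioned."""
    u, p = np.zeros(A.shape[0]), np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # velocity solve
        p = p + omega * (B @ u)               # pressure update
    return u, p
```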
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
LinTim is a scientific algorithm and dataset library that has been under development since 2007 and offers the possibility to carry out the various planning steps in public transportation. Although the name originally derives from "Line planning and Timetabling", the available functions have grown far beyond this scope. This is the documentation for version 2023.12. For more information, see https://www.lintim.net.
This thesis deals with modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continued with the components to set up a graph-based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single-consumer single-producer network, to a generalized problem suitable to model simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real-world DHN and optimize their performance by optimal control, as indicated in a case study.
Given a finite or countably infinite family of Hilbert spaces \((H_j)_{j\in N} \), we study the Hilbert space tensor product \(\bigotimes_{j\in N} H_j\). In the general case, these tensor products were introduced by John von Neumann. We are especially interested in the case where each Hilbert space \(H_j\) is given as a reproducing kernel Hilbert space, i.e., \(H_j = H(K_j)\) for some reproducing kernel \(K_j\). We establish the following result, which is new in the case where \(N\) is infinite: if we restrict the domains of the kernels \(K_j\) properly, their pointwise product \(K\) is again a reproducing kernel, and
\[
H(K) \cong \bigotimes_{j\in N} H_j,
\]
i.e., there is an isometric isomorphism between both spaces respecting the tensor product structure.
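For orientation, the classical two-factor case of this correspondence on a product domain reads as follows: if \(K_1\) and \(K_2\) are reproducing kernels on sets \(X_1\) and \(X_2\), then
\[
K\big((x_1,x_2),(y_1,y_2)\big) = K_1(x_1,y_1)\,K_2(x_2,y_2)
\]
is a reproducing kernel on \(X_1 \times X_2\) with \(H(K) \cong H(K_1) \otimes H(K_2)\). The result above extends this to infinite families, where a suitable restriction of the domains is required.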
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that star-like graphs satisfy the conjecture and a proof that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not previously known.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
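To make the conjecture stated above tangible, here is a small Python sketch (using networkx; the function name is ours, purely for illustration) that verifies a 3-decomposition of \(K_4\), the smallest cubic graph: a spanning star, one triangle as the cycle part, and an empty matching.

    import networkx as nx

    def is_3_decomposition(G, tree, cycles, matching):
        """Check that (tree, cycles, matching) partitions E(G) into a
        spanning tree, a disjoint union of cycles, and a matching."""
        parts = [set(map(frozenset, part)) for part in (tree, cycles, matching)]
        if set().union(*parts) != set(map(frozenset, G.edges())) \
                or sum(map(len, parts)) != G.number_of_edges():
            return False                   # not a partition of the edge set
        T = nx.Graph(tree)
        if set(T) != set(G) or not nx.is_tree(T):
            return False                   # part 1 must be a spanning tree
        if any(d != 2 for _, d in nx.Graph(cycles).degree):
            return False                   # degree 2 everywhere = union of cycles
        return all(d <= 1 for _, d in nx.Graph(matching).degree)  # matching

    K4 = nx.complete_graph(4)
    print(is_3_decomposition(K4, tree=[(0, 1), (0, 2), (0, 3)],
                             cycles=[(1, 2), (2, 3), (1, 3)], matching=[]))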
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width at most 4 satisfy the conjecture.
These steps involve a significant number of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
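The single-graph subroutine behind this question can be sketched with networkx's monomorphism test, as below; the hard part, which the algorithm in the thesis addresses, is quantifying over the infinitely many graphs of \(\mathcal G\) with path-width at most \(k\), not this individual check.

    import networkx as nx
    from networkx.algorithms import isomorphism

    def contains_forbidden(G, U):
        """True if G contains some graph of U as a (not necessarily
        induced) subgraph, via a subgraph-monomorphism test."""
        return any(isomorphism.GraphMatcher(G, H).subgraph_is_monomorphic()
                   for H in U)

    # Example: K4 contains a triangle as a subgraph.
    print(contains_forbidden(nx.complete_graph(4), [nx.cycle_graph(3)]))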
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs needed such that every \(st\)-path fully contains at least one of these pairs.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g., structure enhancement. The idea of using banks of directed filters parameterized in orientation space is effective in 2d. However, this class of methods becomes computationally prohibitive in 3d, since filtering with a fine discretization of the unit sphere is too expensive. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinguishing between isotropic and anisotropic structures is defined based on our method. Finally, the usefulness of the method is demonstrated on 3d CT images in three different tasks: a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometries and to characterize fiber length and orientation distributions in fiber-reinforced polymers produced either by 3d printing or by injection moulding.
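For context, a common baseline for voxel-wise orientation estimation, distinct from the adaptive filter-bank approach developed here, is the 3d structure tensor, where the eigenvector of the smallest eigenvalue points along the local structure. A minimal Python sketch with illustrative smoothing parameters:

    import numpy as np
    from scipy import ndimage

    def structure_tensor_orientation(vol, sigma=1.0, rho=2.0):
        """Voxel-wise orientation from the 3d structure tensor."""
        g = np.gradient(ndimage.gaussian_filter(vol, sigma))
        J = np.empty(vol.shape + (3, 3))
        for i in range(3):             # smoothed outer product of gradients
            for j in range(3):
                J[..., i, j] = ndimage.gaussian_filter(g[i] * g[j], rho)
        w, v = np.linalg.eigh(J)       # eigenvalues in ascending order
        return v[..., 0]               # eigenvector of the smallest eigenvalue

    vol = np.random.rand(32, 32, 32)   # stand-in for a CT volume
    print(structure_tensor_orientation(vol).shape)   # (32, 32, 32, 3)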
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically approximated by data augmentation; however, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete, where 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To test its scale invariance, the Riesz network is trained on a single fixed crack width; we then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is also used to construct a scale equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. The mathematical foundations of this representation are laid out and analyzed. We show that this representation, with four times fewer features than the original scattering networks of Mallat, performs comparably well on texture classification and gives superior performance for scales outside the training distribution.
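The Riesz transform itself is cheap to evaluate in the Fourier domain via the multiplier \(-i\,\xi_j/|\xi|\); the following Python sketch shows the 2d case and illustrates only the operator, not the network architecture of the thesis.

    import numpy as np

    def riesz_transform_2d(img):
        """2d Riesz transform R_j f = F^{-1}(-i xi_j / |xi| F f).
        The multiplier is homogeneous of degree 0, which is the source
        of the scale equivariance exploited by the Riesz network."""
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        norm = np.hypot(fx, fy)
        norm[0, 0] = 1.0               # avoid division by zero at DC
        F = np.fft.fft2(img)
        r1 = np.fft.ifft2(-1j * fx / norm * F).real
        r2 = np.fft.ifft2(-1j * fy / norm * F).real
        return r1, r2

    r1, r2 = riesz_transform_2d(np.random.rand(64, 64))
    print(r1.shape, r2.shape)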