Kaiserslautern - Fachbereich Mathematik
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling of crowd dynamics using differential equations is an indispensable approach to unravelling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
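The microscopic model just described can be condensed into a short sketch. The version below is deliberately minimal and illustrative, not the thesis's exact model: the parameters are generic, and the Eikonal-based optimal path is replaced by the straight line to a common target.

```python
import numpy as np

def social_force_step(x, v, target, dt=0.01, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """One explicit Euler step of a minimal social force model.

    x, v   : (N, 2) positions and velocities of N pedestrians
    target : (2,) common goal; the desired direction is the straight line
             to it (a stand-in for the Eikonal-based optimal path).
    """
    # Driving force: relax towards the desired velocity v0 * e_i.
    e = target - x
    e = e / np.linalg.norm(e, axis=1, keepdims=True)
    force = (v0 * e - v) / tau
    # Pairwise repulsive (social) forces with exponential decay in distance.
    d = x[:, None, :] - x[None, :, :]              # x_i - x_j, shape (N, N, 2)
    dist = np.linalg.norm(d, axis=2)
    np.fill_diagonal(dist, np.inf)                 # no self-interaction
    force += ((A * np.exp(-dist / B) / dist)[:, :, None] * d).sum(axis=1)
    v_new = v + dt * force                         # explicit Euler update
    return x + dt * v_new, v_new
```

The exponential repulsion kernel is the classical social force choice; in the full model, the driving term would use the desired velocity field obtained from the Eikonal equation instead of the straight-line direction.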
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered - "passive" obstacles, which move along predefined trajectories and interact with pedestrians only one-way, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision-avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route-choice strategies that involve changes in speed and path. We extend this model to the interaction between pedestrians and vehicular traffic, modelling the interactions of vehicles in lane traffic based on the car-following approach. We observe how the deceleration and braking of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from kinetic equations based on a social force model. It is coupled, together with an Eikonal equation, to a non-local SEIS contagion model for the disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density, which affect the contact time and, consequently, the rate of spread of the infection.
Finally, the social force model is compared to a variable-speed-based rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, the collision-avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a meshfree particle method based on least squares approximations. The meshfree numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
Mechanistic disease spread models for vector-borne diseases have been studied since the 19th century, and the relevance of mathematical modeling and numerical simulation of disease spread keeps increasing. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans; the Zika virus disease is an example of an arboviral disease in this category. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two different methods is discussed, and the analytical results proven for this model are verified numerically. Another SIRUV model is presented using a different formulation of the model parameters; the resulting model explicitly incorporates the dependence of the disease spread on the ratio of mosquito to human population size. In order to capture the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model is developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities such as administrative areas, districts, provinces, cities, states or even countries. The research focuses on the short-term movements, or commuting behavior, of humans across the patches. This is incorporated in the multi-patch meta-population model using a matrix of residence time fractions of humans in each patch. Mathematically simplified analytical results show that, for an exemplary scenario studied numerically, the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of the commuting behavior of humans for the disease spread is presented using numerical results from this model.
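A generic SIRUV system of this kind, with an additional direct human-to-human transmission route, can be written as follows; the notation and parametrization here are illustrative and may differ from those used in the thesis:

```latex
\begin{aligned}
\dot S &= \mu N - \frac{\beta}{N}\,S V - \frac{\sigma}{N}\,S I - \mu S,\\
\dot I &= \frac{\beta}{N}\,S V + \frac{\sigma}{N}\,S I - (\gamma + \mu)\,I,\\
\dot R &= \gamma I - \mu R,\\
\dot U &= \nu M - \frac{\vartheta}{N}\,U I - \nu U,\\
\dot V &= \frac{\vartheta}{N}\,U I - \nu V.
\end{aligned}
```

Here \(S + I + R = N\) are the human compartments and \(U + V = M\) the vector compartments; \(\beta\), \(\vartheta\) and \(\sigma\) denote the vector-to-human, human-to-vector and direct human-to-human transmission rates, so that a ratio like \(M/N\) can appear in the basic reproduction number.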
Local and non-local commuting are incorporated into the meta-population model in a numerical example. Later, a PDE model is developed from the multi-patch model.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
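The coverage choice of the insured can be illustrated with a small numerical sketch; the exponential utility and all numbers below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def optimal_coverage(wealth=10.0, loss=5.0, p=0.1, loading=0.0,
                     risk_aversion=0.5):
    """Grid-search the coverage share a in [0, 1] that maximizes expected
    exponential utility when the premium for coverage a is
    (1 + loading) * p * loss * a."""
    a = np.linspace(0.0, 1.0, 1001)
    premium = (1.0 + loading) * p * loss * a
    u = lambda w: -np.exp(-risk_aversion * w)       # CARA utility
    # Two states: no loss (prob 1 - p) and loss (prob p), partially insured.
    eu = (1 - p) * u(wealth - premium) + p * u(wealth - premium - (1 - a) * loss)
    return a[np.argmax(eu)]
```

At an actuarially fair premium (zero loading) the grid search returns full coverage, in line with Mossin's theorem; with a positive loading the optimum becomes interior, which is the mechanism behind premium-driven effects such as the push-out described below.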
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products, such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter, we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. Our feasibility study on the use of neural networks in the regression of equilibrium insurance premiums shows that this regression is quite robust and that the risk of overfitting can almost be excluded, as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the
well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific paradigm for discretization, combining geometric representations based on Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations.
Secondly, we primarily concentrate on mixed formulations exhibiting a saddle-point structure and generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham
complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused.
A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application extends to linear elasticity theory, extensively
discussing mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
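For orientation, the standard mixed formulation of the abstract Hodge–Laplace problem in the FEEC setting (with harmonic forms omitted for brevity) reads: find \(\sigma \in H\Lambda^{k-1}\) and \(u \in H\Lambda^{k}\) such that

```latex
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle &= 0
&& \forall\, \tau \in H\Lambda^{k-1},\\
\langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle &= \langle f, v \rangle
&& \forall\, v \in H\Lambda^{k}.
\end{aligned}
```

The thesis studies analogous saddle-point formulations for the second-order, BGG-derived complexes, where additional Lagrange multipliers or modified bilinear forms appear in the discretization.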
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities, in certain cases, tend to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with increasing the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the usage of the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
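The Uzawa iteration discussed here can be sketched on a small dense saddle-point system. This is an illustrative setup with a random SPD block, not the staggered finite-difference Stokes discretization of the thesis; the Schur complement is formed explicitly only to pick a provably safe step size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # SPD "velocity" block
B = rng.standard_normal((m, n))        # full-row-rank "divergence" block
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Schur complement S = B A^{-1} B^T; used here only to choose the step size.
S = B @ np.linalg.solve(A, B.T)
omega = 1.0 / np.linalg.norm(S, 2)     # ensures 0 < omega * eig(S) <= 1

# Uzawa: inner solve with A, then a Richardson update on the pressure.
p = np.zeros(m)
for _ in range(5000):
    u = np.linalg.solve(A, f - B.T @ p)
    p = p + omega * (B @ u - g)
```

With this step size the pressure error contracts by roughly a factor of 1 - 1/kappa(S) per sweep; the ill-conditioning of the Schur complement described above is precisely what destroys this rate in complex geometries.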
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
This thesis deals with modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continued with the components to set up a graph based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single–consumer single–producer network, to a generalized problem suitable to model simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real-world DHN and optimize their performance by optimal control, as indicated in a case study.
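The transport of thermal energy through a single pipe, the basic building block of such a network model, can be sketched with a first-order upwind scheme. This is an illustrative discretization with made-up parameter names, not necessarily the scheme used in the thesis.

```python
import numpy as np

def upwind_step(T, speed, dx, dt, inflow):
    """One step of T_t + speed * T_x = 0 (speed > 0), first-order upwind,
    with a prescribed inflow temperature at the left end of the pipe."""
    c = speed * dt / dx
    assert 0.0 < c <= 1.0, "CFL condition violated"
    T_new = np.empty_like(T)
    T_new[1:] = T[1:] - c * (T[1:] - T[:-1])   # information flows downstream
    T_new[0] = inflow                          # coupling/boundary condition
    return T_new
```

In a network, the inflow value of each pipe would come from the coupling conditions at the upstream node, so that temperature fronts propagate from producers to consumers through the graph.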
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width at most 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
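The containment question itself can be illustrated, for a single finite graph, by the naive injective-mapping check below. This is only a brute-force baseline; the algorithm described above instead works along path decompositions in order to treat infinite graph classes.

```python
from itertools import permutations

def has_subgraph(g_nodes, g_edges, h_nodes, h_edges):
    """Naive check whether G contains H as a (not necessarily induced)
    subgraph, by trying every injective map from V(H) into V(G)."""
    g = {frozenset(e) for e in g_edges}
    for image in permutations(g_nodes, len(h_nodes)):
        phi = dict(zip(h_nodes, image))
        if all(frozenset((phi[u], phi[v])) in g for u, v in h_edges):
            return True
    return False
```

The factorial cost in the size of G is exactly what the decomposition-based approach avoids: along a path decomposition only the bounded-size bags have to be inspected.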
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinction between isotropic and anisotropic structures is defined based on our method. Finally, usefulness of the method is shown on 3d CT images in three different tasks on a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometry and characterize fiber length and orientation distributions in fiber-reinforced polymers produced by either 3d printing or by injection moulding.
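Pixel-wise orientation estimation can be illustrated with the classical structure tensor, a standard alternative to the adaptive directional filter banks developed here; the sketch below is not the thesis's method and averages the tensor over a whole patch for simplicity.

```python
import numpy as np

def dominant_orientation(img):
    """Orientation (radians in [0, pi)) of the dominant gradient direction
    of a 2d image patch, from the patch-averaged structure tensor."""
    gy, gx = np.gradient(img.astype(float))      # axis 0 = y, axis 1 = x
    jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # Angle of the eigenvector belonging to the larger eigenvalue of
    # the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
    return (0.5 * np.arctan2(2.0 * jxy, jxx - jyy)) % np.pi
```

A Gaussian window instead of the patch mean yields a per-pixel orientation field; in 3d, the same idea leads to a 3x3 tensor whose eigenvectors distinguish isotropic from anisotropic structures.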
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness which may vary strongly even within the same sample. To prove its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is utilized to construct a scale equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. Mathematical foundations behind this representation are laid out and analyzed. We show that this representation with 4 times less features than the original scattering networks from Mallat performs comparably well on texture classification and gives superior performance when dealing with scales outside the training set distribution.
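As an illustration of the operator at the core of the Riesz network, the first-order Riesz transform of a 2d image can be computed in the Fourier domain. The multipliers -i ξ_j/|ξ| are homogeneous of degree zero in ξ, which is the source of the scale equivariance; this is a minimal sketch of the transform, not of the network itself.

```python
import numpy as np

def riesz_transform(img):
    """Riesz transform pair (R1 f, R2 f) of a 2d image, computed in the
    Fourier domain with the multipliers -i * xi_j / |xi| (zero at xi = 0)."""
    f = img - img.mean()                   # remove the DC component
    ky = np.fft.fftfreq(f.shape[0])[:, None]
    kx = np.fft.fftfreq(f.shape[1])[None, :]
    norm = np.hypot(kx, ky)
    norm[0, 0] = 1.0                       # avoid 0/0; DC is zero anyway
    F = np.fft.fft2(f)
    r1 = np.fft.ifft2(-1j * kx / norm * F).real
    r2 = np.fft.ifft2(-1j * ky / norm * F).real
    return r1, r2
```

Because the multipliers have unit modulus away from the origin, the transform preserves the energy of a zero-mean image, and rescaling the image rescales the output in exactly the same way.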
This work aims to study textile structures in the frame of linear elasticity to understand how the structure and material parameters influence the macroscopic homogenized model. More precisely, we are interested in how the textile design parameters, such as the ratio between the fibers' distance and cross-section width, the strength of the contact sliding between yarns, and the partial clamp on the textile boundaries, determine the phenomena observed in shear experiments with textiles. Among these phenomena: the warp and weft yarns first change their in-plane angles and, after some critical shear angle is reached, the textile plate comes out of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left and bottom boundary, made of long thin fibers that cross each other in a periodic pattern. The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption, together with the partial clamp, adds new levels of complexity to the problem due to the anisotropy in the yarn's behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with respect to the yarns' thickness r and the distance between them e, parameters that are asymptotically related. The homogenization and dimension reduction are done via the unfolding method, which separates the macroscopic scale from the periodicity cell. In addition to the homogenization, a dimension reduction from a 3D to a 2D problem is applied. The main tools we developed to tackle this type of model are adaptations of the classical unfolding results to the anisotropic context and to lattice grids (which are constructed starting from the center lines of the rods crossing each other). They represent the first part of the thesis and are published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
Given the parameters mentioned above, we then proceed to classify different textile problems, incorporating the results from other works on the topic and thoroughly investigating some others. After the study is conducted, we draw conclusions and give a mathematical explanation concerning the expected approximation of the displacements, the expected solvability of the limit problems, and the phenomena mentioned above. The results can be found in "Asymptotic behavior for textiles with loose contact", which has been recently submitted.
Symplectic linear quotient singularities belong to the class of symplectic singularities introduced by Beauville in 2000.
They are linear quotients by a group preserving a symplectic form on the vector space and are necessarily singular by a classical theorem of Chevalley-Serre-Shephard-Todd.
We study \(\mathbb Q\)-factorial terminalizations of such quotient singularities, that is, crepant partial resolutions that are allowed to have mild singularities.
By a theorem of Verbitsky, the only symplectic linear quotients that can possibly admit a smooth \(\mathbb Q\)-factorial terminalization are those by symplectic reflection groups.
A smooth \(\mathbb Q\)-factorial terminalization is in this context referred to as a symplectic resolution, and over the past two decades there has been an ongoing effort to classify exactly which symplectic reflection groups give rise to quotients that admit symplectic resolutions.
We reduce this classification to finitely many, precisely 45, open cases by proving that for almost all quotients by symplectically primitive symplectic reflection groups no such resolution exists.
Concentrating on the groups themselves, we prove that a parabolic subgroup of a symplectic reflection group is generated by symplectic reflections as well.
This is a direct analogue of a theorem of Steinberg for complex reflection groups.
We further study divisor class groups of \(\mathbb Q\)-factorial terminalizations of linear quotients by finite subgroups \(G\) of the special linear group and prove that such a class group is completely controlled by the symplectic reflections - or more generally junior elements - contained in \(G\).
We finally discuss our implementation of an algorithm by Yamagishi for the computation of the Cox ring of a \(\mathbb Q\)-factorial terminalization of a linear quotient in the computer algebra system OSCAR.
We use this algorithm to construct a generating system of the Cox ring corresponding to the quotient by a dihedral group of order \(2d\) with \(d\) odd acting by symplectic reflections.
Although our argument follows the algorithm, the proof does not logically depend on computer calculations.
We are able to derive the \(\mathbb Q\)-factorial terminalization itself from the Cox ring in this case.