In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
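The prototypical PBW-algebra in this context is the Weyl algebra; as a standard reminder (not specific to the thesis), it is the algebra
\[
D_n \;=\; k\langle x_1,\dots,x_n,\partial_1,\dots,\partial_n\rangle \big/ \big(\partial_i x_j - x_j\partial_i - \delta_{ij};\ \text{all other pairs of generators commute}\big),
\]
whose elements have the PBW normal form \(\sum_{\alpha,\beta} c_{\alpha\beta}\, x^\alpha \partial^\beta\). A system of linear differential equations corresponds to a (filtered) module over \(D_n\), and the Gröbner basis framework of the thesis extends such normal-form computations from PBW-algebras to the bifiltered PBW-reduction-algebras mentioned above.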

Cell migration is essential for embryogenesis, wound healing, immune surveillance, and the progression of diseases such as cancer metastasis. For migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics and yields a well-defined piecewise deterministic process. The signaling pathways regulating contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These outcomes include the modulation of the collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population-scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
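The piecewise deterministic character of such a model can be illustrated with a minimal, hypothetical sketch (not the thesis's actual equations): a position variable relaxes deterministically towards an adhesion anchor between random adhesion-turnover events occurring at exponentially distributed times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy piecewise deterministic process: deterministic contraction towards an
# adhesion anchor `a`, interrupted by Poisson-distributed adhesion turnover.
k = 1.0            # contractile relaxation rate (deterministic part)
lam = 0.5          # adhesion turnover rate (jump intensity)
T, dt = 50.0, 0.01

x, a, t = 0.0, 0.0, 0.0
trajectory = [(t, x)]
next_jump = rng.exponential(1.0 / lam)
while t < T:
    x += dt * k * (a - x)                  # deterministic flow between jumps
    t += dt
    if t >= next_jump:                     # adhesion cluster rebinds elsewhere
        a = x + rng.normal(0.0, 1.0)
        next_jump = t + rng.exponential(1.0 / lam)
    trajectory.append((t, x))

print(f"final position after {T} time units: {x:.3f}")
```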

In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved using a viscosity solution method.
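A common way to write the underlying worst-case problem with at most one market crash of relative size at most \(\ell^*\) (a generic formulation, not necessarily the notation used in the thesis) is
\[
\sup_{\pi}\ \inf_{\tau,\ 0\le \ell\le \ell^*}\ \mathbb{E}\!\left[U\!\big(X_T^{\pi}\big)\right],
\qquad
X_{\tau}^{\pi} \;=\; \big(1-\pi_{\tau}\,\ell\big)\,X_{\tau-}^{\pi},
\]
where \(\tau\) is the unknown crash time, \(\ell\) the crash size and \(\pi\) the fraction of wealth held in the risky asset; the indifference principle mentioned above balances the value attained in the crash-free scenario against the value attained if the worst crash happens immediately.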

Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly observed in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than the common magnetostrictive materials. These materials have wide application areas: They are used as variable-stiffness devices, as sensors and actuators in mechanical systems or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
The presented models are supplemented with numerical simulations carried out with MATLAB for different examples including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in the state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.

For some optimization problems on a graph \(G=(V,E)\), one can give a general formulation: Given a cost function \(c\colon E \to \mathbb{R}_{\geq 0}\) on the edges and a set \(X \subseteq 2^E\) of (so-called feasible) subsets of \(E\), one aims to minimize \(\sum_{e\in S} c(e)\) among all feasible \(S\in X\). This formulation covers, for instance, the shortest path problem by choosing \(X\) as the set of all paths between two vertices, or the minimum spanning tree problem by choosing \(X\) to be the set of all spanning trees. This bachelor thesis deals with a parametric version of this formulation, where the edge costs \(c_\lambda\colon E \to \mathbb{R}_{\geq 0}\) depend on a parameter \(\lambda\in\mathbb{R}_{\geq 0}\) in a concave and piecewise linear manner. The goal is to investigate the worst-case minimum size of a so-called representation system \(R\subseteq X\), which contains an optimal solution \(S(\lambda)\in R\) for each scenario \(\lambda\in\mathbb{R}_{\geq 0}\). It turns out that only a pseudo-polynomial size can be ensured in general, but smaller systems exist in special cases. Moreover, methods are presented to find such small systems algorithmically. Finally, the notion of a representation system is relaxed in order to get smaller (i.e. polynomial-size) systems ensuring a certain approximation ratio.
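To make the role of the parameter explicit, the optimal value function of this parametric problem is the lower envelope
\[
z(\lambda) \;=\; \min_{S\in X}\ \sum_{e\in S} c_\lambda(e), \qquad \lambda\in\mathbb{R}_{\geq 0},
\]
which is again concave and piecewise linear, since each feasible set contributes a concave piecewise linear cost curve. Picking one optimal set per maximal interval of linearity of \(z\) yields a representation system, so the number of these intervals bounds the size needed for \(R\) (this observation is standard; the precise bounds are the subject of the thesis).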

Cutting-edge cancer therapy involves producing individualized medicine for many patients at the same time. Within this process, most steps can be completed for a certain number of patients simultaneously. Using these resources efficiently may significantly reduce waiting times for the patients and is therefore crucial for saving human lives. However, this involves solving a complex scheduling problem, which can mathematically be modeled as a proportionate flow shop of batching machines (PFB). In this thesis we investigate exact and approximate algorithms for tackling many variants of this problem. Related mathematical models have been studied before in the context of semiconductor manufacturing.

Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also in case of fibrosis in the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D micro computed synchrotron radiation (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by an active vessel growing.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well-known from material science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
It is already known that the most efficient mechanism of vessel growing forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growing. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
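As an illustration of the granulometry idea used for the vessel diameters, the following minimal sketch measures how much of a binary 3D structure survives morphological openings with balls of increasing radius (the helper names and the toy volume are assumptions, not the thesis's actual pipeline).

```python
import numpy as np
from scipy import ndimage

def ball(r):
    # discrete 3D ball structuring element of radius r
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= r**2

def granulometry(volume, radii):
    """Fraction of foreground voxels surviving an opening of radius r.

    The decreasing curve summarizes the vessel diameters; 1 minus the curve
    can be read as a size distribution function.
    """
    total = volume.sum()
    return [ndimage.binary_opening(volume, structure=ball(r)).sum() / total
            for r in radii]

# toy example: a 40^3 volume containing a single tube of width 5 voxels
vol = np.zeros((40, 40, 40), dtype=bool)
vol[18:23, 18:23, :] = True
print(granulometry(vol, radii=[1, 2, 3, 4]))
```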

Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the existence of a numerical Godeaux surface with \(\tilde{h}=1\).

SDE-driven modeling of phenotypically heterogeneous tumors: The influence of cancer cell stemness
(2018)

We deduce cell population models describing the evolution of a tumor (possibly interacting with its environment of healthy cells) with the aid of differential equations. Thereby, different subpopulations of cancer cells allow accounting for the tumor heterogeneity. In our settings these include cancer stem cells, known to be less sensitive to treatment, and differentiated cancer cells, having a higher sensitivity towards chemo- and radiotherapy. Our approach relies on stochastic differential equations in order to account for randomness in the system, arising e.g. from the therapy-induced decrease in the number of clonogens, which renders a purely deterministic model arguable. The equations are deduced relying on transition probabilities characterizing innovations of the two cancer cell subpopulations, and similarly extended to also account for the evolution of normal tissue. Several therapy approaches are introduced and compared by way of tumor control probability (TCP) and uncomplicated tumor control probability (UTCP). A PDE approach allows us to assess the evolution of tumor and normal tissue with respect to time and to cell population densities which can vary continuously in a given set of states. Analytical approximations of solutions to the obtained PDE system are provided as well.
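To illustrate the type of model, here is a minimal Euler-Maruyama sketch of a two-compartment stochastic system (stem and differentiated cells); all rates, the noise structure and the therapy term are placeholders rather than the coefficients derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-compartment toy SDE: stem cells s (less sensitive to therapy) and
# differentiated cells d, simulated with the Euler-Maruyama scheme.
r_s, r_d = 0.02, 0.05     # growth rates
k_s, k_d = 0.01, 0.08     # therapy-induced death rates (stem cells less sensitive)
sigma = 0.05              # multiplicative noise intensity
T, dt = 100.0, 0.1

s, d = 0.1, 0.9
for _ in range(int(T / dt)):
    dW_s, dW_d = rng.normal(0.0, np.sqrt(dt), size=2)
    s += (r_s - k_s) * s * dt + sigma * s * dW_s
    d += ((r_d - k_d) * d + r_s * s) * dt + sigma * d * dW_d
    s, d = max(s, 0.0), max(d, 0.0)          # densities stay nonnegative

print(f"remaining tumor burden after therapy: {s + d:.3f}")
```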

Optimal control of partial differential equations is an important task in applied mathematics where it is used in order to optimize, for example, industrial or medical processes. In this thesis we investigate an optimal control problem with tracking type cost functional for the Cattaneo equation with distributed control, that is, \(\tau y_{tt} + y_t - \Delta y = u\). Our focus is on the theoretical and numerical analysis of the limit process \(\tau \to 0\) where we prove the convergence of solutions of the Cattaneo equation to solutions of the heat equation.
We start by deriving both the Cattaneo and the classical heat equation as well as introducing our notation and some functional analytic background. Afterwards, we prove the well-posedness of the Cattaneo equation for homogeneous Dirichlet boundary conditions, that is, we show the existence and uniqueness of a weak solution together with its continuous dependence on the data. We need this in the following, where we investigate the optimal control problem for the Cattaneo equation: We show the existence and uniqueness of a global minimizer for an optimal control problem with tracking type cost functional and the Cattaneo equation as a constraint. Subsequently, we do an asymptotic analysis for \(\tau \to 0\) for both the forward equation and the aforementioned optimal control problem and show that the solutions of these problems for the Cattaneo equation converge strongly to the ones for the heat equation. Finally, we investigate these problems numerically, where we examine the different behaviour of the models and also consider the limit \(\tau \to 0\), suggesting a linear convergence rate.
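In standard notation, a tracking-type problem of the kind studied here can be written as (a generic formulation, with regularization weight \(\lambda\) and target state \(y_d\))
\[
\min_{u}\ J(y,u) \;=\; \frac{1}{2}\,\|y - y_d\|_{L^2(Q)}^2 \;+\; \frac{\lambda}{2}\,\|u\|_{L^2(Q)}^2
\quad\text{subject to}\quad \tau y_{tt} + y_t - \Delta y = u \ \text{in } Q, \qquad y = 0 \ \text{on } \Sigma,
\]
where \(Q\) is the space-time cylinder and \(\Sigma\) its lateral boundary; for \(\tau \to 0\) the constraint formally becomes the controlled heat equation.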

Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent one may use a mathematical model. As the brain structure plays an important role for the cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting, among other effects, for integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this end, we present programs for the data processing that take the raw medical data and bring it into the form required by the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
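A schematic macroscopic equation of the type obtained by the parabolic scaling is
\[
\partial_t m \;=\; \nabla\cdot\big(\mathbb{D}_T(x)\,\nabla m\big) \;-\; \nabla\cdot\big(m\,\mathbf{u}(x)\big) \;+\; \mu(x)\,m\,(1-m),
\]
with tumour cell density \(m\), a tumour diffusion tensor \(\mathbb{D}_T\) built from the imaging data, a drift \(\mathbf{u}\) reflecting the tissue anisotropy and a logistic-type proliferation term; the concrete coefficients in the thesis are derived from the subcellular dynamics and the DTI data, so the form above is only illustrative.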

Composite materials are used in many modern tools and engineering applications and consist of two or more materials that are intermixed. Features like inclusions in a matrix material are often very small compared to the overall structure. Volume elements that are characteristic for the microstructure can be simulated and their elastic properties are then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to anisotropic translation invariant spaces. We discretize the partial differential equation on these spaces and prove the convergence rate. The speed of convergence depends on the smoothness of the coefficients and the regularity of the ansatz space. The spaces of translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and periodic box splines, including the constant finite element discretization of Brisard and Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational effort. We introduce mixing rules as the means to transfer fine-grid information to the smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates and the mixing rules on the convergence of the method in two- and three-dimensional examples.
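For reference, the periodic Lippmann-Schwinger equation solved by the Moulinec-Suquet scheme [2] reads
\[
\varepsilon \;=\; E \;-\; \Gamma^{0} \ast \big( (\mathbb{C} - \mathbb{C}^{0}) : \varepsilon \big),
\]
where \(E\) is the prescribed macroscopic strain, \(\mathbb{C}^{0}\) a homogeneous reference stiffness and \(\Gamma^{0}\) the associated periodic Green operator. Since \(\Gamma^{0}\) acts as a multiplication in Fourier space, the fixed-point iteration for \(\varepsilon\) can be carried out with FFTs on the chosen lattice, which is the starting point for the generalizations developed in the thesis.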
References
[1] S. Brisard and L. Dormieux. “FFT-based methods for the mechanics of composites: A general variational framework”. In: Computational Materials Science 49.3 (2010), pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. “A numerical method for computing the overall response of nonlinear composites with complex microstructure”. In: Computer Methods in Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.

Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution for the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the Aluminium-Copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how we can estimate the parameters of the volume distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimations are within acceptable agreement with the parameters estimated from three-dimensional image data as well as with the parameters of the model.
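A minimal sketch of the log-normal fit for the particle volumes (the data below are synthetic placeholders; in the thesis the volumes come from the segmented FIB-SEM particles):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical particle volumes; real input would be the segmented SiC particles.
volumes = rng.lognormal(mean=3.0, sigma=0.6, size=500)

# Fit a log-normal volume distribution with the location fixed at zero,
# as is usual for strictly positive sizes, and read off its parameters.
shape, loc, scale = stats.lognorm.fit(volumes, floc=0)
print(f"fitted sigma = {shape:.3f}, median volume = {scale:.1f}")
```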

Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements, we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.

In the present master's thesis we investigate the connection between derivations and homogeneities of complete analytic algebras. We prove a theorem which describes a specific set of generators for the module of those derivations of an analytic algebra \(R\) that map the maximal ideal of \(R\) into itself. It turns out that this set has a structure similar to a Cartan subalgebra and contains information regarding multi-homogeneity. In order to prove this theorem, we extend the notion of grading by Scheja and Wiebe to projective systems and state the connection between multi-gradings and pairwise commuting diagonalizable derivations. We prove a theorem similar to Cartan's Conjugacy Theorem in the setup of infinite-dimensional Lie algebras which arise as projective limits of finite-dimensional Lie algebras. Using this result, we can show that the structure of the aforementioned set of generators is an intrinsic property of the analytic algebra. At the end we state an algorithm which is theoretically able to compute the maximal multi-homogeneity of a complete analytic algebra.

In this thesis we integrate discrete dividends into the stock model, estimate future outstanding dividend payments and solve different portfolio optimization problems. To this end, we discuss three well-known stock models including discrete dividend payments and develop a model which also takes early announcements into account.
In order to estimate the future outstanding dividend payments, we develop a general estimation framework. First, we investigate a model-free, no-arbitrage methodology based on the put-call parity for European options. Our approach integrates all available option market data and simultaneously calculates the market-implied discount curve. We illustrate our method using stocks of European blue-chip companies and show within a statistical assessment that the estimate performs well in practice. As American options are more common, we additionally develop a methodology based on market prices of American at-the-money options. This method relies on a linear combination of no-arbitrage bounds of the dividends, where the corresponding optimal weight is determined via a historical least squares estimation using realized dividends. We demonstrate our method using all Dow Jones Industrial Average constituents and provide a robustness check with respect to the discount factor used. Furthermore, we backtest our results against the method using European options and against a so-called simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization problem for a dividend-paying stock. In the case of the logarithmic utility function, we show that the optimal strategy is no longer constant but connected to the Merton strategy. Additionally, we solve a special optimal consumption problem in which the investor is only allowed to consume dividends. We show that this problem can be reduced to the previously solved terminal wealth problem.
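The core of the model-free, put-call-parity-based idea can be sketched for a single strike (the quotes below are hypothetical; the thesis aggregates all available strikes and estimates the discount curve simultaneously):

```python
import math

def implied_dividend_pv(call, put, spot, strike, rate, maturity):
    """Present value of dividends paid before maturity, implied by the
    European put-call parity  C - P = S0 - PV(D) - K * exp(-r * T)."""
    return spot - strike * math.exp(-rate * maturity) - (call - put)

# hypothetical market quotes for a one-year at-the-money pair
print(implied_dividend_pv(call=7.10, put=6.40, spot=100.0,
                          strike=100.0, rate=0.01, maturity=1.0))
```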

In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), \(q=p^n\) with \(p\) a prime, which is a Sylow \(p\)-subgroup of \(F_4(q)\), for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Unfortunately, the projective characters of \(F_4(q)\) we obtained were not sufficient to give the shape of the entire decomposition matrix.

A popular model for the locations of fibres or grains in composite materials is the inhomogeneous Poisson process in dimension 3. Its local intensity function may be estimated non-parametrically by local smoothing, e.g. by kernel estimates. These crucially depend on the choice of bandwidths as tuning parameters controlling the smoothness of the resulting function estimate. In this thesis, we propose a fast algorithm for learning suitable global and local bandwidths from the data. It is well known that intensity estimation is closely related to probability density estimation. As a by-product of our study, we show that the difference is asymptotically negligible regarding the choice of good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for kernel density estimates. Cross-validation is a popular one, frequently proposed to estimate the optimal bandwidth. However, if the sample size is very large, it becomes computationally expensive. In material science, in particular, it is very common to have several thousand up to several million points. Another type of bandwidth selection is a solve-the-equation plug-in approach which involves replacing the unknown quantities in the asymptotically optimal bandwidth formula by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating the optimal global and local bandwidths for density and intensity estimation, with a focus on 2- and 3-dimensional data. It is based on a detailed asymptotic analysis of the estimators of the intensity function and of its second derivatives and integrals of second derivatives, which appear in the formulae for asymptotically optimal bandwidths. These asymptotics are utilised to determine the exact number of iteration steps and some tuning parameters. For both the global and the local case, fewer than 10 iterations suffice. Simulation studies show that the intensity estimated with local bandwidths captures the variation of the local intensity better than the one estimated with a global bandwidth. Finally, the algorithm is applied to two real data sets from test bodies of fibre-reinforced high-performance concrete, clearly showing some inhomogeneity of the fibre intensity.
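For illustration, a minimal 2D kernel intensity estimate with a fixed global bandwidth (the plug-in iteration that would select the bandwidth is omitted; the names and the toy data are assumptions):

```python
import numpy as np

def kernel_intensity_2d(points, grid_x, grid_y, h):
    """Gaussian kernel intensity estimate on a grid with global bandwidth h.

    Unlike a density estimate, the kernel masses are not divided by the
    number of points, so the result integrates (roughly) to that number.
    """
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    lam = np.zeros_like(gx)
    for px, py in points:
        lam += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * h ** 2))
    return lam / (2 * np.pi * h ** 2)

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(200, 2))        # toy homogeneous point pattern
grid = np.linspace(0.0, 1.0, 50)
est = kernel_intensity_2d(pts, grid, grid, h=0.08)
print(f"mean estimated intensity: {est.mean():.1f} (true value 200, up to edge effects)")
```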

In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option pricing. In particular, we extend the approach of the HP estimator to pricing path-dependent options under the Heston model. The theoretical background of the estimator was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted as a control variate technique, and an application to European vanilla options was presented in [32]. For European vanilla options, the HP estimator provided a considerable amount of variance reduction. Thus, applying the technique to path-dependent options under the Heston model is the main contribution of this thesis.
The first part of the thesis deals with the implementation of the HP estimator for pricing one-sided knockout barrier options. The main difficulty in the implementation of the HP estimator lies in the determination of the first hitting time of the barrier. To test the efficiency of the HP estimator we conduct numerical tests with regard to various aspects. We provide a comparison among the crude Monte Carlo estimation, the crude control variate technique and the HP estimator for all types of barrier options. Furthermore, we present numerical results for at-the-money, in-the-money and out-of-the-money barrier options. As the numerical results imply, the HP estimator performs best among these methods for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator to pricing bond options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As suggested in the original paper of Heath and Platen [32], the HP estimator has a wide range of applicability for derivative pricing. Therefore, transferring the structure of the HP estimator to pricing bond options is a promising contribution. As the approximating Vasicek process does not seem to be as good as the deterministic volatility process in the Heston setting, the performance of the HP estimator in the CIR model is only relatively good. However, for the FV model the variance reduction provided by the HP estimator is again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator for pricing European vanilla options in the Heston model is presented. As supported by the numerical analysis, the HP estimator has weak convergence of order almost 1.
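The control-variate reading of the HP estimator can be illustrated with a generic Monte Carlo sketch (a toy payoff and control variate, not the Heston or CIR setting of the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)

# Generic control-variate Monte Carlo: estimate E[f(X)] using a control g(X)
# whose expectation is known in closed form, in the spirit of pairing the
# target model with an analytically tractable approximation.
n = 100_000
x = rng.normal(size=n)
f = np.maximum(np.exp(x) - 1.0, 0.0)     # payoff to be priced
g = np.exp(x)                            # control variate with E[g] = exp(1/2)
Eg = np.exp(0.5)

beta = np.cov(f, g)[0, 1] / np.var(g)    # variance-minimizing coefficient
cv = f.mean() - beta * (g.mean() - Eg)

print(f"plain MC estimate:        {f.mean():.4f}")
print(f"control-variate estimate: {cv:.4f}")
print(f"variance reduction factor: {np.var(f) / np.var(f - beta * g):.1f}")
```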

Multifacility location problems arise in many real-world applications. Often, the facilities can only be placed in feasible regions such as development or industrial areas. In this paper we show the existence of a finite dominating set (FDS) for the planar multifacility location problem with polyhedral gauges as distance functions and polyhedral feasible regions, provided the interacting facilities form a tree. As an application we show how to solve the planar 2-hub location problem in polynomial time. This approach yields an \(\varepsilon\)-approximation for the Euclidean norm case that is polynomial in the input data and \(1/\varepsilon\).

In this article a new numerical solver for simulations of district heating networks is presented. The numerical method applies the local time stepping introduced in [11] to networks of linear advection equations. In combination with the high order approach of [4] an accurate and very efficient scheme is developed. In several numerical test cases the advantages for simulations of district heating networks are shown.

In this paper, we demonstrate the power of functional data models for a statistical analysis of stimulus-response experiments which is a quite natural way to look at this kind of data and which makes use of the full information available. In particular, we focus on the detection of a change in the mean of the response in a series of stimulus-response curves where we also take into account dependence in time.

Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional and locally stationary case and secondly derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality, zero mean), and hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimator, in particular uniform convergence of the likelihoods, and consistency and Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove in the case of model misspecification that the minimum of our Whittle likelihood still converges, where the limit is the minimum of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations, the estimation of conditional autoregressive models, and a real data application.
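In its classical stationary form (the thesis works with a locally stationary, spatial variant), the Whittle likelihood to be minimized is
\[
\mathcal{L}_T(\theta) \;=\; \frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} \left( \log f_\theta(\omega) + \frac{I_T(\omega)}{f_\theta(\omega)} \right) d\omega,
\]
where \(f_\theta\) denotes the model spectral density and \(I_T\) the periodogram of the observed field.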

In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of \(\mathbb{Z}^n\) that are closed under minima, have a conductor and fulfill special compatibility properties on their elements. Subsets of \(\mathbb{Z}^n\) fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. In relation to the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
In the second part, we treat Macaulay's inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian \(k\)-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive dimensional Gorenstein \(k\)-algebras. We extend their result by establishing a one-to-one correspondence between positive dimensional level \(k\)-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.

We continue in this paper the study of \(k\)-adaptable robust solutions for combinatorial optimization problems with bounded uncertainty sets. In this concept one does not need to commit to a single solution to hedge against the uncertainty. Instead, one is allowed to choose a set of \(k\) different solutions, from which one can be chosen after the uncertain scenario has been revealed. We first show how the problem can be decomposed into polynomially many subproblems if \(k\) is fixed. In the remaining part of the paper we consider the special case \(k=2\), i.e., one is allowed to choose two different solutions to hedge against the uncertainty. We decompose this problem into so-called coordination problems. The study of these coordination problems turns out to be interesting in its own right. We prove positive results for the unconstrained combinatorial optimization problem, the matroid maximization problem, the selection problem, and the shortest path problem on series-parallel graphs. The shortest path problem on general graphs turns out to be NP-complete. Further, we show for minimization problems how to transform approximation algorithms for the coordination problem into approximation algorithms for the original problem. We study the knapsack problem to show that this relation does not hold for maximization problems in general. We present a PTAS for the corresponding coordination problem and prove that the 2-adaptable knapsack problem is not approximable at all.

This paper presents a case study of duty rostering for physicians at a department of orthopedics and trauma surgery. We provide a detailed description of the rostering problem faced and present an integer programming model that has been used in practice for creating duty rosters at the department for more than a year. Using real world data, we compare the model output to a manually generated roster as used previously by the department and analyze the quality of the rosters generated by the model over a longer time span. Moreover, we demonstrate how unforeseen events such as absences of scheduled physicians are handled.
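A toy integer program of the same flavour, using the PuLP modelling library (the constraints below are illustrative placeholders; the model used at the department is considerably richer):

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

physicians, days = ["A", "B", "C"], list(range(7))
x = LpVariable.dicts("duty", (physicians, days), cat=LpBinary)
z = LpVariable("max_duties", lowBound=0)           # largest individual duty count

prob = LpProblem("toy_duty_roster", LpMinimize)
prob += z                                          # spread duties as evenly as possible
for d in days:                                     # exactly one physician on duty per day
    prob += lpSum(x[p][d] for p in physicians) == 1
for p in physicians:
    prob += lpSum(x[p][d] for d in days) <= z      # duty count bounded by z
    for d in days[:-1]:                            # no two consecutive duties
        prob += x[p][d] + x[p][d + 1] <= 1

prob.solve()
print([(p, d) for p in physicians for d in days if x[p][d].value() == 1])
```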

Order-semi-primal lattices
(1994)

A nonequilibrium situation governed by kinetic equations with strongly contrasted Knudsen numbers in different subdomains is discussed. We consider a domain decomposition problem for the Boltzmann and Euler equations, establish the correct coupling conditions and prove the validity of the obtained coupled solution. Moreover, numerical examples comparing different types of coupling conditions are presented.

We are concerned with a parameter choice strategy for the Tikhonov regularization \((\tilde{A}+\alpha I)\tilde{x} = T^*\tilde{y} + w\), where \(\tilde{A}\) is a (not necessarily selfadjoint) approximation of \(T^*T\) and \(T^*\tilde{y} + w\) is a perturbed form of the (not exactly computed) term \(T^*y\). We give conditions for convergence and optimal convergence rates.

A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated from the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land\) and \(\lor\) finitely often. Every polynomial function in one or several variables is a monotone function of \(\mathcal{L}\).
If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results: A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.

On derived varieties
(1996)

Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. In particular, the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.

It is shown that Tikhonov regularization for an ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
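In this setting, the regularized solution \(x_\alpha^\delta\) minimizes \(\|Kx - y^\delta\|^2 + \alpha\,\|Lx\|^2\), and Morozov's discrepancy principle chooses \(\alpha\) so that the residual matches the noise level \(\delta\), i.e.
\[
\|K x_\alpha^\delta - y^\delta\| \;=\; \delta
\]
(or a fixed multiple of \(\delta\)).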

The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm" - a parametric variant of the simplex algorithm, which has been proposed by Borgwardt [1]. The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.

Let \(a_i\), \(i = 1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), integral representations of their first two moments are given which lead to asymptotic estimations of variances for special "additive variables" known from stochastic approximation theory in the case of rotationally symmetric distributions.

Let \(a_1,\dots,a_m\) be independent random points in \(\mathbb{R}^n\) that are identically and spherically symmetrically distributed. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

Let \((a_i)_{i\in \mathbb{N}}\) be a sequence of identically and independently distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\) and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \(a_1,\dots,a_n\). Furthermore, let \(\Delta(X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\Delta(X_n)/E(\Delta(X_n))\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.

Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge3\), and let \(X:= \{x \in \mathbb{R}^n \mid a_i^T x\leq 1,\ i=1,\dots,m\}\) be the random polyhedron generated by them. Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation value \(E(S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E(S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm - a parametric variant of the simplex algorithm. We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.

Let \(A:= \{a_i\mid i= 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider as a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of the halfspaces \(\{x \mid a_i^T x\leq 1\}\). We introduce a class of polyhedral functionals we will call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields; the emphasis in our contribution is on those which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in order to unify and to simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(\mathrm{Vol}_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t):= \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(\mathrm{Vol}_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

Let \(a_i\), \(i=1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), simple integral representations of their first two moments are given in the case of rotationally symmetric distributions in order to facilitate estimations of variances or to quantify large deviations from the mean.

Max ordering (MO) optimization is introduced as a tool for modelling production planning with unknown lot sizes and in scenario modelling. In MO optimization, a feasible solution set \(X\) and, for each \(x\in X\), \(Q\) individual objective functions \(f_1(x),\dots,f_Q(x)\) are given. The max ordering objective \(g(x):=\max\{f_1(x),\dots,f_Q(x)\}\) is then minimized over all \(x\in X\). The paper discusses complexity results and describes exact and approximative algorithms for the case where \(X\) is the solution set of combinatorial optimization problems and network flow problems, respectively.

In this paper the existence of translation transversal designs, which is equivalent to the existence of certain particular partitions in finite groups, is studied. All considerations are based on the fact that the particular component of such a partition (the component representing the point classes of the corresponding design) is a normal subgroup of the translation group. With regard to groups admitting an \((s,k,\lambda)\)-partition, on the one hand the already known families of such groups are determined without using R. BAER's, O. H. KEGEL's and M. SUZUKI's classification of finite groups with partition, and on the other hand some new results on the special structure of \(p\)-groups are proved. Furthermore, the existence of a series of nonabelian \(p\)-groups of odd order which can be represented as translation groups of certain \((s,k,1)\)-translation transversal designs is shown; moreover, the translation groups are normal subgroups of collineation groups acting regularly on the set of flags of the same designs.

We show that the different module structures of GF(\(q^m\)) arising from the intermediate fields of GF(\(q^m\)) and GF(q) can be studied simultaneously with the help of some basic properties of cyclotomic polynomials. We use these ideas to give a detailed and constructive proof of the most difficult part of a theorem of D. Blessenohl and K. Johnsen (1986), i.e., the existence of elements v in GF(\(q^m\)) over GF(q) which generate normal bases over any intermediate field of GF(\(q^m\)) and GF(q), provided that m is a prime power. Such elements are called completely free in GF(\(q^m\)) over GF(q). We develop a recursive formula for the number of completely free elements in GF(\(q^m\)) over GF(q) in the case where m is a prime power. Some of the results can be generalized to finite cyclic Galois extensions over arbitrary fields.

In this paper we continue the study of \(p\)-groups G of square order \(p^{2n}\) and investigate the existence of partial congruence partitions (sets of mutually disjoint subgroups of order \(p^n\)) in G. Partial congruence partitions are used to construct translation nets and partial difference sets, two objects studied extensively in finite geometries and combinatorics. We prove that the maximal number of mutually disjoint subgroups of order \(p^n\) in a group G of order \(p^{2n}\) cannot be more than \((p^{n-1}-1)(p-1)^{-1}\) provided that \(n\ge4\) and that G is not elementary abelian. This improves a result in [6], and as we do not distinguish the cases p=2 and p odd in the present paper, we also have a generalization of D. FROHARDT's theorem on 2-groups in [4]. Furthermore we study groups of order \(p^6\). We can show that for each odd prime number, there exist exactly four nonisomorphic groups which contain at least p+2 mutually disjoint subgroups of order \(p^3\). Again, as we do not distinguish between the even and the odd case in advance, we in particular obtain D. GLUCK's and A. P. SPRAGUE's classification of groups of order 64 which contain at least 4 mutually disjoint subgroups of order 8 in [5] and [13], respectively.

We present a generalization of Proth's theorem for testing certain large integers for primality. The use of Gauß sums leads to a much simpler approach to these primality criteria as compared to the earlier tests. The running time of the algorithms is bounded by a polynomial in the length of the input string. The applicability of our algorithms is linked to certain diophantine approximations of \(l\)-adic roots of unity.
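For orientation, the classical Proth criterion that is being generalized can be coded directly (a sketch of the classical test only, not of the Gauß-sum-based generalization developed in the paper):

```python
def proth_test(N, bases=(3, 5, 7, 11, 13)):
    """Classical Proth test for N = k * 2**n + 1 with odd k < 2**n.

    If some base a satisfies a**((N-1)//2) == -1 (mod N), then N is prime;
    otherwise the test is inconclusive (and N is composite if enough bases
    are tried, since for prime N half of all bases succeed)."""
    for a in bases:
        if pow(a, (N - 1) // 2, N) == N - 1:
            return True
    return False

print(proth_test(13))             # 13 = 3 * 2**2 + 1 is prime   -> True
print(proth_test(9 * 2**4 + 1))   # 145 = 5 * 29 is composite    -> False
```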

Hyperidentities
(1992)

The concept of a free algebra plays an essential role in universal algebra and in computer science. Manipulation of terms, calculations and the derivation of identities are performed in free algebras. Word problems, normal forms, systems of reductions, unification and finite bases of identities are topics in algebra and logic as well as in computer science. A very fruitful point of view is to consider structural properties of free algebras. A. I. Malcev initiated a thorough study of the congruences of free algebras. Since then, congruence permutable, congruence distributive and congruence modular varieties have been intensively studied. A lot of Malcev-type theorems are connected to the congruence lattice of free algebras. Here we consider free algebras as semigroups of compositions of terms and, more specifically, as clones of terms. The properties of these semigroups and clones are adequately described by hyperidentities. Naturally, a lot of theorems of "semigroup" or "clone" type can be derived. This topic of research is still in its beginnings, and therefore a lot of concepts and results cannot be presented in a final and polished form. Furthermore, a lot of problems and questions are open which are of importance for the further development of the theory of hyperidentities.

Nonwoven materials are used as filter media which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, the advanced engine technologies require innovative filter media with higher performances. A virtual microstructure of the nonwoven filter medium, which has similar filter properties as the existing material, can be used to design new filter media from existing media. Nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. To this end, as part of this thesis, we extend the Altendorf-Jeulin individual fiber model to incorporate all the above mentioned features. The resulting novel stochastic 3D fiber model can generate geometries with good visual resemblance of real filter media. Furthermore, pressure drop, which is one of the important physical properties of the filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the generated microstructure realizations from our model.
Generally, filter properties for the CT data and generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulation and a statistical approach are used to estimate the representative domain size of our microstructure model. Precisely, the representative domain size with respect to the packing density, the pore size distribution, and the pressure drop is considered. It turns out that the statistical approach can estimate the representative domain size for the given property more precisely, and using fewer generated microstructures, than the purely simulation-based approach.
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method that way.

We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the \(L^2\)-space of a differentiable measure the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators to extend the basic results of Malliavin and Stroock on the smoothness of finite dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.
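For orientation, in the classical Gaussian case referred to above the operator in question is the Ornstein-Uhlenbeck operator, which on smooth cylinder functions over \(\mathbb{R}^n\) has the familiar form

\[ Lf(x) = \Delta f(x) - \langle x, \nabla f(x) \rangle ; \]

the abstract concerns the extension of the corresponding gradient, divergence and Laplacian calculus to differentiable non-Gaussian measures.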

Weighted k-cardinality trees
(1992)

We consider the k-CARD TREE problem, i.e., the problem of finding in a given undirected graph G a subtree with k edges having minimum weight. Applications of this problem arise in oil-field leasing and facility layout. While the general problem is shown to be strongly NP-hard, it can be solved in polynomial time if G is itself a tree. We give an integer programming formulation of k-CARD TREE and an efficient exact separation routine for a set of generalized subtour elimination constraints. The polyhedral structure of the convex hull of the integer solutions is studied.
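A formulation along the lines indicated, written with binary edge variables \(x_e\) and auxiliary node variables \(y_v\) (our notation; the constraint system actually used in the paper may differ in detail), is

\[
\min \sum_{e \in E} w_e x_e
\quad \text{s.t.} \quad
\sum_{e \in E} x_e = k, \qquad
\sum_{v \in V} y_v = k+1, \qquad
\sum_{e \in E(S)} x_e \;\le\; \sum_{u \in S} y_u - y_v
\ \ \text{for all } S \subseteq V,\ |S| \ge 2,\ v \in S,
\]

where \(E(S)\) denotes the set of edges with both endpoints in \(S\). The last family are generalized subtour elimination constraints; they exclude cycles while allowing the tree to span only k+1 of the nodes.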

Efficient algorithms and structural results are presented for median problems with two new facilities, including the classical 2-Median problem, the 2-Median problem with forbidden regions, and bicriterial 2-Median problems. This is the first paper dealing with multi-facility multiobjective location problems. The time complexity of all presented algorithms is O(M log M), where M is the number of existing facilities.

Limits of instantons
(1991)

Despite their very good empirical performance, most variants of the simplex algorithm require, in the worst case, exponentially many pivot steps in terms of the problem dimensions of the given linear programming problem (LPP). The first to explain the large gap between practical experience and this disappointing worst-case behaviour was Borgwardt (1982a,b), who proved polynomiality on the average for a certain variant of the algorithm, the "Schatteneckenalgorithmus" (shadow vertex algorithm), using a stochastic problem simulation.

The notion of Q-Gorenstein smoothings has been introduced by Kollar ([Ko], 6.2.3). This notion is essential for formulating Kollar's conjectures on smoothing components for rational surface singularities. He conjectures, loosely speaking, that every smoothing of a rational surface singularity can be obtained by blowing down a deformation of a partial resolution, this partial resolution having the property (among others) that the singularities occurring on it all have qG-smoothings. (For more details and precise statements see [Ko], ch. 6.) It is therefore of interest to construct singularities having qG-smoothings.

This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts).
Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
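As an illustration of the method under study, here is a minimal numerical sketch, assuming a symmetric positive definite matrix A as a finite-dimensional stand-in for the discretized elliptic operator; in practice the inner solves would be carried out by a sparse or multigrid elliptic solver, and all names below are ours.

```python
import numpy as np

def smallest_eigpair(A, m=30, seed=0):
    """Sketch: Lanczos applied to A^{-1} ("inverse iteration without shifts")
    to approximate the smallest eigenpair of a symmetric positive definite A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    Q[:, 0] = rng.standard_normal(n)
    Q[:, 0] /= np.linalg.norm(Q[:, 0])
    for j in range(m):
        w = np.linalg.solve(A, Q[:, j])          # one application of A^{-1}
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    # Ritz values of A^{-1} from the tridiagonal Lanczos matrix
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    theta, S = np.linalg.eigh(T)
    lam = 1.0 / theta[-1]     # largest Ritz value of A^{-1} -> smallest eigenvalue of A
    v = Q[:, :m] @ S[:, -1]   # corresponding Ritz vector
    return lam, v
```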

This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration.
These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
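A schematic version of such a truncated Newton (Newton-CG) iteration might look as follows; this is only a sketch under simplifying assumptions (finite-dimensional, explicit Jacobian, CG on the normal equations with a relative-residual cutoff as the inner stopping rule), and all names are ours rather than the paper's notation.

```python
import numpy as np

def truncated_newton(F, Fprime, y_delta, x0, delta, tau=2.0, eta=0.5, max_outer=50):
    """Sketch of a truncated Newton method for an ill-posed nonlinear problem
    F(x) = y_delta: the linearized step is computed by CG on the normal
    equations, truncated when that residual is reduced by the factor eta;
    the outer loop is stopped by the discrepancy principle."""
    x = x0.copy()
    for _ in range(max_outer):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:     # discrepancy principle
            break
        A = Fprime(x)                            # Jacobian (matrix stand-in for F'(x))
        h = np.zeros_like(x)                     # Newton step, built up by CG
        s = A.T @ r                              # residual of the normal equations
        p = s.copy()
        s0 = np.linalg.norm(s)
        while np.linalg.norm(s) > eta * s0:      # truncated inner iteration
            Ap = A.T @ (A @ p)
            a = (s @ s) / (p @ Ap)
            h = h + a * p
            s_new = s - a * Ap
            p = s_new + ((s_new @ s_new) / (s @ s)) * p
            s = s_new
        x = x + h
    return x
```

Here delta denotes the noise level, tau > 1 the discrepancy-principle factor, and eta the prescribed percentage to which the inner residual is reduced.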

A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
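For a bounded linear operator \(T\) (the closed, densely defined case treated here requires more care), the nonstationary iterated Tikhonov iteration referred to above has the familiar form

\[ x_{k} = x_{k-1} + \bigl(T^{*}T + \alpha_{k} I\bigr)^{-1} T^{*}\bigl(y - T x_{k-1}\bigr), \qquad k = 1, 2, \dots, \]

with iteration parameters \(\alpha_k > 0\); a geometrically decaying choice such as \(\alpha_k = \alpha_0 q^{k}\), \(0 < q < 1\), is one admissible example.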

The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature.
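The parameter choice described above can be sketched as follows (our notation, not necessarily the paper's): with current iterate \(x_k\), noisy data \(y^{\delta}\) and Levenberg-Marquardt step

\[ h_k(\alpha) = \bigl(F'(x_k)^{*}F'(x_k) + \alpha I\bigr)^{-1} F'(x_k)^{*}\bigl(y^{\delta} - F(x_k)\bigr), \]

the Lagrange parameter \(\alpha\) is chosen so that the linearized residual satisfies

\[ \bigl\| y^{\delta} - F(x_k) - F'(x_k)\, h_k(\alpha) \bigr\| = q\, \bigl\| y^{\delta} - F(x_k) \bigr\| \]

for a fixed factor \(0 < q < 1\), i.e., the linearized equation is solved only inexactly, in the spirit of an inexact Newton method.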

Facility location problems in the plane are among the most widely used tools of Mathematical Programming in modeling real-world problems. In many of these problems restrictions have to be considered which correspond to regions in which a placement of new locations is forbidden. We consider center and median problems where the forbidden set is
a union of pairwise disjoint convex sets. As applications we discuss the assembly of printed circuit boards, obnoxious facility location and the location of emergency facilities.

We investigate two versions of multiple objective minimum spanning tree problems defined on a network with vectorial weights. First, we want to minimize the maximum of Q linear objective functions taken over the set of all spanning trees (max-linear spanning tree problem, ML-ST). Secondly, we look for efficient spanning trees (multicriteria spanning tree problem, MC-ST). Problem ML-ST is shown to be NP-complete. An exact algorithm which is based on ranking is presented. The procedure can also be used as an approximation scheme. For solving the bicriterion problem MC-ST, which in the worst case may have an exponential number of efficient trees, a two-phase procedure is presented. Based on the computation of extremal efficient spanning trees, we use neighbourhood search to determine a sequence of solutions with the property that the distance between two consecutive solutions is less than a given accuracy.

Given Q different objective functions, three types of single-facility problems
are considered: lexicographic, pareto and max ordering problems. After discussing the interrelation between the problem types, a complete characterization of lexicographic locations and some instances of pareto and max ordering locations is given. The characterizations result in efficient solution algorithms for finding these locations. The paper relies heavily on the theory of restricted locations developed by the same authors, and can be further extended, for instance, to multifacility problems with several objectives. The proposed approach is more general than previously published results on multicriteria planar location problems and is particularly suited for modelling real-world problems.

Moduli for singularities
(1991)

The aim of this article is to give a survey on recent results about moduli spaces for curve singularities and for modules over the local ring of a fixed curve singularity. We emphasize especially the general concept which lies behind these constructions.
Therefore, the article might be useful to the reader who wishes to have the leading ideas and the main steps of the proofs explained without going into all the details. We also calculate explicit examples (for singularities and for modules) which illustrate
the general theorems.

In this paper we investigate two optimization problems for matroids with multiple objective functions, namely finding the pareto set and solving the max-ordering problem, which consists in finding a basis such that the largest objective value is minimal. We prove that the decision versions of both problems are NP-complete. A solution procedure for the max-ordering problem is presented, and a result on the relation of the solution sets of the two problems is given. The main results are a characterization of pareto bases by a basis exchange property and, finally, a connectivity result for proper pareto solutions.

In this paper we will introduce the concept of lexicographic max-ordering solutions for multicriteria combinatorial optimization problems. Section 1 provides the basic notions of
multicriteria combinatorial optimization and the definition of lexicographic max-ordering solutions. In Section 2 we will show that lexicographic max-ordering solutions are pareto optimal as well as max-ordering optimal solutions. Furthermore lexicographic max-ordering solutions can be used to characterize the set of pareto solutions. Further properties of lexicographic max-ordering solutions are given. Section 3 will be devoted to algorithms. We give a polynomial time algorithm for the two criteria case where one criterion is a sum and one is a bottleneck objective function, provided that the one criterion sum problem is solvable in polynomial time. For bottleneck functions an algorithm for the general case of Q criteria is presented.
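Operationally, the comparison underlying this solution concept sorts the objective vector of each feasible solution in nonincreasing order and compares the sorted vectors lexicographically (for minimization). A minimal sketch, with function names of our choosing:

```python
def lex_max_ordering_key(objective_vector):
    """Sort the Q objective values in nonincreasing order; comparing these
    tuples lexicographically yields the lexicographic max-ordering."""
    return tuple(sorted(objective_vector, reverse=True))

def lex_max_ordering_optimum(solutions, objectives):
    """Among an explicitly given finite set of feasible solutions, return one
    that is lexicographic max-ordering optimal; objectives(s) must return
    the vector of the Q objective values of s."""
    return min(solutions, key=lambda s: lex_max_ordering_key(objectives(s)))
```

For instance, an objective vector (3, 2, 0) is preferred to (1, 2, 3), since the sorted vectors (3, 2, 0) and (3, 2, 1) first differ in their third components.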

In multiple criteria optimization an important research topic is the topological structure of the set \( X_e \) of efficient solutions. Of major interest is the connectedness of \( X_e \), since it would allow the determination of \( X_e \) without considering non-efficient solutions in the
process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We will also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.

The paper deals with parallel-machine and open-shop scheduling problems with preemptions and arbitrary nondecreasing objective function. An approach to describe
the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary time. A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type.

The thesis studies change points in absolute time for censored survival data with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied. Typical examples are piecewise constant and piecewise linear hazard functions. For that kind of models, we propose a new algorithm for numerical calculation of maximum likelihood estimates based on a cross entropy approach which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate if the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests related to kernel probability density estimates, we develop fully the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
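For reference, the Nelson-Aalen estimate of the cumulative hazard mentioned above can be computed directly from right-censored data; the following is a minimal sketch (not the thesis code; the kernel-smoothed hazard estimates and change point statistics are not reproduced here).

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard from right-censored data.
    times: observed times (event or censoring); events: 1 = event, 0 = censored.
    Returns the distinct event times t_i and H(t_i) = sum_{t_j <= t_i} d_j / n_j,
    where d_j is the number of events and n_j the number at risk at t_j."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])
    H, cum = np.zeros(len(event_times)), 0.0
    for i, t in enumerate(event_times):
        d = np.sum((times == t) & (events == 1))   # events at time t
        n = np.sum(times >= t)                     # number at risk just before t
        cum += d / n
        H[i] = cum
    return event_times, H
```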