### Refine

#### Year of publication

- 2011 (51)

#### Document Type

- Doctoral Thesis (24)
- Report (16)
- Preprint (9)
- Habilitation (1)
- Periodical Part (1)

#### Language

- English (51)

#### Keywords

- Visualisierung (3)
- autoregressive process (3)
- neural network (3)
- nonparametric regression (3)
- (dynamic) network flows (1)
- American options (1)
- CUSUM statistic (1)
- Change analysis (1)
- Chow Quotient (1)
- Codierung (1)
- Compiler (1)
- Computervision (1)
- Copula (1)
- Credit Default Swap (1)
- DSM (1)
- Data Spreading (1)
- Datenspreizung (1)
- Distributed Rendering (1)
- Dynamic Network Flows (1)
- FPTAS (1)
- Femtosecond Laser (1)
- Finite Pointset Method (1)
- Fließanalyse (1)
- FlowLoc (1)
- Green’s function (1)
- Heston model (1)
- Heterogene Katalyse (1)
- INGARCH (1)
- Immobilisierung (1)
- Informationsübertragung (1)
- Integer-valued time series (1)
- Interaktion (1)
- Invariante Momente (1)
- Katalyse (1)
- Katalytische Hydrierung (1)
- Knapsack problem (1)
- Korrelationsanalyse (1)
- Large High-Resolution Displays (1)
- Leistungseffizienz (1)
- LiDAR (1)
- Local smoothing (1)
- Markov Chain (1)
- Markov Kette (1)
- Material Properties under Extreme Conditions (1)
- Mathematik (1)
- Mensch-Maschine-Kommunikation (1)
- Merkmalsextraktion (1)
- Merkmalsraum (1)
- Mesh-Free (1)
- Methode der Fundamentallösungen (1)
- Mobile Telekommunikation (1)
- Moment Invariants (1)
- Momentum and Mass Transfer (1)
- Monte Carlo methods (1)
- Monte-Carlo Modelling (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Field (1)
- Multiphase Flows (1)
- Mustererkennung (1)
- Nonequilibrium Electron Kinetics (1)
- Non-commutative Computer Algebra (1)
- OFDM (1)
- Order (1)
- Parallel volume (1)
- Pattern Recognition (1)
- Poisson autoregression (1)
- Population Balance Equation (1)
- Poroelastizität (1)
- Power Efficiency (1)
- Pseudopolynomial-Time Algorithm (1)
- Rank test (1)
- Restricted Shortest Path (1)
- Scalar (1)
- Scattered-Data-Interpolation (1)
- Scientific Visualization (1)
- Shapley Value (1)
- Shortest path problem (1)
- Siliciumdioxid (1)
- Similarity measures (1)
- Skalar (1)
- Standard basis (1)
- Strömung (1)
- Swift Heavy Ion (1)
- Tensor (1)
- Theorie schwacher Lösungen (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Urban Flooding (1)
- VCG payment scheme (1)
- Vector (1)
- Vector Field (1)
- Vektor (1)
- Vektorfelder (1)
- Verzerrungstensor (1)
- Visualization (1)
- Volumen-Rendering (1)
- Wills functional (1)
- Winner Determination Problem (WDP) (1)
- additive Gaussian noise (1)
- bedingte Aktionen (1)
- calls (1)
- catalysis (1)
- change analysis (1)
- changepoint test (1)
- combinatorial optimization (1)
- combinatorial procurement (1)
- connectedness (1)
- constrained systems (1)
- convex optimization (1)
- correlation (1)
- correlated errors (1)
- denoising (1)
- deterministic technical systems (1)
- discrete variational mechanics (1)
- dynamic capillary pressure (1)
- finite volume method (1)
- flexible multibody dynamics (1)
- gradient descent reprojection (1)
- gravitation (1)
- guarded actions (1)
- heterogeneous catalysis (1)
- heuristic (1)
- higher-order moments (1)
- hydrogenation (1)
- image restoration (1)
- immobilization (1)
- local search algorithm (1)
- locally supported wavelets (1)
- location theory (1)
- logistic regression (1)
- magnetic field (1)
- maximum a posteriori estimation (1)
- maximum likelihood estimation (1)
- method of fundamental solutions (1)
- moment matching (1)
- multicriteria optimization (1)
- multiplicative noise (1)
- non-convex body (1)
- non-local filtering (1)
- nonlinear stochastic systems (1)
- numerical irreducible decomposition (1)
- oblique derivative (1)
- optimal control (1)
- options (1)
- poroelasticity (1)
- potential (1)
- pressing section of a paper machine (1)
- puts (1)
- real-time scheduling (1)
- real-world accident data (1)
- regularization methods (1)
- resource constrained shortest path problem (1)
- safety function (1)
- sequential test (1)
- silica (1)
- single layer kernel (1)
- spherical decomposition (1)
- splines (1)
- statistical modeling (1)
- steady modified Richards’ equation (1)
- strongly polynomial-time algorithm (1)
- superstep cycles (1)
- suzuki coupling (1)
- swap (1)
- synchrone Sprachen (1)
- synchronous languages (1)
- target sensitivity (1)
- time utility functions (1)
- tree method (1)
- uniform central limit theorem (1)
- universal objective function (1)
- variational integrators (1)
- volatility (1)
- weak solution theory (1)

The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral regions, this is impossible: depending on the spectroscopic technique, both the way the data is acquired and the characteristics that are to be detected differ. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, among which are, in particular, the security sector and the pharmaceutical industry.
Because in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the necessary user input should be minimized on the one hand, and the adaptation to the specific application should be maximized on the other.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods will be analyzed and supplemented where necessary, and improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, a feature reduction that represents the chemical content in about 20 channels instead of the initial hundreds will be presented. Finally, the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.

In this paper, we present a viscoelastic rod model that is suitable for fast and accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is able to represent extension, shearing (‘stiff’ dof), bending and torsion (‘soft’ dof). For inner dissipation, a consistent damping potential proposed by Antman is chosen. We parametrise the rotational dof by unit quaternions and directly use the quaternionic evolution differential equation for the discretisation of the Cosserat rod curvature. The discrete version of our rod model is obtained via a finite difference discretisation on a staggered grid. After an index reduction from three to zero, the right-hand side function f and the Jacobian \(\partial f/\partial(q, v, t)\) of the dynamical system \(\dot{q} = v, \dot{v} = f(q, v, t)\) are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. A comparison with Abaqus finite element results demonstrates the correct mechanical behavior of our discrete rod model. For the time integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computational times within milliseconds, it is suitable for interactive applications in ‘virtual reality’ as well as for multibody dynamics simulation.
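As a rough illustration of the last point, a reduced system of the form \(\dot{q} = v, \dot{v} = f(q, v, t)\) with a cheap right-hand side can be handed directly to an implicit Radau-type stiff solver. The sketch below uses SciPy's Radau implementation with a made-up stiff placeholder force; the actual rod right-hand side is not reproduced here.

```python
# Hedged sketch: integrating a second-order system q' = v, v' = f(q, v, t)
# with an implicit Radau method, analogous to using RADAU5/DASPK.
# A stiff damped oscillator stands in for the rod's right-hand side f.
import numpy as np
from scipy.integrate import solve_ivp

def f(q, v, t):
    # placeholder "stiff" force: strong restoring term plus damping
    k, c = 1.0e4, 10.0
    return -k * q - c * v

def rhs(t, y):
    n = y.size // 2
    q, v = y[:n], y[n:]
    return np.concatenate([v, f(q, v, t)])

y0 = np.concatenate([np.ones(3), np.zeros(3)])   # initial (q, v)
sol = solve_ivp(rhs, (0.0, 1.0), y0, method="Radau", rtol=1e-6, atol=1e-9)
print(sol.y[:3, -1])                              # final configuration q
```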

This work presents a proof of convergence of a discrete solution to a continuous one. At first, the continuous problem is stated as a system of equations which describe the filtration process in the pressing section of a paper machine. Two flow regimes appear in the modeling of this problem. The saturated flow is modelled by Darcy's law and mass conservation. The second regime is described by the Richards approach together with a dynamic capillary pressure model. The finite volume method is used to approximate the system of PDEs. Then the existence of a discrete solution to the proposed finite difference scheme is proven. Compactness of the set of all discrete solutions for different mesh sizes is proven. The main theorem shows that the discrete solution converges to the solution of the continuous problem. At the end we present numerical studies of the rate of convergence.

Continuously improving imaging technologies make it possible to capture the complex spatial geometry of particles. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize size and shape descriptors used in 2d particle analysis to the spatial setting.
While being defined and meaningful for arbitrary particles, the characteristics were selected with the application to technical cleanliness in mind. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography allows dangerous particles to be detected reliably and with high throughput. It thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.

Input loads are essential for the numerical simulation of vehicle multibody system (MBS) models. Such load data is called invariant if it is independent of the specific system under consideration. A digital road profile, e.g., can be used to excite MBS models of different vehicle variants. However, quantities efficiently obtained by measurement, such as wheel forces, are typically not invariant in this sense. This leads to the general task of deriving invariant loads on the basis of measurable, but system-dependent, quantities. We present an approach to derive input data for full-vehicle simulation that can be used to simulate different variants of a vehicle MBS model. An important ingredient of this input data is a virtual road profile computed by optimal control methods.

This research work focuses on the generation of a high resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns by causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in use in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. This is mainly due to the lack of a high resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing hydrologic and hydraulic properties of complex urban areas are the prerequisite for more accurately describing and simulating flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light detection and ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density, large-volume unstructured LiDAR data into fine resolution spatial databases is a difficult and time-consuming task when relying on human intervention alone. Robust algorithms for processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques pertaining to processing, filtering and classification of LiDAR point data in order to achieve a higher degree of automation and accuracy in generating a high resolution urban surface model. This research work also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
While the accurate characterization of various surface types contributes to better modelling of rainfall runoff processes, the LiDAR-derived fine resolution DSM serves as input to 2D hydraulic models capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aiming at improving the accuracy of hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the performance of the hydraulic capacity of urban drainage systems and 3) predict and visualize the surface flooding scenarios in order to take necessary flood protection measures.

The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires a precise knowledge of the geometry of its microstructure.
The first part of this thesis describes a fiber quantification method by means of local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing times. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive a probability for each pixel describing whether the pixel belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions. These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring the affiliation to the same fiber.
In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap can be bridged by our stochastic model, which operates in two steps: firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers; secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide the estimation of all parameters needed for the fitting of this model to a real microstructure.
Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
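A minimal sketch of the first modelling step described above, a random walk with von Mises-Fisher distributed step directions that traces out a bent fiber as a polyline; all parameter names and values (kappa, step, n_steps) are illustrative and not taken from the thesis.

```python
import numpy as np

def sample_vmf_3d(mu, kappa, rng):
    """One draw from the 3d von Mises-Fisher distribution with mean direction mu (unit vector)."""
    u, phi = rng.random(), 2.0 * np.pi * rng.random()
    # inverse-CDF sampling of the cosine w of the angle to mu (standard 3d formula)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # orthonormal basis of the plane orthogonal to mu
    a = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(mu, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    return w * mu + np.sqrt(max(0.0, 1.0 - w * w)) * (np.cos(phi) * e1 + np.sin(phi) * e2)

def bent_fiber(start, axis, kappa=100.0, step=1.0, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    d = np.asarray(axis, float); d /= np.linalg.norm(d)
    pts = [np.asarray(start, float)]
    for _ in range(n_steps):
        d = sample_vmf_3d(d, kappa, rng)          # larger kappa -> straighter fiber
        pts.append(pts[-1] + step * d)
    return np.array(pts)

print(bent_fiber((0, 0, 0), (1, 0, 0)).shape)      # (101, 3)
```

In the thesis the resulting fibers are then arranged by a force-biased packing step, which is not sketched here.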

Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)

Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics, computer vision, truss design, geometric modeling, and many others. Many polynomial systems have solution sets, called algebraic varieties, consisting of several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose such an algebraic variety into its irreducible components. Witness point sets are the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept, partially using Gröbner bases, triangular sets, local dimension, and the so-called zero sum relation. In the second chapter we present the corresponding algorithms and their implementations in SINGULAR. We give some examples and timings, which show that the modified algorithms are more efficient if the number of variables is not too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety based on the concept of the deflation of an algebraic variety. Building on the modified algorithm mentioned above, we present in the third chapter an algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate some numerical algebraic algorithms in the fourth chapter.
In the last chapter we present two SINGULAR libraries. The first library is used to compute the numerical irreducible decomposition and the embedded components of an algebraic variety. The second library contains the procedures of the aforementioned algorithms to test inclusion and equality of two algebraic varieties, to compute the degree of a pure i-dimensional component, and to compute the local dimension.

For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved thus far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials and their characterization in the interior and the exterior space of the Earth. We also present the L2-decomposition for regions in R3 in terms of distributions, as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth. Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation involving functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of the spherical Earth, this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression of the kernel; however, at this point it remains unknown to us. Thus, in Chapter 7, we are led to consider certain discretization methods for integrals over regions in R3, in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for the approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.

In this paper we develop monitoring schemes for detecting structural changes in nonlinear autoregressive models. We approximate the regression function by a single layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression in both offline and online settings, can be extended to this model. The proposed monitoring schemes reject the null hypothesis (asymptotically) only with a given probability but detect a large class of alternatives with probability one. In order to construct these sequential tests of given size, the limit distribution under the null hypothesis is obtained.
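A rough, hedged sketch of the monitoring idea: fit a single-hidden-layer network to the autoregression on a training stretch, then track cumulative sums of residuals on incoming observations. The network size, threshold and normalization below are illustrative stand-ins, not the boundary derived in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(1, 600):                       # toy nonlinear AR(1) training data
    x[t] = np.tanh(0.8 * x[t - 1]) + 0.1 * rng.standard_normal()

X_train, y_train = x[:-1, None], x[1:]
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_train, y_train)
sigma = np.std(y_train - net.predict(X_train))

def cusum_monitor(new_obs, prev_obs, threshold=3.0):
    """Return first index where the normalized cumulative residual sum exceeds the threshold."""
    resid = new_obs - net.predict(prev_obs[:, None])
    stats = np.abs(np.cumsum(resid)) / (sigma * np.sqrt(len(resid)))
    hits = np.nonzero(stats > threshold)[0]
    return int(hits[0]) if hits.size else None

# monitoring stream with a structural change halfway through
y = np.zeros(200); y[0] = x[-1]
for t in range(1, 200):
    coef = 0.8 if t < 100 else -0.8           # change of the autoregression at t = 100
    y[t] = np.tanh(coef * y[t - 1]) + 0.1 * rng.standard_normal()
print(cusum_monitor(y[1:], y[:-1]))
```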

The present dissertation contains the theoretical studies performed on the topic of high energy deposition in matter. The work focuses on electronic excitation and relaxation processes on ultrafast timescales. Energy deposition by means of intense ultrashort (femtosecond) laser pulses or by means of swift heavy ion irradiation shows certain similarities: the final observable material modifications result from a number of processes on different timescales. First, the electronic excitation by photoabsorption or by ion impact takes place on subfemtosecond timescales. Then these excited electrons propagate and redistribute their energy, interacting among themselves and exciting secondary generations of electrons. This typically takes place on femtosecond timescales. On the order of tens to hundreds of femtoseconds the excited electrons are usually thermalized. The energy exchange with the lattice atoms lasts up to tens of picoseconds. The lattice temperature can reach the melting point; then the material cools down and recrystallizes, forming the final modified nanostructures, which are observed experimentally. The processes at each step form the initial conditions for the following step. Thus, to describe the final phase transition and formation of nanostructures, one has to start from the very beginning and follow through all the steps.
The present work focuses on the early stages of the energy dissipation after its deposition, taking place in the electronic subsystems of excited materials. Different models applicable to different excitation mechanisms will be presented: in the thesis I start from the description of high energy excitation (electron energies of \(\sim\) keV), then focus on excitations to intermediate electron energies (\(\sim\) 100 eV), and finally come down to electron excitations of a few eV (visible light). The results will be compared with experimental observations.
For the high energy material excitation, assumed to be caused by irradiation with swift heavy ions, the classical Asymptotical Trajectory Monte-Carlo (ATMC) method is applied to describe the excitation of electrons by the impact of the projectile, the initial kinetics of electrons, secondary electron creation and the Auger-redistribution of holes. I first simulate the early stage (first tens of fs) of the kinetics of the electronic subsystem (in a silica target, SiO\(_2\)) in tracks of ions decelerated in the electronic stopping regime. It will be shown that a well pronounced front of excitation in the electronic and ionic subsystems is formed due to the propagation of electrons, which cannot be described by models based on diffusion mechanisms (e.g. parabolic equations of heat diffusion). On later timescales, the thermalization time of electrons can be estimated as the time when the particle and energy propagation turns from ballistic to diffusive. As soon as the electrons are thermalized, one can apply the Two Temperature Model. It will be demonstrated how to combine the MC output with the two temperature model. The results of this combination demonstrate that secondary ionizations play a very important role in the track formation process, leading to energy stored in the hole subsystem. This energy storage causes a significant delay of heating and prolongs the timescales of lattice modifications up to tens of picoseconds.
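For orientation, the Two Temperature Model mentioned above couples the electron and lattice temperatures \(T_e\) and \(T_l\); in its standard textbook form (details such as temperature-dependent coefficients vary between works) it reads

\[
C_e\,\frac{\partial T_e}{\partial t} = \nabla\cdot\left(k_e\,\nabla T_e\right) - G\,(T_e - T_l) + S(\mathbf{r},t), \qquad
C_l\,\frac{\partial T_l}{\partial t} = \nabla\cdot\left(k_l\,\nabla T_l\right) + G\,(T_e - T_l),
\]

where \(C_e, C_l\) are the heat capacities, \(k_e, k_l\) the conductivities, \(G\) the electron-phonon coupling, and \(S\) the source term, here supplied by the Monte-Carlo stage.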
For intermediate excitation energies (XUV-VUV laser pulse excitation of materials) I applied the Monte-Carlo simulation, modified where necessary and extended in order to take into account the electronic band structure and Pauli's principle for electrons within the conduction band. I apply the new method to semiconductors and to metals, using solid silicon and aluminum as examples, respectively.
It will be demonstrated that for the case of semiconductors the final kinetic energy of free electrons is much less than the total energy provided by the laser pulse, due to the energy spent to overcome ionization potentials. It was found that the final total number of electrons excited by a single photon is significantly less than \(\hbar \omega / E_{gap}\). The concept of an 'effective energy gap' is introduced for collective electronic excitation, which can be applied to estimate the free electron density after high-intensity VUV laser pulse irradiation.
For metals, the experimentally observed spectra of photons emitted from irradiated aluminum can be explained well with our results. At the characteristic time of photon emission due to radiative decay of an \(L\)-shell hole (\(t < 60\) fs), the distribution function of the electrons is not yet fully thermalized. This distribution consists of two main branches: a low energy part in the form of a distorted Fermi-distribution, and a long high energy tail. Therefore, the experimentally observed spectra exhibit two different branches: the \(L\)-shell radiation emission reflects the low energy distribution, while the Bremsstrahlung spectrum reflects the high energy (nonthermalized) tail. The comparison with experiments demonstrated a good agreement of the calculated spectra with the experimentally observed ones.
For the irradiation of semiconductors with low energy photons (visible light), a statistical model named the "extended multiple rate equation" is proposed. Based on the earlier developed multiple rate equation, the model additionally includes the interaction of electrons with the phononic subsystem of the lattice and allows for the direct determination of the conditions for crystal damage. Our model effectively describes the dynamics of the electronic subsystem, the dynamical changes in the optical properties, and the lattice heating, and the results are in very good agreement with experimental measurements of the transient reflectivity and the fluence damage threshold of silicon irradiated with a femtosecond laser pulse.

The primary objective of this work is the development of robust, accurate and efficient simulation methods for the optimal control of mechanical systems, in particular of constrained mechanical systems as they appear in the context of multibody dynamics. The focus is on the development of new numerical methods that meet the demand of structure preservation, i.e. the approximate numerical solution inherits certain characteristic properties from the real dynamical process.
This task includes three main challenges. First of all, a kinematic description of multibody systems is required that treats rigid bodies and spatially discretised elastic structures in a uniform way and takes their interconnection by joints into account. This kinematic description must not be subject to singularities when the system performs large nonlinear dynamics. Here, a holonomically constrained formulation that completely circumvents the use of rotational parameters has proved to perform very well. The arising constrained equations of motion are suitable for an easy temporal discretisation in a structure preserving way. In the temporal discrete setting, the equations can be reduced to minimal dimension by elimination of the constraint forces. Structure preserving integration is the second important ingredient. Computational methods that are designed to inherit system specific characteristics – like consistency in energy, momentum maps or symplecticity – often show superior numerical performance regarding stability and accuracy compared to standard methods. In addition to that, they provide a more meaningful picture of the behaviour of the systems they approximate. The third step is to take the previously addressed points into the context of optimal control, where differential equation and inequality constrained optimisation problems with boundary values arise. To obtain meaningful results from optimal control simulations, wherein energy expenditure or the control effort of a motion are often part of the optimisation goal, it is crucial to approximate the underlying dynamics in a structure preserving way, i.e. in a way that does not numerically, thus artificially, dissipate energy and in which momentum maps change only and exactly according to the applied loads.
The excellent numerical performance of the newly developed simulation method for optimal control problems is demonstrated by various examples dealing with robotic systems and a biomotion problem. Furthermore, the method is extended to uncertain systems where the goal is to minimise a probability of failure upper bound and to problems with contacts arising for example in bipedal walking.
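As a toy illustration of the structure-preserving integration idea (not the constrained optimal-control method of the thesis), the sketch below applies the Störmer-Verlet scheme, one of the simplest variational integrators, to a pendulum and checks that the energy error stays bounded.

```python
# Hedged illustration: the Stoermer-Verlet scheme is the discrete Euler-Lagrange
# update of a trapezoidal discrete Lagrangian, hence a basic variational integrator.
import numpy as np

def verlet(q0, q1, h, n_steps, dVdq, m=1.0):
    """Discrete Euler-Lagrange update: m (q_{k+1} - 2 q_k + q_{k-1}) / h^2 = -V'(q_k)."""
    qs = [q0, q1]
    for _ in range(n_steps - 1):
        qs.append(2.0 * qs[-1] - qs[-2] - (h * h / m) * dVdq(qs[-1]))
    return np.array(qs)

g, L, h = 9.81, 1.0, 0.01
dVdq = lambda q: (g / L) * np.sin(q)                 # pendulum potential gradient
q = verlet(0.5, 0.5, h, 5000, dVdq)                  # start at rest at angle 0.5 rad

v = (q[2:] - q[:-2]) / (2 * h)                       # central-difference velocity
energy = 0.5 * v**2 + (g / L) * (1 - np.cos(q[1:-1]))
print(energy.max() - energy.min())                   # energy error stays bounded
```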

A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an additional transformation to the random vector modeling the noise. This transformation is then incorporated into the standard denoising approach and leads to a more sophisticated data fidelity term, which forces the removed noise components to have the desired statistical properties. The good properties of our new approach are demonstrated for additive Gaussian noise by numerical examples. Our method proves to be especially well suited for data containing high frequency structures, where other denoising methods which assume a certain smoothness of the signal cannot restore the small structures.
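For orientation, the standard maximum a posteriori formulation that is modified here can be written, assuming additive Gaussian noise of variance \(\sigma^2\) and a prior proportional to \(e^{-\lambda R(u)}\), as

\[
\hat{u} \;=\; \operatorname*{arg\,max}_{u}\; p(u \mid f) \;=\; \operatorname*{arg\,min}_{u}\; \frac{1}{2\sigma^{2}}\,\|u - f\|_{2}^{2} \;+\; \lambda\, R(u),
\]

where \(f\) is the noisy observation and the first term is the data fidelity; it is this fidelity term that the proposed transformation refines so that the removed noise \(f - \hat{u}\) actually follows the assumed noise statistics.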

In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Provided that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions to given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.

In recent years, convex optimization methods were successfully applied to various image processing tasks, and a large number of first-order methods were designed to minimize the corresponding functionals. Interestingly, it was shown recently by Grewenig et al. that the simple idea of so-called “superstep cycles” leads to very efficient schemes for time-dependent (parabolic) image enhancement problems as well as for steady state (elliptic) image compression tasks. The “superstep cycles” approach is similar to the nonstationary (cyclic) Richardson method which has been around for over sixty years.
In this paper, we investigate the incorporation of superstep cycles into the gradient descent reprojection method. We show for two problems in compressive sensing and image processing, namely the LASSO approach and the Rudin-Osher-Fatemi model, that the resulting simple cyclic gradient descent reprojection algorithm compares numerically with various state-of-the-art first-order algorithms. However, due to the nonlinear projection within the algorithm, convergence proofs appear to be hard even under restrictive assumptions on the linear operators. We demonstrate the difficulties by studying the simplest case of a two-cycle algorithm in R^2 with projections onto the Euclidean ball.
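A minimal sketch of gradient descent reprojection with a cyclic step-size schedule, here for a least-squares problem constrained to the Euclidean ball; the two alternating step sizes are illustrative placeholders, not the actual superstep values of Grewenig et al.

```python
# min (1/2)||Ax - b||^2  subject to  ||x||_2 <= 1, solved by
# gradient step + reprojection with a two-cycle step-size schedule.
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
lip = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

x = np.zeros(10)
steps = [0.5 / lip, 1.9 / lip]                      # short/long two-cycle
for k in range(200):
    grad = A.T @ (A @ x - b)
    x = project_ball(x - steps[k % 2] * grad)       # gradient step, then reprojection
print(np.linalg.norm(A @ x - b))
```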

This thesis is concerned with the modeling of the domain structure evolution in ferroelectric materials. Both a sharp interface model, in which the driving force on a domain wall is used to postulate an evolution law, and a continuum phase field model are treated in a thermodynamically consistent framework. Within the phase field model, a Ginzburg-Landau type evolution law for the spontaneous polarization is derived. Numerical simulations (FEM) show the influence of various kinds of defects on the domain wall mobility in comparison with experimental findings. A macroscopic material law derived from the phase field model is used to calculate polarization yield surfaces for multiaxial loading conditions.

In this paper, we discuss the problem of testing for a changepoint in the structure of an integer-valued time series. In particular, we consider a test statistic of cumulative sum (CUSUM) type for general Poisson autoregressions of order 1. We investigate the asymptotic behaviour of conditional least-squares estimates of the parameters in the presence of a changepoint. Then, we derive the asymptotic distribution of the test statistic under the hypothesis of no change, allowing for the calculation of critical values. We prove consistency of the test, i.e. asymptotic power 1, and consistency of the corresponding changepoint estimate. As an application, we have a look at changepoint detection in daily epileptic seizure counts from a clinical study.
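As a point of reference, a Poisson autoregression of order 1 specifies the conditional distribution of the counts \(Y_t\) through an intensity recursion; a common special case is the linear INGARCH-type model

\[
Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(\lambda_t), \qquad \lambda_t = d + a\,\lambda_{t-1} + b\,Y_{t-1},
\]

with parameters \(d > 0\) and \(a, b \ge 0\); the paper treats general, not necessarily linear, intensity functions of this form.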

The shortest path problem in which the \((s,t)\)-paths \(P\) of a given digraph \(G =(V,E)\) are compared with respect to the sum of their edge costs is one of the best known problems in combinatorial optimization. The paper is concerned with a number of variations of this problem having different objective functions like bottleneck, balanced, minimum deviation, algebraic sum, \(k\)-sum and \(k\)-max objectives, \((k_1, k_2)\)-max, \((k_1, k_2)\)-balanced and several types of trimmed-mean objectives. We give a survey on existing algorithms and propose a general model for those problems not yet treated in literature. The latter is based on the solution of resource constrained shortest path problems with equality constraints which can be solved in pseudo-polynomial time if the given graph is acyclic and the number of resources is fixed. In our setting, however, these problems can be solved in strongly polynomial time. Combining this with known results on \(k\)-sum and \(k\)-max optimization for general combinatorial problems, we obtain strongly polynomial algorithms for a variety of path problems on acyclic and general digraphs.
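To illustrate the pseudo-polynomial building block mentioned above, the sketch below solves an equality-constrained resource constrained shortest path problem on a small acyclic digraph by dynamic programming over (node, consumed resource) states; the graph data and resource budget are made-up toy values.

```python
import math

# edges: u -> list of (v, cost, resource); nodes 0..4 are assumed topologically ordered
edges = {0: [(1, 2.0, 1), (2, 1.0, 2)],
         1: [(3, 2.0, 2), (2, 1.0, 1)],
         2: [(3, 3.0, 1), (4, 6.0, 3)],
         3: [(4, 1.0, 2)],
         4: []}
n, R = 5, 5                                          # number of nodes, resource budget
cost = [[math.inf] * (R + 1) for _ in range(n)]
cost[0][0] = 0.0
for u in range(n):                                   # sweep nodes in topological order
    for r in range(R + 1):
        if cost[u][r] == math.inf:
            continue
        for v, c, w in edges[u]:
            if r + w <= R:
                cost[v][r + w] = min(cost[v][r + w], cost[u][r] + c)
print(cost[4])  # cheapest (0,4)-path for each exact resource consumption 0..R
```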

Mrázek et al. [14] proposed a unified approach to curve estimation which combines localization and regularization. In this thesis we use their approach to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the regularized local least squares (RLLS) estimate with correlated errors (more precisely, with stationary time series errors), and based on this approach we then discuss the case where the kernel function is the Dirac function and compare our smoother with the spline smoother. Finally, we present a simulation study.

Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data specific to such exploration projects is very complex and requires deep knowledge of geology, geophysics, petrology, etc. from interpreters, as well as advanced tools that are able to recover particular properties. Here, existing wavelet techniques have had huge success in signal processing, data compression, noise reduction, etc. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to the solutions of the Helmholtz equation. As an application we are interested in seismic data analysis. A similar idea, to construct wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of limit and jump relations for surfaces used in seismic data processing, i.e., non-smooth surfaces in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by certain appropriate substitutes for non-smooth surfaces. By using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. An exceptional point is that the construction of the wavelets allows an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary.
In order to demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration, which is based on a finite-difference implementation. In fact, with regard to the filtering and denoising requirements of such migration algorithms, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically enables a pointwise discussion of the velocity field. Moreover, the multiscale analysis allows us to reveal additional geological information from optical features.

This report gives an insight into the basics of stress field simulations for geothermal reservoirs. The quasistatic equations of poroelasticity are deduced from constitutive equations, balance of mass and balance of momentum. Existence and uniqueness of a weak solution is shown. In order to find an approximate solution numerically, the so-called method of fundamental solutions is a promising approach. The idea of this method as well as a sketch of how convergence may be proven are given.

This thesis treats the extension of the classical computational homogenization scheme towards the multi-scale computation of material quantities like the Eshelby stresses and material forces. To this end, microscopic body forces are considered in the scale-transition, which may emerge due to inhomogeneities in the material. Regarding the determination of material quantities based on the underlying microscopic structure different approaches are compared by means of their virtual work consistency. In analogy to the homogenization of spatial quantities, this consistency is discussed within Hill-Mandel type conditions.

For many years, real-time task models have focused the timing constraints on execution windows defined by earliest start times and deadlines for feasibility. However, the utility of some applications may vary among scenarios which yield correct behavior, and maximizing this utility improves the resource utilization. For example, target sensitive applications have a target point where execution results in maximized utility, and an execution window for feasibility. Execution around this point and within the execution window is allowed, albeit at lower utility. The intensity of the utility decay accounts for the importance of the application. Examples of such applications include multimedia and control; multimedia applications are very popular nowadays and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides easy abstractions to express the timing constraints of target sensitive RT applications: the gravitational task model. This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications. We consider jobs as objects in a pendulum system, and the target points as the central point. Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets. Analogies with well-known systems are helpful to fill the gap between application requirements and the theoretical abstractions used in task models. For instance, the so-called nature algorithms use key elements of physical processes to form the basis of an optimization algorithm; examples include the knapsack problem, the traveling salesman problem, ant colony optimization, and simulated annealing.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity. The scheduling of target sensitive RT applications must account for timing constraints and the trade-off among tasks with conflicting targets. Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs and compute the deviation of jobs from their target points for increased system utility. The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs and dominates the complexity of the problem --- finding the optimum solution is NP-hard. We show the efficacy of our approach through simulation results and 3 target sensitive RT applications enhanced with the gravitational task model.

This thesis is devoted to constructive module theory of polynomial graded commutative algebras over a field. It treats the theory of Groebner bases (GB), standard bases (SB) and syzygies as well as algorithms and their implementations. Graded commutative algebras naturally unify exterior and commutative polynomial algebras. They are graded non-commutative, associative unital algebras over fields and may contain zero-divisors. In this thesis we try to make the most use of _a priori_ knowledge about their characteristic (super-commutative) structure in developing direct symbolic methods, algorithms and implementations, which are intrinsic to graded commutative algebras and practically efficient.
For our symbolic treatment we represent them as polynomial algebras and redefine the product rule in order to allow super-commutative structures and, in particular, to allow zero-divisors. Using this representation we give a nice characterization of a GB and an algorithm for its computation. We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be _local_, generalizing the Mora algorithm (in a similar fashion as G. M. Greuel and G. Pfister by allowing local or mixed monomial orderings) and working with SBs. In this general setting we prove a generalized Buchberger's criterion, which shows that syzygies of leading terms play the most important role in SB and syzygy module computations. Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm, which we can formulate particularly close to our implementation.
On the implementation side we have further developed the Singular non-commutative subsystem Plural in order to allow polynomial arithmetic and more involved non-commutative basic Computer Algebra computations (e.g. S-polynomial, GB) to be easily implementable for specific algebras. At the moment, graded commutative algebra-related algorithms are implemented in this framework. Benchmarks show that our new algorithms and implementation are practically efficient.
The developed framework has many applications in various branches of mathematics and theoretical physics. They include the computation of sheaf cohomology, coordinate-free verification of affine geometry theorems and the computation of cohomology rings of p-groups, which are partially described in this thesis.

We discuss some first steps towards experimental design for neural network regression which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on one hand, and the common misspecification of the models on the other hand.
Regarding the first problem, we restrict our consideration to neural networks with only one and two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs, and present a numerical study using the concept of maximin optimal designs.
In respect of the second problem, we have a look at the effects of misspecification on optimal experimental designs.

In this paper we continue the investigation of the asymptotic behavior of the parallel volume in Minkowski spaces as the distance tends to infinity that was started in [13]. We will show that the difference of the parallel volume of the convex hull of a body and the parallel volume of the body itself can at most have order \(r^{d-2}\) in dimension \(d\). Then we will show that in the Euclidean case this difference can at most have order \(r^{d-3}\). We will also examine the asymptotic behavior of the derivative of this difference as the distance tends to infinity. After this we will compute the derivative of \(f_\mu (rK)\) in \(r\), where \(f_\mu\) is the Wills functional or a similar functional, \(K\) is a fixed body and \(rK\) is the Minkowski-product of \(r\) and \(K\). Finally we will use these results to examine Brownian paths and Boolean models and derive new proofs for formulae for intrinsic volumes.
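For context (a well-known fact, not a result of the paper): for a convex body \(K \subset \mathbb{R}^d\) the parallel volume is a polynomial in the distance \(r\), given by the Steiner formula

\[
V\bigl(K + r B^d\bigr) \;=\; \sum_{j=0}^{d} r^{\,d-j}\, \kappa_{d-j}\, V_j(K),
\]

where \(B^d\) is the unit ball, \(\kappa_m\) the volume of the \(m\)-dimensional unit ball and \(V_j\) the intrinsic volumes. For non-convex bodies this polynomial behaviour fails, and the estimates above quantify how the parallel volume of the body deviates from that of its convex hull.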

The recognition of patterns and structures has gained importance for dealing with the growing amount of data being generated by sensors and simulations. Most existing methods for pattern recognition are tailored for scalar data and non-correlated data of higher dimensions. The recognition of general patterns in flow structures is possible, but not yet practically usable, due to the high computation effort. The main goal of this work is to present methods for comparative visualization of flow data, amongst others, based on a new method for efficient pattern recognition on flow data. This work is structured in three parts: At first, a known feature-based approach for pattern recognition on flow data, the Clifford convolution, has been applied to color edge detection, and been extended to non-uniform grids. However, this method is still computationally expensive for a general pattern recognition, since the recognition algorithm has to be applied for numerous different scales and orientations of the query pattern. A more efficient and accurate method for pattern recognition on flow data is presented in the second part. It is based upon a novel mathematical formulation of moment invariants for flow data. The common moment invariants for pattern recognition are not applicable on flow data, since they are only invariant on non-correlated data. Because of the spatial correlation of flow data, the moment invariants had to be redefined with different basis functions to satisfy the demands for an invariant mapping of flow data. The computation of the moment invariants is done by a multi-scale convolution of the complete flow field with the basis functions. This pre-processing computation time almost equals the time for the pattern recognition of one single general pattern with the former algorithms. However, after having computed the moments once, they can be indexed and used as a look-up-table to recognize any desired pattern quickly and interactively. This results in a flexible and easy-to-use tool for the analysis of patterns in 2d flow data. For an improved rendering of the recognized features, an importance driven streamline algorithm has been developed. The density of the streamlines can be adjusted by using importance maps. The result of a pattern recognition can be used as such a map, for example. Finally, new comparative flow visualization approaches utilizing the streamline approach, the flow pattern matching, and the moment invariants are presented.

Intersection Theory on Tropical Toric Varieties and Compactifications of Tropical Parameter Spaces
(2011)

We study toric varieties over the tropical semifield. We define tropical cycles inside these toric varieties and extend the stable intersection of tropical cycles in R^n to these toric varieties. In particular, we show that every tropical cycle can be degenerated into a sum of torus-invariant cycles. This allows us to tropicalize algebraic cycles of toric varieties over an algebraically closed field with non-Archimedean valuation. We see that the tropicalization map is a homomorphism on cycles and an isomorphism on cycle classes. Furthermore, we can use projective toric varieties to compactify known tropical varieties and study their combinatorics. We do this for the tropical Grassmannian in the Plücker embedding and compactify the tropical parameter space of rational degree d curves in tropical projective space using Chow quotients of the tropical Grassmannian.

This work presents the dynamic capillary pressure model (Hassanizadeh, Gray, 1990, 1993a) adapted for the needs of paper manufacturing process simulations. The dynamic capillary pressure-saturation relation is included in a one-dimensional simulation model for the pressing section of a paper machine. The one-dimensional model is derived from a two-dimensional model by averaging with respect to the vertical direction. Then, the model is discretized by the finite volume method and solved by Newton’s method. The numerical experiments are carried out for parameters typical for the paper layer. The dynamic capillary pressure-saturation relation shows significant influence on the distribution of water pressure. The behaviour of the solution agrees with laboratory experiments (Beck, 1983).
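For reference, the dynamic capillary pressure relation of Hassanizadeh and Gray augments the equilibrium capillary pressure-saturation curve with a rate term; in its commonly used form it reads

\[
p_n - p_w \;=\; p_c^{\mathrm{eq}}(S_w) \;-\; \tau\, \frac{\partial S_w}{\partial t},
\]

where \(p_n, p_w\) are the non-wetting and wetting phase pressures, \(S_w\) the wetting phase saturation and \(\tau \ge 0\) a dynamic (damping) coefficient; the exact coefficients used in the simulation model above may differ.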

In this paper we study the possibilities of sharing profit in combinatorial procurement auctions and exchanges. Bundles of heterogeneous items are offered by the sellers, and the buyers can then place bundle bids on sets of these items. That way, both sellers and buyers can express synergies between items and avoid the well-known risk of exposure (see, e.g., [3]). The reassignment of items to participants is known as the Winner Determination Problem (WDP). We propose solving the WDP by using a Set Covering formulation, because profits are potentially higher than with the usual Set Partitioning formulation, and subsidies are unnecessary. The achieved benefit is then to be distributed amongst the participants of the auction, a process which is known as profit sharing. The literature on profit sharing provides various desirable criteria. We focus on three main properties we would like to guarantee: Budget balance, meaning that no more money is distributed than profit was generated, individual rationality, which guarantees to each player that participation does not lead to a loss, and the core property, which provides every subcoalition with enough money to keep them from separating. We characterize all profit sharing schemes that satisfy these three conditions by a monetary flow network and state necessary conditions on the solution of the WDP for the existence of such a profit sharing. Finally, we establish a connection to the famous VCG payment scheme [2, 8, 19], and the Shapley Value [17].
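In the usual cooperative-game notation, with \(v(S)\) the profit attainable by a coalition \(S \subseteq N\) of participants and \(\phi_i\) the share paid to participant \(i\), the three criteria named above read

\[
\sum_{i \in N} \phi_i \le v(N) \;\; \text{(budget balance)}, \qquad
\phi_i \ge v(\{i\}) \;\; \text{(individual rationality)}, \qquad
\sum_{i \in S} \phi_i \ge v(S) \;\; \forall\, S \subseteq N \;\; \text{(core property)};
\]

this is only the standard textbook formulation, and the paper's precise definitions are tailored to the auction setting.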

In this work two main approaches for the evaluation of credit derivatives are analyzed: the copula based approach and the Markov Chain based approach. This work makes it possible to use the advantages and avoid the disadvantages of both approaches. For example, modeling of contagion effects, i.e. modeling dependencies between counterparty defaults, is complicated under the copula approach. One remedy is to use a Markov Chain, where it can be done directly. The work consists of five chapters. The first chapter of this work extends the model for the pricing of CDS contracts presented in the paper by Kraft and Steffensen (2007). In the widely used models for CDS pricing it is assumed that only the borrower can default. In our model we assume that each of the counterparties involved in the contract may default. Calculated contract prices are compared with those calculated under the usual assumptions. All results are summarized in the form of numerical examples and plots. In the second chapter the copula and its main properties are described. Methods of constructing copulas as well as the most common copula families and their properties are introduced. In the third chapter the method of constructing a copula for an existing Markov Chain is introduced. The cases with two and three counterparties are considered. Necessary relations between the transition intensities are derived to directly find some copula functions. The formulae for default dependencies like Spearman's rho and Kendall's tau for the defined copulas are derived. Several numerical examples are presented in which the copulas are built for given Markov Chains. The fourth chapter deals with the approximation of copulas if a copula cannot be provided explicitly for a given Markov Chain. The fifth chapter concludes this thesis.
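For reference, the two dependence measures mentioned above are determined by the copula \(C\) alone; in the standard bivariate formulation,

\[
\tau \;=\; 4 \int_0^1\!\!\int_0^1 C(u,v)\, \mathrm{d}C(u,v) \;-\; 1, \qquad
\rho_S \;=\; 12 \int_0^1\!\!\int_0^1 C(u,v)\, \mathrm{d}u\, \mathrm{d}v \;-\; 3,
\]

so once a copula has been constructed for a given Markov Chain, Kendall's tau and Spearman's rho follow directly from these integrals.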

We introduce a refined tree method to compute option prices in the stochastic volatility model of Heston. In a first step, we model the stock and variance processes as two separate trees with transition probabilities obtained by matching tree moments up to order two against those of the Heston model. The correlation between the driving Brownian motions in the Heston model is then incorporated by a node-wise adjustment of the probabilities. This adjustment, leaving the marginals fixed, optimizes the match between tree and model correlation. In some nodes we are even able to match moments of higher order. Numerically this gives convergence orders faster than 1/N, where N is the number of discretization steps. The accuracy of our method is checked for European option prices against a semi closed-form solution, and our prices for both European and American options are compared to alternative approaches.
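
The following is a minimal sketch of second-order moment matching for a single binomial step of the variance (CIR) component of the Heston model. The Euler approximations of the conditional mean and variance and the symmetric node placement are assumptions made for illustration; they are not the tree construction of the paper.

```python
import numpy as np

def binomial_step_moment_match(v, kappa, theta, sigma, dt):
    """One binomial step for the Heston variance (CIR) process,
    matching conditional mean and variance (Euler approximations)."""
    m = v + kappa * (theta - v) * dt        # approximate conditional mean
    s2 = sigma ** 2 * max(v, 0.0) * dt      # approximate conditional variance
    d = m - v
    delta = np.sqrt(s2 + d ** 2)            # node spacing around v
    p_up = 0.5 + d / (2.0 * delta)          # up-probability
    v_up, v_down = v + delta, v - delta
    return (v_up, p_up), (v_down, 1.0 - p_up)

if __name__ == "__main__":
    (vu, pu), (vd, pd) = binomial_step_moment_match(
        v=0.04, kappa=2.0, theta=0.04, sigma=0.3, dt=1.0 / 50)
    mean = pu * vu + pd * vd
    var = pu * (vu - mean) ** 2 + pd * (vd - mean) ** 2
    print(mean, var)  # reproduces the Euler mean and variance by construction
```

With node spacing delta = sqrt(s2 + d^2) and p_up = 1/2 + d/(2 delta), both the first and second moments of the two-point step coincide with the prescribed ones.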

In this paper we deal with different statistical models of real-world accident data in order to quantify the effectiveness of a safety function or a safety configuration (meaning a specific combination of safety functions) in vehicles. It is shown that the effectiveness can be estimated by means of the so-called relative risk, even if the effectiveness does depend on a confounding variable which may be categorical or continuous. For doing so, a concrete statistical model is not necessary, that is, the resulting estimate is of a nonparametric nature. In a second step, the quite common and, from a statistical point of view, classical logistic regression model is investigated. The main emphasis is laid on the understanding of the model and the interpretation of the occurring parameters. It is shown that the effectiveness of the safety function can also be detected via such a logistic approach and that relevant confounding variables can and should be taken into account. The interpretation of the parameters related to the confounder and the quantification of the influence of the confounder are shown to be rather problematic. All the theoretical results are illuminated by numerical data examples.
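
To make the relative-risk notion concrete, the sketch below computes a crude relative risk from accident counts together with stratum-specific estimates for a categorical confounder. The counts, strata, and variable names are illustrative assumptions, not data from the paper.

```python
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Crude relative risk: event rate in the exposed group (with safety
    function) divided by the event rate in the control group (without)."""
    return (exposed_events / exposed_total) / (control_events / control_total)

if __name__ == "__main__":
    # Hypothetical counts, split by a categorical confounder (road type).
    strata = {
        "urban": dict(exposed_events=30, exposed_total=1000,
                      control_events=60, control_total=1000),
        "rural": dict(exposed_events=10, exposed_total=500,
                      control_events=25, control_total=500),
    }
    for name, counts in strata.items():
        print(name, relative_risk(**counts))
    # Pooled estimate that ignores the confounder:
    print("pooled", relative_risk(40, 1500, 85, 1500))
```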

This paper presents a new similarity measure and nonlocal filters for images corrupted by multiplicative noise. The considered filters are generalizations of the nonlocal means filter of Buades et al., which is known to be well suited for removing additive Gaussian noise. To adapt to different noise models, the patch comparison involved in this filter first of all has to be performed by a suitable noise-dependent similarity measure. For this purpose, we start by studying a probabilistic measure recently proposed for general noise models by Deledalle et al. We analyze this measure in the context of conditional density functions and examine its properties for images corrupted by additive and multiplicative noise. Since it turns out to have unfavorable properties for multiplicative noise, we deduce a new similarity measure consisting of a probability density function specially chosen for this type of noise. The properties of our new measure are studied theoretically as well as by numerical experiments. To obtain the final nonlocal filters we apply a weighted maximum likelihood estimation framework, which also incorporates the noise statistics. Moreover, we define the weights occurring in these filters using our new similarity measure and propose different adaptations to further improve the results. Finally, restoration results for images corrupted by multiplicative Gamma and Rayleigh noise are presented to demonstrate the very good performance of our nonlocal filters.
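
For context, the sketch below implements the basic nonlocal means filter that the paper generalizes: weights are computed from a squared-difference patch distance, which is exactly the component one would replace by a noise-dependent similarity measure for multiplicative noise. Patch size, search window, and filtering parameter are illustrative.

```python
import numpy as np

def nonlocal_means(img, patch=3, search=7, h=0.4):
    """Basic nonlocal means with a Gaussian-noise patch distance
    (the similarity measure is the part to swap for multiplicative noise)."""
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = nonlocal_means(noisy)
    print("noisy MSE:   ", float(np.mean((noisy - clean) ** 2)))
    print("denoised MSE:", float(np.mean((denoised - clean) ** 2)))
```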

We consider an autoregressive process with a nonlinear regression function that is modeled by a feedforward neural network. We derive a uniform central limit theorem which is useful in the context of change-point analysis. We propose a test for a change in the autoregression function which - by the uniform central limit theorem - has asymptotic power one for a large class of alternatives including local alternatives.

In this paper we develop testing procedures for the detection of structural changes in nonlinear autoregressive processes. For the detection procedure we model the regression function by a single layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, which have been intensively studied for linear regression, can be extended to this case. The limit distribution under the null hypothesis is obtained, which is needed to construct asymptotic tests. For a large class of alternatives it is shown that the tests have asymptotic power one. In this case, we obtain a consistent change-point estimator which is related to the test statistics. Power and size are further investigated in a small simulation study with a particular emphasis on situations where the model is misspecified, i.e. the data is not generated by a neural network but by some other regression function. As an illustration, applications to the Nile data set as well as to S&P log-returns are given.
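
A minimal sketch of a residual-based CUSUM statistic of the kind discussed above is given below; for brevity the residuals come from an ordinary least-squares AR(1) fit rather than from a neural-network regression, and the rejection threshold (from the limit distribution) is not reproduced.

```python
import numpy as np

def cusum_statistic(x):
    """CUSUM-type change-point statistic based on estimated residuals of an
    AR(1) fit (an illustrative stand-in for the neural-network regression)."""
    x0, x1 = x[:-1], x[1:]
    phi = np.dot(x0, x1) / np.dot(x0, x0)        # least-squares AR(1) coefficient
    resid = x1 - phi * x0
    resid = resid - resid.mean()
    n = resid.size
    sigma = resid.std(ddof=1)
    partial_sums = np.cumsum(resid)
    stat = np.max(np.abs(partial_sums)) / (sigma * np.sqrt(n))
    khat = int(np.argmax(np.abs(partial_sums))) + 1  # change-point estimate
    return stat, khat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 400
    x = np.zeros(n)
    for t in range(1, n):
        phi = 0.3 if t < n // 2 else 0.8          # structural change in the middle
        x[t] = phi * x[t - 1] + rng.standard_normal()
    print(cusum_statistic(x))
```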

This report gives an overview of the separate translation of synchronous imperative programs to synchronous guarded actions. In particular, we consider problems for separate compilation that stem from preemption statements and local variable declarations. We explain how we solved these problems and sketch the solutions implemented in our Averest framework, which yield a compiler that allows separate compilation of imperative synchronous programs with local variables and unrestricted preemption statements. The focus of the report is the big picture of our entire design flow.

Multi-Field Visualization
(2011)

Modern science utilizes advanced measurement and simulation techniques to analyze phenomena from fields such as medicine, physics, or mechanics. The data produced by the application of these techniques takes the form of multi-dimensional functions or fields, which have to be processed in order to provide meaningful parts of the data to domain experts. The definition and implementation of such processing techniques, with the goal of producing visual representations of portions of the data, are topics of research in scientific visualization, or multi-field visualization in the case of multiple fields. In this thesis, we contribute novel feature extraction and visualization techniques that are able to convey data from multiple fields created by scientific simulations or measurements. Furthermore, our scalar-, vector-, and tensor-field processing techniques contribute to scattered field processing in general and introduce novel ways of analyzing and processing tensorial quantities such as strain and displacement in flow fields, providing insights into field topology. We introduce novel mesh-free extraction techniques for the visualization of complex-valued scalar fields in acoustics that aid in understanding wave topology in low frequency sound simulations. The resulting structures represent regions with locally minimal sound amplitude and convey wave node evolution and sound cancellation in time-varying sound pressure fields, which is considered an important feature in acoustics design. Furthermore, methods for flow field feature extraction are presented that facilitate the analysis of velocity and strain field properties by visualizing the deformation of infinitesimal Lagrangian particles and the macroscopic deformation of surfaces and volumes in flow. The resulting adaptive manifolds are used to perform flow field segmentation, which supports multi-field visualization by selective visualization of scalar flow quantities. The effects of continuum displacement in scattered moment tensor fields can be studied by a novel method for multi-field visualization presented in this thesis. The visualization method demonstrates the benefit of clustering and separate views for the visualization of multiple fields.

This thesis has the goal of proposing measures which allow an increase of the power efficiency of OFDM transmission systems. As compared to OFDM transmission over AWGN channels, OFDM transmission over frequency selective radio channels requires a significantly larger transmit power in order to achieve a certain transmission quality. It is well known that this detrimental impact of frequency selectivity can be combated by frequency diversity. We revisit and further investigate an approach to frequency diversity based on the spreading of subsets of the data elements over corresponding subsets of the OFDM subcarriers and term this approach Partial Data Spreading (PDS). The size of these subsets, which we designate as the spreading factor, is a design parameter of PDS, and by properly choosing it, depending on the system designer's requirements, an adequate compromise between good system performance and low complexity can be found. We show how PDS can be combined with ML, MMSE and ZF data detection, and it turns out that MMSE data detection offers a good compromise between performance and complexity. After presenting the utilization of PDS in OFDM transmission without FEC encoding, we also show that PDS readily lends itself to FEC encoded OFDM transmission. We show that in this case the system performance can be significantly enhanced by specific schemes of interleaving and utilization of reliability information developed in the thesis. A severe problem of OFDM transmission is the large Peak-to-Average Power Ratio (PAPR) of the OFDM symbols, which hampers the application of power efficient transmit amplifiers. Our investigations reveal that PDS inherently reduces the PAPR. Another approach to PAPR reduction is the well-known scheme Selective Data Mapping (SDM). In the thesis it is shown that PDS can be beneficially combined with SDM into the scheme PDS-SDM with a view to jointly exploiting the PAPR reduction potentials of both schemes. However, even when such a PAPR reduction is achieved, the amplitude maximum of the resulting OFDM symbols is not constant, but depends on the data content. This entails the disadvantage that the power amplifier cannot be designed for a fixed amplitude maximum, which would be desirable with a view to achieving a high power efficiency. In order to overcome this problem, we propose the scheme Optimum Clipping (OC), in which we obtain the desired fixed amplitude maximum by a specific combination of clipping, filtering and rescaling. In OFDM transmission a certain number of OFDM subcarriers have to be sacrificed for pilot transmission in order to enable channel estimation in the receiver. For a given energy of the OFDM symbols, the question arises in which way this energy should be subdivided among the pilot and the data carrying OFDM subcarriers. If a large portion of the available transmit energy goes to the pilots, then the quality of channel estimation is good, but the data detection performs poorly. Data detection also performs poorly if the energy provided for the pilots is too small, because then the channel estimate indispensable for data detection is not accurate enough. We present a scheme for assigning the energy to pilot and data OFDM subcarriers in an optimum way which minimizes the symbol error probability as the ultimate quality measure of the transmission. The major part of the thesis is dedicated to point-to-point OFDM transmission systems. Towards the end of the thesis we show that PDS can also be applied to multipoint-to-point OFDM transmission systems encountered, for instance, in the uplinks of mobile radio systems.
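
To illustrate the PAPR quantity that the spreading and clipping measures target, the following sketch computes the PAPR of OFDM symbols with and without a simple block-wise DFT spreading of the data. The spreading matrix, block size, and QPSK mapping are illustrative assumptions and not the exact PDS construction of the thesis.

```python
import numpy as np

def papr_db(time_signal):
    """Peak-to-average power ratio of one OFDM symbol in dB."""
    power = np.abs(time_signal) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

def ofdm_symbol(data, spread_block=None):
    """Map QPSK data to subcarriers, optionally spreading blocks of
    'spread_block' data symbols with a unitary DFT matrix, then apply the IFFT."""
    d = data.copy()
    if spread_block:
        L = spread_block
        F = np.fft.fft(np.eye(L)) / np.sqrt(L)   # unitary DFT spreading matrix
        d = (d.reshape(-1, L) @ F).reshape(-1)
    return np.fft.ifft(d) * np.sqrt(d.size)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sub, n_sym = 256, 1000
    bits = rng.integers(0, 2, size=(n_sym, n_sub, 2))
    qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)
    plain = [papr_db(ofdm_symbol(s)) for s in qpsk]
    spread = [papr_db(ofdm_symbol(s, spread_block=16)) for s in qpsk]
    print("mean PAPR without spreading: %.2f dB" % np.mean(plain))
    print("mean PAPR with spreading:    %.2f dB" % np.mean(spread))
```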

A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)

This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and the Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns one encounters a multidimensional population balance equation that cannot be fully resolved numerically within a time reasonable for steady state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena and is still tractable from a computational point of view. This thesis discusses a new model which focuses on the discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method, a special case of the Sectional Quadrature Method of Moments (SQMOM), is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution, namely the total number, solute and volume concentrations, and reduces the computational time when compared to classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method, the Finite Pointset Method (FPM), is used to discretize the spatial domain (the extraction column height). This method avoids the extremely difficult discretization of the convective term with classical finite volume methods, which require a lot of grid points to capture the moving fronts propagating along the column height.
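
As a pointer to the moment bookkeeping mentioned above, the sketch below evaluates low-order moments of a droplet size distribution from quadrature nodes and weights, i.e., the integral quantities (total number, total volume) that a sectional quadrature method is designed to conserve. The node and weight values are illustrative, not taken from the thesis.

```python
import numpy as np

def moments(weights, diameters, orders=(0, 3)):
    """Low-order moments of a droplet population represented by quadrature
    nodes (diameters) and weights (number densities): m_k = sum_i w_i d_i^k.
    m_0 is proportional to the total droplet number, m_3 to the total volume."""
    weights = np.asarray(weights, dtype=float)
    diameters = np.asarray(diameters, dtype=float)
    return {k: float(np.sum(weights * diameters ** k)) for k in orders}

if __name__ == "__main__":
    # Illustrative two-node representation of a droplet size distribution (mm).
    w = [1.2e4, 0.8e4]
    d = [0.5, 1.5]
    print(moments(w, d))
```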

The present PhD thesis is mainly focused on the synthesis, characterization and catalytic application of functionalized triphenylphosphine (TPP) ligands and their complexes. We developed a simple and effective strategy to immobilize TPP: A methylester group attached to one of the phenyl rings of TPP allows the derivatization of the ligand with 3-trimethoxysilylpropylamine, a typical silane coupling agent used for the covalent immobilization of organic compounds on silica surfaces. The resulting functionalized TPP was further coordinated to Pd, Rh and Ru precursors to obtain homogeneous complexes which can be tethered on silica by the post-synthetic grafting method and the co-condensation method. The obtained heterogeneous catalysts exhibited excellent activity, selectivity and reusability in Suzuki, hydrogenation and transfer hydrogenation reactions. In order to investigate the stability of the catalysts, different types of characterization such as TEM and solid state NMR of the used catalysts as well as AAS of the filtrate and leaching tests were carried out. The results prove the practicability and efficiency of our method. This strategy was further modified to generate an anionic side chain linked to the TPP core by simply replacing the trimethoxysilylpropylamine group by sodium (3-amino-1-propanesulfonate), which allows the immobilization on imidazolium-modified SBA-15 through electrostatic interaction. The obtained material was further reacted with PdCl2(CNPh)2 and the resulting hybrid material was used for the hydrogenation of olefins, allowing mild reaction conditions. The catalyst shows excellent activity, selectivity and stability, and it can furthermore be reused at least ten times without any loss of activity. TEM images of the used catalyst clearly show the absence of palladium nanoparticles, proving the high stability of the palladium compound. By AAS no palladium could be detected in the products, and further leaching tests verified the reaction to be truly heterogeneous. This concept of non-covalent immobilization guarantees a tight bonding of the catalytically active species to the surface in combination with a high mobility, which should be favorable for other catalytic applications as well.

This report describes the calibration and completion of the volatility cube in the SABR model. The description is based on a project done for Assenagon GmbH in Munich. However, we use fictitious market data which resembles realistic market data. The problem posed by our client is formulated in section 1. Here we also motivate why this is a relevant problem. The SABR model is briefly reviewed in section 2. Section 3 discusses the calibration and completion of the volatility cube. An example is presented in section 4. We conclude by suggesting possible future research in section 5.

In this article, a new model predictive control approach for nonlinear stochastic systems is presented. The new approach is based on particle filters, which are usually used for estimating states or parameters. Here, two particle filters are combined: the first one gives an estimate of the current state based on the current output of the system; the second one gives an estimate of a control input for the system. This is basically done by adopting the basic model predictive control strategies for the second particle filter. Later in this paper, the new approach is applied to a CSTR (continuous stirred-tank reactor) example and to the inverted pendulum.
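
A minimal bootstrap particle filter for state estimation, i.e., the role of the first of the two filters described above, is sketched below. The scalar state-space model, noise levels, and particle count are illustrative assumptions; the control-input filter of the paper is not reproduced.

```python
import numpy as np

def particle_filter_step(particles, y_obs, rng,
                         f=lambda x: 0.9 * x, q=0.1, r=0.2):
    """One bootstrap particle filter step for the model
    x_t = f(x_{t-1}) + N(0, q^2),  y_t = x_t + N(0, r^2)."""
    # Predict: propagate particles through the state equation.
    particles = f(particles) + q * rng.standard_normal(particles.size)
    # Weight: likelihood of the observation under each particle.
    weights = np.exp(-0.5 * ((y_obs - particles) / r) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) and return the state estimate.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    particles = particles[idx]
    return particles, float(particles.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.standard_normal(500)
    x_true = 1.0
    for _ in range(20):
        x_true = 0.9 * x_true + 0.1 * rng.standard_normal()
        y = x_true + 0.2 * rng.standard_normal()
        particles, estimate = particle_filter_step(particles, y, rng)
    print(x_true, estimate)
```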

In this paper the multi-terminal q-FlowLoc problem (q-MT-FlowLoc) is introduced. FlowLoc problems combine two well-known modeling tools: (dynamic) network flows and locational analysis. Since the q-MT-FlowLoc problem is NP-hard, we give a mixed-integer programming formulation and propose a heuristic which obtains a feasible solution by calculating a maximum flow in a special graph H. If this flow is also a minimum cost flow, various versions of the heuristic can be obtained by the use of different cost functions. The quality of these solutions is compared.
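
The maximum-flow computation at the core of the heuristic can be prototyped with a standard graph library; the small digraph below is illustrative and is not the special graph H constructed in the paper.

```python
import networkx as nx

# Maximum flow on a small illustrative digraph (not the graph H of the paper).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=4)
G.add_edge("s", "b", capacity=3)
G.add_edge("a", "t", capacity=2)
G.add_edge("a", "b", capacity=2)
G.add_edge("b", "t", capacity=5)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)   # 7
print(flow_dict)
```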

In a dynamic network, the quickest path problem asks for a path minimizing the time needed to send a given amount of flow from source to sink along this path. In practical settings, for example in evacuation or transportation planning, the reliability of network arcs depends on the specific scenario of interest. In this circumstance, the question of finding a quickest path among all those having at least a desired path reliability arises. In this article, this reliable quickest path problem is solved by transforming it to the restricted quickest path problem. In the latter, each arc is associated with a nonnegative cost value and the goal is to find a quickest path among those not exceeding a predefined budget with respect to the overall (additive) cost. For both the restricted and the reliable quickest path problem, pseudopolynomial exact algorithms and fully polynomial-time approximation schemes are proposed.
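
A minimal sketch of the budget-indexed dynamic program that underlies pseudopolynomial algorithms for restricted (budget-constrained) shortest path problems is given below; it minimizes path time under an integer cost budget on a small illustrative graph and does not include the flow-dependent quickest path objective or the FPTAS of the paper.

```python
import math

def restricted_shortest_path(n, arcs, source, target, budget):
    """Pseudopolynomial DP for the restricted shortest path problem.
    arcs: list of (u, v, time, cost) with nonnegative integer costs.
    Returns the minimal total time of a source-target path whose total
    cost does not exceed the budget (math.inf if none exists)."""
    INF = math.inf
    # dist[b][v]: minimal time to reach v with total cost at most b.
    dist = [[INF] * n for _ in range(budget + 1)]
    for b in range(budget + 1):
        dist[b][source] = 0.0
    for b in range(budget + 1):
        # Bellman-Ford style relaxation within each budget level.
        for _ in range(n - 1):
            for (u, v, t, c) in arcs:
                if c <= b and dist[b - c][u] + t < dist[b][v]:
                    dist[b][v] = dist[b - c][u] + t
    return min(dist[b][target] for b in range(budget + 1))

if __name__ == "__main__":
    arcs = [(0, 1, 1.0, 3), (0, 2, 4.0, 1), (1, 3, 1.0, 3), (2, 3, 4.0, 1)]
    print(restricted_shortest_path(4, arcs, source=0, target=3, budget=6))  # 2.0
    print(restricted_shortest_path(4, arcs, source=0, target=3, budget=4))  # 8.0
```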

Due to remarkable technological advances in the last three decades the capacity of computer systems has improved tremendously. Following Moore's law, the number of transistors on integrated circuits has doubled approximately every two years and the trend is continuing. Likewise, developments in storage density, network bandwidth, and compute capacity show similar patterns. As a consequence, the amount of data that can be processed by today's systems has increased by orders of magnitude. At the same time, however, the resolution of screens has increased by hardly a factor of ten. Thus, there is a gap between the amount of data that can be processed and the amount of data that can be visualized. Large high-resolution displays offer a way to deal with this gap and provide a significantly increased screen area by combining the images of multiple smaller display devices. The main objective of this dissertation is the development of new visualization and interaction techniques for large high-resolution displays.

We consider a variant of a knapsack problem with a fixed cardinality constraint. There are three objective functions to be optimized: one real-valued and two integer-valued objectives. We show that this problem can be solved efficiently by a local search. The algorithm utilizes connectedness of a subset of feasible solutions and has optimal run-time.
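
For illustration, the sketch below runs a swap-neighbourhood local search over subsets of exactly k items, optimizing a single objective under a weight capacity. The single-objective setting, the instance data, and the first-improvement rule are assumptions for illustration; the three-objective problem, the connectedness result, and the optimal run-time analysis of the paper are not reproduced.

```python
import itertools

def local_search_knapsack(values, weights, capacity, k):
    """Swap-neighbourhood local search over subsets of exactly k items:
    maximize total value subject to a weight capacity (illustrative only)."""
    n = len(values)
    current = None
    # Start from any feasible k-subset.
    for cand in itertools.combinations(range(n), k):
        if sum(weights[i] for i in cand) <= capacity:
            current = set(cand)
            break
    if current is None:
        return None
    improved = True
    while improved:
        improved = False
        for i in list(current):
            for j in range(n):
                if j in current:
                    continue
                cand = (current - {i}) | {j}
                if (sum(weights[x] for x in cand) <= capacity and
                        sum(values[x] for x in cand) > sum(values[x] for x in current)):
                    current, improved = cand, True
                    break
            if improved:
                break
    return sorted(current), sum(values[x] for x in current)

if __name__ == "__main__":
    values = [6, 5, 8, 9, 6, 7, 3]
    weights = [2, 3, 6, 7, 5, 9, 4]
    print(local_search_knapsack(values, weights, capacity=15, k=3))
```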

We provide a space domain oriented separation of magnetic fields into parts generated by sources in the exterior and sources in the interior of a given sphere. The separation itself is well-known in geomagnetic modeling, usually in terms of a spherical harmonic analysis or a wavelet analysis that is spherical harmonic based. However, it can also be regarded as a modification of the Helmholtz decomposition for which we derive integral representations with explicitly known convolution kernels. Regularizing these singular kernels allows a multiscale representation of the magnetic field with locally supported wavelets. This representation is applied to a set of CHAMP data for crustal field modeling.
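
For reference, the classical spherical-harmonic (Gauss) form of such an interior/exterior separation writes the field as the gradient of a potential whose coefficients split into internal and external parts. The formula below states this standard representation for orientation only and is not taken from the paper, which works with a space-domain (integral kernel) formulation instead.

```latex
% Gauss separation of the geomagnetic potential into parts due to sources
% inside (coefficients g^{int}) and outside (g^{ext}) the sphere of radius a:
\[
  V(r,\theta,\varphi)
  \;=\; a \sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell}
  \left[
    \left(\frac{a}{r}\right)^{\ell+1} g_{\ell m}^{\mathrm{int}}
    \;+\;
    \left(\frac{r}{a}\right)^{\ell} g_{\ell m}^{\mathrm{ext}}
  \right] Y_{\ell m}(\theta,\varphi),
  \qquad \mathbf{B} = -\nabla V .
\]
```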

A main result of this thesis is a conceptual proof of the fact that the weighted number of tropical curves of given degree and genus, which pass through the right number of general points in the plane (resp., which pass through general points in R^r and represent a given point in the moduli space of genus g curves) is independent of the choices of points. Another main result is a new correspondence theorem between plane tropical cycles and plane elliptic algebraic curves.

Annual Report 2010
(2011)

Annual Report, Jahrbuch AG Magnetismus