## Fachbereich Mathematik

### Refine

#### Year of publication

- 2011 (14)

#### Document Type

- Doctoral Thesis (14)

#### Keywords

- Chow Quotient (1)
- Copula (1)
- Credit Default Swap (1)
- Finite Pointset Method (1)
- Local smoothing (1)
- Markov Chain (1)
- Markov Kette (German: “Markov chain”) (1)
- Mathematik (German: “mathematics”) (1)
- Momentum and Mass Transfer (1)
- Multi Primary and One Second Particle Method (1)

- Automatic Segmentation and Clustering of Spectral Terahertz Data (2011)
- The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral ranges, this is impossible: depending on the spectroscopic technique, the way the data is acquired differs, as do the characteristics that are to be detected. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, among which are particularly the security sector and the pharmaceutical industry. Due to the fact that in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization, such as feature reduction and classification. In all these methods, the necessary influence of the user should be minimized on the one hand, and the adaptation to the specific application should be maximized on the other. Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods will be analyzed and supplemented where necessary; improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, a feature reduction that represents the chemical content in about 20 channels instead of the initial hundreds will be presented. Finally, the data will be segmented by efficient hierarchical clustering schemes, and various application examples will be shown. Further work should include a final classification of the detected segments; it is not discussed here, as it strongly depends on the specific application.
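
The final step of the pipeline above is hierarchical clustering of the reduced spectra. As a minimal illustration (a pure-Python single-linkage agglomeration on toy 3-channel “spectra”; the thesis's efficient clustering schemes and THz preprocessing are not reproduced here):

```python
# Illustrative sketch only: single-linkage agglomerative clustering of
# synthetic "spectra", merging the two closest clusters until k remain.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(spectra, k):
    # Start with every spectrum in its own cluster, then repeatedly
    # merge the pair of clusters whose closest members are nearest.
    clusters = [[i] for i in range(len(spectra))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(spectra[p], spectra[q])
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Two well-separated groups of toy 3-channel spectra.
spectra = [[0.0, 0.1, 0.0], [0.1, 0.0, 0.1], [5.0, 5.1, 5.0], [5.1, 5.0, 4.9]]
print(sorted(sorted(c) for c in single_linkage(spectra, 2)))  # → [[0, 1], [2, 3]]
```

Efficient implementations avoid the quadratic rescan of all cluster pairs; this brute-force version only shows the merging logic.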

- 3D Morphological Analysis and Modeling of Random Fiber Networks (2011)
- The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions. These core parts are then reconnected over critical regions if they fulfill certain conditions ensuring affiliation to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness, either in achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. First, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Second, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide the estimation of all parameters needed for fitting this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms, it contributes to the science of analysis and modeling of fiber-reinforced materials.
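
The random-walk step of the model can be sketched in miniature. The following is a simplified 2D analogue, not the thesis's model: segment directions are drawn from a (univariate) von Mises distribution centred on the previous direction, so the concentration parameter kappa controls the bending; the multivariate von Mises-Fisher distribution in 3D and the force-biased packing step are omitted, and all parameter values are illustrative.

```python
# Simplified 2D fiber random walk: each new segment direction is a von
# Mises perturbation of the previous one (kappa -> infinity: straight fiber).
import math
import random

def random_fiber(n_segments, kappa, step=1.0, seed=0):
    rng = random.Random(seed)
    theta = rng.uniform(0.0, 2.0 * math.pi)        # initial direction
    points = [(0.0, 0.0)]
    for _ in range(n_segments):
        theta = rng.vonmisesvariate(theta, kappa)  # perturbed direction
        x, y = points[-1]
        points.append((x + step * math.cos(theta), y + step * math.sin(theta)))
    return points

stiff = random_fiber(50, kappa=200.0)   # nearly straight fiber
wiggly = random_fiber(50, kappa=2.0)    # strongly bent fiber
print(len(stiff), len(wiggly))          # → 51 51 (start point plus 50 segments)
```

With large kappa the end-to-end distance typically approaches the contour length; with small kappa the fiber curls up.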

- Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR (2011)
- Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics, computer vision, truss design, geometric modeling, and many others. Many polynomial systems have solution sets, called algebraic varieties, consisting of several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose such an algebraic variety into its irreducible components. Witness point sets are the natural numerical data structure to encode irreducible algebraic varieties. Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI. We modify this concept, using in part Gröbner bases, triangular sets, local dimension, and the so-called zero-sum relation. In the second chapter we present the corresponding algorithms and their implementations in SINGULAR. We give some examples and timings, which show that the modified algorithms are more efficient if the number of variables is not too large; for a large number of variables BERTINI is more efficient. Leykin presented an algorithm to compute the embedded components of an algebraic variety, based on the concept of the deflation of an algebraic variety. Building on the modified algorithm mentioned above, we present in the third chapter an algorithm, and its implementation in SINGULAR, to compute the embedded components. The irreducible decomposition of algebraic varieties allows us to formulate in the fourth chapter some numerical algebraic algorithms. In the last chapter we present two SINGULAR libraries. The first library computes the numerical irreducible decomposition and the embedded components of an algebraic variety. The second library contains the procedures of the algorithms in the last chapter to test inclusion and equality of two algebraic varieties, to compute the degree of a pure i-dimensional component, and to compute the local dimension.
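
The idea of a witness point set can be shown in miniature: slicing a variety with a generic affine line produces finitely many points that encode a component. The toy sketch below cuts the circle \(V(x^2+y^2-1)\) with a random line and checks that both intersection points lie on the variety; the choice of variety and line is illustrative, and the actual computations in BERTINI and SINGULAR use homotopy continuation and symbolic methods rather than this closed-form quadratic.

```python
# Toy witness points: intersect the circle x^2 + y^2 = 1 with a generic
# line t -> p + t*d; the two intersection points are "witness points".
import math
import random

rng = random.Random(1)
p = (rng.uniform(-0.2, 0.2), rng.uniform(-0.2, 0.2))  # point on the line
d = (rng.uniform(0.5, 1.0), rng.uniform(0.5, 1.0))    # generic direction

# Substituting the line into x^2 + y^2 - 1 gives a*t^2 + b*t + c = 0.
a = d[0] ** 2 + d[1] ** 2
b = 2 * (p[0] * d[0] + p[1] * d[1])
c = p[0] ** 2 + p[1] ** 2 - 1          # negative: p lies inside the circle
disc = b * b - 4 * a * c               # hence disc > 0: two real points
ts = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
witness = [(p[0] + t * d[0], p[1] + t * d[1]) for t in ts]

for x, y in witness:
    print(round(x * x + y * y, 12))    # both points satisfy the equation
```

The number of witness points (here 2) equals the degree of the component, which is the key invariant these sets carry.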

- Real Earth Oriented Gravitational Potential Determination (2011)
- For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved thus far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials, and their characterization in the interior and the exterior space of the Earth. We also present the \(L^2\)-decomposition for regions in \(\mathbb{R}^3\) in terms of distributions, as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution to interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth. Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation, which involves functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of a spherical Earth, this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression for the kernel; however, at this point it remains unknown to us. Thus, in Chapter 7, we are led to consider certain discretization methods for integrals over regions in \(\mathbb{R}^3\), in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.
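
The general reproducing-kernel spline mechanism mentioned above (solve a linear system in the kernel matrix, then evaluate a kernel expansion) can be sketched with the Abel-Poisson kernel, the spherical special case named in the text; the sample points, data values, and the parameter h below are made up, and the thesis's actual kernel is a Newton integral over the Earth's interior rather than this closed-form expression.

```python
# Sketch of reproducing-kernel spline interpolation on the unit sphere
# with the Abel-Poisson kernel Q_h(x.y) = (1-h^2) / (4*pi*(1+h^2-2h x.y)^{3/2}).
import math

def abel_poisson(xi, eta, h=0.7):
    t = sum(a * b for a, b in zip(xi, eta))          # inner product xi . eta
    return (1 - h * h) / (4 * math.pi * (1 + h * h - 2 * h * t) ** 1.5)

def solve(A, y):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

pts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
y = [1.0, 2.0, 0.5]                                  # synthetic data values
K = [[abel_poisson(p, q) for q in pts] for p in pts] # kernel (Gram) matrix
coeff = solve(K, y)

def spline(x):                                       # s(x) = sum_i a_i Q_h(x, p_i)
    return sum(a * abel_poisson(x, p) for a, p in zip(coeff, pts))

print([round(spline(p), 6) for p in pts])            # reproduces the data
```

The interpolation property holds because the kernel matrix of a positive definite reproducing kernel is invertible; the thesis's convergence theorems concern this construction for far richer functional systems.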

- Algorithms for Symbolic Computation and their Applications - Standard Bases over Rings and Rank Tests in Statistics (2011)
- In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Given that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges. The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions of given partial orders. In order to apply these tests to actual data, we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
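
The combinatorial core of the rank tests, counting linear extensions of a partial order, can be shown by brute force on a tiny example (this enumeration is for illustration only; the thesis's algorithms are designed for instances far too large for it):

```python
# Count linear extensions of a small poset by enumerating permutations.
from itertools import permutations

def linear_extensions(n, relations):
    """Count orderings of 0..n-1 in which a precedes b for every (a, b)."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# The "N"-shaped poset on 4 elements with covers 0<2, 1<2, 1<3.
print(linear_extensions(4, [(0, 2), (1, 2), (1, 3)]))  # → 5
```

Counting linear extensions is #P-hard in general, which is precisely why dedicated algorithms are needed to apply such tests to real gene expression data.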

- Local Smoothing Methods with Regularization in Nonparametric Regression Models (2011)
- Mrázek et al. [14] proposed a unified approach to curve estimation which combines localization and regularization. In this thesis we use their approach to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the regularized local least squares (RLLS) estimate with correlated errors (more precisely, with stationary time series errors); based on this approach, we then discuss the case when the kernel function is the Dirac delta function and compare our smoother with the spline smoother. Finally, we present a simulation study.
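
A hedged sketch of the regularization idea (not the RLLS estimator itself): minimize a data-fidelity term plus a roughness penalty, \(\sum_i (m_i - y_i)^2 + \lambda \sum_i (m_{i+1} - m_i)^2\), here solved by coordinate descent; the data, \(\lambda\), and the discrete penalty are all illustrative.

```python
# Discrete penalized smoother: Gauss-Seidel sweeps on the normal
# equations of sum (m_i - y_i)^2 + lam * sum (m_{i+1} - m_i)^2.
import math
import random

def regularized_smooth(y, lam=5.0, sweeps=500):
    n = len(y)
    m = y[:]                              # start from the raw data
    for _ in range(sweeps):
        for i in range(n):
            num, den = y[i], 1.0
            for nb in (i - 1, i + 1):     # neighbours pull m_i toward them
                if 0 <= nb < n:
                    num += lam * m[nb]
                    den += lam
            m[i] = num / den              # exact coordinate-wise minimizer
    return m

def roughness(v):
    return sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))

random.seed(3)
y = [math.sin(i / 3.0) + random.gauss(0, 0.3) for i in range(30)]
m = regularized_smooth(y)
print(roughness(m) < roughness(y))        # → True: the fit is smoother
```

Each coordinate update minimizes the convex objective exactly in that coordinate, so the sweeps monotonically decrease the objective and the result is necessarily less rough than the noisy input.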

- A Tree Algorithm for Helmholtz Potential Wavelets on Non-Smooth Surfaces: Theoretical Background and Application to Seismic Data Postprocessing (2011)
- Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is permanently growing. The analysis of seismic data for such exploration projects is very complex and requires from interpreters deep knowledge in geology, geophysics, petrology, etc., as well as advanced tools that are able to recover particular properties. Wavelet techniques, in turn, have had huge success in signal processing, data compression, noise reduction, etc. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier. In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to solutions of the Helmholtz equation, with seismic data analysis as the application of interest. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group. The particular difficulty in such approaches is the formulation of limit and jump relations for surfaces used in seismic data processing, i.e., non-smooth surfaces, in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by appropriate substitutes for non-smooth surfaces. Using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. The exceptional point is that this construction of wavelets allows an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary. In order to demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the requirements of such migration algorithms in filtering and denoising, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically enables a pointwise discussion of the velocity field. Moreover, the multiscale analysis reveals additional geological information from optical features.
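
The denoising step can be illustrated with the simplest possible wavelet, a one-level Haar transform with soft thresholding of the detail coefficients, as a toy stand-in for the Helmholtz wavelet tree algorithm (the signal, noise, and threshold are invented for the example):

```python
# One-level Haar wavelet denoising of a 1D trace via soft thresholding.
import math

def haar_forward(x):
    s = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]  # averages
    d = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]  # details
    return s, d

def haar_inverse(s, d):
    x = []
    for a, b in zip(s, d):
        x += [(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)]
    return x

def soft(v, t):
    # Shrink each coefficient toward zero by t; small ones vanish entirely.
    return [math.copysign(max(abs(w) - t, 0.0), w) for w in v]

x = [0, 0, 0, 0, 4, 4, 4, 4]                      # piecewise-constant "signal"
noise = [0.3, -0.2, 0.1, -0.3, 0.2, -0.1, 0.3, -0.2]
noisy = [v + n for v, n in zip(x, noise)]
s, d = haar_forward(noisy)
den = haar_inverse(s, soft(d, 0.5))               # damp small detail coefficients
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, x))
err_den = sum((a - b) ** 2 for a, b in zip(den, x))
print(err_den < err_noisy)                        # → True
```

Because the small detail coefficients carry mostly noise while the jump in the signal survives in the coarse coefficients, thresholding reduces the error; the same localization argument underlies the attenuation of low-frequency artifacts in the migrated seismic image.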

- Graded commutative algebra and related structures in Singular with applications (2011)
- This thesis is devoted to the constructive module theory of polynomial graded commutative algebras over a field. It treats the theory of Gröbner bases (GB), standard bases (SB) and syzygies, as well as algorithms and their implementations. Graded commutative algebras naturally unify exterior and commutative polynomial algebras. They are graded, non-commutative, associative unital algebras over fields and may contain zero-divisors. In this thesis we try to make the most use of the _a priori_ knowledge of their characteristic (super-commutative) structure in developing direct symbolic methods, algorithms and implementations which are intrinsic to graded commutative algebras and practically efficient. For our symbolic treatment we represent them as polynomial algebras and redefine the product rule in order to allow super-commutative structures and, in particular, zero-divisors. Using this representation we give a nice characterization of a GB and an algorithm for its computation. We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be _local_, generalizing the Mora algorithm (in a similar fashion as G.-M. Greuel and G. Pfister did by allowing local or mixed monomial orderings) and working with SBs. In this general setting we prove a generalized Buchberger criterion, which shows that syzygies of leading terms play the most important role in SB and syzygy module computations. Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm, which we can formulate particularly close to our implementation. On the implementation side we have further developed the Singular non-commutative subsystem Plural in order to allow polynomial arithmetic and more involved basic non-commutative computer algebra computations (e.g. S-polynomials, GBs) to be easily implementable for specific algebras. At the moment, graded commutative algebra-related algorithms are implemented in this framework. Benchmarks show that our new algorithms and implementation are practically efficient. The developed framework has many applications in various branches of mathematics and theoretical physics, including the computation of sheaf cohomology, the coordinate-free verification of affine geometry theorems, and the computation of cohomology rings of p-groups, which are partially described in this thesis.
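
In the purely exterior case, the redefined product rule boils down to a sign rule: anticommuting generators pick up a factor of -1 per transposition, and squares vanish. A minimal Python model of this (monomials as tuples of generator indices; this illustrates the idea only and is unrelated to the Plural implementation):

```python
# Multiply two exterior-algebra monomials e_{i1} ^ ... ^ e_{ik}:
# repeated generators give zero, and sorting the product counts signs.
def wedge(m1, m2):
    """Return (sign, sorted monomial) for m1 ^ m2, or None if the product is zero."""
    if set(m1) & set(m2):
        return None                       # e_i ^ e_i = 0 (zero-divisor!)
    merged = list(m1) + list(m2)
    sign = 1
    for i in range(len(merged)):          # count inversions: each swap flips the sign
        for j in range(i + 1, len(merged)):
            if merged[i] > merged[j]:
                sign = -sign
    return sign, tuple(sorted(merged))

print(wedge((1,), (2,)))   # → (1, (1, 2))   e1 ^ e2
print(wedge((2,), (1,)))   # → (-1, (1, 2))  e2 ^ e1 = -e1 ^ e2
print(wedge((1,), (1,)))   # → None          e1 ^ e1 = 0
```

The zero products are exactly the zero-divisors that make naive commutative Gröbner basis theory inapplicable and motivate the generalized Buchberger criterion described above.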

- Mathematical Programming Approaches for Decoding of Binary Linear Codes (2011)
- In this thesis, we aim at finding appropriate integer programming models and associated solution approaches for the maximum likelihood decoding problem for several classes of binary linear codes.
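
For a binary symmetric channel, maximum likelihood decoding reduces to finding the codeword nearest to the received word in Hamming distance. The brute-force baseline below, on a [7,4] Hamming code, shows the problem that the integer programming models replace with something tractable (this enumeration grows as \(2^k\) and is hopeless for real code lengths):

```python
# Brute-force ML decoding of a [7,4] Hamming code: enumerate all 16
# codewords and return the one closest to the received word.
from itertools import product

G = [  # generator matrix (identity block plus parity columns)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u):
    return tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

codewords = [encode(u) for u in product((0, 1), repeat=4)]

def ml_decode(r):
    return min(codewords, key=lambda c: sum(a != b for a, b in zip(c, r)))

c = encode((1, 0, 1, 1))
r = list(c)
r[2] ^= 1                           # flip one bit of the transmitted word
print(ml_decode(tuple(r)) == c)     # → True: a single error is corrected
```

Since the code has minimum distance 3, any single-bit error leaves the transmitted codeword as the unique nearest one, so the decoder always recovers it.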

- Some Steps towards Experimental Design for Neural Network Regression (2011)
- We discuss some first steps towards experimental design for neural network regression, which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on the one hand, and the common misspecification of the models on the other. Regarding the first problem, we restrict our consideration to neural networks with only one or two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs and present a numerical study using the concept of maximin optimal designs. Regarding the second problem, we look at the effects of misspecification on optimal experimental designs.
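
The D-criterion behind locally D-optimal designs can be computed directly for a toy one-neuron model \( \eta(x) = \tanh(b + wx) \): the information matrix of a design is \( M = \sum_x f(x)f(x)^\top \) with \( f(x) = (1-\tanh^2(b+wx))\,(1, x)^\top \), and one maximizes \(\det M\) at a parameter guess. The parameter guess and candidate designs below are invented for illustration and are not the designs studied in the thesis.

```python
# D-criterion det(M) for a one-neuron model eta(x) = tanh(b + w*x),
# comparing a design in the sensitive region with one in saturation.
import math

def det_info(design, b=0.0, w=1.0):
    m00 = m01 = m11 = 0.0
    for x in design:
        g = 1.0 - math.tanh(b + w * x) ** 2   # derivative of tanh at b + w*x
        f0, f1 = g, g * x                     # gradient of eta w.r.t. (b, w)
        m00 += f0 * f0
        m01 += f0 * f1
        m11 += f1 * f1
    return m00 * m11 - m01 * m01              # 2x2 determinant

spread = [-1.0, 1.0]   # points where the neuron still responds to x
far = [4.0, 5.0]       # saturated region: tanh' is almost zero there
print(det_info(spread) > det_info(far))       # → True
```

This already shows the key phenomenon of local optimality: which design is good depends on the parameter guess through the nonlinearity, which is what motivates the maximin approach mentioned above.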