## Doctoral Thesis

### Refine

#### Year of publication

- 2011 (24)

#### Document Type

- Doctoral Thesis (24)

#### Language

- English (24)

#### Keywords

- Visualisierung (3)
- Chow Quotient (1)
- Codierung (1)
- Computervision (1)
- Copula (1)
- Credit Default Swap (1)
- DSM (1)
- Data Spreading (1)
- Datenspreizung (1)
- Distributed Rendering (1)


The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral ranges, this is not feasible: depending on the spectroscopic technique, both the way the data are acquired and the characteristics to be detected differ. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, most notably the security sector and the pharmaceutical industry.
Since in many applications the volume of spectra to be organized is high, manual data processing is impractical. In hyperspectral imaging especially, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the required user interaction should be minimized while the adaptation to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods will be analyzed and supplemented where necessary, and improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, a feature reduction that represents the chemical content in about 20 channels instead of the initial hundreds will be presented. Finally, the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
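
The segmentation step can be illustrated with a minimal sketch: an agglomerative (average-linkage) clustering of a few synthetic, already feature-reduced spectra. The data, distance measure, and linkage here are illustrative assumptions; the efficient hierarchical schemes and preprocessing of the thesis are not reproduced.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors (reduced spectra)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_linkage(c1, c2, spectra):
    """Average pairwise distance between two clusters of spectrum indices."""
    return sum(dist(spectra[i], spectra[j]) for i in c1 for j in c2) / (len(c1) * len(c2))

def hierarchical_cluster(spectra, k):
    """Agglomerative clustering: merge the closest pair until k clusters remain."""
    clusters = [[i] for i in range(len(spectra))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = average_linkage(clusters[a], clusters[b], spectra)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# Four toy "spectra" (already feature-reduced to 3 channels): two groups.
spectra = [(1.0, 0.1, 0.0), (1.1, 0.0, 0.1), (0.0, 1.0, 1.0), (0.1, 1.1, 0.9)]
print(sorted(map(sorted, hierarchical_cluster(spectra, 2))))   # → [[0, 1], [2, 3]]
```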

This research work focuses on the generation of a high resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns, causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in practice in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy, mainly because of the lack of a high resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite for more accurately describing and simulating flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density, large-volume, unstructured LiDAR data into fine resolution spatial databases is a difficult and time-consuming task when relying on human intervention alone. Robust algorithms for processing this massive volume of data can significantly reduce the processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques for processing, filtering and classifying LiDAR point data in order to achieve a higher degree of automation and accuracy in generating a high resolution urban surface model. It also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
While the accurate characterization of surface types contributes to better modelling of rainfall runoff processes, the LiDAR-derived fine resolution DSM serves as input to 2D hydraulic models capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aimed at improving the accuracy of the hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the hydraulic capacity of urban drainage systems; and 3) predict and visualize surface flooding scenarios in order to take the necessary flood protection measures.
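
As a toy illustration of LiDAR point filtering, the sketch below classifies points as ground or non-ground with a simple minimum-Z grid filter. The filter, cell size, tolerance, and data are illustrative assumptions, not the techniques developed in this work.

```python
from collections import defaultdict

def ground_filter(points, cell=1.0, tol=0.3):
    """Classify LiDAR points (x, y, z) as ground if they lie within `tol`
    of the lowest elevation in their grid cell (a simple minimum-Z filter)."""
    cell_min = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cell_min[key] = min(cell_min[key], z)
    return [(x, y, z, z - cell_min[(int(x // cell), int(y // cell))] <= tol)
            for x, y, z in points]

# Toy tile: the low points are terrain, the 15.2 m point is a building roof.
pts = [(0.2, 0.3, 10.0), (0.7, 0.6, 10.1), (0.5, 0.4, 15.2)]
for x, y, z, is_ground in ground_filter(pts):
    print(z, is_ground)   # 10.0 True / 10.1 True / 15.2 False
```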

The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first fitted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches: it can treat different fiber radii within one sample, achieves high precision in continuous space, and has comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring that they belong to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness: either in achieving high volume fractions, in producing non-overlapping fibers, or in controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. First, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Second, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators for all parameters needed to fit this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer confirms the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps needed to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
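
The first step of the model, a random walk with von Mises-Fisher distributed step directions, can be sketched as follows. This is a minimal 3-D sketch of a single bent fiber: the thesis's multivariate vMF orientation distribution and the force-biased packing step are not reproduced, and all parameter values are illustrative.

```python
import math, random

def sample_vmf_step(d, kappa, rng):
    """Draw a unit vector from a von Mises-Fisher distribution on the sphere
    with mean direction d and concentration kappa (3-D closed-form sampler)."""
    u = 1.0 - rng.random()   # u in (0, 1]
    w = 1.0 + math.log(u + (1.0 - u) * math.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * math.pi * rng.random()
    # Orthonormal frame (e1, e2, d) around the mean direction.
    a = (1.0, 0.0, 0.0) if abs(d[0]) < 0.9 else (0.0, 1.0, 0.0)
    proj = sum(ai * di for ai, di in zip(a, d))
    e1 = tuple(ai - proj * di for ai, di in zip(a, d))
    n = math.sqrt(sum(x * x for x in e1))
    e1 = tuple(x / n for x in e1)
    e2 = (d[1]*e1[2] - d[2]*e1[1], d[2]*e1[0] - d[0]*e1[2], d[0]*e1[1] - d[1]*e1[0])
    s = math.sqrt(max(0.0, 1.0 - w * w))
    return tuple(s * (math.cos(phi) * x + math.sin(phi) * y) + w * z
                 for x, y, z in zip(e1, e2, d))

def bent_fiber(start, direction, kappa, steps, step_len, seed=0):
    """Random-walk fiber: each step direction is vMF-distributed around the
    previous one; larger kappa gives straighter fibers."""
    rng = random.Random(seed)
    pts, d = [start], direction
    for _ in range(steps):
        d = sample_vmf_step(d, kappa, rng)
        pts.append(tuple(p + step_len * di for p, di in zip(pts[-1], d)))
    return pts

fiber = bent_fiber((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), kappa=50.0, steps=20, step_len=0.1)
```

With a high concentration such as `kappa=50`, the fiber stays close to its initial z-direction; lowering `kappa` produces strongly bent fibers.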

Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)

Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics, computer vision, truss design, geometric modeling, and many others. Many polynomial systems have solution sets, called algebraic varieties, consisting of several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose such an algebraic variety into its irreducible components. Witness point sets are the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\) called numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept, partially using Gröbner bases, triangular sets, local dimension, and the so-called zero sum relation. In the second chapter we present the corresponding algorithms and their implementations in SINGULAR. We give some examples and timings, which show that the modified algorithms are more efficient if the number of variables is not too large; for a large number of variables, BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety, based on the concept of the deflation of an algebraic variety. Building on the modified algorithm mentioned above, we present in the third chapter an algorithm, and its implementation in SINGULAR, to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute the numerical irreducible decomposition and the embedded components of an algebraic variety. The second library contains the procedures of the algorithms of the last chapter: to test inclusion and equality of two algebraic varieties, to compute the degree of a pure i-dimensional component, and to compute the local dimension.
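
As a toy illustration of witness point sets (not the SINGULAR or BERTINI implementations), one can slice a reducible hypersurface with a generic line and test membership of the resulting points in each irreducible component. The variety and the slicing line below are illustrative choices.

```python
import math

# Hypersurface f(x, y) = x^2 - y^2 = 0, i.e. the union of the two lines
# x - y = 0 and x + y = 0.  Slicing with the generic affine line
# (x, y) = (t, 2t + 1) reduces f = 0 to one quadratic in t:
# f(t, 2t + 1) = t^2 - (2t + 1)^2 = -3t^2 - 4t - 1.
a, b, c = -3.0, -4.0, -1.0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
witness = [(t, 2 * t + 1) for t in roots]

# Membership test: each witness point lies on exactly one irreducible component.
for x, y in witness:
    print((x, y), "on x-y=0:", abs(x - y) < 1e-9, "| on x+y=0:", abs(x + y) < 1e-9)
```

The two witness points, one per component, are exactly the kind of numerical data a witness point set stores for each irreducible component.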

For computational reasons, spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for determining the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved so far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials, and their characterization in the interior and exterior space of the Earth. We also present the \(L^2\)-decomposition for regions in \(\mathbb{R}^3\) in terms of distributions as the main strategy for imposing the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution to interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth.
Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation, which involves functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of a spherical Earth, this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression for the kernel; however, it remains unknown to us. Therefore, in Chapter 7 we consider certain discretization methods for integrals over regions in \(\mathbb{R}^3\), in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.
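
The general spline mechanism, interpolating data by a linear combination of kernel evaluations in a reproducing kernel Hilbert space, can be sketched as below. A Gaussian kernel is used as an illustrative stand-in only; the kernel of the thesis is a Newton integral over the Earth's interior and admits no closed expression.

```python
import math

def kernel(x, y, scale=1.0):
    """Stand-in reproducing kernel (Gaussian); purely illustrative."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / scale)

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small spline system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def spline(nodes, values):
    """Interpolant s(x) = sum_i a_i K(x, x_i) with s(x_j) = y_j for all j."""
    K = [[kernel(p, q) for q in nodes] for p in nodes]
    a = solve(K, values)
    return lambda x: sum(ai * kernel(x, p) for ai, p in zip(a, nodes))

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
values = [1.0, 2.0, 0.5]
s = spline(nodes, values)
print([round(s(p), 6) for p in nodes])   # reproduces the data: [1.0, 2.0, 0.5]
```

The interpolation conditions here are point evaluations; in the thesis they are general bounded linear functionals, which leads to the same kind of linear system.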

The present dissertation contains theoretical studies on the topic of high energy deposition in matter. The work focuses on electronic excitation and relaxation processes on ultrafast timescales. Energy deposition by means of intense ultrashort (femtosecond) laser pulses or by means of swift heavy ion irradiation shows certain similarities: the final observable material modifications result from a number of processes on different timescales. First, the electronic excitation by photoabsorption or by ion impact takes place on subfemtosecond timescales. Then the excited electrons propagate and redistribute their energy, interacting among themselves and exciting secondary generations of electrons; this typically takes place on femtosecond timescales. On the order of tens to hundreds of femtoseconds the excited electrons are usually thermalized. The energy exchange with the lattice atoms lasts up to tens of picoseconds. The lattice temperature can reach the melting point; then the material cools down and recrystallizes, forming the final modified nanostructures observed experimentally. The processes at each step form the initial conditions for the following step. Thus, to describe the final phase transition and the formation of nanostructures, one has to start from the very beginning and follow through all the steps.
The present work focuses on the early stages of energy dissipation after its deposition, taking place in the electronic subsystems of excited materials. Different models applicable to different excitation mechanisms will be presented: in the thesis I start with the description of high energy excitation (electron energies of \(\sim\) keV), then focus on excitations to intermediate electron energies (\(\sim\) 100 eV), and finally come down to electron excitations of a few eV (visible light). The results will be compared with experimental observations.
For the high energy material excitation, assumed to be caused by irradiation with swift heavy ions, the classical Asymptotical Trajectory Monte-Carlo (ATMC) method is applied to describe the excitation of electrons by the impact of the projectile, the initial kinetics of electrons, secondary electron creation, and the Auger redistribution of holes. I first simulate the early stage (first tens of fs) of the kinetics of the electronic subsystem (in a silica target, SiO\(_2\)) in tracks of ions decelerated in the electronic stopping regime. It will be shown that a well-pronounced excitation front in the electronic and ionic subsystems is formed due to the propagation of electrons, which cannot be described by models based on diffusion mechanisms (e.g. parabolic heat diffusion equations). On later timescales, the thermalization time of the electrons can be estimated as the time when the particle and energy propagation turns from ballistic to diffusive. As soon as the electrons are thermalized, one can apply the Two Temperature Model. It will be demonstrated how to combine the MC output with the two temperature model. The results of this combination demonstrate that secondary ionizations play a very important role in the track formation process, leading to energy stored in the hole subsystem. This energy storage causes a significant delay of heating and prolongs the timescales of lattice modifications up to tens of picoseconds.
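
Once the electrons are thermalized, the Two Temperature Model amounts to two coupled energy balance equations. The sketch below integrates them with explicit Euler; the heat capacities and the coupling constant are illustrative, not fitted to SiO\(_2\), and electron transport terms are omitted.

```python
def two_temperature(Te0, Tl0, Ce, Cl, G, dt, steps):
    """Explicit Euler integration of the two temperature model:
       Ce dTe/dt = -G (Te - Tl),   Cl dTl/dt = +G (Te - Tl).
    Valid only after the electrons have thermalized (e.g. using MC output
    as the initial condition)."""
    Te, Tl = Te0, Tl0
    history = [(Te, Tl)]
    for _ in range(steps):
        flux = G * (Te - Tl) * dt      # energy transferred in one time step
        Te -= flux / Ce
        Tl += flux / Cl
        history.append((Te, Tl))
    return history

# Hot electrons (2e4 K) coupling to a cold lattice (300 K); all parameter
# values are illustrative placeholders.
h = two_temperature(Te0=2.0e4, Tl0=300.0, Ce=1.0, Cl=10.0, G=0.5, dt=0.01, steps=2000)
Te, Tl = h[-1]
print(round(Te), round(Tl))   # both near the equilibrium (Ce*Te0 + Cl*Tl0)/(Ce + Cl)
```

The scheme conserves the total energy Ce·Te + Cl·Tl exactly, so both temperatures relax to the common weighted-average equilibrium.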
For intermediate excitation energies (XUV-VUV laser pulse excitation of materials) I applied the Monte-Carlo simulation, modified and extended where necessary to take into account the electronic band structure and Pauli's principle for electrons within the conduction band. I apply the new method to semiconductors and metals, using solid silicon and aluminum as respective examples.
It will be demonstrated that in the case of semiconductors the final kinetic energy of free electrons is much less than the total energy provided by the laser pulse, due to the energy spent overcoming ionization potentials. It was found that the final total number of electrons excited by a single photon is significantly less than \(\hbar \omega / E_{gap}\). The concept of an 'effective energy gap' is introduced for collective electronic excitation, which can be applied to estimate the free electron density after high-intensity VUV laser pulse irradiation.
For metals, the experimentally observed spectra of photons emitted from irradiated aluminum can be explained well with our results. At the characteristic time of photon emission due to the radiative decay of an \(L\)-shell hole (\(t < 60\) fs), the electron distribution function is not yet fully thermalized. This distribution consists of two main branches: a low energy part in the form of a distorted Fermi distribution, and a long high energy tail. Accordingly, the experimentally observed spectra show two different branches: the \(L\)-shell radiation emission reflects the low energy distribution, while the Bremsstrahlung spectrum reflects the high energy (nonthermalized) tail. The comparison with experiments demonstrated good agreement between the calculated spectra and the observed ones.
For the irradiation of semiconductors with low energy photons (visible light), a statistical model named the "extended multiple rate equation" is proposed. Based on the earlier developed multiple rate equation, the model additionally includes the interaction of electrons with the phononic subsystem of the lattice and allows for the direct determination of the conditions for crystal damage. Our model effectively describes the dynamics of the electronic subsystem, dynamical changes in the optical properties, and lattice heating, and the results are in very good agreement with experimental measurements of the transient reflectivity and the damage fluence threshold of silicon irradiated with a femtosecond laser pulse.

In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Provided that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. This class of rank tests may loosely be described as being based on counting the linear extensions of given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data generated at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
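
Counting linear extensions of a partial order, which underlies the rank tests described above, can be sketched with a simple recursion over minimal elements. This exponential-time toy is not one of the two algorithms developed in the thesis; the poset below is an illustrative example.

```python
from functools import lru_cache

def count_linear_extensions(elements, less_than):
    """Count linear extensions of a partial order: repeatedly pick any minimal
    element among the remaining ones and recurse (memoized; small posets only)."""
    elements = tuple(elements)

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for e in remaining:
            # e is minimal if no remaining element is below it.
            if not any((f, e) in less_than for f in remaining if f != e):
                total += count(tuple(x for x in remaining if x != e))
        return total

    return count(elements)

# Poset on {a, b, c, d} with a < c and b < c (d incomparable to the rest):
# c must come after both a and b, so 24 / 3 = 8 extensions.
relations = {("a", "c"), ("b", "c")}
print(count_linear_extensions("abcd", relations))   # → 8
```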

This thesis is concerned with the modeling of the domain structure evolution in ferroelectric materials. Both a sharp interface model, in which the driving force on a domain wall is used to postulate an evolution law, and a continuum phase field model are treated in a thermodynamically consistent framework. Within the phase field model, a Ginzburg-Landau type evolution law for the spontaneous polarization is derived. Numerical simulations (FEM) show the influence of various kinds of defects on the domain wall mobility in comparison with experimental findings. A macroscopic material law derived from the phase field model is used to calculate polarization yield surfaces for multiaxial loading conditions.
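
A Ginzburg-Landau type evolution law of the kind mentioned above can be illustrated in one dimension: the order parameter relaxes by gradient descent on a double-well energy plus a gradient penalty, forming a smooth domain wall between two domains. All coefficients are illustrative, and the coupling of the polarization to electromechanical fields in the full ferroelectric model is omitted here.

```python
def relax_polarization(p, dt=0.01, mobility=1.0, grad_coef=1.0, steps=2000):
    """Explicit time integration of a 1-D TDGL evolution law
       dp/dt = -M dF/dp,  F = sum( (p^2 - 1)^2 / 4 + (kappa/2) (dp/dx)^2 ),
    with the boundary values held fixed (pinned domains)."""
    p = list(p)
    for _ in range(steps):
        new = p[:]
        for i in range(1, len(p) - 1):
            laplace = p[i - 1] - 2 * p[i] + p[i + 1]
            bulk = p[i] * (p[i] ** 2 - 1.0)   # derivative of the double well
            new[i] = p[i] + dt * mobility * (grad_coef * laplace - bulk)
        p = new
    return p

# Opposite domains at the two ends: relaxation forms a smooth 180-degree wall.
n = 41
p0 = [-1.0] * (n // 2) + [0.0] + [1.0] * (n // 2)
p = relax_polarization(p0)
print(round(p[0], 2), round(p[n // 2], 2), round(p[-1], 2))   # → -1.0 0.0 1.0
```

The stationary profile approximates the classical tanh-shaped domain wall; a defect could be modeled by locally perturbing the well or the mobility.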

Mrázek et al. [14] proposed a unified approach to curve estimation which combines localization and regularization. In this thesis we use their approach to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the regularized local least squares (RLLS) estimate with correlated errors (more precisely, with stationary time series errors); based on this approach, we then discuss the case when the kernel function is the Dirac delta function and compare our smoother with the spline smoother. Finally, we present a simulation study.
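
The flavor of combining localization and regularization can be sketched as follows: at each grid point the estimate balances a kernel-weighted least squares fit of the data against a quadratic roughness penalty, solved by Gauss-Seidel sweeps. This is only an illustration in that spirit, not the functional of Mrázek et al. or the RLLS estimate studied in the thesis; all parameter values are illustrative.

```python
import math

def regularized_local_smooth(x, y, bandwidth=0.5, lam=0.5, sweeps=200):
    """At each grid point j, u[j] minimizes
       sum_i K_h(x_i - x_j) (u[j] - y_i)^2 + lam * sum of (u[j] - u[neighbor])^2,
    iterated with Gauss-Seidel sweeps until (approximate) convergence."""
    n = len(x)
    w = [[math.exp(-((xi - xj) / bandwidth) ** 2) for xi in x] for xj in x]
    wsum = [sum(row) for row in w]
    ysum = [sum(wi * yi for wi, yi in zip(row, y)) for row in w]
    u = list(y)
    for _ in range(sweeps):
        for j in range(n):
            reg, rsum = 0.0, 0.0
            if j > 0:
                reg += lam * u[j - 1]; rsum += lam
            if j < n - 1:
                reg += lam * u[j + 1]; rsum += lam
            u[j] = (ysum[j] + reg) / (wsum[j] + rsum)
    return u

# Noisy samples of a parabola on a uniform grid.
x = [i / 10 for i in range(11)]
y = [xi ** 2 + (0.05 if i % 2 else -0.05) for i, xi in enumerate(x)]
u = regularized_local_smooth(x, y)
```

Setting `lam = 0` reduces the sketch to a purely local (kernel) smoother, while a very wide kernel with `lam > 0` approaches a purely regularized fit, mirroring the two extremes the unified approach interpolates between.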

Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data specific to such exploration projects is very complex and requires deep knowledge of geology, geophysics, petrology, etc. from interpreters, as well as advanced tools able to recover particular properties. Wavelet techniques have had huge success in signal processing, data compression, noise reduction, etc. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques that relate to solutions of the Helmholtz equation. As an application we are interested in seismic data analysis. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of limit and jump relations for the surfaces used in seismic data processing, i.e., non-smooth surfaces, in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by appropriate substitutes for non-smooth surfaces.
Using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A key point is that this construction of wavelets allows an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary.
To demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the filtering and denoising requirements of such migration algorithms, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically makes it possible to examine the velocity field pointwise. Moreover, the multiscale analysis reveals additional geological information from optical features.
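
The decompose-threshold-reconstruct idea behind such wavelet denoising can be illustrated with the simplest wavelet, the Haar wavelet, on a 1-D trace. Helmholtz wavelets on non-smooth surfaces are of course far beyond this sketch; the data and the threshold below are illustrative.

```python
def haar_decompose(signal):
    """Full Haar wavelet decomposition (signal length must be a power of two)."""
    coeffs, approx = [], list(signal)
    while len(approx) > 1:
        detail = [(approx[2*i] - approx[2*i+1]) / 2 for i in range(len(approx) // 2)]
        approx = [(approx[2*i] + approx[2*i+1]) / 2 for i in range(len(approx) // 2)]
        coeffs.append(detail)
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    """Invert the decomposition, e.g. after thresholding detail coefficients."""
    approx = list(approx)
    for detail in reversed(coeffs):
        nxt = []
        for a, d in zip(approx, detail):
            nxt += [a + d, a - d]
        approx = nxt
    return approx

def denoise(signal, thresh):
    """Hard-threshold small detail coefficients (noise) and reconstruct."""
    approx, coeffs = haar_decompose(signal)
    coeffs = [[d if abs(d) > thresh else 0.0 for d in detail] for detail in coeffs]
    return haar_reconstruct(approx, coeffs)

trace = [0.0, 0.1, -0.1, 0.0, 5.0, 5.1, 0.1, 0.0]   # a "reflector" plus noise
print([round(v, 2) for v in denoise(trace, 0.2)])
```

Without thresholding, the transform reconstructs the trace exactly; with it, the small oscillations are suppressed while the large jump (the event of interest) is preserved, which is the mechanism exploited, at a much more sophisticated level, by the Helmholtz FWT tree algorithm.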