The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral ranges, this is impossible: depending on the spectroscopic technique, both the way the data are acquired and the characteristics to be detected differ. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, chief among them the security sector and the pharmaceutical industry.
Because in many applications the volume of spectra to be organized is high, manual data processing is impractical. In hyperspectral imaging especially, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the required user interaction should be minimized on the one hand, and the adaptation to the specific application maximized on the other.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State-of-the-art methods will be analyzed and supplemented where necessary; improvements and new methods will be proposed. This course of action includes preprocessing methods that make the data comparable. Furthermore, a feature reduction will be presented that represents the chemical content in about 20 channels instead of the initial hundreds. Finally, the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
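The pipeline described in this abstract — preprocessing for comparability, feature reduction to a few channels, hierarchical clustering — can be sketched as follows. This is a minimal illustration only: PCA stands in for the thesis's chemistry-oriented feature reduction, and a naive average-linkage scheme stands in for the efficient clustering methods; all function names are hypothetical.

```python
import numpy as np

def preprocess(spectra):
    # Make the spectra comparable: normalize each spectrum to unit norm
    norms = np.linalg.norm(spectra, axis=1, keepdims=True)
    return spectra / np.where(norms == 0.0, 1.0, norms)

def reduce_features(spectra, n_channels=20):
    # PCA stand-in for the thesis's feature reduction: project the hundreds
    # of spectral channels onto the leading principal components
    centered = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_channels].T

def hierarchical_cluster(features, n_clusters):
    # Naive average-linkage agglomerative clustering (O(n^3); illustration only)
    clusters = [[i] for i in range(len(features))]

    def linkage(a, b):
        return np.mean([np.linalg.norm(features[i] - features[j])
                        for i in a for j in b])

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters[j]
        del clusters[j]

    labels = np.empty(len(features), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Toy "spectra": two groups with different spectral shapes plus noise
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 2.0 * np.pi, 100)
group_a = np.sin(grid) + 0.05 * rng.normal(size=(5, 100))
group_b = np.cos(grid) + 0.05 * rng.normal(size=(5, 100))
features = reduce_features(preprocess(np.vstack([group_a, group_b])), n_channels=3)
labels = hierarchical_cluster(features, n_clusters=2)
```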
This research work focuses on the generation of a high resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns by causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in use in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. The reason is mainly the lack of a high resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite for describing and simulating flood water movement more accurately and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density, large-volume, unstructured LiDAR data into fine resolution spatial databases is a difficult and time-consuming task if it relies on human intervention alone. The application of robust algorithms to this massive volume of data can significantly reduce the processing time and thereby increase both the degree of automation and the accuracy.
This research work presents a number of techniques pertaining to the processing, filtering and classification of LiDAR point data in order to achieve a higher degree of automation and accuracy in generating a high resolution urban surface model. It also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
While the accurate characterization of the various surface types contributes to better modelling of rainfall-runoff processes, the LiDAR-derived fine resolution DSM serves as input to 2D hydraulic models and makes it possible to simulate surface flooding scenarios in cases where the sewer system is surcharged.
Thus, this research work develops high resolution spatial databases aimed at improving the accuracy of the hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the hydraulic capacity of urban drainage systems; and 3) predict and visualize surface flooding scenarios in order to take the necessary flood protection measures.
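To give a flavor of the kind of point-cloud filtering this abstract refers to, the following is a minimal grid-based ground filter. The function name and thresholds are hypothetical, and the thesis's filtering and classification methods are considerably more sophisticated; this is only an illustrative sketch.

```python
import numpy as np

def classify_ground(points, cell=2.0, height_tol=0.3):
    """Minimal grid-based ground filter (illustrative only): points within
    height_tol of the lowest point of their grid cell are labeled ground;
    the rest are non-ground (buildings, vegetation, ...)."""
    xy = points[:, :2]
    z = points[:, 2]
    keys = np.floor(xy / cell).astype(int)
    cells = {}
    for idx, key in enumerate(map(tuple, keys)):
        cells.setdefault(key, []).append(idx)
    ground = np.zeros(len(points), dtype=bool)
    for idxs in cells.values():
        zmin = z[idxs].min()
        for i in idxs:
            if z[i] - zmin <= height_tol:
                ground[i] = True
    return ground

# Three terrain points and one "building" point at 5 m height
points = np.array([[0.5, 0.5, 0.0],
                   [1.0, 1.0, 0.1],
                   [0.7, 0.3, 5.0],
                   [3.1, 0.4, 0.05]])
ground = classify_ground(points)
```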
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computation time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive for each pixel a probability of belonging to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected over critical regions if they fulfill certain conditions ensuring that they belong to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness, whether in achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators for all parameters needed to fit this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer confirms the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber reinforced materials.
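The two modeling steps — a random walk with controlled bending, followed by a force-biased rearrangement into a non-overlapping configuration — can be sketched as follows. This is a toy version: a Gaussian perturbation of the direction stands in for the multivariate von Mises-Fisher distribution, and the repulsion acts on the sample points of two fibers only; all names and parameters are illustrative.

```python
import numpy as np

def random_fiber(start, direction, n_steps, step=1.0, bend=0.1, rng=None):
    """Random-walk fiber: each new segment direction is the previous one
    perturbed by Gaussian noise of scale `bend` and renormalized (a crude
    stand-in for von Mises-Fisher sampling: small `bend` -> straight fiber)."""
    if rng is None:
        rng = np.random.default_rng()
    pts = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        d = d + bend * rng.normal(size=3)
        d /= np.linalg.norm(d)
        pts.append(pts[-1] + step * d)
    return np.array(pts)

def push_apart(fa, fb, radius, iters=200, rate=0.2):
    """Force-biased step: repel point pairs of different fibers that are
    closer than 2*radius until the configuration is overlap-free."""
    fa, fb = fa.copy(), fb.copy()
    for _ in range(iters):
        moved = False
        for i in range(len(fa)):
            for j in range(len(fb)):
                v = fa[i] - fb[j]
                dist = np.linalg.norm(v)
                if dist < 2.0 * radius:
                    shift = rate * (2.0 * radius - dist) * (v / (dist + 1e-12))
                    fa[i] += shift
                    fb[j] -= shift
                    moved = True
        if not moved:
            break
    return fa, fb

# Two straight, overlapping fibers of radius 0.5 get pushed apart
fiber_a = random_fiber([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 10, bend=0.0)
fiber_b = random_fiber([0.0, 0.5, 0.0], [1.0, 0.0, 0.0], 10, bend=0.0)
fa, fb = push_apart(fiber_a, fiber_b, radius=0.5)
```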
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solution sets, called algebraic varieties, which consist of several irreducible
components. A fundamental problem of numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finitely many disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure \(i\)-dimensional components, and the \(W_{ij}\) represent the \(i\)-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
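The witness-set idea can be illustrated on the simplest possible example: intersecting a plane curve with a generic complex line yields as many witness points as the degree of the curve (two for the circle below). The helper is purely illustrative and unrelated to the actual BERTINI or SINGULAR implementations.

```python
import numpy as np

def witness_points_on_circle(seed=0):
    """Witness set of the curve f(x, y) = x^2 + y^2 - 1: intersect the
    variety with a generic complex line p + t*v and solve for t.  The
    number of witness points equals the degree of the component (here 2)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=2) + 1j * rng.normal(size=2)   # random point on the line
    v = rng.normal(size=2) + 1j * rng.normal(size=2)   # random direction
    # f(p + t v) = (p0 + t v0)^2 + (p1 + t v1)^2 - 1 is quadratic in t
    a = v[0] ** 2 + v[1] ** 2
    b = 2.0 * (p[0] * v[0] + p[1] * v[1])
    c = p[0] ** 2 + p[1] ** 2 - 1.0
    return [p + t * v for t in np.roots([a, b, c])]

pts = witness_points_on_circle()
```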
We modify this concept, partially using Gröbner bases, triangular sets, local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Based on the modified algorithm mentioned above, we present in the third chapter an
algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains the procedures for the algorithms of the fourth chapter to test
inclusion and equality of two algebraic varieties, to compute the degree of a pure i-dimensional
component, and to compute the local dimension.
For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real-Earth-oriented strategies and methods for determining the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved so far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials and their characterization in the interior and exterior space of the Earth. We also present the L2-decomposition for regions in R3 in terms of distributions as the main strategy for imposing a Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth.
Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward compared to the spherical harmonic spline formulation, which involves functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the kernel in the spherical case. It turns out that for a spherical Earth this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression for the kernel; at this point, however, it remains unknown to us. In Chapter 7 we therefore consider certain discretization methods for integrals over regions in R3, in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for the approximate integration using this discretization method. The last chapter summarizes our results and gives some directions for future research.
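The general shape of such a spline method can be sketched in a few lines: the minimum-norm interpolant in a reproducing kernel Hilbert space is a linear combination of kernel evaluations whose coefficients solve the Gram system. The Gaussian kernel below is only a placeholder — as the abstract notes, no closed form of the thesis's Newton-integral kernel is known — and the point set is arbitrary.

```python
import numpy as np

def kernel(x, y, scale=1.0):
    # Placeholder reproducing kernel (Gaussian); the thesis's kernel is a
    # Newton integral over the Earth's interior, with no known closed form.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * scale ** 2))

def interpolating_spline(points, values, scale=1.0):
    """Minimum-norm RKHS interpolant s(x) = sum_i a_i K(x, x_i), with
    coefficients a obtained from the Gram system K a = y."""
    n = len(points)
    gram = np.array([[kernel(points[i], points[j], scale) for j in range(n)]
                     for i in range(n)])
    coeffs = np.linalg.solve(gram, values)

    def s(x):
        return sum(a * kernel(np.asarray(x, dtype=float), p, scale)
                   for a, p in zip(coeffs, points))
    return s

# Three evaluation points with prescribed potential values (toy data)
pts = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
vals = np.array([1.0, -0.5, 2.0])
s = interpolating_spline(pts, vals)
```

By construction the spline reproduces the prescribed values at the interpolation points, which is the interpolation condition shared with the thesis's formulation.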
The present dissertation contains theoretical studies on the topic of high energy deposition in matter. The work focuses on electronic excitation and relaxation processes on ultrafast timescales. Energy deposition by means of intense ultrashort (femtosecond) laser pulses and by means of swift heavy ion irradiation have certain similarities: the final observable material modifications result from a number of processes on different timescales. First, the electronic excitation by photoabsorption or by ion impact takes place on subfemtosecond timescales. Then these excited electrons propagate and redistribute their energy, interacting among themselves and exciting secondary generations of electrons. This typically takes place on femtosecond timescales. On the order of tens to hundreds of femtoseconds the excited electrons are usually thermalized. The energy exchange with the lattice atoms lasts up to tens of picoseconds. The lattice temperature can reach the melting point; the material then cools down and recrystallizes, forming the final modified nanostructures, which are observed experimentally. The processes of each step form the initial conditions for the following step. Thus, to describe the final phase transition and the formation of nanostructures, one has to start from the very beginning and follow through all the steps.
The present work focuses on the early stages of the energy dissipation after its deposition, taking place in the electronic subsystems of excited materials. Different models applicable to different excitation mechanisms will be presented: in the thesis I will start from the description of high energy excitation (electron energies of \(\sim\) keV), then focus on excitations to intermediate electron energies (\(\sim\) 100 eV), and finally come down to electron excitations of a few eV (visible light). The results will be compared with experimental observations.
For high energy material excitation, assumed to be caused by irradiation with swift heavy ions, the classical Asymptotic Trajectory Monte-Carlo (ATMC) method is applied to describe the excitation of electrons by the impact of the projectile, the initial kinetics of electrons, secondary electron creation and the Auger redistribution of holes. I first simulate the early stage (first tens of fs) of the kinetics of the electronic subsystem (in a silica target, SiO\(_2\)) in tracks of ions decelerated in the electronic stopping regime. It will be shown that a well pronounced front of excitation in the electronic and ionic subsystems is formed due to the propagation of electrons, which cannot be described by models based on diffusion mechanisms (e.g. parabolic heat diffusion equations). On later timescales, the thermalization time of the electrons can be estimated as the time when the particle and energy propagation turns from ballistic to diffusive. As soon as the electrons are thermalized, one can apply the two temperature model. It will be demonstrated how to combine the MC output with the two temperature model. The results of this combination demonstrate that secondary ionizations play a very important role in the track formation process, leading to energy being stored in the hole subsystem. This energy storage causes a significant delay of the heating and prolongs the timescales of lattice modifications up to tens of picoseconds.
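Once the electrons are thermalized, the two temperature model reduces, in the spatially homogeneous case, to two coupled ordinary differential equations for the electron and lattice temperatures. The sketch below uses illustrative, dimensionless parameter values rather than material data, and a plain forward-Euler integration.

```python
import numpy as np

def two_temperature(Te0, Tl0, Ce, Cl, G, dt, n_steps):
    """Spatially homogeneous two temperature model:
         Ce dTe/dt = -G (Te - Tl)
         Cl dTl/dt = +G (Te - Tl)
       integrated with forward Euler.  Ce, Cl: heat capacities of the
       electron and lattice subsystems; G: coupling constant.
       All values here are illustrative, not material parameters."""
    Te, Tl = [Te0], [Tl0]
    for _ in range(n_steps):
        flux = G * (Te[-1] - Tl[-1])
        Te.append(Te[-1] - dt * flux / Ce)
        Tl.append(Tl[-1] + dt * flux / Cl)
    return np.array(Te), np.array(Tl)

# Hot electrons (e.g. handed over from a Monte-Carlo stage) relax
# towards the lattice; total energy Ce*Te + Cl*Tl is conserved.
Te, Tl = two_temperature(Te0=10000.0, Tl0=300.0,
                         Ce=1.0, Cl=10.0, G=0.5, dt=0.01, n_steps=5000)
```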
For intermediate excitation energies (XUV-VUV laser pulse excitation of materials) I applied the Monte-Carlo simulation, modified and extended where necessary in order to take into account the electronic band structure and Pauli's principle for electrons within the conduction band. I apply the new method to semiconductors and metals, using solid silicon and aluminum as examples.
It will be demonstrated that for semiconductors the final kinetic energy of the free electrons is much smaller than the total energy provided by the laser pulse, due to the energy spent on overcoming ionization potentials. It was found that the final total number of electrons excited by a single photon is significantly less than \(\hbar \omega / E_{gap}\). The concept of an 'effective energy gap' is introduced for collective electronic excitation, which can be applied to estimate the free electron density after high-intensity VUV laser pulse irradiation.
For metals, the experimentally observed spectra of photons emitted from irradiated aluminum can be explained well with our results. At the characteristic time of photon emission due to the radiative decay of an \(L\)-shell hole (\(t < 60\) fs), the distribution function of the electrons is not yet fully thermalized. This distribution consists of two main branches: a low energy part resembling a distorted Fermi distribution, and a long high energy tail. Accordingly, the experimentally observed spectra exhibit two branches: the \(L\)-shell radiation emission reflects the low energy distribution, while the Bremsstrahlung spectra reflect the high energy (nonthermalized) tail. The comparison with experiments demonstrated good agreement of the calculated spectra with the observed ones.
For the irradiation of semiconductors with low energy photons (visible light), a statistical model named the "extended multiple rate equation" is proposed. Based on the earlier developed multiple rate equation, the model additionally includes the interaction of electrons with the phononic subsystem of the lattice and allows for the direct determination of the conditions for crystal damage. Our model effectively describes the dynamics of the electronic subsystem, the dynamical changes of the optical properties, and the lattice heating; the results are in very good agreement with experimental measurements of the transient reflectivity and the damage fluence threshold of silicon irradiated with a femtosecond laser pulse.
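The flavor of such rate-equation models can be conveyed by a drastically reduced single-rate toy version; the form of the equation and all coefficients below are illustrative only and do not reproduce the extended multiple rate equation of the thesis.

```python
import numpy as np

def free_electron_density(times, intensity, alpha, beta):
    """Toy single-rate equation for a laser-excited semiconductor
    (an illustrative stand-in, NOT the extended multiple rate equation):
        dn/dt = alpha * I(t) + beta * I(t) * n,
    i.e. photoionization plus an intensity-driven avalanche term,
    integrated with forward Euler."""
    n = np.zeros_like(times)
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        I = intensity(times[k - 1])
        n[k] = n[k - 1] + dt * (alpha * I + beta * I * n[k - 1])
    return n

times = np.linspace(0.0, 1.0, 1000)
pulse = lambda t: np.exp(-((t - 0.5) / 0.1) ** 2)   # Gaussian pulse envelope
n = free_electron_density(times, pulse, alpha=1.0, beta=0.5)
```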
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Given that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions to given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
This thesis is concerned with the modeling of the domain structure evolution in ferroelectric materials. Both a sharp interface model, in which the driving force on a domain wall is used to postulate an evolution law, and a continuum phase field model are treated in a thermodynamically consistent framework. Within the phase field model, a Ginzburg-Landau type evolution law for the spontaneous polarization is derived. Numerical simulations (FEM) show the influence of various kinds of defects on the domain wall mobility in comparison with experimental findings. A macroscopic material law derived from the phase field model is used to calculate polarization yield surfaces for multiaxial loading conditions.
Mrázek et al. [14] proposed a unified approach to curve estimation which combines
localization and regularization. In this thesis we use their approach to study
some asymptotic properties of local smoothers with regularization. In particular, we
discuss the regularized local least squares (RLLS) estimate with correlated errors
(more precisely, with stationary time series errors). Based on this approach
we then discuss the case when the kernel function is the Dirac delta function and compare our
smoother with the spline smoother. Finally, we present a simulation study.
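A common form of a regularized local least squares estimate — a local linear fit with kernel weights and a ridge-type penalty — can be sketched as follows. This illustrative version uses i.i.d. noise rather than the correlated (stationary time series) errors studied in the thesis, and the particular penalty is an assumption.

```python
import numpy as np

def rlls(x_grid, x, y, bandwidth, lam):
    """Regularized local least squares (illustrative form): at each grid
    point a local linear fit is computed with Gaussian kernel weights and
    a ridge regularization term lam on the local coefficients."""
    est = []
    for x0 in x_grid:
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])   # local linear design
        A = X.T @ (w[:, None] * X) + lam * np.eye(2)
        b = X.T @ (w * y)
        beta = np.linalg.solve(A, b)
        est.append(beta[0])        # intercept = estimate at x0
    return np.array(est)

# Noisy observations of a smooth curve
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=200)
fit = rlls(x, x, y, bandwidth=0.05, lam=1e-3)
```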
Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data for such exploration projects is very complex and requires from interpreters deep knowledge in geology, geophysics, petrology, etc., as well as advanced tools that are able to recover particular properties. Wavelet techniques have had huge success in signal processing, data compression, noise reduction, etc.: they break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented which are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques related to the solutions of the Helmholtz equation; as an application we are interested in seismic data analysis. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of the limit and
jump relations for surfaces used in seismic data processing, i.e., non-smooth
surfaces, in various topologies (for example, the uniform and
quadratic topologies). The essential idea is to replace the concept of parallel surfaces, known for smooth regular surfaces, by appropriate substitutes for non-smooth surfaces.
By using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A remarkable point is that the construction of the wavelets permits an efficient implementation in the form of
a tree algorithm for the fast numerical computation of functions on the boundary.
To demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the filtering and denoising requirements of such migration algorithms, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization property of the Helmholtz wavelets, which numerically enables a pointwise discussion of the velocity field. Moreover, the multiscale analysis reveals additional geological information from optical features.
This thesis treats the extension of the classical computational homogenization scheme towards the multi-scale computation of material quantities like the Eshelby stresses and material forces. To this end, microscopic body forces are considered in the scale transition, which may emerge due to inhomogeneities in the material. Regarding the determination of material quantities based on the underlying microscopic structure, different approaches are compared by means of their virtual work consistency. In analogy to the homogenization of spatial quantities, this consistency is discussed within Hill-Mandel type conditions.
For many years, real-time task models have expressed timing constraints as execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among scenarios which all yield correct behavior, and maximizing this utility improves resource utilization.
For example, target sensitive applications have a target point where execution results in maximized utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The intensity of the utility decay accounts for the importance of the application.
Examples of such applications include multimedia and control; multimedia applications are very popular nowadays, and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides for easy abstractions to express the timing constraints of target sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications.
We consider jobs as objects in a pendulum system, and the target points as the central point.
Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems are helpful for bridging the gap between application requirements and the theoretical abstractions used in task models.
For instance, so-called nature algorithms use key elements of natural processes as the basis of an optimization algorithm.
Examples include ant colony optimization and simulated annealing, which have been applied to combinatorial problems such as the knapsack and traveling salesman problems.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target sensitive RT applications must account for timing constraints, and the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs, and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs, and dominates the complexity of the problem --- finding the optimum solution is NP-hard.
We show the efficacy of our approach through simulation results and three target sensitive RT applications enhanced with the gravitational task model.
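The equilibrium idea can be illustrated with a deliberately simplified example: jobs executed back-to-back, each with a target point for its midpoint and an importance weight, admit a closed-form "equilibrium" placement of the block. This illustrates the pendulum analogy only; it is not the thesis's scheduling algorithms.

```python
def equilibrium_start(jobs):
    """Jobs run back-to-back in the given order; each job is a tuple
    (C, target, weight): execution time, target point for its midpoint,
    and importance.  The 'equilibrium' start of the block minimizes the
    weighted squared deviation of the job midpoints from their targets,
    which has a closed form (a weighted mean), mirroring the pendulum
    equilibrium of the task model.  Illustrative formulation only."""
    offset, midpoints = 0.0, []
    for C, _target, _w in jobs:
        midpoints.append(offset + C / 2.0)   # midpoint relative to block start
        offset += C
    num = sum(w * (target - m) for (C, target, w), m in zip(jobs, midpoints))
    den = sum(w for _, _, w in jobs)
    return num / den

# Two jobs of length 2 with conflicting targets at t = 5 and t = 6:
# the equilibrium splits the deviation evenly between them.
s = equilibrium_start([(2.0, 5.0, 1.0), (2.0, 6.0, 1.0)])
```

Increasing one job's weight pulls the block towards that job's target, just as a heavier bob shifts the pendulum equilibrium.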
This thesis is devoted to constructive module theory of polynomial
graded commutative algebras over a field.
It treats the theory of Groebner bases (GB), standard bases (SB) and syzygies as well as algorithms
and their implementations.
Graded commutative algebras naturally unify exterior and commutative polynomial algebras.
They are graded non-commutative, associative unital algebras over fields and may contain zero-divisors.
In this thesis
we try to make the most use out of _a priori_ knowledge about
their characteristic (super-commutative) structure
in developing direct symbolic methods, algorithms and implementations,
which are intrinsic to graded commutative algebras and practically efficient.
For our symbolic treatment we represent them as polynomial algebras
and redefine the product rule in order to allow super-commutative structures
and, in particular, to allow zero-divisors.
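The sign rule behind this super-commutative product can be made concrete for the purely exterior part. The helper below is an illustrative sketch of the monomial product rule only, not the Plural implementation.

```python
def wedge(mono_a, mono_b):
    """Product of two exterior-algebra monomials, each given as a sorted
    tuple of indices of anticommuting variables: e_i e_j = -e_j e_i and
    e_i^2 = 0.  Returns (sign, monomial), or (0, None) for a zero product
    -- repeated variables are exactly the zero-divisors mentioned above."""
    combined = list(mono_a) + list(mono_b)
    if len(set(combined)) < len(combined):
        return 0, None                       # repeated variable -> zero
    # parity of the permutation that sorts the concatenated monomial
    sign, arr = 1, combined[:]
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                sign = -sign
    return sign, tuple(arr)
```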
Using this representation we give a nice characterization
of a GB and an algorithm for its computation.
We can also tackle central localizations of graded commutative algebras by allowing commutative variables to be _local_,
generalizing Mora's algorithm (in a similar fashion as G.-M. Greuel and G. Pfister did by allowing local or mixed monomial orderings)
and working with SBs.
In this general setting we prove a generalized Buchberger criterion,
which shows that syzygies of leading terms play the most important role
in SB and syzygy module computations.
Furthermore, we develop a variation of the La Scala-Stillman free resolution algorithm,
which we can formulate particularly close to our implementation.
On the implementation side
we have further developed the Singular non-commutative subsystem Plural
in order to allow polynomial arithmetic
and more involved non-commutative basic Computer Algebra computations (e.g. S-polynomial, GB)
to be easily implementable for specific algebras.
At the moment graded commutative algebra-related algorithms
are implemented in this framework.
Benchmarks show that our new algorithms and implementation are practically efficient.
The developed framework has many applications in various
branches of mathematics and theoretical physics.
They include computation of sheaf cohomology, coordinate-free verification of affine geometry
theorems and computation of cohomology rings of p-groups, which are partially described in this thesis.
We discuss some first steps towards experimental design for neural network regression which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on one hand, and the common misspecification of the models on the other hand.
Regarding the first problem, we restrict our consideration to neural networks with only one or two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs, and present a numerical study using the concept of maximin optimal designs.
With respect to the second problem, we examine the effects of misspecification on optimal experimental designs.
The recognition of patterns and structures has gained importance for dealing with the growing amount of data generated by sensors and simulations. Most existing methods for pattern recognition are tailored to scalar data and non-correlated data of higher dimensions. The recognition of general patterns in flow structures is possible, but not yet practically usable due to the high computational effort. The main goal of this work is to present methods for the comparative visualization of flow data, based, among others, on a new method for efficient pattern recognition in flow data. This work is structured in three parts: At first, a known feature-based approach for pattern recognition in flow data, the Clifford convolution, was applied to color edge detection and extended to non-uniform grids. However, this method is still computationally expensive for general pattern recognition, since the recognition algorithm has to be applied for numerous different scales and orientations of the query pattern. A more efficient and accurate method for pattern recognition on flow data is presented in the second part. It is based upon a novel mathematical formulation of moment invariants for flow data. The common moment invariants for pattern recognition are not applicable to flow data, since they are only invariant on non-correlated data. Because of the spatial correlation of flow data, the moment invariants had to be redefined with different basis functions to satisfy the demands of an invariant mapping of flow data. The computation of the moment invariants is done by a multi-scale convolution of the complete flow field with the basis functions. This pre-processing computation time almost equals the time for the pattern recognition of one single general pattern with the former algorithms. However, once the moments have been computed, they can be indexed and used as a look-up table to recognize any desired pattern quickly and interactively.
This results in a flexible and easy-to-use tool for the analysis of patterns in 2D flow data. For an improved rendering of the recognized features, an importance-driven streamline algorithm has been developed. The density of the streamlines can be adjusted using importance maps; the result of a pattern recognition can serve as such a map, for example. Finally, new comparative flow visualization approaches utilizing the streamline approach, the flow pattern matching, and the moment invariants are presented.
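The invariance idea can be sketched with complex monomial moments: writing the 2D field as f = v_x + i*v_y, a rotation of the pattern by an angle alpha multiplies the moment m_{p,q} by a pure phase e^{i(p-q+1)alpha}, so the magnitudes |m_{p,q}| do not change. This is an illustrative construction with monomial basis functions, not the specific invariant basis developed in the thesis.

```python
import numpy as np

def flow_moments(vx, vy, xs, ys, pmax=2):
    # Complex moments m_{p,q} = sum_z z^p conj(z)^q f(z) dA of the field
    # f = vx + i*vy sampled on a uniform grid; |m_{p,q}| is invariant under a
    # simultaneous rotation of the domain and the vector values.
    X, Y = np.meshgrid(xs, ys, indexing='ij')
    Z = X + 1j * Y
    F = vx + 1j * vy
    dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return {(p, q): np.sum(Z**p * np.conj(Z)**q * F) * dA
            for p in range(pmax + 1) for q in range(pmax + 1)}
```

Because the moment magnitudes are computed once per scale for the whole field, matching any query pattern afterwards reduces to comparing small invariant vectors instead of re-running a convolution per orientation.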
Intersection Theory on Tropical Toric Varieties and Compactifications of Tropical Parameter Spaces
(2011)
We study toric varieties over the tropical semifield. We define tropical cycles inside these toric varieties and extend the stable intersection of tropical cycles in R^n to these toric varieties. In particular, we show that every tropical cycle can be degenerated into a sum of torus-invariant cycles. This allows us to tropicalize algebraic cycles of toric varieties over an algebraically closed field with non-Archimedean valuation. We see that the tropicalization map is a homomorphism on cycles and an isomorphism on cycle classes. Furthermore, we can use projective toric varieties to compactify known tropical varieties and study their combinatorics. We do this for the tropical Grassmannian in the Plücker embedding and compactify the tropical parameter space of rational degree d curves in tropical projective space using Chow quotients of the tropical Grassmannian.
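For orientation, the tropical semifield underlying these constructions can be written as follows (shown here in the max-plus convention; the min-plus convention is equally common):

```latex
\mathbb{T} = \bigl(\mathbb{R} \cup \{-\infty\},\ \oplus,\ \odot\bigr),
\qquad a \oplus b = \max(a,b), \qquad a \odot b = a + b.
% A tropical polynomial is then a piecewise linear function, e.g.
% x \odot x \;\oplus\; 1 \odot x \;\oplus\; 4 \;=\; \max(2x,\ x+1,\ 4).
```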
In this work two main approaches for the evaluation of credit derivatives are analyzed: the copula-based approach and the Markov-chain-based approach. This work offers the opportunity to use the advantages and avoid the disadvantages of both approaches. For example, modeling contagion effects, i.e. modeling dependencies between counterparty defaults, is complicated under the copula approach. One remedy is to use a Markov chain, where it can be done directly. The work consists of five chapters. The first chapter extends the model for the pricing of CDS contracts presented in the paper by Kraft and Steffensen (2007). In the widely used models for CDS pricing it is assumed that only the borrower can default. In our model we assume that each of the counterparties involved in the contract may default. The calculated contract prices are compared with those obtained under the usual assumptions. All results are summarized in the form of numerical examples and plots. In the second chapter the copula and its main properties are described. Methods of constructing copulas as well as the most common copula families and their properties are introduced. In the third chapter the method of constructing a copula for an existing Markov chain is introduced. The cases with two and three counterparties are considered. The necessary relations between the transition intensities are derived in order to find some copula functions directly. Formulae for default dependence measures such as Spearman's rho and Kendall's tau are derived for the defined copulas. Several numerical examples are presented in which copulas are built for given Markov chains. The fourth chapter deals with the approximation of copulas when, for a given Markov chain, a copula cannot be provided explicitly. The fifth chapter concludes this thesis.
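As a small numerical illustration of the dependence measures mentioned above, the following sketch samples from a Clayton copula, a standard Archimedean family with the closed form Kendall's tau = theta/(theta+2). The choice of family and parameter is illustrative only and is not the thesis' construction of copulas from Markov chains.

```python
import numpy as np

def sample_clayton(theta, n, rng):
    # Marshall-Olkin sampling via the Clayton generator psi(t) = (1+t)^(-1/theta):
    # U_i = psi(E_i / W) with W gamma-distributed and E_i standard exponential.
    w = rng.gamma(1.0 / theta, 1.0, size=n)           # gamma mixing variable
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / w[:, None]) ** (-1.0 / theta)   # (U, V) with uniform marginals

def kendall_tau(x, y):
    # empirical Kendall's tau: average concordance sign over all pairs
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return np.sum(dx * dy) / (n * (n - 1))
```

For theta = 2 the closed form gives tau = 0.5, which the empirical estimate from a moderate sample reproduces up to sampling error.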
Multi-Field Visualization
(2011)
Modern science utilizes advanced measurement and simulation techniques to analyze phenomena from fields such as medicine, physics, or mechanics. The data produced by the application of these techniques takes the form of multi-dimensional functions or fields, which have to be processed in order to provide meaningful parts of the data to domain experts. The definition and implementation of such processing techniques, with the goal of producing visual representations of portions of the data, are topics of research in scientific visualization, or multi-field visualization in the case of multiple fields. In this thesis, we contribute novel feature extraction and visualization techniques that are able to convey data from multiple fields created by scientific simulations or measurements. Furthermore, our scalar-, vector-, and tensor-field processing techniques contribute to scattered field processing in general and introduce novel ways of analyzing and processing tensorial quantities such as strain and displacement in flow fields, providing insights into field topology. We introduce novel mesh-free extraction techniques for the visualization of complex-valued scalar fields in acoustics that aid in understanding wave topology in low-frequency sound simulations. The resulting structures represent regions with locally minimal sound amplitude and convey wave node evolution and sound cancellation in time-varying sound pressure fields, which is considered an important feature in acoustics design. Furthermore, methods for flow field feature extraction are presented that facilitate the analysis of velocity and strain field properties by visualizing the deformation of infinitesimal Lagrangian particles and the macroscopic deformation of surfaces and volumes in flow. The resulting adaptive manifolds are used to perform flow field segmentation, which supports multi-field visualization by selective visualization of scalar flow quantities.
The effects of continuum displacement in scattered moment tensor fields can be studied by a novel method for multi-field visualization presented in this thesis. The visualization method demonstrates the benefit of clustering and separate views for the visualization of multiple fields.
The goal of this thesis is to propose measures which allow an increase of the power efficiency of OFDM transmission systems. Compared to OFDM transmission over AWGN channels, OFDM transmission over frequency-selective radio channels requires a significantly larger transmit power in order to achieve a certain transmission quality. It is well known that this detrimental impact of frequency selectivity can be combated by frequency diversity. We revisit and further investigate an approach to frequency diversity based on the spreading of subsets of the data elements over corresponding subsets of the OFDM subcarriers and term this approach Partial Data Spreading (PDS). The size of said subsets, which we designate as the spreading factor, is a design parameter of PDS, and by properly choosing this factor, depending on the system designer's requirements, an adequate compromise between good system performance and low complexity can be found. We show how PDS can be combined with ML, MMSE and ZF data detection, and it is recognized that MMSE data detection offers a good compromise between performance and complexity. After having presented the utilization of PDS in OFDM transmission without FEC encoding, we also show that PDS readily lends itself to FEC-encoded OFDM transmission. We show that in this case the system performance can be significantly enhanced by specific schemes of interleaving and utilization of reliability information developed in the thesis. A severe problem of OFDM transmission is the large Peak-to-Average-Power Ratio (PAPR) of the OFDM symbols, which hampers the application of power-efficient transmit amplifiers. Our investigations reveal that PDS inherently reduces the PAPR. Another approach to PAPR reduction is the well-known scheme Selective Data Mapping (SDM). In the thesis it is shown that PDS can be beneficially combined with SDM into the scheme PDS-SDM, with a view to jointly exploiting the PAPR reduction potentials of both schemes.
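The PAPR that these schemes reduce can be computed as sketched below; the symbol length and oversampling factor are illustrative choices, not parameters from the thesis.

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    # Peak-to-average power ratio of one OFDM symbol in dB; the spectrum is
    # zero-padded in the middle (oversampling) so that peaks between the
    # Nyquist-rate samples are captured.
    n = len(freq_symbols)
    spec = np.concatenate([freq_symbols[:n // 2],
                           np.zeros((oversample - 1) * n, dtype=complex),
                           freq_symbols[n // 2:]])
    x = np.fft.ifft(spec)
    p = np.abs(x)**2
    return 10 * np.log10(p.max() / p.mean())
```

Identical unit-modulus symbols on all N subcarriers add up coherently and give the worst case of 10*log10(N) dB, while random data typically stays several dB below it.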
However, even when such a PAPR reduction is achieved, the amplitude maximum of the resulting OFDM symbols is not constant, but depends on the data content. This entails the disadvantage that the power amplifier cannot be designed for a fixed amplitude maximum, which would be desirable with a view to achieving a high power efficiency. In order to overcome this problem, we propose the scheme Optimum Clipping (OC), in which we obtain the desired fixed amplitude maximum by a specific combination of the measures clipping, filtering and rescaling. In OFDM transmission a certain number of OFDM subcarriers have to be sacrificed for pilot transmission in order to enable channel estimation in the receiver. For a given energy of the OFDM symbols, the question arises how this energy should be subdivided among the pilot and the data-carrying OFDM subcarriers. If a large portion of the available transmit energy goes to the pilots, then the quality of the channel estimation is good, but the data detection performs poorly. Data detection also performs poorly if the energy provided for the pilots is too small, because then the channel estimate indispensable for data detection is not accurate enough. We present a scheme for assigning the energy to pilot and data OFDM subcarriers in an optimum way which minimizes the symbol error probability as the ultimate quality measure of the transmission. The major part of the thesis is dedicated to point-to-point OFDM transmission systems. Towards the end of the thesis we show that PDS can also be applied to multipoint-to-point OFDM transmission systems, encountered for instance in the uplinks of mobile radio systems.
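A minimal sketch of the fixed-amplitude-maximum idea combines two of the three OC measures, clipping and rescaling; the filtering step, which suppresses the out-of-band distortion caused by clipping, is omitted here, and the function and parameter names are illustrative.

```python
import numpy as np

def clip_and_rescale(x, a_max):
    # Clip the amplitude of the time-domain signal x to a_max while keeping
    # the phase, then rescale so the amplitude maximum is exactly a_max.
    mag = np.abs(x)
    y = np.where(mag > a_max, a_max * np.exp(1j * np.angle(x)), x)
    return y * (a_max / np.abs(y).max())
```

Since clipping only reduces amplitudes, the rescaling factor is at least one, and the output always attains the fixed amplitude maximum the amplifier can then be designed for.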