## Doctoral Thesis



The study addresses the effect of multiple jet passes and other parameters, namely feed rate, water pressure, and standoff distance, in waterjet peening of metallic surfaces. An analysis of surface integrity was used to evaluate the influence of the different parameters on the process. An increase in the number of jet passes and in pressure leads to higher roughness, more erosion, and higher hardness. In contrast, the feed rate has the reverse effect on these surface characteristics. There exists a specific standoff distance that results in maximum surface roughness, erosion, and hardness. Analysis of the surface microstructure gave good insight into the mechanism of the material removal process, involving initial and evolved damage. The waterjet peening process was also optimized based on a design-of-experiments approach. The developed empirical models showed reasonable correlations between measured and predicted responses. A proper selection of waterjet peening parameters can thus be formulated for use in practical work.

ABSTRACT
"Spin and orbital contribution to the magnetic moment of transition metal clusters and complexes"
The spin and orbital contributions to the magnetic moments of isolated iron \(Fe_n^+\) \((7 ≤ n ≤ 18)\), cobalt \(Co_n^+\) \((8 ≤ n ≤ 22)\) and nickel \(Ni_n^+\) \((7 ≤ n ≤ 17)\) clusters were investigated. Experimental access to both contributions is possible through x-ray magnetic circular dichroism (XMCD) spectroscopy. XMCD spectroscopy is based on x-ray absorption spectroscopy (XAS). It exploits the fact that, for a magnetic sample, the resonant absorption cross sections for negatively and positively circularly polarized x-rays differ for the transition from a spin-orbit-split ground state to the valence level. The resulting dichroic effects contain the information about the magnetism of the investigated sample, which can be extracted from the experimental spectrum via the so-called sum rules. However, only the projections of the magnetic moments onto the quantization axis, corresponding to the magnetization of the sample, are experimentally accessible.
We developed a method to apply XMCD spectroscopy to isolated clusters in the gas phase. A modified Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometer was used to record the XA spectra in Total Ion Yield (TIY) mode, i.e. by recording the fragmentation intensity of the clusters as a function of x-ray energy. The clusters can be considered a superparamagnetic ensemble, so their magnetization follows a Langevin curve. Since the cluster temperature and the magnetic field are known, the intrinsic magnetic moments can be calculated by Langevin correction of the experimental magnetic moments.
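The Langevin correction described above can be sketched numerically. The following is a minimal illustration, not the thesis's actual analysis code: it inverts the superparamagnetic relation \(\mu_\text{meas} = \mu\, L(\mu B / k_B T)\) by bisection, with moments given in Bohr magnetons, the field in tesla, and the temperature in kelvin.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a series for small x."""
    if abs(x) < 1e-4:
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

def intrinsic_moment(mu_measured, B, T, mu_hi=100.0):
    """Invert mu_measured = mu * L(mu * B / (k_B * T)) for mu (in Bohr magnetons)
    by bisection on the interval (0, mu_hi]."""
    k_B = 1.380649e-23       # Boltzmann constant, J/K
    mu_B = 9.2740100783e-24  # Bohr magneton, J/T

    def f(mu):
        x = mu * mu_B * B / (k_B * T)
        return mu * langevin(x) - mu_measured

    lo, hi = 1e-9, mu_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Because the measured (projected) moment underestimates the intrinsic one whenever the Langevin argument is not large, the inversion recovers a larger intrinsic value from a smaller measured one.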
The spin and the orbital magnetic moments are enhanced compared to the respective bulk values for all three investigated elements. The enhancement of the orbital contribution, by about a factor of 3 to 4 compared to the bulk, is more pronounced than that of the spin magnetic moment. However, compared to the atomic values, both contributions are quenched: the orbital magnetic moment only amounts to about 10 to 15 % of the atomic value, while the spin retains about 80 % of its atomic value. If the magnetic moments found for the clusters are put into perspective with respect to the atomic and bulk values by means of scaling laws, it becomes evident that the two contributions follow different interpolations between the atomic and bulk values. The spin follows the well-known trend \(n^{-1/3} \propto 1/(\text{cluster radius})\) (n = number of atoms per cluster, assuming a spherical particle). This trend relates to the ratio of surface to inner atoms in a spherical particle; hence our interpretation is that the spin magnetic moment follows the surface area of the cluster. The orbital magnetic moment, on the other hand, follows \(n^{-1} \propto 1/(\text{cluster volume})\).
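The two scaling laws can be read as interpolations of the per-atom moment between the atomic (n = 1) and bulk (n → ∞) limits. The sketch below uses this natural functional form with hypothetical atomic and bulk values; the thesis's actual fits may differ. The exponent 1/3 corresponds to the surface (radius) scaling of the spin, and 1 to the volume scaling of the orbital moment.

```python
def moment_scaling(n, mu_atom, mu_bulk, exponent):
    """Interpolate a per-atom magnetic moment between the atomic (n = 1) and
    bulk (n -> infinity) limits:
        mu(n) = mu_bulk + (mu_atom - mu_bulk) * n**(-exponent)
    exponent = 1/3 models surface scaling (spin), exponent = 1 volume scaling
    (orbital moment). mu values here are illustrative, in Bohr magnetons."""
    return mu_bulk + (mu_atom - mu_bulk) * n ** (-exponent)
```

With this form, the orbital moment (exponent 1) approaches its bulk value much faster with cluster size than the spin moment (exponent 1/3), matching the qualitative trend described above.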
First XA spectra recorded with circularly polarized x-rays of a Single Molecule Magnet (SMM) \([Fe_4Ln_2(N_3)_4(Htea)_4(piv_6)]\) (Ln = Gd, Tb; \(H_3tea\) = triethanolamine, Hpiv = pivalic acid) are presented.

This thesis combines mass spectrometric studies on ionic dicarboxylic acids and transition metal cluster adsorbate complexes. IR-MPD spectra of protonated and deprotonated aliphatic and aromatic dicarboxylic acids provide insights into the nature of intramolecular hydrogen bonding. Investigations of their fragmentation behavior are supported by MP2 calculations. Prior work on cobalt transition metal clusters is extended to iron and nickel, and three cobalt alloys have been studied.

We consider two major topics in this thesis: a spatial domain partitioning method, and its use as a framework to simulate creeping flows in representative volume elements.
First, we introduce a novel multi-dimensional space partitioning method. A new type of tree combines the advantages of the octree and the kd-tree without sharing their disadvantages. We present a new data structure that allows local refinement, parallelization, and proper restriction of transition ratios between nodes. Our technique has no dimensional restrictions at all. The tree's data structure is defined by a topological algebra based on the symbols \( A = \{ L, I, R \} \) that encode the partitioning steps. The set of successors is restricted such that the nodes form a partition of unity, partitioning the domain without overlap. With our method it is possible to construct a wide choice of spline spaces to compress or reconstruct scientific data such as pressure and velocity fields and multidimensional images. We present a generator function to build a tree that represents a voxel geometry. The space partitioning system is used as a framework for numerical computations. This work is motivated by the problem of representing, in a numerically appropriate way, huge three-dimensional voxel geometries with up to billions of voxels. Such large datasets occur whenever large representative volume elements (REV) must be handled.
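As a rough illustration of a generator function that builds a tree from a voxel geometry, here is a generic 2D quadtree stand-in; the thesis's \( \{L, I, R\} \) algebra, transition-ratio restriction, and n-dimensional structure are not reproduced. Mixed regions are split, homogeneous regions become leaves:

```python
def build_tree(voxels, x0, y0, size):
    """Recursively partition a square voxel region (indexed voxels[y][x]).
    Leaves are homogeneous (all one material) and store that material;
    mixed regions are split into four children. A generic quadtree
    stand-in for an adaptive space-partitioning tree over voxel data."""
    vals = {voxels[y][x] for y in range(y0, y0 + size)
                         for x in range(x0, x0 + size)}
    if len(vals) == 1 or size == 1:
        return vals.pop()                 # homogeneous leaf
    h = size // 2
    # children in row-major order: (x0,y0), (x0+h,y0), (x0,y0+h), (x0+h,y0+h)
    return [build_tree(voxels, x, y, h)
            for y in (y0, y0 + h) for x in (x0, x0 + h)]
```

Homogeneous solid or fluid blocks collapse to single leaves, which is what makes such trees attractive for billion-voxel geometries with large uniform regions.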
Second, we introduce a novel arrangement of the pressure and velocity variables to solve the Stokes equations. The basic idea of our method is to arrange the variables such that each cell can satisfy a given physical law independently of its neighbor cells. This is done by splitting velocity values into left- and right-converging components. For each cell we can set up a small linear system that describes the momentum and mass conservation equations. This formulation allows the use of the Gauss-Seidel algorithm to solve the global linear system. Our tree structure is used for spatial partitioning of the geometry and provides a proper initial guess. In addition, we introduce a method that uses the current velocity field to refine the tree and improve the numerical accuracy where it is needed. We developed a novel approach rather than using existing approaches such as the SIMPLE algorithm, Lattice-Boltzmann methods, or Explicit Jump methods, since those are suited to regular grid structures. Other standard CFD approaches extract surfaces and create tetrahedral meshes to solve on unstructured grids, and thus cannot be applied to our data structure. The discretization converges to the analytical solution under grid refinement. We observe substantial advantages in computational time and memory for high-porosity geometries, and substantial memory savings for low-porosity geometries.
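The global system assembled from the cell-local equations is solved with Gauss-Seidel iteration. Below is a minimal generic sketch of the Gauss-Seidel scheme on an abstract (diagonally dominant) dense system, not the authors' tree-based Stokes discretization:

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Solve A x = b iteratively (A as a list of rows, assumed diagonally
    dominant so the iteration converges). Each unknown is updated in place,
    immediately using the newest values of the others."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:
            break
    return x
```

The in-place update is what distinguishes Gauss-Seidel from Jacobi iteration, and it is also why a good initial guess, such as one interpolated from a coarser tree level, pays off directly in iteration count.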

This PhD thesis deals with the calculation and application of a new class of invariants that can be used to recognize patterns in tensor fields (i.e. scalar fields, vector fields and matrix fields) and, by composing scalar fields with delta functions, also in point clouds.
The first chapter gives an overview of existing invariants.
The second chapter gives the general definition of the new invariants: starting from a tensor field, a set of moment tensors is created by convolving the field, in a tensor-product manner, with tensor products of the position vector of different orders. From these, rotation-invariant values are calculated via contraction of tensor products. An algorithm to obtain a complete and independent set of invariants from a given set of moment tensors is described, as well as methods to make these sets of invariants invariant under translation, rotation, scaling, and affine transformations.
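The construction can be illustrated at the lowest non-trivial order. The sketch below is only an illustration, not the thesis's general algorithm: it centres a 3D point cloud (translation invariance), forms the second-order moment matrix, and contracts it into the rotation-invariant symmetric functions tr(M), tr(M²) and det(M).

```python
def moment_invariants(points):
    """Lowest-order moment invariants of a 3D point cloud: centre the cloud
    (translation invariance), build the second-order moment matrix M, and
    return tr(M), tr(M^2), det(M) -- all unchanged under rotations."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    M = [[0.0] * 3 for _ in range(3)]
    for (x, y, z) in points:
        d = (x - cx, y - cy, z - cz)
        for i in range(3):
            for j in range(3):
                M[i][j] += d[i] * d[j] / n
    tr = M[0][0] + M[1][1] + M[2][2]
    tr2 = sum(M[i][j] * M[j][i] for i in range(3) for j in range(3))
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return tr, tr2, det
```

These three values are the elementary contractions of a single moment tensor; the thesis builds complete, independent sets of such contractions across many tensor orders.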
The third chapter describes a method to optimize the calculation of these sets of invariants: every invariant can be modeled as an undirected graph comprising multiple sub-graphs that represent partially contracted tensor products of the moment tensors. The computation of the sets of invariants is optimized by a clever choice of the decomposition into sub-graphs, all paths forming a hypergraph of sub-graphs in which each node describes a computation step. Finally, C++ source code is generated, optimized using the symmetries of the different tensors and tensor products, and the computational effort is compared with that of other methods for calculating invariants.
The fourth chapter describes the application of the invariants to object recognition in point clouds from 3D scans. To this end, the invariants of sub-sets of the point clouds are stored for every known object. Invariants calculated from an unknown point cloud are then looked up in this database to assign the point cloud to one of the known objects. Benchmarks on three 3D object databases measure run time and recognition rate.

This thesis discusses several applications of computational topology to the visualization of scalar fields. Scalar field data come from various measurements and simulations. The intrinsic properties of this kind of data that make its visualization a complicated task are its large size and the presence of noise. Computational topology is a powerful tool for automatic feature extraction that allows the user to interpret the information contained in a dataset more efficiently. Using it, the main purpose of scientific visualization, namely extracting knowledge from data, becomes a more convenient task.
Volume rendering is a class of methods designed for realistic visual representation of 3D scalar fields. It is used in a wide range of applications with differing data sizes, noise rates, and requirements on interactivity and flexibility. At the moment there is no known technique that can meet the needs of every application domain, so methods solving specific problems must be developed. One such algorithm, designed for rendering noisy data with high frequencies, is presented in the first part of this thesis. The method works with multidimensional transfer functions and is especially suited for functions exhibiting sharp features. Compared with known methods, the presented algorithm achieves better visual quality with faster performance in the presence of such features. An improvement of the method utilizing a topological theory, Morse theory, and a topological construct, the Morse-Smale complex, is also presented in this part of the thesis. The improvement allows for a performance speedup at little precomputation and memory cost.
The usage of topological methods for feature extraction on a real-world dataset often results in a very large feature space, which easily leads to information overflow. Topology simplification is designed to reduce the number of features and allow a domain expert to concentrate on the most important ones. In terms of Morse theory, features are represented by critical points. The importance measure usually used for removing critical points is called homological persistence: critical points are cancelled pairwise according to their homological persistence value. In the presence of outlier-like noise, homological persistence has a clear drawback: the outliers are assigned a high importance value and therefore are not removed. In the second part of this thesis a new importance measure is presented that is especially suited for data with outliers. This importance measure is called scale space persistence. The algorithm for its computation is based on scale space theory, known from the area of computer vision. The development of a critical point in scale space gives information about its spatial extent, so outliers can be distinguished from other critical points. The usage of the presented importance measure is demonstrated on a real-world application: crater identification on the surface of Mars.
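The pairing idea behind homological persistence can be sketched in its simplest, 0-dimensional form on a 1-D sequence; the thesis works with Morse-Smale complexes of higher-dimensional fields, and scale space persistence replaces this importance value, so the code below is only a conceptual illustration. Sweeping the values from low to high, each local minimum gives birth to a component, which dies when it merges into an older component at a saddle:

```python
def persistence_pairs(values):
    """0-dimensional sublevel-set persistence of a 1-D scalar sequence via
    union-find. Returns (birth, death) pairs; the component of the global
    minimum never dies (death = None). Trivial zero-persistence pairs
    (birth == death) are skipped."""
    n = len(values)
    parent = [None] * n          # None: vertex not yet added to the sweep
    birth = [None] * n           # birth value stored at component roots

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:   # the younger component dies
                    ri, rj = rj, ri
                if birth[rj] < values[i]:
                    pairs.append((birth[rj], values[i]))
                parent[rj] = ri
    pairs.append((min(values), None))
    return pairs
```

Simplification then means discarding the pairs with the smallest death-minus-birth gap; the drawback discussed above is that a narrow outlier spike still produces a large gap and thus survives, which is what scale space persistence is designed to fix.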
The third part of this work presents a system for general interactive topology analysis and exploration. The development of such a system is motivated by the fact that topological methods are often considered complicated and hard to understand, because applying topology to visualization requires a deep understanding of the mathematical background behind it. A domain expert exploring data using topology for feature extraction needs an intuitive way to steer the exploration process. The presented system is based on the intuitive notion of a scene graph, in which the user can choose and place component blocks to achieve an individual result. This way the domain expert can extract more knowledge from given data, independently of the application domain. The tool offers calculation and simplification of the underlying topological structure, the Morse-Smale complex, as well as visualization of parts of it. The system also includes a simple generic query language to retrieve different substructures of the topological structure at different levels of the hierarchy.
The fourth part of this dissertation concentrates on an application of computational geometry to quality assessment of a triangulated surface. Quality assessment of a triangulation is called surface interrogation and aims at revealing intrinsic irregularities of a surface. Curvature and continuity are the properties required to design a visually pleasing geometric object. For example, the surface of a manufactured body usually should be convex, without bumps or wiggles. Conventional rendering methods hide the regions of interest because of smoothing or interpolation. Two new methods presented here, curvature estimation using local fitting with Bézier patches and computation of reflection lines for the visual representation of continuity, are specially designed for such assessment problems. The examples and comparisons presented in this part of the thesis demonstrate the benefits of the introduced algorithms. The methods are also well suited for concurrent visualization of simulation results and surface interrogation, to reveal possible intrinsic relationships between them.

Multilevel Constructions
(2014)

The thesis consists of two chapters.
The first chapter makes a deep investigation of the multilevel Monte Carlo (MLMC) method. In particular, we take an optimisation view of the estimator. Rather than fixing the numbers of discretisation points \(n_i\) to be a geometric sequence, we try to find an optimal set-up for the \(n_i\) such that, for a fixed error, the estimate can be computed in minimal time.
In the second chapter we propose to enhance the MLMC estimate with the weak extrapolation technique. This technique improves the order of weak convergence of a scheme and, as a result, reduces the computational complexity of the estimate. In particular, we study the high-order weak extrapolation approach, which is known to be inefficient in the standard setting. However, the combination of MLMC and weak extrapolation yields an improvement over plain MLMC.
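For illustration, a minimal MLMC estimator with the classic geometric level set-up \(n_l = n_0 2^l\), i.e. exactly the set-up that the first chapter seeks to optimise, can be sketched as follows. The SDE (geometric Brownian motion), its parameters, and the sample counts are hypothetical, and the per-level sample sizes are fixed rather than optimised:

```python
import math
import random

def mlmc_estimate(payoff, L=4, n0=8, samples_per_level=2000,
                  T=1.0, s0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Multilevel Monte Carlo for E[payoff(S_T)] under geometric Brownian
    motion dS = mu*S dt + sigma*S dW, Euler scheme with n_l = n0 * 2**l
    steps on level l, combined by the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)

    def euler_pair(n_fine):
        # one fine path (n_fine steps) coupled with its coarse path
        # (n_fine // 2 steps): the coarse step uses the summed increments
        dt = T / n_fine
        s_f = s_c = s0
        for _ in range(n_fine // 2):
            dw1 = rng.gauss(0.0, math.sqrt(dt))
            dw2 = rng.gauss(0.0, math.sqrt(dt))
            s_f += mu * s_f * dt + sigma * s_f * dw1
            s_f += mu * s_f * dt + sigma * s_f * dw2
            s_c += mu * s_c * (2 * dt) + sigma * s_c * (dw1 + dw2)
        return s_f, s_c

    est = 0.0
    for l in range(L + 1):
        n_l = n0 * 2 ** l
        acc = 0.0
        for _ in range(samples_per_level):
            if l == 0:
                dt = T / n_l                     # level 0: plain estimator
                s = s0
                for _ in range(n_l):
                    s += mu * s * dt + sigma * s * rng.gauss(0.0, math.sqrt(dt))
                acc += payoff(s)
            else:
                s_f, s_c = euler_pair(n_l)
                acc += payoff(s_f) - payoff(s_c)  # telescoping correction
        est += acc / samples_per_level
    return est
```

The coupling of the fine and coarse paths through shared Brownian increments is what makes the correction variances decay with the level, which is the source of the MLMC cost reduction that the optimisation and extrapolation chapters then push further.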

If an automated system is tasked to provide services such as search or clustering on an information repository, the quality of the output depends largely on the information that is available to the system in machine-readable form. Plain text, for example, is machine-readable only in a very limited sense. Advanced services typically need to derive other representations of the text (e.g., sets of keywords) as input for their core algorithms. Some services might need information that cannot be derived from the resource alone but is available only as separate metadata, such as usage information. Annotations can be used to carry this information.
This thesis focuses on so-called ontology-based annotations. In contrast to other forms of annotations such as tags (arbitrary strings that users can assign to resources), ontology-based annotations conform to a predefined data structure and class hierarchy. An advantage of this approach is that rich information can be stored in a well-structured way in the annotations; a drawback is that users need to be familiar with the hierarchy and other design decisions of the underlying annotation ontology.
Two scenarios are considered in this thesis:
First, a document-based scenario in which text annotations represent both information about the text content and usage and user-context information, in a multi-user setting with mostly objective annotation criteria; second, a resource-based scenario whose annotation model focuses on multi-user settings with subjective annotation criteria, using (dis-)similarities in user annotations to derive user similarity metrics and building personalized views from this information.
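A simple way to turn (dis-)similarities in user annotations into a user similarity metric can be sketched with a plain Jaccard measure over per-resource annotation sets; the data layout and the choice of Jaccard similarity here are illustrative assumptions, not the thesis's actual metric:

```python
def jaccard(a, b):
    """Jaccard similarity of two annotation sets (1.0 identical, 0.0 disjoint)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def user_similarity(annotations, u, v):
    """Average Jaccard similarity over the resources both users annotated.
    `annotations` maps user -> resource -> set of annotation concepts,
    e.g. ontology class identifiers."""
    shared = set(annotations[u]) & set(annotations[v])
    if not shared:
        return 0.0
    return sum(jaccard(annotations[u][r], annotations[v][r])
               for r in shared) / len(shared)
```

Such a metric can then drive personalized views: resources annotated by highly similar users are ranked higher for the current user.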
Finally, the prototypical systems developed throughout this thesis are evaluated, validating the concepts presented here.

As the complexity of embedded systems continuously rises, their development becomes more and more challenging. One technique to cope with this complexity is the employment of virtual prototypes. The virtual prototypes are intended to represent the embedded system’s properties on different levels of detail like register transfer level or transaction level. Virtual prototypes can be used for different tasks throughout the development process. They can act as executable specification, can be used for architecture exploration, can ease system integration, and allow for pre- and post-silicon software development and verification. The optimization objectives for virtual prototypes and their creation process are manifold. Finding an appropriate trade-off between the simulation accuracy, the simulation performance, and the implementation effort is a major challenge, as these requirements are contradictory.
In this work, two new and complementary techniques for the efficient creation of accurate and high-performance SystemC based virtual prototypes are proposed: Advanced Temporal Decoupling (ATD) and Transparent Transaction Level Modeling (TTLM). The suitability for industrial environments is assured by the employment of common standards like SystemC TLM-2.0 and IP-XACT.
Advanced Temporal Decoupling enhances the simulation accuracy while retaining high simulation performance by allowing for cycle-accurate simulation in the context of SystemC TLM-2.0 temporal decoupling. This is achieved by exploiting the local time warp arising in SystemC TLM-2.0 temporally decoupled models to support the computation of resource contention effects. In ATD, accesses to shared resources are managed by Temporal Decoupled Semaphores (TDSems) which are integrated into the modeled shared resources. The set of TDSems assures the correct execution order of shared resource accesses and incorporates timing effects resulting from shared resource access execution and resource conflicts. This is done by dynamically varying the data granularity of resource accesses based on information gathered from the local time warp. ATD facilitates modeling of a wide range of resource and resource access properties like preemptable and non-preemptable accesses, synchronous and asynchronous accesses, multiport resources, dynamic access priorities, interacting and cascaded resources, and user-specified schedulers prioritizing simultaneous resource accesses.
Transparent Transaction Level Modeling focuses on the efficient creation of virtual prototypes by reducing the implementation effort and consists of a library and a code generator. The TTLM library adds a layer of convenience functions to ATD comprising various application programming interfaces for inter module communication, virtual prototype configuration and run time information extraction. The TTLM generator is used to automatically generate the structural code of the virtual prototype from the formal hardware specification language IP-XACT.
The applicability and benefits of the presented techniques are demonstrated using an image-processing-centric automotive application. Compared to an existing cycle-accurate SystemC model, the implementation effort can be reduced by approximately 50% using TTLM. Applying ATD, the simulation performance can be increased by a factor of up to five while retaining cycle accuracy.

In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.