## Doctoral Thesis

Embedded systems, ranging from very simple devices to complex controllers, nowadays often face challenging real-time requirements. Many embedded systems are reactive systems that must respond to environmental events and guarantee certain real-time constraints. Their execution is usually divided into reaction steps: in each step, the system reads inputs from the environment and reacts by computing corresponding outputs.
The synchronous Model of Computation (MoC) has proven to be well-suited for the development of reactive real-time embedded systems, since its paradigm directly reflects the reactive nature of the systems it describes. Another advantage is the availability of formal verification by model checking, a consequence of the deterministic execution based on a formal semantics. Nevertheless, the increasing complexity of embedded systems makes it necessary to compensate for the inherent drawback of model checking, the well-known state-space explosion problem. It is therefore natural to integrate other verification methods with the already established techniques. Hence, improvements such as appropriate decomposition techniques are required, which naturally counter the disadvantages of the model-checking approach. Defining decomposition techniques for synchronous languages is a difficult task, however, because of the inherent parallelism arising from the synchronous broadcast communication.
Inspired by progress in the field of desynchronization of synchronous systems by representing them in other MoCs, this work investigates the possibility of adapting and using methods and tools designed for other MoCs for the verification of systems represented in the synchronous MoC. To this end, this work introduces the interactive verification of synchronous systems based on the foundation of formal verification for sequential programs: the Hoare calculus. Due to the different models of computation, several problems have to be solved. In particular, because of the large amount of concurrency, several parts of the program are active at the same point in time. In contrast to sequential programs, a decomposition in the Hoare-logic style, which is in some sense a symbolic execution from one control-flow location to another, must therefore consider several flows at once. Accordingly, different approaches for the interactive verification of synchronous systems are presented.
Additionally, the representation of synchronous systems by other MoCs, and the influence of this representation on the verification task when synchronous systems are embedded in different ways into a single verification tool, are elaborated.
Feasibility is demonstrated by integrating the presented approach with the established model-checking methods, implementing the AIFProver on top of the Averest system.
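As background for readers unfamiliar with the Hoare calculus mentioned above, the basic judgment is the Hoare triple; the following is standard textbook material, not notation specific to this thesis:

```latex
% Hoare triple: if precondition P holds before command c and c terminates,
% then postcondition Q holds afterwards.
\{P\}\; c\; \{Q\}

% Assignment axiom (the core rule of symbolic execution):
\{Q[e/x]\}\; x := e\; \{Q\}

% Example instance:
\{x + 1 > 0\}\; x := x + 1\; \{x > 0\}
```

For synchronous programs, as the abstract explains, such a step cannot follow a single control-flow location: all concurrently active flows of a reaction step must be considered together.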

This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
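The modular methods mentioned above rest on a common pattern: compute the result modulo several primes and reconstruct it over the integers via the Chinese Remainder Theorem. A minimal sketch of that reconstruction step (illustrative only; the thesis's algorithms in SINGULAR are far more involved):

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: reconstruct x with x = r_i (mod m_i)
    for pairwise coprime moduli m_i; the result is unique mod prod(m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

# Toy illustration: recover the coefficient 1234567 from its images
# modulo several word-sized primes, as a modular algorithm would.
primes = [10007, 10009, 10037]
target = 1234567
images = [target % p for p in primes]
assert crt(images, primes) == target
```

In an actual modular normalization or primary-decomposition run, each prime's computation is independent, which is what makes the approach parallelize well.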
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.

Researchers and analysts in modern industrial and academic environments are faced with a daunting amount of multivariate data. While there has been significant development in the areas of data mining and knowledge discovery, there is still a need for improved visualizations and generic solutions. The state of the art in visual analytics and exploratory data visualization is to incorporate more profound analysis methods while focusing on improving interactive abilities, in order to support data analysts in gaining new insights through visual exploration and hypothesis building.
In the research field of exploratory data visualization, this thesis contributes new approaches in dimension reduction that tackle a number of shortcomings of state-of-the-art methods, such as poor interpretability and ambiguity. By combining methods from several disciplines, we describe how ambiguity can be countered effectively by visualizing coordinate values within a lower-dimensional embedding, thereby focusing on the display of the structural composition of high-dimensional data and on an intuitive depiction of inherent global relationships. We also describe how properties and alignment of high-dimensional manifolds can be analyzed at different levels of detail by means of a self-embedding hierarchy of local projections, each using the full degrees of freedom, while keeping the global context.
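For orientation, the kind of lower-dimensional embedding such methods start from can be illustrated with plain PCA; this is a generic baseline (assumed for illustration), not the thesis's own projection technique:

```python
import numpy as np

def pca_embed(X, k=2):
    """Project the rows of X onto the top-k principal components.
    Standard PCA baseline via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # coordinates in the top-k subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                   # hypothetical multivariate data
Y = pca_embed(X, k=2)
assert Y.shape == (100, 2)
```

The shortcomings the abstract mentions show up exactly here: the 2D coordinates of such an embedding are hard to interpret in terms of the original variables, which motivates visualizing coordinate values within the embedding itself.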
To the application field of air quality research, the thesis provides novel means for the study of aerosol source contributions. Triggered by this particularly challenging application problem, we instigate a new research direction in the area of visual analytics by describing a methodology for model-based visual analysis that (i) allows the scientist to be "in the loop" of computations and (ii) enables them to verify and control the analysis process, in order to steer computations towards physical meaning. Careful reflection on our work in this application has led us to derive key design choices that underlie and transcend application-specific solutions. As a result, we describe a general design methodology for computing parameters of a pre-defined analytical model that map to multivariate data. Core application areas that can benefit from our approach are within engineering disciplines, such as civil, chemical, electrical, and mechanical engineering, as well as in geology, physics, and biology.

The heart is reported to show a net consumption of lactate, which may contribute up to 15% to total body lactate disposal. In this work, lactate consumption was shown for the first time at the single-cell level using the new FRET-based lactate sensor Laconic.
Research published to date almost exclusively reports the monocarboxylate transporter 1 (MCT1) as the transporter responsible for myocardial lactate uptake. As this membrane transporter co-transports lactate and H+ in a 1:1 stoichiometry, lactate transport is coupled to pH regulation. Consequently, interactions of MCT1 with acid/base-regulating proteins (carbonic anhydrases (CAs) and sodium bicarbonate co-transporters (NBCs)) have been described in the oocyte expression system, in skeletal muscle, and in cancer cells.
In this work it is shown that the activity of extracellular CA increases lactate uptake into mouse cardiomyocytes by 27% and increases lactate-induced JA/B by 42.8% to 46.2%. This effect is most likely mediated via an NBC/CA interaction, because inhibition of extracellular CA reduces the HCO3−-dependent acid-extruding JA/B by 53.3% to 78.4%. This may link lactate uptake to cellular respiration. When lactate was applied in medium gassed with 100% N2, lactate-induced acidification was 12.6% faster than in medium gassed with 100% O2. Thus, the CO2 produced along the pathway that transfers redox energy from substrates such as glucose and lactate to ADP and phosphate via oxidative phosphorylation may support further lactate uptake. The findings of this work suggest an autoregulation of lactate uptake via CO2 release in ventricular mouse cardiomyocytes.

Mechanical ventilation of patients with severe lung injury is an important clinical treatment to ensure proper lung oxygenation and to mitigate the extent of collapsed lung regions. While current imaging technologies such as Computed Tomography (CT) and chest X-ray allow for a thorough inspection of the thorax, they are limited to static pictures and exhibit several disadvantages, including exposure to ionizing radiation and high cost. Electrical Impedance Tomography (EIT) is a novel method to determine functional processes inside the thorax such as lung ventilation and cardiac activity. EIT reconstructs the internal electrical conductivity distribution within the thorax from voltage measurements on the body surface. Conductivity changes correlate with important clinical parameters such as lung volume and perfusion. Current EIT systems and algorithms use simplified or generalized thorax models to solve the reconstruction problem, which reduce image quality and anatomical significance. In this thesis, the development of a clinically relevant workflow to compute sophisticated three-dimensional thorax models from patient-specific CT data is described. The method allows medical experts to generate a multi-material segmentation in an interactive and fast way, while a volumetric mesh is computed automatically from the segmentation. The significantly improved image quality and anatomical precision of EIT images reconstructed with these 3D models is reported, and the impact on clinical applicability is discussed. In addition, three projects concerning quantitative CT (qCT) measurements and multi-modal 3D visualization are presented, which demonstrate the importance and productivity of interdisciplinary research groups including computer scientists and medical experts. The results presented in this thesis contribute significantly to clinical research efforts to pave the way towards improved patient-specific treatments of lung injury using EIT and qCT.
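The EIT reconstruction problem described above is a severely ill-posed inverse problem: conductivity changes inside the thorax must be inferred from boundary voltage measurements. A minimal linearized, Tikhonov-regularized sketch of one reconstruction step (the sensitivity matrix here is random and purely illustrative; the thesis's contribution is the patient-specific 3D thorax models that make the real forward model accurate):

```python
import numpy as np

def tikhonov_reconstruct(J, v, lam=1e-2):
    """Linearized EIT step: recover a conductivity change d from boundary
    voltage changes v ≈ J @ d, where J is the sensitivity (Jacobian) of
    the forward model. Regularization copes with the ill-posedness."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ v)

# Toy example with a made-up sensitivity matrix.
rng = np.random.default_rng(1)
J = rng.normal(size=(64, 16))        # 64 measurements, 16 image elements
d_true = np.zeros(16)
d_true[5] = 1.0                      # a single perturbed element
v = J @ d_true
d_est = tikhonov_reconstruct(J, v, lam=1e-6)
assert np.argmax(np.abs(d_est)) == 5  # the perturbed element dominates
```

With a generic thorax model, the columns of J misrepresent the true sensitivities, which is exactly why the patient-specific meshes improve image quality.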

The dissertation "Portfoliooptimierung im Binomialmodell" (portfolio optimization in the binomial model) addresses the question of the extent to which the problem of optimal portfolio selection is solvable in the binomial model, and to what extent the results carry over to the continuous-time model. In addition to the classical model without costs and without changes in the market situation, model extensions are also investigated.
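For a concrete flavor of the problem class, the one-period binomial model with logarithmic utility has a closed-form optimal risky fraction (the classical Kelly-type textbook result, shown as an illustration; the thesis treats far more general settings):

```python
import math

def kelly_binomial(u, d, r, p):
    """Log-utility optimal fraction of wealth in the risky asset for the
    one-period binomial model: price factor u or d, riskless rate r,
    up-probability p. Derived by setting the derivative of expected
    log-wealth to zero."""
    a = u - (1 + r)          # excess up-return
    b = d - (1 + r)          # excess down-return (negative)
    return (1 + r) * (p * a + (1 - p) * b) / (-a * b)

pi = kelly_binomial(u=1.2, d=0.9, r=0.0, p=0.6)   # optimal fraction = 4.0

# Sanity check against a brute-force grid search of expected log-wealth:
best = max(
    (x / 1000 for x in range(-1000, 6000)),
    key=lambda f: 0.6 * math.log(1 + 0.2 * f) + 0.4 * math.log(1 - 0.1 * f),
)
assert abs(pi - best) < 1e-2
```

Note the optimum here exceeds 1, i.e. it prescribes leverage; constraints, costs, and a changing market, as studied in the thesis, alter this solution.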

The study addresses the effect of multiple jet passes and other parameters, namely feed rate, water pressure, and standoff distance, in waterjet peening of metallic surfaces. An analysis of surface integrity was used to evaluate the influence of the different parameters on the process. An increase in the number of jet passes and in pressure leads to higher roughness, more erosion, and higher hardness. In contrast, the feed rate has the reverse effect on these surface characteristics. There exists a specific standoff distance that results in maximum surface roughness, erosion, and hardness. Analysis of the surface microstructure gave good insight into the mechanism of the material removal process, involving initial and evolved damage. The waterjet peening process was also optimized based on a design-of-experiments approach. The developed empirical models showed reasonable correlations between measured and predicted responses. A proper selection of waterjet peening parameters can thus be formulated for use in practical work.

This PhD thesis deals with the calculation and application of a new class of invariants that can be used to recognize patterns in tensor fields (i.e., scalar fields, vector fields, and matrix fields) and, by composing scalar fields with delta functions, also in point clouds.
In the first chapter, an overview of already existing invariants is given.
In the second chapter, the general definition of the new invariants is given:
starting with a tensor field, a set of moment tensors is created by convolving the field, in a tensor-product manner, with different orders of the tensor product of the position vector. From these, rotation-invariant values are calculated via contractions of tensor products. An algorithm to obtain a complete and independent set of invariants from a given set of moment tensors is described. Furthermore, methods are presented to make these sets of invariants invariant under translation, rotation, scaling, and affine transformations.
In the third chapter, a method to optimize the calculation of these sets of invariants is described: every invariant can be modeled as an undirected graph comprising multiple sub-graphs that represent partially contracted tensor products of the moment tensors.
The computation of the sets of invariants is optimized by a clever choice of the decomposition into sub-graphs, with all paths forming a hyper-graph of sub-graphs in which each node describes a computation step. Finally, C++ source code is generated, which is optimized by exploiting the symmetries of the different tensors and tensor products, and the computational effort is compared with that of other methods for calculating invariants.
The fourth chapter describes the application of the invariants to object recognition in point clouds from 3D scans. To this end, the invariants of sub-sets of point clouds are stored for every known object. Afterwards, invariants are calculated from an unknown point cloud and looked up in this database in order to assign the cloud to one of the known objects. Benchmarks on three 3D object databases measure runtime and recognition rate.
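The idea of rotation invariance via contractions of moment tensors can be sketched in a drastically simplified 2D, second-order case (the thesis builds full tensor contractions of arbitrary order; the two invariants below are just the trace and squared deviator of the second-moment tensor):

```python
import numpy as np

def moment_invariants(f):
    """Two rotation invariants of a 2D scalar field f, obtained by
    contracting its second-order central moment tensor (simplified
    illustration of the moment-tensor approach)."""
    h, w = f.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    m = f.sum()
    cx, cy = (x * f).sum() / m, (y * f).sum() / m   # centroid
    xs, ys = x - cx, y - cy
    mu20 = (xs**2 * f).sum()
    mu02 = (ys**2 * f).sum()
    mu11 = (xs * ys * f).sum()
    # Trace and squared deviator: both unchanged under rotation.
    return mu20 + mu02, (mu20 - mu02) ** 2 + 4 * mu11**2

rng = np.random.default_rng(2)
f = rng.random((8, 8))
i1, i2 = moment_invariants(f)
j1, j2 = moment_invariants(np.rot90(f))    # exact 90-degree rotation
assert np.isclose(i1, j1) and np.isclose(i2, j2)
```

Matching such invariant vectors against a database of known objects, rather than the raw point data, is what makes the recognition pipeline pose-independent.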

Three dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high precision structured light scanners. Lower precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two dimensional (2d) camera images.
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU based stereo matching method is presented to achieve a high quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi level approach on the GPU allows a fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for the scanning of complex shaped objects. The operator of the scanner can scan complex regions slower, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows the reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation using a region growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures the boundary is closed and avoids self-intersections.
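The step of fitting an implicit quadric to a segment and reading off primitive parameters from its canonical form can be illustrated with the simplest case, a least-squares sphere fit (illustrative only; the actual pipeline handles general quadrics and on-line accumulation):

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit.  Writing |p|^2 = 2 c·p + (r^2 - |c|^2)
    makes the center c and the constant k = r^2 - |c|^2 linear unknowns."""
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

# Noisy points sampled from a known sphere (synthetic stand-in for a scan segment).
rng = np.random.default_rng(3)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -2.0, 0.5]) + 2.0 * dirs + 0.001 * rng.normal(size=(200, 3))
center, radius = fit_sphere(pts)
assert np.allclose(center, [1.0, -2.0, 0.5], atol=0.01)
assert abs(radius - 2.0) < 0.01
```

For general quadrics the same least-squares idea yields a symmetric coefficient matrix whose eigendecomposition (the canonical form) identifies the primitive type and its parameters.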

Methyleugenol (ME), (\(\it E \))-methylisoeugenol (MIE), alpha-asarone (aA), beta-asarone (bA), and gamma-asarone (gA) are naturally occurring plant constituents from the class of phenylpropenes (PP) and part of the human diet. ME, aA, and bA proved to be hepatocarcinogenic in animal studies; MIE and gA have not yet been investigated. The allylic hepatocarcinogens estragole and safrole are hydroxylated at the 1'-position by cytochrome P450 enzymes (CYP) in the course of xenobiotic metabolism and subsequently sulfonated by sulfotransferases. The reactive sulfuric acid ester can form DNA adducts, which can in turn trigger mutations and tumors. The same activation mechanism was postulated for ME, although definitive proof has so far been lacking. In contrast, the majority of the propenylic PP investigated to date were not carcinogenic; aA and bA are exceptions in this respect. Animal studies suggest an alternative activation mechanism for aA and bA. Studies on the metabolism of the asarones were not previously available. The aim of this work was therefore to investigate the hepatic metabolism of these five PP in order to gain insight into the potential genotoxic mechanisms. Metabolite formation of the five substances was investigated in a time- and concentration-dependent manner in human and animal liver microsomes (LM) as well as with various human CYP enzymes (Supersomes\(^{TM}\)). The metabolites were characterized, identified, and synthesized to serve as reference compounds for quantification and for further mechanistic in vitro studies. Furthermore, enzyme kinetic parameters were determined to allow extrapolation of metabolite formation to low substrate concentrations and to assess the contribution of human CYPs to metabolite formation in human LM.
In addition, the formation of DNA adducts in primary rat hepatocytes after incubation with ME, MIE, and their metabolites was investigated. For all compounds, the following five enzymatic reactions were identified: hydroxylation of the side chain, oxidation of these alcohols, epoxidation of the side chain and hydrolysis to diols, as well as \(\it O \)-demethylation. The respective side-chain alcohols were identified as the main metabolites of ME, MIE, aA, and gA, whereas bA was mainly metabolized via epoxides to diols. The results demonstrate that ME, in contrast to MIE, can be activated via the same mechanism as safrole and estragole, even at low concentrations relevant to humans. For aA and especially bA, activation via epoxides appears more likely, whereas for gA there is no evidence of potentially genotoxic metabolites.