## Doctoral Thesis

### Refine

#### Year of publication

- 2014 (47)

#### Document Type

- Doctoral Thesis (47)

#### Language

- English (47)

#### Keywords

- Activity Recognition, Wearable Computing, Minimal Training, Unobtrusive Instrumentations (1)
- Activity recognition (1)
- Adaptive Data Structure (1)
- AhRR (1)
- Algorithm (1)
- Boosting (1)
- CYP1A1 (1)
- Classification (1)
- Closure (1)
- Code Generation (1)
- Computer graphics (1)
- Cycle Accuracy (1)
- DL-PCBs (1)
- Dataset (1)
- Deconsolidation (1)
- Dioxin (1)
- Direct Numerical Simulation (1)
- Discrete Event Simulation (DES) (1)
- EROD (1)
- Eikonal equation (1)
- Finite automaton (1)
- Evaluation (1)
- Feasibility study (1)
- Formal grammar (1)
- Formal language (1)
- Grouping by similarity (1)
- Hypergraph (1)
- IP-XACT (1)
- Ileostomy (1)
- Immunoblot (1)
- Intensity estimation (1)
- Interactive decision support systems (1)
- Invariant (1)
- Pushdown automaton (1)
- Knowledge Management (1)
- LIR-Tree (1)
- Machine learning (1)
- Microarray (1)
- Mobile system (1)
- Pattern recognition (1)
- Noise Control, Feature Extraction, Speech Recognition (1)
- OCR (1)
- PCDD/Fs (1)
- Partial Differential Equations (1)
- Pedestrian Flow (1)
- Perceptual grouping (1)
- Personalisation (1)
- Pervasive health (1)
- Physical activity monitoring (1)
- Recommender Systems (1)
- Response Priming (1)
- Self-splitting objects (1)
- Semantic Web (1)
- Semantic Wikis (1)
- Shared Resource Modeling (1)
- Stokes Equations (1)
- Sustainability (1)
- Symmetry (1)
- SystemC (1)
- TIPARP (1)
- Temporal Decoupling (1)
- Tensor field (1)
- Thermoplastic (1)
- Topology visualization (1)
- Transaction Level Modeling (TLM) (1)
- Ubiquitous system (1)
- Urban Water Supply (1)
- Volume rendering (1)
- Water resources (1)
- Wearable computing (1)
- XMCD (1)
- aryl hydrocarbon receptor (1)
- bioavailability (1)
- cobalt (1)
- coffee (1)
- dioxin-like compounds (1)
- fatigue (1)
- flow cytometry (1)
- gas phase (1)
- geographic information systems (1)
- geology (1)
- hypergraph (1)
- invariant (1)
- iron (1)
- magnetism (1)
- metal cluster (1)
- moment (1)
- nickel (1)
- optimization (1)
- orbit (1)
- peripheral blood mononuclear cells (1)
- point cloud (1)
- polyphenol (1)
- rat liver cell systems (1)
- relative effect potencies (1)
- single molecule magnet (1)
- spin (1)
- tensor (1)
- tensorfield (1)
- terrain rendering (1)
- tetrachlorodibenzo-p-dioxin (1)
- toxic equivalency factor (TEF) concept (1)
- vectorfield (1)
- virtual reality (1)
- whole genome microarray analysis (1)

#### Abstracts

The demand for sustainability is continuously increasing. Thermoplastic
composites have therefore become a focus of research due to their good
weight-to-performance ratio. Nevertheless, the limiting factor of their use in
some processes is the loss of consolidation during re-melting (deconsolidation),
which reduces part quality. Several studies dealing with deconsolidation are
available. These studies each investigate a single material and process, which
limits their usefulness for general interpretation as well as their
comparability to other studies. There are two main approaches: the first
identifies the internal void pressure as the main cause of deconsolidation, and
the second identifies the fiber reinforcement network as the main cause.
Because of their contradictory results and the limited variety of materials and
processes, there is a clear need for a more comprehensive investigation covering
several materials and processes.
This study investigates the deconsolidation behavior of 17 different materials
and material configurations, considering commodity, engineering, and
performance polymers as well as one carbon and two glass fiber fabrics. Based
on the first law of thermodynamics, a deconsolidation model is proposed and
verified by experiments. Universally applicable input parameters are proposed
for the prediction of deconsolidation to minimize the required input
measurements. The study revealed that the fiber reinforcement network is the
main cause of deconsolidation, especially for fiber volume fractions higher
than 48 %. The internal void pressure can promote deconsolidation when the
specimen has been manufactured recently. In other cases, the internal void
pressure as well as the surface tension prevent deconsolidation.
During deconsolidation, the polymer is displaced by the volume increase of the
voids. The polymer flow damps the progress of deconsolidation because of the
internal friction of the polymer. Crystallinity and thermal expansion lead to a
reversible thickness increase during deconsolidation. Moisture can strongly
accelerate deconsolidation and can increase the thickness several-fold because
of the vaporization of water. The model is also capable of predicting
reconsolidation under the defined boundary conditions of pressure, time, and
specimen size. At high pressures, matrix squeeze-out occurs, which degrades the
accuracy of the model.

The proposed model was applied to thermoforming, induction welding, and
thermoplastic tape placement. It is demonstrated that the load rate during
thermoforming is the critical factor for achieving complete reconsolidation.
The required load rate can be determined by the model and depends on the
cooling rate, the forming length, the extent of deconsolidation, the processing
temperature, and the final pressure. During induction welding, severe
deconsolidation can occur because of moisture remaining in the polymer in the
molten state. The moisture cannot fully diffuse out of the specimen during the
fast heating. Therefore, more pressure is needed for complete reconsolidation
than would be for a dry specimen. Deconsolidation is an issue for thermoplastic
tape placement, too. It limits the placement velocity because of insufficient
cooling after compaction. If the specimen is locally still molten after
compaction, it deconsolidates and causes residual stresses in the bond line,
which decrease the interlaminar shear strength. It can be concluded that the
study provides new knowledge and helps to optimize these processes by means of
the developed model without a large number of required measurements.
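If the solid (fiber plus matrix) volume is conserved, specimen thickness and void content are directly linked. A minimal sketch of this relation follows; it is an assumption added for illustration, not the thesis's thermodynamic model, and the numbers are invented:

```python
def thickness_after_deconsolidation(d0, xv0, xv):
    """Thickness of a specimen at void content xv, assuming the solid
    (fiber + matrix) volume is conserved: d * (1 - Xv) = d0 * (1 - Xv0).

    d0 and xv0 are the initial thickness and void content. Illustrative
    helper only; the thesis derives thickness from an energy balance.
    """
    return d0 * (1 - xv0) / (1 - xv)

# A 2 mm laminate at 1 % voids that deconsolidates to 50.5 % voids
# doubles its thickness to 4 mm.
d = thickness_after_deconsolidation(2.0, 0.01, 0.505)
```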
Owing to their high specific strength and stiffness, continuous-fiber-reinforced
thermoplastics are excellent lightweight construction materials. However,
deconsolidation during re-melting can lead to a loss of these good mechanical
properties, which is why deconsolidation is undesirable. Deconsolidation has
been investigated in many studies, with differing results. Usually, a single
material and a single process were considered, so a general interpretation and
the comparability between studies are only possible to a limited extent. Two
approaches are known from the literature. The first takes the pressure
difference between the internal void pressure and the ambient pressure as the
main cause of deconsolidation; the second identifies the fiber reinforcement
network as the main cause. Because of the contradictory results and the limited
number of materials and processing methods, a comprehensive investigation
across several materials and processes is necessary. This study covers three
polymers (polypropylene, polycarbonate, and polyphenylene sulfide), three
fabrics (twill weave, satin weave, and unidirectional), and two processes
(autoclave and hot pressing) at various fiber volume fractions.

The influence of the void content on the interlaminar shear strength was
investigated. It is known from the literature that the interlaminar shear
strength decreases linearly with increasing void content. This could be
confirmed for deconsolidation. The reduction of the interlaminar shear strength
is smaller for thermoplastic matrices than for thermoset matrices and lies in
the range of 0.5 % to 1.5 % per percent of void content. Moreover, the decrease
depends significantly on the matrix polymer.

In the case of thermally induced deconsolidation, the void content increases
proportionally to the thickness of the specimen and is a measure of the
deconsolidation. The void expands because of thermal gas expansion and can be
forced to expand by external forces, which leads to a negative pressure in the
void. The fiber reinforcement network is the main cause of the thickness
increase and hence of deconsolidation. The energy stored during compaction is
released during deconsolidation. The decompaction pressure ranges from
0.02 MPa to 0.15 MPa for the investigated fabrics and fiber volume fractions.
The surface tension hinders void expansion, because the surface has to be
enlarged, which requires additional energy. When neighboring voids come into
contact, the surface tension causes the voids to coalesce; the improved
volume-to-surface ratio releases energy. The polymer flow slows down the
development of the thickness increase because of the energy required for
viscous flow (internal friction). The higher the temperature, the lower the
viscosity of the polymer, so less energy is needed for further void growth.
Through the reversible influence of the crystallinity and the thermal expansion
of the composite, the thickness increases during heating and decreases again
during cooling. Moisture can have an enormous influence on deconsolidation: if
moisture is still present in the composite above the melting temperature, it
vaporizes and can increase the thickness to several times the original value.

The deconsolidation model is able to predict reconsolidation. However, the
reconsolidation pressure has to stay below a limit (0.15 MPa for 50 x 50 mm²
and 1.5 MPa for 500 x 500 mm² specimens), since otherwise more than 2 % of the
polymer flows out of the specimen. Reconsolidation is an inverse
deconsolidation and exhibits the same mechanisms in the opposite direction.

The developed model is based on the first law of thermodynamics and can predict
the thickness during deconsolidation and reconsolidation. A homogeneous void
distribution and a uniform, spherical void size were assumed, as well as
conservation of mass. To reduce the effort of determining the input quantities,
universally applicable input parameters were determined that hold for a large
number of configurations. The simulated material behavior with the universally
applicable input parameters showed good agreement with the actual material
behavior within the defined restrictions. Only for configurations with a
viscosity difference of more than 30 % between the melting temperature and the
processing temperature are the universal input parameters not applicable. To
demonstrate the relevance for industry, the effects of deconsolidation were
simulated for three further processes. It was shown that the load rate during
thermoforming is a key factor for complete reconsolidation. If the load is
applied too slowly or the final load is too low, the specimen has already
solidified before complete consolidation can be achieved. Deconsolidation can
also occur during induction welding. In particular, moisture can lead to a
strong increase in deconsolidation, caused by the very fast heating rates of
more than 100 K/min. The moisture cannot fully diffuse out of the polymer
during the short heating phase, so it vaporizes in the specimen when the
melting temperature is reached. In tape placement, the placement velocity is
limited by deconsolidation. After an apparently complete consolidation under
the roller, the specimen can deconsolidate locally if the polymer below the
surface is still molten. The resulting voids drastically reduce the
interlaminar shear strength, by 5.8 % per percent of void content in the
investigated case. The cause is the crystallization in the bond line, which
generates residual stresses of the same order of magnitude as the actual shear
strength.

In recent years the field of polymer tribology has experienced a tremendous
development, leading to an increased demand for highly sophisticated in-situ
measurement methods. Therefore, advanced measurement techniques were developed
and established in this study. Innovative approaches based on dynamic
thermocouple, resistive electrical conductivity, and confocal distance
measurement methods were developed in order to characterize in situ the
temperature at sliding interfaces and the real contact area, and furthermore
the thickness of transfer films. Although dynamic thermocouple and real contact
area measurement techniques had already been used in similar applications for
metallic sliding pairs, comprehensive modifications were necessary to meet the
specific demands and characteristics of polymers and composites, since they
have significantly different thermal conductivities and contact kinematics. By
using tribologically optimized PEEK compounds as a reference, a new measurement
and calculation model for the dynamic thermocouple method was set up. This
method allows the determination of hot-spot temperatures for PEEK compounds,
and it was found that they can reach up to 1000 °C when short carbon fibers are
present in the polymer. With regard to the anisotropic characteristics of the
polymer compound, the contact situation between short carbon fibers and the
steel counterbody could be successfully monitored by applying a resistive
measurement method for determining the real contact area. Temperature
compensation approaches were investigated for the determination of the transfer
film thickness, resulting in in-situ measurements with a resolution of
~0.1 μm. In addition to the successful implementation of the measurement
systems, the failure mechanisms of the PEEK compound used were clarified. For
the first time in polymer tribology, the behavior of the most relevant system
parameters could be monitored simultaneously under increasing load conditions.
The experiments showed an increasing friction coefficient, wear rate, transfer
film thickness, and overall specimen temperature when the frictional energy
exceeded the thermal transport capabilities of the specimen. In contrast, the
real contact area between short carbon fibers and steel decreased due to the
separation effect caused by the transfer film. Since the sliding contact became
more and more matrix-dominated, the hot-spot temperatures on the fibers
dropped, too. The results of this failure mechanism investigation already
demonstrate the opportunities which the new measurement techniques provide for
a deeper understanding of tribological processes, enabling improvements in
material composition and application design.

On automotive test rigs, we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time. In my thesis, I propose a test rig damage optimization problem (WSDP). This approach improves upon the test rig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the TSOP and the WSDP, we optimize the load time series for a given test rig configuration. As the name suggests, in the TSOP the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important aspect; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the TSOP, the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the WSDP.
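To make the differentiability point concrete, here is a minimal sketch of Palmgren-Miner damage accumulation for block loads, assuming a Basquin-type S-N curve. The parameters `s_ref` and `k` are illustrative, not material data from the thesis:

```python
def cycles_to_failure(amplitude, s_ref=100.0, k=5.0):
    """Basquin-type S-N curve N(S) = (S / s_ref) ** (-k); parameters
    are illustrative placeholders."""
    return (amplitude / s_ref) ** (-k)

def block_damage(blocks, s_ref=100.0, k=5.0):
    """Palmgren-Miner damage of a block-load signal.

    Each block is (amplitude, cycle_count). Because every block already
    is a set of constant-amplitude cycles, no rainflow counting is
    needed, and the damage is a smooth (polynomial) function of the
    amplitudes.
    """
    return sum(n / cycles_to_failure(s, s_ref, k) for s, n in blocks)

def block_damage_gradient(blocks, s_ref=100.0, k=5.0):
    """Analytic derivative of the damage w.r.t. each block amplitude,
    available precisely because the block structure removes the
    non-differentiable cycle counting step."""
    return [k * n * s ** (k - 1) / s_ref ** k for s, n in blocks]
```

With an irregular signal, the same damage value would require rainflow counting first, which destroys differentiability; here the gradient is a closed-form expression.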
During every iteration of the WSDP, we have to find the maximum total damage over all plane angles. The first attempt at solving the WSDP discretizes the interval of the plane angle to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions, and the parameters for the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to those given for the unimodality of GMMs by Aprausheva et al. in [1].
By using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate location of each cluster's maximum is used as the starting point of a fixed-point iteration on the original damage function to obtain the actual maximum total damage at each iteration.
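The fixed-point step described above can be sketched as a mean-shift-style iteration on a one-dimensional sum of Gaussians. This is an illustrative reconstruction under generic assumptions, not the thesis's exact formulation:

```python
import math

def gaussian_sum_max(weights, means, sigmas, x0, tol=1e-12, max_iter=1000):
    """Local maximum of f(x) = sum_i w_i * exp(-(x - m_i)**2 / (2*s_i**2))
    via the fixed-point iteration obtained by setting f'(x) = 0:

        x  <-  sum_i g_i(x) * m_i / sum_i g_i(x),
        g_i(x) = w_i * exp(-(x - m_i)**2 / (2*s_i**2)) / s_i**2

    Started near a cluster's approximate maximum, the iteration moves to
    the nearby local maximum of the full sum.
    """
    x = x0
    for _ in range(max_iter):
        g = [w * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * s)
             for w, m, s in zip(weights, means, sigmas)]
        x_new = sum(gi * m for gi, m in zip(g, means)) / sum(g)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For two equal Gaussians centered at 0 and 0.5 with unit width, the sum is unimodal and the iteration converges to the symmetric maximum at 0.25.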
We implement the method for the TSOP and the two methods (with discretization and with clustering) for the WSDP on two example problems. The results obtained from the WSDP using discretization are shown to be better than those obtained from the TSOP. Furthermore, we show that the WSDP with the clustering approach for finding the maximum total damage requires fewer iterations and is more reliable than discretization.

Embedded systems, ranging from very simple systems up to complex controllers, may
nowadays have quite challenging real-time requirements. Many embedded systems are
reactive systems that have to respond to environmental events and have to guarantee
certain real-time constraints. Their execution is usually divided into reaction steps,
where in each step the system reads inputs from the environment and reacts by
computing corresponding outputs.
The synchronous Model of Computation (MoC) has proven to be well suited for the
development of reactive real-time embedded systems, since its paradigm directly
reflects the reactive nature of the systems it describes. Another advantage is the
availability of formal verification by model checking, a consequence of the
deterministic execution based on a formal semantics. Nevertheless, the increasing
complexity of embedded systems requires compensating for the natural disadvantage
of model checking, the well-known state-space explosion problem. It is therefore
natural to try to integrate other verification methods with the already established
techniques. Hence, improvements are required to counter these problems, e.g.,
appropriate decomposition techniques, which naturally counter the disadvantages of
the model checking approach. But defining decomposition techniques for synchronous
languages is a difficult task, as a result of the inherent parallelism emerging from
the synchronous broadcast communication.
Inspired by the progress in the field of desynchronization of synchronous systems by
representing them in other MoCs, this work investigates the possibility of adapting
and using methods and tools designed for other MoCs for the verification of systems
represented in the synchronous MoC. To this end, this work introduces the interactive
verification of synchronous systems based on the basic foundation of formal
verification for sequential programs, the Hoare calculus. Due to the different models
of computation, several problems have to be solved. In particular, due to the large
amount of concurrency, several parts of the program are active at the same point of
time. In contrast to sequential programs, a decomposition in the Hoare-logic style,
which is in some sense a symbolic execution from one control-flow location to another,
requires the consideration of several flows here. Therefore, different approaches for
the interactive verification of synchronous systems are presented. Additionally, the
representation of synchronous systems by other MoCs, and the influence of this
representation on the verification task when synchronous systems are embedded
differently in a single verification tool, are elaborated.
The feasibility is shown by integrating the presented approach with the established
model checking methods, implementing the AIFProver on top of the Averest system.
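As a toy illustration of the Hoare-triple reasoning mentioned above, the following sketch checks a triple {P} S {Q} by brute-force enumeration over a tiny state space. All names and the example program are invented for illustration; the thesis works deductively on synchronous programs, not by enumeration:

```python
def hoare_valid(pre, prog, post, states):
    """A triple {pre} prog {post} holds iff every initial state that
    satisfies the precondition is mapped by the program to a state
    satisfying the postcondition. Brute force over a finite state
    space; a toy stand-in for deductive Hoare-calculus reasoning."""
    return all(post(prog(dict(s))) for s in states if pre(s))

# Program: x := y + 1, over a small range of values for y.
states = [{"y": y} for y in range(-5, 6)]
prog = lambda s: {**s, "x": s["y"] + 1}

# {y >= 0} x := y + 1 {x > 0} is valid ...
ok = hoare_valid(lambda s: s["y"] >= 0, prog, lambda s: s["x"] > 0, states)
# ... while {true} x := y + 1 {x > 0} is not (counterexample: y = -1).
bad = hoare_valid(lambda s: True, prog, lambda s: s["x"] > 0, states)
```

For synchronous programs the program function would map whole variable environments across a reaction step, with several concurrent flows contributing to it, which is exactly what makes the decomposition non-trivial.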

Test rig optimization
(2014)

Designing good test rigs for fatigue life tests is a common task in the
automotive industry. The problem of finding an optimal test rig configuration
and actuator load signals can be formulated as a mathematical program. We
introduce a new optimization model that includes multi-criteria, discrete, and
continuous aspects. At the same time we manage to avoid the necessity of
dealing with the rainflow counting (RFC) method. RFC is an algorithm which
extracts load cycles from an irregular time signal. As a mathematical function
it is non-convex and non-differentiable and hence makes optimization of the
test rig intractable.

The block structure of the load signals is assumed from the beginning. This
greatly reduces the complexity of the problem without shrinking the feasible
set. Also, we optimize with respect to the actuators' positions, which makes it
possible to take torques into account and thus extends the feasible set. As a
result, the new model gives significantly better results compared with the
other approaches to test rig optimization.

Under certain conditions, the non-convex test rig problem is a union of convex
problems on cones. Numerical methods for optimization usually need constraints
and a starting point. We describe an algorithm that detects each cone and an
interior point of it in polynomial time.

The test rig problem belongs to the class of bilevel programs. For every
instance of the state vector, a sum of functions has to be maximized. We
propose a new branch-and-bound technique that uses local maxima of every
summand.

In this thesis, we combine Groebner bases with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses;
combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact input for the Groebner basis computations, we can significantly
reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition,
the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal directly is inefficient when we want to eliminate most of the variables from a big system of Boolean polynomials.
Therefore, we propose a more efficient approach to handle such cases.
In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection.
We also optimize the Buchberger-Moeller algorithm for the lexicographic ordering and compare it with Brickenstein's interpolation algorithm.
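The projection step can be illustrated with a brute-force stand-in for the all-solution SAT solver. The enumeration below is illustrative only (a real solver enumerates models far more cleverly), and the example ideal is invented:

```python
from itertools import product

def solutions(polys, nvars):
    """All 0/1 assignments on which every Boolean polynomial vanishes
    mod 2. Brute force; a stand-in for running an all-solution SAT
    solver on the CNF translation of the ideal."""
    return [a for a in product((0, 1), repeat=nvars)
            if all(p(*a) % 2 == 0 for p in polys)]

def project(sols, keep):
    """Projection of the solution set onto the variables indexed by
    `keep`, i.e. the point set to which a reduced Groebner basis is
    then associated (e.g. by the Buchberger-Moeller algorithm)."""
    return sorted({tuple(s[i] for i in keep) for s in sols})

# Example ideal over F_2: x0 + x1 and x1*x2 + x2
# (i.e. x0 = x1 and x2 * (x1 + 1) = 0).
polys = [lambda a, b, c: a + b, lambda a, b, c: b * c + c]
sols = solutions(polys, 3)
```

Eliminating x0 then means working only with the projected points rather than computing a full Groebner basis of the original system.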
Finally, we apply Groebner bases and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographic monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify a global property of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from OneSpin on these benchmarks.

‘Dioxin-like’ (DL) compounds occur ubiquitously in the environment. Toxic responses associated with specific polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and biphenyls (PCBs) include dermal toxicity, immunotoxicity, liver toxicity, and carcinogenicity, as well as adverse effects on reproduction, development, and endocrine functions. Most, if not all, of these effects are believed to be due to the interaction of these compounds with the aryl hydrocarbon receptor (AhR).
With tetrachlorodibenzo-p-dioxin (TCDD) as the representative and most potent congener, a toxic equivalency factor (TEF) concept was employed, in which each congener is assigned a TEF value reflecting the compound’s toxicity relative to that of TCDD.
The EU-project ‘SYSTEQ’ aimed to develop, validate, and implement human systemic TEFs as indicators of toxicity for DL-congeners. Hence, the identification of novel quantifiable biomarkers of exposure was a major objective of the SYSTEQ project.
To approach this objective, a mouse whole-genome microarray analysis was applied using a set of seven individual congeners, termed the ‘core congeners’. These core congeners (TCDD, 1-PeCDD, 4-PeCDF, PCB 126, PCB 118, PCB 156, and the non-dioxin-like PCB 153), which contribute approximately 90% of the toxic equivalents (TEQs) in the human food chain, were further tested in vivo as well as in vitro. The mouse whole-genome microarray revealed a conserved list of differentially regulated genes and pathways associated with ‘dioxin-like’ effects.
A definitive data set of in vitro studies was intended to serve as a foundation for the possible establishment of novel TEFs. Thus, CYP1A induction measured by EROD activity, a sensitive and well-established marker of dioxin-like effects, was used to estimate the potency and efficacy of selected congeners. For this study, primary rat hepatocytes and the rat hepatoma cell line H4IIE were used, together with the core congeners and an additional group of compounds of comparable environmental relevance: 1,6-HxCDD, 1,4,6-HpCDD, TCDF, 1,4-HxCDF, 1,4,6-HpCDF, PCB 77, and PCB 105.
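The arithmetic behind such potency estimates and the TEF concept can be sketched briefly. All numbers below are illustrative placeholders, not measured values from this study:

```python
def relative_potency(ec50_reference, ec50_congener):
    """Relative effect potency (REP) from dose-response data such as
    EROD EC50 values: the reference (TCDD) EC50 divided by the
    congener's EC50, so TCDD itself has REP = 1 and less potent
    congeners have REP < 1."""
    return ec50_reference / ec50_congener

def toxic_equivalents(mixture):
    """TEQ of a mixture under the TEF concept: the sum over congeners
    of concentration times TEF."""
    return sum(conc * tef for conc, tef in mixture)

# A congener needing a 100-fold higher dose than TCDD for the same
# EROD response gets REP = 0.01 (illustrative numbers).
rep = relative_potency(0.01, 1.0)
```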
In addition, a human whole-genome microarray experiment was performed in order to gain knowledge of TCDD’s impact on cells of the immune system. To this end, human peripheral blood mononuclear cells (PBMCs) were isolated from individuals and exposed to TCDD, or to TCDD in combination with a stimulus (lipopolysaccharide (LPS) or phytohemagglutinin (PHA)). A few members of the AhR gene battery were found to be regulated, and only limited evidence of potential TCDD-mediated immunomodulatory effects was obtained. Moreover, the data in this regard were limited owing to large inter-individual differences.

In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified, and solutions are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
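The tiled quadtree hierarchy rests on the HEALPix NESTED indexing property: refining doubles nside and splits pixel p into the four pixels 4p to 4p+3. A minimal sketch of the resulting tile addressing (the function names are illustrative, not from the thesis implementation):

```python
def children(pix):
    """Children of a tile in the HEALPix NESTED scheme: when nside is
    doubled, pixel p splits into the four pixels 4p .. 4p+3, which is
    what yields a quadtree hierarchy of tiles over the sphere."""
    return [4 * pix + i for i in range(4)]

def parent(pix):
    """The tile one resolution level coarser."""
    return pix // 4

def ancestor(pix, levels_up):
    """Ancestor tile several levels coarser, e.g. for selecting a lower
    level of detail at high viewer altitude."""
    return pix >> (2 * levels_up)
```

Because the child indices are a simple arithmetic function of the parent index, tiles can be streamed and cached by index without any explicit tree data structure on disk.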
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
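The essence of the deformation, stripped of the GPU implementation, is a rigid translation of all terrain vertices on one side of the user-traced fault line. A minimal map-plane sketch (the straight-line fault trace and the data layout are simplifying assumptions):

```python
def deform_across_fault(vertices, fault_a, fault_b, offset):
    """Rigidly translate all vertices lying on the left side of the
    fault trace (segment a->b, in map coordinates) by `offset`,
    leaving the right-hand block fixed."""
    ax, ay = fault_a
    bx, by = fault_b
    moved = []
    for (x, y) in vertices:
        # sign of the 2D cross product decides the side of the trace
        side = (bx - ax) * (y - ay) - (by - ay) * (x - ax)
        if side > 0.0:  # left block: apply the trial translation
            moved.append((x + offset[0], y + offset[1]))
        else:
            moved.append((x, y))
    return moved
```

On the graphics card the same side test and translation run per vertex in a shader, which is what keeps the deformation interactive while the user varies the offset.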
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research is coring experiments, which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is, however, a challenging inverse numerical problem.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment with coring data being presented using a billboard rendering technique while the
time-variant flow field is indicated using Line-Integral-Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection is supported.
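The real-time advection amounts to integrating injected particles through the time-variant velocity field; a minimal explicit-Euler sketch (the field interface and step control are assumptions, and a real implementation would use a higher-order integrator):

```python
def advect(p, velocity, t0, t1, dt):
    """Advance particle p through a time-varying 2D flow field from
    t0 to t1 using explicit Euler steps; velocity(x, y, t) returns
    the local flow vector (u, v)."""
    x, y = p
    t = t0
    while t < t1:
        h = min(dt, t1 - t)       # clamp the final step to land on t1
        u, v = velocity(x, y, t)
        x += h * u
        y += h * v
        t += h
    return x, y
```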

We consider two major topics in this thesis: a spatial domain partitioning method, and a solver for creep flows in representative volume elements for which the partitioning serves as the framework.
First, we introduce a novel multi-dimensional space partitioning method. The new type of tree combines the advantages of the octree and the KD-tree without inheriting their disadvantages. We present a data structure that allows local refinement, parallelization and a proper restriction of transition ratios between nodes. Our technique has no dimensional restrictions at all. The tree's data structure is defined by a topological algebra based on the symbols \( A = \{ L, I, R \} \) that encode the partitioning steps. The set of successors is restricted such that each node has the partition-of-unity property, so that domains are partitioned without overlap. With our method it is possible to construct a wide choice of spline spaces to compress or reconstruct scientific data such as pressure and velocity fields and multi-dimensional images. We present a generator function that builds a tree representing a given voxel geometry. The space partitioning system is then used as a framework for numerical computations. This work is motivated by the problem of representing, in a numerically appropriate way, huge three-dimensional voxel geometries that can comprise billions of voxels. Such large datasets occur whenever large representative volume elements (REVs) must be handled.
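The generator-function idea — refine a node only where the voxel geometry is inhomogeneous — can be illustrated with a toy 2D recursion. This sketch uses a plain quadtree split for readability; the thesis's tree instead encodes its partitioning steps with the symbol algebra \( \{ L, I, R \} \) and is not restricted in dimension:

```python
def build_tree(grid, x0, y0, size, max_depth):
    """Recursively partition a square region of a boolean voxel grid:
    a node becomes a leaf when its region is homogeneous (all solid or
    all void) or the depth budget is exhausted; otherwise it splits."""
    cells = [grid[y][x] for y in range(y0, y0 + size)
                        for x in range(x0, x0 + size)]
    if all(cells) or not any(cells) or max_depth == 0 or size == 1:
        return {'leaf': True, 'solid': cells[0]}
    h = size // 2
    return {'leaf': False, 'children': [
        build_tree(grid, x0,     y0,     h, max_depth - 1),
        build_tree(grid, x0 + h, y0,     h, max_depth - 1),
        build_tree(grid, x0,     y0 + h, h, max_depth - 1),
        build_tree(grid, x0 + h, y0 + h, h, max_depth - 1),
    ]}
```

Large homogeneous pore or solid regions collapse into single leaves, which is what makes billion-voxel REVs tractable in memory.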
Second, we introduce a novel arrangement of the pressure and velocity variables for solving the Stokes equations. The basic idea of our method is to arrange the variables such that each cell can satisfy a given physical law independently of its neighbor cells. This is achieved by splitting each velocity value into a left- and a right-converging component. For each cell we can then set up a small linear system describing the momentum and mass conservation equations. This formulation allows us to use the Gauß-Seidel algorithm to solve the global linear system. Our tree structure provides the spatial partitioning of the geometry as well as a proper initial guess. In addition, we introduce a method that uses the current velocity field to refine the tree and improve the numerical accuracy where it is needed. We developed a novel approach rather than using existing ones such as the SIMPLE algorithm, Lattice-Boltzmann methods or explicit jump methods, since those are suited to regular grid structures. Other standard CFD approaches extract surfaces and create tetrahedral meshes to solve on unstructured grids, and thus cannot be applied to our data structure. The discretization converges to the analytical solution under grid refinement. We observe clear advantages in computational time and memory for high-porosity geometries, and in memory requirements for low-porosity geometries.
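The Gauß-Seidel iteration referred to above updates each unknown in place from the most recent values of its neighbours; a generic dense-matrix sketch (the thesis assembles and sweeps the small cell-local systems instead):

```python
def gauss_seidel(A, b, iterations=50):
    """Plain Gauss-Seidel sweeps for A x = b: each component x[i] is
    re-solved from row i using the newest values of the other
    components, so updates propagate within a single sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The quality of the initial guess matters for the sweep count, which is why the tree-based partitioning doubling as an initial-guess provider is a natural fit.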

Multilevel Constructions
(2014)

The thesis consists of two chapters.
The first chapter presents a deep investigation of the multilevel Monte Carlo (MLMC) method. In particular, we take an optimisation view of the estimate. Rather than fixing the numbers of discretisation points \(n_i\) to be a geometric sequence, we seek an optimal choice of the \(n_i\) such that, for a fixed error, the estimate can be computed in minimal time.
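For reference, the telescoping structure of the MLMC estimate — whose per-level sample and discretisation budgets are the quantities being optimised — can be sketched as follows (the sampler interface is an assumption; it must return a coupled pair of fine and coarse payoffs):

```python
def mlmc(max_level, n_samples, coupled_sampler):
    """Telescoping multilevel Monte Carlo estimate of E[P_L]:
    level 0 averages the coarse payoff, each finer level averages the
    coupled correction P_l - P_{l-1}.  n_samples[l] gives the number
    of samples spent on level l."""
    estimate = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(n_samples[level]):
            fine, coarse = coupled_sampler(level)
            acc += fine if level == 0 else fine - coarse
        estimate += acc / n_samples[level]
    return estimate
```

Because the coupled corrections have small variance on fine levels, most samples can be placed on cheap coarse levels, which is the source of the method's cost advantage.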
In the second chapter we propose to enhance the MLMC estimate with the weak extrapolation technique. This technique improves the order of weak convergence of a scheme and, as a result, reduces the computational cost of the estimate. In particular, we study the high-order weak extrapolation approach, which is known to be inefficient in the standard setting. However, the combination of MLMC and weak extrapolation yields an improvement over plain MLMC.
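The mechanism behind (first-order) weak extrapolation can be seen already on a deterministic Euler scheme, where a Richardson-style combination of two step sizes cancels the leading error term; this is a minimal sketch, not the high-order construction studied in the chapter:

```python
def euler(h):
    """Euler approximation of y(1) for y' = y, y(0) = 1 (exact: e)."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * y
    return y

def extrapolated(h):
    """Richardson combination 2*A(h/2) - A(h): cancels the O(h) error
    term and leaves O(h^2) -- the same mechanism that raises the weak
    order of a discretisation scheme inside an MLMC estimator."""
    return 2.0 * euler(h / 2) - euler(h)
```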