## Doctoral Thesis

### Refine

#### Faculty / Organisational entity

- Fachbereich Chemie (229)
- Fachbereich Mathematik (169)
- Fachbereich Maschinenbau und Verfahrenstechnik (112)
- Fachbereich Biologie (82)
- Fachbereich Informatik (73)
- Fachbereich ARUBI (68)
- Fachbereich Elektrotechnik und Informationstechnik (48)
- Fachbereich Bauingenieurwesen (18)
- Fachbereich Sozialwissenschaften (18)
- Fachbereich Physik (17)

#### Document Type

- Doctoral Thesis (846)

#### Keywords

- Visualisierung (12)
- Phasengleichgewicht (10)
- Modellierung (9)
- Simulation (8)
- Apoptosis (7)
- Katalyse (7)
- Flüssig-Flüssig-Extraktion (6)
- Mobilfunk (6)
- Polyphenole (6)

- Interactive Verification of Synchronous Systems (2014)
- Embedded systems, ranging from very simple systems up to complex controllers, may nowadays have quite challenging real-time requirements. Many embedded systems are reactive systems that have to respond to environmental events and have to guarantee certain real-time constraints. Their execution is usually divided into reaction steps, where in each step the system reads inputs from the environment and reacts to these by computing corresponding outputs. The synchronous Model of Computation (MoC) has proven to be well-suited for the development of reactive real-time embedded systems, as its paradigm directly reflects the reactive nature of the systems it describes. Another advantage is the availability of formal verification by model checking, a result of the deterministic execution based on a formal semantics. Nevertheless, the increasing complexity of embedded systems requires compensating for the natural disadvantages of model checking, which suffers from the well-known state-space explosion problem. It is therefore natural to try to integrate other verification methods with the already established techniques, e.g., appropriate decomposition techniques that naturally counter the disadvantages of the model checking approach. But defining decomposition techniques for synchronous languages is a difficult task, as a result of the inherent parallelism emerging from the synchronous broadcast communication. Inspired by the progress in the field of desynchronization of synchronous systems by representing them in other MoCs, this work investigates the possibility of adapting and using methods and tools designed for other MoCs for the verification of systems represented in the synchronous MoC. To this end, this work introduces the interactive verification of synchronous systems based on the basic foundation of formal verification for sequential programs: the Hoare calculus.
Due to the different models of computation, several problems have to be solved. In particular, owing to the large amount of concurrency, several parts of the program are active at the same point in time. In contrast to sequential programs, a decomposition in the Hoare-logic style, which is in some sense a symbolic execution from one control-flow location to another, must therefore consider several flows. Different approaches for the interactive verification of synchronous systems are presented. Additionally, the representation of synchronous systems by other MoCs, and the influence of the representation on the verification task when synchronous systems are embedded differently in a single verification tool, are elaborated. Feasibility is shown by integrating the presented approach with the established model checking methods, implementing the AIFProver on top of the Averest system.
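As a minimal illustration of the Hoare-calculus foundation mentioned above (sequential assignments only, nothing synchronous), the weakest precondition of an assignment is obtained by substituting the assigned expression into the postcondition. The following toy sketch checks a small Hoare triple by brute force over a finite integer domain; all names and the checking strategy are illustrative, not taken from the thesis:

```python
import re
from itertools import product

def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr` w.r.t. post: post[var := expr]."""
    return re.sub(rf"\b{var}\b", f"({expr})", post)

def implies(p, q, names, domain=range(-5, 6)):
    """Brute-force check that p implies q over a small integer domain."""
    for values in product(domain, repeat=len(names)):
        env = dict(zip(names, values))
        if eval(p, {}, env) and not eval(q, {}, env):
            return False
    return True

# Verify the Hoare triple {x > 0} y := x + 1 {y > 1}:
pre, post = "x > 0", "y > 1"
needed = wp_assign("y", "x + 1", post)   # substitution yields "(x + 1) > 1"
print(needed, implies(pre, needed, ["x", "y"]))
```

The triple is valid exactly when the precondition implies the computed weakest precondition; for concurrent synchronous programs the thesis has to consider several control flows at once instead of this single substitution step.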

- Test rig optimization (2014)
- Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete, and continuous aspects. At the same time we manage to avoid the necessity of dealing with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and hence makes optimization of the test rig intractable. The block structure of the load signals is assumed from the beginning. This greatly reduces the complexity of the problem without decreasing the feasible set. We also optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extend the feasible set. As a result, the new model gives significantly better results compared with other approaches to test rig optimization. Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and an interior point of it in polynomial time. The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, the sum of functions has to be maximized. We propose a new branch-and-bound technique that uses local maxima of every summand.
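The damage-accumulation idea behind fatigue tests with block-structured loads can be sketched with the standard Palmgren-Miner rule over a Basquin-type S-N curve. The constants and load blocks below are purely illustrative and not taken from the thesis:

```python
def basquin_cycles_to_failure(amplitude, C=1e12, k=5.0):
    """Basquin-type S-N curve: N(amplitude) = C * amplitude**(-k).
    C and k are illustrative material constants, not thesis data."""
    return C * amplitude ** (-k)

def miner_damage(blocks):
    """Palmgren-Miner damage sum for block loads [(amplitude, cycles), ...].
    Failure is predicted when the accumulated damage reaches 1."""
    return sum(n / basquin_cycles_to_failure(a) for a, n in blocks)

# Three hypothetical load blocks: (stress amplitude, applied cycles)
blocks = [(50.0, 1000), (80.0, 200), (120.0, 10)]
print(miner_damage(blocks))
```

Because each block contributes a smooth closed-form term, a block-structured signal sidesteps the non-convex, non-differentiable rainflow-counting step that makes optimization over irregular signals intractable.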

- Private-by-Design Advertising and Analytics: From Theory to Practice (2015)
- There are a number of designs for an online advertising system that allow for behavioral targeting without revealing user online behavior or user interest profiles to the ad network. Although these designs purport to be practical solutions, none of them adequately consider the role of ad auctions, which today are central to the operation of online advertising systems. Moreover, none of the proposed designs have been deployed in real-life settings. In this thesis, we present an effort to fill this gap. First, we address the challenge of running ad auctions that leverage user profiles while keeping the profile information private. We define the problem, broadly explore the solution space, and discuss the pros and cons of these solutions. We analyze the performance of our solutions using data from Microsoft Bing advertising auctions. We conclude that, while none of our auctions are ideal in all respects, they are adequate and practical solutions. Second, we build and evaluate a fully functional prototype of a practical privacy-preserving ad system at a reasonably large scale. With more than 13K opted-in users, our system was in operation for over two months serving an average of 4800 active users daily. During the last month alone, we registered 790K ad views, 417 clicks, and even a small number of product purchases. Our system obtained click-through rates comparable with those for Google display ads. In addition, our prototype is equipped with a differentially private analytics mechanism, which we used as the primary means for gathering experimental data. In this thesis, we describe our first-hand experience and lessons learned in running the world's first fully operational “private-by-design” behavioral advertising and analytics system.
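The differentially private analytics mechanism mentioned above can be illustrated by the classical Laplace mechanism for releasing a noisy count. The sketch below is a generic textbook construction, not the thesis's actual mechanism, and the epsilon value is illustrative:

```python
import math
import random

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated for epsilon-DP,
    assuming one user changes the count by at most `sensitivity`.
    Noise is sampled via the inverse-CDF of Laplace(0, sensitivity/epsilon)."""
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# e.g. releasing a click count (such as the 417 clicks above) with epsilon = 0.5
print(round(laplace_mechanism(417, epsilon=0.5)))
```

Smaller epsilon means stronger privacy but noisier counts; aggregates like click-through rates remain useful because the noise averages out over many queries.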

- Ein universelles und dynamisch rekonfigurierbares Interface für eingebettete und intelligente Multi-Sensor-Systeme mit Self-x Eigenschaften (2015)
- Since the advent of semiconductor technology there has been a trend toward the miniaturization of electronic systems. This, together with rising requirements and the increasing integration of various sensors for interaction with the environment, makes embedded systems such as those found in mobile devices or vehicles increasingly complex. The consequences are longer development times and an ever-growing component count, while reductions in size and energy consumption are demanded at the same time. The design of multi-sensor systems in particular requires dedicated sensor electronics for each sensor type used, and thus conflicts with the demands for miniaturization and low power consumption. This research work addresses the problem described above and discusses the development of a universal sensor interface for precisely such multi-sensor systems. As a single integrated component, this interface can serve as the sensor electronics for up to nine different sensors of various types. The measurable quantities include voltage, current, resistance, capacitance, inductance, and impedance. Dynamic reconfigurability and application-specific programming allow a variable configuration according to the respective requirements. Both the development effort and the component count can be reduced considerably thanks to this interface, which also includes a power-saving mode. The flexible structure enables the construction of intelligent systems with so-called self-x characteristics. These concern capabilities for autonomous system monitoring, calibration, or repair, and thus contribute to increased robustness and fault tolerance. As a further innovation, the universal interface contains novel circuit and sensor concepts, for example for measuring the chip temperature or compensating thermal influences on the sensors.
Two different applications demonstrate the functionality of the fabricated prototypes. The realized applications concern food analysis and three-dimensional magnetic localization.

- Konvektiver Feuchtetransport durch Leckagen in Holzleichtbaukonstruktionen mit permeablen Dämmstoffen (2015)
- Lightweight timber constructions are particularly susceptible to perforations of the room-side layers. When permeable insulation materials are used together with a code-compliant, air-permeable construction of the outward-facing layers, such perforations can lead to leakage flows. The present work deals with the convective moisture entry through individual leakages in the cavity of lightweight timber constructions caused by such leakage flows. The potential for flow through leakages is determined by the pressure difference between the interior and the surroundings. The first part of the work therefore analyzes and evaluates pressure-difference measurements carried out over five heating periods on different buildings in different regions of Germany. It is shown that the frequencies of the occurring pressure differences are skew-symmetrically distributed. Both buoyancy and wind flow influence the pressure difference at the building envelope. The direction of the wind acting on the building envelope, the regional location, the degree of surrounding development, the airtightness of the building as a whole, and the distribution of leakages within it are further influencing factors. The second part presents the flow behavior of individual leakages in lightweight timber constructions, investigated by means of a patented measurement system and method. The analyses show that the volume flow and the discharge coefficient through room-side layers are influenced by, among other things, the shape, edge condition, and size of the leakage. It is further demonstrated that the properties of the perforation(s) in the layers arranged outward of the insulation play a subordinate role. In addition, it is shown that the air flow widens conically in the mineral fiber insulation behind the room-side leakage. This propagation behavior can be calculated from the smallest insulation volume through which flow is possible.
A decisive influencing variable for this is the permeability of the insulation material under realistic pressure differences. The practice-oriented analytical calculation model for convective moisture entry constitutes the final part of the work. Using this model, the volume flow through a leakage in the cavity of a lightweight timber construction can be calculated with an average accuracy of ±12 %. A quasi-stationary calculation procedure is then used to compute the hygric envelope infiltration through a single leakage, for one heating period, under pressure differences realistically acting on a building envelope. The plausibility of the results is verified against an investigated damage case. The calculation results make it possible to assess the damage potential of convective moisture entry.

- On the Extended Finite Element Method for the Elasto-Plastic Deformation of Heterogeneous Materials (2015)
- This thesis is concerned with the extended finite element method (XFEM) for the deformation analysis of three-dimensional heterogeneous materials. Using the "enhanced abs enrichment", the XFEM is able to reproduce kinks in the displacements, and thereby jumps in the strains, within elements of the underlying tetrahedral finite element mesh. A complex model for the microstructure reconstruction of the aluminum matrix composite AMC225xe and the modeling of its macroscopic thermo-mechanical plastic deformation behavior is presented using the XFEM. Additionally, a novel stabilization algorithm is introduced for the XFEM. This algorithm requires preprocessing only.
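The kind of enrichment referred to above augments the standard finite element approximation with a ridge function of the level set describing the material interface. One common form from the literature on modified abs enrichment (the notation is illustrative; the thesis's exact variant may differ) is

```latex
u_h(\mathbf{x}) \;=\; \sum_{i \in I} N_i(\mathbf{x})\, \mathbf{u}_i
\;+\; \sum_{j \in J} N_j(\mathbf{x})\, \psi(\mathbf{x})\, \mathbf{a}_j,
\qquad
\psi(\mathbf{x}) \;=\; \sum_{k} N_k(\mathbf{x})\,\lvert \varphi_k \rvert
\;-\; \Bigl\lvert \sum_{k} N_k(\mathbf{x})\, \varphi_k \Bigr\rvert ,
```

where the \(N_i\) are the standard shape functions and \(\varphi_k\) are nodal values of the level set. The ridge function \(\psi\) vanishes at the nodes and has a kink across the interface \(\varphi = 0\), which confines the enrichment to cut elements and produces the displacement kinks and strain jumps described above.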

- Combinations of Boolean Groebner Bases and SAT Solvers (2014)
- In this thesis, we combine Groebner bases with SAT solvers in different ways. Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses, and combining them can compensate for these weaknesses. The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin. However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses. By selecting smaller and more compact input for the Groebner basis computations, we can significantly reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition, the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems. The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal is inefficient when we want to eliminate most of the variables from a big system of Boolean polynomials. Therefore, we propose a more efficient approach to handle such cases. In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection. We also optimize the Buchberger-Moeller algorithm for lexicographical orderings and compare it with Brickenstein's interpolation algorithm. Finally, we apply a combination of Groebner bases and abstraction techniques to the verification of digital designs that contain complicated data paths. For a given design, we construct an abstract model. Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering. Finally, the normal form is employed to prove the desired properties. To evaluate our approach, we verify a global property of a multiplier and of an FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.
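The projection step in the second combination can be illustrated at toy scale: enumerate all GF(2) solutions of a Boolean polynomial system and project them onto a subset of the variables, which is the solution set that a Groebner basis of the corresponding elimination ideal describes. The brute-force enumeration below stands in for the all-solution SAT solver and is not the thesis's implementation:

```python
from itertools import product

def solutions(polys, nvars):
    """All GF(2) solutions of Boolean polynomials given as Python functions.
    A point is a solution when every polynomial evaluates to 0 mod 2."""
    return [p for p in product((0, 1), repeat=nvars)
            if all(f(*p) % 2 == 0 for f in polys)]

# Boolean polynomials over GF(2), each encoding the constraint f = 0:
#   x*y + z      encodes  z = x AND y
#   x + y + 1    encodes  x = NOT y
polys = [lambda x, y, z: x * y + z,
         lambda x, y, z: x + y + 1]

sols = solutions(polys, 3)
projection = sorted({(x, y) for x, y, z in sols})  # eliminate z
print(sols, projection)
```

An interpolation algorithm such as Buchberger-Moeller would then recover, from the projected points, the reduced Groebner basis of the elimination ideal in the remaining variables.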

- Impact of 'Dioxins' on Gene Expression in Mouse Liver in vivo, and in both Rat Liver Cells and Human Blood Cells In Culture (2014)
- ‘Dioxin-like’ (DL) compounds occur ubiquitously in the environment. Toxic responses associated with specific polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs) include dermal toxicity, immunotoxicity, liver toxicity, and carcinogenicity, as well as adverse effects on reproduction, development, and endocrine functions. Most, if not all, of these effects are believed to be due to the interaction of these compounds with the aryl hydrocarbon receptor (AhR). With tetrachlorodibenzo-p-dioxin (TCDD) as the most potent reference congener, a toxic equivalency factor (TEF) concept was employed in which each congener is assigned a TEF value reflecting its toxicity relative to that of TCDD. The EU project ‘SYSTEQ’ aimed to develop, validate, and implement human systemic TEFs as indicators of toxicity for DL congeners. Hence, the identification of novel quantifiable biomarkers of exposure was a major objective of the SYSTEQ project. To approach this objective, a mouse whole-genome microarray analysis was performed using a set of seven individual congeners, termed the ‘core congeners’. These core congeners (TCDD, 1-PeCDD, 4-PeCDF, PCB 126, PCB 118, PCB 156, and the non-dioxin-like PCB 153), which contribute approximately 90% of the toxic equivalents (TEQs) in the human food chain, were further tested in vivo as well as in vitro. The mouse whole-genome microarray revealed a conserved list of differentially regulated genes and pathways associated with ‘dioxin-like’ effects. A definitive in vitro data set was intended to serve as a foundation for the possible establishment of novel TEFs. Thus, CYP1A induction measured by EROD activity, a sensitive and the best-known marker for dioxin-like effects, was used to estimate the potency and efficacy of selected congeners.
For this study, primary rat hepatocytes and the rat hepatoma cell line H4IIE were used, together with the core congeners and an additional group of compounds of comparable environmental relevance: 1,6-HxCDD, 1,4,6-HpCDD, TCDF, 1,4-HxCDF, 1,4,6-HpCDF, PCB 77, and PCB 105. In addition, a human whole-genome microarray experiment was performed in order to gain knowledge of TCDD’s impact on cells of the immune system. Hence, human peripheral blood mononuclear cells (PBMCs) were isolated from individuals and exposed to TCDD, or to TCDD in combination with a stimulus (lipopolysaccharide (LPS) or phytohemagglutinin (PHA)). A few members of the AhR gene battery were found to be regulated, providing some evidence of potential TCDD-mediated immunomodulatory effects. Still, the data obtained in this regard were limited owing to large inter-individual differences.

- Virtual Reality Methods for Research in the Geosciences (2014)
- In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions for these are provided. To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars. Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database. To address this problem, I present an on-the-fly pre-processing system for cartographic data.
The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend. Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements. I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering. Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. 
The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered. To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters. Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however. To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. 
Earth is visualized as a globe in the VR environment with coring data being presented using a billboard rendering technique while the time-variant flow field is indicated using Line-Integral-Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection is supported.
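The tiled quadtree addressing underlying such a terrain database can be sketched as follows: within one HEALPix base pixel, each refinement level splits a tile into four children. The toy path computation below (illustrative, not the thesis's implementation) finds the quadtree path of the tile containing a point given in normalized base-pixel coordinates:

```python
def tile_path(x, y, levels):
    """Quadtree path (one child index 0..3 per level) of the level-`levels`
    tile containing normalized coordinates (x, y) in [0, 1)^2 of one
    HEALPix base pixel."""
    path = []
    for _ in range(levels):
        x, y = x * 2, y * 2          # zoom into the 2x2 children
        cx, cy = int(x), int(y)      # which child holds the point
        path.append(cy * 2 + cx)     # encode child as 0..3
        x, y = x - cx, y - cy        # recurse into that child's coordinates
    return path

print(tile_path(0.7, 0.2, 3))
```

Such paths make natural tile identifiers: the on-demand backend only has to produce tiles whose paths the frontend actually requests, and measurement tools can descend the same hierarchy to the finest available level.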

- The LIR Space Partitioning System applied to the Stokes Equations (2014)
- We consider two major topics in this thesis: a novel spatial domain partitioning system, and its use as a framework to simulate creep flows in representative volume elements. First, we introduce a novel multi-dimensional space partitioning method. A new type of tree combines the advantages of the Octree and the KD-tree without having their disadvantages. We present a new data structure allowing local refinement, parallelization, and proper restriction of transition ratios between nodes. Our technique has no dimensional restrictions at all. The tree's data structure is defined by a topological algebra based on the symbols \( A = \{ L, I, R \} \) that encode the partitioning steps. The set of successors is restricted such that each node has the partition-of-unity property, partitioning domains without overlap. With our method it is possible to construct a wide choice of spline spaces to compress or reconstruct scientific data such as pressure and velocity fields and multidimensional images. We present a generator function to build a tree that represents a voxel geometry. The space partitioning system is used as a framework for numerical computations. This work is triggered by the problem of representing, in a numerically appropriate way, huge three-dimensional voxel geometries that can have up to billions of voxels. These large datasets occur in situations where large representative volume elements (REVs) have to be handled. Second, we introduce a novel approach to the arrangement of pressure and velocity variables for solving the Stokes equations. The basic idea of our method is to arrange variables in such a way that each cell is able to satisfy a given physical law independently of its neighbor cells. This is done by splitting velocity values into a left- and a right-converging component. For each cell we can set up a small linear system that describes the momentum and mass conservation equations.
This formulation makes it possible to use the Gauß-Seidel algorithm to solve the global linear system. Our tree structure is used for the spatial partitioning of the geometry and provides a proper initial guess. In addition, we introduce a method that uses the current velocity field to refine the tree and improve the numerical accuracy where it is needed. We developed a novel approach rather than using existing approaches such as the SIMPLE algorithm, Lattice-Boltzmann methods, or explicit jump methods, since these are suited to regular grid structures. Other standard CFD approaches extract surfaces and create tetrahedral meshes to solve on unstructured grids, and thus cannot be applied to our data structure. The discretization converges to the analytical solution with respect to grid refinement. We conclude that our method is highly efficient in computation time and memory for high-porosity geometries, and highly memory-efficient for low-porosity geometries.
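The cell-wise relaxation described above rests on Gauss-Seidel sweeps over the global linear system. As a minimal sketch, here is the generic dense-matrix form of the algorithm (not the thesis's tree-based solver), applied to a small diagonally dominant system:

```python
def gauss_seidel(A, b, iterations=100):
    """Solve A x = b by Gauss-Seidel sweeps, assuming A is (strictly)
    diagonally dominant so the iteration converges. Each sweep updates
    every unknown in place, immediately reusing fresh values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 12.0]
print(gauss_seidel(A, b))  # converges toward x = [11/6, 5/3]
```

In the thesis's setting the sweep is not over a dense matrix but over cells of the tree, each of which solves its own small momentum/mass-conservation system while reading the latest values of its neighbors, and the tree provides the initial guess.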