
Planar force or pressure is a fundamental physical aspect of people-to-people and people-to-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). Several studies involving planar pressure in the discipline of activity recognition are reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, they still occupy a niche within the discipline, using ad hoc systems and data analysis methods, and most were not followed by further elaboration. This situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates the use of planar pressure distribution sensing technology for ubiquitous and wearable activity recognition. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered toolkit including hardware, software, and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition applications, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies show that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of the body-print; a pressure-sensitive tablecloth can detect various dining actions through the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swivel office chairs with TPM sensors under the cover can detect the sitter's posture in real time; TPM can be used to detect emotion-related touch interactions for smart objects, toys, or robots. In the wearable subspace, TPM sensors can perform pressure-based mechanomyography to detect muscle and body movement; they can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the swivel-chair study achieved a classification accuracy of 79.5% across 10 posture classes, and the soccer-shoe study reached 98.8% among 17 combinations of angle and intensity.
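To illustrate the kind of processing pipeline such a framework supports, the following Python sketch computes a few generic descriptors from a single pressure frame; the feature names and sensor size are hypothetical and are not the TPM feature set defined in the dissertation.

```python
import numpy as np

def tpm_features(frame: np.ndarray) -> np.ndarray:
    """Compute simple descriptors from one pressure frame (rows x cols).

    These are generic examples (total force, active area, center of pressure),
    not the exact TPM feature set defined in the dissertation.
    """
    total = frame.sum()
    active = (frame > 0.05 * frame.max()).mean() if frame.max() > 0 else 0.0
    rows, cols = np.indices(frame.shape)
    cop_r = (rows * frame).sum() / total if total > 0 else 0.0
    cop_c = (cols * frame).sum() / total if total > 0 else 0.0
    return np.array([total, active, cop_r, cop_c])

# A recording is a sequence of frames; per-frame features can be stacked and
# fed to any standard classifier (e.g. scikit-learn) for activity recognition.
recording = np.random.rand(100, 32, 32)          # 100 frames of a 32x32 sensor
X = np.stack([tpm_features(f) for f in recording])
```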

Topological insulators (TI) are a fascinating new state of matter. Like usual insulators, their band structure possesses a band gap, such that they cannot conduct current in their bulk. However, they are able to conduct current along their edges and surfaces, due to edge states that cross the band gap. What makes TIs so interesting and potentially useful are these robust unidirectional edge currents. They are immune to significant defects and disorder, which means that they provide scattering-free transport.
In photonics, using topological protection has a huge potential for applications, e.g. for robust optical data transfer [1-3] – even on the quantum level [4, 5] – or to make devices more stable and robust [6, 7]. Therefore, the field of topological insulators has spread to optics to create the new and active research field of topological photonics [8-10].
Well-defined and controllable model systems can help to provide deeper insight into the mechanisms of topologically protected transport. These model systems provide a vast control over parameters. For example, arbitrary lattice types without defects can be examined, and single lattice sites can be manipulated. Furthermore, they allow for the observation of effects that usually happen at extremely short time-scales in solids. Model systems based on photonic waveguides are ideal candidates for this.
They consist of optical waveguides arranged on a lattice. Due to evanescent coupling, light that is inserted into one waveguide spreads along the lattice. This coupling of light between waveguides can be seen as an analogue to electrons hopping/tunneling between atomic lattice sites in a solid.
The theoretical basis for this analogy is given by the mathematical equivalence between the Schrödinger equation and the paraxial Helmholtz equation. This means that in these waveguide systems, the role of time is assigned to a spatial axis. The field evolution along the waveguides' propagation axis z thus models the temporal evolution of an electron's wave function in a solid. Electric and magnetic fields acting on electrons in solids need to be incorporated into the photonic platform by introducing artificial fields. These artificial gauge fields need to act on photons in the same way that their electromagnetic counterparts act on electrons. For example, to create a photonic analogue of a topological insulator, the waveguides are bent helically along their propagation axis to model the effect of a magnetic field [3]. This means that the fabrication of these waveguide arrays needs to be done in 3D.
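The analogy can be made explicit. In standard notation (given here for reference rather than quoted from this thesis), the paraxial Helmholtz equation for the field envelope \(\psi\) takes the form of a Schrödinger equation in which the propagation distance \(z\) plays the role of time and the refractive-index profile plays the role of the (negative) potential:
\[
i\,\partial_z \psi = -\frac{1}{2k_0}\,\nabla_\perp^2 \psi - \frac{k_0\,\Delta n}{n_0}\,\psi
\qquad\longleftrightarrow\qquad
i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\,\nabla^2 \Psi + V\,\Psi .
\]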
In this thesis, a new method to 3D micro-print waveguides is introduced. The inverse structure is fabricated via direct laser writing, and subsequently infiltrated with a material with higher refractive index contrast. We will use these model systems of evanescently coupled waveguides to look at different effects in topological systems, in particular at Floquet topological systems.
We will start with a topologically trivial system, consisting of two waveguide arrays with different artificial gauge fields. There, we observe that an interface between these trivial gauge fields has a profound impact on the wave vector of the light traveling across it. We deduce an analog to Snell's law and verify it experimentally.
Then we will move on to Floquet topological systems, consisting of helical waveguides. At the interface between two Floquet topological insulators with opposite helicity of the waveguides, we find additional trivial interface modes that trap the light. This allows us to investigate the interaction between trivial and topological modes in the lattice.
Furthermore, we address the question of whether topological edge states are robust under the influence of time-dependent defects. In a one-dimensional topological model (the Su-Schrieffer-Heeger model [11]) we apply periodic temporal modulations to an edge waveguide. We find Floquet copies of the edge state that couple to the bulk in a certain frequency window and thus depopulate the edge state.
In the two-dimensional Floquet topological insulator, we introduce single defects at the edge. When these defects share the temporal periodicity of the helical bulk waveguides, they have no influence on a topological edge mode: the light moves around or through the defect without being scattered into the bulk. Defects with a different periodicity, however, can, like the defects in the SSH model, induce scattering of the edge state into the bulk.
In the end we will briefly highlight a newly emerging method for the fabrication of waveguides with low refractive index contrast. Moreover, we will introduce new ways to create artificial gauge fields by the use of orbital angular momentum states in waveguides.

Under the notion of Cyber-Physical Systems an increasingly important research area has
evolved with the aim of improving the connectivity and interoperability of previously
separate system functions. Today, the advanced networking and processing capabilities
of embedded systems make it possible to establish strongly distributed, heterogeneous
systems of systems. In such configurations, the system boundary does not necessarily
end with the hardware, but can also take into account the wider context such as people
and environmental factors. In addition to being open and adaptive to other networked
systems at integration time, such systems need to be able to adapt themselves in accordance
with dynamic changes in their application environments. Considering that many
of the potential application domains are inherently safety-critical, it has to be ensured
that the necessary modifications in the individual system behavior are safe. However,
currently available state-of-the-practice and state-of-the-art approaches for safety assurance
and certification are not applicable to this context.
To provide a feasible solution approach, this thesis introduces a framework that allows
“just-in-time” safety certification for the dynamic adaptation behavior of networked
systems. Dynamic safety contracts (DSCs) are presented as the core solution concept
for monitoring and synthesis of decentralized safety knowledge. Ultimately, this opens
up a path towards standardized service provision concepts as a set of safety-related runtime
evidence. DSCs enable the modular specification of relevant safety features in
networked applications as a series of formalized demand-guarantee dependencies. The
specified safety features can be hierarchically integrated and linked to an interpretation
level for assessing the scope of possible safe behavioral adaptations. In this way, the networked
adaptation behavior can be conditionally certified with respect to the fulfilled
DSC safety features during operation. As long as the continuous evaluation process
provides safe adaptation behavior for a networked application context, safety can be
guaranteed for a networked system mode at runtime. Significant safety-related changes
in the application context, however, can lead to situations in which no safe adaptation
behavior is available for the current system state. In such cases, the remaining DSC
guarantees can be utilized to determine optimal degradation concepts for the dynamic
applications.
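To give a flavor of what a demand-guarantee dependency and its runtime evaluation might look like, the following Python sketch uses hypothetical names and a deliberately simplified propagation loop; it is an illustration of the idea, not the specification elements defined in this thesis.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyFeature:
    """One DSC feature: a guarantee that holds only if all demands are met."""
    name: str
    demands: list = field(default_factory=list)   # names of required guarantees

@dataclass
class DynamicSafetyContract:
    features: list

    def fulfilled(self, runtime_evidence: set) -> set:
        """Return the guarantees currently fulfilled, given runtime evidence
        (e.g. monitored guarantees reported by networked systems)."""
        granted = set(runtime_evidence)
        changed = True
        while changed:                             # propagate hierarchically
            changed = False
            for f in self.features:
                if f.name not in granted and all(d in granted for d in f.demands):
                    granted.add(f.name)
                    changed = True
        return granted

# Example: an adaptation mode is only considered safe if its guarantee can be derived.
dsc = DynamicSafetyContract([
    SafetyFeature("safe_braking", demands=["sensor_ok", "latency_bounded"]),
    SafetyFeature("platooning_mode", demands=["safe_braking", "v2v_link_ok"]),
])
print(dsc.fulfilled({"sensor_ok", "latency_bounded", "v2v_link_ok"}))
```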
For the operationalization of the DSCs approach, suitable specification elements and
mechanisms have been defined. Based on a dedicated GUI-engineering framework it is
shown how DSCs can be systematically developed and transformed into appropriate runtime
representations. Furthermore, a safety engineering backbone is outlined to support
the DSC modeling process in concrete application scenarios. The conducted validation
activities show the feasibility and adequacy of the proposed DSCs approach. In parallel,
limitations and areas of future improvement are pointed out.

Many loads acting on a vehicle depend on the condition and quality of roads
traveled as well as on the driving style of the motorist. Thus, during vehicle development,
good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics resulting from travel surveys are integrated to guarantee reasonable
trip lengths. Population figures are used to estimate the number of vehicles in
contained administrative units. The creation of thousands of those geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters are modeled as the main user group, conducting daily drives to work as well as additional leisure and shopping trips during the workweek. For the latter, taxis are considered as an example of passenger car usage. The model of light-duty commercial vehicles is split into two types of driving patterns, stars and tours, and into the common traffic classes of long-distance, local, and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical
data are presented in detail. Examples of route evaluations based on topographical factors and speed profiles, comparing the influence of driving style, are included.
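The following Python sketch illustrates the basic idea of sampling trip targets from population-weighted administrative units; the unit names, weights, and bounding boxes are hypothetical, and the actual algorithms of the thesis are considerably more detailed.

```python
import random

# Hypothetical administrative units with population figures and bounding boxes
# (lon_min, lat_min, lon_max, lat_max).
units = [
    {"name": "District A", "population": 120_000, "bbox": (7.5, 49.3, 7.9, 49.6)},
    {"name": "District B", "population": 45_000,  "bbox": (7.9, 49.3, 8.2, 49.6)},
]

def sample_target_point(units):
    """Pick an administrative unit proportionally to its population,
    then draw a uniform point inside its bounding box as the trip target."""
    unit = random.choices(units, weights=[u["population"] for u in units])[0]
    lon_min, lat_min, lon_max, lat_max = unit["bbox"]
    return (random.uniform(lon_min, lon_max), random.uniform(lat_min, lat_max))

# Each simulated trip (start, target) would then be routed on the real road
# network and evaluated along the track (topography, speed profile, ...).
start, target = sample_target_point(units), sample_target_point(units)
```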

In computer graphics, realistic rendering of virtual scenes is a computationally complex problem. State-of-the-art rendering technology must become more scalable to
meet the performance requirements for demanding real-time applications.
This dissertation is concerned with core algorithms for rendering, focusing on the
ray tracing method in particular, to support and saturate recent massively parallel computer systems, i.e., to distribute the complex computations very efficiently
among a large number of processing elements. More specifically, the three targeted
main contributions are:
1. Collaboration framework for large-scale distributed memory computers
The purpose of the collaboration framework is to enable scalable rendering
in real-time on a distributed memory computer. As an infrastructure layer it
manages the explicit communication within a network of distributed memory
nodes transparently for the rendering application. The research is focused on
designing a communication protocol resilient against delays and negligible in
overhead, relying exclusively on one-sided and asynchronous data transfers.
The hypothesis is that a loosely coupled system like this is able to scale linearly
with the number of nodes, which is tested by directly measuring all possible
communication-induced delays as well as the overall rendering throughput.
2. Ray tracing algorithms designed for vector processing
Vector processors are to be efficiently utilized for improved ray tracing performance. This requires the basic, scalar traversal algorithm to be reformulated
in order to expose a high degree of fine-grained data parallelism. Two approaches are investigated: traversing multiple rays simultaneously, and performing
multiple traversal steps at once. Efficiently establishing coherence in a group
of rays as well as avoiding sorting of the nodes in a multi-traversal step are the
defining research goals.
3. Multi-threaded schedule and memory management for the ray tracing acceleration structure
Construction times of high-quality acceleration structures are to be reduced by
improvements to multi-threaded scalability and utilization of vector processors. Research is directed at eliminating the following scalability bottlenecks:
dynamic memory growth caused by the primitive splits required for high-quality structures, and top-level hierarchy construction where simple task parallelism is not readily available. Additional research addresses how to expose
scatter/gather-free data-parallelism for efficient vector processing.
Together, these contributions form a scalable, high-performance basis for real-time,
ray tracing-based rendering, and a prototype path tracing application implemented
on top of this basis serves as a demonstration.
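As a small illustration of the "traversing multiple rays simultaneously" idea from the second contribution, the following NumPy sketch performs a vectorized ray/AABB slab test for a whole packet of rays at once; the data layout and names are illustrative and do not reproduce the traversal kernels developed in the dissertation.

```python
import numpy as np

def rays_hit_aabb(origins, inv_dirs, box_min, box_max):
    """Vectorized slab test: intersect a packet of rays with one AABB.

    origins, inv_dirs: (N, 3) arrays (inv_dirs = 1 / ray directions).
    Returns a boolean mask of length N, one traversal decision per ray,
    so the whole packet can take its next traversal step together.
    """
    t0 = (box_min - origins) * inv_dirs          # (N, 3) slab entry distances
    t1 = (box_max - origins) * inv_dirs          # (N, 3) slab exit distances
    t_near = np.minimum(t0, t1).max(axis=1)      # latest entry over the 3 slabs
    t_far = np.maximum(t0, t1).min(axis=1)       # earliest exit over the 3 slabs
    return (t_near <= t_far) & (t_far >= 0.0)

# Example packet of 4 rays against the unit box.
origins = np.array([[0.5, 0.5, -1.0]] * 4)
dirs = np.array([[0.1, 0.1, 1.0], [0.1, 1.0, 0.1],
                 [0.1, 0.1, -1.0], [1.0, 0.1, 1.0]])
mask = rays_hit_aabb(origins, 1.0 / dirs, np.zeros(3), np.ones(3))
```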
The key insight driving this dissertation is that the computational power necessary
for realistic light transport for real-time rendering applications demands massively
parallel computers, which in turn require highly scalable algorithms. Therefore this
dissertation provides important research along the path towards virtual reality.

While the design step should be free from computation-related constraints and operations due to its artistic nature, the modeling phase has to prepare the model for the later stages of the pipeline.
This dissertation is concerned with the design and implementation of a framework for local remeshing and optimization. Based on the experience gathered, a full study about mesh quality criteria is also part of this work.
The contributions can be highlighted as: (1) a local meshing technique based on a completely novel approach, constrained to preserving the mesh of areas that are not of interest. With this concept, designers can work on the design details of specific regions of the model without introducing more polygons elsewhere; (2) a tool capable of recovering the shape of a refined area to its decimated version, enabling details on optimized meshes of detailed models; (3) the integration of novel techniques into a single framework for meshing and smoothing that is constrained to the surface structure; (4) the development of a mesh quality criteria priority structure that can classify and prioritize criteria according to the application of the mesh.
Although efficient meshing techniques have been proposed over the years, most of them lack the ability to mesh smaller regions of the base mesh while preserving the mesh quality and density of the outer areas.
Considering this limitation, this dissertation seeks answers to the following research questions:
1. Given that mesh quality is relative to the application it is intended for, is it possible to design a general mesh evaluation plan?
2. How to prioritize specific mesh criteria over others?
3. Given an optimized mesh and its original design, how to improve the representation of single regions of the former without degrading the mesh quality elsewhere?
Four main achievements came from the respective answers:
1. The Application Driven Mesh Quality Criteria Structure: Because mesh standards vary considerably with the computer-aided operations performed for different applications, e.g., animation or stress simulation, a structure for better visualization of mesh quality criteria is proposed. The criteria can be used to guide the mesh optimization, making the task consistent and reliable. This dissertation also proposes a methodology to optimize the criteria values, which is adaptable to the needs of a specific application.
2. Curvature Driven Meshing Algorithm: A novel local meshing technique that works on a desired area of the mesh while preserving its boundaries as well as the rest of the topology. It causes only a slow growth in the overall number of polygons by making only small regions denser. The method can also be used to recover the details of a reference mesh in its decimated version while refining it. Moreover, it employs a fast and easy-to-implement geometric approach that represents surface features as simple circles, which are used to guide the meshing. It also generates quad-dominant meshes, with the triangle count directly dependent on the size of the boundary.
3. Curvature-based Method for Anisotropic Mesh Smoothing: A geometry-based method is extended to 3D space so that it can produce anisotropic elements where needed. This is made possible by mapping the original space to another one that embeds the surface curvature. This methodology is used to enhance the smoothing algorithm by making the nearly regularized elements follow the surface features, preserving the original design. The mesh optimization method also preserves the mesh topology while resizing elements according to the local mesh resolution, effectively enhancing the intended design aspects.
4. Framework for Local Restructure of Meshed Surfaces: The combination of both methods creates a complete tool for recovering surface details through mesh refinement and curvature aware mesh smoothing.
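The following Python sketch illustrates how a per-vertex curvature measure can drive such localized refinement decisions; the angle-defect indicator and helper names are generic stand-ins rather than the circle-based feature representation developed in this work.

```python
import numpy as np

def angle_defect_curvature(vertices, faces):
    """Discrete Gaussian-curvature indicator per vertex: 2*pi minus the sum of
    incident triangle angles. A toy stand-in used only to show how a per-vertex
    measure can guide local refinement."""
    defect = np.full(len(vertices), 2.0 * np.pi)
    for tri in faces:
        for i in range(3):
            a, b, c = vertices[tri[i]], vertices[tri[(i + 1) % 3]], vertices[tri[(i + 2) % 3]]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            defect[tri[i]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
    return defect

def faces_to_refine(vertices, faces, region_mask, threshold=0.1):
    """Refine only faces inside the designer-selected region whose vertices
    carry high curvature; everything outside the region is left untouched."""
    kappa = np.abs(angle_defect_curvature(vertices, faces))
    return [f for f in faces
            if all(region_mask[v] for v in f) and kappa[list(f)].max() > threshold]
```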

In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second-order analysis, which we refer to as the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data coming from glaciology. Note that directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results, we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and consequently to give a correct interpretation of the climate information that can be found by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core, and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.

In this thesis, we consider the problem of processing similarity queries over a dataset of top-k rankings and class-constrained objects. Top-k rankings are the most natural and widely used technique to compress a large amount of information into a concise form. Spearman's Footrule distance is used to compute the similarity between rankings, considering how well rankings agree on the positions (ranks) of ranked items. This setup allows the application of metric distance-based pruning strategies and, alternatively, enables the use of traditional inverted indices for retrieving rankings that overlap in items. Although both techniques can be applied individually, we hypothesize that blending the two leads to better performance. First, we formulate theoretical bounds over the rankings, based on Spearman's Footrule distance, which are essential for adapting existing, inverted-index-based techniques to the setting of top-k rankings. Further, we propose a hybrid indexing strategy, designed for efficiently processing similarity range queries, which incorporates inverted indices and metric space indices, such as M- or BK-trees, resulting in a structure that resembles both indexing methods with tunable emphasis on one or the other. Moreover, optimizations to the inverted index component are presented, for early termination and minimizing bookkeeping. As vast amounts of data are being generated on a daily basis, we further present a distributed, highly tunable approach, implemented in Apache Spark, for efficiently processing similarity join queries over top-k rankings. To combine distance-based filtering with inverted indices, the algorithm works in several phases. The partial results are joined for the computation of the final result set. As the last contribution of the thesis, we consider processing k-nearest-neighbor (k-NN) queries over class-constrained objects, with the additional requirement that the result objects are of a specific type. We introduce the MISP index, which first indexes the objects by their (combination of) class membership, followed by a similarity search sub-index for each subset of objects. The number of such subsets can explode combinatorially; thus, we provide a cost model that analyzes the performance of the MISP index structure under different configurations, with the aim of finding the most efficient one for the dataset being searched.
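For concreteness, the following Python sketch computes Spearman's Footrule between two top-k rankings using the common convention of assigning the virtual position k+1 to items missing from a ranking; the exact adaptation and bounds used in the thesis may differ.

```python
def footrule_topk(r1, r2):
    """Spearman's Footrule between two top-k rankings given as ordered lists.

    Items absent from one ranking are placed at the virtual position k + 1,
    a common convention for comparing incomplete (top-k) rankings.
    """
    k = max(len(r1), len(r2))
    pos1 = {item: i + 1 for i, item in enumerate(r1)}
    pos2 = {item: i + 1 for i, item in enumerate(r2)}
    items = set(pos1) | set(pos2)
    return sum(abs(pos1.get(x, k + 1) - pos2.get(x, k + 1)) for x in items)

# Example: two top-3 rankings sharing two items.
print(footrule_topk(["a", "b", "c"], ["b", "a", "d"]))   # |1-2| + |2-1| + |3-4| + |4-3| = 4
```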

Visualization is vital to the scientific discovery process.
An interactive high-fidelity rendering provides accelerated insight into complex structures, models and relationships.
However, the efficient mapping of visualization tasks to high performance architectures is often difficult, being subject to a challenging mixture of hardware and software architectural complexities in combination with domain-specific hurdles.
These difficulties are often exacerbated on heterogeneous architectures.
In this thesis, a variety of ray casting-based techniques are developed and investigated with respect to a more efficient usage of heterogeneous HPC systems for distributed visualization, addressing challenges in mesh-free rendering, in-situ compression, task-based workload formulation, and remote visualization at large scale.
A novel direct ray tracing scheme for on-the-fly free surface reconstruction of particle-based simulations using an extended anisotropic kernel model is investigated on different state-of-the-art cluster setups.
The versatile system renders up to 170 million particles on 32 distributed compute nodes at close to interactive frame rates at 4K resolution with ambient occlusion.
To address the widening gap between high computational throughput and prohibitively slow I/O subsystems, in situ topological contour tree analysis is combined with a compact image-based data representation to provide an effective and easy-to-control trade-off between storage overhead and visualization fidelity.
Experiments show significant reductions in storage requirements, while preserving flexibility for exploration and analysis.
Driven by an increasingly heterogeneous system landscape, a flexible distributed direct volume rendering and hybrid compositing framework is presented.
Based on a task-based dynamic runtime environment, it enables adaptable performance-oriented deployment on various platform configurations.
Comprehensive benchmarks with respect to task granularity and scaling are conducted to verify the characteristics and potential of the novel task-based system design.
A core challenge of HPC visualization is the physical separation of visualization resources and end-users.
Using more tiles than previously thought reasonable, a distributed, low-latency multi-tile streaming system is demonstrated, sustaining a stable 80 Hz when streaming up to 256 synchronized 3840x2160 tiles and achieving 365 Hz at 3840x2160 for sort-first compositing over the internet, thereby enabling lightweight visualization clients and leaving all the heavy lifting to the remote supercomputer.

Topology-Based Characterization and Visual Analysis of Feature Evolution in Large-Scale Simulations
(2019)

This manuscript presents a topology-based analysis and visualization framework that enables the effective exploration of feature evolution in large-scale simulations. Such simulations pose additional challenges to the already complex task of feature tracking and visualization, since the vast number of features and the size of the simulation data make it infeasible to naively identify, track, analyze, render, store, and interact with data. The presented methodology addresses these issues via three core contributions. First, the manuscript defines a novel topological abstraction, called the Nested Tracking Graph (NTG), that records the temporal evolution of features that exhibit a nesting hierarchy, such as superlevel set components for multiple levels, or filtered features across multiple thresholds. In contrast to common tracking graphs that are only capable of describing feature evolution at one hierarchy level, NTGs effectively summarize their evolution across all hierarchy levels in one compact visualization. The second core contribution is a view-approximation oriented image database generation approach (VOIDGA) that stores, at simulation runtime, a reduced set of feature images. Instead of storing the features themselves (which is often infeasible due to bandwidth constraints), the images of these databases can be used to approximate the depicted features from any view angle within an acceptable visual error, which requires far less disk space and only introduces a negligible overhead. The final core contribution combines these approaches into a methodology that stores in situ the least amount of information necessary to support flexible post hoc analysis utilizing NTGs and view approximation techniques.
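To make the abstraction slightly more tangible, the following Python sketch shows one possible in-memory representation of a nested tracking graph; the field names are illustrative and do not reflect the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class NTGNode:
    """A feature at one timestep and one hierarchy level
    (e.g. a superlevel-set component for a particular threshold)."""
    timestep: int
    level: int            # hierarchy level / threshold index
    feature_id: int

@dataclass
class NestedTrackingGraph:
    nodes: list = field(default_factory=list)
    tracking_edges: list = field(default_factory=list)   # same level, consecutive timesteps
    nesting_edges: list = field(default_factory=list)    # same timestep, adjacent levels

    def track(self, a: NTGNode, b: NTGNode):
        assert a.level == b.level and b.timestep == a.timestep + 1
        self.tracking_edges.append((a, b))

    def nest(self, child: NTGNode, parent: NTGNode):
        assert child.timestep == parent.timestep and child.level == parent.level + 1
        self.nesting_edges.append((child, parent))
```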

Various physical phenomena with sudden transients resulting in structural changes can be modeled via
switched nonlinear differential algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x), \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(x \mapsto g_p(x)\) is a nonlinear mapping, \(f_p: \mathbb{R} \rightarrow \mathbb{R}^n\) for each \(p \in \{1,\ldots,P\}\), \(P \in \mathbb{N}\), and \(\sigma: \mathbb{R} \rightarrow \{1,\ldots,P\}\) is the switching signal.
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between solutions of the above (DAE) and solutions of related
partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a
distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the
nonlinear case \(g(x) \not\equiv 0\); contributions to Task 2 are also given for
switched nonlinear DAEs arising in the modeling of sudden transients in water
distribution networks. In addition, this thesis contains the following further
contributions:
The notion of structured switched nonlinear DAEs is introduced,
allowing also non-regular distributions as solutions. This extends a previous
framework that allowed only piecewise smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of the solution within the space of piecewise smooth distributions. The main
condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
Another contribution is the introduction of these switched nonlinear DAEs
as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
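For reference, the regularity condition mentioned above is the standard one from DAE theory: the matrix pair \((E,A)\) is called regular if
\[
\det(sE - A) \not\equiv 0 \quad \text{as a polynomial in } s .
\]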

Shared memory concurrency is the pervasive programming model for multicore architectures
such as x86, Power, and ARM. Depending on the memory organization, each architecture follows
a somewhat different shared memory model. All these models, however, have one common
feature: they allow certain outcomes for concurrent programs that cannot be explained
by interleaving execution. In addition to the complexity due to architectures, compilers like
GCC and LLVM perform various program transformations, which also affect the outcomes of
concurrent programs.
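The classic store-buffering litmus test makes such a non-interleaving outcome concrete. The Python sketch below only illustrates the program shape and the standard classification of its outcomes; it is not taken from the thesis, and CPython will not reliably exhibit the weak result.

```python
import threading

# Store-buffering (SB) litmus test. Under sequential consistency, at least one
# thread must see the other's write, so r1 == r2 == 0 is impossible.
# Under x86-TSO (and weaker models), both reads may return 0, because each
# store can still sit in its thread's store buffer.

x = y = 0
r1 = r2 = None

def thread_a():
    global x, r1
    x = 1          # store to x
    r1 = y         # load from y

def thread_b():
    global y, r2
    y = 1          # store to y
    r2 = x         # load from x

ta, tb = threading.Thread(target=thread_a), threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()
print(r1, r2)      # (0, 0) is the outcome no interleaving can explain
```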
To be able to program these systems correctly and effectively, it is important to define a
formal language-level concurrency model. For efficiency, it is important that the model is
weak enough to allow various compiler optimizations on shared memory accesses as well
as efficient mappings to the architectures. For programmability, the model should be strong
enough to disallow bogus “out-of-thin-air” executions and provide strong guarantees for well-synchronized
programs. Because of these conflicting requirements, defining such a formal
model is very difficult. This is why, despite years of research, major programming languages
such as C/C++ and Java do not yet have completely adequate formal models defining their
concurrency semantics.
In this thesis, we address this challenge and develop a formal concurrency model that is very
good both in terms of compilation efficiency and of programmability. Unlike most previous
approaches, which were defined either operationally or axiomatically on single executions,
our formal model is based on event structures, which represent multiple program executions,
and thus give us more structure to define the semantics of concurrency.
In more detail, our formalization has two variants: the weaker version, WEAKEST, and the
stronger version, WEAKESTMO. The WEAKEST model simulates the promising semantics proposed
by Kang et al., while WEAKESTMO is incomparable to the promising semantics. Moreover,
WEAKESTMO discards certain questionable behaviors allowed by the promising semantics.
We show that the proposed WEAKESTMO model resolves the out-of-thin-air problem, provides
standard data-race-freedom (DRF) guarantees, allows the desirable optimizations, and can be
mapped to architectures like x86, PowerPC, and ARMv7. Additionally, our models are
flexible enough to leverage existing results from the literature to establish data-race-freedom
(DRF) guarantees and correctness of compilation.
In addition, in order to ensure the correctness of compilation by a major compiler, we developed
a translation validator targeting LLVM’s “opt” transformations of concurrent C/C++
programs. Using the validator, we identified a few subtle compilation bugs, which were reported
and fixed. Additionally, we observe that LLVM concurrency semantics differs
from that of C11; there are transformations which are justified in C11 but not in LLVM and
vice versa. Considering the subtle aspects of LLVM concurrency, we formalized a fragment
of LLVM’s concurrency semantics and integrated it into our WEAKESTMO model.

The systems in industrial automation management (IAM) are information systems. The management parts of such systems are software components that support the manufacturing processes. The operational parts control highly plug-compatible devices, such as controllers, sensors and motors. Process variability and topology variability are the two main characteristics of software families in this domain. Furthermore, three roles of stakeholders -- requirement engineers, hardware-oriented engineers, and software developers -- participate in different derivation stages and have different variability concerns. In current practice, the development and reuse of such systems is costly and time-consuming, due to the complexity of topology and process variability. To overcome these challenges, the goal of this thesis is to develop an approach to improve the software product derivation process for systems in industrial automation management, where different variability types are concerned in different derivation stages. Current state-of-the-art approaches commonly use general-purpose variability modeling languages to represent variability, which is not sufficient for IAM systems. The process and topology variability requires more user-centered modeling and representation. The insufficiency of variability modeling leads to low efficiency during the staged derivation process involving different stakeholders. Up to now, product line approaches for systematic variability modeling and realization have not been well established for such complex domains. The model-based derivation approach presented in this thesis integrates feature modeling with domain-specific models for expressing processes and topology. The multi-variability modeling framework includes the meta-models of the three variability types and their associations. The realization and implementation of the multi-variability involves the mapping and the tracing of variants to their corresponding software product line assets. Based on the foundation of multi-variability modeling and realization, a derivation infrastructure is developed, which enables a semi-automated software derivation approach. It supports the configuration of different variability types to be integrated into the staged derivation process of the involved stakeholders. The derivation approach is evaluated in an industry-grade case study of a complex software system. The feasibility is demonstrated by applying the approach in the case study. By using the approach, both the size of the reusable core assets and the automation level of derivation are significantly improved. Furthermore, semi-structured interviews with engineers in practice have evaluated the usefulness and ease-of-use of the proposed approach. The results show a positive attitude towards applying the approach in practice, and high potential to generalize it to other related domains.
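As a toy illustration of linking a feature configuration to process and topology variants during derivation, the following Python sketch uses hypothetical feature and asset names; the thesis employs dedicated meta-models and a GUI-based derivation infrastructure rather than plain dictionaries.

```python
# Feature configuration chosen by the requirements engineer.
features = {"conveyor_line": True, "quality_check": False}

# Mapping of features to domain-specific variants handled by the other roles:
# hardware-oriented engineers configure the topology, developers the processes.
topology_variants = {"conveyor_line": ["belt_segment", "motor_ctrl", "light_barrier"]}
process_variants = {"conveyor_line": ["transport_step"], "quality_check": ["inspect_step"]}

def derive(features, topology_variants, process_variants):
    """Collect the topology and process assets for the selected features,
    the core of a (heavily simplified) staged product derivation."""
    topo = [a for f, on in features.items() if on for a in topology_variants.get(f, [])]
    proc = [s for f, on in features.items() if on for s in process_variants.get(f, [])]
    return topo, proc

print(derive(features, topology_variants, process_variants))
```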

The usage of sensors in modern technical systems and consumer products is increasing rapidly. This advancement can be characterized by two major factors, namely the mass introduction of consumer-oriented sensing devices to the market and the sheer amount of sensor data being generated. These characteristics raise subsequent challenges regarding both the consumer sensing devices' reliability and the management and utilization of the generated sensor data. This thesis addresses these challenges through two main contributions. It presents a novel framework that leverages sentiment analysis techniques in order to assess the quality of consumer sensing devices. It also couples semantic technologies with big data technologies to present a new optimized approach for the realization and management of semantic sensor data, hence providing a robust means of integration, analysis, and reuse of the generated data. The thesis also presents several applications that show the potential of the contributions in real-life scenarios.
Due to the broad range, growing feature set and fast release pace of new sensor-based products, evaluating these products is very challenging as standard product testing is not practical. As an alternative, an end-to-end aspect-based sentiment summarizer pipeline for evaluation of consumer sensing devices is presented. The pipeline uses product reviews to extract the sentiment at the aspect level and includes several components, namely a product name extractor, an aspect extractor, and a lexicon-based sentiment extractor that handles multiple sentiment analysis challenges such as sentiment shifters, negations, and comparative sentences, among others. The proposed summarizer's components generally outperform the state-of-the-art approaches. As a use case, features of the market-leading fitness trackers are evaluated and a dynamic visual summarizer is presented to display the evaluation results and to provide personalized product recommendations for potential customers.
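The following Python sketch illustrates the lexicon-based, negation-aware scoring idea in its simplest form; the lexicon, window size, and function name are hypothetical, and the actual extractor additionally handles sentiment shifters and comparative sentences.

```python
LEXICON = {"accurate": 1, "comfortable": 1, "inaccurate": -1, "slow": -1}
NEGATIONS = {"not", "never", "no"}

def aspect_sentiment(tokens, window=3):
    """Sum lexicon scores, flipping polarity if a negation occurs shortly before
    the sentiment word (a crude stand-in for the pipeline's sentiment extractor)."""
    score = 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            negated = any(t in NEGATIONS for t in tokens[max(0, i - window):i])
            score += -LEXICON[tok] if negated else LEXICON[tok]
    return score

print(aspect_sentiment("the heart rate sensor is not accurate".split()))   # -1
```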
The increased usage of sensing devices in the consumer market is accompanied with increased deployment of sensors in various other fields such as industry, agriculture, and energy production systems. This necessitates using efficient and scalable methods for storing and processing of sensor data. Coupling big data technologies with semantic techniques not only helps to achieve the desired storage and processing goals, but also facilitates data integration, data analysis, and the utilization of data in unforeseen future applications through preserving the data generation context. This thesis proposes an efficient and scalable solution for semantification, storage and processing of raw sensor data through ontological modelling of sensor data and a novel encoding scheme that harnesses the split between the statements of the conceptual model of an ontology (TBox) and the individual facts (ABox) along with in-memory processing capabilities of modern big data systems. A sample use case is further introduced where a smartphone is deployed in a transportation bus to collect various sensor data which is then utilized in detecting street anomalies.
In addition to the aforementioned contributions, and to highlight the potential use cases of publicly available sensor data, a recommender system is developed using running route data, used for proximity-based retrieval, to provide personalized suggestions for new routes considering the runner's performance as well as visual and nature-related route preferences.
This thesis aims at enhancing the integration of sensing devices in daily life applications through facilitating the public acquisition of consumer sensing devices. It also aims at achieving better integration and processing of sensor data in order to enable new potential usage scenarios of the raw generated data.

In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. It leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization representing the involved filtrations by weight vectors.

Linking protistan community shifts along salinity gradients with cellular haloadaptation strategies
(2019)

Salinity is one of the most structuring environmental factors for microeukaryotic communities. Using eDNA barcoding, I detected significant shifts in microeukaryotic community compositions occurring at distinct salinities between brackish and marine conditions in the Baltic Sea. Furthermore, I conducted a metadata analysis including my own and other marine and hypersaline community sequence data to confirm the existence of salinity-related transition boundaries and significant changes in alpha diversity patterns along a brackish to hypersaline gradient. One hypothesis for the formation of salinity-dependent transition boundaries between brackish and hypersaline conditions is the use of different cellular haloadaptation strategies. To test this hypothesis, I conducted metatranscriptome analyses of microeukaryotic communities along a pronounced salinity gradient (40 – 380 ‰). Clustering of functional transcripts revealed differences in metabolic properties and metabolic capacities between microeukaryotic communities at specific salinities, corresponding to the transition boundaries already observed in the taxonomic eDNA barcoding approach. Specifically, microeukaryotic communities thriving at mid-hypersaline conditions (≤ 150 ‰) seem to predominantly apply the ‘low-salt – organic-solutes-in’ strategy by accumulating compatible solutes to counteract osmotic stress. Indications were found both for the intracellular synthesis of compatible solutes and for cellular transport systems. In contrast, communities of extreme-hypersaline habitats (≥ 200 ‰) may preferentially use the ‘high-salt-in’ strategy, i.e., the intracellular accumulation of inorganic ions in high concentrations, which is implied by the increased expression of Mg2+, K+, Cl- transporters and channels.
In order to characterize the ‘low-salt – organic-solutes-in’ strategy applied by protists in more detail, I conducted a time-resolved transcriptome analysis of the heterotrophic ciliate Schmidingerothrix salinarum serving as a model organism. S. salinarum was subjected to a salt-up shock to investigate the intracellular response to osmotic stress through shifts in gene expression. After increasing the external salinity, an increased expression of two-component signal transduction systems and MAPK cascades was observed. In an early reaction, the expression of transport mechanisms for K+, Cl- and Ca2+ increased, which may enhance the capacity of K+, Cl- and Ca2+ in the cytoplasm to compensate for a possibly harmful Na+ influx. Expression of enzymes for the synthesis of possible compatible solutes, starting with glycine betaine, followed by ectoine and later proline, could imply that the inorganic ions K+, Cl- and Ca2+ are gradually replaced by the synthesized compatible solutes. Additionally, expressed transporters for choline (the precursor of glycine betaine) and proline could indicate an intracellular accumulation of compatible solutes to balance the external salinity. During this accumulation, the up-regulated ion export mechanisms may increase the capacity for Na+ expulsion from the cytoplasm, and ion compartmentalization between cell organelles seems to occur.
The results of my PhD project revealed the first evidence at the molecular level for the salinity-dependent use of different haloadaptation strategies in microeukaryotes and significantly extended existing knowledge about haloadaptation processes in ciliates. The results provide the groundwork for future research, such as (comparative) transcriptome analysis of ciliates thriving in extreme-hypersaline habitats or experiments like qRT-PCR to validate the transcriptome results.

On the Effect of Nanofillers on the Environmental Stress Cracking Resistance of Glassy Polymers
(2019)

It is well known that reinforcing polymers with small amounts of nano-sized fillers is one of the most effective methods for simultaneously improving their mechanical and thermal properties. However, only a small number of studies have focused on environmental stress cracking (ESC), which is a major issue for premature failures of plastic products in service. Therefore, the contribution of this work focused on the influence of nano-SiO2 particles on the morphological, optical, mechanical, thermal, as well as environmental stress cracking properties of amorphous-based nanocomposites.
Polycarbonate (PC), polystyrene (PS) and poly(methyl methacrylate) (PMMA) nanocomposites containing different amounts and sizes of nano-SiO2 particles were prepared using a twin-screw extruder followed by injection molding. Adding a small amount of nano-SiO2 caused a reduction in optical properties but improved the tensile, toughness, and thermal properties of the polymer nanocomposites. The significant enhancement in mechanical and thermal properties was attributed to the adequate level of dispersion and interfacial interaction of the SiO2 nanoparticles in the polymer matrix. This situation possibly increased the efficiency of stress transfer across the nanocomposite components. Moreover, the data revealed a clear dependency on the filler size. The polymer nanocomposites filled with smaller nanofillers exhibited an outstanding enhancement in both mechanical properties and transparency compared with nanocomposites filled with larger particles. The best compromise of strength, toughness, and thermal properties was achieved in PC-based nanocomposites. Therefore, special attention to the influence of nanofiller on the ESC resistance was given to PC.
The ESC resistance of the materials was investigated under static loading with and without the presence of stress-cracking agents. Interestingly, the incorporation of nano-SiO2 greatly enhanced the ESC resistance of PC in all investigated fluids. This result was particularly evident with the smaller quantities and sizes of nano-SiO2. The enhancement in ESC resistance was more effective in mild agents and air, where the quality of the deformation process was vastly altered with the presence of nano-SiO2. This finding confirmed that the new structural arrangements on the molecular scale induced by nanoparticles dominate over the ESC agent absorption effect and result in greatly improving the ESC resistance of the materials. This effect was more pronounced with increasing molecular weight of PC due to an increase in craze stability and fibril density. The most important and new finding is that the ESC behavior of polymer-based nanocomposite/stress-cracking agent combinations can be scaled using the Hansen solubility parameter. This allowed us to predict the risk of ESC as a function of the filler content for different stress-cracking agents without performing extensive tests. For a comparison of different amorphous polymer-based nanocomposites at a given nano-SiO2 particle content, the ESC resistance of materials improved in the following order: PMMA/SiO2 < PS/SiO2 < low molecular weight PC/SiO2 < high molecular weight PC/SiO2. In most cases, nanocomposites with 1 vol.% of nano-SiO2 particles exhibited the largest improvement in ESC resistance.
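For reference, the Hansen framework quantifies polymer/agent affinity by a distance in the space of dispersion, polar, and hydrogen-bonding solubility parameters. The standard distance, given here as background rather than as the specific correlation developed in this work, is
\[
R_a^2 = 4\,(\delta_{D,1}-\delta_{D,2})^2 + (\delta_{P,1}-\delta_{P,2})^2 + (\delta_{H,1}-\delta_{H,2})^2 .
\]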
However, the remarkable improvement in the ESC resistance—particularly in PC-based nanocomposites—created some challenges related to material characterization because testing times (failure time) significantly increased. Accordingly, the superposition approach has been applied to construct a master curve of crack propagation model from the available short-term tests at different temperatures. Good agreement of the master curves with the experimental data revealed that the superposition approach is a suitable comparative method for predicting slow crack growth behavior, particularly for long-duration cracking tests as in mild agents. This methodology made it possible to minimize testing time.
Additionally, modeling and simulations using the finite element method revealed that multi-field modeling could provide reasonable predictions for diffusion processes and their impact on fracture behavior in different stress cracking agents. This finding suggests that the implemented model may be a useful tool for quick screening and mitigating the risk of ESC failures in plastic products.

Most modern multiprocessors offer weak memory behavior to improve their performance in terms of throughput. They allow the order of memory operations to be observed differently by each processor. This is in contrast to the concept of sequential consistency (SC), which enforces a unique sequential view of all operations for all processors. Because most software has been and still is developed with SC in mind, we face a gap between the expected behavior and the actual behavior on modern architectures. The issues described only affect multithreaded software, and therefore most programmers might never face them. However, multithreaded bare-metal software like operating systems, embedded software, and real-time software has to consider memory consistency and ensure that the order of memory operations does not yield unexpected results. This software is more critical than general consumer software in terms of consequences, and therefore new methods are needed to ensure its correct behavior.
In general, a memory system is considered weak if it allows behavior that is not possible in a sequential system. For example, in the SPARC processor with total store ordering (TSO) consistency, all writes might be delayed by store buffers before they are eventually processed by the main memory. This allows the issuing process to work with its own written values before other processes observe them (i.e., reading its own value before it leaves the store buffer). Because this behavior is not possible with sequential consistency, TSO is considered to be weaker than SC. Programming in the context of weak memory architectures requires a proper comprehension of how the model deviates from the expected sequential behavior. For the verification of these programs, formal representations are required that cover the weak behavior so that formal verification tools can be utilized.
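The store-buffer intuition behind TSO can be captured in a few lines of operational pseudocode. The Python sketch below is a simplified toy model for illustration only, not the reference machines developed in this thesis.

```python
from collections import deque

class TSOMemory:
    """Toy operational model of TSO: one FIFO store buffer per thread."""

    def __init__(self):
        self.mem = {}                      # main memory: address -> value
        self.buffers = {}                  # thread id -> deque of (addr, value)

    def write(self, tid, addr, value):
        # Stores go into the thread's own buffer first, not main memory.
        self.buffers.setdefault(tid, deque()).append((addr, value))

    def read(self, tid, addr):
        # A thread reads its own most recent buffered store, if any ...
        for a, v in reversed(self.buffers.get(tid, deque())):
            if a == addr:
                return v
        # ... otherwise it reads from main memory.
        return self.mem.get(addr, 0)

    def flush_one(self, tid):
        # Chosen nondeterministically by the scheduler: commit the oldest
        # buffered store of `tid` to main memory.
        if self.buffers.get(tid):
            addr, value = self.buffers[tid].popleft()
            self.mem[addr] = value

# Example: the issuing thread observes its own write before others do.
m = TSOMemory()
m.write("T1", "x", 1)
print(m.read("T1", "x"))   # 1 (read from own store buffer)
print(m.read("T2", "x"))   # 0 (store not yet flushed to main memory)
```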
This thesis explores different verification approaches and respectively fitting representations of a multitude of memory models. In a joint effort, we started with the concept of testing memory operation traces with regard to their consistency with different memory consistency models. A memory operation trace is directly derived from a program trace and consists of a sequence of read and write operations for each process. Analyzing the testing problem, we are able to prove that the problem is NP-complete for most memory models. In that process, a satisfiability (SAT) encoding for given problem instances was developed that can be used in reachability and robustness analysis.
In order to cover all program executions instead of just a single program trace, additional representations are introduced and explored throughout this thesis. One of the representations introduced is a novel approach to specify a weak memory system using temporal logics. A set of linear temporal logic (LTL) formulas is developed that describes all properties required to restrict possible traces to those consistent to the given memory model. The resulting LTL specifications can directly be used in model checking, e.g., to check safety conditions. Unfortunately, the derived LTL specifications suffer from the state explosion problem: Even small examples, like the Peterson mutual exclusion algorithm, tend to generate huge formulas and require vast amounts of memory for verification. For this reason, it is concluded that using the proposed verification approach these specifications are not well suited for verification of real world software. Nonetheless, they provide comprehensive and formally correct descriptions that might be used elsewhere, e.g., programming or teaching.
Another approach to represent these models is operational semantics. In this thesis, operational semantics of weak memory models are provided in the form of reference machines that are both correct and complete with regard to the memory model specification. Operational semantics make it possible to simulate systems with weak memory models step by step. This provides an elegant way to study the effects that lead to weakly consistent behavior, while still providing a basis for formal verification. The operational models are then incorporated in verification tools for multithreaded software. These state space exploration tools proved suitable for the verification of multithreaded software in a weakly consistent memory environment. However, because not only the memory system but also the processor is expressed as operational semantics, some verification approaches will not be feasible due to the large size of the state space.
Finally, to tackle the aforementioned issue, a state transition system for parallel programs is proposed. The transition system is defined by a set of structural operational semantics (SOS) rules and a suitable memory structure that can cover multiple memory models. This makes it possible to influence the state space through smart representations and approximation approaches in future work.

In the present work, the behavior of thermoplastic composites is examined by means of experimental and numerical investigations. The aim of these investigations is to identify and quantify the failure behavior and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fiber-reinforced polymer composites, and to transfer the insights gained into the properties and behavior of a material model for predicting the crash behavior of these materials in transient analyses.
Representatives of the investigated classes are undrawn and medium-drawn circular knits and glass-mat-reinforced thermoplastics (GMT). The investigations on circular-knit glass-fiber (GF) reinforced polyethylene terephthalate (PET) were part of a research project on characterizing both the processability and the mechanical behavior. Experiments on GMT and chopped-fiber GMT were also carried out for comparison with the knitted fabric and serve to confirm the behavior observed for the knit.
Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to investigate this influence. This specific geometry has advantages in particular with respect to the energy absorption capacity and the manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were carried out to investigate damage propagation and to characterize the toughness of the investigated materials.
Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures in individual plies). By coupling the actual failure modes with crash characteristics such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained.
Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional, large deformations. With respect to the cross-sectional lay-up, the model consists of a mesoscopic representation that distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. The influence of friction between the impactor and the material as well as between individual plies was taken into account. The locally prevailing strain rate, energy, and stress-strain distribution across the mesoscopic phases could also be observed. This model clearly shows the various effects that arise from the heterogeneous character of the laminate and also provides hints towards explanations of some of these effects.
Based on the results of the investigations above, a phenomenological model with a priori information on the inherent material behavior is proposed. Since the crash behavior is dominated by the heterogeneous character of the material, the phases are considered separately in the model. A simple method for determining the mesoscopic properties is discussed.
To describe the behavior of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would suffice. For the description of the behavior of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fiber-matrix interface failure and fiber fracture. The proposed model distinguishes between load cases of axial crushing and failure without crushing. This distinction enables an explicit modeling of the material that takes the specific material state and the geometry into account for the exceptional load case leading to progressive failure.

Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases, such as cancer metastasis. For the migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating the contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
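As a schematic illustration (with generic notation, not the exact equations of this work), a piecewise deterministic process of the kind described couples an ODE for the mechanical state with a Markov jump process for the adhesion state:

\[
\frac{\mathrm{d}x}{\mathrm{d}t} = f\bigl(x(t), a(t)\bigr), \qquad
\mathbb{P}\bigl(a(t+\mathrm{d}t) = a' \mid a(t) = a,\ x(t) = x\bigr) = \lambda_{a \to a'}(x)\,\mathrm{d}t,
\]

where \(x\) collects the deterministic actomyosin and position variables, \(a\) is the discrete (stochastic) adhesion configuration, \(f\) encodes the contractility mechanics, and the rates \(\lambda_{a \to a'}\) describe adhesion binding and unbinding; between jumps the dynamics are purely deterministic.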

Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)

While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks in an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU scheduling of other cores, but also between the WCET and the memory bandwidth assignments to cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core’s maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory bandwidth assignment to active cores over time, where the active cores are those with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real avionics-certified safety-critical application, how our method can preserve an existing application’s single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when the changes in the memory-bandwidth-to-core assignment are time-triggered and known. It provides a stall maximization algorithm with a complexity similar to that of a concave optimization problem and efficiently implements the WCET analysis. Last, we demonstrate, using an Integrated Modular Avionics scenario, that dynamic memory bandwidth assignments and WCET analysis with our method significantly improve schedulability compared to the state of the art.
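The following sketch is only a simplified, first-order illustration of how a periodic memory bandwidth server can bound memory-induced stalls; it is not the schedulability test or the stall maximization algorithm of the dissertation, and the bound is deliberately pessimistic. All names, parameters, and the time unit (one memory access slot) are assumptions made for the example.

```python
import math

def stall_upper_bound(demand, budget, period):
    """Pessimistic stall bound for a task issuing `demand` memory accesses on a
    core whose periodic server grants `budget` access slots every `period` slots.
    Assumption: once a period's budget is exhausted, the task may be stalled for
    the remainder of that period (at most `period - budget` slots)."""
    if budget <= 0:
        raise ValueError("core has no memory bandwidth assigned")
    full_periods = math.ceil(demand / budget)
    return full_periods * (period - budget)

def wcet_with_contention(wcet_isolation, demand, budget, period):
    """Inflate an isolation WCET by the (pessimistic) worst-case memory stall."""
    return wcet_isolation + stall_upper_bound(demand, budget, period)

# Example: 10,000 accesses, 25% of the bandwidth (250 of 1,000 slots per period).
print(wcet_with_contention(wcet_isolation=5_000, demand=10_000,
                           budget=250, period=1_000))
```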

Large-scale distributed systems consist of a number of components, take a number of parameter values as input, and behave differently based on a number of non-deterministic events. All these features—components, parameter values, and events—interact in complicated ways, and unanticipated interactions may lead to bugs. Empirically, many bugs in these systems are caused by interactions of only a small number of features. In certain cases, it may be possible to test all interactions of \(k\) features for a small constant \(k\) by executing a family of tests that is exponentially or even doubly-exponentially smaller than the family of all tests. Thus, in such cases we can effectively uncover all bugs that require up to \(k\)-wise interactions of features.
In this thesis, we study two occurrences of this phenomenon. First, many bugs in distributed systems are caused by network partition faults. In most cases these bugs occur due to two or three key nodes, such as leaders or replicas, not being able to communicate, or because the leading node finds itself in a block of the partition without quorum. Second, bugs may occur due to unexpected schedules (interleavings) of concurrent events—concurrent exchange of messages and concurrent access to shared resources. Again, many bugs depend only on the relative ordering of a small number of events. We call the smallest number of events whose ordering causes a bug the depth of the bug. We show that in both testing scenarios we can effectively uncover bugs involving a small number of nodes or bugs of small depth by executing small families of tests.
We phrase both testing scenarios in terms of an abstract framework of tests, testing goals, and goal coverage. Sets of tests that cover all testing goals are called covering families. We give a general construction that shows that whenever a random test covers a fixed goal with sufficiently high probability, a small randomly chosen set of tests is a covering family with high probability. We then introduce concrete coverage notions relating to network partition faults and bugs of small depth. In the case of network partition faults, we show that for the introduced coverage notions we can find a lower bound on the probability that a random test covers a given goal. Our general construction then yields a randomized testing procedure that achieves full coverage—and hence, finds bugs—quickly.
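The general construction mentioned above can be illustrated with a standard union-bound argument (stated here with generic notation): if each of \(g\) testing goals is covered by a single random test with probability at least \(p\), and tests are drawn independently, then for

\[
N \;\ge\; \frac{\ln g + \ln(1/\delta)}{p}
\]

the probability that some goal remains uncovered after \(N\) tests is at most

\[
g\,(1-p)^{N} \;\le\; g\,e^{-pN} \;\le\; \delta,
\]

so the \(N\) random tests form a covering family with probability at least \(1-\delta\).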
In the case of coverage notions related to bugs of small depth, if the events in the program form a non-trivial partial order, our general construction may give a suboptimal bound. Thus, we study other ways of constructing covering families. We show that if the events in a concurrent program are partially ordered as a tree, we can explicitly construct a covering family of small size: for balanced trees, our construction is polylogarithmic in the number of events. For the case when the partial order of events does not have a "nice" structure, and the events and their relation to previous events are revealed while the program is running, we give an online construction of covering families. Based on the construction, we develop a randomized scheduler called PCTCP that uniformly samples schedules from a covering family and has a rigorous guarantee of finding bugs of small depth. We experiment with an implementation of PCTCP on two real-world distributed systems—Zookeeper and Cassandra—and show that it can effectively find bugs.

In the last decade, injection molding of long-fiber reinforced thermoplastics
(LFT) has been established as a low-cost, high volume technique for manufacturing
parts with complex shape without any post-treatment [1–3]. Applications
are mainly found in the automotive industry with a volume annually
growing by 10% to 15% [4].
While first applications were based on polyamide (PA6 and PA6.6), the market
share of glass fiber reinforced polypropylene (PP) is growing due to cost savings
and ease of processing. With the use of polypropylene, different processing
techniques such as gas-assisted injection molding [5] or injection compression
molding [6] have emerged in addition to injection molding [7, 8].
In order to overcome or justify higher materials costs when compared to short
fiber reinforced thermoplastics, the manufacturing techniques for LFT pellets
with fiber length greater than 10mm have evolved starting from pultrusion by
improving impregnation and throughput [9] or by direct addition of fiber strands
in the mold [10–12].
The benefit of long glass fiber reinforcement either in PP or PA is mainly due
to the enhanced resistance to fiber pull-out resulting in an increase in impact
properties and strength [13–19], even at low temperature levels [20]. Creep
and fatigue resistance are also substantially improved [21, 22].
The performance of fiber reinforced thermoplastics manufactured by injection
molding strongly depends on the flow-induced microstructure which is
driven by materials composition, processing conditions and part geometry.
The anisotropic microstructure is characterized by fiber fraction and dispersion,
fiber length and fiber orientation.
Facing the complexity of this processing technique, simulation becomes a precious
tool already in the concept phase for parts manufactured by injection
molding. Process simulation supports decisions with respect to choice of concepts
and materials. The part design is determined in terms of mold filling
including location of gates, vents and weld lines. Tool design requires the
determination of melt feeding, logistics and mold heating. Subsequently, performance
including prediction of shrinkage and warpage as well as structural
analysis is evaluated [23].
While simulation based on two-dimensional representation of three-dimensional
part geometry has been extensively used during the last two decades, the
complexity of the parts as well as the trend towards solid modelling in CAD
and CAE demands the step towards three-dimensional process simulation.
The scope of this work is the prediction of flow-induced microstructure during injection molding of long glass fiber reinforced polypropylene using three-dimensional process simulation. Modelling of the injection molding process in
three dimensions is supported experimentally by rheological characterization
in both shear and extensional flow and by two- and three-dimensional evaluation
of microstructure.
In chapter 2 the fundamentals of rheometry and rheology are presented with
respect to long fiber reinforced thermoplastics. The influence of parameters
on microstructure is described and approaches for modelling the state of microstructure
and its dynamics are discussed.
Chapter 3 introduces a rheometric technique allowing for rheological characterization
of polymer melts at processing conditions as encountered during
manufacturing. Using this rheometer, both shear and extensional viscosity of
long glass fiber reinforced polypropylene are measured with respect to composition
of materials, processing conditions and geometry of the cavity.
Chapter 4 contains the evaluation of microstructure of long glass fiber reinforced
polypropylene in terms of two-dimensional fiber orientation and its dependence
on materials parameters and processing condition. For the evaluation
of three-dimensional microstructure, a technique based on x-ray tomography
is introduced.
In chapter 5, modelling of microstructural dynamics is addressed. One-way
coupling of interactions between fluid and fibers is described macroscopically.
The flow behavior of fibers in the vicinity of cavity walls is evaluated experimentally.
From these observations, a model for treatment of fiber-wall interaction
with respect to numerical simulation is proposed.
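For orientation dynamics, one widely used macroscopic description, given here purely as a generic illustration of such models (not necessarily the formulation implemented in this work, and with sign conventions that vary across the literature), is the Folgar–Tucker equation for the second-order orientation tensor \(\mathbf{A}\):

\[
\frac{\mathrm{D}\mathbf{A}}{\mathrm{D}t}
= \mathbf{W}\cdot\mathbf{A} - \mathbf{A}\cdot\mathbf{W}
+ \xi\left(\mathbf{D}\cdot\mathbf{A} + \mathbf{A}\cdot\mathbf{D} - 2\,\mathbb{A}_4:\mathbf{D}\right)
+ 2\,C_I\,\dot{\gamma}\left(\mathbf{I} - 3\mathbf{A}\right),
\]

where \(\mathbf{D}\) and \(\mathbf{W}\) are the rate-of-deformation and vorticity tensors, \(\xi\) is a fiber shape factor, \(\mathbb{A}_4\) is the fourth-order orientation tensor requiring a closure approximation, \(C_I\) is the fiber interaction coefficient, and \(\dot{\gamma}\) is the scalar shear rate.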
Chapter 6 presents the application of three-dimensional simulation of the injection
molding process. Mold filling simulation is performed using a commercial
code while prediction of 3D fiber orientation is based on a proprietary module.
The rheological and thermal properties derived in chapter 3 are tested by
simulation of the experiments and comparison of predicted pressure and temperature
profile versus recorded results. The performance of fiber orientation
prediction is verified using analytical solutions of test examples from literature.
The capability of three-dimensional simulation is demonstrated based on the
simulation of mold filling and prediction of fiber orientation for an automotive
part.

Solid particle erosion is usually undesirable, as it leads to the development of cracks and holes, material removal, and other degradation mechanisms that, as a final consequence, reduce the durability of the structure exposed to erosion. The main aim
of this study was to characterise the erosion behaviour of polymers and polymer
composites, to understand the nature and the mechanisms of the material removal
and to suggest modifications and protective strategies for the effective reduction of
the material removal due to erosion.
In polymers, the effects of morphology and of mechanical, thermomechanical, and fracture-mechanical properties were discussed. It was established that there is no general rule for high resistance to erosive wear. Because of the different erosive wear mechanisms that can take place, wear resistance can be achieved by more than one type of material. Difficulties with materials optimisation for wear reduction arise from the fact that a material can show different behaviour depending on the impact angle and the experimental conditions. Effects of polymer modification through mixing or blending with elastomers and inclusion of nanoparticles were also discussed. Toughness modification of epoxy resin with hygrothermally decomposed polyester-urethane can be favourable for the erosion resistance. This type of modification also changes the crosslinking characteristics of the modified EP, and it was established that the crosslink density, along with the fracture energy, is a decisive parameter for the erosion response. Melt blending of thermoplastic polymers with functionalised rubbers, on the other hand, can also have a positive influence, whereas the inclusion of nanoparticles deteriorates the erosion resistance at low oblique impact angles (30°).
The effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence,
number, position and existence of interleaves were studied in polymer composites.
Linear and inverse rules of mixture were applied in order to predict the erosion rate of
a composite system as a function of the erosion rate of its constituents and their
relative content. Best results were generally delivered with the inverse rule of mixture
approach.
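The two rules of mixture referred to here have the standard forms (stated with generic symbols for illustration): with fibre and matrix volume fractions \(V_f + V_m = 1\) and constituent erosion rates \(ER_f\) and \(ER_m\), the linear and inverse rules predict the composite erosion rate \(ER_c\) as

\[
ER_c = V_f\,ER_f + V_m\,ER_m
\qquad\text{and}\qquad
\frac{1}{ER_c} = \frac{V_f}{ER_f} + \frac{V_m}{ER_m},
\]

respectively.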
A semi-empirical model, proposed to describe the property degradation and damage
growth characteristics and to predict residual properties after single impact, was
applied for the case of solid particle erosion. Theoretical predictions and experimental
results were in very good agreement.
Solid particle erosion occurs when solid particles impinge on surfaces and is usually characterized by material removal which, besides the particle velocity and the impact angle, depends strongly on the respective material. In recent years, the use of polymers and composites in place of traditional materials has increased considerably. Polymers and polymer composites exhibit a relatively high erosion rate (ER), which considerably limits the potential use of these materials under erosive environmental conditions.
Investigations of the erosion behaviour of selected polymers and polymer composites have shown that these systems follow different wear mechanisms, which are very complex and are not governed by a single material property. Based on the ER, the erosion behaviour can be roughly divided into two categories: brittle and ductile erosion behaviour. Brittle erosion behaviour shows a maximum ER at 90°, whereas for ductile behaviour the maximum lies at 30°. Whether a material exhibits one or the other erosion behaviour depends not only on its properties but also on the respective test parameters.
The aim of this research was to characterize the fundamental behaviour of polymers and composites under erosion, to identify the various wear mechanisms, and to determine the decisive material properties and characteristic values, in order to enable or improve applications of these materials under erosive conditions. For an exemplary selection of polymers, elastomers, modified polymers, and fibre-reinforced composites, the essential factors influencing erosion were determined experimentally.
Thermoplastic polymers and thermoplastic and crosslinked elastomers
Attempts to correlate the erosion resistance of selected polymers (polyethylenes and polyurethanes) with various material properties showed that there is no clear dependence either on individual characteristic values or on combinations of properties. Determining the material properties under the same experimental conditions as in the erosion tests might possibly lead to a better correlation between ER and material parameters.
Modified epoxy resin
Using a modified epoxy resin (EP) with varying crosslink density as an example, a correlation was found between erosion resistance and fracture energy as well as between erosion resistance and crosslink density. The modification was carried out with different fractions of a hygrothermally decomposed polyurethane (HD-PUR). The relationship between ER and crosslinking parameters is consistent with the theory of rubber elasticity.
Modification efficiency in thermosets, thermoplastics and elastomers
Furthermore, the influence of modifications of polymers and elastomers was investigated. The above-mentioned system (i.e. EP/HD-PUR) also allows the influence of toughness modification of the epoxy resin (EP) on the erosion behaviour to be studied. It was shown that for HD-PUR contents of more than 20 wt.% this modification has a positive influence on the erosion resistance. By varying the HD-PUR content, material properties between those of a typical thermoset and those of a less elastic rubber can be produced for this EP. The modified EP resin therefore represents a very good model material for studying the influence of the experimental conditions and for investigating whether different erodents lead to the same erosion mechanisms. The transition from thermoset-like to tough behaviour was investigated with four erodents. The tests showed that such a transition occurs when very fine, angular particles (corundum) serve as erodents. Particle size and shape are of decisive importance for the respective wear mechanisms.
The efficiency of novel thermoplastic elastomers with a co-continuous phase structure, consisting of thermoplastic polyester and rubber (functionalized NBR and EPDM rubber), was investigated with respect to erosion resistance. Large fractions of functionalized rubber (more than 20 wt.%) are beneficial for the erosion resistance. It was further investigated whether the outstanding erosion resistance of polyurethane (PUR) could be increased even further by the addition of nanosilicates. The result was that the nanoparticles have a negative effect, especially at a low impact angle (30°). The weak adhesion between matrix and particles facilitates the initiation and growth of cracks, which leads to faster material removal from the surface.
Fibre-reinforced composites
Furthermore, fibre-reinforced composites with thermoplastic and thermoset matrices were investigated with respect to their erosive wear behaviour. Of particular interest was the influence of fibre length and fibre orientation. Short-fibre-reinforced systems have a better erosion resistance than unidirectional (UD) systems. The role of fibre orientation can only be considered in connection with other parameters such as matrix toughness, fibre content, or fibre-matrix adhesion. For GF/PP composites, the systems eroded parallel to the drawing direction exhibit the lowest resistance, whereas for a GF/EP system the maximum ER occurs in the perpendicular direction. An improvement of the interfacial shear strength has a lasting influence on the erosive wear rate. If the interfacial adhesion is sufficient, the erosion direction plays an insignificant role for the ER. It was further shown that the presence of tough interleaves leads to a clear improvement of the erosion resistance of CF/EP composites.
A further task was to determine the role of the fibre volume fraction. Linear, inverse, and modified rules of mixture were applied, and it was found that the inverse rules of mixture describe the ER as a function of the fibre volume fraction better.
In the application of fibre-reinforced composites, not only knowledge of the ER but also knowledge of the residual properties is required. A semi-empirical model for predicting the impact energy threshold (Uo) for the onset of strength reduction and the residual tensile strength after an impact load was applied to the study of erosive wear. Experimental results and theoretical predictions agreed very well not only for thermoset CF/EP composites but also for composites with a thermoplastic matrix (GF/PP).

Wine and alcoholic fermentations are complex and fascinating ecosystems. Wine aroma is shaped by the wine’s chemical composition, in which both microbes and grape constituents play crucial roles. Activities of the microbial community impact the sensory properties of the final product; therefore, the characterisation of microbial diversity is essential in understanding and predicting the sensory properties of wine. Characterisation has been challenging with traditional approaches, where microbes are isolated and therefore analyzed outside of their natural environment. This causes a bias in the observed microbial community structure. In addition, true community interactions cannot be studied using isolates. Furthermore, the multiplex ties between wine chemical and sensory compositions remain elusive due to their multivariate and nonlinear nature. Therefore, the sensorial outcome arising from different microbial communities has remained inconclusive.
In this thesis, microbial diversity during Riesling wine fermentations is investigated with the aim of understanding the roles of microbial communities during fermentations and their links to sensory properties. With the advancement of high-throughput ‘omics methods, such as next-generation sequencing (NGS) technologies, it is now possible to study microbial communities and their functions without isolation by culturing. This developing field and its potential for the wine community are reviewed in Chapter 1. The standardisation of methods remains challenging in the field. DNA extraction is a key step in capturing the microbial diversity in samples for generating NGS data; therefore, DNA extraction methods are evaluated in Chapter 2. In Chapter 3, machine learning is utilized to guide the mining of raw data generated by untargeted GC-MS analysis. This step is crucial in order to take full advantage of the large scope of data generated by ‘omics methods. These chapters lay a solid foundation for Chapters 4 and 5, where microbial community structures and their outputs, the chemical and sensory compositions, are studied using approaches and tools based on multiple ‘omics methods.
The results of this thesis show, first, that by using novel statistical approaches it is possible to extract meaningful information from heterogeneous biological, chemical and sensorial data. Secondly, the results suggest that the variation in wine aroma might be related to microbial interactions taking place not only inside a single community, but also between communities, such as vineyard and winery communities. Therefore, the true sensory expression of terroir might be masked by the interaction between two microbial communities, although more work is needed to uncover this potential relationship. Such potential interaction mechanisms were uncovered between non-Saccharomyces yeasts and bacteria in this work, and unexpected novel bacterial growth was observed during alcoholic fermentation. This suggests new layers in the understanding of wine fermentations. In the future, multi-omics approaches could be applied to identify biological pathways leading to specific wine aromas as well as to investigate the effects of specific winemaking conditions. These results are relevant not just for the wine industry, but also for other industries where complex microbial networks are important. As such, the approaches presented in this thesis might find wide use in the food industry.

Ranking lists are an essential methodology to succinctly summarize outstanding items, computed over database tables or crowdsourced on dedicated websites. In this thesis, we propose the usage of automatically generated, entity-centric rankings to discover insights in data. We present PALEO, a framework for data exploration through reverse engineering top-k database queries; that is, given a database and a sample top-k input list, our approach aims at determining an SQL query that returns results similar to the provided input when executed over the database. The core problem consists of finding selection predicates that return the given items, determining the correct ranking criteria, and evaluating the most promising candidate queries first. PALEO operates on a subset of the base data, uses data samples, histograms, and descriptive statistics, and further proposes models that assess the suitability of candidate queries, which facilitates the limitation of false positives.
Furthermore, this thesis presents COMPETE, a novel approach that models and computes dominance over user-provided input entities, given a database of top-k rankings. The resulting entities are found to be superior or inferior with a tunable degree of dominance over the input set---a very intuitive, yet insightful way to explore the pros and cons of entities of interest. Several notions of dominance are defined which differ in computational complexity and strictness of the dominance concept---yet are interdependent through containment relations. COMPETE is able to pick the most promising approach to satisfy a user request at minimal runtime latency, using a probabilistic model that estimates the result sizes. The individual flavors of dominance are cast into a stack of algorithms over inverted indices and auxiliary structures, enabling pruning techniques that avoid significant data access over large datasets of rankings.
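Returning to the core problem addressed by PALEO, the following deliberately naive sketch (not PALEO's algorithm, which relies on samples, histograms, and candidate-suitability models) reverse engineers a top-k query over a toy table: it enumerates candidate equality predicates and ranking columns and keeps those whose top-k result matches the observed list. All table and column names are made up.

```python
def top_k(rows, predicate, order_col, k):
    selected = [r for r in rows if predicate(r)]
    selected.sort(key=lambda r: r[order_col], reverse=True)
    return [r["id"] for r in selected[:k]]

def reverse_engineer(rows, observed_ids, categorical_cols, numeric_cols):
    """Brute-force search for (equality predicate, ranking column) pairs whose
    top-k result reproduces the observed ranking."""
    k = len(observed_ids)
    candidates = []
    for col in categorical_cols:
        for value in {r[col] for r in rows}:
            pred = lambda r, c=col, v=value: r[c] == v
            for order_col in numeric_cols:
                if top_k(rows, pred, order_col, k) == observed_ids:
                    candidates.append((f"{col} = {value!r}", order_col))
    return candidates

rows = [
    {"id": "a", "league": "east", "points": 88, "wins": 51},
    {"id": "b", "league": "east", "points": 75, "wins": 43},
    {"id": "c", "league": "west", "points": 91, "wins": 55},
    {"id": "d", "league": "east", "points": 69, "wins": 40},
]
# Observed top-2 list: which predicate/ranking criterion could have produced it?
print(reverse_engineer(rows, ["a", "b"], ["league"], ["points", "wins"]))
# [("league = 'east'", 'points'), ("league = 'east'", 'wins')]
```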

Function of two redox sensing kinases from the methanogenic archaeon Methanosarcina acetivorans
(2019)

MsmS is a heme-based redox sensor kinase in Methanosarcina acetivorans consisting of alternating PAS and GAF domains connected to a C-terminal kinase domain. In addition to MsmS, M. acetivorans possesses a second kinase, MA0863, with high sequence similarity. Interestingly, MA0863 possesses an amber codon in its second GAF domain, encoding the amino acid pyrrolysine. Thus far, no function of this residue has been resolved. In order to examine the heme iron coordination in both proteins, an improved method for the production of heme proteins was established using the Escherichia coli strain Nissle 1917. This method enables the complete reconstitution of a recombinant hemoprotein during protein production, thereby resulting in a native heme coordination. Analysis of the full-length MsmS and MA0863 confirmed a covalently bound heme cofactor, which is connected to one conserved cysteine residue in each protein. In order to identify the coordinating amino acid residues of the heme iron, UV/vis spectra of different variants were measured. These studies revealed His702 in MsmS and the corresponding His666 in MA0863 as the proximal heme ligands. MsmS has previously been described as a heme-based redox sensor. In order to examine whether the same is true for MA0863, redox-dependent kinase assays were performed. MA0863 indeed displays redox-dependent autophosphorylation activity, which is independent of heme ligands and only observed under oxidizing conditions. Interestingly, autophosphorylation was shown to be independent of the heme cofactor and to rely instead on thiol oxidation. Therefore, MA0863 was renamed RdmS (redox-dependent methyltransferase-associated sensor). In order to identify the phosphorylation site of RdmS, thin layer chromatography was performed, identifying a tyrosine as the putative phosphorylation site. This observation is in agreement with the lack of a so-called H-box in typical histidine kinases. Due to their genomic localization, MsmS and RdmS were postulated to form two-component systems (TCS) with the vicinally encoded regulator proteins MsrG and MsrF. Therefore, protein-protein interaction studies using the bacterial adenylate cyclase two-hybrid system were performed, suggesting an interaction of RdmS and MsmS with the three regulators MsrG/F/C. Due to these multiple interactions, these signal transduction pathways should rather be considered multicomponent systems instead of two-component systems.

Wearable activity recognition aims to identify and assess human activities with the help
of computer systems by evaluating signals of sensors which can be attached to the human
body. This provides us with valuable information in several areas: in health care, e.g. fluid
and food intake monitoring; in sports, e.g. training support and monitoring; in entertainment,
e.g. human-computer interface using body movements; in industrial scenarios, e.g.
computer support for detected work tasks. Several challenges exist for wearable activity
recognition: a large number of nonrelevant activities (null class), the evaluation of large
numbers of sensor signals (curse of dimensionality), ambiguity of sensor signals compared
to the activities and finally the high variability of human activity in general.
This thesis develops a new activity recognition strategy, called invariants classification,
which addresses these challenges, especially the variability in human activities. The
core idea is that often even highly variable actions include short, more or less invariant
sub-actions which are due to hard physical constraints. If someone opens a door, the movement of the hand to the door handle is not fixed; however, the door handle has to be pushed to open the door. The invariants classification algorithm is structured in four phases: segmentation, invariant identification, classification, and spotting. The segmentation divides the continuous sensor data stream into meaningful parts, which are related to sub-activities. Our segmentation strategy uses the zero crossings of the central difference quotient of the sensor signals as segment borders. The invariant identification finds
the invariant sub-activities by means of clustering and a selection strategy dependent on
certain features. The classification identifies the segments of a specific activity class, using
models generated from the invariant sub-activities. The models include the invariant
sub-activity signal and features calculated on sensor signals related to the sub-activity. In
the spotting, the classified segments are used to find the entire activity class instances in
the continuous sensor data stream. For this purpose, we use the position of the invariant
sub-activity in the related activity class instance for the estimation of the borders of the
activity instances.
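A minimal sketch of the segmentation step described above, assuming a one-dimensional sensor signal sampled at a fixed rate (this is an illustration, not the exact implementation of the thesis):

```python
import numpy as np

def segment_borders(signal):
    """Return indices where the central difference quotient of `signal`
    changes sign (zero crossings), used as candidate segment borders."""
    x = np.asarray(signal, dtype=float)
    # Central difference quotient d[i] ~ (x[i+1] - x[i-1]) / 2 for inner samples.
    d = (x[2:] - x[:-2]) / 2.0
    sign = np.sign(d)
    # A zero crossing occurs where consecutive derivative values change sign.
    crossings = np.where(np.diff(sign) != 0)[0] + 1  # offset into d
    return crossings + 1  # shift back to indices of the original signal

t = np.linspace(0, 4 * np.pi, 200)
borders = segment_borders(np.sin(t))
segments = np.split(np.sin(t), borders)  # pieces between extrema of the signal
print(len(segments), borders[:3])
```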
In this thesis, we show that our new activity recognition strategy, built on invariant
sub-activities, is beneficial. We tested it on three human activity datasets with wearable
inertial measurement units (IMU). Compared to previous publications on the same datasets, we achieved improvements in activity recognition in several classes, some by a large margin. Our segmentation provides a sensible method to separate the sensor data in relation to the underlying activities. Relying on sub-activities makes us independent of imprecise labels in the training data. After the identification of invariant sub-activities, we calculate a value called cluster precision for each sensor signal and each activity class. This tells us which classes can be easily classified and which sensor channels support the classification best. Finally, in the training for each activity class, our algorithm selects suitable signal channels with invariant sub-activities at different points in time and with different lengths. This makes our strategy a multi-dimensional asynchronous motif detection with variable motif length.

Study 1 (Chapter 2) is an empirical case study that concerns the nature of teaching–learning transactions that facilitate self-directed learning in vocational education and training of young adults in England. It addresses in part the concern that fostering the skills necessary for self-directed learning is an important endeavor of vocational education and training in many contexts internationally. However, there is a distinct lack of studies that investigate the extent to which facilitation of self-directed learning is present within vocational education and training in different contexts. An exploratory thematic qualitative analysis of inspectors’ comments within general Further Education college Ofsted inspection reports was conducted to investigate the balance of control of the learning process between teacher and learner within vocational education and training of young adults in England. A clear difference between outstanding and inadequate provision is reported. Inadequate provision was overwhelmingly teacher-directed. Outstanding provision reflected a collaborative relationship between teacher and learner in directing the learning process, despite the Ofsted framework not explicitly identifying the need for learner involvement in directing the learning process. The chapter offers insight into the understanding of how an effective balance of control of learning between teacher and learner may be realized in vocational education and training settings and highlights the need to consider the modulating role of contextual factors.
Following the further research directions outlined in Chapter 2, study 2 (Chapter 3) is a theoretical chapter that addresses the issue that fostering adult learners’ competence to adapt appropriately to our ever-changing world is a primary concern of adult education. The purpose of the chapter is novel and examines whether the consideration of modes of learning (instruction, performance, and inquiry) could assist in the design of adult education that facilitates self-directed learning and enables learners to think and perform adaptively. The concept of modes of learning originated from the typology of Houle (1980). However, to date, no study has reached beyond this typology, especially concerning the potential of using modes of learning in the design of adult education. Specifically, an apparent oversight in adult learning theory is the foremost importance of the consideration of whether inquiry is included in the learning process: its inclusion potentially differentiates the purpose of instruction, the nature of learners’ performance, and the underlying epistemological positioning. To redress this concern, two models of modes of learning are proposed and contrasted. The reinforcing model of modes of learning (instruction, performance, without inquiry) promotes teacher-directed learning. A key consequence of employing this model in adult education is that learners may become accustomed to habitually reinforcing patterns of perceiving, thinking, judging, feeling, and acting—performance that may be rather inflexible and represented by a distinct lack of a perceived need to adapt to social contextual changes: a lack of motivation for self-directed learning. Rather, the adapting model of modes of learning (instruction, performance, with inquiry) may facilitate learners to be adaptive in their performance—by encouraging an enhanced learner sensitivity toward changing social contextual conditions: potentially enhancing learners’ motivation for self-directed learning.
In line with the further research directions highlighted in Chapter 3, concerning the need to consider the nature and treatment of educational experiences that are conducive to learner growth and development, study 3 (Chapter 4) presents a systematic review of experiential learning theory, a theory that perhaps cannot be uncoupled from self-directed learning theory, especially in regard to understanding the cognitive aspect of self-directed learning, which represents an important direction for further research on self-directed learning. D. A. Kolb’s (1984) experiential learning cycle is perhaps the most scholarly influential and cited model regarding experiential learning theory. However, a key issue in interpreting Kolb’s model concerns a lack of clarity regarding what exactly constitutes a concrete experience. A systematic literature review was conducted in order to examine: what constitutes a concrete experience and what is the nature of treatment of a concrete experience in experiential learning? The analysis revealed five themes: learners are involved, active participants; knowledge is situated in place and time; learners are exposed to novel experiences, which involves risk; learning demands inquiry into specific real-world problems; and critical reflection acts as a mediator of meaningful learning. Accordingly, a revision to Kolb’s model is proposed: experiential learning consists of contextually rich concrete experience, critical reflective observation, context-specific abstract conceptualization, and pragmatic active experimentation. Further empirical studies are required to test the proposed model. Finally, in Chapter 5 key findings of the studies are summarized, including that the models proposed in Chapters 3 and 4 (Figures 2 and 4, respectively) may be important considerations for further research on self-directed learning.

In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that, we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
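In generic notation (for illustration only; the thesis works with the discrete-time analogue and with general utilities), the worst-case portfolio problem behind this construction maximizes expected utility of terminal wealth under the worst admissible crash:

\[
\sup_{\pi}\ \inf_{(\tau,\,k)}\ \mathbb{E}\!\left[\,U\bigl(X_T^{\pi,\tau,k}\bigr)\right],
\qquad k \in [0, k^{*}],
\]

where \(\pi\) is the portfolio strategy, \(\tau\) the (unknown) crash time, \(k\) the crash size of the risky asset bounded by \(k^{*}\), \(X_T\) the terminal wealth, and \(U\) the utility function, e.g. \(U(x) = \ln x\) in the logarithmic case; the indifference principle balances the value obtained in the crash and no-crash scenarios.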
In order to further study the relation of the worst-case optimal value function in discrete-time models to that in continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation, we verify the worst-case optimal value function in discrete-time models, which satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved by using a viscosity solution method.

Graphs and flow networks are important mathematical concepts that enable the modeling and analysis of a large variety of real-world problems in different domains such as engineering, medicine or computer science. The number, size and complexity of those problems have increased steadily during the last decades. This has led to an increased demand for techniques that help domain experts understand their data and its underlying structure to enable an efficient analysis and decision-making process.
To tackle this challenge, this work presents several new techniques that utilize concepts of visual analysis to provide domain scientists with new visualization methodologies and tools. Therefore, this work provides novel concepts and approaches for diverse aspects of the visual analysis such as data transformation, visual mapping, parameter refinement and analysis, model building and visualization as well as user interaction.
The presented techniques form a framework that provides domain scientists with new visual analysis tools and helps them analyze their data and gain insight from the underlying structures. To show the applicability and effectiveness of the presented approaches, this work tackles different applications such as networking, product flow management and vascular systems, while preserving the generality to be applicable to further domains.

The simulation of physical phenomena involving the dynamic behavior of fluids and gases
has numerous applications in various fields of science and engineering. Of particular interest
is the material transport behavior, the tendency of a flow field to displace parts of the
medium. Therefore, many visualization techniques rely on particle trajectories.
Lagrangian Flow Field Representation. In typical Eulerian settings, trajectories are
computed from the simulation output using numerical integration schemes. Accuracy concerns
arise because, due to limitations of storage space and bandwidth, often only a fraction
of the computed simulation time steps are available. Prior work has shown empirically that
a Lagrangian, trajectory-based representation can improve accuracy [Agr+14]. Determining
the parameters of such a representation in advance is difficult; a relationship between the
temporal and spatial resolution and the accuracy of resulting trajectories needs to be established.
We provide an error measure for upper bounds of the error of individual trajectories.
We show how areas at risk for high errors can be identified, thereby making it possible to
prioritize areas in time and space to allocate scarce storage resources.
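For context, trajectories in an Eulerian (grid-based) flow representation are typically obtained by numerically integrating the velocity field; the sketch below shows a classical fourth-order Runge-Kutta step for a field given as a callable. The velocity field and step size are placeholders, and this is not the thesis's error-measure computation.

```python
import numpy as np

def rk4_step(velocity, pos, t, dt):
    """One classical RK4 step of the particle advection ODE dx/dt = v(x, t)."""
    k1 = velocity(pos, t)
    k2 = velocity(pos + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(pos + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(pos + dt * k3, t + dt)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trace(velocity, seed, t0, dt, steps):
    """Integrate a single pathline from `seed`; smaller dt (or denser stored
    time steps in the Lagrangian setting) generally reduces the error."""
    pos, t, path = np.asarray(seed, dtype=float), t0, [np.asarray(seed, float)]
    for _ in range(steps):
        pos = rk4_step(velocity, pos, t, dt)
        t += dt
        path.append(pos)
    return np.array(path)

# Placeholder velocity field: rigid rotation about the origin.
rotation = lambda p, t: np.array([-p[1], p[0]])
print(trace(rotation, seed=(1.0, 0.0), t0=0.0, dt=0.1, steps=5)[-1])
```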
Comparative Visual Analysis of Flow Field Ensembles. Independent of the representation,
errors of the simulation itself are often caused by inaccurate initial conditions,
limitations of the chosen simulation model, and numerical errors. To gain a better understanding
of the possible outcomes, multiple simulation runs can be calculated, resulting in
sets of simulation output referred to as ensembles. Of particular interest when studying the
material transport behavior of ensembles is the identification of areas where the simulation
runs agree or disagree. We introduce and evaluate an interactive method that enables application
scientists to reliably identify and examine regions of agreement and disagreement,
while taking into account the local transport behavior within individual simulation runs.
Particle-Based Representation and Visualization of Uncertain Flow Data Sets. Unlike
simulation ensembles, where uncertainty of the solution appears in the form of different
simulation runs, moment-based Eulerian multi-phase fluid simulations are probabilistic in
nature. These simulations, used in process engineering to simulate the behavior of bubbles in
liquid media, are aimed toward reducing the need for real-world experiments. The locations
of individual bubbles are not modeled explicitly, but stochastically through the properties of
locally defined bubble populations. Comparisons between simulation results and physical
experiments are difficult. We describe and analyze an approach that generates representative
sets of bubbles for moment-based simulation data. Using our approach, application scientists
can directly, visually compare simulation results and physical experiments.

Economics of Downside Risk
(2019)

Ever since the establishment of portfolio selection theory by Markowitz (1952), the use of standard deviation as a measure of risk has been heavily criticized. The aim of this thesis is to refine classical portfolio selection and asset pricing theory by using a downside deviation risk measure. It is defined as the below-target semideviation and referred to as downside risk.
Downside efficient portfolios maximize expected payoff given a prescribed upper bound for downside risk and, thus, are analogs to mean-variance efficient portfolios in the sense of Markowitz. The present thesis provides an alternative proof of existence of downside efficient portfolios and identifies a sufficient criterion for their uniqueness. A specific representation of their form brings structural similarity to mean-variance efficient portfolios to light. Eventually, a separation theorem for the existence and uniqueness of portfolios that maximize the trade-off between downside risk and return is established.
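Written out in generic notation (the thesis's precise definitions may differ in details such as the choice of target), the below-target semideviation of a payoff \(X\) with respect to a target \(\tau\), and the corresponding efficient-portfolio problem, read

\[
d_{\tau}(X) \;=\; \sqrt{\mathbb{E}\bigl[\bigl((\tau - X)^{+}\bigr)^{2}\bigr]},
\qquad
\max_{\pi}\ \mathbb{E}\bigl[X^{\pi}\bigr]
\quad \text{s.t.} \quad d_{\tau}\bigl(X^{\pi}\bigr) \le c,
\]

where \((\cdot)^{+}\) denotes the positive part, \(X^{\pi}\) the payoff of portfolio \(\pi\), and \(c\) the prescribed upper bound for downside risk.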
The notion of a downside risk asset market equilibrium (DRAME) in an asset market with finitely many investors is introduced. This thesis addresses the existence and uniqueness problem of such equilibria and specifies a DRAME pricing formula. In contrast to prices obtained from the mean-variance CAPM pricing formula, DRAME prices are arbitrage-free and strictly positive.
The final part of this thesis addresses practical issues. An algorithm that allows for an effective computation of downside efficient portfolios from simulated or historical financial data is outlined. In a simulation study, it is revealed in which scenarios downside efficient portfolios
outperform mean-variance efficient portfolios.

Carotenoids are organic lipophilic tetraterpenes ubiquitously present in Nature and found across the three domains of life (Archaea, Bacteria and Eukaryotes). Their structure is characterized by an extensive conjugated double-bond system, which serves as a light-absorbing chromophore, hence determining their colour, and enables carotenoids to absorb energy from other molecules and to act as antioxidant agents. Humans obtain carotenoids mainly via the consumption of fruits and vegetables, and to a smaller extent from other food sources such as fish and eggs. The concentration of carotenoids in the human plasma and tissues has been associated with a lower incidence of several chronic diseases including cancer, diabetes, macular degeneration and cardiovascular conditions, likely due to their antioxidant properties. However, an important aspect of some carotenoids, namely β- and α-carotene and β-cryptoxanthin, for human health and development is their potential to be converted by the body into vitamin A.
Yet, bioavailability of carotenoids is relatively low (< 30%) and dependent, among other things, on dietary factors, such as the amount and type of dietary lipids and the presence of dietary fibres. One dietary factor that has been found to negatively impact carotenoid bioaccessibility and cellular uptake in vitro is high concentrations of divalent cations during simulated gastro-intestinal digestion. Nevertheless, the mechanism of action of divalent cations remains unclear. The goal of this thesis was to better understand how divalent cations act during digestion and modulate carotenoid bioavailability. In vitro trials of simulated gastro-intestinal digestion and cellular uptake were run to investigate how varying concentrations of calcium, magnesium and zinc affected the bioaccessibility of both pure carotenoids and carotenoids from food matrices. In order to validate or refute the results obtained in vitro, a randomized, double-blinded, placebo-controlled cross-over postprandial trial (24 male participants) was carried out, testing the effect of 3 supplementary calcium doses (0 mg, 500 mg and 1000 mg) on the bioavailability of carotenoids from a spinach-based meal. In vitro trials showed that the addition of divalent cations significantly decreased the bioaccessibility of both pure carotenoids (P < 0.001) and those from food matrices (P < 0.01). This effect was dependent on the type of mineral and its concentration. The strongest effects were seen for increasing concentrations of calcium, followed by magnesium and zinc. The addition of divalent cations also altered the physico-chemical properties, i.e. viscosity and surface tension, of the digesta. However, the extent of this effect varied according to the type of matrix. The effects on bioaccessibility and physico-chemical properties were accompanied by variations of the zeta-potential of the particles in solution. Taken together, results from the in vitro trials strongly suggested that divalent cations were able to bind bile salts and other surfactant agents, affecting their solubility. The observed i) decrease in macroviscosity, ii) increase in surface tension, and iii) reduction of the zeta-potential of the digesta confirmed the removal of surfactant agents from the system, most likely due to precipitation as a result of the lower solubility of the mineral-surfactant complexes. As such, micellarization of carotenoids was hindered, explaining their reduced bioaccessibility. As for the human trial, results showed that there was no significant influence of supplementation with either 500 or 1000 mg of supplemental calcium (in the form of carbonate) on the bioavailability of carotenoids from a spinach-based meal, as measured by the area under the curve of carotenoid concentrations in the plasma triacylglycerol-rich fraction, suggesting that the in vitro results are not supported in such an in vivo scenario, which may be explained by the initially low bioaccessibility of spinach carotenoids and the dissolution kinetics of the calcium pills. Further investigations are necessary to understand how divalent cations act during in vivo digestion and potentially interact with lipophilic nutrients and food constituents.

The fact that long fibre reinforced thermoplastic composites (LFT) have higher tensile
strength, modulus and even toughness, compared to short fibre reinforced
thermoplastics with the same fibre loading has been well documented in literature.
These are the underlying factors that have made LFT materials one of the most
rapidly growing sectors of plastics industry. New developments in manufacturing of
LFT composites have led to improvements in mechanical properties and price
reduction, which has made these materials an attractive choice as a replacement for
metals in automobile parts and other similar applications. However, there are still
several open scientific questions concerning the material selection leading to the
optimal property combinations. The present work is an attempt to clarify some of
these questions. The target was to develop tools that can be used to modify, or to
“tailor”, the properties of LFT composite materials, according to the requirements of
automobile and other applications.
The present study consisted of three separate case studies, focusing on the current
scientific issues on LFT material systems. The first part of this work was focused on
LGF reinforced thermoplastic styrenic resins. The target was to find suitable maleic
acid anhydride (MAH) based coupling agents in order to improve the fibre-matrix
interfacial strength, and, in this way, to develop an LGF concentrate suitable for
thermoplastic styrenic resins. It was shown that the mechanical properties of LGF
reinforced “styrenics” were considerably improved when a small amount of MAH
functionalised polymer was added to the matrix. This could be explained by the better fibre-matrix adhesion, revealed by scanning electron microscopy of fracture surfaces.
A novel LGF concentrate concept showed that one particular base material can be
used to produce parts with different mechanical and thermal properties by diluting the
fibre content with different types of thermoplastic styrenic resins. Therefore, this
concept allows a flexible production of parts, and it can be used in the manufacturing
of interior parts for automobile components.
The second material system dealt with so-called hybrid composites, consisting of
long glass fibre reinforced polypropylene (LGF-PP) and mineral fillers like calcium
carbonate and talcum. The aim was to get more information about the fracture
behaviour of such hybrid composites under tensile and impact loading, and to
observe the influence of the fillers on properties. It was found that, in general, the addition of fillers to LGF-PP increased stiffness, but strength and fracture toughness decreased. However, calcium carbonate and talcum fillers resulted in different mechanical properties when added to LGF-PP: better mechanical
properties were achieved by using talcum, compared to calcium carbonate. This
phenomenon could be explained by the different nucleation effect of these fillers,
which resulted in a different crystalline morphology of polypropylene, and by the
particle orientation during the processing when talc was used. Furthermore, the
acoustic emission study revealed that the fracture mode of LGF-PP changed when
calcium carbonate was added. The characteristic acoustic signals revealed that the
addition of filler led to fibre debonding at an earlier stage of the fracture sequence
when compared to unfilled LGF-PP.
In the third material system, the target was to develop a novel long glass fibre
reinforced composite material based on the blend of polyamide with thermoset
resins. In this study a blend of polyamide-66 (PA66) and phenol formaldehyde resin
(PFR) was used. The chemical structure of the PA66-PFR resin was analysed by
using small molecular weight analogues corresponding to PA66 and PFR
components, as well as by carrying out experiments using the macromolecular
system. Theoretical calculations and experiments showed that there is strong
hydrogen bonding between the carboxylic groups of PA66 and the hydroxyl groups
of PFR, exceeding even the strength of amide-water hydrogen bonds. This was
shown to lead to miscible blends when PFR was not crosslinked. It was also
found that the morphology of such thermoplastic-thermoset blends can be controlled
by altering the ratio of the blend components (PA66, PFR and crosslinking agent). In the
next phase, PA66-PFR blends were reinforced with long glass fibres. The studies
showed that the water absorption of the blend samples was considerably decreased,
which was also reflected in improved mechanical properties at the equilibrium state.
As numerous studies and application examples show, long fibre reinforced thermoplastics (LFT) offer better tensile strength, flexural and impact toughness than short fibre reinforced thermoplastics. These advantages in mechanical properties have made LFT a rapidly growing sector of the plastics industry. In recent years, new developments in LFT manufacturing have brought further improvements in mechanical properties as well as price reductions, which makes LFT an attractive choice, among other things, as a replacement for metals in automobile parts. However, some scientific questions remain open, for example regarding the material composition required to achieve optimal property combinations. The present work attempts to answer some of these questions. The aim was to develop approaches with which the properties of LFT can be influenced in a targeted way and thus adapted, or "tailored", to the requirements of automotive and other applications.
The present work consists of three parts, each focusing on a different material system chosen according to the current needs and interests of industry.
The first part of the work addresses the property optimization of long glass fibre (LGF) reinforced thermoplastic styrene copolymers and of blends of these materials. Suitable coupling agents based on maleic acid anhydride (MAH) were identified in order to optimize the fibre-matrix adhesion. Furthermore, an LGF concentrate was developed that is compatible with various thermoplastic styrene copolymers and can thus be used as a "reinforcing additive".
The concept for a new LGF concentrate based on this compatible material system focuses in particular on providing a single base material for part production, with which different mechanical and thermomechanical properties can be adjusted by blending in different styrene copolymers and blends. This concept enables very flexible part production and will find application in the manufacturing of components, for example in automotive interiors.
The second material system is based on so-called hybrid composites, composed of long glass fibres and mineral fillers such as calcium carbonate and talcum in a polypropylene (PP) matrix. The aim was to obtain detailed information on the fracture behaviour of these hybrid composites under tensile and impact loading by means of fracture mechanics analyses, and to document the differences between the various fillers with respect to their properties. It was observed that adding the fillers to LGF-PP generally improved the stiffness further, while strength and impact toughness decreased. Moreover, the fillers calcium carbonate and talcum led to different mechanical properties when used together with the LGF reinforcement: with talcum, a considerably better impact toughness was obtained than with calcium carbonate. This phenomenon could be explained by the different nucleation behaviour of the PP, which resulted in a different crystalline morphology of the polypropylene. Furthermore, acoustic emission measurements during tensile loading of a fracture mechanics specimen showed that the higher fracture toughness of unfilled LGF-PP results from fibre pull-out already occurring at lower forces.

Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly pronounced in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than common magnetostrictive materials. These materials have wide application areas: they are used as variable-stiffness devices, as sensors and actuators in mechanical systems, or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
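To fix the terminology, the generic abstract form of a saddle point problem with a penalty term, of the kind referred to above, reads as follows (this is the standard textbook template, not the specific magnetoelastic system derived in the thesis): find \((u,p)\in V\times Q\) such that
\[
a(u,v) + b(v,p) = f(v)\ \ \forall v\in V, \qquad b(u,q) - c(p,q) = g(q)\ \ \forall q\in Q,
\]
where \(c(\cdot,\cdot)\) is the penalty form. Unique solvability hinges, besides coercivity of \(a\), on an inf-sup condition of the type
\[
\inf_{q\in Q}\,\sup_{v\in V}\,\frac{b(v,q)}{\|v\|_V\,\|q\|_Q}\;\geq\;\beta\;>\;0 ,
\]
which has to be verified in both the continuous and the discrete setting.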
The presented models are supplemented with numerical simulations carried out with MATLAB for different examples, including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in the state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.

Materials in general can be divided into insulators, semiconductors and conductors,
depending on their degree of electrical conductivity. Polymers are classified as
electrically insulating materials, having electrical conductivity values lower than 10⁻¹² S/cm.
Due to their favourable characteristics, e.g. their good physical properties and
their low density, which results in weight reduction, polymers are also
considered for applications where a certain degree of conductivity is required. The
main aim of this study was to develop electrically conductive composite materials
based on an epoxy (EP) matrix and to study their thermal, electrical, and mechanical
properties. The target values of electrical conductivity were mainly in the range of
electrostatic discharge protection (ESD, 10⁻⁹–10⁻⁶ S/cm).
Carbon fibres (CF) were the first type of conductive filler used. It was established that
there is a significant influence of the fibre aspect ratio on the electrical properties of
the fabricated composite materials. With longer CF the percolation threshold value
could be achieved at lower concentrations. In addition to the homogeneous CF/EP
composites, graded samples were also developed: by means of a centrifugation
method, the CF formed a graded distribution along one dimension of the samples.
The effect of the different processing parameters on the resulting graded structures,
and consequently on the gradients in the electrical and mechanical properties,
was systematically studied.
An intrinsically conductive polyaniline (PANI) salt was also used for enhancing the
electrical properties of the EP. In this case, a much lower percolation threshold was
observed compared to that of CF. Up to a particular concentration, PANI was found
to have only a minimal influence on the thermal and mechanical properties of the
EP system.
Furthermore, the two above-mentioned conductive fillers were jointly added to the EP
matrix. Improved electrical and mechanical properties were observed by this
incorporation. A synergy effect between the two fillers took place regarding the
electrical conductivity of the composites.
The last part of this work dealt with the application of existing theoretical
models for the prediction of the electrical conductivity of the developed polymer composites. A good correlation between the simulations and the experiments was
observed.
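A common example of such a description, given here only to illustrate the kind of model typically applied to filler-loaded polymers (not necessarily the specific model used in this work), is the classical percolation scaling law above the threshold,
\[
\sigma(\varphi) \;=\; \sigma_0\,\bigl(\varphi - \varphi_c\bigr)^{t} \qquad (\varphi > \varphi_c),
\]
where \(\varphi\) is the filler volume fraction, \(\varphi_c\) the percolation threshold, and \(t\) a critical exponent fitted to the measured conductivity data.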
In general, materials are classified as insulators, semiconductors or conductors according to their electrical conductivity. With an electrical conductivity lower than 10⁻¹² S/cm, polymers belong to the group of insulators. Owing to advantageous properties of polymers, such as their good physical properties and their low density, which contributes to weight reduction, polymers are also considered for applications in which a certain degree of conductivity is required. The main goal of this study was to develop electrically conductive composites based on epoxy resin (EP) and to study their electrical, mechanical and thermal properties. The target values of the electrical conductivity were mainly in the range of electrostatic discharge protection (ESD, 10⁻⁹–10⁻⁶ S/cm).
For the production of electrically conductive plastics, carbon fibres (CF) were first used as conductive fillers. The experiments showed that the fibre aspect ratio has a significant influence on the electrical properties of the fabricated composites. With longer CF, the percolation threshold was reached at a lower concentration. In addition to the homogeneous CF/EP composites, functionally graded materials were also developed. By means of centrifugation, a graded distribution of the CF along the longitudinal axis of the specimens was achieved. The effects of the different centrifugation parameters on the resulting graded materials and on the resulting graded electrical and mechanical properties were studied systematically.
An intrinsically conductive polyaniline salt (PANI) was also used to enhance the electrical properties of the EP. In this case, a much lower percolation threshold was observed than with CF. Up to a certain concentration, the use of PANI has only a minimal influence on the thermal and mechanical properties of the EP system.
In a third step, the two conductive fillers mentioned above were added jointly to the EP matrix. Improved electrical and mechanical properties were observed in this case, with a synergy effect between the two fillers with respect to the electrical conductivity of the composites.
In the last part of this work, theoretical models were applied to predict the electrical conductivity of the developed composites. A good agreement with the experimental results was observed.

This thesis addresses several challenges for sustainable logistics operations and investigates (1) the integration of intermediate stops in the route planning of transportation vehicles, which especially becomes relevant when alternative-fuel vehicles with limited driving range or a sparse refueling infrastructure are considered, (2) the combined planning of the battery replacement infrastructure and of the routing for battery electric vehicles, (3) the use of mobile load replenishment or refueling possibilities in environments where the respective infrastructure is not available, and (4) the additional consideration of the flow of goods from the end user in backward direction to the point of origin for the purpose of, e.g., recapturing value or proper disposal. We utilize models and solution methods from the domain of operations research to gain insights into the investigated problems and thus to support managerial decisions with respect to these issues.

The demand for sustainability is continuously increasing. Thermoplastic
composites have therefore become a focus of research due to their good weight-to-performance
ratio. Nevertheless, the limiting factor for their use in some processes is the loss of
consolidation during re-melting (deconsolidation), which reduces part quality.
Several studies dealing with deconsolidation are available, but each investigates
a single material and process, which limits their usefulness for general
interpretations as well as their comparability to other studies. There are two main
approaches: the first identifies the internal void pressure as the main
cause of deconsolidation, while the second identifies the fiber reinforcement
network as the main cause. Because of their contradictory results and the limited variety of
materials and processes, there is a strong need for a more comprehensive investigation
across several materials and processes.
This study investigates the deconsolidation behavior of 17 different materials and
material configurations considering commodity, engineering, and performance
polymers as well as a carbon and two glass fiber fabrics. Based on the first law of
thermodynamics, a deconsolidation model is proposed and verified by experiments.
Universally applicable input parameters are proposed for the prediction of
deconsolidation, minimizing the required input measurements. The study revealed
that the fiber reinforcement network is the main cause of deconsolidation, especially
for fiber volume fractions higher than 48 %. The internal void pressure can promote
deconsolidation when the specimen has been manufactured recently; in other cases, the
internal void pressure as well as the surface tension prevents deconsolidation.
During deconsolidation the polymer is displaced by the volume increase of the void.
The polymer flow damps the progress of deconsolidation because of the internal
friction of the polymer. The crystallinity and the thermal expansion lead to a
reversible thickness increase during deconsolidation. Moisture can greatly accelerate
deconsolidation and can increase the thickness several-fold because of the
vaporization of water. The model is also capable of predicting reconsolidation under the
defined boundary conditions of pressure, time, and specimen size. At high pressures,
matrix squeeze-out occurs, which reduces the accuracy of the model.
The proposed model was applied to thermoforming, induction welding, and
thermoplastic tape placement. It is demonstrated that the load rate during
thermoforming is the critical factor for achieving complete reconsolidation. The
required load rate can be determined by the model and is dependent on the cooling
rate, the forming length, the extent of deconsolidation, the processing temperature,
and the final pressure. During induction welding, severe deconsolidation can
occur because of moisture left in the polymer in the molten state. The moisture
cannot fully diffuse out of the specimen during the faster heating. Therefore,
more pressure is needed for complete reconsolidation than would be required for a dry
specimen. Deconsolidation is an issue for thermoplastic tape placement, too. It limits
the placement velocity because of insufficient cooling after compaction. If the
specimen is locally still in a molten state after compaction, it deconsolidates and causes
residual stresses in the bond line, which decrease the interlaminar shear strength. It
can be concluded that the study provides new knowledge and helps to optimize these
processes by means of the developed model without requiring a large number of
measurements.
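Schematically, and only as an illustration of how the mechanisms named above can enter a first-law balance (the exact form of the model developed in this work is not reproduced here), void growth during deconsolidation can be viewed as a rate balance of the form
\[
p_{\mathrm{void}}\,\frac{dV_{\mathrm{void}}}{dt} \;+\; \dot{E}_{\mathrm{fibre}} \;=\; \gamma\,\frac{dA_{\mathrm{void}}}{dt} \;+\; \Phi_{\mathrm{visc}},
\]
in which the work of the internal void pressure and the energy released by the decompacting fibre network drive the thickness increase, while the creation of void surface (surface tension \(\gamma\)) and the viscous dissipation \(\Phi_{\mathrm{visc}}\) of the displaced polymer counteract it.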
Because of its good specific strength and stiffness, continuous fibre reinforced thermoplastic is an excellent lightweight construction material. However, during re-melting, deconsolidation can lead to a loss of these good mechanical properties, which is why deconsolidation is undesirable. Deconsolidation has been investigated in many studies with differing results, usually considering a single material and a single process. A general interpretation and the comparability between studies are therefore only possible to a limited extent. Two approaches are known from the literature. The first approach takes the pressure difference between the internal void pressure and the ambient pressure as the main cause of deconsolidation. In the second approach, the fibre reinforcement is identified as the main cause. Because of the contradictory results and the limited number of materials and processing methods, a comprehensive investigation across several materials and processes is needed. This study covers three polymers (polypropylene, polycarbonate and polyphenylene sulfide), three fabrics (twill, satin and unidirectional) and two processes (autoclave and hot pressing) at different fibre volume fractions.
The influence of the void content on the interlaminar shear strength was investigated. It is known from the literature that the interlaminar shear strength decreases linearly with increasing void content. This could be confirmed for deconsolidation. The reduction of the interlaminar shear strength for thermoplastic matrices is smaller than for thermoset matrices and lies in the range of 0.5 % to 1.5 % per percent of void content. Moreover, the decrease depends significantly on the matrix polymer.
In the case of thermally induced deconsolidation, the void content increases proportionally to the thickness of the specimen and is a measure of the deconsolidation. The voids expand due to thermal gas expansion and can be forced to expand by external forces, which leads to a pressure inside the void below ambient. The fibre reinforcement is the main cause of the thickness increase and hence of the deconsolidation: the energy stored during compaction is released during deconsolidation. The decompaction pressure ranges from 0.02 MPa to 0.15 MPa for the investigated fabrics and fibre volume fractions. The surface tension hinders void expansion, because the surface has to be enlarged, which requires additional energy. When neighbouring voids come into contact, the surface tension causes the voids to merge; energy is released due to the better volume-to-surface ratio. The polymer flow slows down the thickness increase because of the energy required for the viscous flow (internal friction). The higher the temperature, the lower the viscosity of the polymer, so that less energy is needed for further void growth. The reversible influence of the crystallinity and of the thermal expansion of the composite increases the thickness during heating and decreases it again during cooling. Moisture can have an enormous influence on deconsolidation: if moisture is still present in the composite above the melting temperature, it vaporizes and can increase the thickness to several times the original thickness.
The deconsolidation model is able to predict reconsolidation. However, the reconsolidation pressure must stay below a limit value (0.15 MPa for 50x50 mm² and 1.5 MPa for 500x500 mm² specimens), since otherwise more than 2 % of the polymer flows out of the specimen. Reconsolidation is an inverse deconsolidation and exhibits the same mechanisms acting in the opposite direction.
The developed model is based on the first law of thermodynamics and can predict the thickness during deconsolidation and reconsolidation. A homogeneous void distribution, a uniform spherical void size and conservation of mass were assumed. To reduce the effort for determining the input quantities, generally valid input parameters were determined that hold for a large number of configurations. The material behaviour simulated with these generally valid input parameters showed, within the defined restrictions, good agreement with the actual material behaviour. Only for configurations with a viscosity difference of more than 30 % between the melting temperature and the processing temperature are the generally valid input parameters not applicable. To demonstrate the relevance for industry, the effects of deconsolidation were simulated for three further processes. It was shown that the rate of force increase during thermoforming is a key factor for complete reconsolidation. If the force is applied too slowly or the final force is too low, the specimen has already solidified before complete consolidation can be reached. Deconsolidation can also occur during induction welding. In particular, moisture can lead to a strong increase in deconsolidation, caused by the very fast heating rates of more than 100 K/min. The moisture cannot fully diffuse out of the polymer during the short heating phase, so that it vaporizes in the specimen when the melting temperature is reached. In tape placement, the placement velocity is limited by deconsolidation. After an apparently complete consolidation under the roller, the specimen can deconsolidate locally if the polymer below the surface is still molten. The resulting voids drastically reduce the interlaminar shear strength, by 5.8 % per percent of void content in the investigated case. The cause is the crystallization in the bond zone, which generates residual stresses of the same order of magnitude as the actual shear strength.

The focus of this work is to provide and evaluate a novel method for multifield topology-based analysis and visualization. Through this concept, called Pareto sets, one can identify critical regions in a multifield with arbitrarily many individual fields. It uses ideas found in graph optimization to find common behavior and areas of divergence between multiple optimization objectives. The connections between the latter areas can be reduced to a graph structure allowing for an abstract visualization of the multifield to support data exploration and understanding.
The research question answered in this dissertation concerns the general capability and expandability of the Pareto set concept in the context of visualization and application, as well as its relations, drawbacks and advantages with respect to other topology-based approaches. This question is answered in several steps, including a consideration of and comparison with related work, a thorough introduction of the Pareto set itself, a framework for efficient implementation, and a discussion of the limitations of the concept and their implications for run time, suitable data, and possible improvements.
Furthermore, this work considers possible simplification approaches like integrated single-field simplification methods but also using common structures identified through the Pareto set concept to smooth all individual fields at once. These considerations are especially important for real-world scenarios to visualize highly complex data by removing small local structures without destroying information about larger, global trends.
To further emphasize possible improvements and the expandability of the Pareto set concept, the thesis studies a variety of real-world applications. For each scenario, this work shows how the definition and visualization of the Pareto set are used and improved for data exploration and analysis.
In summary, this dissertation provides a complete and sound summary of the Pareto set concept as ground work for future application of multifield data analysis. The possible scenarios include those presented in the application section, but are found in a wide range of research and industrial areas relying on uncertainty analysis, time-varying data, and ensembles of data sets in general.

Novel image processing techniques have been in development for decades, but most
of these techniques are barely used in real world applications. This results in a gap
between image processing research and real-world applications; this thesis aims to
close this gap. In an initial study, the quantification, propagation, and communication
of uncertainty were determined to be key features in gaining acceptance for
new image processing techniques in applications.
This thesis presents a holistic approach based on a novel image processing pipeline,
capable of quantifying, propagating, and communicating image uncertainty. This
work provides an improved image data transformation paradigm, extending image
data using a flexible, high-dimensional uncertainty model. Based on this, a completely
redesigned image processing pipeline is presented. In this pipeline, each
step respects and preserves the underlying image uncertainty, allowing image uncertainty
quantification, image pre-processing, image segmentation, and geometry
extraction. The propagated uncertainty is communicated by utilizing meaningful visualization methodologies
throughout each computational step.
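As a minimal illustration of the idea of carrying a per-pixel uncertainty model through a processing step (a simplified stand-in, not the high-dimensional uncertainty model or the full pipeline developed in this thesis), consider propagating per-pixel variance through a linear smoothing filter:

```python
import numpy as np
from scipy.ndimage import convolve

def smooth_with_uncertainty(image, variance, kernel):
    """Propagate per-pixel variance through a linear filter.

    For a linear operator y = sum_i w_i * x_i with independent pixels,
    Var(y) = sum_i w_i**2 * Var(x_i); both maps are computed by convolution.
    """
    mean_out = convolve(image, kernel, mode="nearest")
    var_out = convolve(variance, kernel ** 2, mode="nearest")
    return mean_out, var_out

# Toy example: a 3x3 box filter applied to a noisy image and its variance map.
rng = np.random.default_rng(0)
img = rng.normal(loc=1.0, scale=0.1, size=(64, 64))
var = np.full_like(img, 0.1 ** 2)        # assumed per-pixel noise variance
kernel = np.full((3, 3), 1.0 / 9.0)
smoothed, smoothed_var = smooth_with_uncertainty(img, var, kernel)
```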
The presented methods are examined qualitatively by comparison to the state of the art, in addition to user evaluations in different domains. To show the applicability of the presented approach to real-world scenarios, this thesis demonstrates domain-specific problems and the successful implementation of the presented techniques in these domains.

The focus of the present work is on continuous fibre and long fibre reinforced thermoplastic materials. For this purpose, the "multilayered hybrid (MLH)" concept was developed and applied to two semi-finished products, the MLH-roving and the MLH-mat. The MLH-roving is a roving (consisting of continuous fibres) that is divided into several layers by thermoplastic films. It is produced by a novel spreading method with subsequent thermal fixing and final multiple folding, which allows various fibre-matrix configurations to be realized. The MLH-mat is a glass mat reinforced thermoplastic material suited for high fibre contents of up to 45 vol. % and various matrix polymers, e.g. polypropylene (PP) and polyamide 6 (PA6). It is characterized by a high homogeneity in areal density and in fibre orientation. The crash behaviour and performance of specimens based on the MLH-roving and the MLH-mat were investigated by dynamic crash tests. The results of the crash specimens based on long fibre reinforced material (MLH-mat) and continuous fibre reinforced material (MLH-roving) were comparable, and the PA6 grades showed better crash performance than the PP grades.
The present work deals with continuous fiber- and long fiber reinforced thermoplastic
materials. The concept of multilayered hybrid (MLH) structure was developed and
applied to the so-called MLH-roving and MLH-mat. The MLH-roving is a continuous
fiber roving separated evenly into several sublayers by thermoplastic films, through
the sequential processes of spreading with a newly derived equation, thermal fixing,
and folding. It was aimed to satisfy the variety of material configuration as well as the
variety in intermediate product. The MLH-mat is a glass mat reinforced thermoplastic
(GMT)-like material that is suitable for high fiber contents up to 45 vol. % and various
matrix polymers, e.g. polypropylene (PP), polyamide 6 (PA6). It showed homogeneity
in areal density, random directional fiber distribution, and reheating stability required
for molding process. On the MLH-roving and MLH-mat materials, the crash behavior
and performance were investigated by dynamic crash test. Long fiber reinforced
materials (MLH-mat) were equivalent to continuous fiber reinforced materials (MLHroving),
and PA6 grades showed higher crash performance than PP grades.

The gas phase infrared and fragmentation spectra of a systematic group of trimetallic oxo-centered
transition metal complexes are shown and discussed, with formate and acetate bridging ligands and
pyridine and water as axial ligands.
The stability of the complexes, as predicted by appropriate ab initio simulations, is demonstrated to
agree with collision-induced dissociation (CID) measurements.
A broad range of DFT calculations are shown. They are used to simulate the geometry, the bonding
situation, relative stability and flexibility of the discussed complexes, and to specify the observed
trends. These simulations correctly predict the trends in the band splitting of the symmetric and
asymmetric carboxylate stretch modes, but fail to account for anharmonic effects observed specifically
in the mid IR range.
The infrared spectra of the different ligands are introduced in a brief literature review. Their changes
in different environments or different bonding situations are discussed and visualized, especially the
interplay between fundamental-, overtone-, and combination bands, as well as Fermi resonances
between them.
A new variation on the infrared multiple photon dissociation (IRMPD) spectroscopy method is proposed
and evaluated. In addition to the commonly considered total fragment yield, the cumulative fragment
yield can be used to plot the wavelength-dependent relative abundance of different fragmentation
products. This is shown to provide valuable additional information on the excited chromophores and
their coupling to specific fragmentation channels.
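For orientation, one common way of defining these quantities (stated here in a generic form that may differ in detail from the definitions adopted in the thesis) is
\[
Y_{\mathrm{tot}}(\lambda) \;=\; \frac{\sum_i I_{F_i}(\lambda)}{I_P(\lambda) + \sum_i I_{F_i}(\lambda)}, \qquad
y_j(\lambda) \;=\; \frac{I_{F_j}(\lambda)}{\sum_i I_{F_i}(\lambda)},
\]
where \(I_P\) is the precursor ion intensity, \(I_{F_i}\) the intensity of fragment channel \(i\), \(Y_{\mathrm{tot}}\) the total fragment yield, and \(y_j\) the relative abundance of fragmentation product \(j\) among all fragments.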
High-quality homo- and heterometallic IRMPD spectra of oxo-centered carboxylate complexes of
chromium and iron show the impacts of the influencing factors: the metal centers, the bridging ligands,
their carboxylate stretch modes and CH bend modes, and the terminal ligands.
In all four formate spectra, anharmonic effects are necessary to explain the observed spectra:
combination bands of both carboxylate stretch modes and a Fermi resonance of the fundamental of
the CH stretch mode, and a combination band of the asymmetric carboxylate stretch mode with the
CH bend mode of the formate bridging ligand.
For the water adduct species, partial hydrolysis is proposed to account for the changes in the observed
carboxylic stretch modes.
Appropriate experiments are suggested to verify the mode assignments that are not directly explained
by the ab initio calculations, the available experimental results or other means like deuteration
experiments.

Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also in case of fibrosis in the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different stages of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resection of part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options for diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
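As a rough illustration of how one such summary statistic can be estimated from a segmented 3D image (a simplified sketch using standard tools, without the edge corrections and exact estimators used in this thesis), the spherical contact distribution can be approximated from the Euclidean distance transform of the background:

```python
import numpy as np
from scipy import ndimage

def spherical_contact_cdf(binary_volume, radii):
    """Empirical spherical contact distribution H(r) of a 3D binary structure.

    H(r) is estimated as the fraction of background voxels whose distance to
    the structure (here: the segmented vessel system) is at most r.
    """
    background = ~binary_volume
    dist = ndimage.distance_transform_edt(background)  # distance to nearest vessel voxel
    d = dist[background]
    return np.array([(d <= r).mean() for r in radii])

# Toy example on a random 3D "vessel" mask; real input would be segmented SRμCT data.
rng = np.random.default_rng(1)
vessels = rng.random((50, 50, 50)) < 0.02
radii = np.arange(1, 11)
print(spherical_contact_cdf(vessels, radii))
```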
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth and is hence of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.

Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
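Schematically, and only to fix ideas (the precise ranks and twists of the free modules are worked out in the thesis and not reproduced here), the structural result says that \(R(X)\), considered as an \(S\)-module, admits a self-dual minimal free resolution of the form
\[
0 \longrightarrow F_3 \xrightarrow{\;d_3\;} F_2 \xrightarrow{\;d_2\;} F_1 \xrightarrow{\;d_1\;} F_0 \longrightarrow R(X) \longrightarrow 0,
\]
with \(F_3 \cong F_0^{\vee}(t)\) and \(F_2 \cong F_1^{\vee}(t)\) for a suitable twist \(t\), and with the middle map \(d_2\) represented, under these identifications, by an alternating matrix.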
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).

The growing computational power enables the establishment of the Population Balance Equation (PBE)
to model the steady state and dynamic behavior of multiphase flow unit operations. Accordingly, the
two-phase flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These
factors include interactions among droplets (breakage and coalescence), different time scales due to the
size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer
process. As a result, the general PBE has no well-known analytical solution, and robust
numerical solution methods with low computational cost are therefore highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C.,
Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of
Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account the continuous flow
systems in spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed
nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer
behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady
state and dynamic simulation algorithms for hydrodynamics and mass transfer behavior of liquid-liquid
extraction equipment are developed and efficiently implemented. At the steady state modeling level, a
Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a
one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass
droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM which is the
simplest case of the SQMOM). On the other hand the hydrodynamics integral source terms are closed
using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the
droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the
particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the
source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential
equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At
the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and
resolved using the finite volume method. To close the mathematical model, the required quadrature nodes
and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature
(TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary
differential equation system is obtained and solved. Both the steady state and dynamic algorithms are
extensively validated at analytical, numerical, and experimental levels. At the numerical level, the
predictions of both algorithms are validated using the extended fixed pivot technique as implemented in
PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new
multivariate population balance environment for particulate system modeling and simulation. Procedia
Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used
to model the steady state hydrodynamics and physical and reactive mass transfer behavior of agitated
liquid-liquid extraction columns under different operating conditions. In this regard, both models are
found efficient and able to follow liquid extraction column behavior during column scale-up, where three
column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local
interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model
the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet
interactions (breakage and coalescence) are taken into account using OPOSPM, while the required
information about the velocity field and energy dissipation is calculated by the CFD model. In addition
to this, the proposed coupled OPOSPM-CFD framework is extended to include the mass transfer. The
proposed framework is numerically tested and the results are compared with the published experimental
data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated
using PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very
good agreement is obtained at the experimental and the numerical validation levels.
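As a purely illustrative sketch of the numerical strategy described above (a finite volume semi-discretization in space combined with an adaptive Runge–Kutta time integrator; the actual SQMOM/OPOSPM closures, breakage and coalescence kernels, and column geometry are not reproduced here), a method-of-lines solution of a one-dimensional transport equation for a droplet number concentration might look as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 1D advection of a droplet number concentration N(z, t) with a constant
# rise velocity; upwind finite volume fluxes, adaptive Runge-Kutta in time.
# All parameters below are illustrative placeholders, not column data.
nz, L, u = 100, 1.0, 0.05               # cells, column height [m], rise velocity [m/s]
dz = L / nz
z = (np.arange(nz) + 0.5) * dz
N0 = np.exp(-((z - 0.2) / 0.05) ** 2)   # initial droplet concentration profile

def rhs(t, N):
    # First-order upwind fluxes at cell faces (inlet concentration = 0).
    flux = u * np.concatenate(([0.0], N))
    return -(flux[1:] - flux[:-1]) / dz

sol = solve_ivp(rhs, (0.0, 10.0), N0, method="RK45", rtol=1e-6, atol=1e-9)
print(sol.y[:, -1].max())  # peak concentration at the final time
```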

The Symbol Grounding Problem (SGP) is one of the first attempts to propose a hypothesis about mapping abstract concepts onto the real world. For example, the concept "ball" can be represented by an object with a round shape (visual modality) and the phonemes /b/ /a/ /l/ (audio modality).
This thesis is inspired by the association learning observed in infant development.
Newborns can associate the visual and audio modalities of the same concept when they are presented at the same time, which supports vocabulary acquisition.
The goal of this thesis is to develop a novel framework that combines the constraints of the Symbol Grounding Problem and Neural Networks in a simplified scenario of association learning in infants. The first motivation is that the network output can be considered as numerical symbolic features, because the attributes of the input samples are already embedded. The second motivation is that the association between two samples is predefined before training via the same vectorial representation. This thesis proposes to learn both the association between two samples and the vectorial representation during training. Two scenarios are considered: sample pair association and sequence pair association.
Three main contributions are presented in this work.
The first contribution is a novel Symbolic Association Model based on two parallel MLPs.
The association task is defined by learning that two instances represent one concept.
Moreover, a novel training algorithm is defined by matching the output vectors of the MLPs with a statistical distribution for obtaining the relationship between concepts and vectorial representations.
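A minimal sketch of this kind of architecture is given below: two parallel MLPs, one per modality, whose outputs are pushed towards a shared target vector for paired inputs. The network sizes, the loss, and the fixed target code are placeholders for illustration only; in the thesis the concept-to-vector assignment is obtained during training by matching the outputs to a statistical distribution.

```python
import torch
import torch.nn as nn

class SmallMLP(nn.Module):
    """One of the two parallel networks, e.g. for the visual or audio modality."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Two parallel MLPs; their outputs live in a shared vectorial space.
visual_net = SmallMLP(in_dim=784, out_dim=10)   # hypothetical input sizes
audio_net = SmallMLP(in_dim=64, out_dim=10)
opt = torch.optim.Adam(list(visual_net.parameters()) + list(audio_net.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(x_visual, x_audio, target_code):
    """Push both modality outputs towards the same target vector (illustrative only)."""
    opt.zero_grad()
    loss = loss_fn(visual_net(x_visual), target_code) + loss_fn(audio_net(x_audio), target_code)
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of paired samples representing the same concepts.
loss = training_step(torch.randn(8, 784), torch.randn(8, 64), torch.randn(8, 10))
```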
The second contribution is a novel Symbolic Association Model based on two parallel LSTM networks that are trained on weakly labeled sequences.
The definition of association task is extended to learn that two sequences represent the same series of concepts.
This model uses a training algorithm similar to the MLP-based approach.
The last contribution is a Classless Association.
The association task is defined by learning based on the relationship of two samples that represent the same unknown concept.
In summary, the contributions of this thesis extend Artificial Intelligence and Cognitive Computation research with a new, cognitively motivated constraint. Two training algorithms incorporating this constraint are proposed for two cases: single and sequence associations. In addition, a label-free training rule with promising results is proposed.

In recent years, enormous progress has been made in the field of Artificial Intelligence (AI). Especially the introduction of Deep Learning and end-to-end learning, the availability of large datasets and the necessary computational power in form of specialised hardware allowed researchers to build systems with previously unseen performance in areas such as computer vision, machine translation and machine gaming. In parallel, the Semantic Web and its Linked Data movement have published many interlinked RDF datasets, forming the world’s largest, decentralised and publicly available knowledge base.
Despite these scientific successes, all current systems are still narrow AI systems. Each of them is specialised to a specific task and cannot easily be adapted to all other human intelligence tasks, as would be necessary for Artificial General Intelligence (AGI). Furthermore, most of the currently developed systems are not able to learn by making use of freely available knowledge such as provided by the Semantic Web. Autonomous incorporation of new knowledge is however one of the pre-conditions for human-like problem solving.
This work provides a small step towards teaching machines such human-like reasoning on freely available knowledge from the Semantic Web. We investigate how human associations, one of the building blocks of our thinking, can be simulated with Linked Data. The two main results of these investigations are a ground truth dataset of semantic associations and a machine learning algorithm that is able to identify patterns for them in huge knowledge bases.
The ground truth dataset of semantic associations consists of DBpedia entities that are known to be strongly associated by humans. The dataset is published as RDF and can be used for future research.
The developed machine learning algorithm is an evolutionary algorithm that can learn SPARQL queries from a given SPARQL endpoint based on a given list of exemplary source-target entity pairs. The algorithm operates in an end-to-end learning fashion, extracting features in form of graph patterns without the need for human intervention. The learned patterns form a feature space adapted to the given list of examples and can be used to predict target candidates from the SPARQL endpoint for new source nodes. On our semantic association ground truth dataset, our evolutionary graph pattern learner reaches a Recall@10 of > 63 % and an MRR (& MAP) > 43 %, outperforming all baselines. With an achieved Recall@1 of > 34% it even reaches average human top response prediction performance. We also demonstrate how the graph pattern learner can be applied to other interesting areas without modification.
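For illustration, evaluating one candidate graph pattern against a public SPARQL endpoint could look like the sketch below. The two-hop pattern, the endpoint usage, and the example entity are hypothetical placeholders, not the queries learned by the evolutionary algorithm.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# A hypothetical graph pattern: from a source entity, follow any predicate to an
# intermediate node and a second predicate to a target candidate.
PATTERN = """
SELECT DISTINCT ?target WHERE {
  <%(source)s> ?p1 ?intermediate .
  ?intermediate ?p2 ?target .
  FILTER(isIRI(?target))
}
LIMIT 10
"""

def predict_targets(source_uri, endpoint="https://dbpedia.org/sparql"):
    """Return up to 10 target candidates for a source entity (illustrative only)."""
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(PATTERN % {"source": source_uri})
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["target"]["value"] for b in results["results"]["bindings"]]

print(predict_targets("http://dbpedia.org/resource/Coffee"))
```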

Though environmental inequality research has gained extensive interest in the United States, it has received far less attention in Europe and Germany. The main objective of this book is to extend the research on environmental inequality in Germany. This book aims to shed more light on the question of whether minorities in Germany are affected by a disproportionately high burden of environmental pollution, and to increase the general knowledge about the causal mechanisms, which contribute to the unequal distribution of environmental hazards across the population.
To improve our knowledge about environmental inequality in Germany, this book extends previous research in several ways. First, to evaluate the extent of environmental inequality, this book relies on two different data sources. On the one hand, it uses household-level survey data and self-reports about the impairment through air pollution. On the other hand, it combines aggregated census data and objective register-based measures of industrial air pollution by using geographic information systems (GIS). Consequently, this book offers the first analysis of environmental inequality on the national level that uses objective measures of air pollution in Germany. Second, to evaluate the causes of environmental inequality, this book applies a panel data analysis on the household level, thereby offering the first longitudinal analysis of selective migration processes outside the United States. Third, it compares the level of environmental inequality between German metropolitan areas and evaluates the extent to which the theoretical arguments of environmental inequality can explain differing levels of environmental inequality across the country. By doing so, this book not only investigates the impact of indicators derived from the standard strand of theoretical reasoning but also includes structural characteristics of the urban space.
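A schematic illustration of the kind of GIS linkage described above is sketched here; the file names, coordinate handling, and column names are invented placeholders and do not correspond to the actual census and register data used in the book.

```python
import geopandas as gpd

# Illustrative only: layers and column names are placeholders.
districts = gpd.read_file("census_districts.gpkg")        # aggregated census data per district
facilities = gpd.read_file("industrial_emitters.gpkg")    # register-based air pollution point sources
facilities = facilities.to_crs(districts.crs)             # align coordinate reference systems

# Attach each emitting facility to the census district it falls into,
# then aggregate reported emissions per district.
joined = gpd.sjoin(facilities, districts, how="inner", predicate="within")
emissions_per_district = joined.groupby("district_id")["annual_emissions_t"].sum()
```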
All studies presented in this book confirm the disproportionate exposure of minorities to environmental pollution. Minorities live in more polluted areas in Germany but also in more polluted parts of the communities, and this disadvantage is most severe in metropolitan regions. Though this book finds evidence for selective migration processes contributing to the disproportionate exposure of minorities to environmental pollution, it also stresses the importance of urban conditions. Especially cities with centrally located industrial facilities yield a high level of environmental inequality. This poses the question of whether environmental inequality might be the result of two independent processes: 1) urban infrastructure confines residential choices of minorities to the urban core, and 2) urban infrastructure facilitates centrally located industries. In combination, both processes lead to a disproportionate burden of minority households.

Tables or ranked lists summarize facts about a group of entities in a concise and structured fashion. They are found in all kinds of domains and are easily comprehensible by humans. Some globally prominent examples of such rankings are the tallest buildings in the world, the richest people in Germany, or the most powerful cars. The availability of vast amounts of tables or rankings from the open domain allows different ways of exploring data. Computing the similarity between ranked lists, in order to find those lists where entities are presented in a similar order, carries important analytical insights. This thesis presents a novel query-driven Locality Sensitive Hashing (LSH) method to efficiently find similar top-k rankings for a given input ranking. Experiments show that the proposed method provides far better performance than inverted-index-based approaches; in particular, it is able to outperform the popular prefix-filtering method. Additionally, an LSH-based probabilistic pruning approach is proposed that optimizes the space utilization of inverted indices while still maintaining a user-provided recall requirement for the results of the similarity search. Further, this thesis addresses the problem of automatically identifying interesting categorical attributes, in order to explore entity-centric data by organizing it into meaningful categories. Our approach proposes novel statistical measures, beyond known concepts like information entropy, in order to capture the distribution of the data and to train a classifier that can predict which categorical attribute will be perceived as suitable by humans for data categorization. We further discuss how the information about useful categories can be applied in PANTHEON and PALEO, two data exploration frameworks developed in our group.
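As a highly simplified, set-based illustration of the general LSH idea (this sketch uses MinHash over the item sets of top-k rankings and ignores rank order, unlike the query-driven, rank-aware method developed in the thesis), rankings sharing many items can be made to collide in the same hash buckets:

```python
import random
from collections import defaultdict

# Index top-k rankings by MinHash signatures of their item sets, split into bands,
# so that rankings with large item overlap tend to share at least one bucket.
NUM_HASHES, BANDS = 16, 4
random.seed(42)
SALTS = [random.getrandbits(32) for _ in range(NUM_HASHES)]

def minhash(items):
    return tuple(min(hash((salt, it)) for it in items) for salt in SALTS)

def bands(signature):
    rows = NUM_HASHES // BANDS
    return [tuple(signature[b * rows:(b + 1) * rows]) for b in range(BANDS)]

index = defaultdict(set)

def insert(ranking_id, ranking):
    for band_id, key in enumerate(bands(minhash(ranking))):
        index[(band_id, key)].add(ranking_id)

def candidates(query_ranking):
    cands = set()
    for band_id, key in enumerate(bands(minhash(query_ranking))):
        cands |= index[(band_id, key)]
    return cands

insert("tallest_buildings_2019", ["Burj Khalifa", "Shanghai Tower", "Abraj Al-Bait"])
insert("tallest_buildings_2020", ["Burj Khalifa", "Shanghai Tower", "Ping An"])
print(candidates(["Burj Khalifa", "Shanghai Tower", "Lotte World Tower"]))
```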