The main focus of the research lies in the interpretation and application of results and correlations of soil properties from in situ testing and their subsequent use in terramechanical applications. The empirical correlations and current procedures were mainly developed for medium to large depths, and therefore they were re-evaluated and adjusted herein to reflect the current state of knowledge for the assessment of near-surface soil. For testing technologies, a field investigation at a moon analogue site was carried out. Focus was placed on the assessment of the near-surface soil properties. Samples were collected for subsequent analysis under laboratory conditions. Further laboratory experiments on extraterrestrial soil simulants and other terrestrial soils were conducted, and correlations with relative density and shear strength parameters were attempted. The correlations from the small-scale laboratory experiments and the new, re-evaluated correlation for relative density were checked against the data from the field investigation. Additionally, single tire-soil tests were carried out, which enable the investigation of the localized soil response in order to advance current wheel designs and subsequently the vehicle’s mobility. Furthermore, numerical simulations were performed to aid the investigation of the tire-soil interaction. In summary, current relationships for estimating the relative density of near-surface soil were re-evaluated and subsequently correlated to shear strength parameters, which are the main input for modeling soil in numerical analyses. Single tire-soil tests were carried out and used as a reference to calibrate the tire-soil interaction, and subsequently utilized to model rolling scenarios which enable the assessment of soil trafficability and the vehicle’s mobility.
Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, they still occupy a niche within the discipline, using ad hoc systems and data analysis methods, and most of them were not followed up by further, more elaborate work. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies have shown that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of body-print; a pressure-sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swirl office chairs with TPM sensors under the cover can be used to detect the sitter's real-time posture; TPM can be used to detect emotion-related touch interactions for smart objects, toys or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; it can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations have resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the `swirl chair' study reaches a classification accuracy of 79.5% across 10 posture classes, and the `soccer shoe' study reaches 98.8% across 17 combinations of angle and intensity.
Topological insulators (TI) are a fascinating new state of matter. Like ordinary insulators, their band structure possesses a band gap, such that they cannot conduct current in their bulk. However, they are able to conduct current along their edges and surfaces, due to edge states that cross the band gap. What makes TIs so interesting and potentially useful are these robust unidirectional edge currents. They are immune to significant defects and disorder, which means that they provide scattering-free transport.
In photonics, using topological protection has a huge potential for applications, e.g. for robust optical data transfer [1-3] – even on the quantum level [4, 5] – or to make devices more stable and robust [6, 7]. Therefore, the field of topological insulators has spread to optics to create the new and active research field of topological photonics [8-10].
Well-defined and controllable model systems can help to provide deeper insight into the mechanisms of topologically protected transport. These model systems provide vast control over parameters. For example, arbitrary lattice types without defects can be examined, and single lattice sites can be manipulated. Furthermore, they allow for the observation of effects that usually happen at extremely short timescales in solids. Model systems based on photonic waveguides are ideal candidates for this.
They consist of optical waveguides arranged on a lattice. Due to evanescent coupling, light that is inserted into one waveguide spreads along the lattice. This coupling of light between waveguides can be seen as an analogue to electrons hopping/tunneling between atomic lattice sites in a solid.
The theoretical basis for this analogy is given by the mathematical equivalence between the Schrödinger equation and the paraxial Helmholtz equation. This means that in these waveguide systems, the role of time is assigned to a spatial axis. The field evolution along the waveguides' propagation axis z thus models the temporal evolution of an electron's wave function in a solid. Electric and magnetic fields acting on electrons in solids need to be incorporated into the photonic platform by introducing artificial fields. These artificial gauge fields need to act on photons in the same way that their electromagnetic counterparts act on electrons. For example, to create a photonic analogue of a topological insulator, the waveguides are bent helically along their propagation axis to model the effect of a magnetic field [3]. This means that the fabrication of these waveguide arrays needs to be done in 3D.
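For reference, the correspondence underlying this analogy can be written explicitly; the form below is the standard one used in the waveguide-lattice literature (generic symbols, not necessarily the exact notation of this thesis):
\[
i\,\partial_z \psi = -\frac{1}{2k_0}\nabla_\perp^2 \psi - \frac{k_0\,\Delta n(x,y,z)}{n_0}\,\psi
\qquad\longleftrightarrow\qquad
i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V(x,y,t)\,\Psi ,
\]
where \(\psi\) is the envelope of the optical field, \(k_0 = 2\pi n_0/\lambda\) is the wavenumber in the background medium of refractive index \(n_0\), and \(\Delta n\) is the index modulation defining the waveguides. The propagation coordinate \(z\) plays the role of time, and \(-\Delta n\) plays the role of the potential \(V\).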
In this thesis, a new method to 3D micro-print waveguides is introduced. The inverse structure is fabricated via direct laser writing, and subsequently infiltrated with a material with higher refractive index contrast. We will use these model systems of evanescently coupled waveguides to look at different effects in topological systems, in particular at Floquet topological systems.
We will start with a topologically trivial system, consisting of two waveguide arrays with different artificial gauge fields. There, we observe that an interface between these trivial gauge fields has a profound impact on the wave vector of the light traveling across it. We deduce an analogue of Snell's law and verify it experimentally.
Then we will move on to Floquet topological systems, consisting of helical waveguides. At the interface between two Floquet topological insulators with opposite helicity of the waveguides, we find additional trivial interface modes that trap the light. This allows us to investigate the interaction between trivial and topological modes in the lattice.
Furthermore, we address the question of whether topological edge states are robust under the influence of time-dependent defects. In a one-dimensional topological model (the Su-Schrieffer-Heeger model [11]) we apply periodic temporal modulations to an edge waveguide. We find Floquet copies of the edge state, which couple to the bulk in a certain frequency window and thus depopulate the edge state.
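For context, the Su-Schrieffer-Heeger model referenced here is the standard two-band chain with alternating couplings (textbook form, written in generic notation):
\[
H_{\mathrm{SSH}} = \sum_n \left( v\, a_n^\dagger b_n + w\, b_n^\dagger a_{n+1} + \text{h.c.} \right),
\]
where \(a_n, b_n\) denote the two sites of unit cell \(n\), \(v\) is the intracell and \(w\) the intercell coupling; a finite chain hosts topological edge states for \(|v| < |w|\). In the photonic realization, \(v\) and \(w\) are evanescent coupling constants set by the waveguide spacing, and a "temporal" modulation of the edge waveguide corresponds to a periodic modulation along \(z\).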
In the two-dimensional Floquet topological insulator, we introduce single defects at the edge. When these defects share the temporal periodicity of the helical bulk waveguides, they have no influence on a topological edge mode: the light moves around or through the defect without being scattered into the bulk. Defects with a different periodicity, however, can – like the defects in the SSH model – induce scattering of the edge state into the bulk.
Finally, we will briefly highlight a newly emerging method for the fabrication of waveguides with low refractive index contrast. Moreover, we will introduce new ways to create artificial gauge fields by the use of orbital angular momentum states in waveguides.
Under the notion of Cyber-Physical Systems an increasingly important research area has
evolved with the aim of improving the connectivity and interoperability of previously
separate system functions. Today, the advanced networking and processing capabilities
of embedded systems make it possible to establish strongly distributed, heterogeneous
systems of systems. In such configurations, the system boundary does not necessarily
end with the hardware, but can also take into account the wider context such as people
and environmental factors. In addition to being open and adaptive to other networked
systems at integration time, such systems need to be able to adapt themselves in accordance
with dynamic changes in their application environments. Considering that many
of the potential application domains are inherently safety-critical, it has to be ensured
that the necessary modifications in the individual system behavior are safe. However,
currently available state-of-the-practice and state-of-the-art approaches for safety assurance
and certification are not applicable to this context.
To provide a feasible solution approach, this thesis introduces a framework that allows
“just-in-time” safety certification for the dynamic adaptation behavior of networked
systems. Dynamic safety contracts (DSCs) are presented as the core solution concept
for the monitoring and synthesis of decentralized safety knowledge. Ultimately, this opens up a path towards standardized service provision concepts as a set of safety-related runtime evidence. DSCs enable the modular specification of relevant safety features in networked applications as a series of formalized demand-guarantee dependencies. The specified safety features can be hierarchically integrated and linked to an interpretation level for assessing the scope of possible safe behavioral adaptations. In this way, the networked
adaptation behavior can be conditionally certified with respect to the fulfilled
DSC safety features during operation. As long as the continuous evaluation process
provides safe adaptation behavior for a networked application context, safety can be
guaranteed for a networked system mode at runtime. Significant safety-related changes
in the application context, however, can lead to situations in which no safe adaptation
behavior is available for the current system state. In such cases, the remaining DSC
guarantees can be utilized to determine optimal degradation concepts for the dynamic
applications.
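To make the demand-guarantee idea tangible, the following deliberately simplified Python sketch evaluates which guarantees of a contract are currently backed by runtime evidence; the names and structure are illustrative assumptions, not the DSC formalism developed in this thesis.

    from dataclasses import dataclass

    @dataclass
    class SafetyFeature:
        """One formalized demand-guarantee dependency (simplified illustration)."""
        name: str
        demands: frozenset      # runtime evidence items that must currently hold
        guarantee: str          # safety guarantee provided while all demands hold

    @dataclass
    class DynamicSafetyContract:
        features: list

        def fulfilled_guarantees(self, evidence: set) -> set:
            """Guarantees whose demands are all covered by the current runtime evidence."""
            return {f.guarantee for f in self.features if f.demands <= evidence}

    # Hypothetical example: an adaptation is only permitted while its guarantee is active;
    # if the stronger guarantee is lost, the remaining one defines the degradation mode.
    dsc = DynamicSafetyContract([
        SafetyFeature("full_assist", frozenset({"wheel_sensor_ok", "latency_low"}), "G_full_speed"),
        SafetyFeature("degraded",    frozenset({"wheel_sensor_ok"}),                "G_reduced_speed"),
    ])
    print(dsc.fulfilled_guarantees({"wheel_sensor_ok"}))   # {'G_reduced_speed'}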
For the operationalization of the DSCs approach, suitable specification elements and
mechanisms have been defined. Based on a dedicated GUI-engineering framework it is
shown how DSCs can be systematically developed and transformed into appropriate runtime
representations. Furthermore, a safety engineering backbone is outlined to support
the DSC modeling process in concrete application scenarios. The conducted validation
activities show the feasibility and adequacy of the proposed DSCs approach. In parallel,
limitations and areas of future improvement are pointed out.
Many loads acting on a vehicle depend on the condition and quality of roads
traveled as well as on the driving style of the motorist. Thus, during vehicle development,
good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics resulting from travel surveys are integrated to guarantee reasonable
trip lengths. Population figures are used to estimate the number of vehicles in
the administrative units contained in the region. The creation of thousands of those geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters
are modeled as the main user group, conducting daily drives to work plus additional
leisure-time and shopping trips during the workweek. For the latter, taxis are
considered as an example of commercially used passenger cars. The model of light-duty commercial
vehicles is split into two types of driving patterns, stars and tours, and into
the common traffic classes of long-distance, local, and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical
data are presented in detail. Examples for the evaluation of routes based
on topographical factors and speed profiles comparing the influence of the driving
style are included.
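As a purely illustrative sketch of such statistics-driven target selection (the unit names, population figures, and trip-length values below are made-up assumptions, not data or models from this work):

    import random

    # Hypothetical administrative units with population figures and centroid coordinates (km).
    units = [
        {"name": "A-Stadt", "population": 120_000, "xy": (0.0, 0.0)},
        {"name": "B-Dorf",  "population":   8_000, "xy": (12.0, 5.0)},
        {"name": "C-Burg",  "population":  45_000, "xy": (3.0, 20.0)},
    ]
    survey_trip_lengths_km = [5, 8, 12, 15, 25, 40]   # stand-in for travel-survey statistics

    def sample_home_unit(rng):
        """Vehicles are assigned to units proportionally to their population figures."""
        return rng.choices(units, weights=[u["population"] for u in units], k=1)[0]

    def sample_commute_target(home, rng):
        """Pick the unit whose beeline distance best matches a surveyed trip length."""
        desired = rng.choice(survey_trip_lengths_km)
        dist = lambda u: ((u["xy"][0] - home["xy"][0])**2 + (u["xy"][1] - home["xy"][1])**2) ** 0.5
        return min(units, key=lambda u: abs(dist(u) - desired))

    rng = random.Random(42)
    home = sample_home_unit(rng)
    work = sample_commute_target(home, rng)
    print(home["name"], "->", work["name"])   # start and target point of one commuter trip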
In computer graphics, realistic rendering of virtual scenes is a computationally complex problem. State-of-the-art rendering technology must become more scalable to
meet the performance requirements for demanding real-time applications.
This dissertation is concerned with core algorithms for rendering, focusing on the
ray tracing method in particular, to support and saturate recent massively parallel computer systems, i.e., to distribute the complex computations very efficiently
among a large number of processing elements. More specifically, the three targeted
main contributions are:
1. Collaboration framework for large-scale distributed memory computers
The purpose of the collaboration framework is to enable scalable rendering
in real-time on a distributed memory computer. As an infrastructure layer it
manages the explicit communication within a network of distributed memory
nodes transparently for the rendering application. The research is focused on
designing a communication protocol resilient against delays and negligible in
overhead, relying exclusively on one-sided and asynchronous data transfers.
The hypothesis is that a loosely coupled system like this is able to scale linearly
with the number of nodes, which is tested by directly measuring all possible
communication-induced delays as well as the overall rendering throughput.
2. Ray tracing algorithms designed for vector processing
Vector processors are to be efficiently utilized for improved ray tracing performance. This requires the basic, scalar traversal algorithm to be reformulated
in order to expose a high degree of fine-grained data parallelism. Two approaches are investigated: traversing multiple rays simultaneously, and performing
multiple traversal steps at once. Efficiently establishing coherence in a group
of rays as well as avoiding sorting of the nodes in a multi-traversal step are the
defining research goals.
3. Multi-threaded schedule and memory management for the ray tracing acceleration structure
Construction times of high-quality acceleration structures are to be reduced by
improvements to multi-threaded scalability and utilization of vector processors. Research is directed at eliminating the following scalability bottlenecks:
dynamic memory growth caused by the primitive splits required for high-quality
structures, and top-level hierarchy construction where simple task parallelism
is not readily available. Additional research addresses how to expose
scatter/gather-free data-parallelism for efficient vector processing.
Together, these contributions form a scalable, high-performance basis for real-time,
ray tracing-based rendering, and a prototype path tracing application implemented
on top of this basis serves as a demonstration.
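As a rough illustration of the "traversing multiple rays simultaneously" idea from contribution 2, the sketch below vectorizes the ray/AABB slab test over a whole ray packet with NumPy. It is a generic textbook formulation intended only to show the fine-grained data parallelism, not the thesis's actual vector-processor implementation.

    import numpy as np

    def packet_aabb_hits(orig, inv_dir, box_min, box_max, t_max):
        """Slab test for a packet of N rays against one BVH node's bounding box.
        orig, inv_dir: (N, 3) arrays; t_max: (N,) current closest-hit distances.
        Returns a boolean mask: one vectorized evaluation instead of N scalar tests."""
        t0 = (box_min - orig) * inv_dir           # (N, 3) slab entry parameters
        t1 = (box_max - orig) * inv_dir           # (N, 3) slab exit parameters
        t_near = np.minimum(t0, t1).max(axis=1)   # latest entry over the three slabs
        t_far = np.maximum(t0, t1).min(axis=1)    # earliest exit over the three slabs
        return (t_near <= t_far) & (t_far >= 0.0) & (t_near <= t_max)

    # Packet of 4 rays starting in front of the unit box; during traversal a subtree
    # would be skipped only if no ray in the packet hits its bounding box.
    rng = np.random.default_rng(0)
    orig = np.tile([0.5, 0.5, -2.0], (4, 1))
    dirs = rng.normal(size=(4, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(packet_aabb_hits(orig, 1.0 / dirs, np.zeros(3), np.ones(3), np.full(4, np.inf)))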
The key insight driving this dissertation is that the computational power necessary
for realistic light transport for real-time rendering applications demands massively
parallel computers, which in turn require highly scalable algorithms. Therefore this
dissertation provides important research along the path towards virtual reality.
While the design step should be free from computation-related constraints and operations due to its artistic aspect, the modeling phase has to prepare the model for the later stages of the pipeline.
This dissertation is concerned with the design and implementation of a framework for local remeshing and optimization. Based on the experience gathered, a full study about mesh quality criteria is also part of this work.
The contributions can be highlighted as: (1) a local meshing technique based on a completely novel approach, constrained to preserving the mesh in areas that are not of interest. With this concept, designers can work on the design details of specific regions of the model without introducing more polygons elsewhere; (2) a tool capable of recovering the shape of a refined area onto its decimated version, enabling details on optimized meshes of detailed models; (3) the integration of novel techniques into a single framework for meshing and smoothing which is constrained to the surface structure; (4) the development of a mesh quality criteria priority structure, which classifies and prioritizes criteria according to the application of the mesh.
Although efficient meshing techniques have been proposed over the years, most of them lack the possibility to remesh smaller regions of the base mesh while preserving the mesh quality and density of the outer areas.
Considering this limitation, this dissertation seeks answers to the following research questions:
1. Given that mesh quality is relative to the application it is intended for, is it possible to design a general mesh evaluation plan?
2. How to prioritize specific mesh criteria over others?
3. Given an optimized mesh and its original design, how can the representation of single regions of the former be improved without degrading the mesh quality elsewhere?
Four main achievements came from the respective answers:
1. The Application Driven Mesh Quality Criteria Structure: Due to high variation in mesh standards because of various computer aided operations performed for different applications, e.g. animation or stress simulation, a structure for better visualization of mesh quality criteria is proposed. The criteria can be used to guide the mesh optimization, making the task consistent and reliable. This dissertation also proposes a methodology to optimize the criteria values, which is adaptable to the needs of a specific application.
2. Curvature Driven Meshing Algorithm: A novel local meshing technique, which works on a desired area of the mesh while preserving its boundaries as well as the rest of the topology. It causes only a slow growth in the overall number of polygons by making only small regions denser. The method can also be used to recover the details of a reference mesh onto its decimated version while refining it. Moreover, it employs a fast and easy-to-implement geometric approach that represents surface features as simple circles, which are used to guide the meshing. It also generates quad-dominant meshes, with the triangle count directly dependent on the size of the boundary.
3. Curvature-based Method for Anisotropic Mesh Smoothing: A geometric-based method is extended to 3D space to be able to produce anisotropic elements where needed. It is made possible by mapping the original space to another which embeds the surface curvature. This methodology is used to enhance the smoothing algorithm by making the nearly regularized elements follow the surface features, preserving the original design. The mesh optimization method also preserves mesh topology, while resizing elements according to the local mesh resolution, effectively enhancing the design aspects intended.
4. Framework for Local Restructure of Meshed Surfaces: The combination of both methods creates a complete tool for recovering surface details through mesh refinement and curvature aware mesh smoothing.
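As a small, generic example of the kind of per-element quality criterion such a priority structure could rank (the textbook minimum-angle metric; the actual criteria catalogue of this work is more comprehensive):

    import math

    def min_angle_quality(a, b, c):
        """Smallest interior angle (degrees) of triangle (a, b, c); 60 is the
        equilateral optimum, small values indicate sliver elements."""
        def angle(p, q, r):   # interior angle at vertex p
            v1 = [q[i] - p[i] for i in range(3)]
            v2 = [r[i] - p[i] for i in range(3)]
            dot = sum(x * y for x, y in zip(v1, v2))
            n1 = math.sqrt(sum(x * x for x in v1))
            n2 = math.sqrt(sum(x * x for x in v2))
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
        return min(angle(a, b, c), angle(b, c, a), angle(c, a, b))

    print(min_angle_quality((0, 0, 0), (1, 0, 0), (0.5, 0.9, 0)))    # ~58: well shaped
    print(min_angle_quality((0, 0, 0), (1, 0, 0), (0.5, 0.05, 0)))   # ~6: sliver, low quality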
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second-order analysis, which we call the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D case to a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data from glaciology. Note that directional analysis of 3D data is not very prominent in the literature.
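For illustration, geometric anisotropy is commonly modeled by compressing an isotropic pattern along one axis and rotating it; the short sketch below follows that standard construction with generic parameters (not necessarily the parametrization used in this thesis):

    import numpy as np

    rng = np.random.default_rng(1)

    # Regular "parent" pattern: a jittered grid as a simple stand-in for, e.g., a
    # hard-core process; regularity is what makes the anisotropy visible to
    # second-order directional analysis (a Poisson pattern would reveal nothing).
    gx, gy = np.meshgrid(np.arange(20), np.arange(20))
    points = np.c_[gx.ravel(), gy.ravel()] + rng.uniform(-0.2, 0.2, (400, 2))

    # Geometric anisotropy: compress by factor c along the y-axis, then rotate by theta.
    c, theta = 0.5, np.pi / 6
    T = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]) @ np.diag([1.0, c])
    aniso_points = points @ T.T

    # A directional method (Integral, Ellipsoid, Projection) would try to recover
    # the orientation theta (and the compression c) from aniso_points alone.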
We compare the performance of the methods, which depends on their respective parameters, in a simulation study in both 2D and 3D. Based on the results, we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to the 3D data coming from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and, consequently, to correctly interpret the climate information that can be obtained by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
In this thesis, we consider the problem of processing similarity queries over a dataset of top-k rankings and class-constrained objects. Top-k rankings are the most natural and widely used technique to compress a large amount of information into a concise form. Spearman’s Footrule distance is used to compute the similarity between rankings, considering how well rankings agree on the positions (ranks) of ranked items. This setup allows the application of metric distance-based pruning strategies, and, alternatively, enables the use of traditional inverted indices for retrieving rankings that overlap in items. Although both techniques can be individually applied, we hypothesize that blending these two would lead to better performance. First, we formulate theoretical bounds over the rankings, based on Spearman's Footrule distance, which are essential for adapting existing, inverted index based techniques to the setting of top-k rankings. Further, we propose a hybrid indexing strategy, designed for efficiently processing similarity range queries, which incorporates inverted indices and metric space indices, such as M- or BK-trees, resulting in a structure that resembles both indexing methods with tunable emphasis on one or the other. Moreover, optimizations to the inverted index component are presented, for early termination and minimizing bookkeeping. As vast amounts of data are being generated on a daily basis, we further present a distributed, highly tunable approach, implemented in Apache Spark, for efficiently processing similarity join queries over top-k rankings. To combine distance-based filtering with inverted indices, the algorithm works in several phases. The partial results are joined for the computation of the final result set. As the last contribution of the thesis, we consider processing k-nearest-neighbor (k-NN) queries over class-constrained objects, with the additional requirement that the result objects are of a specific type. We introduce the MISP index, which first indexes the objects by their (combination of) class membership, followed by a similarity search sub index for each subset of objects. The number of such subsets can combinatorially explode; thus, we provide a cost model that analyzes the performance of the MISP index structure under different configurations, with the aim of finding the most efficient one for the dataset being searched.
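For illustration, a common way to apply Spearman's Footrule to top-k rankings is to treat items missing from one ranking as if they were placed at rank k+1; the snippet below follows that convention (the exact normalization used in the thesis may differ):

    def footrule_topk(r1, r2):
        """Spearman's Footrule between two top-k rankings given as ordered item lists.
        Items absent from one ranking are assigned rank k+1 (a common convention)."""
        k = max(len(r1), len(r2))
        pos1 = {item: i + 1 for i, item in enumerate(r1)}
        pos2 = {item: i + 1 for i, item in enumerate(r2)}
        return sum(abs(pos1.get(x, k + 1) - pos2.get(x, k + 1)) for x in set(pos1) | set(pos2))

    print(footrule_topk(["a", "b", "c"], ["a", "b", "c"]))   # 0: identical rankings
    print(footrule_topk(["a", "b", "c"], ["c", "a", "b"]))   # 4: same items, shifted ranks
    print(footrule_topk(["a", "b", "c"], ["x", "y", "z"]))   # 12: disjoint rankings, maximal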
Visualization is vital to the scientific discovery process.
An interactive high-fidelity rendering provides accelerated insight into complex structures, models and relationships.
However, the efficient mapping of visualization tasks to high performance architectures is often difficult, being subject to a challenging mixture of hardware and software architectural complexities in combination with domain-specific hurdles.
These difficulties are often exacerbated on heterogeneous architectures.
In this thesis, a variety of ray casting-based techniques are developed and investigated with respect to a more efficient usage of heterogeneous HPC systems for distributed visualization, addressing challenges in mesh-free rendering, in-situ compression, task-based workload formulation, and remote visualization at large scale.
A novel direct raytracing scheme for on-the-fly free surface reconstruction of particle-based simulations using an extended anisotropic kernel model is investigated on different state-of-the-art cluster setups.
The versatile system renders up to 170 million particles on 32 distributed compute nodes at close to interactive frame rates at 4K resolution with ambient occlusion.
To address the widening gap between high computational throughput and prohibitively slow I/O subsystems, in situ topological contour tree analysis is combined with a compact image-based data representation to provide an effective and easy-to-control trade-off between storage overhead and visualization fidelity.
Experiments show significant reductions in storage requirements, while preserving flexibility for exploration and analysis.
Driven by an increasingly heterogeneous system landscape, a flexible distributed direct volume rendering and hybrid compositing framework is presented.
Based on a task-based dynamic runtime environment, it enables adaptable performance-oriented deployment on various platform configurations.
Comprehensive benchmarks with respect to task granularity and scaling are conducted to verify the characteristics and potential of the novel task-based system design.
A core challenge of HPC visualization is the physical separation of visualization resources and end-users.
Using more tiles than previously thought reasonable, a distributed, low-latency multi-tile streaming system is demonstrated, which is able to sustain a stable 80 Hz when streaming up to 256 synchronized 3840x2160 tiles and to achieve 365 Hz at 3840x2160 for sort-first compositing over the internet, thereby enabling lightweight visualization clients and leaving all the heavy lifting to the remote supercomputer.
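To put these streaming numbers into perspective, here is a naive back-of-the-envelope calculation of the raw, uncompressed pixel rate such a tile configuration would imply (my own arithmetic under the stated assumptions, not a figure reported in the thesis); it illustrates why efficient encoding and compositing are essential:

    # 256 synchronized 3840x2160 tiles at 80 Hz, assuming 3 bytes per pixel (RGB).
    tiles, width, height, hz, bytes_per_px = 256, 3840, 2160, 80, 3
    per_frame = tiles * width * height * bytes_per_px   # ~6.4e9 bytes per synchronized frame
    per_second = per_frame * hz                         # ~5.1e11 bytes/s if left uncompressed
    print(f"{per_frame / 1e9:.1f} GB per frame, {per_second / 1e9:.0f} GB/s uncompressed")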
Topology-Based Characterization and Visual Analysis of Feature Evolution in Large-Scale Simulations
(2019)
This manuscript presents a topology-based analysis and visualization framework that enables the effective exploration of feature evolution in large-scale simulations. Such simulations pose additional challenges to the already complex task of feature tracking and visualization, since the vast number of features and the size of the simulation data make it infeasible to naively identify, track, analyze, render, store, and interact with data. The presented methodology addresses these issues via three core contributions. First, the manuscript defines a novel topological abstraction, called the Nested Tracking Graph (NTG), that records the temporal evolution of features that exhibit a nesting hierarchy, such as superlevel set components for multiple levels, or filtered features across multiple thresholds. In contrast to common tracking graphs that are only capable of describing feature evolution at one hierarchy level, NTGs effectively summarize their evolution across all hierarchy levels in one compact visualization. The second core contribution is a view-approximation oriented image database generation approach (VOIDGA) that stores, at simulation runtime, a reduced set of feature images. Instead of storing the features themselves---which is often infeasible due to bandwidth constraints---the images of these databases can be used to approximate the depicted features from any view angle within an acceptable visual error, which requires far less disk space and only introduces a negligible overhead. The final core contribution combines these approaches into a methodology that stores in situ the least amount of information necessary to support flexible post hoc analysis utilizing NTGs and view approximation techniques.
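As a rough, hypothetical sketch of the overlap-based tracking that such graphs build on (features of consecutive timesteps are connected per hierarchy level when they share cells); the actual NTG construction in the manuscript is more involved:

    from collections import defaultdict

    # features[t][level] = {feature_id: set of cell ids} -- a toy two-timestep example.
    features = {
        0: {0: {"A": {1, 2, 3, 4}}, 1: {"A1": {1, 2}, "A2": {3, 4}}},
        1: {0: {"B": {2, 3, 4, 5}}, 1: {"B1": {2, 3}, "B2": {4, 5}}},
    }

    def tracking_edges(features):
        """Connect features of consecutive timesteps at the same hierarchy level
        whenever their cell sets overlap (i.e., the feature continues to exist)."""
        edges = defaultdict(list)
        for t in sorted(features)[:-1]:
            for level, feats in features[t].items():
                for fa, cells_a in feats.items():
                    for fb, cells_b in features[t + 1][level].items():
                        if cells_a & cells_b:
                            edges[level].append((fa, fb))
        return dict(edges)

    print(tracking_edges(features))
    # {0: [('A', 'B')], 1: [('A1', 'B1'), ('A2', 'B1'), ('A2', 'B2')]}: one graph per level,
    # which the nesting hierarchy then ties together into a single Nested Tracking Graph.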
Various physical phenomena with sudden transients that result in structural changes can be modeled via
switched nonlinear differential-algebraic equations (DAEs) of the form
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x). \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(x \mapsto g_p(x)\) is a mapping, \(f_p: \mathbb{R} \rightarrow \mathbb{R}^n\), \(p \in \{1,\cdots,P\}\), \(P \in \mathbb{N}\), and \(\sigma: \mathbb{R} \rightarrow \{1,\cdots, P\}\) is the switching signal.
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between solutions of the above (DAE) and solutions of related
partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a
distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the
nonlinear case \(g(x) \not \equiv 0\); contributions to Task 2 are also given for
switched nonlinear DAEs arising when modeling sudden transients in water
distribution networks. In addition, this thesis contains the following further
contributions:
The notion of structured switched nonlinear DAEs has been introduced,
allowing also non-regular distributions as solutions. This extends a previous
framework that allowed only piecewise smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of solutions within the space of piecewise smooth distributions. The main
condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
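For reference, regularity of a matrix pair is the standard matrix pencil condition from DAE theory:
\[
(E,A) \text{ is regular} \quad :\Longleftrightarrow \quad \det(sE - A) \not\equiv 0 \ \text{ as a polynomial in } s .
\]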
Another contribution is the introduction of these switched nonlinear DAEs
as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
Shared memory concurrency is the pervasive programming model for multicore architectures
such as x86, Power, and ARM. Depending on the memory organization, each architecture follows
a somewhat different shared memory model. All these models, however, have one common
feature: they allow certain outcomes for concurrent programs that cannot be explained
by interleaving execution. In addition to the complexity due to architectures, compilers like
GCC and LLVM perform various program transformations, which also affect the outcomes of
concurrent programs.
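To make the "cannot be explained by interleaving execution" point concrete, the self-contained sketch below enumerates all sequentially consistent interleavings of the classic store-buffering litmus test and shows that the outcome r1 = r2 = 0 never occurs, although x86, Power, and ARM all allow it (an illustration in Python, not code from this thesis):

    from itertools import permutations

    # Store-buffering litmus test:
    #   Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x

    def run(schedule):
        mem, regs = {"x": 0, "y": 0}, {}
        ops = {
            ("T1", 0): lambda: mem.update(x=1),
            ("T1", 1): lambda: regs.update(r1=mem["y"]),
            ("T2", 0): lambda: mem.update(y=1),
            ("T2", 1): lambda: regs.update(r2=mem["x"]),
        }
        for step in schedule:
            ops[step]()
        return regs["r1"], regs["r2"]

    steps = [("T1", 0), ("T1", 1), ("T2", 0), ("T2", 1)]
    sc_outcomes = {
        run(p)
        for p in permutations(steps)
        if p.index(("T1", 0)) < p.index(("T1", 1)) and p.index(("T2", 0)) < p.index(("T2", 1))
    }
    print(sc_outcomes)   # {(0, 1), (1, 0), (1, 1)}: (0, 0) requires a model weaker than SC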
To be able to program these systems correctly and effectively, it is important to define a
formal language-level concurrency model. For efficiency, it is important that the model is
weak enough to allow various compiler optimizations on shared memory accesses as well
as efficient mappings to the architectures. For programmability, the model should be strong
enough to disallow bogus “out-of-thin-air” executions and provide strong guarantees for well-synchronized
programs. Because of these conflicting requirements, defining such a formal
model is very difficult. This is why, despite years of research, major programming languages
such as C/C++ and Java do not yet have completely adequate formal models defining their
concurrency semantics.
In this thesis, we address this challenge and develop a formal concurrency model that is very
good both in terms of compilation efficiency and of programmability. Unlike most previous
approaches, which were defined either operationally or axiomatically on single executions,
our formal model is based on event structures, which represent multiple program executions
and thus give us more structure to define the semantics of concurrency.
In more detail, our formalization has two variants: the weaker version, WEAKEST, and the
stronger version, WEAKESTMO. The WEAKEST model simulates the promising semantics proposed
by Kang et al., while WEAKESTMO is incomparable to the promising semantics. Moreover,
WEAKESTMO discards certain questionable behaviors allowed by the promising semantics.
We show that the proposed WEAKESTMO model resolves the out-of-thin-air problem, provides
standard data-race-freedom (DRF) guarantees, allows the desirable optimizations, and can be
mapped to architectures like x86, PowerPC, and ARMv7. Additionally, our models are
flexible enough to leverage existing results from the literature to establish data-race-freedom
(DRF) guarantees and correctness of compilation.
In addition, in order to ensure the correctness of compilation by a major compiler, we developed
a translation validator targeting LLVM’s “opt” transformations of concurrent C/C++
programs. Using the validator, we identified a few subtle compilation bugs, which were reported
and subsequently fixed. Additionally, we observe that LLVM's concurrency semantics differs
from that of C11; there are transformations which are justified in C11 but not in LLVM and
vice versa. Considering the subtle aspects of LLVM concurrency, we formalized a fragment
of LLVM’s concurrency semantics and integrated it into our WEAKESTMO model.
The systems in industrial automation management (IAM) are information systems. The management parts of such systems are software components that support the manufacturing processes. The operational parts control highly plug-compatible devices, such as controllers, sensors and motors. Process variability and topology variability are the two main characteristics of software families in this domain. Furthermore, three roles of stakeholders -- requirement engineers, hardware-oriented engineers, and software developers -- participate in different derivation stages and have different variability concerns. In current practice, the development and reuse of such systems is costly and time-consuming, due to the complexity of topology and process variability. To overcome these challenges, the goal of this thesis is to develop an approach to improve the software product derivation process for systems in industrial automation management, where different variability types are concerned in different derivation stages. Current state-of-the-art approaches commonly use general-purpose variability modeling languages to represent variability, which is not sufficient for IAM systems. The process and topology variability requires more user-centered modeling and representation. The insufficiency of variability modeling leads to low efficiency during the staged derivation process involving different stakeholders. Up to now, product line approaches for systematic variability modeling and realization have not been well established for such complex domains. The model-based derivation approach presented in this thesis integrates feature modeling with domain-specific models for expressing processes and topology. The multi-variability modeling framework includes the meta-models of the three variability types and their associations. The realization and implementation of the multi-variability involves the mapping and the tracing of variants to their corresponding software product line assets. Based on the foundation of multi-variability modeling and realization, a derivation infrastructure is developed, which enables a semi-automated software derivation approach. It supports the configuration of different variability types to be integrated into the staged derivation process of the involved stakeholders. The derivation approach is evaluated in an industry-grade case study of a complex software system. The feasibility is demonstrated by applying the approach in the case study. By using the approach, both the size of the reusable core assets and the automation level of derivation are significantly improved. Furthermore, semi-structured interviews with engineers in practice have evaluated the usefulness and ease-of-use of the proposed approach. The results show a positive attitude towards applying the approach in practice, and high potential to generalize it to other related domains.
The usage of sensors in modern technical systems and consumer products is in a rapid increase. This advancement can be characterized by two major factors, namely, the mass introduction of consumer oriented sensing devices to the market and the sheer amount of sensor data being generated. These characteristics raise subsequent challenges regarding both the consumer sensing devices' reliability and the management and utilization of the generated sensor data. This thesis addresses these challenges through two main contributions. It presents a novel framework that leverages sentiment analysis techniques in order to assess the quality of consumer sensing devices. It also couples semantic technologies with big data technologies to present a new optimized approach for realization and management of semantic sensor data, hence providing a robust means of integration, analysis, and reuse of the generated data. The thesis also presents several applications that show the potential of the contributions in real-life scenarios.
Due to the broad range, growing feature set and fast release pace of new sensor-based products, evaluating these products is very challenging as standard product testing is not practical. As an alternative, an end-to-end aspect-based sentiment summarizer pipeline for evaluation of consumer sensing devices is presented. The pipeline uses product reviews to extract the sentiment at the aspect level and includes several components namely, product name extractor, aspects extractor and a lexicon-based sentiment extractor which handles multiple sentiment analysis challenges such as sentiment shifters, negations, and comparative sentences among others. The proposed summarizer's components generally outperform the state-of-the-art approaches. As a use case, features of the market leading fitness trackers are evaluated and a dynamic visual summarizer is presented to display the evaluation results and to provide personalized product recommendations for potential customers.
The increased usage of sensing devices in the consumer market is accompanied with increased deployment of sensors in various other fields such as industry, agriculture, and energy production systems. This necessitates using efficient and scalable methods for storing and processing of sensor data. Coupling big data technologies with semantic techniques not only helps to achieve the desired storage and processing goals, but also facilitates data integration, data analysis, and the utilization of data in unforeseen future applications through preserving the data generation context. This thesis proposes an efficient and scalable solution for semantification, storage and processing of raw sensor data through ontological modelling of sensor data and a novel encoding scheme that harnesses the split between the statements of the conceptual model of an ontology (TBox) and the individual facts (ABox) along with in-memory processing capabilities of modern big data systems. A sample use case is further introduced where a smartphone is deployed in a transportation bus to collect various sensor data which is then utilized in detecting street anomalies.
In addition to the aforementioned contributions, and to highlight the potential use cases of publicly available sensor data, a recommender system is developed using running route data, used for proximity-based retrieval, to provide personalized suggestions for new routes considering the runner's performance as well as visual and nature-related route preferences.
This thesis aims at enhancing the integration of sensing devices in daily life applications through facilitating the public acquisition of consumer sensing devices. It also aims at achieving better integration and processing of sensor data in order to enable new potential usage scenarios of the raw generated data.
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
Linking protistan community shifts along salinity gradients with cellular haloadaptation strategies
(2019)
Salinity is one of the most structuring environmental factors for microeukaryotic communities. Using eDNA barcoding, I detected significant shifts in microeukaryotic community compositions occurring at distinct salinities between brackish and marine conditions in the Baltic Sea. Furthermore, I conducted a metadata analysis including my own and other marine and hypersaline community sequence data to confirm the existence of salinity-related transition boundaries and significant changes in alpha diversity patterns along a brackish to hypersaline gradient. One hypothesis for the formation of salinity-dependent transition boundaries between brackish and hypersaline conditions is the use of different cellular haloadaptation strategies. To test this hypothesis, I conducted metatranscriptome analyses of microeukaryotic communities along a pronounced salinity gradient (40 – 380 ‰). Clustering of functional transcripts revealed differences in metabolic properties and metabolic capacities between microeukaryotic communities at specific salinities, corresponding to the transition boundaries already observed in the taxonomic eDNA barcoding approach. Specifically, microeukaryotic communities thriving at mid-hypersaline conditions (≤ 150 ‰) seem to predominantly apply the ‘low-salt – organic-solutes-in’ strategy by accumulating compatible solutes to counteract osmotic stress. Indications were found both for the intracellular synthesis of compatible solutes and for cellular transport systems. In contrast, communities of extreme-hypersaline habitats (≥ 200 ‰) may preferentially use the ‘high-salt-in’ strategy, i.e., the intracellular accumulation of inorganic ions in high concentrations, which is implied by the increased expression of Mg2+, K+, Cl- transporters and channels.
In order to characterize the ‘low-salt – organic-solutes-in’ strategy applied by protists in more detail, I conducted a time-resolved transcriptome analysis of the heterotrophic ciliate Schmidingerothrix salinarum serving as a model organism. S. salinarum was thus subjected to a salt-up shock to investigate the intracellular response to osmotic stress via shifts in gene expression. After increasing the external salinity, an increased expression of two-component signal transduction systems and MAPK cascades was observed. In an early reaction, the expression of transport mechanisms for K+, Cl- and Ca2+ increased, which may enhance the capacity of K+, Cl- and Ca2+ in the cytoplasm to compensate for a possibly harmful Na+ influx. Expression of enzymes for the synthesis of possible compatible solutes, starting with glycine betaine, followed by ectoine and later proline, could imply that the inorganic ions K+, Cl- and Ca2+ are gradually replaced by the synthesized compatible solutes. Additionally, expressed transporters for choline (the precursor of glycine betaine) and proline could indicate an intracellular accumulation of compatible solutes to balance the external salinity. During this accumulation, the up-regulated ion export mechanisms may increase the capacity for Na+ expulsion from the cytoplasm, and ion compartmentalization between cell organelles seems to occur.
The results of my PhD project revealed first evidence at the molecular level for the salinity-dependent use of different haloadaptation strategies in microeukaryotes and significantly extend the existing knowledge about haloadaptation processes in ciliates. The results provide the groundwork for future research, such as (comparative) transcriptome analyses of ciliates thriving in extreme-hypersaline habitats or experiments like qRT-PCR to validate the transcriptome results.
On the Effect of Nanofillers on the Environmental Stress Cracking Resistance of Glassy Polymers
(2019)
It is well known that reinforcing polymers with small amounts of nano-sized fillers is one of the most effective methods for simultaneously improving their mechanical and thermal properties. However, only a small number of studies have focused on environmental stress cracking (ESC), which is a major issue for premature failures of plastic products in service. Therefore, the contribution of this work focused on the influence of nano-SiO2 particles on the morphological, optical, mechanical, thermal, as well as environmental stress cracking properties of nanocomposites based on amorphous polymers.
Polycarbonate (PC), polystyrene (PS) and poly(methyl methacrylate) (PMMA) nanocomposites containing different amounts and sizes of nano-SiO2 particles were prepared using a twin-screw extruder followed by injection molding. Adding a small amount of nano-SiO2 caused a reduction in optical properties but improved the tensile, toughness, and thermal properties of the polymer nanocomposites. The significant enhancement in mechanical and thermal properties was attributed to the adequate level of dispersion and interfacial interaction of the SiO2 nanoparticles in the polymer matrix. This situation possibly increased the efficiency of stress transfer across the nanocomposite components. Moreover, the data revealed a clear dependency on the filler size. The polymer nanocomposites filled with smaller nanofillers exhibited an outstanding enhancement in both mechanical properties and transparency compared with nanocomposites filled with larger particles. The best compromise of strength, toughness, and thermal properties was achieved in PC-based nanocomposites. Therefore, special attention to the influence of nanofiller on the ESC resistance was given to PC.
The ESC resistance of the materials was investigated under static loading with and without the presence of stress-cracking agents. Interestingly, the incorporation of nano-SiO2 greatly enhanced the ESC resistance of PC in all investigated fluids. This result was particularly evident with the smaller quantities and sizes of nano-SiO2. The enhancement in ESC resistance was more effective in mild agents and air, where the quality of the deformation process was vastly altered with the presence of nano-SiO2. This finding confirmed that the new structural arrangements on the molecular scale induced by nanoparticles dominate over the ESC agent absorption effect and result in greatly improving the ESC resistance of the materials. This effect was more pronounced with increasing molecular weight of PC due to an increase in craze stability and fibril density. The most important and new finding is that the ESC behavior of polymer nanocomposite/stress-cracking agent combinations can be scaled using the Hansen solubility parameter. This allowed us to predict the risk of ESC as a function of the filler content for different stress-cracking agents without performing extensive tests. For a comparison of different amorphous polymer-based nanocomposites at a given nano-SiO2 particle content, the ESC resistance of the materials improved in the following order: PMMA/SiO2 < PS/SiO2 < low molecular weight PC/SiO2 < high molecular weight PC/SiO2. In most cases, nanocomposites with 1 vol.% of nano-SiO2 particles exhibited the largest improvement in ESC resistance.
However, the remarkable improvement in the ESC resistance—particularly in PC-based nanocomposites—created some challenges related to material characterization because testing times (failure time) significantly increased. Accordingly, the superposition approach has been applied to construct a master curve of the crack propagation model from the available short-term tests at different temperatures. Good agreement of the master curves with the experimental data revealed that the superposition approach is a suitable comparative method for predicting slow crack growth behavior, particularly for long-duration cracking tests as in mild agents. This methodology made it possible to minimize testing time.
Additionally, modeling and simulations using the finite element method revealed that multi-field modeling could provide reasonable predictions for diffusion processes and their impact on fracture behavior in different stress cracking agents. This finding suggests that the implemented model may be a useful tool for quick screening and mitigating the risk of ESC failures in plastic products.
Most modern multiprocessors offer weak memory behavior to improve their performance in terms of throughput. They allow the order of memory operations to be observed differently by each processor. This is opposite to the concept of sequential consistency (SC), which enforces a unique sequential view of all operations for all processors. Because most software has been and still is developed with SC in mind, we face a gap between the expected behavior and the actual behavior on modern architectures. The issues described only affect multithreaded software and therefore most programmers might never face them. However, multi-threaded bare metal software like operating systems, embedded software, and real-time software has to consider memory consistency and ensure that the order of memory operations does not yield unexpected results. This software is more critical than general consumer software in terms of consequences, and therefore new methods are needed to ensure its correct behavior.
In general, a memory system is considered weak if it allows behavior that is not possible in a sequential system. For example, in the SPARC processor with total store ordering (TSO) consistency, all writes might be delayed by store buffers before they eventually are processed by the main memory. This allows the issuing process to work with its own written values before other processes observe them (i.e., reading its own value before it leaves the store buffer). Because this behavior is not possible with sequential consistency, TSO is considered to be weaker than SC. Programming in the context of weak memory architectures requires a proper comprehension of how the model deviates from expected sequential behavior. For the verification of these programs, formal representations are required that capture the weak behavior in order to utilize formal verification tools.
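A minimal operational sketch of the store-buffer mechanism just described, assuming a toy machine with one FIFO buffer per thread (an illustration, not one of the reference machines developed in this thesis):

    from collections import deque

    class TSOMachine:
        """Toy TSO model: writes go to a per-thread FIFO store buffer; reads check the
        own buffer first (read-own-write), then main memory; buffered stores become
        globally visible only when flushed."""
        def __init__(self):
            self.mem = {}
            self.buf = {}                              # thread id -> deque of (var, value)

        def write(self, tid, var, val):
            self.buf.setdefault(tid, deque()).append((var, val))

        def read(self, tid, var):
            for v, val in reversed(self.buf.get(tid, deque())):   # newest buffered store wins
                if v == var:
                    return val
            return self.mem.get(var, 0)

        def flush_one(self, tid):
            if self.buf.get(tid):
                var, val = self.buf[tid].popleft()
                self.mem[var] = val

    # Store-buffering example: both stores have been issued, yet both loads still read 0,
    # an outcome that no sequentially consistent interleaving can produce.
    m = TSOMachine()
    m.write("T1", "x", 1); m.write("T2", "y", 1)
    print(m.read("T1", "y"), m.read("T2", "x"))        # 0 0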
This thesis explores different verification approaches and correspondingly fitting representations of a multitude of memory models. In a joint effort, we started with the concept of testing memory operation traces with regard to their consistency with different memory consistency models. A memory operation trace is directly derived from a program trace and consists of a sequence of read and write operations for each process. Analyzing the testing problem, we are able to prove that the problem is NP-complete for most memory models. In that process, a satisfiability (SAT) encoding for given problem instances was developed, which can be used in reachability and robustness analysis.
In order to cover all program executions instead of just a single program trace, additional representations are introduced and explored throughout this thesis. One of the representations introduced is a novel approach to specify a weak memory system using temporal logics. A set of linear temporal logic (LTL) formulas is developed that describes all properties required to restrict possible traces to those consistent to the given memory model. The resulting LTL specifications can directly be used in model checking, e.g., to check safety conditions. Unfortunately, the derived LTL specifications suffer from the state explosion problem: Even small examples, like the Peterson mutual exclusion algorithm, tend to generate huge formulas and require vast amounts of memory for verification. For this reason, it is concluded that using the proposed verification approach these specifications are not well suited for verification of real world software. Nonetheless, they provide comprehensive and formally correct descriptions that might be used elsewhere, e.g., programming or teaching.
Another approach to represent these models are operational semantics. In this thesis, operational semantics of weak memory models are provided in the form of reference machines that are both correct and complete with regard to the memory model specification. Operational semantics make it possible to simulate systems with weak memory models step by step. This provides an elegant way to study the effects that lead to weakly consistent behavior, while still providing a basis for formal verification. The operational models are then incorporated in verification tools for multithreaded software. These state space exploration tools proved suitable for the verification of multithreaded software in a weakly consistent memory environment. However, because not only the memory system but also the processor are expressed as operational semantics, some verification approaches will not be feasible due to the large size of the state space.
Finally, to tackle the aforementioned issue, a state transition system for parallel programs is proposed. The transition system is defined by a set of structural operational semantics (SOS) rules and a suitable memory structure that can cover multiple memory models. This allows the state space to be reduced by means of smart representations and approximation approaches in future work.
The present work examines the behaviour of thermoplastic composites by means of experimental and numerical investigations. The aim of these investigations is to identify and quantify the failure behaviour and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fibre-reinforced polymer composites and to transfer the insights gained into the properties and behaviour of a material model for predicting the crash behaviour of these materials in transient analyses.

Representatives of the investigated material classes are undrawn and medium-drawn circular knits and glass-fibre-reinforced thermoplastics (GMT). The investigations on circular-knit glass-fibre (GF) reinforced polyethylene terephthalate (PET) were part of a research project on characterising both the processability and the mechanical behaviour. Experiments on GMT and chopped-fibre GMT were also carried out for comparison with the knitted material and serve to confirm the behaviour observed for the knit.

Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to study this influence. This specific geometry offers advantages in particular with respect to the energy absorption capacity and the manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were carried out to study damage propagation and to characterise the toughness of the investigated materials.

Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures within individual plies). By coupling the actual failure modes with crash characteristics such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained.

Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional, large deformations. With respect to the cross-sectional lay-up, the model consists of a mesoscopic representation that distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. Effects of friction between the impactor and the material as well as between individual plies were taken into account. The locally prevailing strain rate, energy and stress-strain distribution across the mesoscopic phases could also be observed. This model clearly shows the various effects that arise from the heterogeneous character of the laminate and also provides hints towards explanations of these effects.

Based on the results of the above investigations, a phenomenological model incorporating a-priori information on the inherent material behaviour is proposed. Since the crash behaviour is dominated by the heterogeneous character of the material, the phases are treated separately in the model. A simple method for determining the mesoscopic properties is discussed.

For describing the behaviour of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would be sufficient. For describing the behaviour of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fibre-matrix interface failure and fibre fracture. The proposed model distinguishes between load cases of axial crushing and failure without crushing. This distinction enables an explicit modelling of the material that takes into account the specific material state and the geometry for the exceptional load case that leads to progressive failure.
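The coupled plasticity and damage formulation mentioned above can be illustrated, in a strongly simplified one-dimensional form, by the following sketch; it is an assumption-laden stand-in (small strains, linear isotropic hardening, exponential damage driven by accumulated plastic strain), not the material model developed in the thesis.

```python
import math

def stress_update(eps, state, E=3000.0, sigma_y=50.0, H=200.0, a=5.0):
    """Illustrative 1-D coupled plasticity/damage stress update; all parameter
    names and values are assumptions, not identified material data."""
    eps_p, alpha = state["eps_p"], state["alpha"]
    sigma_trial = E * (eps - eps_p)                    # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)       # yield function
    if f > 0.0:                                        # plastic corrector (return mapping)
        dgamma = f / (E + H)
        eps_p += dgamma * (1.0 if sigma_trial > 0 else -1.0)
        alpha += dgamma
        sigma_trial = E * (eps - eps_p)
    d = 1.0 - math.exp(-a * alpha)                     # damage grows with plastic strain
    state.update(eps_p=eps_p, alpha=alpha, damage=d)
    return (1.0 - d) * sigma_trial                     # degraded (softened) stress

state = {"eps_p": 0.0, "alpha": 0.0, "damage": 0.0}
for eps in [0.005 * k for k in range(1, 11)]:          # monotonic tension
    print(round(stress_update(eps, state), 1))
```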
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases, such as cancer metastasis. For the migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating the contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
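The notion of a piecewise deterministic process can be illustrated with a deliberately minimal sketch: a one-dimensional "cell" whose position drifts deterministically while a discrete adhesion state switches at random exponential times. The rates and the drift rule are illustrative assumptions, not the thesis model.

```python
import random

def simulate(t_end=100.0, k_on=1.0, k_off=0.5, v=1.0, seed=0):
    """Minimal piecewise deterministic process: deterministic drift between
    stochastic switches of a binary adhesion state."""
    rng = random.Random(seed)
    t, x, bound = 0.0, 0.0, True
    while t < t_end:
        rate = k_off if bound else k_on          # rate of leaving the current state
        dt = min(rng.expovariate(rate), t_end - t)
        if bound:                                # adhesion transmits traction
            x += v * dt                          # deterministic motion between jumps
        t += dt
        bound = not bound                        # stochastic adhesion kinetics
    return x

print(simulate())
```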
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks in an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks’ execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU scheduling of other cores, but also between the WCET and the memory bandwidth assignments to cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core’s maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known, even memory bandwidth assignment to the active cores over time, where the number of active cores represents the cores with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge sort. Second, we demonstrate, using a real avionics certified safety-critical application, how our method can preserve an existing application’s single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when changes in the memory bandwidth assignment to cores are time-triggered and known. It provides a stall maximization algorithm, with a complexity similar to that of a concave optimization problem, that efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state-of-the-art, using an Integrated Modular Avionics scenario.
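As a hedged illustration of what time-triggered, known bandwidth assignments enable, the following sketch bounds the time a task needs to complete a given number of memory accesses when the bandwidth granted to its core changes per slot; the slot model and all names are assumptions, not the dissertation's stall maximization algorithm.

```python
def completion_time_bound(demand_accesses, slots):
    """slots: list of (slot_length_in_cycles, accesses_served_per_cycle) granted
    to the task's core over time; returns an upper bound on the cycles needed to
    serve `demand_accesses` memory requests, i.e. execution inflated by stalls."""
    remaining, elapsed = demand_accesses, 0.0
    for length, rate in slots:
        served = min(remaining, length * rate)
        if rate > 0 and served == remaining:
            elapsed += served / rate             # finishes inside this slot
            return elapsed
        elapsed += length                        # whole slot elapses (possibly stalled)
        remaining -= served
    raise ValueError("demand not satisfiable within the given slots")

# e.g. 1000 accesses, alternating full-bandwidth and throttled slots
print(completion_time_bound(1000, [(500, 1.0), (500, 0.0), (500, 1.0)]))
```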
Large-scale distributed systems consist of a number of components, take a number of parameter values as input, and behave differently based on a number of non-deterministic events. All these features—components, parameter values, and events—interact in complicated ways, and unanticipated interactions may lead to bugs. Empirically, many bugs in these systems are caused by interactions of only a small number of features. In certain cases, it may be possible to test all interactions of \(k\) features for a small constant \(k\) by executing a family of tests that is exponentially or even doubly-exponentially smaller than the family of all tests. Thus, in such cases we can effectively uncover all bugs that require up to \(k\)-wise interactions of features.
In this thesis we study two occurrences of this phenomenon. First, many bugs in distributed systems are caused by network partition faults. In most cases these bugs occur due to two or three key nodes, such as leaders or replicas, not being able to communicate, or because the leading node finds itself in a block of the partition without quorum. Second, bugs may occur due to unexpected schedules (interleavings) of concurrent events—concurrent exchange of messages and concurrent access to shared resources. Again, many bugs depend only on the relative ordering of a small number of events. We call the smallest number of events whose ordering causes a bug the depth of the bug. We show that in both testing scenarios we can effectively uncover bugs involving a small number of nodes or bugs of small depth by executing small families of tests.
We phrase both testing scenarios in terms of an abstract framework of tests, testing goals, and goal coverage. Sets of tests that cover all testing goals are called covering families. We give a general construction that shows that whenever a random test covers a fixed goal with sufficiently high probability, a small randomly chosen set of tests is a covering family with high probability. We then introduce concrete coverage notions relating to network partition faults and bugs of small depth. In the case of network partition faults, we show that for the introduced coverage notions we can find a lower bound on the probability that a random test covers a given goal. Our general construction then yields a randomized testing procedure that achieves full coverage—and hence finds bugs—quickly.
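The general construction referred to above rests on a standard union-bound calculation; the following sketch computes the resulting family size for illustrative numbers (the thesis' exact bound and constants may differ).

```python
import math

def covering_family_size(num_goals, p, delta=0.01):
    """Number of independent random tests after which every one of `num_goals`
    goals is covered with probability at least 1 - delta, assuming each test
    covers any fixed goal with probability at least p (union bound argument)."""
    return math.ceil(math.log(num_goals / delta) / p)

# e.g. 1000 partition-fault goals, each covered with probability >= 0.05 per test
print(covering_family_size(1000, 0.05))   # 231 random tests suffice w.h.p.
```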
In case of coverage notions related to bugs of small depth, if the events in the program form a non-trivial partial order, our general construction may give a suboptimal bound. Thus, we study other ways of constructing covering families. We show that if the events in a concurrent program are partially ordered as a tree, we can explicitly construct a covering family of small size: for balanced trees, our construction is polylogarithmic in the number of events. For the case when the partial order of events does not have a "nice" structure, and the events and their relation to previous events are revealed while the program is running, we give an online construction of covering families. Based on the construction, we develop a randomized scheduler called PCTCP that uniformly samples schedules from a covering family and has a rigorous guarantee of finding bugs of small depth. We experiment with an implementation of PCTCP on two real-world distributed systems—Zookeeper and Cassandra—and show that it can effectively find bugs.
In the last decade, injection molding of long-fiber reinforced thermoplastics
(LFT) has been established as a low-cost, high volume technique for manufacturing
parts with complex shape without any post-treatment [1–3]. Applications
are mainly found in the automotive industry with a volume annually
growing by 10% to 15% [4].
While first applications were based on polyamide (PA6 and PA6.6), the market
share of glass fiber reinforced polypropylene (PP) is growing due to cost savings
and ease of processing. With the use of polypropylene, different processing
techniques such as gas-assisted injection molding [5] or injection compression
molding [6] have emerged in addition to injection molding [7, 8].
In order to overcome or justify the higher material costs compared to short fiber reinforced thermoplastics, the manufacturing techniques for LFT pellets with fiber lengths greater than 10 mm have evolved, starting from pultrusion, by improving impregnation and throughput [9] or by direct addition of fiber strands in the mold [10–12].
The benefit of long glass fiber reinforcement either in PP or PA is mainly due
to the enhanced resistance to fiber pull-out resulting in an increase in impact
properties and strength [13–19], even at low temperature levels [20]. Creep
and fatigue resistance are also substantially improved [21, 22].
The performance of fiber reinforced thermoplastics manufactured by injection
molding strongly depends on the flow-induced microstructure which is
driven by materials composition, processing conditions and part geometry.
The anisotropic microstructure is characterized by fiber fraction and dispersion,
fiber length and fiber orientation.
Facing the complexity of this processing technique, simulation becomes a precious
tool already in the concept phase for parts manufactured by injection
molding. Process simulation supports decisions with respect to choice of concepts
and materials. The part design is determined in terms of mold filling
including location of gates, vents and weld lines. Tool design requires the
determination of melt feeding, logistics and mold heating. Subsequently, performance
including prediction of shrinkage and warpage as well as structural
analysis is evaluated [23].
While simulation based on two-dimensional representation of three-dimensional
part geometry has been extensively used during the last two decades, the
complexity of the parts as well as the trend towards solid modelling in CAD
and CAE demands the step towards three-dimensional process simulation.
The scope of this work is the prediction of flow-induced microstructure during injection molding of long glass fiber reinforced polypropylene using three-dimensional process simulation. Modelling of the injection molding process in
three dimensions is supported experimentally by rheological characterization
in both shear and extensional flow and by two- and three-dimensional evaluation
of microstructure.
In chapter 2 the fundamentals of rheometry and rheology are presented with
respect to long fiber reinforced thermoplastics. The influence of parameters
on microstructure is described and approaches for modelling the state of microstructure
and its dynamics are discussed.
Chapter 3 introduces a rheometric technique allowing for rheological characterization
of polymer melts at processing conditions as encountered during
manufacturing. Using this rheometer, both shear and extensional viscosity of
long glass fiber reinforced polypropylene are measured with respect to composition
of materials, processing conditions and geometry of the cavity.
Chapter 4 contains the evaluation of microstructure of long glass fiber reinforced
polypropylene in terms of two-dimensional fiber orientation and its dependence
on materials parameters and processing condition. For the evaluation
of three-dimensional microstructure, a technique based on x-ray tomography
is introduced.
In chapter 5, modelling of microstructural dynamics is addressed. One-way
coupling of interactions between fluid and fibers is described macroscopically.
The flow behavior of fibers in the vicinity of cavity walls is evaluated experimentally.
From these observations, a model for treatment of fiber-wall interaction
with respect to numerical simulation is proposed.
Chapter 6 presents the application of three-dimensional simulation of the injection
molding process. Mold filling simulation is performed using a commercial
code while prediction of 3D fiber orientation is based on a proprietary module.
The rheological and thermal properties derived in chapter 3 are tested by simulating the experiments and comparing the predicted pressure and temperature profiles against the recorded results. The performance of the fiber orientation prediction is verified using analytical solutions of test examples from the literature.
The capability of three-dimensional simulation is demonstrated based on the
simulation of mold filling and prediction of fiber orientation for an automotive
part.
Solid particle erosion is usually undesirable, as it leads to the development of cracks and holes, material removal and other degradation mechanisms that, as a final consequence, reduce the durability of the structure exposed to erosion. The main aim of this study was to characterise the erosion behaviour of polymers and polymer composites, to understand the nature and the mechanisms of the material removal, and to suggest modifications and protective strategies for the effective reduction of the material removal due to erosion.
In polymers, the effects of morphology as well as of mechanical, thermomechanical and fracture-mechanical properties were discussed. It was established that there is no general rule for high resistance to erosive wear. Because of the different erosive wear mechanisms that can take place, wear resistance can be achieved by more than one type of material. Difficulties with materials optimisation for wear reduction arise from the fact that a material can show different behaviour depending on the impact angle and the experimental conditions. Effects of polymer modification through mixing or blending with elastomers and inclusion of nanoparticles were also discussed. Toughness modification of epoxy resin with hygrothermally decomposed polyesterurethane can be favourable for the erosion resistance. This type of modification also changes the crosslinking characteristics of the modified EP, and it was established that the crosslink density, along with the fracture energy, is a decisive parameter for the erosion response. Melt blending of thermoplastic polymers with functionalised rubbers, on the other hand, can also have a positive influence, whereas the inclusion of nanoparticles deteriorates the erosion resistance at low oblique impact angles (30°).
The effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence, and the number, position and existence of interleaves were studied in polymer composites. Linear and inverse rules of mixture were applied in order to predict the erosion rate of a composite system as a function of the erosion rates of its constituents and their relative content. The best results were generally delivered by the inverse rule of mixtures approach.
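For reference, a hedged rendering of the two mixture rules mentioned above (the thesis' notation may differ): \(ER_c\), \(ER_f\) and \(ER_m\) denote the erosion rates of the composite, the fibre and the matrix, and \(V_f\) the fibre volume fraction.

```latex
\begin{align}
  \text{linear rule of mixtures:}  \quad & ER_c = V_f\, ER_f + (1 - V_f)\, ER_m \\
  \text{inverse rule of mixtures:} \quad & \frac{1}{ER_c} = \frac{V_f}{ER_f} + \frac{1 - V_f}{ER_m}
\end{align}
```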
A semi-empirical model, proposed to describe the property degradation and damage growth characteristics and to predict residual properties after a single impact, was applied to the case of solid particle erosion. Theoretical predictions and experimental results were in very good agreement.
Solid particle erosion occurs when solid particles impinge on surfaces and is usually characterised by material removal that, besides the particle velocity and the impact angle, depends strongly on the respective material. In recent years, the use of polymers and composites in place of traditional materials has increased strongly. Polymers and polymer composites exhibit a relatively high erosion rate (ER), which considerably limits the potential use of these materials under erosive environmental conditions.

Investigations of the erosion behaviour of selected polymers and polymer composites have shown that these systems follow different wear mechanisms, which are very complex and are not governed by a single material property. Based on the ER, the erosion behaviour can be roughly divided into two categories: brittle and ductile erosion behaviour. Brittle erosion behaviour shows a maximum ER at 90°, whereas for ductile behaviour the maximum lies at 30°. Whether a material exhibits the one or the other erosion behaviour depends not only on its properties but also on the respective test parameters.

The aim of this research was to characterise the fundamental behaviour of polymers and composites under erosion, to identify the different wear mechanisms, and to capture the decisive material properties and characteristic values, in order to enable or improve applications of these materials under erosive conditions. The essential influencing factors for erosion were determined experimentally on an exemplary selection of polymers, elastomers, modified polymers and fibre-reinforced composites.

Thermoplastic polymers and thermoplastic and crosslinked elastomers
Attempts to correlate the erosion resistance of selected polymers (polyethylenes and polyurethanes) with various material properties have shown that there is no clear dependence, neither on individual characteristic values nor on combinations of properties. Determining the material properties under the same experimental conditions as in the erosion tests may possibly lead to a better correlation between ER and material characteristics.

Modified epoxy resin
Using the example of a modified epoxy resin (EP) with different crosslink densities, a correlation between erosion resistance and fracture energy as well as between erosion resistance and crosslink density was found. The modification was carried out with different contents of a hygrothermally decomposed polyurethane (HD-PUR). The relation between ER and crosslinking parameters is consistent with the theory of rubber elasticity.

Efficiency of modification in thermosets, thermoplastics and elastomers
Furthermore, the influence of modifications of polymers and elastomers was investigated. The above-mentioned system (i.e. EP/HD-PUR) also allows the influence of toughness modification of the epoxy resin (EP) on the erosion behaviour to be studied. It was shown that, for HD-PUR contents of more than 20 wt.%, this modification has a positive effect on the erosion resistance. By varying the HD-PUR content, material properties between those of a typical thermoset and those of a less elastic rubber can be generated for this EP. The modified EP resin therefore represents a very good model material for studying the influence of the experimental conditions and for examining whether different erodents lead to the same erosion mechanisms. The transition from thermoset-like to ductile behaviour was investigated using four erodents. The tests showed that such a transition occurs when very fine, angular particles (corundum) are used as erodents. Particle size and shape are of decisive importance for the respective wear mechanisms.

The efficiency of novel thermoplastic elastomers with a co-continuous phase structure, consisting of thermoplastic polyester and rubber (functionalised NBR and EPDM rubber), was investigated with respect to erosion resistance. Large contents of functionalised rubber (more than 20 wt.%) are beneficial for the erosion resistance. It was further examined whether the outstanding erosion resistance of polyurethane (PUR) could be increased even further by adding nanosilicates. The result was that the nanoparticles have a negative effect, especially at a low impact angle (30°). The weak adhesion between matrix and particles facilitates the initiation and growth of cracks, which leads to faster material removal from the surface.

Fibre-reinforced composites
Furthermore, fibre-reinforced composites (FRC) with thermoplastic and thermoset matrices were investigated with respect to their erosive wear behaviour. It was of great interest to study the influence of fibre length and orientation. Short-fibre-reinforced systems have a better erosion resistance than unidirectional (UD) systems. The role of fibre orientation can only be considered in combination with other parameters such as matrix toughness, fibre content or fibre-matrix adhesion. For GF/PP composites, the systems eroded parallel to the drawing direction show the lowest resistance, whereas for a GF/EP system the maximum ER occurs in the perpendicular direction. An improvement of the interfacial shear strength has a lasting influence on the erosive wear rate. If the interfacial adhesion is sufficient, the erosion direction plays an insignificant role for the ER. It was further shown that the presence of tough interleaves leads to a marked improvement in the erosion resistance of CF/EP composites.

A further task was to determine the role of the fibre volume fraction. Linear, inverse and modified rules of mixture were applied, and it was found that the inverse rules of mixture describe the ER as a function of the fibre volume fraction better.

In application areas of fibre-reinforced composites, knowledge not only of the ER but also of the residual properties is required. A semi-empirical model for predicting the impact energy threshold (Uo) for the onset of strength reduction and the residual tensile strength after an impact load was applied in the investigation of erosive wear. Experimental results and theoretical predictions agreed very well, not only for thermoset CF/EP composites but also for composites with a thermoplastic matrix (GF/PP).
Wine and alcoholic fermentations are complex and fascinating ecosystems. Wine aroma is shaped by the wine’s chemical composition, in which both microbes and grape constituents play crucial roles. Activities of the microbial community impact the sensory properties of the final product; therefore, the characterisation of microbial diversity is essential in understanding and predicting sensory properties of wine. Characterisation has been challenging with traditional approaches, where microbes are isolated and therefore analyzed outside of their natural environment. This causes a bias in the observed microbial community structure. In addition, true community interactions cannot be studied using isolates. Furthermore, the multiplex ties between wine chemical and sensory compositions remain elusive due to their multivariate and nonlinear nature. Therefore, the sensorial outcome arising from different microbial communities has remained inconclusive.
In this thesis, microbial diversity during Riesling wine fermentations is investigated with the aim to understand the roles of microbial communities during fermentations and their links to sensory properties. With the advancement of high-throughput ‘omics methods, such as next-generation sequencing (NGS) technologies, it is now possible to study microbial communities and their functions without isolation by culturing. This developing field and its potential for the wine community are reviewed in Chapter 1. The standardisation of methods remains challenging in the field. DNA extraction is a key step in capturing the microbial diversity in samples for generating NGS data; therefore, DNA extraction methods are evaluated in Chapter 2. In Chapter 3, machine learning is utilized to guide the mining of raw data generated by untargeted GC-MS analysis. This step is crucial in order to take full advantage of the large scope of data generated by ‘omics methods. These chapters lay a solid foundation for Chapters 4 and 5, where microbial community structures and their outputs, i.e. chemical and sensory compositions, are studied using approaches and tools based on multiple ‘omics methods.
The results of this thesis show first that, by using novel statistical approaches, it is possible to extract meaningful information from heterogeneous biological, chemical and sensorial data. Secondly, the results suggest that the variation in wine aroma might be related to microbial interactions taking place not only inside a single community, but also
between communities, such as vineyard and winery communities. Therefore, the true sensory expression of terroir might be masked by the interaction between two microbial communities, although more work is needed to uncover this potential relationship. Such potential interaction mechanisms were uncovered between non-Saccharomyces yeast and bacteria in this work, and unexpected novel bacterial growth was observed during alcoholic fermentation. This suggests new layers in the understanding of wine fermentations. In the future, multi-omic approaches could be applied to identify biological pathways leading to specific wine aromas as well as to investigate the effects of specific winemaking conditions. These results are relevant not just for the wine industry, but also for other industries where complex microbial networks are important. As such, the approaches presented in this thesis might find wide use in the food industry.
Ranking lists are an essential methodology to succinctly summarize outstanding items, computed over database tables or crowdsourced in dedicated websites. In this thesis, we propose the usage of automatically generated, entity-centric rankings to discover insights in data. We present PALEO, a framework for data exploration through reverse engineering top-k database queries; that is, given a database and a sample top-k input list, our approach aims at determining an SQL query that returns results similar to the provided input when executed over the database. The core problem consists of finding selection predicates that return the given items, determining the correct ranking criteria, and evaluating the most promising candidate queries first. PALEO operates on a subset of the base data, uses data samples, histograms, and descriptive statistics, and further proposes models that assess the suitability of candidate queries, which helps limit false positives. Furthermore, this thesis presents COMPETE, a novel approach that models and computes dominance over user-provided input entities, given a database of top-k rankings. The resulting entities are found superior or inferior with a tunable degree of dominance over the input set---a very intuitive, yet insightful way to explore pros and cons of entities of interest. Several notions of dominance are defined which differ in computational complexity and strictness of the dominance concept---yet they are interdependent through containment relations. COMPETE is able to pick the most promising approach to satisfy a user request at minimal runtime latency, using a probabilistic model that estimates the result sizes. The individual flavors of dominance are cast into a stack of algorithms over inverted indices and auxiliary structures, enabling pruning techniques to avoid significant data access over large datasets of rankings.
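To give a flavour of what such a dominance notion can look like, here is a simplified, assumption-based sketch (the thesis defines several notions, which need not coincide with this one): entity a dominates entity b if a is ranked at least as well in every ranking containing both, and strictly better in at least one.

```python
def dominates(a, b, rankings):
    """rankings: list of dicts mapping entity -> rank (1 = best).
    Simplified, illustrative dominance check over top-k rankings."""
    common = [r for r in rankings if a in r and b in r]
    if not common:
        return False
    return all(r[a] <= r[b] for r in common) and any(r[a] < r[b] for r in common)

rankings = [{"x": 1, "y": 3}, {"x": 2, "y": 2}, {"y": 1, "z": 4}]
print(dominates("x", "y", rankings))   # True under this simplified notion
```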
Function of two redox sensing kinases from the methanogenic archaeon Methanosarcina acetivorans
(2019)
MsmS is a heme-based redox sensor kinase in Methanosarcina acetivorans consisting of alternating PAS and GAF domains connected to a C-terminal kinase domain. In addition to MsmS, M. acetivorans possesses a second kinase, MA0863, with high sequence similarity. Interestingly, MA0863 possesses an amber codon in its second GAF domain, encoding the amino acid pyrrolysine. Thus far, no function of this residue has been resolved. In order to examine the heme iron coordination in both proteins, an improved method for the production of heme proteins was established using the Escherichia coli strain Nissle 1917. This method enables the complete reconstitution of a recombinant hemoprotein during protein production, thereby resulting in a native heme coordination. Analysis of the full-length MsmS and MA0863 confirmed a covalently bound heme cofactor, which is connected to one conserved cysteine residue in each protein. In order to identify the coordinating amino acid residues of the heme iron, UV/vis spectra of different variants were measured. These studies revealed His702 in MsmS and the corresponding His666 in MA0863 as the proximal heme ligands. MsmS has previously been described as a heme-based redox sensor. In order to examine whether the same is true for MA0863, redox-dependent kinase assays were performed. MA0863 indeed displays redox-dependent autophosphorylation activity, which is only observed under oxidizing conditions. Interestingly, this autophosphorylation was shown to be independent of the heme cofactor and its ligands and rather relies on thiol oxidation. Therefore, MA0863 was renamed RdmS (redox-dependent methyltransferase-associated sensor). In order to identify the phosphorylation site of RdmS, thin layer chromatography was performed, identifying a tyrosine residue as the putative phosphorylation site. This observation is in agreement with the lack of a so-called H-box in typical histidine kinases. Due to their genomic localization, MsmS and RdmS were postulated to form two-component systems (TCS) with the vicinally encoded regulator proteins MsrG and MsrF. Therefore, protein-protein interaction studies using the bacterial adenylate two-hybrid system were performed, suggesting an interaction of RdmS and MsmS with the three regulators MsrG/F/C. Due to these multiple interactions, these signal transduction pathways should rather be considered multicomponent systems instead of two-component systems.
Wearable activity recognition aims to identify and assess human activities with the help
of computer systems by evaluating signals of sensors which can be attached to the human
body. This provides us with valuable information in several areas: in health care, e.g. fluid
and food intake monitoring; in sports, e.g. training support and monitoring; in entertainment,
e.g. human-computer interface using body movements; in industrial scenarios, e.g.
computer support for detected work tasks. Several challenges exist for wearable activity
recognition: a large number of nonrelevant activities (null class), the evaluation of large
numbers of sensor signals (curse of dimensionality), ambiguity of sensor signals compared
to the activities and finally the high variability of human activity in general.
This thesis develops a new activity recognition strategy, called invariants classification,
which addresses these challenges, especially the variability in human activities. The
core idea is that often even highly variable actions include short, more or less invariant
sub-actions which are due to hard physical constraints. If someone opens a door, the
movement of the hand to the door handle is not fixed. However, the door handle has to
be pushed to open the door. The invariants classification algorithm is structured in four
phases: segmentation, invariant identification, classification, and spotting. The segmentation
divides the continuous sensor data stream into meaningful parts, which are related
to sub-activities. Our segmentation strategy uses the zero crossings of the central difference
quotient of the sensor signals as segment borders. The invariant identification finds
the invariant sub-activities by means of clustering and a selection strategy dependent on
certain features. The classification identifies the segments of a specific activity class, using
models generated from the invariant sub-activities. The models include the invariant
sub-activity signal and features calculated on sensor signals related to the sub-activity. In
the spotting, the classified segments are used to find the entire activity class instances in
the continuous sensor data stream. For this purpose, we use the position of the invariant
sub-activity in the related activity class instance for the estimation of the borders of the
activity instances.
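As a minimal sketch of the segmentation step just described, the following assumes a single sensor channel and places segment borders where the central difference quotient of the signal changes sign; the function name and the toy signal are illustrative.

```python
import numpy as np

def segment_borders(signal):
    """Return approximate border indices where the central difference quotient
    of the signal changes sign (i.e., near local extrema)."""
    d = (signal[2:] - signal[:-2]) / 2.0              # central difference quotient
    sign_change = np.signbit(d[:-1]) != np.signbit(d[1:])
    return np.where(sign_change)[0] + 1               # shift back to signal indices

x = np.array([0.0, 1.0, 2.0, 1.5, 0.5, 1.0, 2.0, 3.0])
print(segment_borders(x))                             # borders at the local extrema
```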
In this thesis, we show that our new activity recognition strategy, built on invariant
sub-activities, is beneficial. We tested it on three human activity datasets with wearable
inertial measurement units (IMU). Compared to previous publications on the same datasets, we obtained improvements in the activity recognition for several classes, some by a large margin. Our segmentation provides a sensible way to separate the sensor data in relation to the underlying activities. Relying on sub-activities makes us independent of
imprecise labels on the training data. After the identification of invariant sub-activities,
we calculate a value called cluster precision for each sensor signal and each class activity.
This tells us which classes can be easily classified and which sensor channels support
the classification best. Finally, in the training for each activity class, our algorithm selects
suitable signal channels with invariant sub-activities at different points in time and with different lengths. This makes our strategy a multi-dimensional asynchronous motif detection with variable motif length.
Study 1 (Chapter 2) is an empirical case study that concerns the nature of teaching–learning transactions that facilitate self-directed learning in vocational education and training of young adults in England. It addresses in part the concern that fostering the skills necessary for self-directed learning is an important endeavor of vocational education and training in many contexts internationally. However, there is a distinct lack of studies that investigate the extent to which facilitation of self-directed learning is present within vocational education and training in different contexts. An exploratory thematic qualitative analysis of inspectors’ comments within general Further Education college Ofsted inspection reports was conducted to investigate the balance of control of the learning process between teacher and learner within vocational education and training of young adults in England. A clear difference between outstanding and inadequate provision is reported. Inadequate provision was overwhelmingly teacher-directed. Outstanding provision reflected a collaborative relationship between teacher and learner in directing the learning process, despite the Ofsted framework not explicitly identifying the need for learner involvement in directing the learning process. The chapter offers insight into the understanding of how an effective balance of control of learning between teacher and learner may be realized in vocational education and training settings and highlights the need to consider the modulating role of contextual factors.
Following the further research directions outlined in Chapter 2, study 2 (Chapter 3) is a theoretical chapter that addresses the issue that fostering adult learners’ competence to adapt appropriately to our ever-changing world is a primary concern of adult education. The purpose of the chapter is novel and examines whether the consideration of modes of learning (instruction, performance, and inquiry) could assist in the design of adult education that facilitates self-directed learning and enables learners to think and perform adaptively. The concept of modes of learning originated from the typology of Houle (1980). However, to date, no study has reached beyond this typology, especially concerning the potential of using modes of learning in the design of adult education. Specifically, an apparent oversight in adult learning theory is the foremost importance of the consideration of whether inquiry is included in the learning process: its inclusion potentially differentiates the purpose of instruction, the nature of learners’ performance, and the underlying epistemological positioning. To redress this concern, two models of modes of learning are proposed and contrasted. The reinforcing model of modes of learning (instruction, performance, without inquiry) promotes teacher-directed learning. A key consequence of employing this model in adult education is that learners may become accustomed to habitually reinforcing patterns of perceiving, thinking, judging, feeling, and acting—performance that may be rather inflexible and represented by a distinct lack of a perceived need to adapt to social contextual changes: a lack of motivation for self-directed learning. Rather, the adapting model of modes of learning (instruction, performance, with inquiry) may facilitate learners to be adaptive in their performance—by encouraging an enhanced learner sensitivity toward changing social contextual conditions: potentially enhancing learners’ motivation for self-directed learning.
In line with the further research directions highlighted in Chapter 3, concerning the need to consider the nature and treatment of educational experiences that are conducive to learner growth and development, study 3 (Chapter 4) presents a systematic review of the experiential learning theory; a theory that perhaps cannot be uncoupled from self-directed learning theory, especially in regard to understanding the cognitive aspect of self-directed learning, which represents an important direction for further research on self-directed learning. D. A. Kolb’s (1984) experiential learning cycle is perhaps the most scholarly influential and cited model regarding experiential learning theory. However, a key issue in interpreting Kolb’s model concerns a lack of clarity regarding what constitutes a concrete experience, exactly. A systematic literature review was conducted in order to examine: what constitutes a concrete experience and what is the nature of treatment of a concrete experience in experiential learning? The analysis revealed five themes: learners are involved, active participants; knowledge is situated in place and time; learners are exposed to novel experiences, which involves risk; learning demands inquiry into specific real-world problems; and critical reflection acts as a mediator of meaningful learning. Accordingly, a revision to Kolb’s model is proposed: experiential learning consists of contextually rich concrete experience, critical reflective observation, contextual-specific abstract conceptualization, and pragmatic active experimentation. Further empirical studies are required to test the model proposed. Finally, in Chapter 5 key findings of the studies are summarized, including that the models proposed in Chapters 3 and 4 (Figures 2 and 4, respectively) may be important considerations for further research on self-directed learning.
In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that, we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to that in continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation, we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. As the time discretization becomes increasingly fine, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved using a viscosity solution method.
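Schematically, and with notation that is an assumption and may differ from the thesis, the worst-case problem treated here takes a max-min form over portfolio strategies and crash scenarios:

```latex
\[
  \sup_{\pi}\; \inf_{(\tau,\,k)}\; \mathbb{E}\!\left[\, U\bigl(X^{\pi}_{T}(\tau, k)\bigr) \right],
\]
```

where \(\pi\) ranges over admissible portfolio strategies, \(\tau\) over possible crash times, \(k\) over crash sizes up to a known bound, and \(X^{\pi}_{T}(\tau, k)\) denotes the terminal wealth if a crash of size \(k\) occurs at time \(\tau\).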
Graphs and flow networks are important mathematical concepts that enable the modeling and analysis of a large variety of real-world problems in different domains such as engineering, medicine or computer science. The number, sizes and complexities of those problems have increased steadily over the last decades. This has led to an increased demand for techniques that help domain experts understand their data and its underlying structure to enable an efficient analysis and decision-making process.
To tackle this challenge, this work presents several new techniques that utilize concepts of visual analysis to provide domain scientists with new visualization methodologies and tools. Therefore, this work provides novel concepts and approaches for diverse aspects of the visual analysis such as data transformation, visual mapping, parameter refinement and analysis, model building and visualization as well as user interaction.
The presented techniques form a framework that provides domain scientists with new visual analysis tools and helps them analyze their data and gain insight from the underlying structures. To show the applicability and effectiveness of the presented approaches, this work tackles different applications such as networking, product flow management and vascular systems, while preserving the generality to be applicable to further domains.
The simulation of physical phenomena involving the dynamic behavior of fluids and gases
has numerous applications in various fields of science and engineering. Of particular interest
is the material transport behavior, the tendency of a flow field to displace parts of the
medium. Therefore, many visualization techniques rely on particle trajectories.
Lagrangian Flow Field Representation. In typical Eulerian settings, trajectories are
computed from the simulation output using numerical integration schemes. Accuracy concerns
arise because, due to limitations of storage space and bandwidth, often only a fraction
of the computed simulation time steps are available. Prior work has shown empirically that
a Lagrangian, trajectory-based representation can improve accuracy [Agr+14]. Determining
the parameters of such a representation in advance is difficult; a relationship between the
temporal and spatial resolution and the accuracy of resulting trajectories needs to be established.
We provide an error measure that yields upper bounds on the error of individual trajectories.
We show how areas at risk for high errors can be identified, thereby making it possible to
prioritize areas in time and space to allocate scarce storage resources.
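A minimal sketch of the underlying accuracy question (illustrative only; the thesis' error measure is more refined): integrate the same particle once with many integration steps and once with far fewer, coarser steps as a stand-in for having stored only a fraction of the simulation time steps, and compare the end points.

```python
import numpy as np

def velocity(p, t):                      # analytic stand-in for simulation output
    x, y = p
    return np.array([-y, x]) * (1.0 + 0.1 * np.sin(t))

def advect(p0, t0, t1, n_steps):
    """Fourth-order Runge-Kutta integration of dp/dt = v(p, t)."""
    p, t = np.array(p0, float), t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(p + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(p + h * k3, t + h)
        p, t = p + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return p

reference = advect((1.0, 0.0), 0.0, 2 * np.pi, 1000)   # dense time sampling
coarse    = advect((1.0, 0.0), 0.0, 2 * np.pi, 50)     # temporally subsampled
print(np.linalg.norm(reference - coarse))              # end-point error proxy
```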
Comparative Visual Analysis of Flow Field Ensembles. Independent of the representation,
errors of the simulation itself are often caused by inaccurate initial conditions,
limitations of the chosen simulation model, and numerical errors. To gain a better understanding
of the possible outcomes, multiple simulation runs can be calculated, resulting in
sets of simulation output referred to as ensembles. Of particular interest when studying the
material transport behavior of ensembles is the identification of areas where the simulation
runs agree or disagree. We introduce and evaluate an interactive method that enables application
scientists to reliably identify and examine regions of agreement and disagreement,
while taking into account the local transport behavior within individual simulation runs.
Particle-Based Representation and Visualization of Uncertain Flow Data Sets. Unlike
simulation ensembles, where uncertainty of the solution appears in the form of different
simulation runs, moment-based Eulerian multi-phase fluid simulations are probabilistic in
nature. These simulations, used in process engineering to simulate the behavior of bubbles in
liquid media, are aimed toward reducing the need for real-world experiments. The locations
of individual bubbles are not modeled explicitly, but stochastically through the properties of
locally defined bubble populations. Comparisons between simulation results and physical
experiments are difficult. We describe and analyze an approach that generates representative
sets of bubbles for moment-based simulation data. Using our approach, application scientists
can directly, visually compare simulation results and physical experiments.
Economics of Downside Risk
(2019)
Ever since the establishment of portfolio selection theory by Markowitz (1952), the use of the standard deviation as a measure of risk has been heavily criticized. The aim of this thesis is to refine classical portfolio selection and asset pricing theory by using a downside deviation risk measure. It is defined as the below-target semideviation and referred to as downside risk.
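In a hedged formalisation of the definition just given (the thesis' exact normalisation may differ), with \(\tau\) the prescribed target and \(X\) the payoff, the downside risk reads

```latex
\[
  d_\tau(X) \;=\; \Bigl(\mathbb{E}\bigl[\bigl((\tau - X)^{+}\bigr)^{2}\bigr]\Bigr)^{1/2},
  \qquad (\tau - X)^{+} = \max(\tau - X,\, 0).
\]
```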
Downside efficient portfolios maximize expected payoff given a prescribed upper bound for downside risk and, thus, are analogs to mean-variance efficient portfolios in the sense of Markowitz. The present thesis provides an alternative proof of existence of downside efficient portfolios and identifies a sufficient criterion for their uniqueness. A specific representation of their form brings structural similarity to mean-variance efficient portfolios to light. Eventually, a separation theorem for the existence and uniqueness of portfolios that maximize the trade-off between downside risk and return is established.
The notion of a downside risk asset market equilibrium (DRAME) in an asset market with finitely many investors is introduced. This thesis addresses the existence and uniqueness problem of such equilibria and specifies a DRAME pricing formula. In contrast to prices obtained from the mean-variance CAPM pricing formula, DRAME prices are arbitrage-free and strictly positive.
The final part of this thesis addresses practical issues. An algorithm that allows for an effective computation of downside efficient portfolios from simulated or historical financial data is outlined. In a simulation study, it is revealed in which scenarios downside efficient portfolios
outperform mean-variance efficient portfolios.
Carotenoids are organic lipophilic tetraterpenes ubiquitously present in Nature and found across the three domains of life (Archaea, Bacteria and Eukaryotes). Their structure is characterized by an extensive conjugated double-bond system, which serves as a light-absorbing chromophore, hence determining its colour, and enables carotenoids to absorb energy from other molecules and to act as antioxidant agents. Humans obtain carotenoids mainly via the consumption of fruits and vegetables, and to a smaller extent from other food sources such as fish and eggs. The concentration of carotenoids in the human plasma and tissues has been positively associated with a lower incidence of several chronic diseases including, cancer, diabetes, macular degeneration and cardiovascular conditions, likely due to their antioxidant properties. However, an important aspect of carotenoids, namely β- and α-carotene and β-cryptoxanthin, in human health and development, is their potential to be converted by the body into Vitamin A.
Yet, bioavailability of carotenoids is relatively low (< 30%) and dependent, among others, on dietary factors, such as amount and type of dietary lipids and the presence of dietary fibres. One dietary factor that has been found to negatively impact carotenoid bioaccessibility and cellular uptake in vitro is high concentrations of divalent cations during simulated gastro-intestinal digestion. Nevertheless, the mechanism of action of divalent cations remains unclear. The goal of this thesis was to better understand how divalent cations act during digestion and modulate carotenoid bioavailability. In vitro trials of simulated gastro-intestinal digestion and cellular uptake were run to investigate how varying concentrations of calcium, magnesium and zinc affected the bioaccessibility of both pure carotenoids and carotenoids from food matrices. In order to validate or refute results obtained in vitro, a randomized and double blinded placebo controlled cross-over postprandial trial (24 male participants) was carried out, testing the effect of 3 supplementary calcium doses (0 mg, 500 mg and 1000 mg) on the bioavailability of carotenoids from a spinach based meal. In vitro trials showed that addition of the divalent cations significantly decreased the bioaccessibility of both pure carotenoids (P < 0.001) and those from food matrices (P < 0.01). This effect was dependent on the type of mineral and its concentration. Strongest effects were seen for increasing concentrations of calcium followed by magnesium and zinc. The addition of divalent cations also altered the physico-chemical properties, i.e. viscosity and surface tension, of the digestas. However, the extent of this effect varied according to the type of matrix. The effects on bioaccessibility and physico-chemical properties were accompanied by variations of the zeta-potential of the particles in solution. Taken together, results from the in vitro trials strongly suggested that divalent cations were able to bind bile salts and other surfactant agents, affecting their solubility. The observed i) decrease in macroviscosity, ii) increase in surface tension, and the iii) reduction of the zeta-potential of the digesta, confirmed the removal of surfactant agents from the system, most likely due to precipitation as a result of the lower solubility of the mineral-surfactant complexes. As such, micellarization of carotenoids was hindered, explaining their reduced bioaccessibility. As for the human trial, results showed that there was no significant influence of supplementation with either 500 or 1000 mg of supplemental calcium (in form of carbonate) on the bioavailability of a spinach based meal, as measured by the area-under curve of carotenoid concentrations in the plasma-triacylglycerol rich fraction, suggesting that the in vitro results are not supported in such an in vivo scenario, which may be explained by the initial low bioaccessibility of spinach carotenoids and the dissolution kinetics of the calcium pills. Further investigations are necessary to understand how divalent cations act during in vivo digestion and potentially interact with lipophilic nutrients and food constituents.
The fact that long fibre reinforced thermoplastic composites (LFT) have higher tensile strength, modulus and even toughness compared to short fibre reinforced thermoplastics with the same fibre loading has been well documented in the literature. These are the underlying factors that have made LFT materials one of the most rapidly growing sectors of the plastics industry. New developments in the manufacturing of LFT composites have led to improvements in mechanical properties and to a price reduction, which has made these materials an attractive choice as a replacement for metals in automobile parts and other similar applications. However, there are still several open scientific questions concerning the material selection that leads to the optimal property combinations. The present work is an attempt to clarify some of these questions. The target was to develop tools that can be used to modify, or to “tailor”, the properties of LFT composite materials according to the requirements of automotive and other applications.
The present study consisted of three separate case studies, focusing on the current
scientific issues on LFT material systems. The first part of this work was focused on
LGF reinforced thermoplastic styrenic resins. The target was to find suitable maleic
acid anhydride (MAH) based coupling agents in order to improve the fibre-matrix
interfacial strength, and, in this way, to develop an LGF concentrate suitable for
thermoplastic styrenic resins. It was shown that the mechanical properties of LGF
reinforced “styrenics” were considerably improved when a small amount of MAH
functionalised polymer was added to the matrix. This could be explained by the better fibre-matrix adhesion, revealed by scanning electron microscopy of fracture surfaces.
A novel LGF concentrate concept showed that one particular base material can be
used to produce parts with different mechanical and thermal properties by diluting the
fibre content with different types of thermoplastic styrenic resins. Therefore, this
concept allows a flexible production of parts, and it can be used in the manufacturing
of interior parts for automobile components.
The second material system dealt with so-called hybrid composites, consisting of
long glass fibre reinforced polypropylene (LGF-PP) and mineral fillers like calcium
carbonate and talcum. The aim was to get more information about the fracture
behaviour of such hybrid composites under tensile and impact loading, and to
observe the influence of the fillers on the properties. It was found that, in general, the addition of fillers to LGF-PP increased the stiffness, but the strength and fracture toughness decreased. However, calcium carbonate and talcum fillers resulted in different mechanical properties when added to LGF-PP: better mechanical properties were achieved by using talcum compared to calcium carbonate. This
phenomenon could be explained by the different nucleation effect of these fillers,
which resulted in a different crystalline morphology of polypropylene, and by the
particle orientation during the processing when talc was used. Furthermore, the
acoustic emission study revealed that the fracture mode of LGF-PP changed when
calcium carbonate was added. The characteristic acoustic signals revealed that the addition of filler led to fibre debonding at an earlier stage of the fracture sequence when compared to unfilled LGF-PP.
In the third material system, the target was to develop a novel long glass fibre
reinforced composite material based on the blend of polyamide with thermoset
resins. In this study a blend of polyamide-66 (PA66) and phenol formaldehyde resin
(PFR) was used. The chemical structure of the PA66-PFR resin was analysed by
using small molecular weight analogues corresponding to PA66 and PFR
components, as well as by carrying out experiments using the macromolecular
system. Theoretical calculations and experiments showed that there exists a strong
hydrogen bonding between the carboxylic groups of PA66 and the hydroxylic groups
of PFR, exceeding even the strength of amide-water hydrogen bonds. This was
shown to lead to miscible blends when PFR was not crosslinked. It was also found that the morphology of such thermoplastic-thermoset blends can be controlled by altering the ratio of the blend components (PA66, PFR and crosslinking agent). In the next phase, PA66-PFR blends were reinforced by long glass fibres. The studies showed that the water absorption of the blend samples was considerably decreased, which was also reflected in higher mechanical properties in the equilibrium state.
As can be seen from numerous investigations and application examples, long fibre reinforced thermoplastics (LFT) possess better tensile strength as well as flexural and impact toughness compared to short fibre reinforced thermoplastics. These advantages in the mechanical properties have made LFT a rapidly growing sector of the plastics industry. New developments in the manufacturing of LFT have provided additional improvements in mechanical properties as well as a price reduction of the materials in recent years, which makes LFT an attractive choice, among other things, as a replacement for metals in automobile parts. However, some open scientific questions still remain with regard to, e.g., the material composition required to achieve optimal property combinations. The present work attempts to answer some of these questions. The aim was to develop procedures with which the properties of LFT can be specifically influenced and thus adapted or "tailored" to the requirements of automobiles or other applications.
The present work consists of three parts, which concentrate on different material systems adapted to the current needs and interests of industry.
The first part of the work addresses the property optimization of long glass fibre (LGF) reinforced thermoplastic styrene copolymers and of blends of these materials. Suitable coupling agents based on maleic anhydride (MAH) were identified in order to optimize the fibre-matrix adhesion. Furthermore, an LGF concentrate was developed which is compatible with various thermoplastic styrene copolymers and can thus be used as a "reinforcing additive".
The concept for a new LGF concentrate based on this compatible material system focuses in particular on providing one base material for the production of parts, with which different mechanical and thermomechanical properties can be specifically improved by blending in different styrene copolymers and blends. This concept enables a very flexible production of parts and will find its application in the manufacturing of parts, among others, in automobile interiors.
The second material system is based on so-called hybrid composites, which are composed of long glass fibres and mineral fillers such as calcium carbonate and talcum in a polypropylene (PP) matrix. The aim was to obtain detailed information about the fracture behaviour of these hybrid composites under tensile and impact loading by means of fracture mechanics analyses, and then to document the differences between the various fillers with respect to their properties. It could be observed that the addition of the fillers to LGF-PP usually further improved the stiffness, whereas the strength and impact toughness decreased. Furthermore, the different fillers, calcium carbonate and talcum, led to different mechanical properties when used together with LGF reinforcement: with the addition of talcum, among other things, a considerably better impact toughness was found than with the addition of calcium carbonate. This phenomenon could be explained by the different nucleation behaviour of the PP, which resulted in a different crystalline morphology of the polypropylene. Furthermore, measurements of the acoustic emissions during tensile loading of a fracture mechanics specimen showed that the higher fracture toughness of unfilled LGF-PP results from fibre pull-out already occurring at lower forces.
Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly observed in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than the common magnetostrictive materials. These materials have a wide range of application areas: they are used as variable-stiffness devices, as sensors and actuators in mechanical systems or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
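For orientation, the generic structure of a saddle point problem with a penalty term, as referred to above, can be written in block form (a schematic illustration of this problem class, not the exact system derived in the thesis):
\[
\begin{pmatrix} A & B^{T} \\ B & -\varepsilon C \end{pmatrix}
\begin{pmatrix} u \\ \varphi \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]
where unique solvability hinges on the coercivity of the upper-left block on the kernel of \(B\) and on an inf-sup condition of the form
\[
\inf_{0 \neq q}\ \sup_{0 \neq v} \frac{b(v,q)}{\|v\|\,\|q\|} \;\geq\; \beta \;>\; 0
\]
in both the continuous and the discrete setting.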
The presented models are supplemented with numerical simulations carried out with MATLAB for different examples including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in the state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.
Materials in general can be divided into insulators, semiconductors and conductors,
depending on their degree of electrical conductivity. Polymers are classified as
electrically insulating materials, having electrical conductivity values lower than 10⁻¹² S/cm. Due to their favourable characteristics, e.g. their good physical characteristics,
their low density, which results in weight reduction, etc., polymers are also
considered for applications where a certain degree of conductivity is required. The
main aim of this study was to develop electrically conductive composite materials
based on epoxy (EP) matrix, and to study their thermal, electrical, and mechanical
properties. The target values of electrical conductivity were mainly in the range of
electrostatic discharge protection (ESD, 10⁻⁹–10⁻⁶ S/cm).
Carbon fibres (CF) were the first type of conductive filler used. It was established that
there is a significant influence of the fibre aspect ratio on the electrical properties of
the fabricated composite materials. With longer CF the percolation threshold value
could be achieved at lower concentrations. Additional to the homogeneous CF/EP
composites, graded samples were also developed. By the use of a centrifugation
method, the CF created a graded distribution along one dimension of the samples.
The effect of the different processing parameters on the resulting graded structures
and consequently on their gradients in the electrical and mechanical properties were
systematically studied.
An intrinsically conductive polyaniline (PANI) salt was also used for enhancing the
electrical properties of the EP. In this case, a much lower percolation threshold was
observed compared to that of CF. PANI was found to have, up to a particular
concentration, a minimal influence on the thermal and mechanical properties of the
EP system.
Furthermore, the two above-mentioned conductive fillers were jointly added to the EP
matrix. Improved electrical and mechanical properties were observed by this
incorporation. A synergy effect between the two fillers took place regarding the
electrical conductivity of the composites.
The last part of this work dealt with the application of existing theoretical models for the prediction of the electrical conductivity of the developed polymer composites. A good correlation between the simulations and the experiments was
observed.
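For illustration, the classical statistical percolation scaling law, one of the standard relations used for such conductivity predictions (whether it is among the specific models applied in this work is not stated here), reads
\[
\sigma(\phi) \;=\; \sigma_0\,(\phi - \phi_c)^{t} \qquad \text{for } \phi > \phi_c,
\]
where \(\phi\) is the filler volume fraction, \(\phi_c\) the percolation threshold and \(t\) a critical exponent, typically around 1.6 to 2 for three-dimensional systems.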
In general, materials are classified with respect to their electrical conductivity into insulators, semiconductors, or conductors. With an electrical conductivity lower than 10⁻¹² S/cm, polymers belong to the group of insulators. Due to advantageous properties of polymers, such as their good physical characteristics, their low density, which contributes to weight reduction, etc., polymers are also considered for applications in which a certain degree of conductivity is required. The main goal of this study was to develop electrically conductive composite materials based on epoxy resin (EP) and to study their electrical, mechanical, and thermal properties. The target values of the electrical conductivity were mainly in the range of electrostatic discharge protection (ESD, 10⁻⁹–10⁻⁶ S/cm).
For the production of electrically conductive plastics, carbon fibres (CF) were first used as conductive fillers. In the experiments carried out, it could be observed that the fibre aspect ratio has a significant influence on the electrical properties of the fabricated composites. With longer CF, the percolation threshold was already reached at a lower concentration. In addition to the homogeneous CF/EP composites, graded materials were also developed. By means of centrifugation, a graded distribution of the CF along the longitudinal axis of the specimens could be achieved. The effects of the different centrifugation parameters on the resulting graded materials and on the resulting graded electrical and mechanical properties were systematically studied.
An intrinsically conductive polyaniline salt (PANI) was also used to enhance the electrical properties of the EP. In this case, a much lower percolation threshold was observed compared to that of CF. Up to a certain concentration, the use of PANI has only a minimal influence on the thermal and mechanical properties of the EP system.
In a third step, the two conductive fillers mentioned above were jointly added to the EP matrix. Improved electrical and mechanical properties were observed in this case, with a synergy effect between the two fillers with respect to the electrical conductivity of the composites.
In the last part of this work, theoretical models were applied to predict the electrical conductivity of the developed composites. A good agreement with the experimental results was found.
This thesis addresses several challenges for sustainable logistics operations and investigates (1) the integration of intermediate stops in the route planning of transportation vehicles, which especially becomes relevant when alternative-fuel vehicles with limited driving range or a sparse refueling infrastructure are considered, (2) the combined planning of the battery replacement infrastructure and of the routing for battery electric vehicles, (3) the use of mobile load replenishment or refueling possibilities in environments where the respective infrastructure is not available, and (4) the additional consideration of the flow of goods from the end user in backward direction to the point of origin for the purpose of, e.g., recapturing value or proper disposal. We utilize models and solution methods from the domain of operations research to gain insights into the investigated problems and thus to support managerial decisions with respect to these issues.
The demand for sustainability is continuously increasing. Therefore, thermoplastic composites have become a focus of research due to their good weight-to-performance ratio. Nevertheless, the limiting factor of their usage for some processes is the loss of
consolidation during re-melting (deconsolidation), which reduces the part quality.
Several studies dealing with deconsolidation are available. These studies each investigate a single material and process, which limits their usefulness in terms of general interpretations as well as their comparability to other studies. There are two main approaches. The first approach identifies the internal void pressure as the main cause of deconsolidation, and the second approach identifies the fiber reinforcement network as the main cause. Due to their controversial results and the limited variety of materials and processes, there is a great need for a more comprehensive investigation covering several materials and processes.
This study investigates the deconsolidation behavior of 17 different materials and
material configurations considering commodity, engineering, and performance
polymers as well as a carbon and two glass fiber fabrics. Based on the first law of
thermodynamics, a deconsolidation model is proposed and verified by experiments.
Universally applicable input parameters are proposed for the prediction of
deconsolidation to minimize the required input measurements. The study revealed
that the fiber reinforcement network is the main cause of deconsolidation, especially
for fiber volume fractions higher than 48 %. The internal void pressure can promote
deconsolidation when the specimen has been recently manufactured. In other cases, both the internal void pressure and the surface tension prevent deconsolidation.
During deconsolidation the polymer is displaced by the volume increase of the void.
The polymer flow damps the progress of deconsolidation because of the internal
friction of the polymer. The crystallinity and the thermal expansion lead to a
reversible thickness increase during deconsolidation. Moisture can strongly accelerate deconsolidation and can increase the thickness several-fold because of the
vaporization of water. The model is also capable of predicting reconsolidation under the defined boundary conditions of pressure, time, and specimen size. For high pressures, matrix squeeze-out occurs, which compromises the accuracy of the model.
The proposed model was applied to thermoforming, induction welding, and
thermoplastic tape placement. It is demonstrated that the load rate during
thermoforming is the critical factor for achieving complete reconsolidation. The
required load rate can be determined by the model and is dependent on the cooling
rate, the forming length, the extent of deconsolidation, the processing temperature,
and the final pressure. During induction welding, deconsolidation can occur to a large extent because of the residual moisture in the polymer in the molten state. The moisture cannot fully diffuse out of the specimen during the fast heating. Therefore, more pressure is needed for complete reconsolidation than would be required for a dry
specimen. Deconsolidation is an issue for thermoplastic tape placement, too. It limits
the placement velocity because of insufficient cooling after compaction. If the
specimen after compaction is locally in a molten state, it deconsolidates and causes residual stresses in the bond line, which decrease the interlaminar shear strength. It can be concluded that the study provides new knowledge and helps to optimize these processes by means of the developed model without requiring a large number of measurements.
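To make the first-law bookkeeping behind such a model tangible, the mechanisms named above can be arranged schematically as an energy balance for the thickness increase (an illustrative sketch only, not the actual model equations of the thesis):
\[
\Delta E_{\text{fibre network}} \;=\; W_{\text{void expansion}} \;+\; \Delta E_{\text{surface}} \;+\; E_{\text{viscous flow}},
\]
i.e. the elastic energy released by the decompacting fibre reinforcement drives void growth against the gas and ambient pressure, creates new void surface and is partly dissipated by the viscous flow of the displaced polymer.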
Due to its good specific strength and stiffness, continuous fibre reinforced thermoplastic is an excellent lightweight construction material. However, deconsolidation during re-melting can lead to a loss of the good mechanical properties, which is why deconsolidation is undesirable. Deconsolidation has been investigated in many studies with differing results, mostly considering a single material and a single process. A general interpretation and the comparability between these studies are therefore only possible to a limited extent. Two approaches are known from the literature. The first approach regards the pressure difference between the internal void pressure and the ambient pressure as the main cause of deconsolidation. In the second approach, the fibre reinforcement is identified as the main cause. Because of the controversial results and the limited number of materials and processing methods, there is a need for a comprehensive investigation across several materials and processes. This study covers three polymers (polypropylene, polycarbonate, and polyphenylene sulfide), three fabrics (twill weave, satin weave, and unidirectional) and two processes (autoclave and hot pressing) at different fibre volume fractions.
The influence of the void content on the interlaminar shear strength was investigated. It is known from the literature that the interlaminar shear strength decreases linearly with increasing void content. This could be confirmed for deconsolidation. The reduction of the interlaminar shear strength for thermoplastic matrices is smaller than for thermoset matrices and lies in the range of 0.5 % to 1.5 % per percent void content. Moreover, the decrease depends significantly on the matrix polymer.
In the case of thermally induced deconsolidation, the void content increases proportionally to the thickness of the specimen and is a measure of deconsolidation. The void expands due to thermal gas expansion and can also be forced to expand by external forces, which leads to a negative pressure inside the void. The fibre reinforcement is the main cause of the thickness increase and thus of deconsolidation. The energy stored during compaction is released during deconsolidation. The decompaction pressure ranges from 0.02 MPa to 0.15 MPa for the investigated fabrics and fibre volume fractions. The surface tension hinders void expansion because the surface has to be enlarged, which requires additional energy. When neighbouring voids come into contact, the surface tension causes them to merge; energy is released through the more favourable volume-to-surface ratio. The polymer flow slows down the development of the thickness increase because of the energy required (internal friction) for the viscous flow. The higher the temperature, the lower the viscosity of the polymer, so that less energy is needed for further void growth. Through the reversible influence of the crystallinity and the thermal expansion of the composite, the thickness increases during heating and decreases again during cooling. Moisture can have an enormous influence on deconsolidation. If moisture is still present in the composite above the melting temperature, it evaporates and can increase the thickness to a multiple of the original thickness.
The deconsolidation model is able to predict reconsolidation. However, the reconsolidation pressure must stay below a limit value (0.15 MPa for 50x50 mm² and 1.5 MPa for 500x500 mm² specimens), since otherwise more than 2 % of the polymer flows out of the specimen. Reconsolidation is an inverse deconsolidation and exhibits the same mechanisms in the opposite direction.
The developed model is based on the first law of thermodynamics and can predict the thickness during deconsolidation and reconsolidation. A homogeneous void distribution and a uniform, spherical void size were assumed, as well as conservation of mass. To reduce the effort for determining the input quantities, universally valid input parameters were determined that hold for a large number of configurations. The material behavior simulated with these universally valid input parameters showed, within the defined restrictions, good agreement with the actual material behavior. Only for configurations with a viscosity difference of more than 30 % between the melting temperature and the processing temperature are the universally valid input parameters not applicable. To demonstrate the relevance for industry, the effects of deconsolidation were simulated for three further processes. It was shown that the rate of force increase during thermoforming is a key factor for complete reconsolidation. If the force is applied too slowly or the final force is too low, the specimen has already solidified before complete consolidation can be achieved. Deconsolidation can also occur during induction welding. In particular, moisture can lead to a strong increase of deconsolidation, caused by the very fast heating rates of more than 100 K/min. The moisture cannot fully diffuse out of the polymer during the short heating phase, so that it evaporates in the specimen when the melting temperature is reached. In tape placement, the placement velocity is limited by deconsolidation. After an apparently complete consolidation under the roller, the specimen can deconsolidate locally if the polymer beneath the surface is still molten. The resulting voids drastically reduce the interlaminar shear strength, by 5.8 % per percent void content for the investigated case. The cause is the crystallization in the bonding zone, which generates residual stresses that are of the same order of magnitude as the actual shear strength.
The focus of this work is to provide and evaluate a novel method for multifield topology-based analysis and visualization. Through this concept, called Pareto sets, one is able to identify critical regions in a multifield with arbitrarily many individual fields. It uses ideas found in graph optimization to find common behavior and areas of divergence between multiple optimization objectives. The connections between the latter areas can be reduced into a graph structure allowing for an abstract visualization of the multifield to support data exploration and understanding.
The research question answered in this dissertation concerns the general capability and expandability of the Pareto set concept in the context of visualization and application, as well as the study of its relations, drawbacks and advantages with respect to other topology-based approaches. These questions are answered in several steps, including consideration and comparison with related work, a thorough introduction of the Pareto set itself as well as a framework for efficient implementation and an attached discussion regarding limitations of the concept and their implications for run time, suitable data, and possible improvements.
Furthermore, this work considers possible simplification approaches like integrated single-field simplification methods but also using common structures identified through the Pareto set concept to smooth all individual fields at once. These considerations are especially important for real-world scenarios to visualize highly complex data by removing small local structures without destroying information about larger, global trends.
To further emphasize possible improvements and expandability of the Pareto set concept, the thesis studies a variety of different real-world applications. For each scenario, this work shows how the definition and visualization of the Pareto set are used and improved for data exploration and analysis based on the scenarios.
In summary, this dissertation provides a complete and sound summary of the Pareto set concept as ground work for future application of multifield data analysis. The possible scenarios include those presented in the application section, but are found in a wide range of research and industrial areas relying on uncertainty analysis, time-varying data, and ensembles of data sets in general.
Novel image processing techniques have been in development for decades, but most
of these techniques are barely used in real world applications. This results in a gap
between image processing research and real-world applications; this thesis aims to
close this gap. In an initial study, the quantification, propagation, and communication
of uncertainty were determined to be key features in gaining acceptance for
new image processing techniques in applications.
This thesis presents a holistic approach based on a novel image processing pipeline,
capable of quantifying, propagating, and communicating image uncertainty. This
work provides an improved image data transformation paradigm, extending image
data using a flexible, high-dimensional uncertainty model. Based on this, a completely
redesigned image processing pipeline is presented. In this pipeline, each
step respects and preserves the underlying image uncertainty, allowing image uncertainty
quantification, image pre-processing, image segmentation, and geometry
extraction. This is communicated by utilizing meaningful visualization methodologies
throughout each computational step.
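As a minimal sketch of one building block of such a pipeline, per-pixel uncertainty can be propagated through a linear filtering step by assuming independent pixel noise, so that for \(y = \sum_i w_i x_i\) the variance becomes \(\sum_i w_i^2 \operatorname{Var}(x_i)\); the function and variable names below are illustrative and not taken from the thesis:

    import numpy as np
    from scipy.ndimage import convolve

    def propagate_through_linear_filter(mean, variance, kernel):
        # Propagate pixel-wise mean and variance through a linear filter,
        # assuming independent per-pixel noise.
        filtered_mean = convolve(mean, kernel, mode="nearest")
        filtered_var = convolve(variance, kernel ** 2, mode="nearest")
        return filtered_mean, filtered_var

    # toy usage: a noisy image with constant per-pixel variance
    image = np.random.rand(64, 64)
    variance = np.full_like(image, 0.01)
    box = np.full((3, 3), 1.0 / 9.0)   # simple smoothing kernel
    smoothed, smoothed_variance = propagate_through_linear_filter(image, variance, box)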
The presented methods are examined qualitatively by comparing them to the state of the art, in addition to user evaluations in different domains. To show the applicability of the presented approach to real-world scenarios, this thesis demonstrates domain-specific problems and the successful implementation of the presented techniques in these domains.
The focus of the present work lies on continuous fibre and long fibre reinforced thermoplastic materials. For this purpose, the "multilayered hybrid (MLH)" concept was developed and applied to two semi-finished products, the MLH-roving and the MLH-mat. The MLH-roving is a roving (consisting of continuous fibres) that is divided into several layers by thermoplastic films. The MLH-roving is produced by a novel spreading method with subsequent thermal fixing and final multiple folding. In this way, different fibre-matrix configurations can be realized. The MLH-mat is a glass mat reinforced thermoplastic material that is suitable for high fibre contents of up to 45 vol. % and different matrix polymers, e.g. polypropylene (PP) and polyamide 6 (PA6). It is characterized by a high homogeneity in areal density and in fibre orientation. The crash behavior and performance were investigated by dynamic crash tests with specimens based on the MLH-roving and the MLH-mat. The results of the crash specimens based on long fibre reinforced material (MLH-mat) and continuous fibre reinforced material (MLH-roving) were comparable. The PA6 grades showed a better crash performance than the PP grades.
The present work deals with continuous fiber- and long fiber reinforced thermoplastic
materials. The concept of multilayered hybrid (MLH) structure was developed and
applied to the so-called MLH-roving and MLH-mat. The MLH-roving is a continuous
fiber roving separated evenly into several sublayers by thermoplastic films, through
the sequential processes of spreading with a newly derived equation, thermal fixing,
and folding. The aim was to cover a variety of material configurations as well as a variety of intermediate product forms. The MLH-mat is a glass mat reinforced thermoplastic
(GMT)-like material that is suitable for high fiber contents up to 45 vol. % and various
matrix polymers, e.g. polypropylene (PP) and polyamide 6 (PA6). It showed homogeneity in areal density, a random directional fiber distribution, and the reheating stability required for the molding process. For the MLH-roving and MLH-mat materials, the crash behavior and performance were investigated by dynamic crash tests. Long fiber reinforced materials (MLH-mat) were equivalent to continuous fiber reinforced materials (MLH-roving), and PA6 grades showed higher crash performance than PP grades.
The gas phase infrared and fragmentation spectra of a systematic group of trimetallic oxo-centered
transition metal complexes are shown and discussed, with formate and acetate bridging ligands and
pyridine and water as axial ligands.
The stability of the complexes, as predicted by appropriate ab initio simulations, is demonstrated to
agree with collision induced dissociation (CID) measurements.
A broad range of DFT calculations are shown. They are used to simulate the geometry, the bonding
situation, relative stability and flexibility of the discussed complexes, and to specify the observed
trends. These simulations correctly predict the trends in the band splitting of the symmetric and
asymmetric carboxylate stretch modes, but fail to account for anharmonic effects observed specifically
in the mid IR range.
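For context, the band splitting referred to here is commonly quantified as
\[
\Delta\nu \;=\; \nu_{\mathrm{as}}(\mathrm{COO^-}) - \nu_{\mathrm{s}}(\mathrm{COO^-}),
\]
a quantity that is widely used as an empirical indicator of the carboxylate coordination mode (monodentate, chelating or bridging); this general relation is textbook knowledge and not a specific result of the work summarized here.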
The infrared spectra of the different ligands are introduced in a brief literature review. Their changes
in different environments or different bonding situations are discussed and visualized, especially the
interplay between fundamental-, overtone-, and combination bands, as well as Fermi resonances
between them.
A new variation on the infrared multi photon dissociation (IRMPD) spectroscopy method is proposed
and evaluated. In addition to the commonly considered total fragment yield, the cumulative fragment
yield can be used to plot the wavelength dependent relative abundance of different fragmentation
products. This is shown to include valuable additional information on the excited chromophores, and
their coupling to specific fragmentation channels.
High quality homo- and heterometallic IRMPD spectra of oxo centered carboxylate complexes of
chromium and iron show the impacts of the influencing factors: the metal centers, the bridging ligands,
their carboxylate stretch modes and CH bend modes, and the terminal ligands.
In all four formate spectra, anharmonic effects are necessary to explain the observed bands:
combination bands of both carboxylate stretch modes and a Fermi resonance of the fundamental of
the CH stretch mode, and a combination band of the asymmetric carboxylate stretch mode with the
CH bend mode of the formate bridging ligand.
For the water adduct species, partial hydrolysis is proposed to account for the changes in the observed
carboxylic stretch modes.
Appropriate experiments are suggested to verify the mode assignments that are not directly explained
by the ab initio calculations, the available experimental results or other means like deuteration
experiments.
Destructive diseases of the lung like lung cancer or fibrosis are still often lethal. Also, in the case of fibrosis of the liver, the only possible cure is transplantation.
In this thesis, we investigate 3D micro computed synchrotron radiation (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resecting part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options in case of diseases like lung cancer.
In the case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be assessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well-known from material science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful to characterize fibrosis based on the deformation of capillary vessels.
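As a sketch of how one of these summary statistics can be estimated from a segmented 3D image, the spherical contact distribution can be approximated from the Euclidean distance transform of the background; the code below is an illustrative simplification and not the implementation used in the thesis:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def spherical_contact_distribution(vessels, radii):
        # Estimate H(r) = P(distance from a random background voxel to the
        # vessel structure is at most r) from a binary 3D mask (True = vessel).
        dist = distance_transform_edt(~vessels)   # distance to nearest vessel voxel
        background = dist[~vessels]
        return np.array([(background <= r).mean() for r in radii])

    # toy usage on a random mask
    mask = np.random.rand(50, 50, 50) < 0.05
    H = spherical_contact_distribution(mask, radii=np.arange(1, 11))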
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth. Hence, for all three applications, this is of great interest. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of raw image data, our algorithm works automatically and allows for a quantitative evaluation of a large amount of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
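For readers outside the field, the phrase "smallest possible numerical invariants" can be made explicit: a numerical Godeaux surface \(X\) is a minimal surface of general type with
\[
K_X^2 = 1, \qquad p_g(X) = q(X) = 0, \qquad \chi(\mathcal{O}_X) = 1.
\]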
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies in marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group and whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
The growing computational power enables the application of the Population Balance Equation (PBE) to model the steady state and dynamic behavior of multiphase flow unit operations. Accordingly, the two-phase flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These
factors include: interactions among droplets (breakage and coalescence), different time scales due to the
size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer
process. As a result of this, the general PBE has no well-known analytical solution, and therefore robust numerical solution methods with low computational cost are highly desirable.
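For orientation, a generic spatially distributed population balance equation for the droplet number density \(n = n(v, c, x, t)\), with droplet volume \(v\) and solute concentration \(c\) as internal coordinates, reads in schematic textbook form (not the exact model formulated in this work)
\[
\frac{\partial n}{\partial t} \;+\; \nabla_x\!\cdot\!\big(\mathbf{u}\,n\big) \;+\; \frac{\partial}{\partial c}\big(\dot{c}\,n\big) \;=\; \Gamma_{\mathrm{breakage}} \;+\; \Gamma_{\mathrm{coalescence}},
\]
where the source terms on the right-hand side collect droplet birth and death due to breakage and coalescence, and the convective term in \(c\) accounts for interphase mass transfer.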
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C.,
Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of
Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account the continuous flow
systems in spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed
nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer
behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady
state and dynamic simulation algorithms for hydrodynamics and mass transfer behavior of liquid-liquid
extraction equipment are developed and efficiently implemented. At the steady state modeling level, a
Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM, which is the simplest case of the SQMOM). On the other hand, the hydrodynamic integral source terms are closed
using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the
droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the
particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the
source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential
equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At
the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and
resolved using the finite volume method. To close the mathematical model, the required quadrature nodes
and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature
(TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary
differential equation system is obtained and solved. Both steady state and dynamic algorithms are
extensively validated at analytical, numerical, and experimental levels. At the numerical level, the
predictions of both algorithms are validated using the extended fixed pivot technique as implemented in
PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new
multivariate population balance environment for particulate system modeling and simulation. Procedia
Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used
to model the steady state hydrodynamics and physical and reactive mass transfer behavior of agitated
liquid-liquid extraction columns under different operating conditions. In this regard, both models are
found efficient and able to follow liquid extraction column behavior during column scale-up, where three
column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local
interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model
the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this,
the proposed coupled OPOSPM-CFD framework is extended to include the mass transfer. The
proposed framework is numerically tested and the results are compared with the published experimental
data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated
using PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very
good agreement is obtained at the experimental and the numerical validation levels.
The Symbol Grounding Problem (SGP) is one of the first attempts to propose a hypothesis about mapping abstract concepts to the real world. For example, the concept "ball" can be represented by an object with a round shape (visual modality) and the phonemes /b/ /a/ /l/ (audio modality).
This thesis is inspired by the association learning observed in infant development.
Newborns can associate the visual and audio modalities of the same concept that are presented at the same time, a mechanism relevant for the vocabulary acquisition task.
The goal of this thesis is to develop a novel framework that combines the constraints of the Symbol Grounding Problem and Neural Networks in a simplified scenario of association learning in infants. The first motivation is that the network output can be considered as numerical symbolic features because the attributes of the input samples are already embedded. The second motivation is that the association between two samples is predefined before training via the same vectorial representation. This thesis proposes to learn the association between two samples and the vectorial representation during training. Two scenarios are considered: sample pair association and sequence pair association.
Three main contributions are presented in this work.
The first contribution is a novel Symbolic Association Model based on two parallel MLPs.
The association task is defined by learning that two instances represent one concept.
Moreover, a novel training algorithm is defined by matching the output vectors of the MLPs with a statistical distribution for obtaining the relationship between concepts and vectorial representations.
The second contribution is a novel Symbolic Association Model based on two parallel LSTM networks that are trained on weakly labeled sequences.
The definition of the association task is extended to learning that two sequences represent the same series of concepts.
This model uses a training algorithm that is similar to the MLP-based approach.
The last contribution is a Classless Association.
The association task is defined by learning based on the relationship of two samples that represent the same unknown concept.
In summary, the contributions of this thesis extend Artificial Intelligence and Cognitive Computation research with a new, cognitively motivated constraint. Moreover, two training algorithms with this new constraint are proposed for two cases: single and sequence associations. In addition, a new training rule that requires no labels is proposed and shows promising results.
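A minimal numerical sketch of the sample-pair association idea, in which two modality-specific encoders are driven towards the same vectorial representation of a concept, is given below; the linear encoders, random target vectors and least-squares fit are simplifications for illustration and not the models used in the thesis:

    import numpy as np

    rng = np.random.default_rng(0)
    n_concepts, dim_a, dim_b, dim_sym = 5, 20, 30, 8

    # one shared target vector per concept (the "vectorial representation")
    targets = rng.normal(size=(n_concepts, dim_sym))

    # paired samples: modality A and modality B instances of the same concepts
    labels = rng.integers(0, n_concepts, size=200)
    X_a = rng.normal(size=(200, dim_a)) + labels[:, None]
    X_b = rng.normal(size=(200, dim_b)) - labels[:, None]
    Y = targets[labels]

    # two parallel linear encoders, each regressing its modality onto the shared target
    W_a, *_ = np.linalg.lstsq(X_a, Y, rcond=None)
    W_b, *_ = np.linalg.lstsq(X_b, Y, rcond=None)

    # association: paired samples should map close to the same symbolic vector
    err = np.linalg.norm(X_a @ W_a - X_b @ W_b, axis=1).mean()
    print(f"mean distance between associated outputs: {err:.3f}")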
In recent years, enormous progress has been made in the field of Artificial Intelligence (AI). Especially the introduction of Deep Learning and end-to-end learning, the availability of large datasets and the necessary computational power in form of specialised hardware allowed researchers to build systems with previously unseen performance in areas such as computer vision, machine translation and machine gaming. In parallel, the Semantic Web and its Linked Data movement have published many interlinked RDF datasets, forming the world’s largest, decentralised and publicly available knowledge base.
Despite these scientific successes, all current systems are still narrow AI systems. Each of them is specialised to a specific task and cannot easily be adapted to all other human intelligence tasks, as would be necessary for Artificial General Intelligence (AGI). Furthermore, most of the currently developed systems are not able to learn by making use of freely available knowledge such as provided by the Semantic Web. Autonomous incorporation of new knowledge is however one of the pre-conditions for human-like problem solving.
This work provides a small step towards teaching machines such human-like reasoning on freely available knowledge from the Semantic Web. We investigate how human associations, one of the building blocks of our thinking, can be simulated with Linked Data. The two main results of these investigations are a ground truth dataset of semantic associations and a machine learning algorithm that is able to identify patterns for them in huge knowledge bases.
The ground truth dataset of semantic associations consists of DBpedia entities that are known to be strongly associated by humans. The dataset is published as RDF and can be used for future research.
The developed machine learning algorithm is an evolutionary algorithm that can learn SPARQL queries from a given SPARQL endpoint based on a given list of exemplary source-target entity pairs. The algorithm operates in an end-to-end learning fashion, extracting features in form of graph patterns without the need for human intervention. The learned patterns form a feature space adapted to the given list of examples and can be used to predict target candidates from the SPARQL endpoint for new source nodes. On our semantic association ground truth dataset, our evolutionary graph pattern learner reaches a Recall@10 of > 63 % and an MRR (& MAP) > 43 %, outperforming all baselines. With an achieved Recall@1 of > 34% it even reaches average human top response prediction performance. We also demonstrate how the graph pattern learner can be applied to other interesting areas without modification.
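To make the notion of a learned graph pattern concrete, a hand-written pattern of the kind such a learner could produce is sketched below as a SPARQL query string; the predicates and variable names are purely illustrative and are not patterns actually learned in the thesis:

    # Illustrative only: a graph pattern connecting a source entity to target
    # candidates; not an actual learned pattern from the thesis.
    query = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT DISTINCT ?target WHERE {
      ?source dbo:birthPlace ?place .   # feature: the source's place
      ?target dbo:birthPlace ?place .   # candidates sharing that place
      FILTER (?target != ?source)
    }
    """
    print(query)  # could be sent to a SPARQL endpoint such as https://dbpedia.org/sparql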
Though environmental inequality research has gained extensive interest in the United States, it has received far less attention in Europe and Germany. The main objective of this book is to extend the research on environmental inequality in Germany. This book aims to shed more light on the question of whether minorities in Germany are affected by a disproportionately high burden of environmental pollution, and to increase the general knowledge about the causal mechanisms, which contribute to the unequal distribution of environmental hazards across the population.
To improve our knowledge about environmental inequality in Germany, this book extends previous research in several ways. First, to evaluate the extent of environmental inequality, this book relies on two different data sources. On the one hand, it uses household-level survey data and self-reports about impairment by air pollution. On the other hand, it combines aggregated census data and objective register-based measures of industrial air pollution by using geographic information systems (GIS). Consequently, this book offers the first analysis of environmental inequality on the national level that uses objective measures of air pollution in Germany. Second, to evaluate the causes of environmental inequality, this book applies a panel data analysis on the household level, thereby offering the first longitudinal analysis of selective migration processes outside the United States. Third, it compares the level of environmental inequality between German metropolitan areas and evaluates to what extent the theoretical arguments of environmental inequality can explain differing levels of environmental inequality across the country. By doing so, this book not only investigates the impact of indicators derived from the standard strand of theoretical reasoning but also includes structural characteristics of the urban space.
All studies presented in this book confirm the disproportionate exposure of minorities to environmental pollution. Minorities live in more polluted areas in Germany but also in more polluted parts of the communities, and this disadvantage is most severe in metropolitan regions. Though this book finds evidence for selective migration processes contributing to the disproportionate exposure of minorities to environmental pollution, it also stresses the importance of urban conditions. Especially cities with centrally located industrial facilities yield a high level of environmental inequality. This poses the question of whether environmental inequality might be the result of two independent processes: 1) urban infrastructure confines residential choices of minorities to the urban core, and 2) urban infrastructure facilitates centrally located industries. In combination, both processes lead to a disproportionate burden of minority households.
Tables or ranked lists summarize facts about a group of entities in a concise and structured fashion. They are found in all kinds of domains and are easily comprehensible by humans. Some globally prominent examples of such rankings are the tallest buildings in the world, the richest people in Germany, or the most powerful cars. The availability of vast amounts of tables or rankings from the open domain allows different ways to explore data. Computing similarity between ranked lists, in order to find those lists where entities are presented in a similar order, carries important analytical insights. This thesis presents a novel query-driven Locality Sensitive Hashing (LSH) method, in order to efficiently find similar top-k rankings for a given input ranking. Experiments show that the proposed method provides far better performance than inverted-index-based approaches; in particular, it is able to outperform the popular prefix-filtering method. Additionally, an LSH-based probabilistic pruning approach is proposed that optimizes the space utilization of inverted indices, while still maintaining a user-provided recall requirement for the results of the similarity search. Further, this thesis addresses the problem of automatically identifying interesting categorical attributes, in order to explore the entity-centric data by organizing them into meaningful categories. Our approach proposes novel statistical measures, beyond known concepts like information entropy, in order to capture the distribution of data to train a classifier that can predict which categorical attribute will be perceived as suitable by humans for data categorization. We further discuss how the information of useful categories can be applied in PANTHEON and PALEO, two data exploration frameworks developed in our group.
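A minimal sketch of the general idea of hashing top-k rankings so that similar lists tend to collide is shown below, using MinHash over the set of ranked items; this is a simplification that ignores rank positions and is not the query-driven LSH scheme proposed in the thesis:

    import random

    def minhash_signature(items, num_hashes=32, seed=0):
        # MinHash signature of a set of items; similar sets get similar signatures.
        rnd = random.Random(seed)
        salts = [rnd.getrandbits(32) for _ in range(num_hashes)]
        return tuple(min(hash((salt, it)) for it in items) for salt in salts)

    def lsh_buckets(signature, bands=8):
        # Split the signature into bands; each band hash is one bucket key.
        rows = len(signature) // bands
        return [hash(signature[b * rows:(b + 1) * rows]) for b in range(bands)]

    top_k_a = ["Burj Khalifa", "Shanghai Tower", "Abraj Al-Bait"]
    top_k_b = ["Burj Khalifa", "Shanghai Tower", "Lotte World Tower"]
    sig_a, sig_b = minhash_signature(top_k_a), minhash_signature(top_k_b)
    shared = set(lsh_buckets(sig_a)) & set(lsh_buckets(sig_b))
    print("candidate pair:", bool(shared))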
Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important
subject of study in programming languages. Recent advances in self-adjusting
computation made progress towards achieving efficient incremental computation by providing algorithmic language abstractions to express computations that respond automatically to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a broad range of problems via an explicit programming style, where the programmer uses specific
primitives to identify, create and operate on data that can change over time.
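A toy illustration of this explicit programming style, with changeable values and automatic re-execution of the computations that read them, is sketched below in Python; this is a drastically simplified analogue and not the Standard ML primitives used in this line of work:

    class Mod:
        # A modifiable value that remembers which computations read it.
        def __init__(self, value):
            self.value, self.readers = value, []

        def read(self, computation):
            if computation not in self.readers:
                self.readers.append(computation)
            return self.value

        def write(self, value):
            if value != self.value:
                self.value = value
                for comp in list(self.readers):   # change propagation: rerun readers
                    comp.run()

    class Computation:
        # Wraps a function of the computation itself so that reads can be recorded.
        def __init__(self, fn):
            self.fn = fn
            self.run()

        def run(self):
            self.result = self.fn(self)

    # usage: the sum is recomputed automatically when an input changes
    a, b = Mod(1), Mod(2)
    total = Computation(lambda c: a.read(c) + b.read(c))
    print(total.result)   # 3
    a.write(10)
    print(total.result)   # 12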
This dissertation presents implicit self-adjusting computation, a type-directed technique for translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer annotates the (top-level) input types of the programs to be translated. Type inference finds
all other types, and a type-directed translation rewrites the source program
into an explicitly self-adjusting target program. The type system is related to
information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting
programs and preserves the source program’s input-output behavior, guaranteeing that translated programs respond correctly to all changes to their
data. Using a cost semantics, we also prove that the translation preserves the
asymptotic complexity of the source program.
As a second contribution, we present two techniques to facilitate the processing of large and dynamic data in self-adjusting computation. First, we
present a type system for precise dependency tracking that minimizes the
time and space for storing dependency metadata. The type system improves
the scalability of self-adjusting computation by eliminating an important assumption of prior work that can lead to recording spurious dependencies.
We present a type-directed translation algorithm that generates correct selfadjusting programs without relying on this assumption. Second, we show a
probabilistic-chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation.
We implement implicit self-adjusting computation as an extension to Standard ML with compiler and runtime support. Using the compiler, we are able
to incrementalize an interesting set of applications, including standard list
and matrix benchmarks, ray tracer, PageRank, sparse graph connectivity, and
social circle counts. Our experiments show that our compiler incrementalizes existing code with only trivial amounts of annotation, and the resulting
programs bring asymptotic improvements to large datasets from real-world
applications, leading to orders of magnitude speedups in practice.
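As a toy illustration of the explicit programming style that implicit self-adjusting computation automates, the following sketch uses "modifiable" references whose readers are re-run when an input changes. The Mod and Computation classes are hypothetical and far cruder than a real self-adjusting runtime (which reuses unaffected subcomputations via dependency graphs and change propagation); the point is only to show how reads are tracked and changes propagated.

```python
# Toy change propagation: reads of a modifiable register the reading
# computation, and writing a new value re-runs all registered readers.

class Mod:
    def __init__(self, value):
        self.value = value
        self.readers = set()          # computations that read this modifiable

    def read(self, computation):
        self.readers.add(computation)
        return self.value

    def write(self, value):
        if value != self.value:
            self.value = value
            for comp in list(self.readers):
                comp.rerun()

class Computation:
    def __init__(self, fn):
        self.fn = fn                  # fn(self) -> result, reads Mods via m.read(self)
        self.result = None
        self.rerun()

    def rerun(self):
        self.result = self.fn(self)

# Incremental sum over a list of modifiables.
xs = [Mod(v) for v in [1, 2, 3, 4]]
total = Computation(lambda c: sum(m.read(c) for m in xs))
print(total.result)   # 10
xs[2].write(30)       # change one input; the dependent computation re-runs
print(total.result)   # 37
```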
The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and a good way to avoid metabolic interference. This process is called metabolic channeling, and it describes the (in-)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate molecule 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is the affinity to their substrate/product DHBV. The already high affinity of both enzymes for each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be rapidly channeled in order to be correctly reduced further to PEB. Fluorescence titration experiments and transfer assays confirmed the enhancing effect of DHBV on its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with that of standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to PebS, which can also convert BV IXα via DHBV to PEB as a single enzyme. The product and intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work revealed that PebA and PebB interact via a proximity channeling process in which the intermediate DHBV plays an important role. They also highlight the importance of substrate channeling in the synthesis of PEB to optimize the flux of intermediates through this metabolic pathway.
This thesis consists of five chapters. Chapter one elaborates on the principle of cognitive consistency and provides an overview of what extant research refers to as cognitive consistency theories (e.g., Abelson et al., 1968; Harmon-Jones & Harmon-Jones, 2007; Simon, Stenstrom, & Read, 2015). Moreover, it describes the most prominent theoretical representatives in this context, namely balance theory (Heider, 1946, 1958), congruity theory (Osgood & Tannenbaum, 1955), and cognitive dissonance theory (Festinger, 1957). Chapter one further outlines the role of individuals’ preference for cognitive consistency in the context of financial resource acquisition, the recruitment of employees and the acquisition of customers in the entrepreneurial context.
Chapter two is co-authored by Prof. Dr. Matthias Baum and presents two separate studies in which we empirically investigate the hypothesis that social entrepreneurs face a systematic disadvantage, compared to for-profit entrepreneurs, when seeking to acquire financial resources. Further, our work goes beyond existing research by introducing biased perceptions as a factor that may constrain social enterprise resource acquisition and therefore possibly stall the process of social value creation. Building on role congruity theory (Eagly & Karau, 2002), we address the question of whether social entrepreneurs provide signals that are less congruent with the stereotype of successful entrepreneurs and, as such, are perceived as less competent. We further test whether such biased competency perceptions feed forward into a lower probability of receiving funding.
Chapter three is also co-authored by Prof. Dr. Matthias Baum as well as by Eva Henrich. The aim of this chapter is to further our understanding of the early recruitment phase and to contribute to the current debate about how firms should orchestrate their recruitment channels in order to enhance the creation of employer knowledge. We introduce the concept of integrated marketing communication into the recruitment field and examine how the level of consistency of job or organization information affects the recall and recognition of that information. We additionally test whether information consistency among multiple recruitment channels influences the rate of information recognition failures. Answering this question is important because, by failing to remember the source of recruitment information, job seekers may attribute job information to the wrong firm and thus form incorrect employer knowledge.
Chapter four, which is co-authored by Prof. Dr. Matthias Baum, introduces customer congruity perceptions between a brand and a reward in the context of customer referral programs as an essential driver of the effectiveness of such programs. More precisely, we posit and empirically test a model according to which the decision-making process of the customer recommending a firm involves multiple mental steps and assumes reward perceptions to be an immediate antecedent of brand evaluation, which then ultimately shapes the likelihood of recommendation. The level of congruity/incongruity is set up as an antecedent state and affects the perceived attractiveness of the reward. Our work contributes to the discussion on the optimal level of congruity between a prevailing schema in the mind of the customer and a stimulus presented. In addition, chapter four introduces customer referral programs as a strategic tool for brand managers. Chapter four has also been published in Psychology & Marketing.
Chapter five first proposes that marketing strategies specifically designed to induce word-of-mouth (WOM) behavior are particularly relevant for new ventures. Against the background that previous research suggests that customer perceptions of young firm age may influence customer behavior and the degree to which customers support new ventures (e.g., Choi & Shepherd, 2005; Stinchcombe, 1965), we secondly conduct an experiment to examine the causal mechanisms linking firm age and customer WOM. Chapter five, too, is co-authored by Prof. Dr. Matthias Baum.
Increasing costs due to the rising attrition of drug candidates in late developmental phases, alongside post-marketing withdrawals of drugs, challenge the pharmaceutical industry to further improve its current preclinical safety assessment strategies. One of the most common reasons for the termination of drug candidates is drug-induced hepatotoxicity, which more often than not remains undetected in early developmental stages, thus emphasizing the necessity for improved and more predictive preclinical test systems. One reason for the very limited value of currently applied in vitro test systems for the detection of potential hepatotoxic liabilities is the lack of organotypic and tissue-specific physiology of hepatocytes cultured in ordinary monolayer culture formats.
The thesis at hand primarily deals with the evaluation of both two- and three-dimensional cell culture approaches with respect to their relative ability to predict the hepatotoxic potential of drug candidates in early developmental phases. First, different hepatic cell models, which are routinely used in the pharmaceutical industry (primary human hepatocytes as well as the three cell lines HepG2, HepaRG and Upcyte hepatocytes), were investigated in conventional 2D monolayer culture with respect to their ability to detect hepatotoxic effects in simple cytotoxicity studies. Moreover, it could be shown that the global protein expression levels of all cell lines substantially differ from those of primary human hepatocytes, with the least pronounced difference in HepaRG cells.
The introduction of a third dimension through the cultivation of spheroids enables hepatocytes to recapitulate their typical native polarity and furthermore dramatically increases the contact surface of adjacent cells. These differences in cellular architecture have a positive influence on hepatocyte longevity and the expression of drug metabolizing enzymes and transporters, which could be shown via immunofluorescent (IF) staining for at least 14 days in PHH and at least 28 days in HepaRG spheroids, respectively. Additionally, the IF staining of three different phase III transporters (MDR1, MRP2 and BSEP) indicated a bile canalicular network in spheroids of both cell models. A dose-dependent inducibility of important cytochrome P450 isoenzymes in HepaRG spheroids could be shown on the protein level via IF for at least 14 days. CYP inducibility of HepaRG cells cultured in 2D and 3D was compared on the mRNA level for up to 14 days, and inducibility was generally lower in 3D compared to 2D under the conditions of this study. In a comparative cytotoxicity study, PHH and HepaRG spheroids as well as HepaRG monolayers were treated with five hepatotoxic drugs for up to 14 days, and viability was measured at three time points (days 3, 7 and 14). A clear time- and dose-dependent onset of the drug-induced hepatotoxic effects was observable in all conditions tested, indicated by a shift of the respective EC50 value towards lower doses with increasing exposure. The observed effects were most pronounced in PHH spheroids, indicating that they are the most sensitive cell model in this study. Moreover, HepaRG cells were more sensitive in spheroid culture compared to monolayers, which suggests a potential application of spheroids as a long-term test system for the detection of hepatotoxicities with slow onset. Finally, the basal protein expression levels of three antigens (CYP1A2, CYP3A4 and NAT 1/2) were analyzed via Western blotting in HepaRG cells cultured in three different cell culture formats (2D, 3D and QV) in order to estimate the impact of the cell culture conditions on protein expression levels. The QV system enables a pump-driven flow of cell culture media, which introduces both mechanical stimuli through shear and molecular stimuli through dynamic circulation to the monolayer. These stimuli had a clearly positive effect on the expression of the selected antigens, whose levels were increased in comparison to both 2D and 3D. In contrast, HepaRG spheroids showed time-dependent differences with the overall highest levels at day 7.
The studies presented in this thesis delivered valuable information on the increased physiological relevance depending on the cell culture format: three-dimensionality as well as the circulation of media leads to a more differentiated phenotype in hepatic cell models. These cell culture formats are applicable in preclinical drug development in order to obtain more relevant information at early developmental stages and thus help to create a more efficient drug development process. Nonetheless, further studies are necessary to thoroughly characterize, validate and standardize such novel cell culture approaches prior to their routine application in industry.
Road accidents remain one of the major causes of death and injury globally. More than a million people die every year in road accidents worldwide. Although the number of accidents in the European region has decreased in recent years, road safety remains a major challenge. Especially in the case of commercial trucks, due to the size and load of the vehicle, even minor collisions with other road users can lead to serious injuries or death. In order to reduce the number of accidents, the automotive industry is rapidly developing advanced driver assistance systems (ADAS) and automated driving technologies. Efficient and reliable solutions are required for these systems to sense, perceive and react to different environmental conditions. For vehicle safety applications such as collision avoidance with vulnerable road users (VRUs), the system must not only detect and track objects in the vicinity of the vehicle efficiently but also function robustly.
An environment perception solution for application in commercial truck safety systems and for future automated driving is developed in this work. Thereby, a method for integrated tracking and classification of road users in the near vicinity of the vehicle is formulated. The drawbacks of conventional multi-object tracking algorithms with respect to state, measurement and data association uncertainties are addressed using recent advancements in the field of unified multi-object tracking solutions based on random finite sets (RFS). A Gaussian mixture implementation of the recently developed labeled multi-Bernoulli (LMB) filter [RSD15] is used as the basis for multi-object tracking in this work. Measurements from a high-resolution radar sensor are used as the main input for detecting and tracking objects.
On the one side, the focus of this work is on tracking VRUs in the near vicinity of the truck. As it is beneficial for most vehicle safety systems to also know the category an object belongs to, the focus on the other side is on classifying the road users. All radar detections believed to originate from a single object are clustered together with the help of density-based spatial clustering of applications with noise (DBSCAN). Each cluster of detections has different properties based on the respective object characteristics. Sixteen distinct features based on radar detections, suitable for separating the pedestrian, bicyclist and passenger-car categories, are selected and extracted for each cluster. A machine-learning-based classifier is constructed, trained and parameterised to distinguish the road users based on the extracted features.
The class information derived from the radar detections can further be used by the tracking algorithm to adapt the model parameters used for precisely predicting the object motion according to the category of the object. A multiple model labeled multi-Bernoulli (MMLMB) filter is used for modelling different object motions. Apart from the detection level, the estimated state of an object on the tracking level also provides information about the object class. Both pieces of information are fused using the Dempster-Shafer theory (DST) of evidence, based on the respective class probabilities. Thereby, the output of the integrated tracking and classification with the MMLMB filter are classified tracks that can be used by truck safety applications with better reliability.
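The following is a minimal sketch of Dempster's rule of combination for fusing class evidence from the detection level and the tracking level over the frame {pedestrian, bicyclist, car}. The mass assignments below are invented placeholders; the thesis's actual mapping from class probabilities to mass functions is not reproduced.

```python
from itertools import product

# Dempster's rule of combination over subsets (frozensets) of the frame.
FRAME = frozenset({"pedestrian", "bicyclist", "car"})

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical evidence from radar-detection features vs. track motion.
m_detection = {frozenset({"pedestrian"}): 0.6,
               frozenset({"bicyclist"}): 0.2,
               FRAME: 0.2}                  # remaining uncertainty
m_tracking = {frozenset({"pedestrian", "bicyclist"}): 0.7,
              frozenset({"car"}): 0.1,
              FRAME: 0.2}

print(combine(m_detection, m_tracking))     # fused masses favour "pedestrian"
```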
The developed environment perception method is further implemented as a real-time prototypical system on a commercial truck. The performance of the tracking and classification approaches is evaluated with the help of simulations and multiple test scenarios. A comparison of the developed approaches to a conventional converted-measurements Kalman filter with global nearest neighbour association (CMKF-GNN) shows significant advantages in overall accuracy and performance.
Mobility has become an integral feature of many wireless networks. Along with this mobility comes the need for location awareness. A prime example of this development is today's and future transportation systems. They increasingly rely on wireless communications to exchange location and velocity information for a multitude of functions and applications. At the same time, technological progress facilitates the widespread availability of sophisticated radio technology such as software-defined radios. The result is a variety of new attack vectors threatening the integrity of location information in mobile networks.
Although such attacks can have severe consequences in safety-critical environments such as transportation, the combination of mobility and integrity of spatial information has not received much attention in security research in the past. In this thesis we aim to fill this gap by providing adequate methods to protect the integrity of location and velocity information in the presence of mobility. Based on physical effects of mobility on wireless communications, we develop new methods to securely verify locations, sequences of locations, and velocity information provided by untrusted nodes. The results of our analyses show that mobility can in fact be exploited to provide robust security at low cost.
To further investigate the applicability of our schemes to real-world transportation systems, we have built the OpenSky Network, a sensor network which collects air traffic control communication data for scientific applications. The network uses crowdsourcing and has already achieved coverage in most parts of the world with more than 1000 sensors.
Based on the data provided by the network and measurements with commercial off-the-shelf hardware, we demonstrate the technical feasibility and security of our schemes in the air traffic scenario. Moreover, the experience and data provided by the OpenSky Network allow us to investigate the challenges for our schemes in the real-world air traffic communication environment. We show that our verification methods match all requirements to help secure the next-generation air traffic system.
This research explores the development of web-based reference software for the characterisation of surface roughness from two-dimensional surface data. Reference software used for the verification of surface characteristics makes the evaluation methods easier to check for clients. The algorithms used in this software are based on ISO standards. Software used in industrial measuring instruments may show variations in the calculated parameters due to numerical differences in the computation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile to characterise surface roughness.
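A heavily simplified sketch of amplitude-parameter evaluation in the spirit of DIN EN ISO 4287 follows. It is not the reference software itself: real reference implementations additionally perform form removal (fitting) and Gaussian profile filtering before computing parameters, both of which are omitted, and the profile below is synthetic.

```python
import numpy as np

def roughness_parameters(profile, n_sections=5):
    # Amplitude parameters Ra, Rq, Rz from a levelled 2D profile.
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()                       # crude levelling stand-in for fitting
    ra = np.mean(np.abs(z))                # arithmetic mean deviation
    rq = np.sqrt(np.mean(z ** 2))          # root mean square deviation
    # Rz: mean of peak-to-valley heights over sampling-length sections.
    sections = np.array_split(z, n_sections)
    rz = np.mean([s.max() - s.min() for s in sections])
    return {"Ra": ra, "Rq": rq, "Rz": rz}

# Synthetic demonstration profile: sinusoidal component plus noise.
x = np.linspace(0.0, 4.0, 4000)            # e.g. 4 mm traverse length
profile = 0.5 * np.sin(2 * np.pi * x / 0.8) + 0.05 * np.random.randn(x.size)
print(roughness_parameters(profile))
```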
If gradient-based optimization algorithms are used to improve industrial products by minimizing their target functions, the derivatives need to be exact.
The last percent of possible improvement, like the efficiency of a turbine, can only be gained if the derivatives are consistent with the solution process that is used in the simulation software.
It is problematic that the development of the simulation software is an ongoing process which leads to the use of approximated derivatives.
If a derivative computation is implemented manually, it will be inconsistent after some time if it is not updated.
This thesis presents a generalized approach which differentiates the whole simulation software with Algorithmic Differentiation (AD), and guarantees a correct and consistent derivative computation after each change to the software.
For this purpose, the variable tagging technique is developed.
The technique checks at run-time if all dependencies, which are used by the derivative algorithms, are correct.
Since it is also necessary to check the correctness of the implementation, a theorem is developed which describes how AD derivatives can be compared.
This theorem is used to develop further methods that can detect and correct errors.
All methods are designed such that they can be applied in real world applications and are used within industrial configurations.
The process described above yields consistent and correct derivatives but the efficiency can still be improved.
This is done by deriving new derivative algorithms.
A fixed-point iterator approach, with a consistent derivation, yields all state-of-the-art algorithms and produces two new algorithms.
These two new algorithms include all implementation details and therefore they produce consistent derivative results.
For detecting hot spots in the application, state-of-the-art techniques are presented and extended.
The data management is changed such that the performance of the software is affected only marginally when quantities, like the number of input and output variables or the memory consumption, are computed for the detection.
The hot spots can be treated with techniques like checkpointing or preaccumulation.
How these techniques change the time and memory consumption is analyzed and it is shown how they need to be used in selected AD tools.
As a last step, the used AD tools are analyzed in more detail.
The major implementation strategies for operator overloading AD tools are presented and implementation improvements for existing AD tools are discussed.
The discussion focuses on a minimal memory consumption and makes it possible to compare AD tools on a theoretical level.
The new AD tool CoDiPack is based on these findings and its design and concepts are presented.
The improvements and findings in this thesis make it possible to generate automatic, consistent and correct derivatives efficiently for industrial applications.
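To illustrate the operator-overloading principle and the idea of checking AD derivatives, here is a toy forward-mode (dual-number) sketch in Python. CoDiPack itself is a C++ tape-based tool; this is only an analogy, and the function f stands in for an arbitrary simulation kernel. The comparison against central finite differences mirrors, in a very crude way, the idea of verifying derivative correctness.

```python
import math

class Dual:
    # Minimal dual number: value plus directional derivative.
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

def f(x):                       # stands in for an implemented solution process
    return x * x + 3.0 * sin(x)

x0 = 1.3
ad = f(Dual(x0, 1.0)).deriv     # derivative consistent with the implemented code
h = 1e-6
fd = (f(Dual(x0 + h)).value - f(Dual(x0 - h)).value) / (2 * h)
print(ad, fd, abs(ad - fd))     # agreement up to finite-difference error
```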
The aim of this dissertation is to explain processes in recruitment by gaining a better understanding of how perceptions evolve and how recruitment outcomes and perceptions are influenced. To do so, this dissertation takes a closer look at the formation of fit perceptions, the effects of top employer awards on pre-hire recruitment outcomes, and on how perceptions about external sources are influenced.
Fast Internet content delivery relies on two layers of caches on the request path. Firstly, content delivery networks (CDNs) seek to answer user requests before they traverse slow Internet paths. Secondly, aggregation caches in data centers seek to answer user requests before they traverse slow backend systems. The key challenge in managing these caches is the high variability of object sizes, request patterns, and retrieval latencies. Unfortunately, most existing literature focuses on caching with low (or no) variability in object sizes and ignores the intricacies of data center subsystems.
This thesis seeks to fill this gap with three contributions. First, we design a new caching system, called AdaptSize, that is robust under high object size variability. Second, we derive a method (called Flow-Offline Optimum or FOO) to predict the optimal cache hit ratio under variable object sizes. Third, we design a new caching system, called RobinHood, that exploits variances in retrieval latencies to deliver faster responses to user requests in data centers.
The techniques proposed in this thesis significantly improve the performance of CDN and data center caches. On two production traces from one of the world's largest CDNs, AdaptSize achieves 30-91% higher hit ratios than widely-used production systems, and 33-46% higher hit ratios than state-of-the-art research systems. Further, AdaptSize reduces latency by more than 30% at the median, 90th percentile and 99th percentile.
We evaluate the accuracy of our FOO analysis technique on eight different production traces spanning four major Internet companies.
We find that FOO's error is at most 0.3%. Further, FOO reveals that the gap between online policies and OPT is much larger than previously thought: 27% on average, and up to 43% on web application traces.
We evaluate RobinHood with production traces from a major Internet company on a 50-server cluster. We find that RobinHood improves the 99th-percentile latency by more than 50% over existing caching systems.
As load imbalances grow, RobinHood's latency improvement can be more than 2x. Further, we show that RobinHood is robust against server failures and adapts to automatic scaling of backend systems.
The results of this thesis demonstrate the power of guiding the design of practical caching policies using mathematical performance models and analysis. These models are general enough to find application in other areas of caching design and future challenges in Internet content delivery.
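As an illustration of size-aware admission in front of an LRU cache, in the spirit of AdaptSize's idea of admitting large objects with lower probability, here is a simplified sketch. The exponential admission rule and all parameters are assumptions for this example; the real system tunes its size parameter online with a performance model, which is omitted here.

```python
import math
import random
from collections import OrderedDict

class SizeAwareLRU:
    # LRU cache with probabilistic, size-aware admission (illustrative only).
    def __init__(self, capacity_bytes, c=100_000):
        self.capacity = capacity_bytes
        self.c = c                        # admission size parameter
        self.used = 0
        self.store = OrderedDict()        # key -> object size in bytes

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)   # refresh LRU position
            return True
        return False

    def admit(self, key, size):
        if size > self.capacity:
            return
        if random.random() > math.exp(-size / self.c):
            return                        # large objects are admitted rarely
        while self.used + size > self.capacity and self.store:
            _, old_size = self.store.popitem(last=False)   # evict LRU victim
            self.used -= old_size
        self.store[key] = size
        self.used += size

cache = SizeAwareLRU(capacity_bytes=1_000_000)
for key, size in [("a", 10_000), ("b", 800_000), ("a", 10_000)]:
    if not cache.get(key):
        cache.admit(key, size)
print(len(cache.store), cache.used)
```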
The simulation of cutting process challenges established methods due to large deformations and topological changes. In this work a particle finite element method (PFEM) is presented, which combines the benefits of discrete modeling techniques and methods based on continuum mechanics. A crucial part of the PFEM is the detection of the boundary of a set of particles. The impact of this boundary detection method on the structural integrity is examined and a relation of the key parameter of the method to the eigenvalues of strain tensors is elaborated. The influence of important process parameters on the cutting force is studied and a comparison to an empirical relation is presented.
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck's six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension-one subvarieties. As side results, we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara's equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
Analyzing Centrality Indices in Complex Networks: an Approach Using Fuzzy Aggregation Operators
(2018)
The identification of entities that play an important role in a system is one of the fundamental analyses performed in network studies. This topic is mainly related to centrality indices, which quantify node centrality with respect to several properties of the represented network. The nodes identified in such an analysis are called central nodes. Although centrality indices are very useful for these analyses, there are several challenges regarding which index fits a given network best. In addition, if using only one index to determine central nodes under- or overestimates the importance of nodes and is insufficient for finding important nodes, the question becomes how multiple indices can be used in conjunction in such an evaluation. Thus, in this thesis an approach is proposed that includes multiple indices of a node, each indicating one aspect of importance, in the evaluation and analyzes all aspects of a node's centrality in an explorative manner. To achieve this aim, the proposed approach uses fuzzy operators, including a parameter for generating different types of aggregations over multiple indices. In addition, several preprocessing methods for the normalization of the index values are proposed and discussed. We investigate whether different choices regarding the aggregation of the values change the ranking of the nodes. It is revealed that (1) there are nodes that remain stable among the top-ranking nodes, which makes them the most central nodes, and there are nodes that remain stable among the bottom-ranking nodes, which makes them the least central nodes; and (2) there are nodes that show high sensitivity to the choice of normalization methods and/or aggregations. We explain both cases and the reasons why the nodes' rankings are stable or sensitive to the corresponding choices in various networks, such as social networks, communication networks, and air transportation networks.
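A minimal sketch of this kind of analysis follows, aggregating several centrality indices per node with an ordered weighted averaging (OWA) operator, one family of fuzzy aggregation operators. The weight vector plays the role of the parameter that moves the aggregation between max-like ("at least one index is high") and min-like ("all indices are high") behaviour. The specific indices, min-max normalization and weights below are arbitrary illustrative choices, not the thesis's configuration.

```python
import networkx as nx
import numpy as np

def owa(values, weights):
    # Ordered weighted averaging: weights applied to values sorted descending.
    return float(np.dot(sorted(values, reverse=True), weights))

G = nx.karate_club_graph()
indices = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
}

# Min-max normalization of each index so the values are comparable.
normalized = {}
for name, values in indices.items():
    lo, hi = min(values.values()), max(values.values())
    normalized[name] = {n: (v - lo) / (hi - lo) for n, v in values.items()}

weights = [0.5, 0.3, 0.2]        # mildly max-leaning (optimistic) aggregation
scores = {n: owa([normalized[name][n] for name in indices], weights)
          for n in G.nodes}

print(sorted(scores, key=scores.get, reverse=True)[:5])   # candidate central nodes
```

Re-running the same aggregation with different weight vectors or normalizations is then exactly the kind of sensitivity experiment described above.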
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent, one may use a mathematical model. As the brain structure plays an important role in cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting, among other things, for integrin-mediated movement of cancer cells in the brain tissue and integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations of the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this aim, we present programs for the data processing that take the raw medical data and transform it into the form required by the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
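For orientation, a generic macroscopic equation of the reaction-convection-diffusion type obtained by such a parabolic limit can be written as
\[ \partial_t c \;=\; \nabla\cdot\big(\mathbb{D}_T(x)\,\nabla c\big) \;-\; \nabla\cdot\big(u(x)\,c\big) \;+\; \mu\, c\left(1-\frac{c}{K}\right), \]
where \(c\) denotes the macroscopic tumour cell density, \(\mathbb{D}_T\) a tumour diffusion tensor derived from the diffusion tensor imaging data, \(u\) a drift velocity induced by the anisotropic tissue, and the last term a logistic-type growth term with rate \(\mu\) and carrying capacity \(K\). This is only a schematic form: the precise tensors, drift and source terms of the model derived in the thesis, including the therapy terms, are not reproduced here.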
Composite materials are used in many modern tools and engineering applications and consist of two or more materials that are intermixed. Features like inclusions in a matrix material are often very small compared to the overall structure. Volume elements that are characteristic for the microstructure can be simulated and their elastic properties are then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to anisotropic translation invariant spaces. We discretize the partial differential equation on these spaces and prove the convergence rate. The speed of convergence depends on the smoothness of the coefficients and the regularity of the ansatz space. The spaces of translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and periodic box splines, including the constant finite element discretization of Brisard and Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational effort. We introduce mixing rules as a means to transfer fine-grid information to the smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the mixing rules on the convergence of the method in two- and three-dimensional examples.
References
[1] S. Brisard and L. Dormieux. "FFT-based methods for the mechanics of composites: A general variational framework". In: Computational Materials Science 49.3 (2010), pp. 663-671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. "A numerical method for computing the overall response of nonlinear composites with complex microstructure". In: Computer Methods in Applied Mechanics and Engineering 157.1-2 (1998), pp. 69-94. doi: 10.1016/s0045-7825(97)00218-1.
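To illustrate the fixed-point structure underlying this family of FFT-based methods, the following sketch implements a basic Moulinec-Suquet-type scheme [2] for the scalar (thermal conductivity) analogue of the elasticity problem on a standard tensor-product grid. The grid size, material contrast and reference medium are arbitrary choices; the anisotropic lattices, spaces of translates and mixing rules studied in the thesis are not reproduced here.

```python
import numpy as np

def homogenize_conductivity(a, E=(1.0, 0.0), tol=1e-8, max_iter=500):
    # Basic scheme for the Lippmann-Schwinger equation e = E - Gamma0((a - a0) e)
    # of periodic scalar homogenization with prescribed mean gradient E.
    n0, n1 = a.shape
    a0 = 0.5 * (a.max() + a.min())             # reference medium
    xi0 = np.fft.fftfreq(n0).reshape(-1, 1)    # Fourier frequencies
    xi1 = np.fft.fftfreq(n1).reshape(1, -1)
    xi_sq = xi0**2 + xi1**2
    xi_sq[0, 0] = 1.0                          # avoid division by zero at zero frequency

    e = np.stack([np.full((n0, n1), E[0]), np.full((n0, n1), E[1])])
    for _ in range(max_iter):
        tau = (a - a0) * e                     # polarization field
        tau_h = np.fft.fft2(tau, axes=(1, 2))
        # Green operator: Gamma0 tau = xi (xi . tau) / (a0 |xi|^2) in Fourier space.
        xi_dot_tau = xi0 * tau_h[0] + xi1 * tau_h[1]
        gamma_tau = np.stack([xi0 * xi_dot_tau, xi1 * xi_dot_tau]) / (a0 * xi_sq)
        gamma_tau[:, 0, 0] = 0.0               # keep the mean gradient equal to E
        e_new = np.stack([np.full((n0, n1), E[0]), np.full((n0, n1), E[1])])
        e_new -= np.real(np.fft.ifft2(gamma_tau, axes=(1, 2)))
        if np.max(np.abs(e_new - e)) < tol:
            e = e_new
            break
        e = e_new
    flux = a * e
    return flux.mean(axis=(1, 2))              # mean flux for mean gradient E

# Two-phase microstructure: circular inclusion (a = 10) in a matrix (a = 1).
n = 64
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
a = np.where((x - n / 2)**2 + (y - n / 2)**2 < (n / 5)**2, 10.0, 1.0)
print(homogenize_conductivity(a))              # first entry ~ effective conductivity
```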
The use of polymers in various tribological situations has become state of the art. Owing to the advantages of self-lubrication and superior cleanliness, more and more polymer composites are now being used as sliding elements, which were formerly composed of metallic materials only. The feature that makes polymer composites so promising in industrial applications is the opportunity to tailor their properties with special fillers. The main aim of this study was to stress the importance of integrating various functional fillers in the design of wear-resistant polymer composites and to understand the role of fillers in modifying the wear behaviour of the materials. Special emphasis was placed on enhancing the wear resistance of thermosetting and thermoplastic matrix composites by nano-TiO2 particles (with a diameter of 300 nm).
In order to optimize the content of the various fillers, the tribological performance of a series of epoxy-based composites, filled with short carbon fibres (SCF), graphite, PTFE and nano-TiO2 in different proportions and combinations, was investigated. The patterns of friction coefficient, wear resistance and contact temperature were examined with a pin-on-disc apparatus under dry sliding conditions at different contact pressures and sliding velocities. The experimental results indicated that the addition of nano-TiO2 effectively reduced the friction coefficient, and consequently the contact temperature, of short-fibre reinforced epoxy composites. Based on scanning electron microscopy (SEM) and atomic force microscopy (AFM) observations of the worn surfaces, a positive rolling effect of the nanoparticles between the material pairs was proposed, which led to a remarkable reduction of the friction coefficient. In particular, this rolling effect protected the SCF from more severe wear mechanisms, especially at high sliding pressures and speeds. As a result, the load-carrying capacity of the materials was significantly improved. In addition, the different contributions of two solid lubricants, PTFE powder and graphite flakes, to the tribological performance of epoxy nanocomposites were compared. It seems that graphite contributes to the improved wear resistance in general, whereas PTFE can easily form a transfer film and reduce the wear rate, especially in the running-in period. A combination of SCF and solid lubricants (PTFE and graphite) together with TiO2 nanoparticles can achieve a synergistic effect on the wear behaviour of the materials.
The favourable effect of nanoparticles detected in epoxy composites was also found in investigations of thermoplastics, e.g. a polyamide (PA) 6,6 matrix. It was found that nanoparticles could remarkably reduce the friction coefficient and wear rate of the PA 6,6 composite when additionally incorporated with short carbon fibres and graphite flakes. In particular, the addition of nanoparticles contributed to an obvious enhancement of the tribological performance of short-fibre reinforced, high-temperature resistant polymers, e.g. polyetherimide (PEI), especially under extreme sliding conditions.
A procedure was proposed to correlate the contact temperature and the wear rate with the frictional dissipated energy. Based on this energy consideration, a better interpretation of the different performance of distinct tribo-systems is possible. The validity of the model was illustrated for various sliding tests under different conditions. Although simple quantitative formulations cannot be expected at present, the study may lead to a fundamental understanding of the mechanisms controlling friction and wear from a general system point of view. Moreover, using the energy-based models, an artificial neural network (ANN) approach was applied to the experimental data. The well-trained ANN has the potential to be further used for online monitoring and prediction of wear progress in practical applications.
Multiphase materials combine properties of several materials, which makes them interesting for high-performing components. This thesis considers a certain set of multiphase materials, namely silicon-carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution for the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the Aluminium-Copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on SiC-Al phase boundaries.
Finally, we show how the parameters of the volume distribution can be estimated from two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimates are in acceptable agreement with the parameters estimated from three-dimensional image data as well as with the parameters of the model.
Fucoidan is a class of biopolymers mainly found in brown seaweeds. Due to its diverse medical importance, a homogeneous supply as well as a GMP-compliant product are of special interest. Therefore, in addition to the optimization of its extraction and purification from classical resources, other techniques were explored (e.g., marine tissue culture and heterologous expression of enzymes involved in its biosynthesis). Results showed that 17.5% (w/w) crude fucoidan was obtained after pre-treatment and extraction from the brown macroalga F. vesiculosus. Purification by affinity chromatography improved purity relative to the commercially purified product. Furthermore, biological investigations revealed improved anti-coagulant and anti-viral activities compared with crude fucoidan. In addition, callus-like and protoplast cultures as well as bioreactor cultivation were developed from F. vesiculosus, representing a new horizon for producing fucoidan biotechnologically. Moreover, heterologous expression by E. coli of several enzymes involved in its biosynthesis (e.g., FucTs and STs) demonstrated the possibility of obtaining active enzymes that could be utilized in the enzymatic in vitro synthesis of fucoidan. All these complementary techniques could help meet the global demand for fucoidan.
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application, we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements, we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
The phase field approach is a powerful tool that can handle even complicated fracture phenomena within an apparently simple framework. Nonetheless, a profound understanding of the model is required in order to be able to interpret the obtained results correctly. Furthermore, in the dynamic case the phase field model needs to be verified against experimental data and analytical results in order to increase the trust in this new approach. In this thesis, a phase field model for dynamic brittle fracture is investigated with regard to these aspects by analytical and numerical methods.
The complexity of modern real-time systems is increasing day by day. This inevitable rise in complexity predominantly stems from two contradicting requirements: an ever-increasing demand for functionality and a required low cost for the final product. The development of modern multi-processors and a variety of network protocols and architectures has made such a leap in complexity and functionality possible. However, efficient use of these multi-processors and network architectures is still a major problem. Moreover, the software design and its development process need improvements in order to support rapid prototyping for ever-changing system designs. Therefore, in this dissertation, we provide solutions for different problems faced in the development and deployment process of real-time systems. The contributions presented in this thesis enable efficient utilization of system resources, rapid design and development, and component modularity and portability.
In order to ease the certification process, the time-triggered computation model is often used in distributed systems. However, time-triggered scheduling is NP-hard, due to which the process of schedule generation for large, complex systems becomes convoluted. Large scheduler run-times and low scalability are two major problems with time-triggered scheduling. To solve these problems, we present a modular real-time scheduler based on a novel search-tree pruning technique, which consumes less time (compared to the state of the art) to schedule tasks on large distributed time-triggered systems. In order to provide end-to-end guarantees, we also extend our modular scheduler to quickly generate schedules for time-triggered network traffic in large TTEthernet-based networks. We evaluate our schedulers on synthetic but practical task-sets and demonstrate that our pruning technique efficiently reduces scheduler run-times and exhibits adequate scalability for future time-triggered distributed systems.
In safety-critical systems, the certification process also requires strict isolation between independent components. This isolation is enforced by a resource partitioning approach, where components of different criticality execute in different partitions (each temporally and spatially isolated from the others). However, existing partitioning approaches use periodic servers or tasks to service aperiodic activities. This approach leads to utilization loss and potentially to large latencies. In contrast to the periodic approaches, state-of-the-art aperiodic task admission algorithms do not suffer from problems like utilization loss. However, these approaches do not support partitioned scheduling or a mixed-criticality execution environment. To solve this problem, we propose an algorithm for the online admission of aperiodic tasks which provides job execution flexibility and jitter control, and leads to lower latencies of aperiodic tasks.
For safety-critical systems, fault tolerance is one of the most important requirements. In time-triggered systems, modes are often used to ensure survivability against faults, i.e., when a fault is detected, the current system configuration (or mode) is changed such that the overall system performance is either unaffected or degrades gracefully. In the literature, it has been asserted that a task-set might be schedulable in the individual modes but unschedulable during a mode-change. Moreover, conventional mode-change execution strategies might cause significant delays until the next mode is established. In order to address these issues, in this dissertation, we present an approach for the schedulability analysis of mode-changes and propose mode-change delay reduction techniques in the distributed system architecture defined by the DREAMS project. We evaluate our approach on an avionics use case and demonstrate that our approach can drastically reduce mode-change delays.
In order to manage increasing system complexity, real-time applications also require new design and development technologies. Other than fulfilling the technical requirements, the main features required from such technologies include modularity and re-usability. AUTOSAR is one of these technologies in automotive industry, which defines an open standard for software architecture of a real-time operating system. However, being an industrial standard, the available proprietary tools do not support model extensions and/or new developments by third-parties and, therefore, hinder the software evolution. To solve this problem, we developed an open-source AUTOSAR toolchain which supports application development and code generation for several modules. In order to exhibit the capabilities of our toolchain, we developed two case studies. These case studies demonstrate that our toolchain generates valid artifacts, avoids dirty workarounds and supports application development.
In order to cope with evolving system designs and hardware platforms, rapid-development of scheduling and analysis algorithms is required. In order to ease the process of algorithm development, a number of scheduling and analysis frameworks are proposed in literature. However, these frameworks focus on a specific class of applications and are limited in functionality. In this dissertation, we provide the skeleton of a scheduling and analysis framework for real-time systems. In order to support rapid-development, we also highlight different development components which promote code reuse and component modularity.
Asynchronous concurrency is a widespread way of writing programs that deal with many short tasks. It is the programming model behind event-driven concurrency, as exemplified by GUI applications, where the tasks correspond to event handlers, web applications based around JavaScript, the implementation of web browsers, and also server-side software and operating systems.
This model is widely used because it provides the performance benefits of concurrency together with easier programming than multi-threading. While there is ample work on how to implement asynchronous programs, and significant work on testing and model checking, little research has been done on handling asynchronous programs that involve heap manipulation, or on how to automatically optimize code for asynchronous concurrency.
This thesis addresses the question of how we can reason about asynchronous programs while considering the heap, and how to use this reasoning to optimize programs. The work is organized along three main questions: (i) How can we reason about asynchronous programs without ignoring the heap? (ii) How can we use such reasoning techniques to optimize programs involving asynchronous behavior? (iii) How can we transfer these reasoning and optimization techniques to other settings?
The unifying idea behind all the results in the thesis is the use of an appropriate model encompassing global state and a promise-based model of asynchronous concurrency. For the first question, we start from refinement type systems for sequential programs and extend them to perform precise resource-based reasoning in terms of heap contents, known outstanding tasks and promises. This extended type system is known as Asynchronous Liquid Separation Types, or ALST for short. We implement ALST for OCaml programs using the Lwt library.
For the second question, we consider a family of possible program optimizations, described by a set of rewriting rules, the DWFM rules. The rewriting rules are type-driven: we only guarantee soundness for programs that are well-typed under ALST. We give a soundness proof based on a semantic interpretation of ALST that allows us to show behavior inclusion for pairs of programs.
For the third question, we address an optimization problem from industrial practice: normally, JavaScript files that are referenced in an HTML file are loaded synchronously, i.e., when a script tag is encountered, the browser must suspend parsing, then load and execute the script, and only afterwards continue parsing the HTML. In practice, however, there are numerous JavaScript files for which asynchronous loading would be perfectly sound. First, we sketch a hypothetical optimization using the DWFM rules and a static analysis. To actually implement the analysis, we modify the approach to use a dynamic analysis. This analysis, known as JSDefer, enables us to analyze real-world web pages and provide experimental evidence for the efficiency of this transformation.
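As a small, purely illustrative analogy for the programming model analyzed here (the thesis targets OCaml/Lwt and JavaScript, not Python), the following sketch shows promise-style asynchronous tasks whose completion order depends on their delays while they mutate shared heap state, which is exactly the kind of interleaving such a heap-aware analysis has to track.

```python
import asyncio

async def fetch(name, delay, results):
    await asyncio.sleep(delay)        # stands in for I/O
    results[name] = delay * 10        # mutation of shared heap state

async def main():
    results = {}                      # shared heap object
    # Post two tasks; their completion order depends on the delays.
    await asyncio.gather(fetch("a", 0.02, results),
                         fetch("b", 0.01, results))
    print(results)

asyncio.run(main())
```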
Optical Character Recognition (OCR) system plays an important role in digitization of data acquired as images from a variety of sources. Although the area is very well explored for Latin languages, some of the languages based on Arabic cursive script are not yet explored. It is due to many factors: Most importantly are the unavailability of proper data sets and complexities posed by cursive scripts. The Pashto language is one of such languages which needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than $50$ million people across the world, and it is an active medium both for oral as well as written communication. It is associated with rich literary heritage and contains huge written collection. These written materials present contents of simple to complex nature, and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities (i) generic w.r.t. cursive script, (ii) specific w.r.t. Pashto language. Generic complexities are cursiveness, context dependency, and breaker character anomalies, as well as space anomalies. Pashto specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset of real images of hand-scribed contents is introduced. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, and connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNN). The proposed Pashto text recognition is based on Long Short-Term Memory (LSTM) networks and achieves a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
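As a hedged illustration of the recognition stage described above (not the thesis' exact architecture; framework choice, sizes and names are assumptions), a bidirectional LSTM over per-column image features trained with a CTC loss can be sketched as follows:

import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    # Bidirectional LSTM over per-column features of a text-line image, followed
    # by a linear projection onto the character classes (class 0 = CTC blank).
    def __init__(self, input_height=48, hidden=128, num_classes=60):
        super().__init__()
        self.lstm = nn.LSTM(input_height, hidden, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                  # x: (width, batch, input_height)
        out, _ = self.lstm(x)
        return self.fc(out).log_softmax(dim=2)

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0)
images = torch.randn(120, 1, 48)           # one dummy line: 120 columns, height 48
targets = torch.randint(1, 60, (1, 20))    # 20 character labels (never the blank)
log_probs = model(images)
loss = ctc(log_probs, targets,
           input_lengths=torch.tensor([120]),
           target_lengths=torch.tensor([20]))
loss.backward()                            # gradients for one training step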
The impact of this thesis is not only specific to the Pashto language; it is also beneficial to other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight into, and proposes solutions to, almost all generic complexities associated with Arabic, Persian, and Urdu. For example, the anomaly caused by breaker characters, which is shared among 70 languages that mainly use the Arabic script, is analyzed in depth. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-script languages.
The scope of this thesis has two important aspects. The first is the social impact, i.e., how society may benefit from it. The main advantages are bringing historical and almost vanished documents back to life and providing the opportunity to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
N-containing heterocycles have received strong attention in the field of organic synthesis because of their importance for the pharmaceutical and material sciences. Nitrogen plays an important role in compounds ranging from inorganic salts to biomolecules, and the search for convenient methods to form C-N bonds has become a hot topic in recent decades.
Since the beginning of the 20th century, transition-metal-catalyzed coupling reactions have become well known and widely used in organic research and have achieved abundant and significant progress. On the other hand, the less toxic but more challenging transition-metal-free coupling methods still hold further potential.
With the evolution of amination reactions and oxidants, increasingly effective, simplified, and atom-economic organic synthesis methods can be expected. These developments motivated the investigation of novel cross-dehydrogenative-coupling (CDC) amination methods as the topic of this PhD research.
Thus, we selected phenothiazine derivatives as the N-nucleophile reagents and phenols as the C-nucleophile reagents. To achieve the transition-metal-free CDC amination of phenols with phenothiazines, we scanned the chemical toolbox and tested a series of both common and uncommon oxidants.
First, we established the reaction conditions in the presence of cumene and O2. The proposed mechanism is initiated by a Hock process, which forms peroxo species in situ as initiators of the reaction. Initial infrared analysis indicated a strong O-H···N interaction.
In the second method, a series of iodine reagents of different valence was tested to achieve the C-N bond formation of phenols with phenothiazines. A simplified and more efficient method was developed, which also provides a wider scope of phenols. Several control experiments were conducted to investigate the plausible pathway. A large-scale synthesis of the target molecule was also performed successfully.
Then, we focused the research on the cross-coupling reaction of pre-oxidized (iminated) phenothiazine with ubiquitous phenols and indoles. In this task, we first regioselectively synthesized the novel iminated phenothiazine derivatives with the traditional biocide and mild disinfectant Chloramine T. The phenothiazinimine then underwent an ultra-simple condensation with phenol or indole coupling partners under simplified conditions. Parallel reactions were also performed to investigate the plausible pathway.
Nowadays, the increasing demand for ever more customizable products has emphasized the need for more flexible and fast-changing manufacturing systems. In this environment, simulation has become a strategic tool for the design, development, and implementation of such systems. Simulation represents a relatively low-cost and risk-free alternative for testing the impact and effectiveness of changes in different aspects of manufacturing systems.
Systems that use simulation data in decision-making processes are known as Simulation-Based Decision Support Systems (SB-DSS). Although most SB-DSS provide a powerful variety of tools for the automatic and semi-automatic analysis of simulations, visual and interactive alternatives for the manual exploration of the results are still open to further development.
The work in this dissertation is focused on enhancing decision makers’ analysis capabilities by making simulation data more accessible through the incorporation of visualization and analysis techniques. To demonstrate how this goal can be achieved, two systems were developed. The first system, viPhos – standing for visualization of Phos: Greek for light –, is a system that supports lighting design in factory layout planning. viPhos combines simulation, analysis, and visualization tools and techniques to facilitate the global and local (overall factory or single workstations, respectively) interactive exploration and comparison of lighting design alternatives.
The second system, STRAD - standing for Spatio-Temporal Radar -, is a web-based system that supports the spatio/attribute-temporal analysis of event data. Since decision-making processes in manufacturing also involve monitoring the systems over time, STRAD enables the multilevel exploration of event data (e.g., simulated or historical registers of the status of machines or results of quality control processes).
A set of four case studies and one proof of concept prepared for both systems demonstrate the suitability of the adopted visualization and analysis strategies for supporting decision-making processes in diverse application domains. The results of these case studies indicate that both the systems and the techniques included in them can be generalized and extended to support the analysis of different tasks and scenarios.
The scientific and industrial interest devoted to polymer/layered silicate
nanocomposites due to their outstanding properties and novel applications has resulted
in numerous studies in the last decade. They cover mostly thermoplastic- and
thermoset-based systems. Recently, studies on rubber/layered silicate
nanocomposites have been started as well. These studies showed how complex
nanocomposite formation may be for the related systems. Therefore, the rules governing their
structure-property relationships have to be clarified. In this thesis, the related
aspects were addressed.
For the investigations, several ethylene propylene diene rubbers (EPDM) of polar and
non-polar origin were selected, as well as the more polar hydrogenated acrylonitrile
butadiene rubber (HNBR). The polarity was found to be beneficial for
nanocomposite formation, as it assisted the intercalation of the polymer chains
within the clay galleries. This favored the development of exfoliated structures.
By finding an appropriate processing procedure, i.e. compounding in a kneader instead
of on an open mill, the mechanical performance of the nanocomposites was
significantly improved. The complexity of the nanocomposite formation in
rubber/organoclay systems was demonstrated. The observed deintercalation of the organoclay
was traced back to the vulcanization system used. It was evidenced indirectly
that during sulfur curing the primary amine clay intercalant leaves the
silicate surface and migrates into the rubber matrix. This was explained by its
participation in the sulfur-rich Zn-complexes created. Thus, by using quaternary
amine clay intercalants (as was shown for EPDM or HNBR compounds) the
deintercalation was eliminated. The organoclay intercalation/deintercalation detected
for the primary amine clay intercalants was controlled by means of peroxide curing
(as was shown for HNBR compounds), where the vulcanization mechanism
differs from that of sulfur curing.
The current analysis showed that by selecting the appropriate organoclay type the
properties of the nanocomposites can be tailored. This occurs via generating different
nanostructures (i.e. exfoliated, intercalated or deintercalated). In all cases, the
rubber/organoclay nanocomposites exhibited better performance than vulcanizates
with traditional fillers, like silica or unmodified (pristine) layered silicates. The mechanical and gas permeation behavior of the respective nanocomposites
were modelled. It was shown that models (e.g. Guth’s or Nielsen’s equations)
developed for “traditional” vulcanizates can be used when specific aspects are taken
into consideration. These involve characteristics related to the platy structure of the
silicates, i.e. their aspect ratio after compounding (appearance of platelet stacks), or
their orientation in the rubber matrix (order parameter).
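For reference, commonly cited forms of the two models mentioned above are (notation assumed here: \(E_0\) and \(P_0\) the modulus and permeability of the unfilled vulcanizate, \(\varphi\) the filler volume fraction, \(f\) the effective aspect ratio of the silicate platelets):
\[
E = E_0\left(1 + 0.67\,f\,\varphi + 1.62\,f^{2}\varphi^{2}\right)
\quad\text{(Guth)},
\qquad
\frac{P}{P_0} = \frac{1-\varphi}{1+\tfrac{f}{2}\,\varphi}
\quad\text{(Nielsen)}.
\]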
Benzene is a natural constituent of crude oil and a product of incomplete combustion of petrol
and has been classified as “carcinogenic to humans” by IARC in 1982 (IARC 1982). (E,E)-
Muconaldehyde has been postulated to be a microsomal metabolite of benzene in vitro
(Latriano et al. 1986). (E,E)-Muconaldehyde is hematotoxic in vivo and its role in the
hematotoxicity of benzene is unclear (Witz et al. 1985).
We intended to ascertain the presence of (E,E)-muconaldehyde in vivo by detection of a
protein conjugate deriving from (E,E)-muconaldehyde.
Therefore, we improved the current synthetic access to (E,E)-muconaldehyde.
(E,E)-Muconaldehyde was synthesized in three steps starting from (E,E)-muconic acid in an
overall yield of 60 %.
Reaction of (E,E)-muconaldehyde with bovine serum albumin resulted in formation of a
conjugate which was converted upon addition of NaBH4 to a new species whose HPLC-
retention time, UV spectra, Q1 mass and MS2 spectra matched those of the crude reaction
product from one pot conversion of Ac-Lys-OMe with (E,E)-muconaldehyde in the presence
of NaBH4 and subsequent cleavage of protection groups.
Synthetic access to the presumed structure (S)-2-ammonio-6-(((E,E)-6-oxohexa-2,4-dien-1-
yl)amino)hexanoate (Lys(MUC-CHO)) was provided in eleven steps starting from (E,E)-
muconic acid and Lys(Z)-OtBu*HCl in 2 % overall yield. Additionally synthetic access to
(S)-2-ammonio-6-(((E,E)-6-hydroxyhexa-2,4-dien-1-yl)amino)hexanoate (Lys(MUC-OH))
and (S)-2-ammonio-6-((6-hydroxyhexyl)amino)hexanoate (IS) was provided.
With synthetic reference material at hand, the presumed structure Lys(MUC-OH) could be
identified from incubations of (E,E)-muconaldehyde with bovine serum albumin via HPLC-ESI+-
MS/MS.
Cytotoxicity analysis of (E,E)-muconaldehyde and Lys(MUC-CHO) in human promyelocytic
NB4 cells resulted in EC50 ≈ 1 μM for (E,E)-muconaldehyde. Lys(MUC-CHO) did not show
any additional cytotoxicity up to 10 μM.
B6C3F1 mice were exposed to 0, 400 and 800 mg/kg b.w. benzene to examine the formation
of Lys(MUC-OH) in vivo. After 24 h mice were sacrificed and serum albumin was isolated.
Analysis for Lys(MUC-OH) has not been performed in this work.
Collaboration aims to increase the efficiency of problem solving and decision making by bringing diverse areas of expertise together, i.e., teams of experts from various disciplines, all necessary to come up with acceptable concepts. This dissertation is concerned with the design of highly efficient computer-supported collaborative work involving active participation of user groups with diverse expertise. Three main contributions can be highlighted: (1) the definition and design of a framework facilitating collaborative decision making; (2) the deployment and evaluation of more natural and intuitive interaction and visualization techniques in order to support multiple decision makers in virtual reality environments; and (3) the integration of novel techniques into a single proof-of-concept system.
Decision making processes are time-consuming, typically involving several iterations of different options before a generally acceptable solution is obtained. Although collaboration is an often-applied method, the execution of collaborative sessions is often inefficient, does not involve all participants, and decisions are often finalized without the agreement of all participants. An increasing number of computer-supported cooperative work (CSCW) systems facilitate collaborative work by providing shared viewpoints and tools to solve joint tasks. However, most of these software systems are designed from a feature-oriented perspective rather than a human-centered perspective, and without considering user groups with diverse experience and joint goals instead of joint tasks. The aim of this dissertation is to bring insights to the following research question: How can computer-supported cooperative work be designed to be more efficient? This question opens up more specific questions like: How can collaborative work be designed to be more efficient? How can all participants be involved in the collaboration process? And how can interaction interfaces that support collaborative work be designed to be more efficient? As such, this dissertation makes contributions in:
1. Definition and design of a framework facilitating decision making and collaborative work. Based on examinations of collaborative work and decision making processes, the requirements of a collaboration framework are collected and formulated. Subsequently, an approach to define and rate software/frameworks is introduced. This approach is used to translate the collected requirements into a software architecture design. Next, an approach to evaluate alternatives based on Multi Criteria Decision Making (MCDM) and Multi Attribute Utility Theory (MAUT) is presented (a minimal sketch of such a MAUT ranking follows after this list). Two case studies demonstrate the usability of this approach for (1) benchmarking between systems and evaluating the value of the desired collaboration framework, and (2) ranking a set of alternatives resulting from a decision-making process incorporating the points of view of multiple stakeholders.
2. Deployment and evaluation of natural and intuitive interaction and visualization techniques in order to support multiple diverse decision makers. A user taxonomy of industrial corporations serves to create a Petri net of users in order to identify dependencies and information flows among them. An explicit characterization and design of task models was developed to define interfaces and further components of the collaboration framework. In order to involve and support user groups with diverse experiences, smart devices and virtual reality are used within the presented collaboration framework. Natural and intuitive interaction techniques as well as advanced visualizations of user-centered views of the collaboratively processed data are developed in order to support and increase the efficiency of decision making processes. The smartwatch, as one of the latest smart device technologies, offers new possibilities for interaction techniques. A multi-modal interaction interface is provided, realized with smartwatch and smartphone in fully immersive environments, including touch input, in-air gestures, and speech.
3. Integration of novel techniques into a single proof-of-concept system. Finally, all findings and designed components are combined into the new collaboration framework called IN2CO, which enables distributed or co-located participants to efficiently collaborate using diverse mobile devices. In a prototypical implementation, all described components are integrated and evaluated. Examples where next-generation network-enabled collaborative environments, connected by visual and mobile interaction devices, can have significant impact are: design and simulation of automobiles and aircraft; urban planning and simulation of urban infrastructure; or the design of complex and large buildings, including efficiency- and cost-optimized manufacturing buildings as a task in factory planning. To demonstrate the functionality and usability of the framework, case studies referring to factory planning are presented. Considering that factory planning is a process that involves the interaction of multiple aspects as well as the participation of experts from different domains (i.e., mechanical engineering, electrical engineering, computer engineering, ergonomics, material science, and more), this application is suitable to demonstrate the utilization and usability of the collaboration framework. The various software modules and the integrated system resulting from the research were all subjected to evaluations. Thus, collaborative decision making for co-located and distributed participants is enhanced by the use of natural and intuitive multi-modal interaction interfaces and techniques.
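A minimal sketch of the MAUT-style ranking referred to in contribution 1 (criteria, weights, utility functions and scores below are purely illustrative assumptions, not those used for IN2CO):

def maut_score(alternative, weights, utilities):
    # Weighted additive utility: sum_i w_i * u_i(x_i).
    return sum(w * utilities[c](alternative[c]) for c, w in weights.items())

weights = {"cost": 0.4, "usability": 0.35, "flexibility": 0.25}
utilities = {
    "cost":        lambda x: 1 - x / 100.0,   # cheaper is better (0..100 scale)
    "usability":   lambda x: x / 10.0,        # rated 0..10
    "flexibility": lambda x: x / 10.0,
}
alternatives = {
    "System A": {"cost": 60, "usability": 8, "flexibility": 6},
    "System B": {"cost": 30, "usability": 6, "flexibility": 7},
}
ranking = sorted(alternatives.items(),
                 key=lambda kv: maut_score(kv[1], weights, utilities),
                 reverse=True)
print(ranking)   # alternatives ordered by aggregated utility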
Due to their superior weight-specific mechanical properties, carbon fibre epoxy composites (CFRP) are commonly used in aviation industry. However, their brittle failure behaviour limits the structural integrity and damage tolerance in case of impact (e.g. tool drop, bird strike, hail strike, ramp collision) or crash events. To ensure sufficient robustness, a minimum skin thickness is therefore prescribed for the fuselage, partially exceeding typical service load requirements from ground or flight manoeuvre load cases. A minimum skin thickness is also required for lightning strike protection purposes and to enable state-of-the-art bolted repair technology. Furthermore, the electrical conductivity of CFRP aircraft structures is insufficient for certain applications; additional metal components are necessary to provide electrical functionality (e.g. metal meshes on the outer skin for lightning strike protection, wires for electrical bonding and grounding, overbraiding of cables to provide electromagnetic shielding). The corresponding penalty weights compromise the lightweight potential that is actually given by the structural performance of CFRP over aluminium alloys.
Former research attempts tried to overcome these deficits by modifying the resin system (e.g. by addition of conductive particles or toughening agents) but could not prove sufficient enhancements. A novel holistic approach is the incorporation of highly conductive and ductile continuous metal fibres into CFRP. The basic idea of this hybrid material concept is to take advantage of both the electrical and mechanical capabilities of the integrated metal fibres in order to simultaneously improve the electrical conductivity and the damage tolerance of the composite. The increased density of the hybrid material is over-compensated by omitting the need for additional electrical system installation items and by the enhanced structural performance, enabling a reduction of the prescribed minimum skin thickness. Advantages over state-of-the-art fibre metal laminates mainly arise from design and processing technology aspects.
In this context, the present work focuses on analysing and optimising the structural and electrical performance of such hybrid composites with shares of metal fibres up to 20 vol.%. Bundles of soft-annealed austenitic steel or copper cladded low carbon steel fibres with filament diameters of 60 or 63 µm are considered. The fibre bundles are distinguished by high elongation at break (32 %) and ultimate tensile strength (900 MPa) or high electrical conductivity (2.4 × 10^7 S/m). Comprehensive researches are carried out on the fibre bundles as well as on unidirectional and multiaxial laminates. Both hybrid composites with homogeneous and accumulated steel fibre arrangement are taken into account. Electrical in-plane conductivity, plain tensile behaviour, suitability for bolted joints as well as impact and perforation performance of the composite are analysed. Additionally, a novel non-destructive testing method based on measurement of deformation-induced phase transformation of the metastable austenitic steel fibres is discussed.
The outcome of the conductivity measurements verifies a correlation of the volume conductivity of the composite with the volume share and the specific electrical resistance of the incorporated metal fibres. Compared to conventional CFRP, the electrical conductivity in parallel to the fibre orientation can be increased by one to two orders of magnitude even for minor percentages of steel fibres. The analysis, however, also discloses the challenge of establishing a sufficient connection to the hybrid composite in order to entirely exploit its electrical conductivity.
In case of plain tensile load, the performance of the hybrid composite is essentially affected by the steel fibre-resin-adhesion as well as the laminate structure. Uniaxial hybrid laminates show brittle, singular failure behaviour. Exhaustive yielding of the embedded steel fibres is confined to the arising fracture gap. The high transverse stiffness of the isotropic metal fibres additionally intensifies strain magnification within the resin under transverse tensile load. This promotes (intralaminar) inter-fibre-failure at minor composite deformation. By contrast, multiaxial hybrid laminates exhibit distinctive damage evolution. After failure initiation, the steel fibres extensively yield and sustain the load-carrying capacity of angularly (e.g. ±45°) aligned CFRP plies. The overall material response is thus not only a simple superimposition but a complex interaction of the mechanical behaviour of the composite’s constituents. As a result of this post-damage performance, an ultimate elongation of over 11 % can be proven for the hybrid laminates analysed in this work. In this context, the influence of the steel fibre-resin adhesion on the failure behaviour of the hybrid composite is explicated by means of an analytical model. Long term exposure to corrosive media has no detrimental effect on the mechanical performance of stainless steel fibre reinforced composites. By trend, water uptake increases the maximum elongation at break of the hybrid laminate.
Moreover, the suitability of CFRP for bolted joints can partially be improved by the integration of steel fibres. While the bearing strength basically remains nearly unaffected, the bypass failure behaviour (ε_{max}: +363 %) as well as the head pull-through resistance (E_{a,BPT}: +81 %) can be enhanced. The improvements primarily concern the load-carrying capacity after failure initiation. Additionally, the integrated ductile steel fibres significantly increase the energy absorption capacity of the laminate in case of progressive bearing failure by up to 63 %.
However, the hybrid composite exhibits a sensitive low velocity/low mass impact behaviour. Compared to conventional CFRP, the damage threshold load of very thin hybrid laminates is lower, making them prone for delamination at minor, non-critical impact energies. At higher energy levels, however, the impact-induced delamination spreads less since most of the impact energy is absorbed by yielding of the ductile metal fibres instead of crack propagation. This structural advantage compared to CFRP gains in importance with increasing impact energy. The plastic deformation of the metastable austenitic steel fibres is accompanied by a phase transformation from paramagnetic γ-austenite to ferromagnetic α’-martensite. This change of the magnetic behaviour can be used to detect and evaluate impacts on the surface of the hybrid composite, which provides a simple non-destructive testing method. In case of low velocity/high mass impact, integration of ductile metal fibres into CFRP enables to address spacious areas of the laminate for energy absorption purposes. As a consequence, the perforation resistance of the hybrid composite is significantly enhanced; by addition of approximately 20 vol.% of stainless steel fibres, the perforation strength can be increased by 61 %, while the maximum energy absorption capacity rises by 194 %.
Due to the steadily growing flood of data, the appropriate use of visualizations for efficient data analysis is more important today than ever before. In many application domains, the data flood is based on processes that can be represented by node-link diagrams. Within such a diagram, nodes may represent intermediate results (or products), system states (or snapshots), milestones or real (and possibly georeferenced) objects, while links (edges) can embody transition conditions, transformation processes or real physical connections. Inspired by the engineering sciences application domain and the research project “SinOptiKom: Cross-sectoral optimization of transformation processes in municipal infrastructures in rural areas”, a platform for the analysis of transformation processes has been researched and developed based on a geographic information system (GIS). Due to the increased amount of available and interesting data, a particular challenge is the simultaneous visualization of several visible attributes within one single diagram instead of using multiple ones. Therefore, two approaches have been developed, which utilize the available space between nodes in a diagram to display additional information.
Motivated by the necessity of appropriate result communication with various stakeholders, a concept for a universal, dashboard-based analysis platform has been developed. This web-based approach is conceptually capable of displaying data from various data sources and has been supplemented by collaboration possibilities such as sharing, annotating and presenting features.
In order to demonstrate the applicability and usability of newly developed applications, visualizations or user interfaces, extensive evaluations with human users are often inevitable. To reduce the complexity and the effort for conducting an evaluation, the browser-based evaluation framework (BREF) has been designed and implemented. Through its universal and flexible character, virtually any visualization or interaction running in the browser can be evaluated with BREF without any additional application (except for a modern web browser) on the target device. BREF has already proved itself in a wide range of application areas during the development and has since grown into a comprehensive evaluation tool.
A fast numerical method for an advanced electro-chemo-mechanical model is developed which is able to capture phase separation processes in porous materials. This method is applied to simulate lithium-ion battery cells, where the complex microstructure of the electrodes is fully resolved. The intercalation of ions into the popular cathode material LFP leads to a separation into lithium-rich and lithium-poor phases. The large concentration gradients result in high mechanical stresses. A phase-field method applying the Cahn-Hilliard equation is used to describe the diffusion. For the sake of simplicity, the linear elastic case is considered. Numerical tests for fully resolved three-dimensional granular microstructures are discussed in detail.
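In its basic form, the Cahn-Hilliard diffusion model used here reads
\[
\frac{\partial c}{\partial t} = \nabla\cdot\bigl(M\,\nabla\mu\bigr),
\qquad
\mu = \frac{\partial f_{\mathrm{chem}}(c)}{\partial c} - \kappa\,\Delta c,
\]
where \(c\) is the normalized lithium concentration, \(M\) a mobility, \(f_{\mathrm{chem}}\) a double-well chemical free energy density and \(\kappa\) the gradient energy coefficient; in the chemo-mechanical setting an additional elastic contribution enters the chemical potential \(\mu\).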
In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models including
discrete dividend payments and develop a model which also takes early
announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
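A minimal sketch of this put-call parity idea (with made-up numbers, not the thesis' full framework): for a single maturity, \(C(K)-P(K) = (S_0 - PV(D)) - DF\cdot K\), so a linear fit across strikes yields both the discount factor \(DF\) and the present value \(PV(D)\) of the outstanding dividends.

import numpy as np

S0 = 100.0
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
call_minus_put = np.array([19.2, 9.4, -0.4, -10.2, -20.0])   # market C - P quotes (made up)

# Fit C - P = intercept + slope * K; then DF = -slope and PV(D) = S0 - intercept.
A = np.column_stack([np.ones_like(strikes), strikes])
(intercept, slope), *_ = np.linalg.lstsq(A, call_minus_put, rcond=None)
discount_factor = -slope
pv_dividends = S0 - intercept
print(discount_factor, pv_dividends)   # here: 0.98 and 2.4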
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called
simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend-paying stock. In the case of the logarithmic utility
function, we show that the optimal strategy is no longer constant but
connected to the Merton strategy. Additionally, we solve a special optimal
consumption problem, where the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal
wealth problem.
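For reference, the classical Merton strategy for logarithmic utility (without dividends) prescribes the constant fraction
\[
\pi^{*} = \frac{\mu - r}{\sigma^{2}}
\]
of wealth to be invested in the stock; in the dividend-paying setting studied here, the optimal strategy is no longer this constant but remains connected to it.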
In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), \(q=p^n\), which is a Sylow \(p\)-subgroup of \(F_4(q)\), for the case that the prime \(p\) satisfies \(p\neq2\).
We managed to adapt their methods to the case \(p=2\) and to parametrize the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(q)\), \(q=2^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Unfortunately, the projective characters of \(F_4(q)\) we obtained were not sufficient to determine the entire decomposition matrix.
European economic, social and territorial cohesion is one of the fundamental aims of the European Union (EU). It seeks to both reduce the effects of internal borders and enhance European integration. In order to facilitate territorial cohesion, the linkage of member states by means of efficient cross-border transport infrastructures and services is an important factor. Many cross-border transport challenges have historically existed in everyday life. They have hampered smooth passenger and freight flows within the EU.
Two EU policies, namely European Territorial Cooperation (ETC) and the Trans-European Transport Networks (TEN-T), promote enhancing cross-border transport through cooperation in soft spaces. This dissertation seeks to explore the influence of these two EU policies on cross-border transport and further European integration.
Based on an analysis of European, national and cross-border policy and planning documents, surveys with TEN-T Corridor Coordinators and INTERREG Secretariats and a high number of elite interviews, the dissertation will investigate how the objectives of the two EU policies were formally implemented in both soft spaces and the EU member states as well as which practical implementations have taken place. Thereby, the initiated Europeanisation and European integration processes will be evaluated. The analysis is conducted in nine preliminary case studies and two in-depth case studies. The cases comprise cross-border regions funded by the ETC policy that are crossed by a TEN-T corridor. The in-depth analysis explores the Greater Region Saar-Lor-Lux+ and the Brandenburg-Lubuskie region. The cases are characterised by different initial situations.
The research determined that the two EU policies support cross-border transport on different levels and, further, that they need to be better intertwined in order to make effective use of their complementarities. Moreover, it became clear that the EU policies have a distinct influence on domestic policy and planning documents of different administrative levels and countries as well as on the practical implementation. The final implementation of the EU objectives and the cross-border transport initiatives was strongly influenced by the member states’ initial situations – particularly, the regional and local transport needs. This dissertation concludes that the two EU policies cannot remove the entirety of the cross-border transport-related challenges. However, in addition to their financial investments in concrete projects, they promote the importance of cross-border transport and facilitate cooperation, learning and exchange processes. These are all of high relevance to cross-border transport development, driven by member states, as well as to further European integration.
The dissertation recommends that the transport planning competences of the EU in addition to the TEN-T network should not be enlarged in the future, but rather further transnational transport development tasks should be decentralised to transnational transport planning committees that are aware of regional needs and can coordinate a joint transport development strategy. The latter should be implemented with the support of additional EU funds for secondary and tertiary cross-border connections. Moreover, the potential complementarities of the transnational regions and transport corridors as well as the two EU policy fields should be made better use of by improving communication. This means that soft spaces, the TEN-T and ETC Policy as well as the domestic transport ministries and the domestic administrations that are responsible for the two EU policies need to intensify their cooperation. Furthermore, a focus of future ETC projects on topics that are of added value for the whole cross-border region or else that can be applied in different territorial contexts is recommended rather than investing in small-scale scattered expensive infrastructures and services that are only of benefit for a small part of the region. Additionally, the dissemination of project results should be enhanced so that the developed tools can be accessed by potential users and benefits become more visible to a wider society, despite the fact that they might not be measurable in numbers. In addition, the research points at another success factor for more concrete outputs: the frequent involvement of transport and spatial planners in transnational projects could increase the relation to planning practice. Besides that, advanced training regarding planning culture could reduce cooperation barriers.
Field-effect transistor (FET) sensors and in particular their nanoscale variant of silicon nanowire transistors are very promising technology platforms for label-free biosensor applications. These devices directly detect the intrinsic electrical charge of biomolecules at the sensor’s liquid-solid interface. The maturity of micro fabrication techniques enables very large FET sensor arrays for massive multiplex detection. However, the direct detection of charged molecules in liquids faces a significant limitation due to a charge screening effect in physiological solutions, which inhibits the realization of point-of-care applications. As an alternative, impedance spectroscopy with FET devices has the potential to enable measurements in physiological samples. Even though promising studies were published in the field, impedimetric detection with silicon FET devices is not well understood.
The first goal of this thesis was to understand the device performances and to relate the effects seen in biosensing experiments to device and biomolecule types. A model approach should help to understand the capability and limitations of the impedimetric measurement method with FET biosensors. In addition, to obtain experimental results, a high precision readout device was needed. Consequently, the second goal was to build up multi-channel, highly accurate amplifier systems that would also enable future multi-parameter handheld devices.
A PSPICE FET model for potentiometric and impedimetric detection was adapted to the experiments and further expanded to investigate the sensing mechanism, the working principle, and effects of side parameters for the biosensor experiments. For potentiometric experiments, the pH sensitivity of the sensors was also included in this modelling approach. For impedimetric experiments, solutions of different conductivity were used to validate the suggested theories and assumptions. The impedance spectra showed two pronounced frequency domains: a low-pass characteristic at lower frequencies and a resonance effect at higher frequencies. The former can be interpreted as a contribution of the source and double layer capacitances. The latter can be interpreted as a combined effect of the drain capacitance with the operational amplifier in the transimpedance circuit.
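As a toy illustration only (not the thesis' PSPICE model; component values are arbitrary assumptions), the two regimes can be mimicked by combining a first-order low-pass term with a second-order resonance term:

import numpy as np

f = np.logspace(2, 7, 500)                  # 100 Hz .. 10 MHz
f_c, f_0, Q = 1e4, 1e6, 5.0                 # cutoff, resonance frequency, quality factor
lowpass = 1.0 / (1.0 + 1j * f / f_c)        # low-frequency roll-off
resonance = 1.0 / (1.0 - (f / f_0) ** 2 + 1j * f / (Q * f_0))   # high-frequency peak
magnitude_db = 20 * np.log10(np.abs(lowpass * resonance))
print(magnitude_db[::100])                  # coarse printout instead of a plot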
Two readout systems, one as a laboratory system and one as a point-of-care demonstrator, were developed and used for several chemical and biosensing experiments. The PSPICE model of the sensors and circuits was utilized to optimize the systems and to explain the sensor responses. The systems as well as the developed modelling approach were a significant step towards portable instruments with combined transducer principles in future healthcare applications.
The research problem is that the land-use (re-)planning process in existing Egyptian cities
does not attain sustainability. This is because essential principles are not fulfilled within
their land-use structures, there is a lack of harmony between the added and old parts of the
cities, and for other reasons. This leads to the need to develop an assessment system, namely a
computational spatial planning support system (SPSS). This SPSS is used for identifying the
degree of sustainability attainment in land-use plans, predicting probable problems, and
suggesting modifications to the evaluated plans.
The main goal is to design the SPSS for supporting sustainability in the Egyptian cities. The
secondary goals are: studying the Egyptian planning and administrative systems for designing
the technical and administrative frameworks for the SPSS, the development of an assessment
model from the SPSS for assessing sustainability in land-use structures of urban areas, as well
as the identification of the improvements required in the model and the recommendations for
developing the SPSS.
The theoretical part aims to design each of the administrative and technical frameworks of the
SPSS. This requires studying each of the main planning approaches, the sustainability in urban
land-use planning, and the significance of using efficient assessment tools for evaluating the
sustainability in this process. The added value of the planning support systems-PSSs for
planning and their role in supporting sustainability attainment in urban land-use planning are
discussed. Then, a group of previous examples of sustainability assessment from various
countries (developed and developing) that have used various assessment tools is selected,
in order to extract lessons learned that serve as guides for the SPSS. On this basis,
the comprehensive technical framework for the SPSS is designed, which includes the suggested
methods and techniques that perform the various stages of the assessment process.
The Egyptian context is studied regarding the planning and administration systems within
Egyptian cities, as well as the spatial and administrative problems facing sustainable
development. On this basis, the administrative framework for the SPSS is identified, which includes
the entities that should be involved in the assessment process.
The empirical part focuses on the design of a selected assessment model from the
comprehensive technical framework of the SPSS, established as a minimized version of
it. This model is programmed in the form of a new toolbox within the ArcGIS™ software through
geoscripting using the Python programming language, to be applied for assessing the sustainability
attainment in the land-use structure of urban areas. The assessment criteria required for the model
are identified separately for Egyptian and German cities, so that it can be applied to German and
Egyptian study areas.
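A minimal skeleton of such an ArcGIS Python toolbox (.pyt); the tool name, parameter and assessment logic shown here are placeholders, not the actual toolbox developed in the thesis:

import arcpy

class Toolbox(object):
    def __init__(self):
        self.label = "Land-Use Sustainability Assessment"
        self.alias = "sustainability"
        self.tools = [AssessLandUse]

class AssessLandUse(object):
    def __init__(self):
        self.label = "Assess land-use structure"
        self.description = "Rates a land-use layer against sustainability criteria."

    def getParameterInfo(self):
        land_use = arcpy.Parameter(
            displayName="Land-use polygons", name="land_use",
            datatype="GPFeatureLayer", parameterType="Required", direction="Input")
        return [land_use]

    def execute(self, parameters, messages):
        layer = parameters[0].valueAsText
        count = int(arcpy.management.GetCount(layer).getOutput(0))
        messages.addMessage("Assessing {} land-use polygons ...".format(count))
        # ... compute criterion scores and write the assessment results here ...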
Conclusions are drawn regarding PSSs, the Egyptian local administration and planning
systems, sustainability attainment in the land-use planning process in Egyptian cities, as well as
the proposed SPSS and the developed toolbox. Recommendations are given regarding
the challenges facing the development and application of PSSs, the Egyptian local
administration and planning systems, the spatial problems in Egyptian cities, the establishment
of the SPSS, and the application of the toolbox. The future research agenda lies in the fields of
sustainable urban land-use planning, planning support science, and the development process in
Egyptian cities.
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible regarding the choice of
good bandwidths, and, hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for
kernel density estimates. Cross-validation is a popular one and frequently proposed
to estimate the optimal bandwidth. However, if the sample size is very
large, it becomes computationally expensive. In material science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is a solve-the-equation plug-in approach
which involves replacing the unknown quantities in the asymptotically optimal
bandwidth formula by their estimates.
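A minimal one-dimensional sketch of this plug-in idea (the thesis develops a refined two- and three-dimensional version with an exactly determined number of iterations; the grid-based estimate of the curvature functional below is only illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
n = x.size
RK, mu2 = 1 / (2 * np.sqrt(np.pi)), 1.0          # Gaussian kernel constants R(K), mu_2(K)

def kde(grid, data, h):
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

grid = np.linspace(x.min() - 1, x.max() + 1, 1024)
h = 1.06 * x.std() * n ** (-1 / 5)               # normal-reference start value
for _ in range(5):                               # a few plug-in iterations
    f_hat = kde(grid, x, h)
    f2 = np.gradient(np.gradient(f_hat, grid), grid)     # finite-difference f''
    Rf2 = np.sum(f2**2) * (grid[1] - grid[0])            # estimate of R(f'')
    h = (RK / (mu2**2 * Rf2 * n)) ** (1 / 5)             # plug into the AMISE formula
print(h)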
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidth for density and intensity estimation with a focus on 2- and 3-dimensional data. It is based on a detailed
asymptotics of the estimators of the intensity function and of its second
derivatives and integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are utilised to determine
the exact number of iteration steps and some tuning parameters. For
both the global and the local case, fewer than 10 iterations suffice. Simulation studies
show that the intensity estimated with local bandwidths reflects the variation
of the local intensity better than the estimate obtained with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
In this thesis, we focus on the application of the Heath-Platen (HP) estimator in option
pricing. In particular, we extend the approach of the HP estimator for pricing path dependent
options under the Heston model. The theoretical background of the estimator
was first introduced by Heath and Platen [32]. The HP estimator was originally interpreted
as a control variate technique and an application for European vanilla options was
presented in [32]. For European vanilla options, the HP estimator provided a considerable
amount of variance reduction. Thus, applying the technique for path dependent options
under the Heston model is the main contribution of this thesis.
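For orientation, a generic control-variate Monte Carlo estimator (not the HP construction itself) can be sketched as follows, pricing a European call under Black-Scholes and using the discounted terminal stock price, whose expectation S0 is known, as the control:

import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.3, 1.0, 100_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
control = np.exp(-r * T) * ST                      # E[control] = S0

cov = np.cov(payoff, control)
beta = cov[0, 1] / cov[1, 1]                       # optimal control-variate coefficient
cv_estimate = payoff.mean() - beta * (control.mean() - S0)
print(payoff.mean(), cv_estimate)                  # crude MC vs. control-variate estimate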
The first part of the thesis deals with the implementation of the HP estimator for pricing
one-sided knockout barrier options. The main difficulty for the implementation of the HP
estimator is located in the determination of the first hitting time of the barrier. To test the
efficiency of the HP estimator we conduct numerical tests with regard to various aspects.
We provide a comparison among the crude Monte Carlo estimation, the crude control
variate technique and the HP estimator for all types of barrier options. Furthermore, we
present the numerical results for at-the-money, in-the-money and out-of-the-money barrier
options. As the numerical results imply, the HP estimator performs best among these methods
for pricing one-sided knockout barrier options under the Heston model.
Another contribution of this thesis is the application of the HP estimator in pricing bond
options under the Cox-Ingersoll-Ross (CIR) model and the Fong-Vasicek (FV) model. As
suggested in the original paper of Heath and Platen [32], the HP estimator has a wide
range of applicability for derivative pricing. Therefore, transferring the structure of the
HP estimator for pricing bond options is a promising contribution. As the approximating
Vasicek process does not seem to be as good as the deterministic volatility process in the
Heston setting, the performance of the HP estimator in the CIR model is only relatively
good. However, for the FV model the variance reduction provided by the HP estimator is
again considerable.
Finally, the numerical result concerning the weak convergence rate of the HP estimator
for pricing European vanilla options in the Heston model is presented. As supported by
numerical analysis, the HP estimator has weak convergence of order almost 1.
The screening of metagenomic datasets led to the identification of new phage-derived members of the heme oxygenase and the ferredoxin-dependent bilin reductase enzyme families.
The novel bilin biosynthesis genes were shown to form mini-cassettes on metagenomic scaffolds and further form distinct clusters in phylogenetic analyses (Ledermann et al., 2016). In this project, it was demonstrated that the discovered sequences actually encode for active enzymes. The biochemical characterization of a member of the heme oxygenases (ΦHemO) revealed that it possesses a regiospecificity for the α-methine bridge in the cleavage of the heme macrocycle. The reaction product biliverdin IXα was shown to function as the substrate for the novel ferredoxin-dependent bilin reductases (PcyX reductases), which catalyze its reduction to PEB via the intermediate 15,16-DHBV. While it was demonstrated that ΦPcyX, a phage-derived member of the PcyX reductases, is an active enzyme, it also became clear that the rate of the reaction is highly dependent on the employed redox partner. It turned out that the ferredoxin from the cyanophage P-SSM2 is to date the most suitable redox partner for the reductases of the PcyX group. Furthermore, the solution of the ΦPcyX crystal structure revealed that it adopts an α/β/α-sandwich fold, typical for the FDBR-family. Activity assays and subsequent HPLC analyses with different variants of the ΦPcyX protein demonstrated that, despite their similarity, PcyX and PcyA reductases must act via different reaction mechanisms.
Another part of this project focused on the biochemical characterization of the FDBR KflaHY2 from the streptophyte alga Klebsormidium flaccidum. Experiments with recombinant KflaHY2 showed that it is an active FDBR which produces 3(Z)-PCB as the main reaction product, as is also found for reductases of the PcyA group. Moreover, it was shown that under the employed assay conditions the reaction of BV to PCB proceeds in two different ways: Both 3(Z)-PΦB and 18¹,18²-DHBV occur as intermediates. Activity assays with the purified intermediates yielded PCB. Hence, both compounds are suitable substrates for KflaHY2.
The results of this work highlight the importance of the biochemical experiments, as catalytic activity cannot solely be predicted by sequence analysis.
1,3-Diynes are frequently found as an important structural motif in natural products, pharmaceuticals and bioactive compounds, electronic and optical materials and supramolecular molecules. Copper and palladium complexes are widely used to prepare 1,3-diynes by homocoupling of terminal alkynes, whereas the potential of nickel complexes for the same reaction is essentially unexplored. Although a detailed study on the reported nickel-acetylene chemistry has not been carried out, a generalized mechanism featuring a nickel(II)/nickel(0) catalytic cycle has been proposed. In the present work, the mechanistic aspects of the nickel-mediated homocoupling reaction of terminal alkynes are investigated in detail through the isolation and/or characterization of key intermediates from both the stoichiometric and the catalytic reactions. A nickel(II) complex [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) containing the tetradentate N,N′-dimethyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4Me2) as ligand was used as catalyst for the homocoupling of terminal alkynes, employing oxygen as oxidant at room temperature. A series of dinuclear nickel(I) complexes bridged by a 1,3-diyne ligand have been isolated from the stoichiometric reaction between [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) and lithium acetylides. The dinuclear nickel(I)-diyne complexes [{Ni(L-N4Me2)}2(RC4R)](ClO4)2 (2) were well characterized by X-ray crystal structures, various spectroscopic methods, SQUID measurements and DFT calculations. The complexes not only represent a key intermediate in the aforesaid catalytic reaction, but also constitute the first structurally characterized dinuclear nickel(I)-diyne complexes. In addition, radical trapping and low-temperature UV-Vis-NIR experiments on the formation of the dinuclear nickel(I)-diyne confirm that the reactions occurring during the reduction of nickel(II) to nickel(I) and the C-C bond formation of the 1,3-diyne follow a non-radical, concerted mechanism. Furthermore, spectroscopic investigation of the reactivity of the dinuclear nickel(I)-diyne complex towards molecular oxygen confirmed the formation of a mononuclear nickel(I)-diyne species [Ni(L-N4Me2)(RC4R)]+ (4) and a mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5), which were converted to free 1,3-diyne and an unstable dinuclear nickel(II) species [{Ni(L-N4Me2)}2(O2)]2+ (6). A mononuclear nickel(I)-alkyne complex [Ni(L-N4Me2)(PhC2Ph)](ClO4).MeOH (3) and the mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5) were isolated/generated and characterized to confirm the formulation of the aforementioned mononuclear nickel(I)-diyne and mononuclear nickel(III)-peroxo species. Spectroscopic experiments on the catalytic reaction mixture also confirm the presence of the aforesaid intermediates. The results of both the stoichiometric and the catalytic reactions suggest an intriguing mechanism involving nickel(II)/nickel(I)/nickel(III) oxidation states, in contrast to the reported nickel(II)/nickel(0) catalytic cycle. These findings are expected to open a new paradigm towards nickel-catalyzed organic transformations.
Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem to be solved is a crowd condition estimation approach to enable and support the supervision of mass events by first-responders and marketing experts, but it is also targeted towards supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insights into the crowd condition are desired for quick reactions, and historic crowd condition measurements are desired for profound post-event crowd condition analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation by relying on 2.4 GHz signals and their variation in crowds of people, proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches. Firstly, stationary sensors sensing crowd-centric signal sources. Secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group or collaboratively) within a small region. Fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors which were sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which corresponds to an average of below 6% mean relative error with respect to the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate that these methods are viable even for detecting low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in the proximity, with a classification accuracy of the crowd density of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, using differential features that take into account information about the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I present the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
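A minimal sketch of the supervised learning step underlying these estimates (features, data and model choice are illustrative assumptions, not the exact pipeline of the thesis):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_windows = 500
device_count = rng.poisson(30, n_windows)             # discovered devices per time window
rssi_mean = rng.normal(-70, 5, n_windows)              # mean received signal strength
rssi_std = rng.gamma(2.0, 2.0, n_windows)              # signal-strength variation
X = np.column_stack([device_count, rssi_mean, rssi_std])
y = 0.01 * device_count + rng.normal(0, 0.02, n_windows)   # synthetic density label (people/m^2)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print(np.mean(np.abs(pred - y[400:])))                 # mean absolute error on held-out windows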
Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional and locally stationary case, and secondly we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality zero mean); hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
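For orientation, a Whittle-type approximation replaces the exact Gaussian likelihood by an integral functional of the (here time- and space-varying) spectral density. Up to normalization, and with notation chosen here for illustration rather than taken from the thesis, the locally stationary version has the schematic form
\[
\mathcal{L}_{T}(\theta) \;=\; \frac{1}{(2\pi)^{d}} \int_{[0,1]^{d}} \int_{[-\pi,\pi]^{d}} \left\{ \log f_{\theta}(u,\lambda) + \frac{I_{T}(u,\lambda)}{f_{\theta}(u,\lambda)} \right\} \, d\lambda \, du,
\qquad
\widehat{\theta}_{T} \;=\; \operatorname*{arg\,min}_{\theta} \, \mathcal{L}_{T}(\theta),
\]
where \(f_{\theta}(u,\lambda)\) denotes the parametric local spectral density at rescaled location \(u\) and frequency \(\lambda\), and \(I_{T}(u,\lambda)\) a localized periodogram.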
We investigate the asymptotic properties of the Whittle estimator, in particular the uniform convergence of the likelihoods as well as consistency and asymptotic Gaussianity of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove that in the case of model misspecification the minimizer of our Whittle likelihood still converges, where the limit is the minimizer of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through computational simulations, the estimation of conditional autoregressive models, and a real-data application.
Embedded reactive systems underpin various safety-critical applications in which they interact with other systems and the environment with limited or even no human supervision. Design errors that violate essential system specifications can therefore lead to severe and unacceptable damage. For this reason, formal verification of such systems in their physical environment is of high interest. Synchronous programs are typically used to represent embedded reactive systems, while hybrid systems serve to model discrete reactive systems in a continuous environment. As such, both synchronous programs and hybrid systems play important roles in the model-based design of embedded reactive systems. This thesis develops induction-based techniques for the safety property verification of synchronous and hybrid programs. The imperative synchronous language Quartz and its hybrid systems extensions are used to substantiate the findings.
Deductive techniques for software verification typically use Hoare calculus. In this context, Verification Condition Generation (VCG) is used to apply Hoare calculus rules to a program whose statements are annotated with pre- and postconditions so that the validity of an obtained Verification Condition (VC) implies correctness of a given proof goal. Due to the abstraction of macro steps, Hoare calculus cannot directly generate VCs of synchronous programs unless it handles additional label variables or goto statements. As a first contribution, Floyd’s induction-based approach is employed to generate VCs for synchronous and hybrid programs. Five VCG methods are introduced that use inductive assertions to decompose the overall proof goal. Given the right assertions, the procedure can automatically generate a set of VCs that can then be checked by SMT solvers or automated theorem provers. The methods are proved sound and relatively complete, provided that the underlying assertion language is expressive enough. They can be applied to any program with a state-based semantics.
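As an illustration of how such inductive-assertion verification conditions can be discharged by an SMT solver, the following sketch uses the Z3 Python bindings to check the VCs of a trivial counting loop. The program, the assertion and the encoding are invented for illustration and are not the Quartz-based encoding used in the thesis.

from z3 import Int, IntVal, Implies, And, Not, prove   # pip install z3-solver

x, n = Int("x"), Int("n")
pre  = n >= 0                      # assumed precondition
post = x == n                      # postcondition after the loop

def inv(xv):                       # inductive assertion at the loop head
    return And(xv >= 0, xv <= n)

# Program under proof: x := 0; while (x < n) { x := x + 1; }
prove(Implies(pre, inv(IntVal(0))))                 # VC1: assertion holds on entry
prove(Implies(And(inv(x), x < n), inv(x + 1)))      # VC2: assertion is inductive
prove(Implies(And(inv(x), Not(x < n)), post))       # VC3: exit implies postcondition

Each call prints "proved", mirroring the decomposition of the overall proof goal into independently checkable VCs.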
Property Directed Reachability (PDR) is an efficient method for synchronous hardware circuit verification that is based on induction rather than fixpoint computation. Crucial steps of the PDR method consist of deciding the reachability of Counterexamples to Induction (CTIs) and generalizing them to clauses that cover as many unreachable states as possible. The thesis demonstrates that PDR becomes more efficient for imperative synchronous programs when the distinction between control flow and dataflow is exploited. Before calling the PDR method, additional program control-flow information can be derived and added to the transition relation so that fewer CTIs are generated. Two methods to compute this additional control-flow information are presented; they differ in how precisely they approximate the reachable control-flow states and, consequently, in their required runtime. After calling the PDR method, the CTI identification work is reduced to its control-flow part and to checking whether the obtained control-flow states are unreachable in the corresponding extended finite state machine of the program. If so, all states of the transition system that refer to the same program locations can be excluded, which significantly increases the performance of PDR.
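The control-flow strengthening step before the PDR call can be illustrated by the following sketch: the extended finite state machine of a (hypothetical) program is abstracted to its control-flow graph by dropping all data guards, the reachable program locations are computed by a simple fixpoint search, and the resulting set is conjoined to the transition relation as a constraint on the program counter. The EFSM encoding and all names are assumptions made for illustration.

from collections import deque

# Hypothetical EFSM: program locations with guarded transitions.
efsm_transitions = {                       # location -> [(data_guard, successor), ...]
    "l0": [("true",   "l1")],
    "l1": [("x < N",  "l2"), ("x >= N", "l3")],
    "l2": [("true",   "l1")],
    "l3": [],
    "l4": [("true",   "l4")],              # dead code: never reachable from l0
}

def reachable_locations(transitions, init="l0"):
    # Breadth-first search over the control-flow graph with all data guards ignored,
    # i.e. an over-approximation of the reachable control-flow states.
    seen, queue = {init}, deque([init])
    while queue:
        for _guard, succ in transitions.get(queue.popleft(), []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

reach = reachable_locations(efsm_transitions)        # {'l0', 'l1', 'l2', 'l3'}
# Conceptually, the constraint "pc in reach" is added to the transition relation
# handed to PDR, so CTIs whose control-flow part lies outside this set (e.g. pc = l4)
# can no longer be produced.
print(" | ".join(f"(pc = {loc})" for loc in sorted(reach)))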
Grape powdery mildew, Erysiphe necator, is one of the most significant plant pathogens and affects grape growing regions world-wide. Because of its short generation time and the production of large amounts of conidia throughout the season, E. necator is classified as a moderate- to high-risk pathogen with respect to the development of fungicide resistance. The number of fungicidal modes of action available to control powdery mildew is limited, and for some of them resistances are already known. Aryl-phenyl-ketones (APKs), represented by metrafenone and pyriofenone, and succinate-dehydrogenase inhibitors (SDHIs), comprising numerous active ingredients, are two important fungicide classes used for the control of E. necator. Over the period 2014 to 2016, the emergence and development of metrafenone- and SDHI-resistant E. necator isolates in Europe were followed and evaluated. The distribution of resistant isolates was strongly dependent on the European region: whereas the north-western part is still predominantly sensitive, samples from east European countries showed higher resistance frequencies.
Classical sensitivity tests with obligate biotrophs can be challenging regarding sampling, transport and especially the maintenance of the living strains. Whenever possible, molecular genetic methods are preferred for a more efficient monitoring. Such methods require knowledge of the resistance mechanisms. The exact molecular target and the resistance mechanism of metrafenone are still unknown. Whole genome sequencing of metrafenone-sensitive and -resistant wheat powdery mildew isolates, as well as of adapted laboratory mutants of Aspergillus nidulans, was performed with the aim of identifying proteins potentially linked to the mode of action or contributing to metrafenone resistance. Based on comparative SNP analysis, four proteins potentially associated with metrafenone resistance were identified, but validation studies could not confirm their role in metrafenone resistance. In contrast to APKs, the mode of action of SDHIs is well understood. Sequencing of the sdh-genes of less sensitive E. necator isolates identified four different target-site mutations, B-H242R, B-I244V, C-G169D and C-G169S, in sdhB and sdhC, respectively. Based on this information it was possible to develop molecular genetic monitoring methods for the mutations B-H242R and C-G169D. In 2016, B-H242R was thereby identified as by far the most frequent mutation. Depending on the analysed SDH compound and the sdh-genotype, different sensitivities were observed, revealing a complex cross-resistance pattern.
Growth competition assays without selection pressure, with mixtures of sensitive and resistant E. necator isolates, were performed to determine potential fitness costs associated with fungicide resistance. With the experimental setups used, a clear fitness disadvantage associated with metrafenone resistance was not identified, although strong variability in fitness was observed among the tested resistant E. necator isolates. For isolates with reduced sensitivity towards SDHIs, the associated fitness costs depended on the sdh-genotype analysed. Competition tests with the B-H242R genotypes gave evidence that there are no fitness costs associated with this mutation. In contrast, the C-G169D genotypes were less competitive, indicating restricted fitness compared to the tested sensitive partners. Competition assays of field isolates exhibiting several resistances towards different fungicide classes indicated that there are no fitness costs associated with a multiple-resistant phenotype in E. necator. Overall, these results clearly indicate the importance of analysing a representative number of isolates with sensitive and resistant phenotypes.
Epoxy belongs to a category of high-performance thermosetting polymers which have been used extensively in industrial and consumer applications. Highly cross-linked epoxy polymers offer excellent mechanical properties, adhesion, and chemical resistance. However, unmodified epoxies are prone to brittle fracture and crack propagation due to their highly crosslinked structure. As a result, epoxies are normally toughened to ensure the usability of these materials in practical applications.
This research work focuses on the development of novel modified epoxy matrices with enhanced mechanical, fracture-mechanical and thermal properties, suitable for processing by filament winding technology, in order to manufacture composite-based calender roller covers with improved performance in comparison to commercially available products.
In the first stage, a neat epoxy resin (EP) was modified using three different high-functionality epoxy resins with two types of hardeners, i.e. amine-based (H1) and anhydride-based (H2). A series of hybrid epoxy resins was obtained by systematic variation of the high-functionality epoxy resin content relative to the reference epoxy system. The resulting matrices were characterized by their tensile properties, and the best system was chosen for each hardener, i.e. amine and anhydride. For the tailored amine-based system (MEP_H1), a 14 % improvement was measured for bulk samples; similarly, for the tailored anhydride-based system (MEP_H2), an 11 % improvement was measured when tested at 23 °C.
Further, the tailored epoxy systems (MEP_H1 and MEP_H2) were modified using a specially designed block copolymer (BCP) and core-shell rubber (CSR) nanoparticles. A series of nanocomposites was obtained by systematic variation of the filler contents. The resulting matrices were extensively characterized, qualitatively and quantitatively, to reveal the effect of each filler on the polymer properties. It was shown that the BCP confers better fracture properties to the epoxy resin at low filler loading without compromising the other mechanical properties. These characteristics were accompanied by ductility and temperature stability. All composites were tested at 23 °C and at 80 °C to understand the effect of temperature on the mechanical and fracture properties.
Examinations of fractured specimen surfaces provided information about the mechanisms responsible for the reinforcement. Nanoparticles generate several energy-dissipating mechanisms in the epoxy, e.g. plastic deformation of the matrix, cavitation, void growth, debonding and crack pinning. These were closely related to the microstructure of the materials, whose characteristics were verified by microscopy methods (SEM and AFM). The microstructure of the neat epoxy-hardener system was strongly influenced by the nanoparticles and the resulting interfacial interactions. The interaction of the nanoparticles with a different hardener system results in a different morphology, which ultimately influences the mechanical and fracture-mechanical properties of the nanocomposites. Hybrid toughening using combinations of block copolymer / core-shell rubber nanoparticles and block copolymer / TiO2 nanoparticles was investigated in the epoxy systems. It was found that the addition of a rigid phase together with a soft phase recovers the loss of strength in the nanocomposites caused by the softer phase.
In order to clarify the relevant relationships, the microstructural and mechanical properties were correlated. The Counto, Halpin-Tsai and Lewis-Nielsen equations were used to calculate the modulus of the composites, and the predicted moduli fit the measured values well. Modeling was also carried out to predict the toughening contribution of the block copolymers and core-shell rubber nanoparticles; there was good agreement between the predicted and experimental values of the fracture energy.
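To make the micromechanical prediction step tangible, the following minimal sketch evaluates the Halpin-Tsai estimate of the composite modulus, one of the three models named above; the input values are placeholders for illustration and are not data from this work.

def halpin_tsai_modulus(e_matrix, e_filler, phi, zeta=2.0):
    # Halpin-Tsai estimate of the composite Young's modulus.
    # e_matrix, e_filler: moduli of matrix and filler (same unit, e.g. GPa)
    # phi:  filler volume fraction (0..1)
    # zeta: shape factor (roughly 2 for near-spherical particles)
    eta = (e_filler / e_matrix - 1.0) / (e_filler / e_matrix + zeta)
    return e_matrix * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Placeholder values: a 3 GPa epoxy matrix with 5 vol% of a 70 GPa rigid filler.
print(round(halpin_tsai_modulus(e_matrix=3.0, e_filler=70.0, phi=0.05), 2))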
Computational simulations run on large supercomputers must balance their outputs with the needs of the scientist and the capabilities of the machine. Persistent storage is typically expensive and slow, and its performance grows at a slower rate than the processing power of the machine. This forces scientists to be pragmatic about the size and frequency of the simulation outputs that can later be analyzed to understand the simulation states. Flexibility in the trade-offs between the size and accessibility of the simulation outputs is critical to the success of scientists using supercomputers to understand their science. In situ transformations of the simulation state before it is persistently stored are the focus of this dissertation.
The extreme size and parallelism of simulations can cause challenges for visualization and data analysis. This is coupled with the need to accept pre-partitioned data into the analysis algorithms, which is not always well supported by existing software infrastructures. The work in this dissertation focuses on improving current workflows and software to accept data as it is and to efficiently produce smaller, more information-rich data for persistent storage that is easily consumed by end-user scientists. I attack this problem on both a theoretical and a practical basis, by transforming completely raw data into information-dense visualizations and by studying methods for managing both the creation and the persistence of data products from large-scale simulations.
In this thesis we address two instances of duality in commutative algebra.

In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n that are closed under minima, have a conductor and fulfill special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. In relation to the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
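For orientation, the first two of the three conditions mentioned above can be written explicitly (in notation assumed here for illustration; the third compatibility condition is more technical and omitted):
\[
\alpha, \beta \in S \;\Longrightarrow\; \alpha \wedge \beta := \bigl(\min(\alpha_1,\beta_1), \dots, \min(\alpha_n,\beta_n)\bigr) \in S,
\]
\[
\exists\, \gamma \in S \ \text{ such that } \ \gamma + \mathbb{N}^n \subseteq S \qquad \text{(existence of a conductor).}
\]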
In the second part, we treat Macaulay's inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with a chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive-dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive-dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.
The present situation of control engineering in the context of automated production can be described as a tension between its desired outcome and the consideration it actually receives. On the one hand, the share of control engineering compared to the other engineering domains has increased significantly within the last decades due to the rising automation degree of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. A further limiting factor is the lack of methods and tools to decrease the software engineering effort and to permit the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation by developing a new control engineering methodology. The foundation is built by concepts from computer science that promote structuring and abstraction mechanisms for software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is to develop an engineering methodology that improves the efficiency of control engineering through a higher adaptability of the control software and through decreased programming effort by reuse.