Doctoral Thesis
The aim of this dissertation is to explain processes in recruitment by gaining a better understanding of how perceptions evolve and how recruitment outcomes and perceptions are influenced. To do so, this dissertation takes a closer look at the formation of fit perceptions, at the effects of top employer awards on pre-hire recruitment outcomes, and at how perceptions of external sources are influenced.
Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information
(2006)
The present work "Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information" deals in a broad sense with the subject of dark states and in particular with the so-called dark-state polaritons introduced by M. Fleischhauer and M. D. Lukin. The dark-state polaritons can be regarded as a combined excitation of electromagnetic fields and spin/matter-waves. Within the framework of this thesis the special optical properties of this combined excitation are studied. On the one hand a new procedure to spatially manipulate and to increase the excitation density of stored photons is described, and on the other hand these properties are used to construct a new type of hybrid Sagnac interferometer. The thesis is divided into four parts. In the introduction all notions necessary to understand the work are described, e.g. electromagnetically induced transparency (EIT), dark-state polaritons and the Sagnac effect. The second chapter considers the method developed by A. Andre and M. D. Lukin to create stationary light pulses in specially dressed EIT media. In a first step a set of field equations is derived and simplified by introducing a new set of normal modes. The absorption of one of the normal modes leads to the phenomenon of pulse matching for the other mode and thereby to a diffusive spreading of its field envelope. All these considerations are based on a homogeneous field setup of the EIT preparation laser. If this restriction is dropped, one finds that a drift motion is superimposed on the diffusive spreading. By choosing a special laser configuration the drift motion can be tailored such that an effective force is created that counteracts the spreading. Moreover, the force can not only be strong enough to compensate the diffusive spreading but also to exceed this dynamics and hence to compress the field envelope of the excitation. The compression can be described using a Fokker-Planck equation of the Ornstein-Uhlenbeck type. The investigations show that the compression leads to an excitation of higher-order modes which decay very fast. In the last section of the chapter this excitation will be discussed in more detail and conditions will be given under which the excitation of higher-order modes can be avoided or at least suppressed. All results given in the chapter are supported by numerical simulations. In the third chapter the matter-wave optical properties of the dark-state polaritons will be studied. They will be used to construct a light-matter-wave hybrid Sagnac interferometer. First the basic setup of such an interferometer will be sketched and the relevant equations of motion of light-matter interaction in a rotating frame will be derived. These form the basis of the following considerations of the dark-state polariton dynamics with and without the influence of external trapping potentials on the matter-wave part of the polariton. It will be shown that a sensitivity enhancement compared to a passive laser gyroscope can be anticipated if the gaseous medium is initially in a superfluid quantum state in a ring-trap configuration. To achieve this enhancement a simultaneous coherence and momentum transfer is furthermore necessary. In the last part of the chapter the quantum sensitivity limit of the hybrid interferometer is derived using the one-particle density matrix equations incorporating the motion of the particles.
To this end the Maxwell-Bloch equations are considered perturbatively in the rotation rate of the noninertial frame of reference and the susceptibility of the considered 3-level \(\Lambda\)-type system is derived to arbitrary order in the probe field. This is done to determine the optimum operation point. With its help the anticipated quantum sensitivity of the light-matter-wave hybrid Sagnac interferometer is calculated at the shot-noise limit and the results are compared to state-of-the-art laser and matter-wave Sagnac interferometers. The last chapter of the thesis originates from a joint theoretical and experimental project with the AG Bergmann. This chapter no longer considers the dark-state polaritons of the previous two chapters but deals with the more general concept of dark states and in particular with the transient velocity-selective dark states as introduced by E. Arimondo et al. In the experiment we could measure these states for the first time. The chapter starts with an introduction to the concept of velocity-selective dark states as they occur in a \(\Lambda\)-configuration. Then we introduce the transient velocity-selective dark states as they occur in a particular extension of the \(\Lambda\)-system. For later use in the simulations the relevant equations of motion are derived in detail. The simulations are based on the solution of the generalized optical Bloch equations. Finally the experimental setup and procedure are explained and the theoretical and experimental results are compared.
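For orientation, the scaling behind the anticipated sensitivity enhancement can be sketched with the standard Sagnac phase shifts for light and for a matter wave of mass \(m\) enclosing an area \(A\) in a frame rotating at rate \(\Omega\) (a textbook relation stated here for context, not taken from the thesis; prefactors of order unity depend on conventions):
\[
\Delta\phi_{\mathrm{light}} \simeq \frac{4 A \Omega\,\omega}{c^{2}},\qquad
\Delta\phi_{\mathrm{matter}} \simeq \frac{2 m A \Omega}{\hbar},\qquad
\frac{\Delta\phi_{\mathrm{matter}}}{\Delta\phi_{\mathrm{light}}} \sim \frac{m c^{2}}{\hbar\omega},
\]
which indicates why a polariton whose matter-wave component carries the atomic rest mass can, in principle, inherit a large enhancement over a purely optical gyroscope.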
A series of (oligo)phenothiazine-, thiazolium salt- and sulfonic acid-functionalized organic/inorganic hybrid materials were synthesized. The organic groups were covalently bound to the inorganic surface through reactions of organosilane precursors with TEOS or with the silanol groups of the material surface. These synthetic methods are called the co-condensation process and post-grafting. The structures and the textural parameters of the resulting hybrid materials were characterized by XRD, N2 adsorption-desorption measurements, SEM and TEM. The incorporation of the organic groups was verified by elemental analysis, thermogravimetric analysis, FT-IR, UV-Vis, EPR, CV, as well as by 13C CP-MAS NMR and 29Si CP-MAS NMR spectroscopy. The introduction of various organic groups endows these hybrid materials with different physical and chemical properties. The (oligo)phenothiazines provide a group of novel redox-active hybrid materials with special electronic and optical properties. The thiazolium salt-modified materials were applied as heterogenized organocatalysts for the benzoin condensation and the cross-coupling of aldehydes with acylimines to yield α-amido ketones. The sulfonic acid-containing materials can not only be used as Brønsted acid catalysts, but can also serve as ion-exchangeable supports for further modifications and applications.
Nanoparticle-Filled Thermoplastics and Thermoplastic Elastomer: Structure-Property Relationships
(2012)
The present work focuses on the structure-property relationships of
particulate-filled thermoplastics and thermoplastic elastomer (TPE). In this work
two thermoplastics and one TPE were used as polymer matrices, i.e. amorphous
bisphenol-A polycarbonate (PC), semi-crystalline isotactic polypropylene (iPP),
and a block copolymer poly(butylene terephthalate)-block-poly(tetramethylene
glycol) TPE(PBT-PTMG). For PC, a selected grade from various Aerosil® nano-SiO2
types was used as filler to improve the thermal and mechanical properties while
maintaining the transparency of the PC matrix. Different types of SiO2 and TiO2
nanoparticles with different surface polarity were used for iPP. The goal was to
examine the influence of surface polarity and chemical nature of nanoparticles on
the thermal, mechanical and morphological properties of iPP composites. For
TPE(PBT-PTMG), three TiO2 particle grades were used, i.e. one grade with hydroxyl
groups on the particle surface and two grades surface-modified with
metal and metal oxides, respectively. The influence of primary particle size and dispersion
quality of the TiO2 particles on the properties of the TPE(PBT-PTMG)/TiO2 composites
was determined and discussed.
All polymer composites were produced by direct melt blending in a twin-screw
extruder via a masterbatch technique. The dispersion of the particles was examined by
using scanning electron microscopy (SEM) and micro-computed tomography
(μCT). The thermal and crystalline properties of the polymer composites were characterized by using thermogravimetric analysis (TGA) and differential
scanning calorimetry (DSC). The mechanical and thermomechanical properties
were determined by using mechanical tensile testing, compact tension and
Charpy impact as well as dynamic-mechanical thermal analysis (DMTA).
The SEM results show that nanoparticles with nonpolar, modified surfaces are better
dispersed in polymer matrices such as iPP than nanoparticles with polar surfaces, especially
in the case of Aeroxide® TiO2 nanoparticles. The Aeroxide® TiO2 nanoparticles
with a polar surface due to Ti-OH groups result in a very high degree of
agglomeration in both iPP and TPE matrices because of strong van der Waals
interactions among particles (hydrogen bonding). Compared to unmodified
Aeroxide® TiO2 nanoparticles, the other grades of surface modified TiO2 particles
are very homogeneously dispersed in the iPP and TPE(PBT-PTMG) used here. The
incorporation of SiO2 nanoparticles into bisphenol-A PC significantly improves
the mechanical properties of PC/SiO2 nanocomposites, particularly the resistance
against environmental stress cracking (ESC). However, the transparency of
PC/SiO2 nanocomposites decreases with increasing nanoparticle content and
size due to a mismatch of the refractive indices of PC and the SiO2 particles. The different
surface polarity of the nanoparticles in iPP has an evident influence on the properties of
iPP composites. Among iPP/SiO2 nanocomposites, the nanocomposite
containing SiO2 nanoparticles with a higher degree of hydrophobicity shows
improved fracture and impact toughness compared to the other iPP/SiO2
composites. The TPE(PBT-PTMG)/TiO2 composites show much better thermal and mechanical properties than neat TPE(PBT-PTMG) due to strong chemical
interactions between polymer matrix and TiO2 particles. In addition, better
dispersion quality of the TiO2 particles in TPE(PBT-PTMG) leads to dramatically
improved mechanical properties of TPE(PBT-PTMG)/TiO2 composites.
Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, they still occupy only a niche within the discipline, using ad hoc systems and data analysis methods. Most of these studies were not followed up by further elaborative work. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition applications, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies have shown that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of body-print; a pressure-sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swivel office chairs with TPM sensors under the cover can be used to detect the sitter's real-time posture; TPM can be used to detect emotion-related touch interactions for smart objects, toys or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; they can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations have resulted in accuracies well above the chance level for the corresponding number of classes, e.g., the 'swivel chair' study has a classification accuracy of 79.5% across 10 posture classes, and in the 'soccer shoe' study the accuracy is 98.8% among 17 combinations of angle and intensity.
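As a quick sanity check of these numbers, the chance level of a \(C\)-class problem is simply \(1/C\):
\[
\tfrac{1}{10} = 10\% \ll 79.5\%, \qquad \tfrac{1}{17} \approx 5.9\% \ll 98.8\%.
\]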
Whole-body vibrations (WBV) have adverse effects on ride comfort and human health. Suspension seats have an important influence on the WBV severity. In this study, WBV were measured on a medium-sized compact wheel loader (CWL) in its typical operations. The effect of short-term exposure to the WBV on the ride comfort was evaluated according to ISO 2631-1:1985 and ISO 2631-1:1997. ISO 2631-1:1997 and ISO 2631-5:2004 were adopted to evaluate the effect of long-term exposure to the WBV on human health. Reasons for the different evaluation results obtained according to ISO 2631-1:1997 and ISO 2631-5:2004 were explained in this study. The WBV measurements were carried out in cases where the driver wore a lap belt or a four-point seat harness and in the case where the driver did not wear any safety belt. The seat effective amplitude transmissibility (SEAT) and the seat transmissibility in the frequency domain in these three cases were analyzed to investigate the effect of a safety belt on the seat transmissibility. Seat tests were performed on a multi-axis shaking table in the laboratory to study the dynamic behavior of a suspension seat under the vibration excitations measured on the CWL. The WBV intensity was reduced by optimizing the vertical and the longitudinal seat suspension systems with the help of computational simulations. For the optimization, multi-body models of the seat-dummy system used in the laboratory seat tests and of the seat-driver system present in the field vibration measurements were built and validated.
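For reference, the comfort-related quantities used in such evaluations follow the familiar ISO 2631-1 definitions, sketched here rather than quoted from the thesis: the frequency-weighted r.m.s. acceleration over an exposure of duration \(T\) and the SEAT value comparing the weighted acceleration on the seat with that at the seat base,
\[
a_{w} = \left(\frac{1}{T}\int_{0}^{T} a_{w}^{2}(t)\,\mathrm{d}t\right)^{1/2},
\qquad
\mathrm{SEAT} = \frac{a_{w,\mathrm{seat}}}{a_{w,\mathrm{base}}},
\]
so SEAT values below one indicate that the suspension seat attenuates the vibration transmitted to the driver.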
The main purpose of the study was to improve the physical properties of the modelling of compressed materials, especially fibrous materials. Fibrous materials are finding increasing application in industry, and most of these materials are compressed for their different applications. In such situations, we are interested in how the fibres are arranged, e.g. according to which distribution. For given materials it is possible to obtain a three-dimensional image via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the fibre directions at given points, can be determined from the image by other methods, it is beneficial to improve the physical properties by changing these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for the estimation of the parameters of a parametric distribution on the unit sphere, which is as versatile as some well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, and provides a better fit for some models. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation Maximization setting. Furthermore we introduce a non-parametric estimation scheme within the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
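For orientation, the simplest member of this family of models is a mixture of von Mises-Fisher components on \(\mathbb{S}^{d-1}\) (shown here as a standard illustration; the thesis works with a more general parametric distribution):
\[
f(x) = \sum_{k=1}^{K}\pi_{k}\,c_{d}(\kappa_{k})\,e^{\kappa_{k}\mu_{k}^{\top}x},
\qquad
\gamma_{ik} = \frac{\pi_{k}\,c_{d}(\kappa_{k})\,e^{\kappa_{k}\mu_{k}^{\top}x_{i}}}{\sum_{j}\pi_{j}\,c_{d}(\kappa_{j})\,e^{\kappa_{j}\mu_{j}^{\top}x_{i}}},
\]
where \(c_{d}(\kappa)\) is the normalizing constant and \(\gamma_{ik}\) are the E-step responsibilities; the M-step updates \(\mu_{k}\) as the normalized weighted resultant \(\sum_{i}\gamma_{ik}x_{i}\) and obtains \(\kappa_{k}\) numerically from its length, a template analogous to the numerical approximations derived in the EM setting of the thesis.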
In recent years, nanofiller-reinforced polymer composites have attracted considerable
interest from numerous researchers, since they can offer unique mechanical,
electrical, optical and thermal properties compared to the conventional polymer
composites filled with micron-sized particles or short fibers. With this background, the
main objective of the present work was to investigate the various mechanical
properties of polymer matrices filled with different inorganic rigid nanofillers, including
SiO2, TiO2, Al2O3 and multi-walled carbon nanotubes (MWNT). Further, special
attention was paid to the fracture behaviours of the polymer nanocomposites. The
polymer matrices used in this work contained two types of epoxy resin (cycloaliphatic
and bisphenol-F) and two types of thermoplastic polymer (polyamide 66 and isotactic
polypropylene).
The epoxy-based nanocomposites (filled with nano-SiO2) were formed in situ by a
special sol-gel technique supplied by nanoresins AG. Excellent nanoparticle
dispersion was achieved even at rather high particle loading. The almost
homogeneously distributed nanoparticles can improve the elastic modulus and
fracture toughness (characterized by KIC and GIC) simultaneously. According to
dynamic mechanical and thermal analysis (DMTA), the nanosilica particles in epoxy
resins possessed considerable "effective volume fraction" in comparison with their
actual volume fraction, due to the presence of the interphase. Moreover, AFM and
high-resolution SEM observations also suggested that the nanosilica particles were
coated with a polymer layer and therefore a core-shell structure of particle-matrix was
expected. Furthermore, based on SEM fractography, several toughening
mechanisms were considered to be responsible for the improvement in toughness,
which included crack deflection, crack pinning/bowing and plastic deformation of
matrix induced by nanoparticles.
The PA66- or iPP-based nanocomposites were fabricated by a conventional melt-extrusion
technique. Here, the nanofiller content was kept constant at 1 vol.%. Relatively good particle dispersion was found, though some small aggregates still
existed. The elastic modulus of both PA66 and iPP was moderately improved after
incorporation of the nanofillers. The fracture behaviours of these materials were
characterized by the essential work of fracture (EWF) approach. In the case of the PA66
system, the EWF experiments were carried out over a broad temperature range
(23-120 °C). It was found that the EWF parameters exhibited a strong temperature
dependence. At most testing temperatures, a small amount of nanoparticles could
produce obvious toughening effects at the cost of a reduction in plastic deformation of
the matrix. In light of SEM fractographs and crack opening displacement (COD) analysis, the
crack blunting induced by the nanoparticles might be the major source of this toughening.
The fracture behaviours of PP filled with MWNTs were investigated over a broad
temperature range (-196 to 80 °C) in terms of notched impact resistance. It was found
that MWNTs could enhance the notched impact resistance of the PP matrix significantly
once the testing temperature was higher than the glass transition temperature (Tg) of
neat PP. In this temperature range, the longer the MWNTs, the better was
the impact resistance. SEM observation revealed three failure modes of the nanotubes:
nanotube bridging, debonding/pullout and fracture. All of them would contribute to
impact toughness to a degree. Moreover, the nanotube fracture was considered as
the major failure mode. In addition, the smaller spherulites induced by the nanotubes
would also benefit toughness.
Nowadays, accounting, charging and billing of users' network resource consumption are commonly used for the purposes of facilitating reasonable network usage, controlling congestion, allocating cost, gaining revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are some situations in which IP addresses cannot be used to identify users uniquely, for example, in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of these hosts instead of the corresponding real users who have consumed the network resources. Therefore, accurate accountability in these systems is practically impossible. This is a flaw of the traditional IP address based IP traffic accounting technique. This dissertation proposes a user based IP traffic accounting model which can facilitate collecting network resource usage information on the basis of users. With user based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. In this dissertation, three different schemes, which can achieve the user based IP traffic accounting mechanism, are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it. Then it identifies the users of these IP packets and inserts user information into the IP packets. With this mechanism, a meter located at a key position of the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from the IP packets to generate accounting records with user information. The out-of-band scheme is a contrasting scheme to the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic. However, the user information is transferred through a separate channel, independent of the corresponding IP packets' transmission. The Multi-IP scheme provides a different solution for identifying users of IP traffic. It assigns each user in a measured host a unique IP address. Through that, an IP address can be used to identify a user uniquely without ambiguity. This way, traditional IP address based accounting techniques can be applied to achieve the goal of user based IP traffic accounting. In this dissertation, a user based IP traffic accounting prototype system developed according to the out-of-band scheme is also introduced. The application of the user based IP traffic accounting model in a distributed computing environment is also discussed.
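To make the out-of-band scheme concrete, the following minimal Python sketch illustrates the idea; all names such as lookup_user, COLLECTOR and the record format are hypothetical and are not taken from the prototype described in the thesis:

import json
import socket
from collections import defaultdict

COLLECTOR = ("192.0.2.10", 9999)   # hypothetical address of the accounting collector

def lookup_user(src_ip: str, src_port: int) -> str:
    """Hypothetical helper: map a local socket to the owning user, e.g. by
    consulting the operating system's socket and process tables."""
    return "alice"  # placeholder for illustration only

usage = defaultdict(int)  # (user, destination IP) -> transferred bytes

def on_packet(src_ip: str, src_port: int, dst_ip: str, length: int) -> None:
    # Accounting Agent on the measured host: identify the user of the
    # intercepted packet and aggregate the consumed traffic volume per user.
    user = lookup_user(src_ip, src_port)
    usage[(user, dst_ip)] += length

def report_out_of_band() -> None:
    # Out-of-band idea: the user information travels over its own channel
    # to the meter/collector; the measured IP packets remain unmodified.
    records = [{"user": u, "dst": d, "bytes": b} for (u, d), b in usage.items()]
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(records).encode(), COLLECTOR)

# Example: two packets of one user towards the same destination.
on_packet("10.0.0.5", 50432, "198.51.100.7", 1500)
on_packet("10.0.0.5", 50433, "198.51.100.7", 400)
report_out_of_band()

The in-band scheme would instead write the user tag into the packets themselves (e.g. into the IP header), so that a meter can read it directly from the intercepted traffic.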
This thesis is devoted to stochastic optimization problems in various situations, approached with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to the basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing the expected utility of real terminal wealth in the presence of an index bond. Chapter 4, which is a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it presents a way to deal with the optimization problem in the case where there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the income from returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete. A trick for completing an incomplete market is presented there. The general theory that supports the subsequent discussion is summarized in the first chapter.
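The backbone of the martingale method used throughout can be stated in its standard complete-market form (given here for orientation; the individual chapters adapt it to inflation, endowments, labor income and incompleteness): with pricing kernel \(H_T\), initial wealth \(x_0\) and utility \(U\), the terminal-wealth problem
\[
\max_{X_T}\ \mathbb{E}\bigl[U(X_T)\bigr] \quad\text{subject to}\quad \mathbb{E}\bigl[H_T X_T\bigr] = x_0
\]
is solved by \(X_T^{*} = I(\lambda H_T)\) with \(I=(U')^{-1}\) and the Lagrange multiplier \(\lambda\) fixed by the budget constraint; for logarithmic utility this yields \(X_T^{*} = x_0/H_T\), and the corresponding trading strategy is recovered by martingale representation.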
Automata theory has given rise to a variety of automata models that consist
of a finite-state control and an infinite-state storage mechanism. The aim
of this work is to provide insights into how the structure of the storage
mechanism influences the expressiveness and the analyzability of the
resulting model. To this end, it presents generalizations of results about
individual storage mechanisms to larger classes. These generalizations
characterize those storage mechanisms for which the given result remains
true and those for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching
framework that accommodates each of the concrete storage mechanisms we wish
to address. Such a framework is provided by the model of valence automata,
in which the storage mechanism is represented by a monoid. Since the monoid
serves as a parameter specifying the storage mechanism, our aim
translates into the question: For which monoids does the given
(automata-theoretic) result hold?
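For readers unfamiliar with the model, the acceptance condition of valence automata can be sketched as follows (standard definition, paraphrased): each transition carries, besides an input word, an element of the monoid \(M\), and a run is accepting if it ends in a final state and the product of its monoid elements is the identity,
\[
q_0 \xrightarrow{\,w_1,\,m_1\,} q_1 \xrightarrow{\,w_2,\,m_2\,} \cdots \xrightarrow{\,w_n,\,m_n\,} q_n \in F,
\qquad m_1 m_2 \cdots m_n = 1.
\]
Choosing \(M=(\mathbb{Z},+,0)\), for example, yields a blind one-counter automaton, while the graph monoids introduced below realize pushdowns, further counter types and Turing tapes.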
As a first result, we present an algebraic characterization of those monoids
over which valence automata accept only regular languages. In addition, it
turns out that for each monoid, this is the case if and only if valence
grammars, an analogous grammar model, can generate only context-free
languages.
Furthermore, we are concerned with closure properties: We study which
monoids result in a Boolean closed language class. For every language class
that is closed under rational transductions (in particular, those induced by
valence automata), we show: If the class is Boolean closed and contains any
non-regular language, then it already includes the whole arithmetical
hierarchy.
This work also introduces the class of graph monoids, which are defined by
finite graphs. By choosing appropriate graphs, one can realize a number of
prominent storage mechanisms, but also combinations and variants thereof.
Examples are pushdowns, counters, and Turing tapes. We can therefore relate
the structure of the graphs to computational properties of the resulting
storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness
problem, (ii) which storage mechanisms guarantee semilinear Parikh images,
(iii) when silent transitions (i.e. those that read no input) can be
avoided, and (iv) which storage mechanisms permit the computation of
downward closures.
Continuum Mechanical Modeling of Dry Granular Systems: From Dilute Flow to Solid-Like Behavior
(2014)
In this thesis, we develop a granular hydrodynamic model which covers the three principal regimes observed in granular systems, i.e. the dilute flow, the dense flow and the solid-like regime. We start from a kinetic model valid at low density and extend its validity to the granular solid-like behavior. Analytical and numerical results show that this model reproduces many complex phenomena, for instance slow viscoplastic motion, critical states and the pressure dip in sand piles. Finally we formulate a 1D version of the full model and develop a numerical method to solve it. We present two numerical examples, a filling simulation and the flow on an inclined plane, in which all three regimes appear.
Today, information systems are often distributed to achieve high availability and low latency.
These systems can be realized by building on a highly available database to manage the distribution of data.
However, it is well known that high availability and low latency are not compatible with strong consistency guarantees.
For application developers, the lack of strong consistency on the database layer can make it difficult to reason about their programs and ensure that applications work as intended.
We address this problem from the perspective of formal verification.
We present a specification technique that allows specifying functional properties of the application.
In addition to data invariants, we support history properties.
These let us express relations between events, including invocations of the application API and operations on the database.
To address the verification problem, we have developed a proof technique that handles concurrency using invariants and thereby reduces the problem to sequential verification.
The underlying system semantics, technique and its soundness proof are all formalized in the interactive theorem prover Isabelle/HOL.
Additionally, we have developed a tool named Repliss which uses the proof technique to enable partially automated verification and testing of applications.
For verification, Repliss generates verification conditions via symbolic execution and then uses an SMT solver to discharge them.
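To illustrate only the final step, discharging a generated verification condition with an SMT solver, the following generic Python/Z3 sketch checks a toy condition by asserting its negation; this is the usual SMT idiom, not Repliss's actual encoding or toolchain:

from z3 import Ints, And, Implies, Not, Solver, unsat

x, y = Ints("x y")
vc = Implies(And(x >= 0, y >= 0), x + y >= 0)   # a toy verification condition

s = Solver()
s.add(Not(vc))   # the condition is valid iff its negation is unsatisfiable
if s.check() == unsat:
    print("verification condition discharged")
else:
    print("counterexample:", s.model())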
Fucoidan is a class of biopolymers mainly found in brown seaweeds. Due to its diverse medical importance, a homogeneous supply as well as a GMP-compliant product are of special interest. Therefore, in addition to optimization of its extraction and purification from classical resources, other techniques were explored (e.g., marine tissue culture and heterologous expression of enzymes involved in its biosynthesis). Results showed that 17.5% (w/w) crude fucoidan was obtained after pre-treatment and extraction from the brown macroalga F. vesiculosus. Purification by affinity chromatography improved purity relative to the commercial purified product. Furthermore, biological investigations revealed improved anti-coagulant and anti-viral activities compared with crude fucoidan. In addition, callus-like and protoplast cultures as well as bioreactor cultivation were developed from F. vesiculosus, representing a new horizon for producing fucoidan biotechnologically. Moreover, heterologous expression in E. coli of several enzymes involved in fucoidan biosynthesis (e.g., FucTs and STs) demonstrated the possibility of obtaining active enzymes that could be utilized in enzymatic in vitro synthesis of fucoidan. All these competitive techniques could help meet the global demand for fucoidan.
The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major
interest from both academic and industrial stakeholders.
Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions
and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed.
End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the vertical service requirements’ heterogeneity (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains, while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests
such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users’ mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex
spatio-temporal relationships connecting end-users’ traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects.
To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while
making the most efficient use of the networking resources composing the mobile infrastructure, is the key problem underlying the network slicing paradigm that will be addressed in this thesis.
On the Extended Finite Element Method for the Elasto-Plastic Deformation of Heterogeneous Materials
(2015)
This thesis is concerned with the extended finite element method (XFEM) for deformation analysis of three-dimensional heterogeneous materials. Using the "enhanced abs enrichment", the XFEM is able to reproduce kinks in the displacements and thereby jumps in the strains within elements of the underlying tetrahedral finite element mesh. A complex model for the microstructure reconstruction of the aluminum matrix composite AMC225xe and the modeling of its macroscopic thermo-mechanical plastic deformation behavior is presented, using the XFEM. Additionally, a novel stabilization algorithm is introduced for the XFEM. This algorithm requires only a preprocessing step.
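One common form of such a kink enrichment for weak discontinuities reads as follows (a hedged illustration of the general idea; the thesis' "enhanced abs enrichment" may differ in its details), with \(\varphi_k\) the nodal level-set values of the material interface:
\[
u_h(x) = \sum_{i} N_i(x)\,u_i + \sum_{j\in J} N_j(x)\,\psi(x)\,a_j,
\qquad
\psi(x) = \sum_{k} N_k(x)\,\lvert\varphi_k\rvert - \Bigl\lvert\sum_{k} N_k(x)\,\varphi_k\Bigr\rvert,
\]
a function that is continuous, exhibits a kink along the interface and vanishes in elements not cut by it, so that the strains may jump inside enriched tetrahedra while the displacements remain continuous.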
With the technological advancement in the field of robotics, it is now quite realistic to expect social robots to become part of humans' daily lives in the coming decades. Concerning HRI, the basic expectation of a social robot is that it perceives words, emotions, and behaviours in order to draw conclusions and adapt its behaviour so as to realize natural HRI. Hence, assessment of human personality traits is essential to create a sense of appeal and acceptance towards the robot during interaction.
Knowledge of human personality is highly relevant as far as natural and efficient HRI is concerned. The idea is taken from human behaviourism, with humans behaving differently based on the personality traits of their communication partners. This thesis contributes to the development of a personality trait assessment system for intelligent human-robot interaction.
The personality trait assessment system is organized in three separate levels. The first level, known as the perceptual level, is responsible for enabling the robot to perceive, recognize and understand human actions in the surrounding environment in order to make sense of the situation. Using psychological concepts and theories, several percepts have been extracted. A study has been conducted to validate the significance of these percepts for personality traits.
The second level, known as the affective level, helps the robot to connect the knowledge acquired at the first level to make higher-order evaluations such as the assessment of human personality traits. The affective system of the robot is responsible for analysing human personality traits. To the best of our knowledge, this thesis is the first work in the field of human-robot interaction that presents an automatic assessment of human personality traits in real time using visual information. Drawing on psychology and cognitive studies, many theories have been examined. Two theories have been used to build the personality trait assessment system: the Big Five personality traits assessment and the temperament framework for personality traits assessment.
By using the information from the perceptual and affective levels, the last level, known as the behavioural level, enables the robot to synthesize an appropriate behaviour adapted to human personality traits. Multiple experiments have been conducted with different scenarios. It has been shown that the robot, ROBIN, assesses personality traits correctly during interaction and uses the similarity-attraction principle to behave with a similar personality type. For example, if the person is found to be an extrovert, the robot also behaves like an extrovert. However, it also uses the complementary-attraction theory to adapt its behaviour and complement the personality of the interaction partner. For example, if the person is found to be self-centred, the robot behaves agreeably in order to let the human-robot interaction flourish.
This thesis focuses on novel methods to establish the utility of wearable devices, together with machine learning and pattern recognition methods, for formal education, and addresses the open research questions posed by existing methods. Firstly, state-of-the-art methods are proposed to analyse the cognitive activities in the learning process, i.e., reading, writing, and their correlation. Furthermore, this thesis presents real-time applications in the wearable space as an experimental tool in Physics education, and an air-writing system.
There are two critical components in analysing reading behaviour, i.e., WHERE a person looks (gaze analysis) and WHAT a person looks at (content analysis). This thesis proposes novel methods to classify the reading content in order to address the WHAT component. The proposed methods are based on a hybrid approach, which fuses traditional computer vision methods with deep neural networks. These methods, when evaluated on publicly available datasets, yield state-of-the-art results for defining the structure of document images. Moreover, extensive efforts were made to refine and correct the ICDAR2017-POD dataset, along with a completely new FFD dataset.
Traditionally, handwriting research focuses on character and number recognition without looking into the type of writing, i.e. text, math, and drawing. This thesis reports multiple contributions for on-line handwriting classification. First, it presents a public dataset for on-line handwriting classification, OnTabWriter, collected using an iPen and an iPad. In addition, a new feature set is introduced for on-line handwriting classification to establish a benchmark on the proposed dataset for classifying handwriting as plain text, mathematical expression, or plot/graph. An ablation study is performed to evaluate the performance of the proposed feature set in comparison to existing feature sets. Lastly, this thesis evaluates the importance of context for on-line handwriting classification.
Analysing reading and writing activities individually is not enough to provide insights for identifying a student's expertise unless their correlations are analysed. This thesis presents a study in which reading data from wearable eye-trackers and writing data from a sensor pen are analysed together in order to correlate the expertise of the users in Physics education with their actual knowledge. Initial results show a strong correlation between an individual's expertise and their understanding of the subject.
Augmented and virtual reality applications can play a vital role in making classroom environments more interactive and engaging both for teachers and learners. To validate this hypothesis, different applications are developed and evaluated. First, smart glasses are used as an experimental tool in Physics education to help the learners perform experiments, providing assistance and feedback on a head-mounted display for understanding acoustics concepts. Second, FAirWrite, a real-time application for air-writing with the finger on an imaginary canvas using a single IMU, is presented. The FAirWrite system is further equipped with DL methods to classify the air-written characters.
Recent studies on the environmental performance of additive manufacturing (AM) have shown that AM exhibits both complex potentials and challenges at different life stages compared to conventional manufacturing. To assess and ensure the environmental benefits of AM during the design phase, an eco-design approach is required. Existing eco-design for AM approaches described in the literature mainly focus on the use of lifecycle assessment (LCA) to analyze the environmental impacts of AM-specific design solutions. However, since LCA requires a full-process chain model and detailed inventory data, it can only be performed after the design process or in a subsequent design stage. To integrate evaluation activities into the middle stage of the design process, energy performance assessment can be used as an alternative evaluation tool in eco-design for AM. However, the literature still lacks an eco-design for AM method based on energy performance quantification and assessment. By addressing this research problem, this dissertation contributes to the development of a holistic framework to implement eco-design for AM using energy performance assessment. This framework consists of the following three parts: a simulation tool for energy prediction in the design phase; an energy performance assessment model for AM; and a method for carrying out activities in eco-design for AM. To demonstrate the feasibility of the proposed method, three use cases are performed. Based on these use cases, it is concluded that with the use of the proposed method, AM designers will be able to select and develop optimal design solutions based on the energy performance of AM in the middle design stage.
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems, which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints the system-wide problem can be decomposed into individual subproblems. These resulting subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate function of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of different benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
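The decomposition both algorithms build on can be summarized in standard form (notation chosen here for illustration): for subsystems \(i=1,\dots,N\) coupled only through shared resource constraints,
\[
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i)\ \ \text{s.t.}\ \ \sum_{i=1}^{N} A_i x_i \le b,
\qquad
d(\lambda) = -\lambda^{\top} b + \sum_{i=1}^{N}\min_{x_i}\bigl(f_i(x_i) + \lambda^{\top} A_i x_i\bigr),
\]
so evaluating the dual function \(d\) decomposes into independent subproblems. Plain dual ascent updates \(\lambda^{k+1} = [\lambda^{k} + \alpha_k\, g(\lambda^{k})]_{+}\) with the subgradient \(g(\lambda)=\sum_i A_i x_i^{*}(\lambda) - b\); QADA and QNDA instead maximize a quadratic surrogate of \(d\), fitted by regression or by quasi-Newton updates respectively and combined with cutting planes, to choose the next dual iterate.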
This work introduces a promising concept for the preparation of new nano-sized receptors. Mixed-monolayer-protected gold nanoparticles (AuNPs) acting as receptors for low-molecular-weight compounds were prepared, featuring functional groups on their surfaces. It has been shown that these AuNPs can engage in interactions with peptides in aqueous media. Quantitative binding information was obtained from DOSY-NMR titrations, indicating that nanoparticles containing a combination of three orthogonal functional groups are more efficient in binding to dipeptides than mono- or difunctionalised analogues. The strategy is highly modular and easily allows adapting the receptor selectivity to a
given substrate by varying the type, number, and ratio of binding sites on the nanoparticle
surface.
The safety of embedded systems is becoming more and more important nowadays. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of the systems. The Component Fault Tree (CFT) provides an advanced modeling concept for adapting the traditional FTs to the hierarchical architecture model in system design. Minimal Cut Set (MCS) analysis is a method for qualitative analysis based on FTs. Each MCS represents a minimal combination of component failures of a system, called basic events, which may together cause the top-level system failure. The ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically represented in summary views, e.g., data tables and histograms. There is little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved using an iterative process, called the safety improvement process, based on FTs and taking relevant constraints, e.g., cost, into account. Typically, relevant data regarding the safety improvement process are presented across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively facilitate these analyses.
We propose a set of visualization approaches for addressing the issues mentioned above in order to facilitate those analyses in terms of their representations.
Contributions:
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data of the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs for effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach facilitates identifying the vulnerable components, taking the hierarchies of the system architecture into account, and investigating the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the process. The approach facilitates constructing and analyzing solutions for improving the safety of a system.
Using our visualization approaches, the MCS analysis, the importance analysis, and the safety improvement process based on the CFT can be facilitated.
The noise issue in manufacturing systems is widely discussed from legal and health perspectives. Based on the existing laws and guidelines, various investigation methods are implemented in industry. The sound pressure level can be measured and reduced using established approaches in practice. However, a straightforward and low-cost approach for studying the noise issue using existing digital factory models is still missing.
This thesis attempts to develop a novel concept for sound pressure level investigation in a virtual environment. With this, factory planners are able to investigate the noise issue during the factory design and layout planning phase.
Two computer-aided tools are used in this approach: acoustic simulation and virtual reality (VR). The former enables the planner to simulate the sound pressure level for a given factory layout and facility sound features, and the latter provides a visualization environment to view and explore the simulation results. The combination of these two powerful tools offers the planners a new possibility to analyze the noise in a factory.
To validate the simulations, acoustic measurements are carried out in a real factory. The sound pressure level and the sound intensity are determined, respectively. Furthermore, a software tool is implemented using the introduced concept and approach. With this software, the simulation results are represented in a Cave Automatic Virtual Environment (CAVE).
This thesis describes the development of the approach, the measurement of sound features, the design of the visualization framework, and the implementation of the VR software. Based on this know-how, industrial users are able to design their own methods and software for noise investigation and analysis.
The broad engineering applications of polymers and composites have become the
state of the art due to their numerous advantages over metals and alloys, such as
light weight, easy processing and manufacturing, as well as acceptable mechanical
properties. However, a general deficiency of thermoplastics is their relatively poor
creep resistance, impairing service durability and safety, which is a significant barrier
to extending their potential applications. In recent years, polymer nanocomposites have
attracted increasing attention as a novel field in materials science. There are still many
scientific questions concerning these materials leading to the optimal property
combinations. The major task of the current work is to study the improved creep
resistance of thermoplastics filled with various nanoparticles and multi-walled carbon
nanotubes.
A systematic study of three different nanocomposite systems by means of
experimental observation as well as modeling and prediction was carried out. In the first
part, a nanoparticle/PA system was prepared to undergo creep tests under different
stress levels (20, 30, 40 MPa) at various temperatures (23, 50, 80 °C). The aim was
to understand the effect of different nanoparticles on creep performance. 1 vol. % of
300 nm and 21 nm TiO2 nanoparticles and nanoclay was considered. Surface
modified 21 nm TiO2 particles were also investigated. Static tensile tests were
conducted at those temperatures accordingly. It was found that creep resistance was
significantly enhanced to different degrees by the nanoparticles, without sacrificing
static tensile properties. Creep was characterized by isochronous stress-strain curves,
creep rate, and creep compliance under different temperatures and stress levels.
Orientational hardening, as well as thermally and stress-activated processes, were
briefly introduced to further the understanding of the creep mechanisms of these
nanocomposites. The second material system was PP filled with 1 vol. % of 300 nm and 21 nm TiO2
nanoparticles, which was used to obtain more information about the effect of particle
size on creep behavior based on another matrix material with a much lower Tg. It was
found that especially the small nanoparticles could significantly improve the creep resistance.
Additionally, creep lifetime under high stress levels was noticeably extended by
smaller nanoparticles. The improvement in creep resistance was attributed to a very
dense network formed by the small particles that effectively restricted the mobility of
polymer chains. Changes in the spherulite morphology and crystallinity in specimens
before and after creep tests confirmed this explanation.
In the third material system, the objective was to explore the creep behavior of PP
reinforced with multi-walled carbon nanotubes. Short and long aspect ratio nanotubes
with 1 vol. % were used. It was found that nanotubes markedly improved the creep
resistance of the matrix, with reduced creep deformation and rate. In addition, the
creep lifetime of the composites was dramatically extended, by 1,000 %, at elevated
temperatures. This enhancement was attributed to efficient load transfer between
the carbon nanotubes and the surrounding polymer chains.
Finally, a modeling analysis and prediction of long-term creep behaviors presented a
comprehensive understanding of creep in the materials studied here. Both the
Burgers model and Findley power law were applied to satisfactorily simulate the
experimental data. The parameter analysis based on the Burgers model provided an
explanation of the structure-to-property relationships. Due to their intrinsic differences, the
power law was more capable of predicting long-term behaviors than the Burgers model.
The time-temperature-stress superposition principle was adopted to predict long-term
creep performance based on the short-term experimental data, to make it possible to
forecast the future performance of materials.
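For reference, the two creep laws used in the modelling part take their usual forms (standard expressions, restated here rather than copied from the thesis): the four-parameter Burgers model and the Findley power law,
\[
\varepsilon(t) = \frac{\sigma_0}{E_M} + \frac{\sigma_0}{\eta_M}\,t
+ \frac{\sigma_0}{E_K}\Bigl(1 - e^{-tE_K/\eta_K}\Bigr),
\qquad
\varepsilon(t) = \varepsilon_0 + A\,t^{n},
\]
where the Maxwell elements \((E_M,\eta_M)\) describe the instantaneous and steady-flow response, the Kelvin element \((E_K,\eta_K)\) the retarded elasticity, and \(\varepsilon_0\), \(A\) and \(n\) are fitted constants.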
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problem of parameter estimation and
portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we specifically aim at maximizing the utility of
terminal wealth. We focus on the logarithmic and power utility functions. We consider expert opinions as an additional observation, besides the stock returns, to improve the estimation of the drift and volatility parameters at different times and for the purpose of asset optimization.
In the first part, we assume that the drift term has a fixed distribution, and
the volatility term is constant. We use the Kalman filter to combine the two
types of observations. Moreover, we discuss how to transform this problem
into a non-linear problem with Gaussian noise when the expert opinion is uniformly distributed. The generalized Kalman filter is used to estimate the parameters in this problem.
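Schematically, combining the two observation types in a linear-Gaussian setting leads to the familiar Kalman update (generic notation chosen here, not the thesis' exact model): stacking the return and the expert opinion at time \(k\) into \(y_k = H\mu_k + \varepsilon_k\) with \(\varepsilon_k\sim\mathcal{N}(0,R)\), the drift estimate is updated via
\[
K_k = P_{k|k-1}H^{\top}\bigl(HP_{k|k-1}H^{\top}+R\bigr)^{-1},\quad
\hat{\mu}_{k|k} = \hat{\mu}_{k|k-1} + K_k\bigl(y_k - H\hat{\mu}_{k|k-1}\bigr),\quad
P_{k|k} = (I-K_kH)\,P_{k|k-1},
\]
so a precise expert opinion (small corresponding entries of \(R\)) pulls the estimate strongly towards the expert's view, while a noisy opinion is largely ignored.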
In the second part, we assume that drift and volatility of asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate various values required by the EM algorithm. In addition,
we focus on different ways to combine the two observations, expert opinions and asset returns. First, we use the linear combination method. At the same time, we discuss how to use a logistic regression model to quantify expert
opinions. Second, we consider that expert opinions follow a mixed Dirichlet distribution. Under this assumption, we use another probability measure to
estimate the unnormalized filters, needed for the EM algorithm.
In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how we can obtain approximate optimal portfolio
strategies in different observation settings. We obtain the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different
observation settings in a simulation study.
Elastomers and their various composites and blends are frequently used as engineering parts subjected to rolling friction. This fact already substantiates the importance of a study addressing the rolling tribological properties of elastomers and their compounds. It is worth noting that until now the research and development work on the friction and wear of rubber materials was mostly focused on abrasion and to a lesser extent on sliding-type loading. As the tribological knowledge acquired with various materials other than rubbers can hardly be adopted for rubbers, there is a substantial need to study the latter. Therefore, the present work was aimed at investigating the rolling friction and wear properties of different kinds of elastomers against steel under unlubricated conditions. In this research the rolling friction and wear properties of various rubber materials were studied in home-made rolling ball-on-plate test configurations under dry conditions. The materials inspected were ethylene/propylene/diene rubber (EPDM) without and with carbon black (EPDM_CB), hydrogenated acrylonitrile/butadiene rubber (HNBR) without and with carbon black/silica/multiwall carbon nanotubes (HNBR_CB/silica/MWCNT), a rubber-rubber hybrid (HNBR and fluororubber (HNBR-FKM)) and a rubber-thermoplastic blend (HNBR and cyclic butylene terephthalate oligomers (HNBR-CBT)). The dominant wear mechanisms were investigated by scanning electron microscopy (SEM) and analyzed as a function of composition and testing conditions. Differential scanning calorimetry (DSC), dynamic-mechanical thermal analysis (DMTA), atomic force microscopy (AFM), and transmission electron microscopy (TEM), along with other auxiliary measurements, were adopted to determine the phase structure and network-related properties of the rubber systems. The changes of the friction and wear as a function of type and amount of the additives were explored. The friction process of selected rubbers was also modelled by making use of the finite element method (FEM). The results show that the incorporation of filler generally enhanced the wear resistance, hardness, stiffness (storage modulus), and apparent crosslinking of the related rubbers (EPDM-, HNBR- and HNBR-FKM-based ones), but did not affect their glass transition temperature. Filling of the rubbers usually reduced the coefficient of friction (COF). However, the tribological parameters also depended strongly on the test set-up and test duration. High wear loss was noticed for systems showing the occurrence of a Schallamach-type wavy pattern. The blends HNBR-FKM and HNBR-CBT were two-phase structured. In HNBR-FKM, the FKM was dispersed in the form of large micro-scaled domains in the HNBR matrix. This phase structure did not change upon incorporation of MWCNT. It was established that the MWCNT was preferentially embedded in the HNBR matrix. Blending HNBR with FKM reduced the stiffness and the degree of apparent crosslinking of the blend, which was traced to the dilution of the cure recipe with FKM. The coefficient of friction increased with increasing FKM content, contrary to expectation. On the other hand, the specific wear rate (Ws) changed only marginally with increasing content of FKM. In the HNBR-CBT hybrids the HNBR was the matrix, irrespective of the rather high CBT content. Both the partly and the mostly polymerized CBT ((p)CBT and pCBT, respectively) in the hybrids worked as active fillers and thus increased the stiffness and hardness. The COF and Ws decreased with increasing CBT content.
The FEM results with respect to the COF, obtained for systems possessing very different structures and thus properties (EPDM_30CB, HNBR-FKM 100-100 and HNBR-(p)CBT 100-100, respectively), were in accordance with the experimental results. This verifies that FEM can properly be used to capture the complex viscoelastic behaviour of rubber materials under dry rolling conditions.
Indoor positioning systems (IPS) have become increasingly popular in recent years in industrial, scientific and medical areas. The rapidly growing demand for accurate position information attracts much attention and effort in developing various kinds of positioning systems that are characterized by parameters like accuracy, robustness, latency, cost, etc. These systems have been successfully used in many applications such as automation in manufacturing, patient tracking in hospitals, action detection for human-machine interaction and so on.
The different performance requirements of the various applications have led to greatly diverse technologies, which can be categorized into two groups: inertial positioning (involving inertial sensors embedded in the device to be located) and external sensing (geometry estimation based on signal measurements). In positioning systems based on external sensing, the input signal used for locating can stem from many sources, such as visual or infrared signals in optical methods, sound or ultrasound in acoustic methods, and radio-frequency-based methods. This dissertation gives a recapitulative survey of a number of existing popular solutions for indoor positioning systems. The basic principles of the individual technologies are demonstrated and discussed. By comparing performance characteristics like accuracy, robustness, cost, etc., a comprehensive review of the properties of each technology is presented, which provides guidance for designing location-sensing systems for indoor applications. The thesis then focuses on presenting the development of a high-precision IPS prototype system based on RF signals, from the concept through implementation to evaluation. The development phases of this work include the positioning scenario, involved technologies, hardware development, algorithm development, firmware generation, prototype evaluation, etc. The developed prototype is a narrow-band RF system suitable for flexible frequency selection in the UHF (300 MHz-3 GHz) and SHF (3 GHz-30 GHz) bands, enabling this technology to meet broad service preferences. Fundamentally, the proposed system is a hyperbolic position-fix system, which estimates a location by solving non-linear equations derived from time difference of arrival (TDoA) measurements. As the positioning accuracy largely depends on the temporal resolution of the signal acquisition, a dedicated RF front-end system was developed to achieve a time resolution ranging from multiple picoseconds down to less than one picosecond. On the algorithmic side, two processing units, a TDoA estimator and a solver for the hyperbolic equations, constitute the digital signal processing system. In order to implement a real-time positioning system, the processing system is implemented on an FPGA platform. The corresponding firmware is generated from the algorithms modeled in MATLAB/Simulink, using the high-level synthesis (HLS) tool HDL Coder. The prototype system was evaluated and an accuracy of better than 1 cm was achieved. Better performance is potentially feasible by adjusting some of the controlling conditions such as the ADC sampling rate, ADC resolution, interpolation process, a higher carrier frequency, a more stable antenna, etc. Although the proposed system is initially dedicated to indoor applications, it could also be a competitive candidate for outdoor positioning services.
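The abstract does not detail the position solver itself; purely as an illustration of how a hyperbolic position fix can be computed from TDoA measurements, the following numpy sketch refines a position estimate with Gauss-Newton iterations. Anchor layout, initial guess, and the noise-free synthetic measurements are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Hypothetical 2D anchor positions in metres (not from the thesis).
anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
c = 299_792_458.0  # propagation speed (m/s)

def tdoa_residuals(p, anchors, dt):
    """Residuals of the hyperbolic equations:
    range difference to anchor i vs. anchor 0 minus c * TDoA_i."""
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - c * dt

def solve_tdoa(dt, anchors, p0, iters=20):
    """Gauss-Newton refinement of the tag position from TDoA measurements."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - p, axis=1)
        grads = (p - anchors) / d[:, None]        # gradient of each range w.r.t. p
        J = grads[1:] - grads[0]                  # Jacobian of the range differences
        r = tdoa_residuals(p, anchors, dt)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += step
    return p

# Noise-free synthetic TDoA measurement for a tag at (1.2, 3.4).
true_p = np.array([1.2, 3.4])
dists = np.linalg.norm(anchors - true_p, axis=1)
dt = (dists[1:] - dists[0]) / c
print(solve_tdoa(dt, anchors, p0=[2.5, 2.5]))     # converges to ~[1.2, 3.4]
```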
The objective of this thesis is to develop systematic event-triggered control designs for specified event generators, an important alternative to traditional periodic sampling control. The sporadic sampling that inherently arises in event-triggered control is determined by the event-triggering conditions. This feature calls for a new control theory, analogous to the traditional sampled-data theory in computer control.
Developing a controller coupled with the applied event-triggering condition so as to maximize the control performance is the essence of event-triggered control design. In the design, the stability of the control system has to be ensured with the highest priority. Different control aims should be clearly incorporated into the design procedures. For applications in embedded control systems, efficient implementation requires embedded software architectures of low complexity. The thesis aims at offering such a design to further complete the theory of event-triggered control design.
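The thesis's specific event generators and co-designed controllers are not given in the abstract; the sketch below only illustrates the generic mechanism of an event-triggering condition, here a relative-threshold rule of the kind common in the literature, applied to an assumed double-integrator plant with state feedback that is held constant between events.

```python
import numpy as np

# Illustrative double-integrator plant x' = A x + B u (not taken from the thesis).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([1.0, 1.6])            # stabilizing state-feedback gain
sigma, dt = 0.2, 1e-3               # relative event threshold, integration step

x = np.array([1.0, 0.0])            # initial state
x_event = x.copy()                  # state sampled at the last event
events = 0
for _ in range(10_000):             # simulate 10 s with forward Euler
    # Event-triggering condition: resample the state for the controller only
    # when the sampling error exceeds a fraction of the current state norm.
    if np.linalg.norm(x - x_event) > sigma * np.linalg.norm(x):
        x_event = x.copy()
        events += 1
    u = -K @ x_event                # scalar control, held constant between events
    x = x + dt * (A @ x + B.ravel() * u)

print(f"{events} control updates instead of 10000, final state norm {np.linalg.norm(x):.4f}")
```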
Agricultural intensification has increased substantially in the last century to meet the globally growing demand for food, fodder, and bioenergy; as a result, agricultural cropland has become the largest terrestrial biome globally. Pesticides became a central tool of this intensification strategy, and their application rose drastically over the last sixty years to secure or increase crop yields. However, pesticides are by design biologically active and known to contaminate non-target ecosystems, thereby adversely affecting their function or structure. Even though ecotoxicological knowledge about probable fate and effects has grown, little remains known about the spatiotemporal occurrence, potential effects, and risk drivers of pesticides on larger, i.e. macro, scales.
Consequently, the thesis primarily gathered pesticide exposure data via meta-analysis and from public monitoring databases to describe (i) detailed risks in aquatic ecosystems, (ii) the underlying risk drivers, (iii) associated spatiotemporal trends, (iv) the effect of land use and land protection and (v) the protectiveness of regulatory frameworks. First, a meta-analysis of insecticides occurring in US surface waters (n = 5,817, 259 studies) revealed large-scale risks for aquatic ecosystems based on the exceedance of regulatory threshold levels (RTL) and identified high-risk substances, particularly pyrethroids, with increasing application trends (publication I). Following this, spatiotemporal factors driving insecticide risks were identified via model building, demonstrating that toxicity-weighted pesticide use was the primary driver in surface waters; subsequent model application generated a spatially comprehensive risk assessment for the United States (publication II). The toxicity-weighted pesticide use was subsequently expanded in an ongoing project covering additional species groups and all pesticides used in the US from 1992 to 2016, highlighting a drastic shift of toxic pressures from vertebrates to aquatic invertebrates. Large-scale monitoring data from European surface waters (n > 8.3 million) covering 352 organic chemicals identified pesticides as the main class of organic contaminants causing risks in aquatic ecosystems. Additional analyses established links between agricultural intensity and the resulting environmental risks for aquatic invertebrates and plants on this macro scale (publication III). Finally, high-resolution monitoring data from Saxony, Germany, provided, for the first time, detailed insights into the occurrence and resulting risks of organic contaminants (primarily pesticides) in protected surface waters of nature conservation areas (publication IV).
In summary, the thesis gathered and used large-scale datasets to analyze the impact of agricultural intensification, and later anthropogenic land use in general, on ecosystems in order to reduce knowledge deficits in ecotoxicology on macro scales. Insecticides were shown to be important and spatially extensive agents of impairment of surface water quality and to be directly linked to their use in the respective landscapes. Changes in the composition of pesticide use over time shifted environmental risks from vertebrates to other central species groups (e.g. aquatic invertebrates), highlighting a new challenge to the integrity of aquatic environments. The thesis provided novel insights into contaminants' individual risk characteristics, their interaction with various spatiotemporal drivers and their relevance on various macro scales. Overall, a discrepancy remains evident between the environmental impacts of pesticides estimated during regulatory approval processes and a posteriori field measurements detailing larger-than-assumed adverse exposures and effects. This discrepancy led to pesticides being the most impactful chemical stressor for aquatic ecosystems compared to other organic contaminants on a continental scale, a threat that has even increased for some species groups. The extensive use of pesticides has reached levels at which even strictly protected surface waters in Germany are regularly exposed adversely, threatening the function of conservation areas as ecological refugia. Taken together, the thesis provides new macro-scale evidence regarding the contribution of pesticides (and associated drivers) to the large-scale changes in biological systems evidenced over the last decades, underlining their likely contribution to the ongoing global freshwater biodiversity crisis. In particular, agricultural systems will require substantial changes going forward to protect or reestablish the integrity of aquatic ecosystems and their provision of vital ecological services.
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
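As a quick numerical cross-check (a small sketch using numpy, not part of the thesis), the following function evaluates \(\chi^{(d)}\) directly from the definition above and confirms that it reduces to \(\chi^{(1)}\) for \(d=1\).

```python
import numpy as np

def chi(x):
    """Archetype tent function chi^(d) evaluated at x in R^d (d = len(x))."""
    x = np.asarray(x, dtype=float)
    candidates = [1.0 + xi - xj for xi in x for xj in x]
    candidates += [1.0 + xi for xi in x] + [1.0 - xi for xi in x]
    return max(min(candidates), 0.0)          # (...)_+ denotes the positive part

# d = 1: chi reduces to ((-x+1) ∧ (x+1)) ∨ 0.
for x in np.linspace(-1.5, 1.5, 7):
    assert np.isclose(chi([x]), max(min(1.0 - x, 1.0 + x), 0.0))

print(chi([0.0, 0.0]), chi([0.5, -0.25]))      # 1.0 and 0.25
```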
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves, as a preliminary step, restricting those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\tfrac{\,\cdot\,}{r}-\alpha)\,|\,\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence, \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\), can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{d x}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let $E$ be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space $E$ and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source of bioactive compounds, several of which also show antibiotic activity. With regard to the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance and unexploited sources should be explored. Thus, this thesis initially dealt with the identification of productive strains, especially within the group of terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Among these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the mere identification of a productive cyanobacterium is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of the prerequisites for compound synthesis. As a result, nitrogen limitation was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. In this way, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a) cells and a high temperature tolerance of up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted to work with cyanobacteria, were established: first, a method for a quick and easy in vivo vitality estimation of phototrophic cells and, second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when cultivated mixotrophically with fructose or glucose. Interestingly, this effect was even further enhanced when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed a clearly decreased cell growth. This effect might be extremely useful for the quick and economic preparation of precultures.
Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D-image data that manual processing becomes infeasible. This leads to a need for robust, flexible and automatic image analysis tools in areas such as histology or materials science, where microstructures are being investigated (e.g. cells, fiber systems). General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures, and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures. The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be easily adapted to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Using algorithms for fitting this statistical model to images results in a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm. While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three dimensional treatment. Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it will be demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed using an established alternative gradient-based algorithm, both in 2D and 3D. The property of robustness to noise of the proposed algorithm can be explained by the fact that a low-pass filter is used to detect local orientations. But even in the absence of noise, depending on fiber curvature and density, the average local 3D-orientation estimate can be about 9° more accurate compared to that alternative gradient-based method. Implementations of that novel orientation estimation method require repeated image filtering using anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive when using standard implementations. Therefore, the third contribution of this thesis is a novel optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere, and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve an average performance gain by factors of 3.8 and 3.5, respectively, compared to a fast Fourier transform-based implementation. 
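The optimal non-orthogonal separation itself is not reproduced here. As a point of reference, the following sketch applies an anisotropic (rotated) Gaussian filter in the naive, non-separable way whose per-pixel cost the separable implementations avoid; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def anisotropic_gaussian_kernel(sigma_u, sigma_v, theta, radius):
    """Rotated 2D Gaussian kernel: std sigma_u along the axis at angle theta,
    std sigma_v along the perpendicular axis."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    u = np.cos(theta) * x + np.sin(theta) * y
    v = -np.sin(theta) * x + np.cos(theta) * y
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()

# Illustrative use: smooth a noisy image along a 30-degree direction.
rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128))
kernel = anisotropic_gaussian_kernel(sigma_u=6.0, sigma_v=1.5,
                                     theta=np.deg2rad(30), radius=18)
smoothed = convolve(image, kernel, mode="nearest")  # O(radius^2) work per pixel
```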
The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D-analysis of cells in histological resections, and in the 2D and 3D-analysis of fibrous materials.
The thesis at hand deals with the numerical solution of multiscale problems arising in the modeling of processes in fluid dynamics and thermodynamics. Many of these processes, governed by partial differential equations, are relevant in engineering, geoscience, and environmental studies. More precisely, this thesis discusses the efficient numerical computation of effective macroscopic thermal conductivity tensors of high-contrast composite materials. The term "high-contrast" refers to large variations in the conductivities of the constituents of the composite. Additionally, this thesis deals with the numerical solution of Brinkman's equations. This system of equations adequately models viscous flows in (highly) permeable media. It was introduced by Brinkman in 1947 to reduce the deviations between measurements for flows in such media and the predictions according to Darcy's model.
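For reference, Brinkman's equations for the velocity \(u\) and pressure \(p\) of a viscous flow in a permeable medium are commonly written as
\[
-\mu_{\mathrm{eff}}\,\Delta u + \mu K^{-1} u + \nabla p = f, \qquad \nabla\cdot u = 0,
\]
where \(\mu\) is the fluid viscosity, \(\mu_{\mathrm{eff}}\) the effective (Brinkman) viscosity, \(K\) the permeability, and \(f\) a forcing term (the notation is a standard choice and not necessarily that of the thesis). Dropping the Brinkman term \(-\mu_{\mathrm{eff}}\Delta u\) recovers Darcy's law, while letting the permeability tend to infinity recovers the Stokes equations.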
Most of today's wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions impair the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, so channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach is increasingly limiting the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference can, when generated in a controlled manner, be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effects of signal propagation and channel fading, receiver design and implementation, and higher-layer interactions on reception performance are complex and hard to reproduce by simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate system architectures that leverage interference.
First, we identify the conditions for the successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender-receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire's working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in their proximity, demodulating them iteratively. This enables the guardians to access a packet's content before the packet fully arrives at the receiver. Using this knowledge, the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective of introducing bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. This paradigm of "over-the-air" protection without requiring any prior modification of client devices enables novel security services, such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library uses a new dual-pivot method that outperforms by far the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
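To make the notion of multiway partitioning concrete, here is a minimal Python sketch of a dual-pivot Quicksort in the spirit of Yaroslavskiy's partitioning scheme; it is an illustration only, not the parametric template algorithm analyzed in the dissertation nor the Java 7 implementation.

```python
def dual_pivot_partition(a, lo, hi):
    """Partition a[lo..hi] around two pivots; return their final positions."""
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]                    # p <= q are the two pivots
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                       # element belongs to the left segment
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:                     # element belongs to the right segment
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]            # move the pivots into their final places
    a[hi], a[gt] = a[gt], a[hi]
    return lt, gt

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        lt, gt = dual_pivot_partition(a, lo, hi)
        quicksort(a, lo, lt - 1)           # elements < p
        quicksort(a, lt + 1, gt - 1)       # elements between the pivots
        quicksort(a, gt + 1, hi)           # elements > q

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
quicksort(data)
print(data)                                # [0, 1, 2, ..., 9]
```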
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Provided that linear equations are solvable in the coefficient ring of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications arising from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions of given partial orders. In order to apply these tests to actual data, we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
Within toxicology, reproductive toxicology is a highly relevant and socially particularly sensitive field. It encompasses all toxicological processes within the reproductive cycle and therefore covers many effects and modes of action. This makes the assessment of reproductive toxicity very challenging despite the established in vivo studies. In addition, the in vivo studies are very demanding both in terms of their conduct and their interpretation, and there is scope for decision-making on both aspects. As a result, the interpretation of study results may vary from laboratory to laboratory. For the final classification, the assessment of the relevance for humans is decisive. The problem here is that relatively little is known about the species differences between humans and the usual test animals (rat and rabbit). The rabbit in particular has hardly been researched at the molecular-biological level. The aim of the dissertation was to develop approaches for a better assessment of reproductive toxicity, with two different foci.
The first aim was to investigate species differences, focusing on the expression of xenobiotic transporters during ontogeny. Xenobiotic transporters, of the superfamily of ATP-binding cassette transporters (ABC) or of the solute carriers (SLC), are known to transport exogenous substances in addition to their endogenous substrates and therefore play an important role in the absorption, distribution and excretion of xenobiotics. Species differences in kinetics can in turn have a major impact on toxic effects. In the study, the expression of 20 xenobiotic transporters during ontogeny was investigated at the mRNA level in the liver, kidney and placenta of rats and rabbits and compared with that of humans. This revealed major differences in the expression of the transporters between the species. However, further studies on the functionality and activity of the xenobiotic transporters are needed to fully assess the kinetic impact of the observed species differences. Overall, the study provides a valid starting point for further systematic investigations of species differences at the protein level. Furthermore, it provides previously unavailable data on the expression of xenobiotic transporters during ontogeny in rabbits, which is an important step in the molecular-biological study of this species.
The second part focused on investigating the predictive power of in silico models for reproductive toxicology in relation to pesticides. Neither the commercial nor the freely available models performed adequately in the evaluation. Three reasons could be identified for this: (1) many pesticides lie outside the chemical space of the models, (2) differing definitions and assessments of reproductive toxicity, and (3) problems in detecting similarity between molecules. To solve these problems, an extension of the databases on reproductive toxicity in relation to pesticides, respecting a uniform nomenclature, is needed. Furthermore, endpoint-specific models should be developed which, in addition to the usual structure-based fingerprints, use descriptors for, for example, biological activity.
Overall, the dissertation shows how essential it is to further research the modes of action of reproductive toxicity. This knowledge is necessary to correctly assess in vivo studies and their relevance to humans, as well as to improve the predictive power of in silico models by incorporating this information.
Model uncertainty is a challenge inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures generally take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise since a strategy which is optimal under the approximating model might perform rather badly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
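As a much-simplified illustration of how expert opinions sharpen drift estimates (assuming, unlike the thesis, a constant Gaussian drift, and using illustrative parameters), the following sketch performs conjugate Gaussian updates of the drift from both discrete return observations and occasional expert opinions:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma = 0.08, 0.2           # assumed annualized drift and volatility

def gaussian_update(m, v, y, a, s2):
    """Posterior of mu ~ N(m, v) after observing y ~ N(a * mu, s2)."""
    prec = 1.0 / v + a * a / s2
    mean = (m / v + a * y / s2) / prec
    return mean, 1.0 / prec

dt, n_days = 1.0 / 250.0, 250
m, v = 0.0, 0.10**2                  # Gaussian prior on the unknown drift
for k in range(n_days):
    # Daily return observation: r ~ N(mu * dt, sigma^2 * dt).
    r = mu_true * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    m, v = gaussian_update(m, v, r, a=dt, s2=sigma**2 * dt)
    if k % 25 == 0:                  # an "expert opinion" every 25 trading days
        z = mu_true + 0.05 * rng.standard_normal()   # direct, noisy view on mu
        m, v = gaussian_update(m, v, z, a=1.0, s2=0.05**2)

print(f"posterior drift estimate {m:.3f} +/- {np.sqrt(v):.3f}")
```

The posterior variance is dominated by the expert-opinion terms, which reflects the qualitative message of the abstract: return observations alone pin down the drift only very slowly.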
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.
In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions for them are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment with coring data being presented using a billboard rendering technique while the
time-variant flow field is indicated using Line-Integral-Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection is supported.
In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.
Accurate path tracking control of tractors became a key technology for automation in agriculture. Increasingly sophisticated solutions, however, revealed that accurate path tracking control of implements is at least equally important. Therefore, this work focuses on accurate path tracking control of both tractors and implements. The latter, as a prerequisite for improved control, are equipped with steering actuators like steerable wheels or a steerable drawbar, i.e. the implements are actively steered. This work contributes both new plant models and new control approaches for those kinds of tractor-implement combinations. Plant models comprise dynamic vehicle models accounting for forces and moments causing the vehicle motion as well as simplified kinematic descriptions. All models have been derived in a systematic and automated manner to allow for variants of implements and actuator combinations. Path tracking controller design begins with a comprehensive overview and discussion of existing approaches in related domains. Two new approaches have been proposed combining the systematic setup and tuning of a Linear-Quadratic-Regulator with the simplicity of a static output feedback approximation. The first approach ensures accurate path tracking on slopes and curves by including integral control for a selection of controlled variables. The second approach, instead, ensures this by adding disturbance feedforward control based on side-slip estimation using a non-linear kinematic plant model and an Extended Kalman Filter. For both approaches a feedforward control approach for curved path tracking has been newly derived. In addition, a straightforward extension of control accounting for the implement orientation has been developed. All control approaches have been validated in simulations and experiments carried out with a mid-size tractor and a custom built demonstrator implement.
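The plant models and controller tuning of the thesis are not reproduced here; the sketch below merely illustrates the Linear-Quadratic-Regulator design step on an assumed textbook kinematic lateral/heading error model, with scipy solving the algebraic Riccati equation. Speed, wheelbase, and weights are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

v, L = 2.0, 3.0                       # assumed forward speed [m/s] and wheelbase [m]
# Linearized kinematic error model: states [lateral error, heading error],
# input = front steering angle.
A = np.array([[0.0, v], [0.0, 0.0]])
B = np.array([[0.0], [v / L]])
Q = np.diag([4.0, 1.0])               # weights on the tracking errors
R = np.array([[0.5]])                 # weight on the steering effort

P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # LQR gain, u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

A static output feedback approximation, integral action, or disturbance feedforward, as discussed in the abstract, would be added on top of such a baseline design.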
To support scientific work with large and complex data the field of scientific visualization emerged in computer science and produces images through computational analysis of the data. Frameworks for combination of different analysis and visualization modules allow the user to create flexible pipelines for this purpose and set the standard for interactive scientific visualization used by domain scientists.
Existing frameworks employ a thread-parallel message-passing approach to parallel and distributed scalability, leaving the field of scientific visualization in high performance computing to specialized ad-hoc implementations. The task-parallel programming paradigm proves promising to improve scalability and portability in high performance computing implementations and thus, this thesis aims towards the creation of a framework for distributed, task-based visualization modules and pipelines.
The major contribution of the thesis is the establishment of modules for Merge Tree construction and (based on the former) topological simplification. Such modules already form a necessary first step for most visualization pipelines and can be expected to increase in importance for larger and more complex data produced and/or analysed by high performance computing.
To create a task-parallel, distributed Merge Tree construction module, the construction process has to be completely revised. We derive a novel property of Merge Tree saddles and introduce a novel task-parallel, distributed Merge Tree construction method that offers both good performance and scalability. This forms the basis for a module for topological simplification, which we extend by introducing novel alternative simplification parameters that aim to reduce the importance of prior domain knowledge in order to increase flexibility in typical high performance computing scenarios.
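The task-parallel, distributed construction itself is not sketched here; as a reminder of what such a module computes, the following Python sketch builds the join (merge) tree of a scalar field on a graph with the classical sequential union-find sweep, on an illustrative toy example.

```python
import numpy as np

def join_tree(values, edges):
    """Classical sequential join-tree construction: sweep vertices from low to
    high value and merge sublevel-set components with union-find.
    Returns the tree as a list of arcs (lower_node, upper_node)."""
    n = len(values)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    uf = list(range(n))
    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]         # path halving
            x = uf[x]
        return x

    order = sorted(range(n), key=lambda v: values[v])
    processed = [False] * n
    highest = {}                      # component root -> highest tree node so far
    arcs = []
    for v in order:
        highest[find(v)] = v          # v starts as its own component/leaf
        for w in adj[v]:
            if processed[w] and find(w) != find(v):
                arcs.append((highest[find(w)], v))   # that branch continues at v
                uf[find(w)] = find(v)
                highest[find(v)] = v
        processed[v] = True
    return arcs

# Toy 1D field along a path graph with two minima joining at a saddle.
vals = [0.0, 2.0, 1.0, 3.0, 0.5]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(join_tree(vals, edges))   # [(0, 1), (2, 1), (1, 3), (4, 3)]
```

Regular vertices produce degree-two nodes that are usually contracted away; the contribution of the thesis lies in performing this kind of construction in a task-parallel, distributed fashion.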
Both modules lay the groundwork for continuative analysis and visualization steps and form a fundamental step towards an extensive task-parallel visualization pipeline framework for high performance computing.
Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem addressed is a crowd condition estimation approach to enable and support the supervision of mass events by first responders and marketing experts; it is also targeted towards supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insights into the crowd condition are desired for quick reactions, and historic crowd condition measurements are desired for profound post-event crowd condition analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation by relying on 2.4 GHz signals and their variation in crowds of people, proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches: firstly, stationary sensors sensing crowd-centric signal sources; secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources); thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region; and fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which corresponds to an average of below 6% mean relative error with respect to the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing variations of stationary wireless signals and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate that these methods are viable for detecting even low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a classification accuracy of the crowd density of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, using differential features that take into account the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach on multiple experiments, including the public viewing event of the European soccer championship in Kaiserslautern, and evaluated the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
The present work investigated three important constructs in the field of psychology: creativity, intelligence and giftedness. The major objective was to clarify some aspects of each of these three constructs, as well as some possible correlations between them. Of special interest were: (1) the relationship between creativity and intelligence, particularly the validity of the threshold theory; (2) the development of these constructs within average and above-average intelligent children and across grade levels; and (3) the comparison between the development of intelligence and creativity in above-average intelligent primary school children who participated in a special program for children classified as "gifted", called Entdeckertag (ET), and an age-, class- and IQ-matched control group. The ET is a pilot program which was implemented in 2004 by the Ministry for Education, Science, Youth and Culture of the state of Rhineland-Palatinate, Germany. The central goals of this program are the early recognition of gifted children and early intervention, based on the areas of German language, general science and mathematics, and also to foster the development of a child's creativity, social ability, and more. Five hypotheses were proposed, analyzed, and reported separately within five chapters. To analyze these hypotheses, a sample of 217 children recruited from first to fourth grade, aged between six and ten years, was tested for intelligence and creativity. Children performed three tests: the Standard Progressive Matrices (SPM) for the assessment of classical intelligence, the Test of Creative Thinking – Drawing Production (TCT-DP) for the measurement of classical creativity, and the Creative Reasoning Task (CRT) for the evaluation of convergent and divergent thinking, both in open problem spaces. Participants were divided into two general cohorts: an intervention group (N = 43), composed of children participating in the Entdeckertag program, and a non-intervention group (N = 174), composed of children from regular primary schools. For the testing of the hypotheses, children were placed into more specific groups according to the particular hypothesis that was being tested. It could be concluded that creativity and intelligence were not significantly related and the threshold theory was not confirmed. Additionally, intelligence accounted for less than 1% of the variance within creativity; moreover, scores on intelligence were unable to predict later creativity scores. The development of classical intelligence and classical creativity across grade levels also presented different patterns: intelligence grew continually, whereas creativity stagnated after the third grade. Finally, the ET program proved to be beneficial for classical intelligence after two years of attendance, but no effect was found for creativity. Overall, the results indicate that organizations and institutions such as schools should not look solely at intelligence performance, especially when aiming to identify and foster gifted or creative individuals.
Backward compatibility of class libraries ensures that an old implementation of a library can safely be replaced by a new implementation without breaking existing clients.
Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations.
In the object-oriented setting with inheritance and callbacks, finding such models is difficult, as the interface between library implementations and clients is complex.
Furthermore, handling these models in a way that supports practical reasoning requires appropriate verification tools.
This thesis proposes a formal model for library implementations and a reasoning approach for backward compatibility that is implemented using an automatic verifier. The first part of the thesis develops a fully abstract trace-based semantics for class libraries of a core sequential object-oriented language. Traces abstract from the control flow (stack) and data representation (heap) of the library implementations. The construction of a most general context is given that abstracts exactly from all possible clients of the library implementation.
Soundness and completeness of the trace semantics as well as the most general context are proven using specialized simulation relations on the operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
The second part of the thesis presents the implementation of the simulation-based proof method for an automatic verifier to check backward compatibility of class libraries written in Java. The approach works for complex library implementations, with recursion and loops, in the setting of unknown program contexts. The verification process relies on a coupling invariant that describes a relation between programs that use the old library implementation and programs that use the new library implementation. The thesis presents a specification language to formulate such coupling invariants. Finally, an application of the developed theory and tool to typical examples from the literature validates the reasoning and verification approach.
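The formal trace semantics and coupling invariants cannot be conveyed in an abstract; the toy Python sketch below (illustrative only, and not the Java setting of the thesis) merely conveys the intuition that two library implementations with different internal state are backward compatible when every client, including its callbacks, observes the same interaction trace.

```python
class CounterV1:
    """Old library implementation: stores the count directly."""
    def __init__(self):
        self._count = 0
    def increment(self, observer):
        self._count += 1
        observer.on_change(self._count)      # callback into client code
    def value(self):
        return self._count

class CounterV2:
    """New implementation: a different heap representation (a log of events),
    but the same observable behaviour at the library/client interface."""
    def __init__(self):
        self._events = []
    def increment(self, observer):
        self._events.append("inc")
        observer.on_change(len(self._events))
    def value(self):
        return len(self._events)

class TraceObserver:
    """A client; the trace records all interactions across the interface."""
    def __init__(self):
        self.trace = []
    def on_change(self, value):
        self.trace.append(("on_change", value))

def run_client(counter_cls):
    obs = TraceObserver()
    c = counter_cls()
    for _ in range(3):
        c.increment(obs)
    obs.trace.append(("value", c.value()))
    return obs.trace

# Identical traces for every such client: the replacement is backward
# compatible in the trace-based sense sketched above.
assert run_client(CounterV1) == run_client(CounterV2)
```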
For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many of the useful properties of context-free grammars as possible, most notably cubic parsing time, has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e. the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of this yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths at which it is allowed to be applied.
We show that, unless the sets of allowed positions and lengths are really complex, the languages in our classes can be parsed in the same time as context-free languages, using slight adaptations of well-known parsing algorithms. We also show that the new classes form a proper hierarchy above the context-free languages and examine their relation to language classes defined by other types of regulated rewriting.
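As a toy illustration of the parsing claim (my own sketch, not an algorithm from the thesis), the classical CYK algorithm for a grammar in Chomsky normal form can be adapted so that every rule carries a predicate over the position and length of its yield; the function and rule names below are hypothetical.

```python
# Minimal sketch: CYK parsing where every rule carries a predicate
# allowed(pos, length) over the position and length of its yield in the input.

def parse(word, terminal_rules, binary_rules, start="S"):
    """terminal_rules: list of (A, a, allowed);  binary_rules: list of (A, B, C, allowed)."""
    n = len(word)
    # table[(i, l)] = set of nonterminals deriving word[i:i+l]
    table = {(i, l): set() for i in range(n) for l in range(1, n - i + 1)}

    for i, a in enumerate(word):
        for lhs, term, allowed in terminal_rules:
            if term == a and allowed(i, 1):
                table[(i, 1)].add(lhs)

    for l in range(2, n + 1):              # length of the span
        for i in range(n - l + 1):         # start position of the span
            for split in range(1, l):      # split into word[i:i+split] and word[i+split:i+l]
                for lhs, left, right, allowed in binary_rules:
                    if (left in table[(i, split)]
                            and right in table[(i + split, l - split)]
                            and allowed(i, l)):   # position/length restriction on the rule
                        table[(i, l)].add(lhs)

    return start in table[(0, n)]


# Toy grammar: S -> A B, A -> a, B -> b, where S may only be applied to the whole word.
word = "ab"
ok = parse(
    word,
    terminal_rules=[("A", "a", lambda p, l: True), ("B", "b", lambda p, l: True)],
    binary_rules=[("S", "A", "B", lambda p, l: p == 0 and l == len(word))],
)
print(ok)  # True
```

The extra predicate check does not change the cubic time bound as long as membership in the allowed sets can be decided in constant time, which matches the intuition behind the result stated above.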
We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear or left-linear base grammars and for the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent stochastic context-free grammars and show how they can be employed to improve the quality of predictions for RNA secondary structures.
Industrial robots are vital in automation technology, but their limitations become evident in applications requiring high path accuracy. This research focuses on improving the dynamic path accuracy of industrial robots by integrating additional sensor technology and employing intelligent feed-forward control. Specifically, the inclusion of secondary encoder sensors enables explicit measurement and compensation of robot gear deformations. Three types of model-based feed-forward controllers, namely physics-based, data-based, and hybrid, are developed to effectively counteract dynamic effects.
Firstly, a physics-based feed-forward control method is proposed, explicitly modeling joint deformations, hydraulic weight compensation, and other relevant features. Nonlinear friction parameters are accurately identified using a globally optimized design of experiments. The resulting physics-based model is fully continuously differentiable, facilitating its transformation into a code-optimized flatness-based feed-forward control.
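As a rough, hypothetical illustration of such a flatness-based feed-forward (a single elastic joint with a smoothed friction model; the parameter values and the model itself are simplified assumptions of mine, not the robot model identified in the thesis), the desired motor trajectory and feed-forward torque can be computed directly from the desired link trajectory and its time derivatives:

```python
# Simplified sketch of a flatness-based feed-forward for one elastic joint
# (Spong's flexible-joint model); all parameters are made-up illustration values.
import numpy as np

I_l, I_m = 0.8, 0.2        # link and motor inertia [kg m^2]
m, g, l = 5.0, 9.81, 0.4   # mass, gravity, lever arm
k = 500.0                  # gear stiffness [Nm/rad]
d_v, f_c, eps = 0.5, 2.0, 0.05  # viscous, Coulomb friction, smoothing width

def friction(w):
    # continuously differentiable friction model (viscous + tanh-smoothed Coulomb)
    return d_v * w + f_c * np.tanh(w / eps)

def feedforward(t, A=0.5, w0=2.0):
    # desired link trajectory q_d(t) = A*sin(w0*t) and its time derivatives
    q   =  A * np.sin(w0 * t)
    dq  =  A * w0 * np.cos(w0 * t)
    ddq = -A * w0**2 * np.sin(w0 * t)
    d3q = -A * w0**3 * np.cos(w0 * t)
    d4q =  A * w0**4 * np.sin(w0 * t)

    # link equation  I_l*ddq + m*g*l*sin(q) = k*(theta - q)  solved for the motor angle
    theta   = q   + (I_l * ddq + m * g * l * np.sin(q)) / k
    dtheta  = dq  + (I_l * d3q + m * g * l * np.cos(q) * dq) / k
    ddtheta = ddq + (I_l * d4q + m * g * l * (np.cos(q) * ddq - np.sin(q) * dq**2)) / k

    # motor equation yields the feed-forward torque
    return I_m * ddtheta + friction(dtheta) + k * (theta - q)

print(feedforward(np.linspace(0.0, 1.0, 5)))
```

Because the friction model is smooth, all derivatives needed for the flat parametrization exist, which mirrors why continuous differentiability of the physics-based model matters for the feed-forward design described above.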
Secondly, a data-based feed-forward control approach is introduced, leveraging a continuous-time neural network. The continuous-time approach demonstrates enhanced model generalization capabilities even with limited data. Furthermore, a time domain normalization method is introduced that significantly improves numerical properties by concurrently normalizing measurement timelines, robot states, and state derivatives. Building on previous work, a method ensuring input-to-state and global asymptotic stability is presented, employing a Lyapunov function; model stability is enforced during training using constrained optimization techniques. Moreover, the data-based methods are evaluated on public benchmarks, extending their applicability beyond the field of robotics.
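The following minimal sketch shows one plausible reading of such a time domain normalization (my own simplification, not the exact method of the thesis): time is mapped onto the unit interval, states are scaled per dimension, and the measured state derivatives are rescaled via the chain rule so that the normalized data remain mutually consistent.

```python
# Sketch: consistent normalization of timestamps, states, and state derivatives.
import numpy as np

def normalize_trajectory(t, x, dx_dt):
    """t: (N,) timestamps, x: (N, d) states, dx_dt: (N, d) measured derivatives."""
    t_scale = t[-1] - t[0]                       # map time onto [0, 1]
    x_scale = np.max(np.abs(x), axis=0) + 1e-12  # per-dimension state scale

    t_norm  = (t - t[0]) / t_scale
    x_norm  = x / x_scale
    # chain rule: d(x/s_x)/d(t/s_t) = (s_t/s_x) * dx/dt
    dx_norm = dx_dt * (t_scale / x_scale)
    return t_norm, x_norm, dx_norm, (t_scale, x_scale)

# toy check: x(t) = sin(t) on [0, 2], derivative cos(t)
t = np.linspace(0.0, 2.0, 201)
t_n, x_n, dx_n, scales = normalize_trajectory(t, np.sin(t)[:, None], np.cos(t)[:, None])
# the numerical derivative of x_norm w.r.t. t_norm should match dx_norm
print(np.allclose(np.gradient(x_n[:, 0], t_n), dx_n[:, 0], atol=1e-2))
```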
The physics-based and data-based models are then combined into a hybrid model. A comparative analysis of the three models reveals that the continuous-time neural network yields the highest model accuracy, while the physics-based model delivers the best safety properties. The effectiveness of all three models is experimentally validated on an industrial robot.
In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
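A minimal sketch of the harmonic balance idea on a single-degree-of-freedom Duffing oscillator may help to illustrate this (a toy stand-in for the large discretized systems treated in the thesis; parameters and function names are assumptions of mine): the periodic response is approximated by a truncated Fourier series and the residual of the equation of motion is projected onto the retained harmonics.

```python
# Harmonic balance for a single-DOF Duffing oscillator with a one-harmonic ansatz.
import numpy as np
from scipy.optimize import fsolve

m, c, k, alpha, F = 1.0, 0.05, 1.0, 0.5, 0.2     # assumed parameters
N = 128                                           # time samples per period
tau = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # normalized time w*t

def residual(coeffs, w):
    a, b = coeffs                     # ansatz x = a*cos(w t) + b*sin(w t)
    x   = a * np.cos(tau) + b * np.sin(tau)
    dx  = w * (-a * np.sin(tau) + b * np.cos(tau))
    ddx = w**2 * (-a * np.cos(tau) - b * np.sin(tau))
    r   = m * ddx + c * dx + k * x + alpha * x**3 - F * np.cos(tau)
    # projection of the residual onto the retained harmonics (harmonic balance)
    return [np.dot(r, np.cos(tau)) / N, np.dot(r, np.sin(tau)) / N]

# frequency response: solve the algebraic system for a sweep of excitation frequencies
amps, guess = [], [0.1, 0.0]
for w in np.linspace(0.2, 2.0, 50):
    guess = fsolve(residual, guess, args=(w,))    # previous solution as continuation guess
    amps.append(np.hypot(*guess))
print(max(amps))   # peak amplitude of the single-harmonic response
```

In the thesis the unknowns are Fourier coefficients of all discretized degrees of freedom rather than two scalars, but the structure of the resulting algebraic system is the same.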
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatially discretized equations of motion. To this end, we propose a modal projection method enhanced with modal derivatives, which provide second-order information. We investigate the concept of modal derivatives theoretically and, using computational examples, demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
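The following toy sketch shows one common way to build such an enhanced basis using static modal derivatives on a small nonlinear mass-spring chain (a simplified illustration under my own assumptions, not the isogeometric formulation or the exact construction of the thesis):

```python
# Reduction basis of vibration modes augmented with static modal derivatives.
import numpy as np
from scipy.linalg import eigh

n, k, beta, alpha = 10, 1.0, 0.5, 0.3   # DOFs, linear, quadratic, cubic stiffness
M  = np.eye(n)                           # lumped unit masses
K0 = 2 * k * np.eye(n) - k * (np.eye(n, k=1) + np.eye(n, k=-1))  # linear chain stiffness

def tangent_stiffness(u):
    # linear part plus quadratic and cubic springs acting at every DOF
    return K0 + np.diag(2.0 * beta * u + 3.0 * alpha * u**2)

# a few vibration modes of the linearized system (mass-normalized by eigh)
w2, Phi = eigh(K0, M)
m_keep = 2
Phi = Phi[:, :m_keep]

# static modal derivatives: MD_ij = -K0^{-1} (dK/dq_j) phi_i,
# with dK/dq_j approximated by central finite differences along mode j
h, mds = 1e-4, []
for j in range(m_keep):
    dK_dqj = (tangent_stiffness(h * Phi[:, j]) - tangent_stiffness(-h * Phi[:, j])) / (2 * h)
    for i in range(j + 1):               # MD_ij is symmetric in (i, j); keep unique ones
        mds.append(np.linalg.solve(K0, -dK_dqj @ Phi[:, i]))

# reduction basis: modes plus modal derivatives, orthonormalized for conditioning
V, _ = np.linalg.qr(np.column_stack([Phi] + mds))
M_red = V.T @ M @ V
K_red = V.T @ K0 @ V
print(V.shape, M_red.shape)              # (10, 5) (5, 5)
```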
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.