Living systems incessantly engage in the regulation of their cellular processes to fulfill their biological functions. Beyond development-related adjustments or cell cycle oscillations, environmental fluctuations compel the system to reorganize metabolic pathways, structural components, or molecular repair and reconstitution mechanisms. These responses manifest across diverse temporal scales, necessitating an intricate regulatory orchestration. Time series experiments have become increasingly popular for charting the chronological order of events and elucidating the underlying mechanisms. In the era of high-throughput technologies, the majority of cellular molecules can be analyzed in one fell swoop, generating a comprehensive snapshot of the current molecular state. Methodological advancements also permit monitoring not only molecular abundances but also the functional status of transcripts and proteins. However, due to the considerable effort still associated with such experiments, the number of measured time points and the replication of measurements remain limited. The resulting datasets contain signals from thousands of molecules, yet they are sparse in temporal resolution and often imprecise due to biological variability and technical measurement inaccuracies.
This thesis explores the complexities arising from the examination of short time series data and introduces pioneering tools that offer fresh insights into the realm of biological time series analysis. The broad spectrum of analytic possibilities ranges from a molecule-centric investigation of individual time courses to a holistic aggregation of the system’s response to its main characteristics. By creating a modeling framework that applies domain-specific constraints, time-course signals can be transformed from a series of discrete data points into a continuous curve. These curves align with current biological conjectures about molecule kinetics being smooth and devoid of superfluous oscillations. Noise present at individual time points is judiciously accounted for during curve fitting, mitigating the impact of time points with high variance on the curve. Subsequent classification is based on the features of these curves (extreme points and inflection points) and ensures a reduction in data volume and complexity. Succinct labels assigned to each molecule's kinetics encapsulate the signal's most notable features. Besides this modeling approach, an innovative enrichment strategy is introduced that is independent of prior data partitioning and capable of segregating the temporal response into its thermodynamically relevant components. This approach allows for a continuous assessment of each molecule's contribution to these components, obviating the need for exclusive allocation. The application of various analytical approaches to heat acclimation experiments in Chlamydomonas highlights the relevance and potential of time series experiments and specifically tailored analysis techniques. The integration of different system levels has led to the identification of regulatory peculiarities, such as an increased correlation between transcripts and corresponding proteins during acclimation responses. These and other insights may herald new avenues of research that could ultimately enhance plant robustness in the face of increasing environmental perturbations.
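To illustrate the general idea of noise-aware curve fitting and feature-based classification (a minimal sketch with hypothetical data and smoothing settings, not the thesis's actual constrained modeling framework), weighted smoothing splines already exhibit the key ingredients:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# toy time course: 6 time points with hypothetical means and standard deviations
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 24.0])
mean = np.array([1.0, 1.8, 2.6, 2.2, 1.4, 1.1])
sd = np.array([0.10, 0.40, 0.20, 0.50, 0.20, 0.10])

# weight each point by 1/sd so noisy time points constrain the curve less;
# with these weights, s = len(t) targets an average misfit of about one sd per point
spl = UnivariateSpline(t, mean, w=1.0 / sd, k=4, s=len(t))

# extreme points: roots of the first derivative (a k=4 spline has a cubic
# derivative, for which SciPy can compute roots)
extrema = spl.derivative(1).roots()

# inflection points: roots of the second derivative (needs a k=5 fit so the
# second derivative is again cubic)
spl5 = UnivariateSpline(t, mean, w=1.0 / sd, k=5, s=len(t))
inflections = spl5.derivative(2).roots()

print("extrema at t =", extrema)
print("inflections at t =", inflections)
```

The extracted extreme and inflection points could then serve as the succinct labels by which kinetics are classified.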
The growing popularity of time series experiments necessitates dedicated analytical approaches that empower researchers and analysts to decipher patterns, discern trends, and unravel the underlying structures within the data, facilitating predictions and the derivation of meaningful conclusions that could potentially build bridges between the interwoven system levels.
Distributed message-passing systems have become ubiquitous and essential for our daily lives. Hence, designing and implementing them correctly is of utmost importance. This is, however, very challenging at the same time. In fact, it is well-known that verifying such systems is algorithmically undecidable in general due to the interplay of asynchronous communication (messages are buffered) and concurrency. When designing communication in a system, it is natural to start with a global protocol specification of the desired communication behaviour. In such a top-down approach, the implementability problem asks, given such a global protocol, if the specified behaviour can be implemented in a distributed setting without additional synchronisation. This problem has been studied from two perspectives in the literature. On the one hand, there are Multiparty Session Types (MSTs) from process algebra, with global types to specify protocols. Key to the MST approach is a so-called projection operator, which takes a global type and tries to project it onto every participant: if successful, the local specifications are safe to use. This approach is efficient but brittle. On the other hand, High-level Message Sequence Charts (HMSCs) study the implementability problem from an automata-theoretic perspective. They impose very few restrictions on protocol specifications, making the implementability problem for HMSCs undecidable in general. The work in this thesis is the first to formally build a bridge between the world of MSTs and HMSCs. To start, we present a generalised projection operator for sender-driven choice. This allows a sender to send to different receivers when branching, which is crucial to handle common communication patterns from distributed computing. Beyond this first step, we also show that the classical MST projection approach is inherently incomplete. We present the first formal encoding from global types to HMSCs. With this, we prove decidability of the implementability problem for global types with sender-driven choice. Furthermore, we develop the first direct and complete projection operator for global types with sender-driven choice, using automata-theoretic techniques, and show its effectiveness with a prototype implementation. We are the first to provide an upper bound for the implementability problem for global types with sender-driven (or directed) choice and show it to be in PSPACE. We also provide a session type system that uses the results from our projection operator. Last, we introduce protocol state machines (PSMs) – an automata-based protocol specification formalism – that subsume both global types from MSTs and HMSCs with regard to expressivity. We use transformations on PSMs to show that many of the syntactic restrictions of global types are not restrictive in terms of protocol expressivity. We prove that the implementability problem for PSMs with mixed choice, which requires no dedicated sender for a branch but solely all labels to be distinct, is undecidable in general. With our results on expressivity, this answers an open question: the implementability problem for mixed-choice global types is undecidable in general.
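For a schematic flavour (concrete syntax varies across the MST literature), sender-driven choice lets the sender address different receivers in different branches, e.g.

\[
G \;=\; p \to q : \mathit{left}.\; q \to r : \mathit{ack}.\; \mathtt{end}
\;\;+\;\; p \to r : \mathit{right}.\; r \to q : \mathit{ack}.\; \mathtt{end},
\]

where \(p\) chooses the branch. Classical directed choice would force both branches to begin with the same receiver, so global types like this one lie outside the scope of classical projection operators.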
Coastal port-industrial areas are becoming increasingly significant due to urban shrinkage, population
decline, and climate change. To address social and economic issues and enhance climate resilience, it
is crucial to anticipate urban shrinkage in both stable and growing coastal areas that are undergoing
economic transformation. Urban planners can better understand the dynamics of planning for urban shrinkage and climate resilience, as port-industrial areas have a large economic impact on nearby coastal communities.
This dissertation examines the long-term implications of urban shrinkage in coastal port-industrial
areas in the context of climate change and sea level rise in England. The research problem is that
current urban policy does not adequately address the challenges of urban shrinkage and climate
resilience in these areas. The research questions are: What are the population changes in local areas
in England? What effect does population decline have on changing urbanisation patterns in older
industrial areas? What type of adaptation efforts were made in North East Lincolnshire, England, and
Bremerhaven, Germany, in response to the 2013 tidal surge, and how did this affect urban
shrinkage?
The dissertation applies an integrated concept of Shrinkage-Resilience as a framework for analysis.
The methodology includes a review of existing models and frameworks, as well as case studies of
international and local contexts. The findings suggest that between 2013 and 2019, 68% of older industrial areas (including coastal ports) in England underwent changing urbanisation patterns relative to population, land use, and green belt areas, and are key areas for urban policy, such as the Levelling Up agenda. One of these areas, North East Lincolnshire, is discussed and compared to Bremerhaven. These examples demonstrate the link between Shrinkage-Resilience approaches and
their practical implementation in coastal port-industrial areas affected by urban shrinkage.
This research advances the scientific practice of urban planning and policy-making for shrinking cities
by introducing the approach of Shrinkage-Resilience, which emphasises the importance of
considering long-term social, economic, and environmental impacts in urban shrinkage contexts. This
approach is crucial in the transition to a more sustainable and inclusive society, where the welfare of
present and future generations, the environment, and economic development are taken into
account. The dissertation provides recommendations for urban planning to incorporate policy
changes for shrinking cities and coastal port-industrial areas worldwide, to include disaster risk
reduction and climate change adaptation approaches.
To increase the situational awareness of the crane operator, the aim of this thesis is to develop vision-based deep learning object detection from the crane's load view using adaptive perception in the construction area. Conventional worker detection methods are based on simple shape or color features of the workers' appearance. However, these methods can fail to recognize workers who do not wear protective gear. Manually handcrafting an image representation of objects seen from the top view is difficult; we therefore employ deep learning methods to learn those features automatically.
To yield optimal results, deep learning methods require massive amounts of data.
Due to the data deficit, especially in the construction domain, we developed a photorealistic virtual world to create training data in addition to the samples collected from real construction sites. The simulation platform benefits not only from diverse data types but also from concurrent research development, which speeds up the pipeline at low cost.
Our research findings indicate that combining synthetic and real training samples improves the state-of-the-art detector. In line with previous studies on bridging the gap between synthetic and real data, results with preprocessed synthetic images are better than those with raw synthetic data by approximately 10%.
Finding the right deep learning model for load-view detection is challenging.
By investigating our training data, it becomes evident that the majority of bounding boxes are very small and appear against complex backgrounds.
In addition, we gave priority to speed over accuracy based on construction safety criteria. Finally, RetinaNet was chosen out of the three primary object detection models considered.
Nevertheless, data-driven detection algorithms can fail to achieve scale invariance, especially when object sizes in the input vary over an extremely wide range.
The adaptive zoom feature can enhance the quality of the worker detection.
To avoid further data gathering and extensive retraining, the proposed automatic zoom method for the load-view crane camera supports the deep learning algorithm, specifically with the high-scale-variance problem. A finite state machine is employed as the control strategy, adapting the zoom level to cope not only with inconsistent detections but also with abrupt camera movement during lifting operations. Consequently, the detector is able to detect small objects via smooth, continuous zoom control without additional training.
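As a toy illustration of such a control strategy (the states, thresholds, and zoom update rule below are illustrative assumptions, not the thesis's actual parameters), a zoom FSM might look like this:

```python
from enum import Enum, auto

class ZoomState(Enum):
    HOLD = auto()       # detections stable: keep the current zoom level
    ZOOM_IN = auto()    # boxes too small or intermittently lost: magnify
    ZOOM_OUT = auto()   # track lost for long or target near frame edge: widen view

def next_state(box_area_frac, lost_frames, near_edge):
    """Transition logic of a toy zoom FSM.

    box_area_frac: median detection box area / frame area (0 if none),
    lost_frames: consecutive frames without a detection,
    near_edge: True if the tracked box touches the frame border.
    All thresholds are hypothetical, chosen only for illustration.
    """
    if near_edge or lost_frames > 10:
        return ZoomState.ZOOM_OUT   # recover the field of view first
    if 0 < box_area_frac < 0.001 or 0 < lost_frames <= 10:
        return ZoomState.ZOOM_IN    # help the detector with small workers
    return ZoomState.HOLD

def update_zoom(zoom, state, step=1.05, z_min=1.0, z_max=4.0):
    """Smooth, continuous zoom update instead of abrupt jumps."""
    if state is ZoomState.ZOOM_IN:
        return min(z_max, zoom * step)
    if state is ZoomState.ZOOM_OUT:
        return max(z_min, zoom / step)
    return zoom

# toy per-frame loop with fabricated detector outputs
zoom = 1.0
for area, lost, edge in [(0.0005, 0, False), (0.0, 3, False), (0.002, 0, True)]:
    state = next_state(area, lost, edge)
    zoom = update_zoom(zoom, state)
    print(state.name, round(zoom, 3))
```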
The adaptive zoom control not only enhances the performance of top-view object detection but also reduces the crane operator's interaction with the camera system, lowering the risk of fatalities during load lifting operations.
Aquatic habitats are closely linked to the adjacent riparian area. Fluxes of nutrients, energy and matter through emerging aquatic insects are a key component of the aquatic subsidy to terrestrial systems. In fact, adult insects serve as high-quality prey for riparian predators. Stressors impacting the aquatic subsidy can thus translate into consequences for the receiving terrestrial food web, while mechanistic knowledge is extremely limited. Against this background, this thesis aimed at (i) assessing the impact of a model stressor specifically targeting insect emergence, namely the mosquito control agent Bacillus thuringiensis var. israelensis (Bti), on the quantity, temporal dynamics and quality of emerging aquatic insects. For this purpose, outdoor floodplain pond mesocosms (n = 6) were employed. Since emergence is in most cases not a point event but occurs over a longer period, it was monitored over 3.5 months. The model stressor, i.e., Bti applied three times during spring at 2.88 × 10^9 ITU/ha, shifted the emergence time of aquatic insects, especially of non-biting midges (Diptera: Chironomidae), by ten days with a 26% reduced peak, while the nutrient content was not altered. On this basis, (ii) the propagation of the effects of altered aquatic subsidy emergence to riparian predators was investigated. Stable isotope analyses were used to assess the diet of a model predator, the web-building riparian spider Tetragnatha extensa. Results suggested changes in the composition of the spider’s diet, with missing Chironomidae replaced by other aquatic and terrestrial prey organisms, pointing to further negative consequences. Finally, the thesis aimed at (iii) understanding the processes underlying an altered emergence of aquatic subsidy mainly consisting of chironomids. Using a laboratory-based test design, populations of Chironomus riparius (n = 6) were assessed for their sensitivity towards Bti under different food qualities (high and low nutrition) before and after a long-term (six months) Bti exposure. Signs of phenotypic adaptation were observed in emergence time and nutrient content over multiple generations, resulting in changes in the chironomids’ quantity and quality as a food source. Overall, it can be concluded that direct and indirect effects of an aquatic stressor, as well as the adaptive response to it, can alter ecosystems at different levels, including the individual, population and community level. Furthermore, this thesis highlights the importance of a temporal perspective when investigating the impact of aquatic stressors beyond ecosystem boundaries. It illustrates potential bottom-up effects on riparian predators through altered emergence of aquatic insects, informing our understanding of meta-ecosystems and how stressors and their effects are transferred across systems. These insights will support efforts to protect and conserve natural ecosystems.
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day-ahead market allows smaller companies without their own trading department to participate, since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day-ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. Here we present existing algorithms for scenario tree generation as well as our own extensions and adaptations. These are based on the nested distance, which measures the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context the methods are appropriate.
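To make the scenario-tree idea concrete, the following is a deliberately crude sketch based on stagewise quantile quantization; it produces a recombining lattice with empirical transition probabilities rather than the nested-distance-optimized trees developed in the thesis, and all inputs are simulated toy data:

```python
import numpy as np

def stagewise_lattice(paths, n_nodes):
    """Quantize simulated price paths stage by stage.

    paths: (n_samples, T) array of simulated day-ahead prices (assumed to
    come from a calibrated multi-factor model). Returns per-stage node
    values and transition probability matrices. Quantile binning keeps
    every node populated; a real implementation would instead minimize
    a (nested) distance between the tree and the sampled process.
    """
    n_samples, T = paths.shape
    values, labels = [], []
    for t in range(T):
        edges = np.quantile(paths[:, t], np.linspace(0, 1, n_nodes + 1))
        lab = np.clip(np.searchsorted(edges, paths[:, t]) - 1, 0, n_nodes - 1)
        values.append(np.array([paths[lab == j, t].mean() for j in range(n_nodes)]))
        labels.append(lab)
    trans = []
    for t in range(T - 1):
        P = np.zeros((n_nodes, n_nodes))
        np.add.at(P, (labels[t], labels[t + 1]), 1.0)  # count sample transitions
        trans.append(P / np.maximum(P.sum(axis=1, keepdims=True), 1.0))
    return values, trans

# toy usage with random-walk price paths
rng = np.random.default_rng(0)
paths = 50 + np.cumsum(rng.normal(0, 2, size=(1000, 5)), axis=1)
vals, trans = stagewise_lattice(paths, n_nodes=3)
print(vals[0], trans[0], sep="\n")
```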
Virtual Possibilities: Exploring the Role of Emerging Technologies in Work and Learning Environments
(2024)
The present work aims to investigate whether virtual reality can support learning as well as vocational work environments. To this end, four studies were conducted, with the first set investigating the demands on vocational workers and the impact of input methods on participant performance. These studies laid the foundation needed to create studies incorporating virtual reality research. The second set of studies was concerned with the impact of virtual reality on learning performance as well as the influence of binaural stimuli presentation on task performance. Results of each study are discussed individually and in conjunction with one another. The four studies are supplemented with additional research conducted by the author as well as an analysis of the growing field of virtual reality-based research. The thesis closes by embedding the discussed work into the scientific landscape and offering an outlook on virtual reality-based use cases in the future.
In recent years, there has been a growing need for accurate 3D scene reconstruction. Recent developments in the automotive industry have led to the increased use of advanced driver assistance systems (ADAS), where 3D reconstruction techniques are used, for example, as part of a collision detection system. For such applications, scene geometry reconstruction is usually performed in the form of depth estimation, where distances to scene objects are obtained.
In general, depth estimation systems can be divided into active and passive. Both systems have their advantages and disadvantages, but passive systems are usually cheaper to produce and easier to assemble and integrate than active systems. Passive systems can be stereo- or multiple-view based. Up to a certain limit, increasing the number of views in multi-view systems usually results in improved depth estimation accuracy.
One potential problem for ensuring the reliability of multi-view systems is the need to accurately estimate the orientation of their optical sensors. One way to ensure sensor placement for multi-view systems is to rigidly fix the sensors at the manufacturing stage. Unlike arbitrary sensor placement, using a simplified and known sensor placement geometry further simplifies depth estimation.
Here we encounter the concept of the light field, which parameterizes all visible light rays passing through all viewpoints by their intersections with an angular and a spatial plane. Applied to computer vision, this gives us a 2D set of 2D images, where the physical distances between the images are fixed and proportional to each other.
Existing light field depth estimation methods provide good accuracy, which is suitable for industrial applications. However, the main problems of these methods are their running time and resource requirements. Most of the algorithms presented in the literature are tuned for accuracy, can only be run on high-performance machines, and often require a significant amount of time to produce results.
Real-world applications often have running time requirements. Often there is also a power-consumption limitation. In this dissertation, we investigate the problem of building a depth estimation system with a light field camera that satisfies operating time and power consumption constraints without significant loss of estimation accuracy.
First, an algorithm for calibrating light field cameras is proposed, together with an algorithm for automatic calibration refinement that works on arbitrary captured scenes. An algorithm for classical geometric depth estimation using light field cameras is then proposed. Ways to optimize the algorithm for real-time use without significant loss of accuracy are presented. Finally, it is shown how the presented depth estimation methods can be extended using modern deep learning paradigms under the two previously mentioned constraints.
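For a flavour of classical geometric light-field depth estimation (a textbook structure exploited by many methods, not necessarily the exact algorithm of this thesis), local disparity can be read off the slope of lines in an epipolar-plane image (EPI):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma=2.0, eps=1e-6):
    """Local disparity from a horizontal epipolar-plane image (EPI).

    epi is indexed as (view index s, pixel x). A scene point at disparity d
    traces the line x(s) = x0 + d*s with constant intensity, so
    Ix*d + Is = 0  =>  d = -Is/Ix; Gaussian averaging stabilizes the ratio.
    A didactic sketch -- real pipelines add occlusion and confidence handling.
    """
    Is, Ix = np.gradient(epi.astype(float))   # gradients along s and x
    num = gaussian_filter(Is * Ix, sigma)
    den = gaussian_filter(Ix * Ix, sigma)
    return -num / (den + eps)

# synthetic EPI: a sinusoidal texture shifted by disparity d = 0.5 per view
s, x = np.mgrid[0:9, 0:64]
epi = np.sin(0.4 * (x - 0.5 * s))
print(epi_disparity(epi)[4, 20:24])   # values should be near 0.5
```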
With the expansion of electromobility and wind energy, the number of frequency inverter-controlled electric motors and generators is increasing. In parallel, the number of rolling bearing failures caused by inverter-induced parasitic currents also shows an increasing trend. In order to determine the electrical state of the rolling bearing, to develop preventive measures against damage caused by parasitic currents and to support system-level calculations, electrical rolling bearing models have been developed. The models are based on the electrical insulating ability of the lubricant film that develops in the rolling contacts. For the capacitance calculation of the rolling contacts, different correction factors were developed to simplify the complex tribological and electrical interactions of this region. The state-of-the-art correction factors vary widely, and their validity ranges also differ significantly, which leads to uncertainty in their general application and to the demand for further investigations in this field. In the present work, a combined simulation method is developed that can determine the capacitance of axially loaded rolling bearings. The simulation consists of an electrically extended EHL simulation for calculating the capacitance of the rolling contact, and an electrical FEM simulation for the capacitance calculation of the non-contact regions. By combining the resulting capacitance values of the two simulation methods, the total rolling bearing capacitance can be determined with high accuracy and without using correction factors. In addition, through experimental investigations, the different capacitance sources of the rolling bearing are identified. After validation of the combined simulation method, it can be applied to investigate the different capacitance sources, i.e., to determine their significance relative to the total rolling bearing capacitance. The developed simulation method allows a detailed analysis of the rolling bearing capacitances, taking into account influencing factors that could not be considered before (e.g., the oil quantity in the environment of the rolling bearing). As a result, the accurate calculation of the rolling bearing capacitance can improve the prediction of harmful parasitic currents and help to develop preventive measures against them.
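For orientation, the simplified correction-factor model from the literature, which the combined EHL/FEM method is designed to replace, treats each loaded rolling contact as a plate capacitor (symbols as commonly used in the literature; this is background, not the thesis's method):

\[
C_{\mathrm{contact}} \;=\; k_C \,\varepsilon_0 \varepsilon_r \,\frac{A_{\mathrm{Hertz}}}{h_0},
\qquad
C_{\mathrm{bearing}} \;\approx\; n \,\frac{C_{\mathrm{inner}}\, C_{\mathrm{outer}}}{C_{\mathrm{inner}} + C_{\mathrm{outer}}},
\]

where \(A_{\mathrm{Hertz}}\) is the Hertzian contact area, \(h_0\) the central lubricant film thickness, \(\varepsilon_r\) the relative permittivity of the lubricant, \(k_C\) the correction factor accounting for the contact's surroundings, and \(n\) the number of loaded rolling elements (inner- and outer-ring contacts in series, rolling elements in parallel).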
Knowledge workers face an ever increasing flood of information in their daily work. They live in a “multi-tasking craziness”, involving activities like creating, finding, processing, assessing or organizing information while constantly switching from one context to another, each being associated with different tasks, documents, mails, etc. Hence, their personal information sphere consisting of file, mail and bookmark folders as well as their content, calendar entries, etc. is cluttered with information that has become irrelevant. Finding important information thus gets harder and much of previously gained knowledge is practically lost.
This thesis explores new ways of solving this problem by investigating the potential of self-(re)organizing and especially forgetting-enabled personal knowledge assistants in the given scenario. It utilizes so-called Managed Forgetting, which is an escalating set of measures to overcome the binary keep-or-delete paradigm, ranging from temporal hiding, to condensation, to adaptive reorganization, synchronization, archiving and deletion. Managed Forgetting is combined with two other major ideas: First, it uses the Semantic Desktop as an ecosystem, which brings Semantic Web and thus knowledge graph technologies to a user’s desktop, making it possible to capture and represent major parts of a user’s personal mental model in a machine-understandable way and exploit it in many different applications. Second, the system uses explicated context information – so-called Context Spaces: context is seen as an explicit interaction element users can work with (i.e. a “tangible” object similar to a folder) and in (immersion). The thesis is structured according to the basic interaction cycle with such a system, ranging from evidence collection to information extraction and context elicitation, followed by information value assessment and the actual support measures consisting of self-(re)organization decisions (back-end) and user interface updates (front-end). The system’s data foundation consists of personal or group knowledge graphs as well as native data. This work makes contributions to all of these aspects, several of which have been investigated and developed in interdisciplinary research with cognitive scientists. On a more general level, searching and trust in such highly autonomous assistants have also been investigated.
In summary, a self-(re)organizing and especially forgetting-enabled support system for information management and knowledge work has been realized. Its different features vary in maturity: the most mature ones are already in practical use (also in industry), while the latest are just well elaborated (position papers) or rough ideas. Different evaluation strategies have been applied ranging from mere data-driven experiments to various user studies. Some of them were rather short-term with controlled laboratory conditions, others less controlled but spanning several months. Different benefits of working with such a system could be quantified, e.g. cognitive offloading effects and reduced task switching/resumption time. Other benefits were gathered qualitatively, e.g. tidiness of the information sphere and its better alignment with the user’s mental model. The presented approach has been shown to hold a lot of potential. In some aspects, however, only first steps have been taken towards tapping it, e.g. several support measures can be further refined and automation further increased.
This thesis focuses on the operation of reliability-constrained routes in wireless ad-hoc networks. A complete communication protocol that is capable of guaranteeing a statistical minimum reliability level would have to support several functionalities: first, routes that are capable of supporting the specified Quality of Service requirement have to be discovered; second, during operation of discovered routes, the current Quality of Service level has to be monitored continuously; third, whenever significant deviations are detected and the required level of Quality of Service is endangered, route maintenance has to ensure continuous operation; and fourth, all of this relies on the collection and distribution of network status information. All four functionalities, route discovery, route operation, route maintenance, and the collection and distribution of network status information, will be addressed in this thesis.
In the first part of the thesis, we propose a new approach for Quality-of-Service routing in wireless ad-hoc networks called rmin-routing, with the provision of a statistical minimum route reliability as the main route selection criterion. To achieve specified minimum route reliabilities, we improve the reliability of individual links by well-directed retransmissions, to be applied during the operation of routes. To select among a set of candidate routes, we define and apply route quality criteria concerning network load.
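The underlying arithmetic can be sketched as follows, assuming independent losses per transmission attempt: a link with delivery probability \(p_l\) and up to \(r_l\) retransmissions succeeds with probability

\[
p_l(r_l) \;=\; 1 - (1 - p_l)^{\,r_l + 1},
\qquad
R_{\mathrm{route}} \;=\; \prod_{l \in \mathrm{route}} p_l(r_l) \;\geq\; R_{\min},
\]

so route discovery must pick routes and per-link retransmission budgets such that the product meets the required minimum reliability; for instance, a three-hop route of links with \(p_l = 0.9\) and one retransmission each achieves \(0.99^3 \approx 0.970\).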
High-quality information about the network status is essential for the discovery and operation of routes and clusters in wireless ad-hoc networks. This requires permanent observation and assessment of nodes, links, and link metrics, and the exchange of gathered status data. In the second part of the thesis, we present cTEx, a configurable topology explorer for wireless ad-hoc networks that efficiently detects and exchanges high-quality network status information during operation.
In the third part, we propose a decentralized algorithm for the discovery and operation of reliability-constrained routes in wireless ad-hoc networks called dRmin-routing. The algorithm uses locally available network status information about network topology and link properties that is collected proactively in order to discover a preliminary route candidate. This is followed by a distributed, reactive search along this preselected route to remove imprecisions of the locally recorded network status before making a final route selection. During route operation, dRmin-routing monitors routes and performs different kinds of route repair actions to maintain route reliability in order to overcome varying link reliabilities.
Modeling and Simulation of Internet of Things Infrastructures for Cyber-Physical Energy Systems
(2024)
This dissertation presents a novel approach to the model-based development and simulation-based validation of Internet of Things (IoT) infrastructures within the context of Cyber-Physical Energy Systems (CPES). CPES represents an evolution in energy management, seamlessly blending physical and cyber components for efficient, secure, and dependable energy distribution. However, the intricate interplay of these components demands innovative modeling and simulation strategies.
The work begins by establishing a robust foundation, exploring essential background elements such as requirements engineering, model-based systems engineering, digitalization approaches, and the intricacies of IoT platforms. It also introduces homomorphic encryption, a critical enabler for securing IoT data within CPES.
In the exploration of the state of the art, the dissertation delves into the multifaceted landscape of IoT simulation, emphasizing the significance of versatility, community support, scalability, and synchronization.
The core contribution emerges in the chapter on simulating IoT networks. It introduces a sophisticated framework that encompasses hardware-in-the-loop, software-in-the-loop, and human-in-the-loop simulation. This innovative framework extends the boundaries of conventional simulation, enabling holistic evaluations of IoT systems.
A practical case study on smart energy usage showcases the application of the framework. Detailed SysML models, including requirements, package diagrams, block definition diagrams, internal block diagrams, state machine diagrams, and activity diagrams, are meticulously examined. The performance evaluation encompasses diverse aspects, from hardware and software validation to human interaction.
In conclusion, this dissertation represents a significant leap forward in the integration of IoT infrastructures within CPES. Its contributions extend from a comprehensive understanding of foundational elements to the practical implementation of a holistic simulation framework. This work not only addresses the current challenges but also outlines a path for future research, shaping the landscape of IoT integration within the dynamic realm of CPES. It offers invaluable insights for researchers, engineers, and stakeholders working towards resilient, secure, and energy-efficient infrastructures.
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and to scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20 to 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
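Assuming first-order dissipation kinetics, a common default for such degradation data (not necessarily the exact model fitted in this work), the reported half-lives translate into rate constants via

\[
C(t) = C_0\, e^{-kt}, \qquad k = \frac{\ln 2}{DT_{50}},
\]

so half-lives of 20 to 65 days correspond to \(k \approx 0.035\) down to \(0.011\ \mathrm{d}^{-1}\).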
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study different realistic scenarios beyond the limitations of studies via controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
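In generic form (the precise force terms and scalings used in the thesis differ in their details), the coupled microscopic system reads

\[
\dot x_i = v_i, \qquad
\dot v_i = \frac{w\, e(x_i) - v_i}{\tau} + \frac{1}{m_i}\sum_{j \neq i} F_{ij},
\qquad
e(x) = -\frac{\nabla \phi(x)}{\lVert \nabla \phi(x) \rVert},
\]

where \(w\) is the desired speed, \(\tau\) a relaxation time, \(F_{ij}\) the pairwise social forces, and \(\phi\) solves the Eikonal equation \(\lVert \nabla \phi(x) \rVert = 1/f(x)\) for a local speed field \(f\), so that \(e\) points along the optimal path towards the destination.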
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered - "passive", which move in their predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic", which have a feedback interaction with pedestrians and have their trajectories changing dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We appropriately model the interactions of vehicles, following lane traffic, based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable-speed-based rational behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios, repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least squares approximations. The mesh-free numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
Mechanistic disease spread models for different vector-borne diseases have been studied since the 19th century. The relevance of mathematical modeling and numerical simulation of disease spread is increasing nowadays. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans. An example of an arboviral disease that falls under this category is the Zika virus disease. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed. The analytical results that are mathematically proven for this model are numerically verified. Another SIRUV model is presented by considering a different formulation of the model parameters, and the newly obtained model is shown to clearly incorporate the dependence of the disease spread on the ratio of mosquito population size to human population size. In order to incorporate the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model was developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities like administrative areas, districts, provinces, cities, states or even countries. The research focused only on the short-term movements or commuting behavior of humans across the patches. This is incorporated in the multi-patch meta-population model using a matrix of residence time fractions of humans in each patch. Mathematically simplified analytical results are deduced, by which it is shown that, for an exemplary scenario studied numerically, the multi-patch model also admits the threshold properties that the single-patch SIRUV model holds. The relevance of the commuting behavior of humans in the disease spread is presented using numerical results from this model. Local and non-local commuting are incorporated into the meta-population model in a numerical example. Later, a PDE model is developed from the multi-patch model.
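One common formulation of such a SIRUV model, stated here for orientation only (the parameterizations studied in the thesis differ, as noted above), couples human compartments \(S, I, R\) with susceptible and infected vectors \(U, V\):

\[
\begin{aligned}
\dot S &= \mu (N - S) - \Big(\beta \tfrac{V}{N} + \beta_h \tfrac{I}{N}\Big) S, &
\dot U &= \nu (M - U) - \vartheta \tfrac{I}{N}\, U,\\
\dot I &= \Big(\beta \tfrac{V}{N} + \beta_h \tfrac{I}{N}\Big) S - (\gamma + \mu) I, &
\dot V &= \vartheta \tfrac{I}{N}\, U - \nu V,\\
\dot R &= \gamma I - \mu R, &&
\end{aligned}
\]

with human population \(N = S + I + R\) and vector population \(M = U + V\). The direct human-to-human term \(\beta_h S I / N\) distinguishes this class from purely vector-borne models, and the ratio \(M/N\) enters the transmission pressure once \(V\) is expressed relative to \(M\).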
In this thesis, material removal mechanisms in grinding are investigated considering grit-workpiece interaction as well as grinding wheel-workpiece interaction. For grit-workpiece interaction on the micrometer scale, single-grit scratch experiments were performed to investigate the material removal mechanisms in grinding, namely rubbing, plowing, and cutting. The experiments were analyzed based on material removal, process forces and specific energy. A finite element model is developed to simulate a single-grit scratch process. As part of the development of the finite element scratch model, a 2D and a 3D model are developed. The 2D model is utilized to test material parameters and various mesh discretization approaches. The 3D model, adopting the tested material parameters from the 2D model, is developed and tested against experimental results for various mesh discretizations. The simulation model is validated based on process forces and ground topography from experiments. The model is further scaled to simulate multiple grit-workpiece interactions, validated against experimental results. As a final step, simulation models are developed to simulate material removal due to the interaction of grinding wheel and workpiece. A virtual grinding wheel topography model is employed to demonstrate an approach for upscaling a grinding process from grit-workpiece interaction to wheel-workpiece interaction. In conclusion, practical conclusions and the scope for future studies are derived based on the developed simulation models.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
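In a stylized version of such a demand problem (a sketch; the thesis's contract space and premium principles are richer), an insured with initial wealth \(w\), random loss \(L\), utility \(u\), and premium rate \(\pi\) per unit of coverage chooses

\[
q^\star(\pi) \;\in\; \arg\max_{q \in [0,1]} \; \mathbb{E}\Big[\, u\big(w - \pi q - (1 - q) L\big) \Big],
\]

while the insurer sets \(\pi\) either at the zero-profit level under perfect competition or, in the monopolistic case, to maximize the expected margin \(\big(\pi - \mathbb{E}[L]\big)\, q^\star(\pi)\).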
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter, we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study on the use of neural networks for the regression of equilibrium insurance premiums, it is shown that this regression is quite robust and the risk of overfitting can almost be excluded, as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Climate change will have severe consequences for Eastern Boundary Upwelling Systems (EBUS). They host the largest fisheries in the world, supporting the lives of millions of people through their tremendous primary production. Therefore, it is of utmost importance to better understand predicted impacts like altered upwelling intensities and light impediment on the structure and the trophic role of protistan plankton communities, as they form the basis of the food web. Numerical models predict an intensification in the frequency of eddy formation. These ocean features are of particular importance due to their influence on the distribution and diversity of plankton communities and on the access to resources, influences that are still not well understood to the present day. My PhD thesis comprises two subjects conducted during the large-scale cooperation projects REEBUS (Role of Eddies in Eastern Boundary Upwelling Systems) and CUSCO (Coastal Upwelling System in a Changing Ocean).
Subject I of my study was conducted within the multidisciplinary framework REEBUS to investigate the influence of eddies on the biological carbon pump in the Canary Current System (CanCS). More specifically, the aim was to find out how mesoscale cyclonic eddies affect the regional diversity, structure, and trophic role of protistan plankton communities in a subtropical oligotrophic oceanic offshore region.
Samples were taken during the M156 and M160 cruises in the Atlantic Ocean around Cape Verde during July and December 2019, respectively. Three eddies of different ages since emergence and three water layers (the deep chlorophyll maximum (DCM), directly beneath the DCM, and the oxygen minimum zone (OMZ)) were sampled. Additional stations without eddy perturbation were analyzed as references. The effect of oceanic mesoscale cyclonic eddies on protistan plankton communities was analyzed by implementing three approaches. (i) V9 18S rRNA gene amplicons were examined to analyze the diversity and structure of the plankton communities and to infer their role in the biological carbon pump. (ii) By assigning functional traits to taxonomically assigned eDNA sequences, functional richness and ecological strategies (ES) were determined. (iii) Grazing experiments were conducted to assess abundance and carbon transfer from prokaryotes to phagotrophic protists.
All three eddies examined in this study differed in their ASV abundance, diversity, and taxonomic composition, with the most pronounced differences in the DCM. Dinoflagellates were the most abundant taxa in all three depth layers. Other dominating taxa were radiolarians, Discoba and haptophytes. The trait approach could only assign ~15% of all ASVs and revealed a generally high functional richness, but no unique ES was determined within a specific eddy. This indicates pronounced functional redundancy, which is recognized to be correlated with ecosystem resilience and robustness by providing a degree of buffering capacity in the face of biodiversity loss. Elevated microbial abundances as well as bacterivory were clearly associated with mesoscale eddy features, albeit with remarkable seasonal fluctuations. Since eddy activity is expected to increase on a global scale in future climate change scenarios, cyclonic eddies could counteract climate change by enhancing carbon sequestration to abyssal depths. The findings demonstrate that cyclonic eddies are unique, heterogeneous, and abundant ecosystems with trapped water masses in which characteristic protistan plankton develop as the eddies age and migrate westward into subtropical oligotrophic offshore waters. Therefore, eddies influence regional protistan plankton diversity qualitatively and quantitatively.
Subject II of my PhD project contributed to the CUSCO field campaign to identify the influence of varying upwelling intensities in combination with distinct light treatments on the whole food web structure and carbon pump in the Humboldt Current System (HCS) off Peru. To accomplish such a task, eight offshore mesocosms were deployed and two light scenarios (low light, LL; high light, HL) were created by darkening half of the mesocosms. Upwelling was simulated by injecting distinct proportions (0%, 15%, 30% and 45%) of collected deep-water (DW) into each of the moored mesocosms. My aim was to examine the changes in diversity, structure, and trophic role of protistan plankton communities in response to the induced manipulations by analyzing the V9 18S rRNA gene amplicons and performing short-term grazing experiments.
The upwelling simulations induced a significant increase in alpha diversity under both light conditions. In austral summer, reflected by the HL conditions, a generally higher alpha diversity was recorded compared to the austral winter simulation, induced by the LL treatment. Significant alterations of the protistan plankton community structure could likewise be observed. Diatoms were associated with increased levels of DW addition in the mimicked austral winter situation. Under nutrient depletion, chlorophytes exhibited high relative abundances in the simulated austral winter scenario. Dinoflagellates dominated the austral summer condition in all upwelling simulations. Tendencies towards reduced unicellular eukaryote abundances and increased prokaryotic abundances were determined under light impediment. Protistan-mediated mortality of prokaryotes also decreased by ~30% in the mimicked austral winter scenario.
The findings indicate that the microbial loop is a more relevant factor in the structure of the food web in austral summer and is more focused on the utilization of diatoms in austral winter in the HCS off Peru. It was evident that distinct light intensities coupled with multiple upwelling scenarios could lead to alterations in biochemical cycles, trophic interactions, and ecosystem services. Considering the threat of climate change, the predicted relocation of EBUS could limit primary production and lengthen the food web structure with severe socio-economic consequences.
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increased degree of digitalization and the personalization of goods and products require an enhanced and flexible robot deployment by operating several multi-robot systems along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced consisting of two decoupled optimization-based control policies, a binary problem for task planning, and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
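Schematically, and omitting the additional state, input, and collision-avoidance constraints used in this work, the time-minimal path-following task can be cast as a model predictive problem over a horizon \(T_h\):

\[
\min_{u(\cdot),\, v(\cdot)} \; \int_0^{T_h} \big\lVert y(t) - p(\theta(t)) \big\rVert_Q^2 \;-\; \rho\, \dot\theta(t)\; dt
\quad \text{s.t.} \quad \dot x = f(x, u),\;\; y = h(x),\;\; \dot\theta = v \ge 0,
\]

so the robot output \(y\) is pushed along the parameterized path \(p(\theta)\) as fast as the deviation penalty allows, with the reward weight \(\rho\) trading progress against tracking accuracy.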
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Cancer, a complex and multifaceted disease, continues to challenge the boundaries of biomedical research. In this dissertation, we explore the complexity of cancer genesis, employing multiscale modeling, abstract mathematical concepts such as stability analysis, and numerical simulations as powerful tools to decipher its underlying mechanisms. Through a series of comprehensive studies, we mainly investigate the cell cycle dynamics, the delicate balance between quiescence and proliferation, the impact of mutations, and the co-evolution of healthy and cancer stem cell lineages. The introductory chapter provides a comprehensive overview of cancer and the critical importance of understanding its underlying mechanisms. Additionally, it establishes the foundation by elucidating key definitions and presenting various modeling perspectives to address the cancer genesis. Next, cell cycle dynamics have been explored, revealing the temporal oscillatory dynamics that govern the progression of cells through the cell cycle.
The first half of the thesis investigates the cell cycle dynamics and evolution of cancer stem cell lineages by incorporating feedback regulation mechanisms. Thereby, the pivotal role of feedback loops in driving the expansion of cancer stem cells has been thoroughly studied, offering new perspectives on cancer progression. Furthermore, the mathematical rigor of the model has been addressed by deriving well-posedness conditions, thereby strengthening the reliability of our findings and conclusions. Then, expanding our modeling scope, we explore the interplay between quiescent and proliferating cell populations, shedding light on the importance of their equilibrium in cancer biology. The models developed in this context offer potential avenues for targeted cancer therapies, addressing the respective cell populations critical for cancer progression. The second half of the thesis focuses on multiscale modeling of proliferating and quiescent cell populations incorporating cell cycle dynamics and the extension thereof with mutation acquisition. Following rigorous mathematical analysis, the well-posedness of the proposed modeling frameworks has been studied along with steady-state solutions and stability criteria.
In a nutshell, this thesis represents a significant stride in our understanding of cancer genesis, providing a comprehensive view of the complex interplay between cell cycle dynamics, quiescence, proliferation, mutation acquisition, and cancer stem cells. The journey towards conquering cancer is far from over. However, this research provides valuable insights and directions for future investigation, bringing us closer to the ultimate goal of mitigating the impact of this formidable disease.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is that of a well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
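For orientation, the abstract Hodge–Laplace problem in degree k admits the following mixed weak formulation, a standard result of FEEC stated here without harmonic forms: find (\sigma, u) \in V^{k-1} \times V^{k} such that
\[
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, d\tau \rangle &= 0 && \forall\, \tau \in V^{k-1},\\
\langle d\sigma, v \rangle + \langle du, dv \rangle &= \langle f, v \rangle && \forall\, v \in V^{k},
\end{aligned}
\]
where d denotes the differential of the underlying Hilbert complex and V^{k} its domain spaces. This saddle-point structure is exactly what is discretized in the following for complexes with second-order differential operators.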
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific paradigm for discretization, combining geometric representations with Non-Uniform Rational B-Splines (NURBS) and Finite Element discretizations. Secondly, we primarily concentrate on mixed formulations exhibiting a saddle-point structure and generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, for which we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems, which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints, the system-wide problem can be decomposed into individual subproblems. These resulting subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate function of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of different benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
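As a minimal illustration of the underlying dual decomposition (a plain projected dual-ascent baseline with invented data, not the QADA or QNDA algorithms themselves), consider two quadratic subproblems coupled by one shared resource:

import numpy as np

# Subsystems i solve min 0.5*||x_i - c_i||^2, coupled by sum_i A_i @ x_i <= b.
# Dualizing the coupling constraint with multipliers lam decomposes the problem;
# lam is adapted iteratively using the constraint residual, which is a
# (sub)gradient of the dual function.
A = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]  # coupling matrices
c = [np.array([3.0, 1.0]), np.array([2.0, 4.0])]      # local targets
b = np.array([2.5])                                   # shared resource limit

lam = np.zeros(1)
for k in range(200):
    # local subproblems have the closed-form solution x_i = c_i - A_i^T lam
    x = [c_i - A_i.T @ lam for A_i, c_i in zip(A, c)]
    residual = sum(A_i @ x_i for A_i, x_i in zip(A, x)) - b  # dual gradient
    lam = np.maximum(0.0, lam + 0.1 * residual)              # projected ascent

print(lam, [x_i.round(3) for x_i in x])

QADA and QNDA replace the fixed step size of such a scheme by maximizing a quadratic surrogate of the dual function fitted from residual evaluations, with cutting planes guarding against the dual function's nonsmoothness.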
Lubricated tribological contact processes are important both in nature and in many technical applications. Fluid lubricants play an important role in contact processes, e.g. they reduce friction and cool the contact zone. The fundamentals of lubricated contact processes on the atomistic scale are, however, not yet fully understood. A lubricated contact process is defined here as a process in which two solid bodies that are in close proximity, and eventually in parts in direct contact, carry out a relative motion, while the remaining volume is filled with a fluid lubricant. Such lubricated contact processes are difficult to examine experimentally. Atomistic simulations are an attractive alternative for investigating the fundamentals of such processes. In this work, molecular dynamics simulations were used for studying different elementary processes of lubricated tribological contacts. For this purpose, a simplified, yet realistic simulation setup was developed using classical force fields. In particular, the two solid bodies were fully submersed in the fluid lubricant such that the squeeze-out was realistically modeled. The velocity of the relative motion of the two solid bodies was imposed as a boundary condition. Two types of cases were considered in this work: i) a model system based on synthetic model substances, which enables a direct, but generic, investigation of the influence of molecular interaction features on the contact process; and ii) real-substance systems, where the force fields describe specific real substances. Using model system i), the reproducibility of the findings obtained from the computer experiments was also critically assessed. In most cases, the dry reference case was studied as well. Both mechanical and thermodynamic properties were studied, focusing on the influence of lubrication. The following properties were investigated: the contact forces, the coefficient of friction, the dislocation behavior in the solid, the chip formation and the formation of the groove, the squeeze-out behavior of the fluid in the contact zone, the local temperature and the energy balance of the system, the adsorption of fluid particles on the solid surfaces, as well as the formation of a tribofilm. Systematic studies were carried out to elucidate the influence of the wetting behavior, the molecular architecture of the lubricant, and the lubrication gap height on the contact process. As expected, the presence of a fluid lubricant reduces the temperature in the vicinity of the contact zone. The presence of the lubricant is, moreover, found to have a significant influence on the friction and on the energy balance of the process. The presence of a lubricant reduces the coefficient of friction compared to the dry case in the starting phase of a contact process, while lubricant molecules remain in the contact zone between the two solid bodies. This is a result of an increased normal and slightly decreased tangential force in the starting phase. When the fluid molecules are squeezed out with ongoing contact time and the contact zone is essentially dry, the coefficient of friction is increased by the presence of a fluid compared to the dry case. This is attributed to an imprinting of individual fluid particles into the solid surface, which is energetically unfavorable. By studying the contact process over a wide range of gap heights, the entire range of the Stribeck curve is obtained from the molecular simulations.
Thereby, the three main lubrication regimes of the Stribeck curve and their transition regions are covered, namely boundary lubrication (significant elastic and plastic deformation of the substrate), mixed lubrication (adsorbed fluid layers dominate the process), and hydrodynamic lubrication (shear flow is set up between the surface and the asperity). The atomistic effects in the different lubrication regimes are elucidated. Notably, the formation of a tribofilm is observed, in which lubricant molecules are immersed into the metal surface. The formation of a tribofilm is found to have important consequences for the contact process. The work done by the relative motion is found to mainly dissipate and thereby heat up the system. Only a minor part of the work causes plastic deformation. Finally, the assumptions, simplifications, and approximations applied in the simulations are critically discussed, which highlights possible future work.
Reactive absorption with amines is the most important technique for the removal of CO2 from gas streams, e.g. from flue gas, natural gas or off-gas from the cement industry. In this work, a rigorous simulation model for the absorption and desorption of CO2 with an amine-containing solvent is validated using data from pilot plants of various sizes. This model was then coupled with a detailed simulation of a coal-fired power plant. The power generation efficiency drop with CO2 capture was determined, and process parameters in the power plant and separation process were optimized. It was shown that the high energy demand of CO2 separation significantly reduces power generation efficiencies, which underlines the need for improvements. This can be achieved by better solvents or by advanced process designs. In this work, such improved CO2 separation processes are described and evaluated by detailed simulation studies.
In order to develop detailed rigorous simulation models for reactive absorption with novel solvent systems, a precise knowledge of the liquid-phase reaction kinetics is necessary. There are well-established techniques for measuring species distributions in equilibrated aqueous amine solutions by NMR spectroscopy. However, the existing NMR techniques cannot be used for monitoring fast reactions in these solutions. Therefore, in this work a novel temperature-controlled micro-reactor NMR probe head was developed which enables studying reaction kinetics with time constants in the range of seconds.
On this basis, modern solvent systems for CO2 absorption can be characterized, and the scale-up of separation processes for future plants can be accompanied by rigorous process simulation.
Pervasive human impacts rapidly change freshwater biodiversity. Frequently recorded exceedances of regulatory acceptable thresholds by pesticide concentrations suggest that pesticide pollution is a relevant contributor to broad-scale trends in freshwater biodiversity. A more precise pre-release Ecological Risk Assessment (ERA) might increase its protectiveness, consequently reducing the likelihood of unacceptable effects on the environment. European ERA currently neglects possible differences in sensitivity between exposed ecosystems. If the taxonomic composition of assemblages differed systematically among certain types of ecosystems, so might their sensitivity toward pesticides. In that case, a single regulatory threshold would be over- or underprotective.
In this thesis, we evaluate (1) whether the assemblage composition of macroinvertebrates, diatoms, fishes, and aquatic macrophytes differs systematically between the types of a European river typology system, and (2) whether these taxonomical differences engender differences in sensitivity toward pesticides. While a selection of ecoregions is available for Europe, only a single typology system that classifies individual river segments is available at this spatial scale - the Broad River Types (BRT).
In the first two papers of this thesis, we compiled and prepared large databases of macroinvertebrate (paper one), diatom, fish, and aquatic macrophyte (paper two) occurrences throughout Europe to evaluate whether assemblages are more similar within than among BRT types. Additionally, we compared the BRT's performance to that of different ecoregion systems. We employed multiple tests to evaluate the performances, two of which were also designed in these studies. All typology systems failed to reach common quality thresholds for the evaluated metrics for most taxa. Nonetheless, performance differed markedly between typology systems and taxa, with the BRT often performing worst. We showed that currently available European freshwater typology systems are not well suited to capture differences in biotic communities and suggest several possible improvements.
In the third study, we evaluated whether ecologically meaningful differences in sensitivity exist between BRT types. To this end, we predicted the sensitivity of macroinvertebrate assemblages across Europe toward atrazine, copper, and imidacloprid using a hierarchical species sensitivity distribution model. The predicted assemblage sensitivities differed only marginally between BRT types. The largest difference between median river type sensitivities was a factor of 2.6, which is far below the assessment factor suggested for such models (6), as well as the factor of variation commonly observed between toxicity tests of the same species-compound pair (7.5 for copper). Our results do not support the notion that a type-specific ERA might improve the accuracy of thresholds. However, in addition to the taxonomic composition, the bioavailability of chemicals, the interaction with other stressors, and the sensitivity of a given species might differ between river types.
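For context, the standard log-normal species sensitivity distribution underlying such models (a textbook relation, not a result of this thesis) expresses the affected fraction of species F(c) at concentration c and the hazardous concentration HC5 as
\[
F(c) = \Phi\!\left(\frac{\log_{10} c - \mu}{\sigma}\right), \qquad
HC_5 = 10^{\,\mu + \sigma\,\Phi^{-1}(0.05)},
\]
where \Phi is the standard normal distribution function and \mu, \sigma are assemblage-specific location and spread parameters; in a hierarchical SSD, these parameters are estimated jointly across taxa, here per river type.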
Weak memory consistency models capture the outcomes of concurrent programs that appear in practice and yet cannot be explained by thread interleavings. Such outcomes pose two major challenges to formal methods. First, establishing that a memory model satisfies its intended properties (e.g., supports a certain compilation scheme) is extremely error-prone: most proposed language models were initially broken and required multiple iterations to achieve soundness. Second, weak memory models make verification of concurrent programs much harder, as a result of which there are no scalable verification techniques beyond a few that target very simple models.
This thesis presents solutions to both of these problems. First, it shows that the relevant metatheory of weak memory models can be effectively decided (sparing years of manual proof efforts), and presents Kater, a tool that can answer metatheoretic queries in a matter of seconds. Second, it presents GenMC, the first (and only) scalable stateless model checker that is parametric in the choice of the memory model, often improving the prior state of the art by orders of magnitude.
This thesis outlines the development of thermoplastic-graphite based plate heat exchangers from material screening to operation, including performance evaluation and fouling investigations. Polypropylene and polyphenylene sulfide as matrix and graphite as filler were chosen as feedstock materials, as they possess a low density and excellent corrosion resistance at a comparatively low price.
For the purpose of material screening, custom-made polymer composite plates with a plate thickness of 1-2 mm and a filler content of up to 80 wt.% were investigated for their thermal and mechanical suitability with regard to their use in plate heat exchangers. Three-point flexural tests show that the loading of polypropylene with graphite leads to mechanical properties that allow the composites to be applied as corrugated heat exchanger plates. The simulated maximum overpressure is greater than 7 bar, depending on the wall thickness. The thermal conductivity of the composites was increased by a factor of 12.5 compared to pure polypropylene, resulting in thermal conductivities of up to 2.74 W/mK.
The fabrication of the developed corrugated heat exchanger plates, with a thickness between 0.85 mm and 2.5 mm and a heat transfer surface area of 11.13·10⁻³ m², was carried out via processes that can be automated, namely extrusion and embossing. With the manufactured plate heat exchanger, overall heat transfer coefficients are determined over a wide range of operating conditions (Re = 200 - 1600), which are used to validate a plate heat exchanger model and consequently to compare the composites with conventional materials. The embossing, which seems to result in a shift of the internal graphite structure, leads to a further improvement of the thermal conductivity by 7-20 %, in addition to the impact of the filler. With low plate thicknesses, overall heat transfer coefficients of up to 1850 W/m²K could be obtained. Considering the low density of the manufactured thermal plates, this ensures comparable performance with metallic materials over a wide range of process conditions (Re = 200 - 4000).
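The influence of the plate parameters can be read off the standard series-resistance relation for the overall heat transfer coefficient U (a textbook relation, cited here only for orientation):
\[
\frac{1}{U} = \frac{1}{\alpha_1} + \frac{s}{\lambda} + \frac{1}{\alpha_2},
\]
with the convective heat transfer coefficients \alpha_1 and \alpha_2 on the two fluid sides, the plate thickness s, and the plate's thermal conductivity \lambda. Thin plates and highly filled composites shrink the wall term s/\lambda, which is why thin, highly conductive composite plates can approach metallic performance.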
The fouling kinetics and amount of calcium sulfate and calcium carbonate, respectively, on different polypropylene/graphite composites in a flat plate heat exchanger and the developed chevron-type plate heat exchanger are determined and compared to the reference material stainless steel. For a straightforward evaluation of the fouling susceptibility of the materials, the formation of bubbles on the materials is considered by optical imaging or excluded by a degasser. The results are interpreted using the surface free energy and roughness of the surfaces. The results show that if bubble formation is avoided, the polymer composites have a very low fouling tendency compared to stainless steel, which is attributed to the low surface free energies of approximately 25 mN/m. This is particularly the case when turbulent flows are present, as in plate heat exchangers or when sandblasted specimens are used. Sandblasting also increases heat transfer compared to untreated samples by increasing thermal conductivity and creating local turbulences. Depending on the test conditions, the fouling resistance formed on the stainless steel surface is an order of magnitude greater than on the flat plate polymer composites. In addition, the fouling layers adhere only weakly to the composites, which indicates easy cleaning-in-place after the formation of deposits. The fouling investigations in the plate heat exchanger reveal a sensitivity to calcium sulfate fouling; however, CFD simulations indicate that this is due to flow maldistribution and not to the actual polymer composite materials.
Machine Learning (ML) is expected to become an integrated part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge of their input data, which in many scenarios is delivered to them through wireless links. Transmission of a massive amount of such input data can impose a huge burden on the mobile network. On the other hand, it is known that ML algorithms can tolerate different levels of distortion on their input components while the quality of their predictions remains unaffected. Therefore, utilization of the conventional approaches implies a waste of radio resources, since they target an exact reconstruction of transmitted data, i.e., the input of ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of final ML outputs instead of such syntax-based reconstruction of transmitted inputs. To this end, we quantify the semantics or relevancy of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies less relevancy. A lower relevance level is translated into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundations for the efficient support of ML models with their required data in the inference phase, while wireless resources are employed efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of an ML algorithm. The ML model is seen as a black box, which is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is hence easily adjustable for application to other problems. Furthermore, an extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevancy level of each input attribute changes instantaneously. In another extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. The novel quality-of-service metric takes the outputs of ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
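The core idea can be illustrated with a minimal sketch (invented model, data, and thresholds; not the dissertation's implementation): the relevance of each input feature of a fixed classifier is measured by the KLD between the output distributions computed from undistorted and quantized inputs, and each feature is assigned the fewest bits that keep this divergence below a tolerated budget:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w = np.array([2.0, 0.5, 0.1])           # feature 0 matters most for the model

def predict(X):                         # fixed ("black box") logistic model
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return np.stack([1 - p, p], axis=1)

def quantize(x, bits):                  # uniform quantizer over the data range
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

def mean_kld(P, Q):                     # average KLD between output distributions
    return float(np.mean(np.sum(P * np.log(P / Q), axis=1)))

BUDGET = 1e-3                           # tolerated output distortion (assumed)
P = predict(X)
for j in range(X.shape[1]):
    for bits in range(1, 13):
        Xq = X.copy()
        Xq[:, j] = quantize(X[:, j], bits)
        if mean_kld(P, predict(Xq)) <= BUDGET:
            print(f"feature {j}: {bits} bits")
            break

In this toy setting, the feature with the largest model weight requires the finest quantization, while less relevant features are assigned fewer bits, mirroring the rate-distortion reasoning of the framework.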
The introduced relevance-based framework is comprehensively investigated by considering various case studies, real and synthetic data, regression and classification, different estimators for the KLD, and various ML models and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of the relevance-based compression. In all of the simulations, the relevance-based solutions deliver the best outcome in terms of the carefully chosen key performance indicators. In most of them, significantly high gains are also achieved compared to the conventional techniques, motivating further research on the subject.
The ability to sense and respond to different environmental conditions allows living organisms to adapt quickly to their surroundings. In order to use light as a source of information, plants, fungi, and bacteria employ phytochromes. With their ability to detect far-red and red light, phytochromes constitute a major photoreceptor family. Bacterial phytochromes (BphPs) are composed of an apo-phytochrome and an open-chain tetrapyrrole, the chromophore biliverdin IXα, which mediates the photosensory properties. Depending on the photoexcitation and the quality of the incident light, phytochromes interconvert between two photoconvertible parental states: the red light-absorbing Pr-form and the far-red light-absorbing Pfr-form. In contrast to prototypical phytochromes, with a thermally stable Pr ground state, there is a group of bacterial phytochromes that exhibit dark reversion from the Pr- to the Pfr-form. These special proteins are classified as bathy phytochromes and range across different classes of bacteria. Moreover, the majority of BphPs act as sensor histidine kinases in two-component regulatory systems. The light-triggered conformational change results in the autophosphorylation of the histidine kinase domain and the transphosphorylation of an associated response regulator, inducing a cellular response. Spectroscopic analysis utilizing homologously produced protein identified PaBphP, the histidine kinase of the human opportunistic pathogen Pseudomonas aeruginosa, as a bathy phytochrome. Intensive research on PaBphP revealed evidence that the interconversion between its physiologically active and inactive states is influenced by light and darkness rather than far-red and red light. In order to conduct a comprehensive systematic analysis, further bacterial phytochromes were investigated regarding their biochemical and spectroscopic behavior, as well as their autokinase activity. In addition to PaBphP, this work employs the bathy phytochromes AtBphP2, AvBphP2, XccBphP from the non-photosynthetic plant pathogens Agrobacterium tumefaciens, Allorhizobium vitis, Xanthomonas campestris, as well as RtBphP2 from the soil bacterium Ramlibacter tataouinensis. All investigated BphPs displayed a bathy-typical behavior by developing a distinct Pr-form under far-red light conditions and undergoing dark reversion to their Pfr-form. Different Pr/Pfr-fractions can be identified among the BphP populations in varying natural light conditions, including red or blue light. The Pr-form is considered the active form due to autophosphorylation activity in the heterologously produced phytochromes when exposed to light. In the absence of light, associated with the development of the Pfr-form, the phytochromes exhibited abolished or strongly reduced autokinase activity. Additionally, light-triggered phosphorylation was observed for the response regulator PaAlgB, which is linked to the phytochrome of P. aeruginosa. This study presents the first comparative investigation of numerous bathy phytochromes under identical conditions. The work addressed a gap in the literature by providing a quantitative correlation between kinase activity and calculated Pr/Pfr-fractions obtained from spectroscopic measurements. The biological role of PaBphP was partially elucidated through phenotypic characterization employing P. aeruginosa mutant and overexpression strains. The generation of a functional model was possible by considering the postulated functions of the other phytochromes found in the literature.
In summary, bathy BphPs are hypothesized to modulate bacterial virulence according to the circadian day/night rhythm of their hosts. The pathogens are believed to reduce their virulence during daylight hours to evade immune and defense reactions, while increasing their virulence during the evening and night, enabling more effective infections.
In contrast to motorbike tyres, whose friction during cornering has to be as high as possible, the desired effect in skiing is the opposite, namely low friction. The reduced friction between skis and ice or snow is made possible by a film of meltwater that forms as a function of friction power. To support this friction mechanism, skis are waxed with different waxes in both hobby and professional sports, depending on a variety of conditions. Waxes with fluorine additives show the best performance in most conditions, corresponding to the lowest friction coefficients. However, for health and environmental reasons, the International Ski Federation (FIS) and the International Biathlon Union (IBU) have imposed a complete ban on fluorine additives at all FIS races and IBU events with effect from the 2023/2024 season. As a result, wax manufacturers are required to develop and extensively test fluorine-free waxes in order to remain competitive.
Traditional tests take place either indoors or outdoors in the field. Athletes complete a particular distance, their time is measured, and they also note the impressions that the prepared skis provide. The time and cost involved in numerous individual tests is a drawback, and the presence of only a single type of snow in the hall or field, air resistance, changing environmental conditions and variations in the athlete's movement limit the depth of information. To reduce the time-consuming procedure of indoor and outdoor tests, a tribometer offers a solution with which friction measurements can be performed on a laboratory scale. Due to the consistently adjustable conditions such as temperature, speed and load applied to the friction partners, scientific studies can be carried out with reduced disturbance variables. At present, the tribometric results of laboratory instruments for predicting friction values do not translate into application in practice. The reasons for this are the compromises that have to be made in the design of the tribometers.
This work reviews the existing tribometers with regard to their operating conditions and confirms the need for a scientific method of characterising different waxes. In order to fill the gap between friction results obtained in laboratory tests, which cannot yet be used in the selection of waxes, and traditional field tests, this thesis is dedicated to the methodical design and manufacture of a linear tribometer capable of measuring friction between a ski base made of UHMWPE (ultra-high molecular weight polyethylene) and an ice sample. The tribometer provides for the first time results that allow differentiating between different modified waxes with regard to their running performance. Friction-influencing factors such as speed, temperature and the surface pressure below the ski base can be adjusted within the range relevant for ski sports. Furthermore, the laboratory-scale test stand, which is located in a cold chamber, is capable of accommodating not only typical ski jumping base lengths and widths, but also cross-country and alpine ski bases. To verify the tribometer, a ski base is treated with three waxes of different fluorine content and measured comparatively. With a minimum of 95% confidence, the friction differences between the tested waxes depending on their fluorine content are validated and proven at the end of this work.
Functional structures as well as materials provided by nature have always been a great source of inspiration for new technologies. Adapting and improving the discovered concepts, however, demands a detailed understanding of their working principles, while employing natural materials for fabrication tasks requires suitable functionalization and modification.
In this thesis, the white scales of the beetle Cyphochilus are examined in order to reveal unknown aspects of their light transport properties. In addition, the monomer of the material they are made of is utilized for 3D microfabrication.
White beetle scales have been fascinating scientists for more than a decade because they display brilliant whiteness despite their small thickness and the low refractive index contrast. Their optical properties arise from highly efficient light scattering within the disordered intra-scale network structure.
To gain a better understanding of the scattering properties, several previous studies have investigated the light transport and its connection to the structural anisotropy with the aid of diffusion theory. While this framework makes it possible to relate the light scattering to macroscopic transport properties, it requires an accurate determination of the effective refractive index of the structure. Due to its simplicity, the Maxwell-Garnett mixing rule is frequently used for this task, although its restriction to particle and feature sizes much smaller than the wavelength is clearly violated for the scales.
To provide a correct calculation of the effective refractive index, here, finite-difference time-domain simulations are used to systematically examine the impact of size effects on the effective refractive index. Deploying this simulation approach, the Maxwell-Garnett mixing rule is shown to break down for large particles. In contrast, it is found that a quadratic polynomial function describes the effective refractive index in close approximation, while its coefficients can be obtained from an empirical linear function. As a result, a simple mixing rule is reported that unambiguously surpasses classical mixing rules when composite media containing large feature sizes are considered. This is important not only for the accurate description of white beetle scales, but also for other turbid media, such as biological tissues in opto-biomedical diagnostics.
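For reference, the classical Maxwell-Garnett rule whose breakdown is demonstrated here reads
\[
\frac{\varepsilon_{\mathrm{eff}} - \varepsilon_m}{\varepsilon_{\mathrm{eff}} + 2\varepsilon_m}
= f\,\frac{\varepsilon_i - \varepsilon_m}{\varepsilon_i + 2\varepsilon_m},
\qquad n_{\mathrm{eff}} = \sqrt{\varepsilon_{\mathrm{eff}}},
\]
where f is the filling fraction of inclusions with permittivity \varepsilon_i in a host medium with permittivity \varepsilon_m. Its derivation assumes inclusions far smaller than the wavelength, precisely the condition violated by the chitin network of the scales.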
Describing light transport by means of diffusion theory moreover neglects any coherent effects, such as interference. Hence, their impact on the generation of brilliant whiteness is currently unknown. To shed light on their role, spatially- and time-resolved light scattering spectromicroscopy is applied to investigate the scales and a model structure of them based on disordered Bragg stacks. For both structures, the occurrence of weakly localized photonic modes, i.e., closed scattering loops, is observed, which is further verified in accompanying simulations. As shown in this thesis, leakage from these random photonic modes contributes at least 20% to the overall reflected light. This reveals the importance of coherent effects for a complete description of the underlying light transport properties; an aspect that is entirely missing in the purely diffusive transport presumed so far. Identifying the importance of weak localization for the generation of brilliant whiteness paves the way to further enhance the design of efficient optical scattering media, an issue that has recently drawn great attention.
Unlike their plant-based counterparts, rigid carbohydrates, such as chitin, are currently unavailable for 3D microfabrication via direct laser writing, despite their great significance in the animal kingdom for the construction of functional microstructures. To overcome this gap, the monomeric unit of chitin, N-acetyl-D-glucosamine, is here functionalized to serve as a photo-crosslinkable monomer in a non-hydrogel photoresist. Since all previous photoresists based on animal carbohydrates are in the form of hydrogel formulations, a new group of photoresists is established for direct laser writing.
Moreover, it is shown that the sensitization effect, previously used only in the context of UV curing, can be successfully transferred to direct laser writing to increase the maximum writing speed. This effect is based on the beneficial combination of two photoinitiators.
Here, one photoinitiator is an efficient crosslinking agent for the monomer used, but a rather poor two-photon absorber. The other photoinitiator (called the sensitizer) conversely possesses a much higher two-photon absorption coefficient at the applied wavelength but is not well suited as a crosslinking agent. In combination, the energy absorbed by the sensitizer is transferred to the photoinitiator, resulting in the formation of the radicals needed to start the polymerization. As this greatly increases the rate at which the photoinitiator is radicalized, resists containing both a photoinitiator and a sensitizer are shown to outperform resists containing only one of the components. Deploying the sensitization effect in direct laser writing therefore offers a simple way to individually tune the crosslinking ability and the two-photon absorption properties by combining existing compounds, as opposed to the costly chemical synthesis of novel, customized photoinitiators.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source for bioactive compounds, of which several also show antibiotic activity. With regard to the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance, and unexploited sources should be explored. This thesis therefore initially dealt with the identification of productive strains, especially within the group of the terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Amongst these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the sole identification of a productive cyanobacterium is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of compound synthesis prerequisites. As a result, a limitation of nitrogen was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. In this way, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a-) cells and a high temperature tolerance up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted for the work with cyanobacteria, were established: first, a method for a quick and easy in vivo vitality estimation of phototrophic cells, and second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first one by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second one by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when mixotrophically cultivated with fructose or glucose. Interestingly, this effect was even further enhanced when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed a clearly decreased cell growth. This effect might be extremely useful for a quick and economical preparation of precultures.
Plant-specific factors affecting short-range attraction and oviposition of European grapevine moths
(2024)
The spread of pests and pathogens is increasingly intensified by climate change and globalization. Two of the most serious insect pests threatening European viticulture are the European grape berry moth Eupoecilia ambiguella (Hübner) and the European grapevine moth Lobesia botrana (Denis & Schiffermüller). Larvae feed on the fructiferous organs of the grapevine Vitis vinifera, resulting in high yield and quality losses. Under the aspects of integrated pest management, insecticide measures are only reasonable when other control strategies become ineffective. In order to support the development of a novel decision support system for the application of insecticides, the aim of this thesis was to decipher plant-specific factors that affect the short-range attraction and oviposition of L. botrana and E. ambiguella.
The focus was on the visual, volatile, tactile and gustatory stimuli provided by the host plant after settlement. The use of artificial surfaces as model plants showed that oviposition of both species is affected by the color, the shape and the texture of the oviposition site. To explain a susceptibility of certain grapevine cultivars and phenological stages of the berries to egg infestations, we analysed and compared the chemical composition of the epicuticular waxes of the berry surface as well as the volatile organic compounds emitted by the berries. It turned out that the attractiveness of wax extracts decreased during ripening of the berries, highlighting a preference for earlier phenological stages of the berries for oviposition. In addition, grapevine cultivars exhibited variations in their volatile composition. The principal components perceived by the females' antennae could not explain the differentiation between cultivars, suggesting that volatiles do not trigger orientation toward certain cultivars. Furthermore, a method was developed to measure the real-time behavioural response of female moths to volatiles. The setup made it possible to quantify the orientation to a volatile source as well as movements of the antennae and ovipositor. These could be linked to the olfactory and gustatory perception of volatiles during the evaluation of suitable host plants for oviposition. In addition, the risk of potential alternative host plants in the vicinity of the vineyard was investigated. This confirmed that L. botrana in particular prefers the stimuli provided by some plants to those of grapevine. Overall, the results suggest that during oviposition, volatiles emitted by the plants and the composition of the plant surface are the most important factors for host plant differentiation.
Streams and their adjacent terrestrial ecosystems are tightly linked via the flux of organisms and matter. Emergent aquatic insects can be an important food source for riparian predators like bats, birds, spiders, and lizards. Information about the quality, quantity and phenology of emergent aquatic insects is necessary to estimate how riparian predators can benefit from them as a food source. Though intensive agriculture is a globally dominant land use, little is known about how agricultural land use affects the quantity, quality and phenology of emergent aquatic insects. Typically, emergent aquatic insects contain more long-chain polyunsaturated fatty acids (PUFA) than terrestrial insects. Long-chain PUFA in particular were shown to enhance the growth and immune response of spiders and birds.
In chapter 2, the PUFA transfer to spiders and the effect of food sources differing in their PUFA profiles on spiders were examined in outdoor microcosms under environmentally realistic conditions (i.e., normal weather conditions and the possibility to construct orb webs as in their natural habitat). The environmental context determined how PUFA can affect the spiders. For instance, besides the PUFA profiles of food sources, environmental variables like temperature were important for the growth and body condition of spiders.
In the third chapter, the effect of agricultural land use on the quantity (in terms of biomass and abundance), phenology and composition of emergent aquatic insects was assessed. Previous studies were limited to single seasons or time points, which hampered determining annual biomass export and shifts in phenology. Therefore, emergent aquatic insects were sampled continuously over the primary emergence period of one year, and environmental variables associated with agricultural land use were monitored. Total biomass and abundance were higher (61-68% and 79-86%, respectively) in agricultural than in forested sites. In addition, a turnover of emergent aquatic insect assemblages and a shift in the phenology of aquatic insects were identified. In agricultural sites, 71% of aquatic insect families emerged earlier than in forested sites. Pesticide toxicity was associated with differences in aquatic insect order biomass and abundances. During the same experiment, spiders were sampled in spring, summer, and autumn. Additionally, the fatty acid (FA) content of the spiders and emergent aquatic insects was determined. These results are presented in chapter 4. The FA export via emergent aquatic insects was higher (26-29%) in forested than in agricultural sites, which indicates a reduced quality of aquatic insects as a food source for riparian predators in agricultural sites. The FA profiles of mayflies, flies and caddisflies differed between land-use types, but those of spiders did not. Shading and pool habitats were the most important environmental variables for the FA profiles, though environmental variables explained only little variation in FA profiles. Overall, the quantity, quality and phenology of emergent aquatic insects differed between land-use types, which can affect population dynamics in the adjacent terrestrial ecosystem. Our results can be used in modeling food-web dynamics or meta-ecosystems to improve the understanding of linked ecosystems.
Biodiversity has declined by approximately 70% in the last 50 years for vertebrate and invertebrate species. This loss in biodiversity is strongly connected with anthropogenic activities, such as agricultural intensification and pollution. Currently, pesticides are needed to secure the growing global food demand, although they are recognized as one of the main drivers of biodiversity loss, mainly in agricultural areas.
In the European Union, pesticides are regulated within the risk assessment framework, which aims to protect both the environment and human health from undesirable effects. The effects on non-target organisms are mostly assessed following a “one-size-fits-all” approach, focused on sensitive species tests. However, it has been recognized that the current methodology can be improved in order to minimize undesirable effects. Aiming to provide valuable data to inform future risk assessment, this thesis focused on two terrestrial organism groups that play beneficial roles, especially in agroecosystems: earthworms and spiders.
Although the earthworm Eisenia fetida is included in pesticide regulation, its use as the only earthworm representative may lead to uncertainties for the risk assessment. Therefore, we collected ecotoxicological data on field-captured earthworm species via acute exposure to imidacloprid and copper. In addition, we investigated the relationships between earthworm chemical sensitivity, biological traits and habitat preferences, and potential links with their ecosystem services (Chapter 2). We found that earthworms sampled from extremely acidic soils were less sensitive to copper than earthworms from neutral soils. Moreover, anecic and endogeic earthworms were more sensitive to imidacloprid than epigeic earthworms.
Spiders have, thus far, been understudied in regulatory risk assessment in comparison to other non-target arthropods. Thus, we aimed to collect ecotoxicological data of spider species sampled in different European climates via acute exposure to lambda-cyhalothrin. Moreover, we explored relationships between spider chemical sensitivity, phylogeny, biological traits and habitat preferences, as well as potential links with their ecosystem services (Chapter 3). Spiders showed a high sensitivity to lambda-cyhalothrin. Furthermore, our results showed that spider sensitivity varies depending on climate. We confirmed this relationship by incorporating different rearing and test temperatures into the toxicity testing protocol (Chapter 4).
The outcomes of this thesis contribute to informing pesticide regulatory practices, allowing for an improved protection and conservation of terrestrial organism groups and the ecosystem services they provide. The consideration of ecological traits, habitat variability and related plasticity, key species, and ecological network structure could improve the risk assessment framework and minimize the effects of pesticides and other stressors on an ecosystem-level.
The vast majority of all mitochondrial proteins are synthesized in the cytosol. These proteins carry characteristic targeting motifs within their sequence, which allow for the binding of chaperones that in turn usher precursors to the mitochondrial surface for import and assembly. Though our understanding of these early reactions is still incomplete, recent efforts have shown that the ER surface can facilitate the import of mitochondrial proteins (ER-SURF) with the help of the J-protein Djp1. Close cooperation of organelles in the form of membrane contact sites is crucial for cellular function. The aim of my work was to investigate whether ER-mitochondria contact sites are critical for the transfer of proteins from the ER to mitochondria.
Several contact sites between the ER and mitochondria have been characterized in S. cerevisiae. One contact site is the ER-mitochondria encounter structure (ERMES); another is partly formed by Tom70. Owing to the high propensity of suppressor mutations in ERMES, I employed a knockdown approach to deplete this contact site. Using an inducible CRISPR interference (CRISPRi) system, I could rapidly and efficiently deplete Mdm34, which is a part of ERMES. I could show that depletion of Mdm34 had a synthetic negative effect in combination with a deletion of TOM70. Loss of both contact sites led to a strong decrease of many mitochondrial proteins in the whole cell proteome. Using affinity purification of ER and mitochondria in conjunction with mass spectrometry, I could demonstrate that a specific set of mitochondrial proteins is enriched on the ER upon loss of Mdm34 and Tom70; these were mainly proteins of the inner membrane, e.g., Oxa1 and Cox5A. Moreover, I was able to validate that the import of these proteins was hampered upon loss of both contact sites. In vivo as well, the biogenesis of Oxa1 was impeded upon loss of either Mdm34 or Tom70 alone and strongly impaired if both were lost. Analysis of the maximum hydrophobicity of inner membrane proteins in the ER-SURF set revealed, on average, a significantly higher peak compared to other inner membrane proteins. I could show, using an in vitro import assay, that deleting or swapping the transmembrane domain of Cox5A made it contact-site independent or contact-site dependent, respectively.
In this study I was able to demonstrate the involvement of membrane contact sites in ER-SURF and identify a list of putative clients. Furthermore, I could show that hydrophobicity of the transmembrane segment of inner membrane proteins is one determinant for ER-SURF dependence.
Chemical pollution is a ubiquitous stressor affecting streams and their linkages to riparian forests. Contaminants act by altering the emergence of aquatic insects from streams. Emergent insects can also take up contaminants and transfer them into the terrestrial ecosystem. Emergent insects are an important source of prey for riparian insectivores, and changes in the emergence flux or contamination of insects can affect the riparian food web. However, little is known about the implications of emerging contaminants such as agricultural pesticides and wastewater effluent on the terrestrial food web. In this dissertation, I address possible consequences of agricultural and wastewater stream pollution for riparian insectivores, namely bats and spiders.
The contribution of aquatic prey to riparian spider diets has mainly been determined by stable isotope analysis, but DNA metabarcoding, a highly sensitive method of identifying consumed prey using DNA, promises to further disentangle changes in these trophic interactions. In Chapter 2, we tested a bleaching decontamination protocol to determine the suitability of using metabarcoding on spiders contaminated during sampling. We confirmed the applicability of metabarcoding, but also found that the wolf spiders (Lycosidae) collected in riparian areas did not appear to rely strongly on aquatic prey. This informed our choice of Tetragnatha montana, which is highly reliant on aquatic prey, for the field study in Chapter 3.
We then conducted three field studies. Chapters 3 and 4 evaluate indirect trophic effects of chemical stream pollution on spiders and bats, respectively. Chapter 5 quantifies the accumulation of pesticides from the stream to riparian spiders via emergent insects. We found that riparian bats foraged more and that spiders consumed more Chironomidae at more polluted sites, indicating that there was no overall decrease in emergence due to chemical pollution. We also found that certain pesticides accumulated in emergent insects and riparian spiders. Together, this suggests that chemical stream pollution resulted in an increased dietary exposure of riparian insectivores to contaminants, rather than a decrease in prey availability.
These results demonstrate the role of streams and aquatic-terrestrial linkages in propagating stressors across ecosystem boundaries. They also show the benefit of using sensitive methods like DNA metabarcoding to unveil trophic effects of chemical pollution. Future studies should focus on quantifying the risk of contaminant uptake and potential effects for riparian bats, as well as considering how the observed drivers change in different contamination scenarios and ecosystems. This knowledge is important to protect the functionality of the riparian ecosystem and its inhabitants.
Since the turn of the millennium, character research has been on the rise among psychological researchers. In 2004, the field of positive psychology introduced the Values in Action (VIA) framework encompassing 24 theoretically justified and empirically supported character strengths intended for the measurement of good character. Their assignment to six "core virtues" according to Linnaean principles links the 24 character strengths to philosophical and religious theories of virtue. However, the originally developed proprietary VIA Inventory of Strengths (VIA-IS) for the measurement of the 24 character strengths and its public domain counterpart, the IPIP-VIA, are based on a relatively crude scale development approach. Yet, the VIA-IS and the IPIP-VIA dominated (applied) character research for a long time. While researchers recently refined the proprietary VIA instruments, no character strength scales developed according to the state of the art are available in the public domain, thwarting progress in character research. Furthermore, most factor-analytic studies on the hierarchical structure of the 24 VIA character strengths yielded inconsistent results regarding the number and nature of global VIA constructs due to differing methodological standards and strategies. Only recently, a growing body of research consonantly has suggested that three global constructs span the VIA trait space. Consequently, there is only one proprietary inventory for measuring global VIA constructs and none that is available in the public domain. Against this backdrop, this dissertation addressed three methodological challenges in character assessment, taking an open-science approach, a (cross-country) replicability approach, and an integrative approach (i.e., integrating the results into the larger picture of personality science, particularly linking the VIA character traits to the Big Five and value traits).
Study 1 revised the English-language IPIP-VIA and concurrently translated/adapted it to German to yield character strength scales especially suitable for cross-cultural large-scale assessment: The 96-item IPIP-VIA-R measures each character strength with four balanced-keyed, content-valid, and cross-culturally adaptable items building scales that showed satisfactory reliability, (partial) scalar measurement invariance across Germany and the UK, and evidence of construct and criterion validity. Study 2 applied the IPIP-VIA-R and a rigorous factor-analytic approach to revisit the hierarchical structure of the 24 VIA character strengths, revealing three well-interpretable global “core strengths” that were replicable across Germany and the UK: positivity, dependability, and mastery. Study 3 applied an Ant Colony Optimization algorithm to select an optimal 18-item subset of the IPIP-VIA-R to measure each core strength with a balanced-keyed, content-valid six-item scale that again showed satisfactory reliability, scalar measurement invariance across Germany and the UK, and evidence of construct and criterion validity.
Taken as a whole, the dissertation advanced the measurement of VIA character traits in the public domain, the understanding of the VIA character trait space (especially its intersection with Big Five personality and basic human values), and the establishment of the VIA trait hierarchy. To address its research questions framed as methodological challenges, the dissertation introduced and elaborated methodological approaches that researchers might adapt to other individual differences constructs. Even though there remain challenges to be taken up in future work (e.g., adapting the IPIP-VIA-R character and core strength scales for use in a more diverse set of cultures; multi-informant assessment), researchers and survey programs can readily apply the character scales developed as part of this dissertation.
One of the main tasks of molecular biology is understanding the mechanisms of molecular biological processes. This entails the problem of constructing regulatory networks and, therefore, of finding key regulators. To do so, it is important to have a representation of the data that can reveal the distinct patterns within large groups. On the one side, there is abundant experimentally determined kinetic information about the alteration of molecular presence in the observed system. On the other side, there is evidence, documented over the years, of the involvement of molecules in different biological processes. Both sources of information have their drawbacks: experimental data reflect only a fleeting molecular state of each individual organism and are therefore often highly variant and noisy; functional groups were determined as generalizations of known roles of molecules in biological processes and can therefore be incomplete and only partially relevant to certain experimental conditions and individual organisms. Our goal is to obtain an overview of the experimentally observed molecules and to extract knowledge from both sources while avoiding the constraints of noise distortion and generalization bias. The resulting optimal representation of the experimental data would then help to pinpoint potential regulators.
The proposed method is called the Signature Topology (ST) approach, as it uses the functional topology as the prior knowledge source and creates a specific signature for the given experimental data. The ST approach is based on a knowledge- and data-driven machine learning algorithm implemented via dynamic programming. Because it builds on prior knowledge as well as on learning from the data, the proposed approach represents a combination of supervised and unsupervised machine learning. The resulting network structure copes with data abundance, avoids an over-detailed description that may lead to misinterpretation, and is able to pick out elements exhibiting minority behavior patterns.
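As a rough illustration of the data-driven half of such an approach, the following minimal Python sketch (not the thesis implementation; the group and kinetics data structures are assumptions for exposition) splits ontology-derived functional groups whenever members show dissimilar time-course kinetics:

    # Illustrative sketch only: refine ontology-derived functional groups by
    # separating members whose kinetics deviate from the rest of the group.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def refine_groups(groups, kinetics, max_dist=0.5):
        """groups: dict mapping group name -> list of molecule ids;
        kinetics: dict mapping molecule id -> 1-D array of time-course values.
        Returns sub-groups whose members share similar (correlated) kinetics."""
        refined = {}
        for name, members in groups.items():
            if len(members) < 2:
                refined[name] = {1: members}
                continue
            X = np.array([kinetics[m] for m in members])
            # hierarchical clustering with correlation distance between courses
            Z = linkage(X, method="average", metric="correlation")
            labels = fcluster(Z, t=max_dist, criterion="distance")
            refined[name] = {lab: [m for m, l in zip(members, labels) if l == lab]
                             for lab in set(labels)}
        return refined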
The method is tested with artificial data and applied to real-world mass-spectrometry proteome data and NGS transcriptome data of Chlamydomonas reinhardtii. The proposed approach helps to identify potential regulatory genes whose roles are not explicitly provided in the used functional ontology. Moreover, it achieves a successful reduction in data complexity while preserving all individual molecular information reported in the literature and stored in the functional ontology. When different experimental datasets are analyzed with the same ontology, the resulting networks are uniform in structure and can therefore be compared. This opens the opportunity to compare across a great variety of experimental conditions, from different organisms to different system levels.
Industrial robots are vital in automation technology, but their limitations become evident in applications requiring high path accuracy. This research focuses on improving the dynamic path accuracy of industrial robots by integrating additional sensor technology and employing intelligent feed-forward control. Specifically, the inclusion of secondary encoder sensors enables explicit measurement and compensation of robot gear deformations. Three types of model-based feed-forward controllers, namely physics-based, data-based, and hybrid, are developed to effectively counteract dynamic effects.
Firstly, a physics-based feed-forward control method is proposed, explicitly modeling joint deformations, hydraulic weight compensation, and other relevant features. Nonlinear friction parameters are accurately identified using a globally optimized design of experiments. The resulting physics-based model is fully continuously differentiable, facilitating its transformation into a code-optimized flatness-based feed-forward control.
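For illustration, one common continuously differentiable friction parameterization is a tanh-smoothed Coulomb-plus-viscous model; the sketch below is an assumption for exposition, not the thesis's identified model:

    # Hedged sketch: a smooth joint friction model; smoothing the Coulomb term
    # with tanh keeps the model continuously differentiable, a prerequisite
    # for flatness-based feed-forward control.
    import numpy as np

    def friction_torque(qd, F_c, F_v, eps=1e-2):
        """qd: joint velocity [rad/s]; F_c: Coulomb level [Nm];
        F_v: viscous coefficient [Nm s/rad]; eps: smoothing width [rad/s]."""
        return F_c * np.tanh(qd / eps) + F_v * qd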
Secondly, a data-based feed-forward control approach is introduced, leveraging a continuous-time neural network. The continuous-time approach demonstrates enhanced model generalization capabilities even with limited data. Furthermore, a time domain normalization method is introduced, significantly improving numerical properties by concurrently normalizing measurement timelines, robot states, and state derivatives. Based on previous work, a method ensuring input-to-state and global asymptotic stability is presented, employing a Lyapunov function. Model stability is enforced already during training using constrained optimization techniques. Moreover, the data-based methods are evaluated on public benchmarks, extending their applicability beyond the field of robotics.
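A minimal sketch of such a normalization, assuming a simple affine scaling of time and states, shows how the state derivatives must be rescaled consistently via the chain rule:

    # Sketch of time-domain normalization (assumed affine form): scaling the
    # time axis, states, and state derivatives consistently.
    import numpy as np

    def normalize_trajectory(t, x, xd):
        """t: (N,) time stamps; x: (N, d) states; xd: (N, d) state derivatives.
        Returns normalized copies plus the scaling constants."""
        T = t[-1] - t[0]                  # time scale
        mu, sigma = x.mean(0), x.std(0)   # per-state statistics (non-constant states assumed)
        t_n = (t - t[0]) / T              # normalized time in [0, 1]
        x_n = (x - mu) / sigma            # z-scored states
        xd_n = xd * T / sigma             # chain rule: dx_hat/dt_hat = (T/sigma) dx/dt
        return t_n, x_n, xd_n, (T, mu, sigma)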
Both the physics-based and data-based models are combined into a hybrid model. Comparative analysis of the three models reveals that the continuous-time neural network yields the highest model accuracy, while the physics-based model delivers the best safety properties. The effectiveness of all three models is experimentally validated using an industrial robot.
The present thesis reports on studies of atomically precise, size-selected tantalum cluster ions \(Ta_n^±\) under cryogenic conditions in a FT-ICR mass spectrometer with respect to surface adsorbate interactions at the fundamental level, focusing on \(N_2\) and \(H_2\) adsorption and activation. The wealth of results presented here stems from systematic studies that have revealed valuable kinetic, spectroscopic, and quantum chemical information, which together paint a comprehensive picture of the elementary adsorption steps and mechanisms in detail.
The \(N_2\) and \(H_2\) adsorption processes to \(Ta_n^+\) clusters exhibit dependencies on cluster size n and on adsorbate load. In terms of \(N_2\) adsorption, there is evidence for spontaneous \(N_2\) activation and cleavage by \(Ta_2^+\) - \(Ta_4^+\), while it appears to be suppressed by \(Ta_5^+\) - \(Ta_8^+\). The activation and cleavage of \(N_2\) molecules proceeds across surmountable barriers and along much-involved multidimensional reaction paths. Underlying reaction processes and involved intermediates are elucidated. Two different processes are characteristic of \(H_2\) adsorption: there are fast adsorption processes without competing desorption reactions at low \(H_2\) loadings, indicating dissociative adsorption, followed by slow adsorption reactions accompanied by multiple desorption reactions at high \(H_2\) loadings, indicating molecular \(H_2\) adsorption. The threshold is the completion of the first adsorbate shell. The \(N_2\) adsorption study of \(Ta_n^-\) clusters revealed that the \(N_2\) adsorption ability of anionic tantalum clusters depends strongly on cluster size n. The cluster size n = 9 is the minimum size for \(N_2\) adsorption onto \(Ta_n^-\) clusters to yield stable and detectable cluster adsorbate species \([Ta_n(N_2)_m]^-\).
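For orientation, kinetic data of this kind are commonly analyzed with pseudo-first-order rate laws; the following sketch (with illustrative example numbers, not measured data) fits a single adsorption step from normalized bare-cluster intensities:

    # Illustrative sketch: pseudo-first-order fit of one adsorption step,
    # [Ta_n]+ + N2 -> [Ta_n(N2)]+, from normalized FT-ICR intensities; the
    # full analysis involves sequential kinetic networks.
    import numpy as np
    from scipy.optimize import curve_fit

    def bare_cluster_decay(t, k):
        return np.exp(-k * t)        # normalized intensity of the bare cluster

    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # reaction delays [s], example
    I = np.array([1.0, 0.62, 0.40, 0.16, 0.03])   # example relative intensities
    (k,), _ = curve_fit(bare_cluster_decay, t, I, p0=[1.0])
    print(f"pseudo-first-order rate constant k = {k:.3f} 1/s")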
The booming global market of nanomaterials in the last few decades has led to the inevitable emission of these materials into aquatic environments; hence, understanding their physical, chemical, and biological transformations has become a major concern for environmental scientists. Despite a great deal of effort made to understand the mobility, fate, and risk assessment of e.g., TiO2 nanoparticles, it is still unclear if the results obtained under lab-controlled conditions can be generalized to realistically released nanoparticles in aquatic environments, since the complex dynamics of environmental conditions are not completely reproducible under controlled conditions.
In the present study, we proposed a new approach for exposing TiO2 nanoparticles to the environmental conditions of natural surface waters by making use of dialysis membranes as passive reactors. The function of these reactors is based on the permeability of the membrane to the dissolved matter of surface waters, while TiO2 nanoparticles cannot pass through the membrane. These systems benefit from the fact that the complexity and temporal variability of most environmental parameters of surface waters are reproduced inside the reactors, while colloidal and particulate interferences remain excluded. Furthermore, no significant reduction in pore size (i.e., membrane fouling) was observed in dialysis bags after exposure to surface waters, which validates the efficiency of the system.
Taking advantage of these reactors to expose nanoparticles to surface waters, we investigated which physicochemical parameters of the surface waters influence the formation of natural coatings on nanoparticles. Hence, dialysis bags were used to expose TiO2 nanoparticles, in situ, to ten different surface waters in the spring and summer of 2019. Due to the complexity of the natural dissolved matter of the surface waters, as well as its low natural concentrations, we needed to use a combination of analytical techniques and multivariate data analysis to investigate the coatings. The initial findings were similar to those of the lab-controlled exposure studies in the literature, showing pH, electrical conductivity, and Ca2+/Mg2+ concentrations as the three most important parameters of surface waters controlling the formation of coatings. Nonetheless, we came across a phenomenon that has been overlooked under lab-controlled conditions: natural coatings are composed not only of organics (DOM: dissolved organic matter) but also of inorganics (carbonate), which implies that realistic coatings are more complex than what previous studies described.
The second part of this thesis focused on investigating the interactions of more realistic nanoparticles (TiO2 nanoparticles extracted from 11 sunscreens) with DOM. Using ToF-SIMS combined with high-dimensional data analysis, we tried to find a general DOM-sorption pattern among TiO2 nanoparticles, since finding such a pattern could ultimately have opened a way to assess the fate of (more) realistic nanoparticles in aquatic environments. Contrary to our expectations, the results showed a unique sorption pattern for each sunscreen, controlled by the composition of the sunscreen, implying that the sorption pattern of each sunscreen has to be investigated individually. In the next step of this study, we used random forest to extract the most important fragments of DOM sorbed onto each sunscreen, followed by an effort to assign these important masses to chemical fragments.
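A minimal sketch of such a random-forest importance ranking (variable names and data layout are assumptions for illustration, not the study's code) could look as follows:

    # Hedged sketch: ranking ToF-SIMS fragment masses by random-forest
    # importance, analogous in spirit to the analysis described above.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def top_fragments(X, y, masses, n_top=10):
        """X: (samples, fragments) ToF-SIMS intensities; y: sunscreen labels;
        masses: fragment m/z values. Returns the most discriminative fragments."""
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1][:n_top]
        return [(masses[i], rf.feature_importances_[i]) for i in order]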
To provide a comprehensive understanding of the interactions of released n-TiO2 in aquatic environments, in future studies we will expand our coating research to different types of TiO2 nanoparticles, such as particles extracted from paint, where the reaction media (surface waters) cover a wide range of water parameters representative of various ecosystems. Making use of state-of-the-art techniques as well as multivariate data analysis, we will try to derive a model describing the sorption mechanisms of dissolved matter of surface waters onto nanoparticles. Such studies can eventually lead us to a better understanding of the fate of released nanoparticles under natural conditions.
The massive use of chemicals by humans is increasing pollution of the world’s ecosystems. Yet, knowledge about exposure and effects of chemicals in real-world ecosystems remains limited. Prediction of chemical effects in the context of ecotoxicological research and chemical regulation continues to focus on organism- or population-level responses established under simplified conditions while aiming to protect the functioning of ecosystems. A unified, comprehensive framework for the prediction of chemical effects in real-world ecosystems is still lacking. A major limitation of ecotoxicological studies considered in predictive modelling is that they rarely consider spatial dynamics (e.g. gene flow or species dispersal) as relevant processes influencing the trajectory of populations or communities, respectively. For instance, the spatial propagation of pesticide effects from polluted to least impacted sites has been predicted in several modelling studies but has not yet been characterised in the field.
The thesis starts in Chapter 1 with a brief introduction to chemical pollution in ecosystems, chemical effect prediction in ecotoxicology, and pesticides in freshwater ecosystems, then outlines the main objectives of the thesis. Subsequently, Chapter 2 presents a conceptual study about the current prediction of chemical effects in ecotoxicology and potential future avenues to improve the ecological relevance of effect predictions by addressing the integration of different levels of biological organisation (termed biological levels). The study shows that approaches and tools that currently contribute to the prediction of chemical effects can be attributed to three idealised perspectives: the suborganismal, organismal and ecological perspective. The perspectives focus on different biological levels and are associated with distinct scientific concepts and communities. They complement each other, so theoretical and empirical links between them may enhance prediction by capturing the entire phenomenon of chemical effects, from chemical uptake to ecosystem effects. Complex experimental studies accounting for eco-evolutionary dynamics are needed to cross barriers between biological levels as well as spatiotemporal scales. Overall, the conclusions of Chapter 2 may help to develop overarching frameworks for predicting chemical effects in ecosystems, including for untested species. Chapters 3 and 4 present a field study combined with laboratory analyses on the potential propagation of pesticides and their effects from agricultural stream sections to the edge of least impacted upstream sections, which can serve as refuges for many species. The study examines exposure and effects for different biological levels at three site types: the pesticide-polluted agricultural sites (termed agriculture), least impacted upstream sites (termed refuge) and transitional sites (termed edge) in six small streams of south-west Germany. The results in Chapter 3 show that regional transport of pesticides can lead to ecologically relevant pesticide exposure in forested sections within a few kilometres upstream of agricultural areas (i.e. at both edge and refuge sites). As further demonstrated in Chapter 3, the tested indicators of community responses (Jaccard Index, taxonomic richness, total abundance, SPEARpesticides) together suggest a species turnover from upstream refuge to downstream agricultural sites and a potential influence of adjacent agriculture on the edge sites. In contrast, Chapter 4 does not identify any particular edge effect that distinguishes organisms and populations at edge sites from those at more upstream refuge sites. Gammarus fossarum populations at edges show levels of imidacloprid tolerance, energy reserves (i.e. lipid content) and genetic diversity equal to those of populations further upstream. Gammarus spp. from agricultural sites exhibit a lower imidacloprid tolerance compared to edge and refuge sites, potentially due to energy trade-offs in a multiple stressor environment, but the related effects do not propagate to the edges (Chapter 4). Notwithstanding, the results of Chapter 4 indicate bidirectional gene flow between site types, supporting the hypothesis that adapted genotypes – if present at locally polluted sites – could spread to populations at least impacted sites. Taken together, Chapters 3 and 4 illustrate that pesticides and their effects can potentially propagate to least impacted upstream sections, findings that are, to our knowledge, empirically novel.
The results of this thesis can help in predicting or explaining population and community dynamics in least impacted habitats and can ultimately inform pesticide management as well as freshwater restoration and the protection of biodiversity.
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities, in certain cases, tend to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the use of the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
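For orientation, the discussion above refers to the standard saddle-point form of the discretized Stokes equations (signs and scaling chosen for exposition; the thesis's notation may differ):
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad
S = B A^{-1} B^{T},
\]
where \(A\) is the discrete vector Laplacian acting on the velocity \(u\), \(B\) the discrete divergence operator, and \(S\) the pressure Schur complement. The Uzawa approach solves \(S p = B A^{-1} f\) with (a multiple of) the identity as preconditioner, which is efficient precisely when \(S\) is spectrally close to the identity; the SIMPLE-type alternative replaces \(A^{-1}\) by \(\mathrm{diag}(A)^{-1}\), yielding the diffusion-like preconditioner \(\tilde{S} = B\,\mathrm{diag}(A)^{-1} B^{T}\) that remains effective for large surface-to-volume ratios.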
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
Particulate matter has been considered an indicator for the pollution of urban stormwater runoff for quite some time. There are only a few studies that have investigated the contamination with organic micropollutants and metals both in the dissolved and particulate phase as well as across different particle size classes. Yet, this distribution plays an important role in better understanding and optimising urban stormwater treatment measures. Therefore, this work aimed at assessing the composition of particulate matter in urban stormwater in terms of its physico-chemical properties (particle size distribution and organic content), as well as the occurrence of organic micropollutants and metals, their association with particulate matter and their removal from urban runoff. An intensive long-term monitoring campaign at a centralised stormwater treatment facility of an industrial area was conducted. The stormwater runoff was sampled with large volume sampling tanks filled volume-proportionally to the runoff at the two outlets of the facility. This allowed the determination of the event mean concentrations as well as the load-related removal efficiencies of the treatment facility for different parameters. Within each sample, the concentrations of total suspended solids across different particle size fractions (< 63 µm, 63 – 125 µm, 125 – 250 µm, 250 – 2000 µm) were measured, as well as their organic content. Furthermore, the concentrations and the phase distribution of 5 metals (chromium, copper, zinc, cadmium, lead) and 29 organic micropollutants including polycyclic aromatic hydrocarbons, industrial chemicals (e.g. organophosphates, alkylphenols) and biocides were analysed across different particle size fractions. In this study, over a period of almost 2.5 years, a total of 36 sampling events were recorded and investigated within two sampling periods (2015 – 2016 and 2017 – 2019) at the rainwater treatment facility in Freiburg Haid. The occurrence of organic micropollutants was determined in 22 of these events and the occurrence of metals in 17. The evaluation of the event mean concentration of total suspended solids showed that the fine fraction of the solids is of particular importance, as it showed an event mean concentration more than twice as high (34 mg \(L^{-1}\)) as that of the coarser particle fraction (14.9 mg \(L^{-1}\)). Regarding the occurrence of total suspended solids in terms of the transported solid load, the solids < 63 µm accounted for a mean proportion of 61 %, the fraction 63 – 125 µm for 13 %, the fraction 125 – 250 µm for 6 % and the fraction 250 – 2000 µm for 9 % of the total solid mass. In terms of the organic content of the solids, the results showed a clear increase of the organic content with increasing particle size (measured as loss on ignition).
As in the case of solids, the highest concentrations of the organic micropollutants and metals investigated were found in the particle size fraction < 63 µm. This fine fraction of the particles also accounted for the largest load of organic micropollutants and metals. Therefore, the particle loading with organic micropollutants or metals, i.e., the particle-bound micropollutant/metal concentration, was calculated in this study. For most substances, a rather equal distribution over the smallest three particle size fractions was found. A certain correlation of the organic content with the occurrence of organic micropollutants and metals could be shown; it can therefore be assumed that the particle-bound concentration is influenced by the organic content of the particulate matter. However, due to the fact that, among other things, the largest particle-bound pollutant loads are transported with particles < 63 µm, the fine fraction represents the relevant particle size in urban stormwater runoff. Regarding the total treatment efficiency (including sedimentation efficiency and volume retention), the investigated facility in this study was able to reduce the load of fine particles by only a quarter. The larger particle size classes were reduced by far more than half in most cases. If total suspended solids in their entire particle size range were used as a proxy to estimate the removal efficiency of metals and organic micropollutants, the efficiency would be overestimated and the actual pollutant load released into the environment would thus be underestimated. However, the investigation of whether the particle size fraction < 63 µm would be a more suitable proxy showed that even for substances with a high tendency to adsorb onto particles (e.g. Cr, Cu, IND, GHI), the total treatment efficiency was still overestimated by the fine fraction.
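The load-based bookkeeping described above can be illustrated with a minimal sketch (the event volume and outflow concentration below are illustrative assumptions; only the 34 mg \(L^{-1}\) event mean concentration is taken from the text):

    # Minimal sketch of load-related removal efficiency per particle fraction.
    def event_load_kg(conc_mg_per_L, volume_L):
        return conc_mg_per_L * volume_L / 1e6     # mg -> kg

    def removal_efficiency(load_in_kg, load_out_kg):
        return 1.0 - load_out_kg / load_in_kg     # fraction of the load retained

    # Example: a fine-fraction (< 63 um) load reduced by about one quarter
    inflow = event_load_kg(34.0, 5.0e5)    # event mean concentration 34 mg/L
    outflow = event_load_kg(25.5, 5.0e5)   # illustrative outflow concentration
    print(f"fine-fraction removal: {removal_efficiency(inflow, outflow):.0%}")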
Previous research has shown the importance of early science, technology, engineering, and math education for children’s knowledge, as it establishes a groundwork for their later learning and academic achievement. However, the engagement of preschool teachers especially in science learning activities is infrequent, and some teachers still express the belief that science education is inappropriate for the early childhood years. Furthermore, there is a lack of clarity regarding the connections between teachers' attitudes (including their knowledge, beliefs, and willingness) towards teaching early science and their actual teaching practice, as well as the subsequent effects of teacher practice on children's learning outcomes. This dissertation primarily aims to clarify these associations. Block play offers the possibility to link scientific concepts (e.g., stability) to children’s everyday activities and thus represents an age-appropriate way to examine young children’s STEM-learning. The present dissertation encompasses three research articles, focusing specifically on the interplay between preschool teachers’ dispositions and practice in block play and 4- to 6-year-old children’s knowledge. The first article focused on the validation of a self-developed instrument to assess preschool teachers’ willingness to engage in science teaching and examined the predictive power of teachers’ willingness for teachers’ practice. Results suggested that the instrument measured teachers’ willingness reliably and validly; however, teachers’ willingness did not predict their practice in block play. The second article examined the relationship between the preschool teachers’ instructional quality during block play and various aspects of children's knowledge. Specifically, the study explored how instructional quality in block play influenced children's knowledge in stability, math, and spatial language. Additionally, children’s academic self-concept and cognitive aspects (i.e., intelligence, working memory) were considered. Results implied that preschool teachers’ scaffolding activities were related to children’s stability knowledge in block play. Moreover, teachers’ instructional quality was positively correlated with children’s academic self-concept in block play. The primary focus of the third article was on implementing a block play curriculum. Therefore, study 3 employed a longitudinal design to assess the effectiveness of a teacher training on teachers’ practice with the curriculum, which included both guided and free play. Teachers were randomly assigned to either a control group or an experimental group. The experimental group received training with the block play curriculum, while the control group did not receive any training. Results showed no change in teachers’ knowledge before and after training. Nonetheless, teachers in the experimental group applied more scaffolding after the training. Furthermore, preschool teachers applied more scaffolding during guided than during free play. Children’s math score in the experimental group, but not in the control group, significantly improved from pre- to post-test. In the general discussion, the findings of the three articles are reflected in the light of the interplay between teachers’ dispositions and their teaching practice as well as the impact of teacher practice on children’s knowledge.
In addition, the discussion reflects on methodological difficulties of empirical studies in early childcare settings, providing a prospective view on multimethod approaches for future research. Taken together, the present dissertation contributes to a more profound understanding of how teacher practices and children's knowledge interact. Further, the research holds great relevance for practical application, as it illustrates the differential effects of teacher training on preschool teachers’ knowledge and their teaching practice.
Nitrogen removal from wastewater is increasingly important to protect natural water sources and has proven a challenge for wastewater treatment plants in different countries. Strict discharge norms for nitrogen components and unfavourable wastewater quality are among the main challenges observed.
An example WWTP (450,000 PE\(_{COD,120}\)), representative of these challenges (i.e. a strict discharge norm for NH4-N and TN and a partially unfavourable wastewater composition for upstream denitrification), was modelled with the software SIMBA. The model was calibrated and validated using different statistical parameters. The model was then used for dynamic simulation to test different operational and automation strategies to improve nitrogen removal.
The tested strategies considered the bypass of primary clarifiers, changes in the configuration of the anaerobic, anoxic, and aerobic reactors, changes in the aeration system (DO setpoint, inclusion of online sensors, and different control approaches in the aeration loop), the adjustment of the internal recirculation rate, and the implementation of intermittent denitrification, among others. The addition of an anaerobic digestion stage, considering the adjustment of the sludge age in the biological treatment and the treatment of the centrate (including the nitrogen backload), was tested as well.
To evaluate the strategies' performance, an evaluation criteria chart was created to select the best strategies from an overall perspective, considering the improvements or deterioration in norm compliance, aeration requirements, pollutant emissions to the environment, and biogas production (if applicable).
The best overall results were obtained with strategies that aimed to improve the denitrification capacity (e.g. increasing the anoxic volume by reducing the aerobic volume), adjusted the air requirements (e.g. inclusion of an NH4-N online measurement in the aeration control loop), and provided flexibility (e.g. intermittent denitrification). With the right combination of strategies, norm compliance was significantly improved (e.g., the number of exceedances per year was reduced from 31 to 4), as were the emissions to the environment.
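One simple realization of such an NH4-N-based aeration strategy is a cascade that trims the DO setpoint of the aeration loop; the sketch below uses illustrative setpoints and gains, not the values from this study:

    # Hedged sketch: NH4-N-based trimming of the DO setpoint. The setpoint is
    # raised when effluent NH4-N exceeds its target and lowered (saving
    # aeration energy) when nitrification has headroom.
    def do_setpoint(nh4_mg_L, nh4_target=1.0, do_base=1.5,
                    gain=0.5, do_min=0.5, do_max=3.0):
        sp = do_base + gain * (nh4_mg_L - nh4_target)
        return min(max(sp, do_min), do_max)

    print(do_setpoint(2.4))   # NH4-N above target -> DO setpoint raised to 2.2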
The inclusion of an anaerobic digestion stage for sewage sludge treatment challenges nitrogen removal even further, but similar optimisation strategies based on the same approach were able to improve norm compliance.
However, none of the combinations, with or without anaerobic digestion, achieved total norm compliance. Therefore, a treatment stage based on a different technology than A2/O, namely an SBR, was designed, providing increased operational flexibility: the A2/O system in the computer model was replaced by an SBR process. This configuration showed the best results based on the criteria previously defined, achieving total norm compliance.
Based on the learnings from the design, redesign, and strategies tested, a guideline for an integral optimisation of nitrogen removal was developed, resting on six pillars: a detailed operational analysis of the WWTP, the use of dynamic simulation as a tool, the testing of known and simple optimisation approaches, the definition of clear and objective evaluation criteria, the consideration of anaerobic digestion (and its backload), and, finally, the re-evaluation of the type of technology used for biological wastewater treatment.
This dissertation project aims to examine the potential of network modelling, an increasingly popular methodology in emotion research (e.g., Fried et al., 2016), to better comprehend age-related differences in the structural connections between cognitive processes such as fluid intelligence and executive control functions. Furthermore, it aims to identify the key variables that link self-regulation to executive control functions and to characterize age-related differences in these links. Lastly, it seeks to delve into the key variables and correlations between executive control functions, self-regulation, and affect, utilizing a longitudinal design in combination with machine learning as a data-driven method.
In study 1, differences between the cognitive performance networks of younger (M = 38.0 years of age, SD = 9.9) and older (M = 64.1 years of age, SD = 7.7) adults were explored. Network modelling showed that while speeded attention is essential throughout the life-span, connections between fluid intelligence and working memory were stronger, and intelligence was more central in the older group. Additionally, confirmatory factor modelling demonstrated that latent correlations were highest between working memory and intelligence, particularly in older adults, whereas inhibition had the lowest correlations with other abilities. This research suggests that the relations of cognitive abilities may differ between younger and older adults, indicating process-specific changes in the cognitive performance network.
In study 2, we investigated the connections of self-regulation (SR) and executive control functions (EF), which are theoretical concepts encompassing various cognitive abilities supporting the regulation of behavior, thoughts, and emotions (Inzlicht et al., 2021; Wiebe & Karbach, 2017). Evidence, however, implies that correlations between self-report measures and performance-based tasks are often difficult to observe (e.g., Eisenberg et al., 2019). We investigated connections and overlap between different aspects of SR and EF in a life-span sample (14-82 years). Participants completed several self-report measures and behavioral tasks, such as sensation seeking, mindfulness, grit, or eating behavior questionnaires and working memory, inhibition, and shifting tasks. Network models for a youth, middle-aged, and older-aged group were estimated to identify key variables that are well connected in the SR and EF construct space. In general, stronger connections were observed within the clusters of SR and EF than between them, and older adults appeared to have more connections between SR and EF than younger individuals, probably because of declining cognitive resources.
In study 3, we analyzed the intricate links between EF, SR and affect, as well as individual differences in these relations. Bridgett et al. (2013) proposed that EF and SR are psychological constructs that support the regulation of cognition and affect. A total of 315 participants, aged 14 to 80, answered questionnaires and took part in behavioral tasks which evaluated EF, SR, and both positive and negative affect twice (one month apart). Combined X-means and deep learning algorithms aided in the separation of two distinct groups that featured different EF performances, SR tendencies, and affective experiences. Network model analysis was then utilized to confirm the connections between the EF, SR, and affect variables in each of the two groups. The two groups displayed a maximal centrality for variables linked to SR and positive affect. Group membership remained mostly consistent (85%) across both measurement occasions. Logistic regression indicated that age and personality (conscientiousness, neuroticism, and agreeableness) predicted group membership. This sheds light on stable individual differences in the complex relations of EF, SR, and affect.
This dissertation project utilized a combination of standard approaches (such as confirmatory factor analysis; CFA) and advanced approaches (such as network models, machine learning algorithms, and deep learning) to explore the connections between cognitive abilities, EF, SR, and affect. Our findings are in line with the theory of process-specific changes in age-dedifferentiation. Findings suggested that connections between SR and EF were stronger within clusters, and positive affect was better connected to SR than EF measures. Lastly, age and personality traits were found to predict the clusters. These findings suggest that computational modelling is an effective exploratory tool for understanding how cognitive abilities and other psychological constructs may interact. Further research is necessary to gain further insights into the mechanisms behind differences in network structures.
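For readers unfamiliar with the method, psychological network models of this kind are typically estimated as regularized partial-correlation networks; a minimal sketch (the studies' exact estimator may differ) is:

    # Sketch of a regularized partial-correlation network, the common
    # edge-estimation strategy in psychological network modelling.
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    def partial_correlation_network(X):
        """X: (participants, variables) matrix of standardized scores.
        Returns the matrix of regularized partial correlations (edge weights)."""
        prec = GraphicalLassoCV().fit(X).precision_
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)   # scale precision to partial correlations
        np.fill_diagonal(pcorr, 0.0)
        return pcorr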
This thesis deals with modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continued with the components to set up a graph-based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single–consumer single–producer network, to a generalized problem suitable to model simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real world DHN and optimize their performance by optimal control, as indicated in a case study.
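As a minimal illustration of the transport equations involved, the following sketch discretizes the advection of thermal energy along a single pipe with a first-order upwind scheme (an assumed discretization for exposition, not necessarily the thesis's scheme):

    # Illustrative sketch: explicit first-order upwind transport of
    # temperature along one pipe of a district heating network.
    import numpy as np

    def advect_temperature(T, v, dx, dt, T_in):
        """T: (N,) temperatures along the pipe; v: flow velocity (> 0, towards
        increasing index); T_in: inlet temperature. Requires v*dt/dx <= 1."""
        c = v * dt / dx
        assert 0.0 <= c <= 1.0, "CFL condition violated"
        T_new = T.copy()
        T_new[1:] = T[1:] - c * (T[1:] - T[:-1])   # upwind difference
        T_new[0] = T_in                            # inflow boundary condition
        return T_new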
In one-dimensional (1-D) ultrasound (US) measurements, signals are acquired that form the basis of more sophisticated two-dimensional (2-D) or three-dimensional (3-D) US imaging. These 1-D signals contain a lot of raw information about the US wave propagation and its interaction with the medium that is only partially processed during image generation. While image representations are easy for humans to interpret, the analysis of US wave signals is hard to perform without applying algorithms to extract the desired features.
This work investigates reliable and fast 1-D US signal classifications to distinguish between different stages or states in biomedical US scenarios and shows how the new field of Machine Learning (ML) on raw US wave data provides advantages and enables different applications. To achieve good results, the input signals are treated as time series, which requires the deployment of comparatively complex Time Series Classification (TSC) algorithms.
The literature shows that previous research efforts have largely tackled only the classification and segmentation of US Brightness mode (B-Mode) images, while approaches to classify 1-D signals have been neglected to a large extent.
This research contributes by developing, deploying and evaluating classification approaches for three distinct biomedical US classification tasks and finds that the respective signal classifications for different scenarios are possible with varying degrees of accuracy. It entails the comparison of several combinations of data types (e.g. temporal, spectral and statistical features or raw signals), ML models and pre-processing steps to provide a strong foundation for robust, binary classifications of 1-D US signals in scenarios based on low-cost wearable, mobile and stationary devices. This research addresses scientific questions not answered before by reporting detailed descriptions of beneficial domain-specific knowledge (DSK), achieved accuracies and the times needed for training and evaluation of the examined ML models.
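One of the compared pipeline variants can be sketched as follows (a hedged, simplified example: the feature set, model and split are illustrative assumptions, not the thesis's exact configuration):

    # Sketch of a binary 1-D US signal classification pipeline: simple
    # statistical features extracted from raw signals feed a conventional
    # classifier, evaluated with the F1-score.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def statistical_features(signals):
        """signals: (n, length) raw 1-D US signals -> per-signal features."""
        return np.column_stack([signals.mean(1), signals.std(1),
                                signals.min(1), signals.max(1),
                                np.abs(np.diff(signals, axis=1)).mean(1)])

    def evaluate(signals, labels):
        X = statistical_features(signals)
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
        clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
        return f1_score(y_te, clf.predict(X_te))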
The resulting ML pipelines include solutions based on data acquired from custom experimental setups or clinical trials. Possible real-world applications might include muscle contraction trackers, muscle fatigue detectors, epiphyseal radius bone closure detectors or devices providing information about advanced liver disease stages.
Automated machine-assisted classifications requiring as little DSK as possible from the end user enable application scenarios ranging from fitness or rehabilitation trackers as consumer devices to solutions providing diagnostic support without requiring extensive knowledge from professional medical practitioners, for example decision support systems for bone age assessment in clinical use or liver health assessment systems for gastroenterologists.
This work shows that reliable, robust and fast classifications based on 1-D US signals are possible with high degrees of accuracy depending on the examined scenario, with achieved \(F_1\)-scores ranging from ≈ 70% to ≈ 87%. These results demonstrate that real-life applications for recreational purposes are already possible and that critical applications for clinical use are highly likely to be achievable once the presented approaches are further optimized.
A new class of amines that are promising solvents for reactive CO2-absorption processes was thoroughly investigated in a comprehensive experimental study. The amines are all derivatives of triacetoneamine and differ only in the substituent of the triacetoneamine ring structure. These amines are abbreviated by the acronym EvA with a consecutive number that designates the derivatives. About 50 EvAs were considered in the present study, of which 26 were actually synthesized and investigated as aqueous solvents. The investigated properties were: solubility of CO2, rate of absorption of CO2, liquid-liquid and solid-liquid equilibrium, speciation (qualitative and quantitative), pK-values, pH-values, foaming behavior, density, dynamic viscosity, vapor pressure, and liquid heat capacity. All 26 EvAs were assessed in an experimental screening. The results were compared with the results of two standard solvents from industry: aqueous solvents of monoethanolamine (MEA) and a solvent blend of methyl-diethanolamine and piperazine (MDEA/PZ). Detailed studies were carried out for two EvAs that revealed significantly improved performance compared to MEA and MDEA/PZ: EvA34 combines favorable properties of MEA and MDEA/PZ in one molecule. EvA25 reveals a liquid-liquid phase split that reduces the solubility of CO2 in the solvent and shifts the CO2 into the aqueous phase. This allowed the design of a new CO2-absorption process that takes advantage of the liquid-liquid phase split. Finally, the chemical speciation in 16 EvAs was investigated by NMR spectroscopy. From the results, relationships between the chemical structure of the EvAs and the observed speciation, basicity, and application properties were established. This enabled giving guidelines for the design of new amines and proposing new types of amines, which were called ADAMs.
This thesis aims to establish a transient electro-thermomechanical model capable of characterizing the shape-morphing capabilities of shape memory alloy hybrid composites (SMAHCs). The particular SMAHC type examined in this study comprises a rigid substrate, a soft interlayer, and SMA wires sewn on top. The model was synthesized from the bottom up using well-established equations, methodologies, and solution procedures, taking into account appropriate simplifications and assumptions. The implementation was done with open-source solutions to ensure free availability. The model extends existing models to include aspects of external influences so that, for example, the efficiency and dynamics of the SMAHC can be predicted as a function of external mechanical loads and different ambient temperatures. Inputs to the model include geometric and material design factors as well as Joule heating and ambient conditions, while outputs include the SMAHC’s deflection, load-carrying capacities, bandwidth, and energy consumption. Individual components of the SMAHC were characterized to create simulation input parameters, and methodologies for characterization were devised. The thermomechanical and electro-thermomechanical model was validated by comparing experimental and simulated data. Regardless of the various assumptions and simplifications, the findings demonstrate that the transient deformation behavior during the electrically induced thermal activation of a SMAHC at room temperature and external loads of less than 19.2 N can be predicted with deviations of less than 20 percent. With increasing mechanical stresses in the shape memory alloy attributable to external loads or rigid substrates and temperatures above the austenite start temperature or below -10°C, the model’s applicability may become unreasonable.
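The electro-thermal core of such a model can be illustrated by a lumped-capacitance sketch of Joule heating against convective losses (form and parameter names are assumptions for exposition; the thesis model is considerably more detailed):

    # Hedged sketch: explicit Euler step of m*c_p*dT/dt = I^2*R - h*A*(T - T_amb)
    # for an electrically heated SMA wire.
    def wire_temperature_step(T, dt, I, R, h, A, m, c_p, T_amb):
        """I: current [A]; R: wire resistance [Ohm]; h: convection coefficient
        [W/(m^2 K)]; A: wire surface [m^2]; m: mass [kg]; c_p: heat capacity
        [J/(kg K)]; T, T_amb: wire and ambient temperature [K]."""
        dTdt = (I**2 * R - h * A * (T - T_amb)) / (m * c_p)
        return T + dt * dTdt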
The field of 3D reconstruction is one of the most important areas in computer vision. It is not only of theoretical importance but is also increasingly used in practical applications, be it in reverse engineering, quality control or robotics. In practical applications, where high-precision reconstructions are required for a large variety of different objects, structured light reconstruction is often the method of choice. It makes it possible to achieve accurate and dense point correspondences over the entire scene, regardless of object texture or features. Techniques that project phase-shifted sinusoidal patterns are widely used because, based on the harmonic addition theorem, they theoretically allow surface encoding at full camera resolution, invariant to the object’s coloring.
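The underlying evaluation can be sketched with the classic N-step phase-shifting formula, a direct consequence of the harmonic addition theorem:

    # Sketch of N-step phase-shifting: recover the wrapped phase per pixel
    # from N equally phase-shifted sinusoidal pattern images, independent of
    # the per-pixel albedo and ambient offset.
    import numpy as np

    def wrapped_phase(images):
        """images: (N, H, W) camera images. Returns the wrapped phase."""
        N = len(images)
        k = np.arange(N)
        num = np.tensordot(np.sin(2 * np.pi * k / N), images, axes=1)
        den = np.tensordot(np.cos(2 * np.pi * k / N), images, axes=1)
        return np.arctan2(-num, den)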
In this thesis, a fully automatic reconstruction pipeline based on the sinusoidal structured light technique is presented. From the projection of the fringe patterns for encoding the object’s surface, through the robust matching of point correspondences with sub-pixel accuracy and the auto-calibration of the setup including the active device, up to the fully automatic alignment of the partial reconstructions, all steps will be described and examined in detail. In the course of this, improvements will be presented in the area of matching, obtaining highly accurate and topologically consistent correspondences with sub-pixel precision between all the devices used. Furthermore, the auto-calibration from point correspondences, based on the epipolar geometry of the structured light system, is improved. Weaknesses of previous methods in the extraction of focal lengths from the fundamental matrices are discovered and addressed. The partial point clouds, reconstructed from the auto-calibrated devices, are finally pre-aligned using a neural network approach based on light-resistant optical flow estimation and subsequently refined using a global approach.
The weaknesses of the structured light method itself will also be addressed and partially remedied in the course of this work. Since it is an active reconstruction method, certain surface properties can affect the quality of the reconstruction. It will be shown how these problems can be eliminated, or at least reduced, using an iterative approach that combines fringe patterns with an inverse texture. Another weakness of the method is its time-consuming acquisition procedure: typically, a large number of horizontal and vertical fringe patterns are projected onto the scene to achieve high-precision encoding despite the limited dynamic range and resolution of the projector. Therefore, a method will be presented which makes it possible to combine the horizontal and vertical patterns into a simultaneous two-dimensional surface encoding.
The intensive use of pesticides is one of the main causes for global arthropod decline which can subsequently affect ecosystem services such as pollination, natural pest control, and soil fertility and cascade to higher trophic levels including bats and birds. However, agriculture in large parts is strongly dependent on pesticides, and viticulture in particular is one of the major consumers of fungicides. Fungus-resistant grape varieties offer a very good opportunity to reduce fungicide applications by more than 80 % while maintaining healthy grapes. Here, the effects of fungicide reduction on arthropods and natural pest control were investigated on the one hand in a long-term study in an experimental vineyard and on the other hand in 32 commercially managed vineyards in southwestern Germany. In both designs, fungicide reduction resulted in mostly positive effects on arthropods and natural pest control. Particularly beneficial arthropods such as predatory mites and spiders were promoted by reduced fungicide applications. Contrastingly, potential vineyard pests such as phytophagous mites and leafhoppers decreased under fungicide reduction. Fungus-resistant grape varieties are thus a promising approach to foster resilient agroecosystems and a more sustainable viticulture.
Northwest Africa is predicted to undergo a climatic shift from a temperate to an arid climate resulting in increased aridity, water salinity, and river intermittency. These changes have the potential to impact freshwater communities, ecosystem functioning, and related ecosystem services. However, there is still limited data on the impact of climate change and salinity on river ecosystems and the people depending on them, particularly in understudied regions such as Northwest Africa. In this dissertation, I focus on the Draa River basin in southern Morocco to assess the primary factors shaping and altering macroinvertebrate communities. A particular focus is placed on the impacts of salt on the ecosystem and the consequences for human well-being. We conducted a meta-analysis covering 195 sites in Northwest Africa to examine the responses of insect communities and their trait profiles to climate change and anthropogenically induced stressors. To exclude large-scale geographic patterns such as variations in climate conditions, we conducted a confluence-based study focusing on tributaries and their joint downstream sections near three confluences in the Draa River basin. Additionally, we investigated the water and biological quality of 17 further sites, aiming to explore the relationship between human well-being and the ecosystem. Our approach involved conducting water measurements, biological monitoring, and household surveys to create water, biological, and human satisfaction indices. Our findings revealed that insect family richness in arid sites of Northwest Africa was, on average, 37 % lower than in temperate sites. Among the strongest factors contributing to reduced richness and low biological quality were low flow and high water salinity. Based on the results of the confluence study, only around five taxa comprised over 90 % of specimens per site, with a higher proportion of salt-tolerant generalist species in saline sites. Resistance and resilience traits such as small body size, aerial dispersal, and air breathing were found to promote survival in arid and saline sites. However, low γ-diversity in the basin caused minimal differences in macroinvertebrate community composition, suggesting that the community was generally adapted to the arid climate. We observed positive associations between river water quality and biological quality indices. However, no significant associations were found between these indices and human satisfaction. Human satisfaction was particularly low in the Middle Draa, where 89 % of respondents reported emotional distress due to water salinity and scarcity. Inhabitants of areas characterized by higher levels of water salinity and scarcity generally rated drinking and irrigation water quality lower. Considering that large parts of Northwest Africa will become arid by the end of the century, we can expect a loss of macroinvertebrate diversity affecting the entire ecosystem, which might potentially affect human well-being negatively. To protect the integrity of the ecosystem in the face of ongoing climate change, it is crucial to limit anthropogenic stressors such as secondary salinization and the pressures on water resources. Protecting both more and less saline rivers, preserving natural water flow, and maintaining connectivity between habitats will make it possible to maintain the Draa River's biodiversity, ensure ecosystem functioning, and benefit inhabitants through ecosystem services.
Future policies and action plans should consider the interdependence between ecosystems and human inhabitants to enhance overall well-being.
In recent decades, there has been a strong global decline in biodiversity which is attributed, among other reasons, to intensified agriculture and the loss of habitats. Due to the significant ecological impacts it is crucial to comprehensively understand how management practices and the surrounding landscape affect species, as well as how these factors influence their populations over the long term. We studied the influence of weather and trapping effort on multi-day Malaise trap sampling, examining their effects on long-term monitoring data. We further explored how vineyard management and the presence of semi-natural habitats (SNH) affect arthropods in the wine-growing region Palatinate in southwest Germany.
We evaluated the impact of ambient weather conditions and trapping effort during Malaise trap exposure on biomass and taxa richness using metabarcoding. Insect activity was highest when the weather was warm and dry. Taxa accumulation increased fourfold from three days of monthly trapping to continuous trap exposure and nearly sixfold from sampling at a single site to 32 sites. Common species are likely to be captured with short trapping durations and a small number of sampling sites, while it remains challenging to comprehensively sample rare species. Metabarcoding provides a valuable method for long-term monitoring. However, additional sequencing efforts are required to establish more comprehensive DNA databases.
Furthermore, we investigated how organic and conventional management, reduction of pesticides, and SNH in the surrounding landscape affect arthropod diversity in vineyards. Biodiversity was assessed in 32 vineyards in a crossed design of management (organic vs. conventional) and pesticide use (regular vs. reduced in fungus-resistant grape varieties). The pairs of vineyards were located in 16 landscapes, with increasing proportions of SNH in the surrounding area of the vineyards. We measured the biomass of captured specimens and used metabarcoding to assess the general arthropod biodiversity. Furthermore, we used morphological and acoustic species identification to investigate effects on wild bees and orthopterans. Biomass was almost one-third higher in conventional compared to organic vineyards, while organic vineyards had almost 50 % more bees. Densities of herb-dwelling orthopterans were 2.9 times higher in fungus-resistant compared to classic grape varieties under organic management. Higher proportions of SNH increased arthropod richness as well as abundance and richness of above-ground-nesting bees and further changed community composition of arthropods, including wild bees and orthopterans. Increased inter-row vegetation had positive effects on various groups of organisms. Our studies on the influence of vineyard management show that reducing pesticide use, particularly under organic management, can enhance sustainability in viticulture and promote biodiversity. Moreover, further species benefit from diverse inter-row vegetation and SNH in the surrounding landscape. We conclude that the cultivation of fungus-resistant grape varieties is of importance to minimize the need for non-specific pesticides, while it is also important to provide diverse vegetation in inter-rows and create a structurally rich environment with suitable SNH to conserve biodiversity in viticulture.
During our daily lives, we are confronted with vast amounts of data, the processing of which can dramatically influence our lives, both positively and negatively. The enormous amount of data (images, texts, tables, and time series), its variety and its possible applications are not always obvious. Due to advancements in the internet of things (IoT), billions of sensors produce time series that can be found everywhere, whether in medicine, the financial sector or the agricultural economy. This incredible amount of time series data holds many hidden features which are useful for industry as well as for daily use; e.g., improved cancer prediction can save human lives. Recently, several deep learning methods have been proposed for analyzing this time series data. However, due to their black-box nature, their applicability is limited in critical sectors like medicine, finance, and communication. In addition, the artificial intelligence (AI) Act and the General Data Protection Regulation (GDPR) now make it compulsory to protect sensitive data and to provide explanations in safety-critical domains. To enable the use of DNNs in a broader domain scope, this thesis presents TimeFrame, a framework for privacy-preserving and interpretable time series analysis. TimeFrame consists of four main components, namely post-hoc interpretability, intrinsic interpretability, direct privacy, and indirect privacy. Interpretability is indispensable to avoid harming people or infrastructure. In past years, development mostly focused on image data, which prevented the full potential of DNNs in time series processing from being exploited. To overcome this limitation, TimeFrame introduces five novel post-hoc (Time to Focus, TSViz, TimeREISE, TSInsight, Data Lens) and two novel intrinsic (PatchX, P2ExNet) interpretability components. TimeFrame addresses multiple perspectives such as attribution, compression, visualization, influence, prototyping, and hierarchical splitting. Compared to existing methods, the components show better explanations, robustness, and scalability. Another crucial factor when dealing with sensitive data and deep learning is privacy. In this context, TimeFrame introduces two components for direct privacy (PPML, PPML x XAI) and one component for indirect privacy (From Private to Public). These components benchmark privacy approaches, their effect on interpretability, and the synthetic generation of data to overcome privacy concerns. TimeFrame offers a large set of interpretability and privacy components that can be combined and that consider numerous different aspects. Furthermore, the novel approaches have been shown to consistently outperform twenty existing state-of-the-art methods across up to 20 different datasets. To guarantee fairness, various metrics were used, including performance change, Sensitivity, Infidelity, Continuity, runtime, model dependency, compression rate, and others. This broad set of metrics makes it possible to provide guidelines for a more appropriate use of existing state-of-the-art approaches as well as of the novel components included in TimeFrame.
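As an illustration of the attribution perspective, a generic perturbation-based (occlusion) attribution for a time-series classifier can be sketched as follows; this is a textbook baseline for exposition, not a reimplementation of the named TimeFrame components:

    # Hedged sketch: occlusion attribution for a 1-D time-series classifier.
    # Windows whose masking reduces the class probability most are deemed
    # most important for the prediction.
    import numpy as np

    def occlusion_attribution(predict, x, window=8, baseline=0.0):
        """predict: callable mapping a (length,) series -> class probability;
        x: (length,) time series. Returns per-timestep relevance scores."""
        ref = predict(x)
        scores = np.zeros(len(x))
        for start in range(0, len(x), window):
            x_pert = x.copy()
            x_pert[start:start + window] = baseline   # occlude one window
            scores[start:start + window] = ref - predict(x_pert)
        return scores                                 # high = important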
Individual thermal comfort in buildings, especially in office workplaces, is becoming increasingly
important in modern society. While technical devices for user-specific heating are well known and
implemented, only a few proven methods for individual cooling of a single person are available, most
of which are limited to convective heat transfer.
The primary goal of this research was the development of an effective and efficient cooling system
for individual building occupants based on longwave radiation exchange. To achieve this, the
technological concept of a thermoelectric cooling partition with latent heat storage (Thecla) was
developed. The system combines Peltier elements and heat storage based on a phase change
material to provide a tempered surface for directional radiative cooling of a person.
Thecla has been practically evaluated in the form of real prototypes in hardware tests and human
subject studies. In addition, the concept was evaluated theoretically through precise thermodynamic
analyses of each individual component and of the overall system. Based on these assessments, an
explicit computational model of Thecla was developed, which calculates the thermodynamic
behavior and energy balance of the system for varying environmental and operating parameters.
Coupled with measured and simulated building energy data, the overall energy efficiency of Thecla in
combination with central space cooling systems was assessed.
The analysis suggests that the system concept of the thermoelectric partition is effective for
individual user cooling. Thecla provides a perceptible and measurable cooling effect associated with
a reduction in the overall thermal sensation. The applied technologies allow cooling operation over
relevant periods of time and, through latent heat storage, a temporal shift of cooling loads in
buildings. For realistic application scenarios in buildings with central air conditioning, the
energy-saving potential of using Thecla was demonstrated and quantified.
Toxicology, the study of the adverse effects of chemicals and physical agents on living organisms, is a critical process in chemical and drug development. The low throughput, high costs, limited predictivity, and ethical concerns related to traditional animal-based toxicity studies render them impractical for assessing the growing number and complexity of both existing and new compounds and their formulations. These factors, together with the increasing implementation of more demanding regulations, underscore the current need to develop innovative, reliable, cost-effective, and high-throughput toxicological methods.
The use of metabolomics in vitro presents the powerful combination of a human-relevant system with a multiparametric approach that allows assessing multiple endpoints in a single biological sample. Applying metabolomics in a cell-based system offers an alternative both to the ethical concerns and limited relevance of animal testing and to the restraining nature of single-endpoint evaluations characteristic of conventional toxicological in vitro assays. However, there are still challenges that hamper the expansion of metabolomics beyond a research tool to a feasible and implementable technology for toxicological assessment.
The aim of this dissertation is to advance the applications of in vitro metabolomics in toxicology by addressing three major challenges that have limited its widespread implementation in the field. In chapter 2, the restrictive high cost and low throughput of in vitro metabolomics were addressed through the development, standardization, and proof of concept of a high-throughput targeted LC-MS/MS in vitro metabolomics platform for the characterization of hepatotoxicity. In chapter 3, the use of the developed in vitro metabolomics system was expanded beyond hazard identification to its implementation for deriving dose- and time-response metrics that were shown to be useful for point-of-departure (PoD) estimations in human risk assessment. Finally, in chapter 4, in order to increase the reliability of and confidence in using in vitro metabolomics data for risk assessment, we attempted to improve the human relevance of the metabolomics in vitro assays through the implementation and evaluation of in vitro metabolomics in a hiPSC-derived 3D liver organoid system.
The work developed here demonstrates the suitability of in vitro metabolomics for mechanism-based hazard identification and risk assessment. By advancing the applications of metabolomics in toxicology, this work contributes significantly to the aim of 21st-century toxicology of human-relevant, non-animal toxicological testing, supporting toxicology's task of protecting human health and the environment.
This dissertation contributes to the emerging research field on men’s underrepresentation in communal domains such as health care, elementary education, and the domestic sphere (HEED). Since these areas are traditionally associated with women and therefore counter-stereotypic for men, various barriers can hinder men’s higher participation. We explored these relations using the example of how men’s interest in parental leave – as a form of communal engagement – is shaped across different stages of the transition to fatherhood. Specifically, we focused on how gendered beliefs regarding masculinity and fatherhood, the possible selves men can imagine for their future, and the social support men receive from their normative environment relate to their intentions to take parental leave and their engagement in care more broadly. In Chapter 2, using experimental designs, we examined how different representations of a prototypical man, varying in stereotypic agentic and counter-stereotypic communal content, affect men’s hypothetical intentions to take leave and their communal possible selves. Findings suggested that a combined description of a prototypical man as agentic and communal tended to increase men’s parental leave-taking intentions as compared to a control condition. In line with contrast effects, an exclusively agentic male prototype also tended to push men towards more communal outcomes. In Chapter 3, in a cross-sectional examination of the parental leave-taking intentions of expectant fathers, we found initial evidence for a link between male prototypes and men’s behavioral preferences to take parental leave after birth. Yet, the support that expectant fathers received from their partners for taking parental leave emerged as the strongest predictor of men’s leave-taking desire, intention, and expected duration. In Chapter 4, using longitudinal data collected during men’s transition to fatherhood, we studied discrepancies between men’s prenatal caregiver and breadwinner possible selves and their actual postnatal engagement in each domain. Results suggested that fathers, on average, expected and desired to share childcare and breadwinning rather equally with their partners but had difficulties translating their intentions into behavior. The extent to which fathers experienced discrepancies was related to their attitudes towards the father role and the social support they received for taking parental leave and engaging in childcare. Moreover, experiencing a mismatch between their expected, desired, and actual division of labor had consequences for fathers’ intentions to take parental leave in the future. Across the empirical chapters, we found that men generally had high communal intentions and did not consider care engagement as nonnormative for their gender. However, men continue to face barriers that prevent them from translating their communal intentions into behavior. We outline strengths and limitations of the present research given the emerging nature of the research field. Moreover, we discuss implications for future research on men’s orientation towards care as well as implications for how to foster the realization of communal intentions into actual behavior.
Highly Automated Driving (HAD) vehicles represent complex and safety-critical systems. They are deployed in an open context, i.e., an intricate environment which undergoes continual changes. The complexity of these systems and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behaviour. The safety-critical nature of HAD vehicles requires modelling of root causes for unsafe behaviour and their mitigation to argue sufficient reduction of residual risk.
Standardization activities such as ISO 21448 provide guidelines on the Safety Of The Intended Functionality (SOTIF) and focus on the analysis of performance limitations under the influence of triggering conditions that can lead to hazardous behaviour. SOTIF references traditional safety analysis methods, e.g., Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA), to perform safety analysis. These analysis methods are based on certain assumptions, e.g., single-point failure in FMEA and independence of basic events in FTA. Moreover, these analyses are generally based on expert knowledge, i.e., data-based models or hybrid approaches (expert and data) are seldom practised. The resulting safety model is fixed, i.e., it is generally seen as a one-time artefact. The open context may contain triggering conditions which are not evident to the expert. The open context also evolves over time, and new phenomena may emerge.
This thesis explores the applicability of traditional safety analysis techniques to provide safety models for HAD vehicles operating in the open context, in light of the modelling assumptions made by these techniques. Moreover, incorporating uncertainties into safety analysis models is also explored. An explicit distinction between the inherent uncertainty of a probabilistic event (aleatory) and uncertainty due to lack of knowledge (epistemic) is made to formalize models for performing SOTIF analysis. A further distinction is made for conditions of complete ignorance, termed ontological uncertainty. This distinction is important because, for HAD vehicles operating in the open context, ontological uncertainty can never be completely disregarded.
This thesis proposes a novel SOTIF framework to model, estimate, and discover triggering conditions relevant to performance limitations. The framework provides the ability to model uncertainties while also providing a hybrid approach, i.e., supporting the inclusion of expert knowledge as well as data-driven engineering processes. Two representative algorithms are provided to support the framework; Bayesian networks (BN) and p-value hypothesis testing are utilised in this regard. The framework is implemented on a real-world case study in which a LiDAR-based perception system is used for vehicle detection.
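To give a concrete flavour of such a hybrid model (a minimal sketch of our own; the variables, structure, and probabilities below are hypothetical and not taken from the case study), a triggering condition raising the probability of a performance limitation can be encoded as a small Bayesian network, e.g. with pgmpy:

```python
# Hypothetical SOTIF-style Bayesian network: a triggering condition ("Fog")
# increases the chance of a perception limitation ("MissedDetection"),
# which in turn can lead to hazardous behaviour. All numbers are invented.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Fog", "MissedDetection"),
                         ("MissedDetection", "Hazard")])
model.add_cpds(
    TabularCPD("Fog", 2, [[0.95], [0.05]]),     # P(clear), P(fog)
    TabularCPD("MissedDetection", 2,
               [[0.99, 0.80],                   # no miss | clear, fog
                [0.01, 0.20]],                  # miss    | clear, fog
               evidence=["Fog"], evidence_card=[2]),
    TabularCPD("Hazard", 2,
               [[0.999, 0.90],                  # safe   | no miss, miss
                [0.001, 0.10]],                 # hazard | no miss, miss
               evidence=["MissedDetection"], evidence_card=[2]),
)
assert model.check_model()

# Expert priors like these could later be refined from field data,
# which is the hybrid (expert + data) aspect described above.
print(VariableElimination(model).query(["Hazard"], evidence={"Fog": 1}))
```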
Inland waters, such as freshwater impoundments, are significant and variable sources of the greenhouse gas methane to the atmosphere. In water bodies, methane is mainly produced in the organic-matter rich bottom sediment, where it can accumulate, form gas voids, and be transported to the atmosphere by gas bubbles escaping the sediment. The bubble mediated transport of methane, known as methane ebullition, is a commonly dominant pathway of methane emissions in freshwater reservoirs. Ebullition results from a complex interplay of several simultaneous physical and bio-geochemical processes acting at different timescales, leading to highly variable fluxes in both space and time. Although the sediment matrix is a hot spot for gas production and accumulation, there is a lack of in-situ data on free gas storage in reservoirs and the interaction among sediment gas storage, methane budget, and methane ebullition. Several environmental variables are known to be ebullition drivers; however, simulating the temporal dynamics of ebullition and identifying the governing factors across different systems remains challenging. Therefore, the main goal of this thesis was to investigate the effect of different drivers on the spatial variability and temporal dynamics of methane ebullition in impoundments. Two contrasting reservoirs, one subtropical and one temperate, were investigated. High-frequency measurements of ebullition fluxes and environmental variables, and acoustic-based mapping of gas content in the sediment were performed in both reservoirs, constituting the dataset for this study. The main findings were presented in three scientific manuscripts. The spatial distribution of gas content in the sediment was primarily controlled by sediment deposition and water depth, with shallow regions of high sediment deposition being hot spots of free gas accumulation in the sediment. Temporal changes in gas content in the sediment were linked to the methane budget components in the reservoir and further influenced by the temporal dynamics of ebullition. While the sediment could store days of accumulated potential methane production, which could sustain months of mean ebullition flux, periods of intensified ebullition led to a depletion of gas stored in the sediment. Large spatial scale ebullition drivers, such as pressure changes, resulted in the synchronization of ebullition events across different monitoring sites. Nevertheless, the degree of correlation between ebullition and environmental variables varied from one system to another and over time. Thermal stratification was an important modulator in the relationship between ebullition and other environmental variables, such as bottom currents and turbulence. The temporal dynamics of ebullition could be captured and reproduced by empirical models based on known environmental variables. However, these models failed to reproduce the sub-daily variabilities of ebullition and demonstrated poor performance when transferred from one system to another. Lastly, although some questions remain unanswered, the findings from this study contribute to advancing the understanding of the complex dynamics of methane ebullition and its controls in freshwater reservoirs.
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width at most 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
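The following toy sketch (our illustration, not the thesis' algorithm) conveys the semi-decision character of the problem: given an enumeration of the graphs in \(\mathcal G\) of path-width at most \(k\) and a finite set \(\mathcal U\), it halts with a witness whenever a forbidden subgraph occurs, but may run forever otherwise.

```python
# Illustrative semi-decision procedure: scan an enumeration of graphs and
# stop once one of them contains a (not necessarily induced) subgraph
# isomorphic to a member of the forbidden set U.
import itertools
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def find_forbidden_subgraph(graphs, forbidden):
    """Return (graph, pattern) for the first enumerated graph containing a
    forbidden subgraph; may loop forever, since the general problem is
    undecidable and non-termination cannot be avoided."""
    for G in graphs:
        for U in forbidden:
            if GraphMatcher(G, U).subgraph_is_monomorphic():
                return G, U
    return None  # reachable only for finite enumerations

# Toy usage: enumerate cycles (all of path-width at most 2) and search for
# a path on four vertices, which every cycle of length >= 4 contains.
cycles = (nx.cycle_graph(n) for n in itertools.count(3))
witness, pattern = find_forbidden_subgraph(cycles, [nx.path_graph(4)])
print(witness.number_of_nodes())  # 4: C4 already contains a P4
```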
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Agricultural intensification has increased substantially in the last century to meet the globally growing demand for food, fodder, and bioenergy; as a result, agricultural cropland has become the largest terrestrial biome globally. Pesticides became a central tool of this intensification strategy, and pesticide application has risen drastically over the last sixty years to secure or increase crop yields. However, pesticides are by design biologically active and known to contaminate non-target ecosystems, thereby adversely affecting their function or structure. Even though ecotoxicological knowledge about probable fate and effects has grown, little remains known about the spatiotemporal occurrence, potential effects, and risk drivers of pesticides on larger, i.e. macro, scales.
Consequently, the thesis gathered primarily pesticide exposure data via meta-analysis and from public monitoring databases to describe (i) detailed risks in aquatic ecosystems, (ii) the underlying risk drivers, (iii) associated spatiotemporal trends, (iv) the effect of land use and land-protection and (v) the protectiveness of regulatory frameworks. First, a meta-analysis of insecticides occurring in US surface waters (n = 5,817, 259 studies) revealed large-scale risks for aquatic ecosystems based on the exceedance of regulatory threshold levels (RTL) and identified high-risk substances, particularly pyrethroids, with increasing application trends (publication I). Following this, spatiotemporal factors driving insecticide risks were identified via model building, demonstrating that toxicity-weighted pesticide use was the primary driver in surface waters, with subsequent model application generating a spatially comprehensive risk assessment for the United States (publication II). The toxicity-weighted pesticide use was subsequently expanded to an ongoing project covering additional species groups and all pesticides used in the US from 1992 to 2016, highlighting a drastic shift of toxic pressures from vertebrates to aquatic invertebrates. Large-scale monitoring data from European surface waters (n > 8.3 million) of 352 organic chemicals identified pesticides as the main class of organic contaminants causing risks in aquatic ecosystems. Additional analyses established links between agricultural intensity and resulting environmental risks for aquatic invertebrates and plants on this macro scale (publication III). Finally, high-resolution monitoring data from Saxony, Germany, provided, for the first time, detailed insights into the occurrence and resulting risks of organic contaminants (primarily pesticides) in protected surface waters of nature conservation areas (publication IV).
In summary, the thesis gathered and used large-scale datasets to analyze the impact of agricultural intensification – and later anthropogenic land use – on ecosystems to reduce knowledge deficits in ecotoxicology on macro scales. Insecticides were shown to be important and spatially extensive agents of impairment to surface water quality and to be directly linked to their use in the respective landscapes. Changes in the pesticide use composition over time shifted environmental risks from vertebrates to other central species groups (e.g. aquatic invertebrates), highlighting a new challenge to the integrity of aquatic environments. The thesis provided novel insights into contaminants' individual risk characteristics, their interaction with various spatiotemporal drivers and their relevance on various macro scales. Overall, a discrepancy remains evident between estimated environmental impacts of pesticides derived during regulatory approval processes contrasted by a posteriori field measurements detailing larger than assumed adverse exposures and effects. This discrepancy led to pesticides being the most impactful chemical stressor for aquatic ecosystems compared to other organic contaminants on a continental scale; a threat that has even increased for some species groups. The extensive use of pesticides has reached levels where even strictly protected surface waters in Germany are regularly exposed adversely, hence threatening conservation areas’ function as ecological refugia. Taken together, the thesis provides new macro-scale evidence regarding the contribution of pesticides (and associated drivers) to large-scale changes in biological systems evidenced over the last decades, underlining their likely contribution to the ongoing freshwater biodiversity crisis globally. Particularly agricultural systems will require substantial changes going forward to protect or reestablish the integrity of aquatic ecosystems and their provision of vital ecological services.
More than 2.4 % of the continental surface area is covered by shallow aquatic systems such as ponds. Despite occupying only a tiny fraction of the earth's surface area, ponds are globally significant sites of carbon cycling. They receive carbon, process it, and emit large amounts of greenhouse gases into the atmosphere, the most potent among them being carbon dioxide (CO2) and methane (CH4). Tube-dwelling macroinvertebrates, such as chironomid larvae (Diptera: Chironomidae), change biogeochemical functions, particularly in shallow aquatic systems. Through bioturbation, involving burrow ventilation and sediment particle reworking, tube-dwelling macroinvertebrates enhance solute exchange between sediment and water, stimulate the benthic microbial community, and regulate organic matter decomposition. This doctoral project integrates aquatic carbon biogeochemistry with the research field of ecology, relating knowledge of biogeochemical reaction dynamics to the application of the mosquito control biocide Bacillus thuringiensis israelensis (Bti), an entomopathogen that kills mosquito larvae but also reduces the abundance of chironomids. The interdisciplinary approach combines field measurements and laboratory experiments. First, an experiment was conducted in 12 outdoor floodplain pond mesocosms (FPMs), where the effect of Bti application on carbon transformations, carbon pools, and carbon fluxes was monitored for one year. Half of the FPMs were Bti-treated and the remaining half were controls. The study revealed that seasonal variations governed changes in carbon transformations, pools, and fluxes. Treated FPMs, for which companion studies reported a 26 % and 41 % reduction in the abundance of emerging merolimnic insects and macroinvertebrates, respectively, were higher CH4 emitters (137 % higher than control mesocosms). The higher CH4 emissions occurred specifically in the shallow zone, where the macroinvertebrate reduction was also significant. In the same treated FPMs, a tendency towards less dissolved organic carbon in porewater (33 % lower than in control mesocosms) was potentially caused by the reduction in bioturbation activities of chironomids, whereas the remaining measured components of the carbon budget were not affected by the treatment with Bti. Second, laboratory microcosm (LM) experiments that excluded environmental constraints were set up to clarify the findings of the FPM experiment. The 15 microcosms were divided into five sets of three, receiving either a standard Bti dose, five times the standard Bti dose, chironomid larvae at low areal density, chironomid larvae at high areal density, or no treatment (control). The findings demonstrated that bioturbation increased CH4 and CO2 efflux and sediment oxygen (O2) consumption, while it did not affect the net production of CH4 and CO2. The negligible effect on net production rates in treatments with chironomids indicates that the increase in emission rates was predominantly caused by bioturbation, which reduced gas accumulation in the sediment. In the absence of chironomids, the application of either dose of Bti led to a markedly higher net production rate of CH4 and CO2 (up to 2.7 times that of controls), due to the high addition of bioavailable carbon through the Bti excipients. However, the sole addition of carbon through the Bti excipients could not account for the high net production rate, suggesting that the addition of Bti triggered a more vigorous carbon metabolism.
Both FPMs and LMs results suggested that the application of Bti may have functional implications on carbon biogeochemistry in affected aquatic systems beyond those mediated by changes in macroinvertebrate communities.
This doctoral dissertation comprises nine published articles covering different methods for ‘Fast, Robust Rigid and Non-Rigid Registration for Globally Consistent 3D Scene and Shape Reconstruction’. Overall, the contributing articles are grouped and discussed in three stages. The first part of the thesis, i.e., chapter 2, explains three novel method classes of rigid point set registration, namely the Gravitational Approach (GA), the Fast Gravitational Approach (FGA), and RPSRNet. GA was introduced as the first physics-based rigid point set registration method. It includes an elegant modeling of rigid-body dynamics using Newtonian mechanics, and it opened many new avenues for pattern-matching tasks other than point set registration. Next, the FGA method, published four years after GA, is presented as an extension that reduces the algorithmic complexity of GA from O(MN) to O(M log N) using a Barnes-Hut tree representation of the point cloud. It also eliminates GA's need for heuristic optimization parameter settings and achieves state-of-the-art alignment accuracy on LiDAR odometry. Finally, RPSRNet presents a deep learning version of FGA with custom convolution layers for hierarchical point feature embedding; it is robust and the fastest among state-of-the-art (SoA) methods for LiDAR data registration. The second part of the thesis, i.e., chapter 3, introduces NRGA as the first physics-based non-rigid point set registration method, which is computationally slow but robust against noisy and partial inputs. NRGA preserves structural consistency as it coherently regularizes the motion of deformable vertices. For articulated hand shape reconstruction, a tailored version of NRGA, Articulated-NRGA, is effective in refining the final hand shape. Collision and penetration avoidance between source and target surfaces are tackled by constrained optimization in NRGA; this setting has improved hand-object interaction reconstruction. The next contribution, the FoldMatch method, remodels the shape deformation by introducing a wrinkle vector field (WVF) for capturing complex clothing and garment details while fitting body models onto 3D scans. Quantitative evaluation of FoldMatch and NRGA shows their effectiveness in geometrically consistent surface modeling and reconstruction tasks. Finally, the third part of the thesis explains globally consistent outdoor scene reconstruction, odometry estimation, and uncertainty-guided pose-graph optimization in a novel LiDAR-based localization and map-building method called Deep Evidential LiDAR Odometry (DELO). This is the first odometry method to use predictive uncertainty modeling in a sensor pose prediction network.
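To illustrate the physics-based idea behind GA (a deliberately simplified 2D toy of our own, with arbitrary step sizes and softening; not the published method, which works in 3D with proper convergence control), each target point can be treated as a mass exerting a gravity-like pull on the source points, and the resulting net force and torque drive a small rigid update:

```python
# Toy 2D gravity-driven rigid alignment step, loosely inspired by the idea
# described above; parameters and simplifications are ours, not GA's.
import numpy as np

def gravitational_step(source, target, dt=0.1, eps=1e-2):
    """Pull every source point towards all target points with a softened
    inverse-square law, then apply the induced net translation and a small
    rotation (from the net torque about the source centroid)."""
    diff = target[None, :, :] - source[:, None, :]      # (M, N, 2)
    dist2 = np.sum(diff**2, axis=-1) + eps              # softened d^2
    forces = (diff / dist2[..., None]).mean(axis=1)     # (M, 2) net pull
    centroid = source.mean(axis=0)
    r = source - centroid
    torque = np.mean(r[:, 0] * forces[:, 1] - r[:, 1] * forces[:, 0])
    theta = dt * torque                                 # small rotation angle
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (source - centroid) @ R.T + centroid + dt * forces.mean(axis=0)

# Usage: a rotated and shifted copy of a point set drifts back towards it.
rng = np.random.default_rng(0)
target = rng.normal(size=(100, 2))
a = 0.3
R0 = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
source = target @ R0.T + np.array([0.5, -0.2])
for _ in range(500):
    source = gravitational_step(source, target)
print(np.mean(np.linalg.norm(source - target, axis=1)))  # typically shrinks
```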
Molecular simulation is an important tool for investigating the behavior of fluids and solids. Nanoscopic processes and physical properties of the material can be studied predictively based on the description of the molecular interactions by force fields. This is used in the present work to tackle engineering questions that are hard to answer with other methods. First, mass transfer at fluid interfaces was investigated on the nanoscopic level. Therefore, two distinct simulation methods were developed and used to systematically investigate the mass transfer in mixtures of simple model fluids, described by the ‘Lennard-Jones truncated and shifted’ (LJTS) potential. The research question was whether the adsorption of components at the interface, which is observed also in many simple fluid mixtures, has an influence on the mass transfer. Such an influence was indeed found in both scenarios. Furthermore, explosions of nanodroplets caused by a spontaneous evaporation of the liquid phase were investigated with non-equilibrium molecular dynamics (NEMD) simulations. In these simulations, the interior of an LJTS droplet was superheated by a local thermostat, so that a vapor bubble nucleated inside of the droplet. Depending on the degree of superheating, different phenomena were observed, ranging from a simple evaporation of the droplet over oscillatory behavior of the bubble to an immediate droplet explosion. For molecular simulations of real mixtures, suitable force fields are needed. In this work, a set of molecular models for the alkali nitrates was developed and systematically compared to experimental data of thermophysical and structural properties of aqueous alkali nitrate solutions from the literature. Lastly, the structure and clustering of 1:1 electrolytes in aqueous solution was investigated for a broad concentration range starting from near infinite dilution up to high supersaturation. Based on the simulation results, an empirical rule was proposed to provide estimates of the solubility of salts with standard molecular dynamics simulations without the need for elaborate calculation schemes or significant additional computational effort.
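For reference (standard textbook definitions, not results of this thesis), the LJTS potential truncates and shifts the Lennard-Jones potential at a cut-off radius, commonly \(r_{\mathrm{c}} = 2.5\,\sigma\):

\[
u_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
u_{\mathrm{LJTS}}(r) =
\begin{cases}
u_{\mathrm{LJ}}(r) - u_{\mathrm{LJ}}(r_{\mathrm{c}}), & r \le r_{\mathrm{c}},\\
0, & r > r_{\mathrm{c}},
\end{cases}
\]

so that the potential is continuous at the cut-off and interactions beyond \(r_{\mathrm{c}}\) vanish.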
Drought is a significant environmental factor that can impair plant growth and development, leading to reduced crop productivity or even plant death. Maintaining sugar distribution from source to sink is crucial for increasing crop production under water limitation conditions. Numerous studies have suggested that nutrient fertilization, especially with potassium (K), can enhance plant growth and yield. To investigate the role of K in long-distance sugar transport under drought stress, we established a soil-based and a hydroponic growth system with varying amounts of potassium supplementation and analyzed the biochemical and molecular responses in Arabidopsis and potato plants under drought stress conditions. Our findings showed that excess potassium fertilization limited sucrose metabolism, leading to lower drought tolerance in Arabidopsis in both growth systems. However, higher potassium supplementation altered sugar relocation and potassium movement, resulting in an increase in starch yield in both potato lines with different sink strength capacities. We also proposed that a low amount of sodium increases Arabidopsis drought tolerance under low-potassium conditions, since a low amount of sodium can improve the control of osmotic potential, leading to more water being retained in plant cells.
Silicon (Si) has received considerable attention recently for its potential in mitigating drought stress, although the effects vary among different plant species. To investigate the mechanism of Si in drought stress tolerance, we supplied monosilicic acid in hydroponic media and then applied PEG8000 to simulate drought stress. Our findings revealed that Si-dependent drought mitigation occurred more in the shoot than in the root of Arabidopsis, and we observed silicon accumulation in the shoot of Arabidopsis. In Si-treated plants, more glucose accumulated in the vacuole, leading to better osmotic potential control under drought stress. RNA sequencing analysis showed that Si altered the activity of sugar transporters and the sugar metabolism process, and increased photosynthesis. However, Si-dependent regulation of sugar transporters showed different responses in potato. Understanding the mechanism of Si in potato requires further study. Overall, our dissertation provides important information for clarifying the mechanism of Si in drought stress, which forms the basis for further investigation.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinction between isotropic and anisotropic structures is defined based on our method. Finally, the usefulness of the method is shown on 3d CT images in three different tasks on a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometry and characterize fiber length and orientation distributions in fiber-reinforced polymers produced by either 3d printing or by injection moulding.
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To test its scale invariance, the Riesz network is trained on a single fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is utilized to construct a scale equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. Mathematical foundations behind this representation are laid out and analyzed. We show that this representation, with four times fewer features than the original scattering networks of Mallat, performs comparably well on texture classification and gives superior performance when dealing with scales outside the training distribution.
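To make the scale equivariance referred to above concrete (these are standard facts about the Riesz transform, not contributions of the thesis): in the Fourier domain, the \(j\)-th component of the Riesz transform acts as

\[
\widehat{\mathcal{R}_j f}(\xi) = -\mathrm{i}\,\frac{\xi_j}{\lvert\xi\rvert}\,\hat{f}(\xi), \qquad j = 1, \dots, d,
\]

and since the multiplier \(-\mathrm{i}\,\xi_j/\lvert\xi\rvert\) is homogeneous of degree zero, \(\mathcal{R}_j\) commutes with dilations: for \(f_s(x) = f(x/s)\) one has \(\mathcal{R}_j f_s = (\mathcal{R}_j f)_s\). This dilation invariance is precisely what lets a Riesz-based network respond consistently to cracks of arbitrary thickness.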
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively (illustrated by the sketch below).
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as exploiting certain properties that are characteristic of embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138 k lines of C code.
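As a small illustration of these two fault models (our sketch; the register width and the affected bits are arbitrary examples), a single-event upset amounts to an XOR bit flip, while a stuck-at fault forces a bit via AND/OR masks:

```python
# Tiny sketch of the two fault models named above, applied to an 8-bit
# register value; the concrete bit positions are arbitrary examples.

def single_event_upset(value: int, bit: int) -> int:
    """Transient fault: flip a single bit (XOR with a one-hot mask)."""
    return value ^ (1 << bit)

def stuck_at(value: int, bit: int, stuck_to: int) -> int:
    """Permanent fault: the bit reads as 0 or 1 regardless of the data."""
    if stuck_to:
        return value | (1 << bit)        # stuck-at-1
    return value & ~(1 << bit) & 0xFF    # stuck-at-0 (8-bit register)

reg = 0b0101_1010
print(f"{single_event_upset(reg, 3):08b}")  # bit 3 flipped once
print(f"{stuck_at(reg, 6, 1):08b}")         # bit 6 always reads 1
```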
Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA participants’ perceived burden, data quantity (i.e., compliance with the AA protocol), and data quality is still surprisingly restricted. The aims of this dissertation were to experimentally manipulate aspects of an AA study’s sampling strategy - sampling frequency (Study 1) and questionnaire length (Study 2) - and to investigate their impact on perceived burden, data quantity, and aspects of data quality in three papers. In Study 1, students (n = 313) received either 3 or 9 questionnaires per day for the first 7 days of the study. In Study 2, students (n = 282) received either a 33- or 82-item questionnaire 3 times a day for 14 days.
Paper 1 showed that a higher sampling frequency (Study 1) led to a higher perceived participant burden, but did not affect other aspects of data quantity and quality. Furthermore, a longer questionnaire (Study 2) did not affect perceived participant burden or data quantity, but did lead to lower within-person variability and a weaker within-person relationship between time-varying variables. Paper 2 investigated the effects of the sampling frequency (Study 1) on careless responding by identifying careless responding indices that could be applied to AA data and by extending the multilevel latent class analysis model to a multigroup multilevel latent class analysis model. Results indicated that a higher sampling frequency did not affect careless responding. Paper 3 investigated the effects of questionnaire length (Study 2) on (the relative impact of) response styles (RS) by extending the item response tree (IRTree) modeling approach to a multilevel data structure. Results indicated that a longer questionnaire led to a greater relative impact of RS.
Although further validation of the results is essential, I hope that future researchers will integrate the results of this dissertation when designing an AA study.
Thermodynamic Modeling of Poorly Specified Mixtures using NMR Fingerprinting and Machine Learning
(2023)
Poorly specified mixtures, i.e., mixtures of unknown or incompletely known composition,
are common in many fields of process engineering. Dealing with such mixtures in process
design is challenging as their properties cannot be described with classical thermodynamic
models, which require a full specification. As a workaround, pseudo-components
can be introduced, which are generally defined using ad-hoc assumptions. In the present
thesis, a new framework is developed for the thermodynamic modeling of such mixtures
using nuclear magnetic resonance (NMR) experiments in combination with machine-learning
(ML) methods. In the framework, a characterization of a mixture in terms of
structural groups (“NMR fingerprint”) is obtained by using the ML concept of support
vector classification. Based on the group-specific fingerprint, quantum-chemical descriptors
of the unknown part of the mixture as well as activity coefficients can already be
predicted. Furthermore, a meaningful definition of pseudo-components is achieved by
clustering the structural groups into pseudo-components with the K-medians algorithm
based on their self-diffusion coefficients measured by pulsed-field gradient (PFG) NMR.
It is demonstrated that the characterization of poorly specified mixtures in terms of
pseudo-components can be combined with several thermodynamic group-contribution
methods. The resulting thermodynamic models were applied to various poorly specified
mixtures and used for solving two typical tasks from conceptual fluid separation process
design: the solvent screening for liquid-liquid extraction processes and the simulation
of open evaporation processes. The predictions with the methods developed here show
very good agreement with the results obtained for the fully specified mixtures.
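As an illustration of the clustering step (a minimal sketch of our own; the diffusion coefficients below are invented, and the thesis' implementation may differ in detail), a one-dimensional K-medians grouping of structural groups by their self-diffusion coefficients could look as follows:

```python
# Minimal 1D K-medians sketch: group structural groups into pseudo-
# components by self-diffusion coefficient (values invented, e.g. in
# units of 1e-9 m^2/s).
import numpy as np

def k_medians_1d(x, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute medians.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([
            np.median(x[labels == j]) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Hypothetical self-diffusion coefficients of six structural groups:
D = np.array([0.52, 0.55, 0.58, 1.21, 1.25, 2.90])
labels, centers = k_medians_1d(D, k=3)
print(labels, centers)  # groups with similar mobility share a cluster
```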
Internal waves are oscillating disturbances within a stable density-stratified fluid. In stratified water basins, these waves have been detected and pointed out as one of the most important processes of water movement and vertical mixing. A fraction of the wind momentum and energy that crosses the water surface is responsible for generating large standing internal waves, also called basin-scale internal seiches, in stratified basins. Despite the huge number of publications describing different mechanisms that can influence the dissipation rates and accelerate the wave damping of internal seiches in thermally stratified lakes and reservoirs, many details of their application to field observations are site specific and do not evaluate the effects in a combined way. This research paid particular attention to some mechanisms that may contribute to inhibiting the generation of internal seiches, through field measurements and numerical simulations. Our results underline the importance of bathymetry on energy dissipation, indicating that a gently sloping bottom may act as a primary mechanism to inhibit the formation of internal seiches. The basin shapes (reservoir bends) and self-induced mixing near the wave crest act as secondary mechanisms to extract energy from upwelling events, which is responsible for triggering internal seiches in thermally stratified lakes. Numerical simulations indicate that a higher amount of energy is transferred from the wind to the internal seiche for an increasing deviation of the stratification from a two-layer structure, suggesting that the stratification profile is not responsible for inhibiting the occurrence of basin-scale internal waves, but only for modifying its structure, favoring the formation of internal waves with higher vertical modes. The outcome of this study may be of great relevance in describing the biogeochemical cycle in lakes and reservoirs, since each mechanism may have different trigger effects on the cycle of nutrients and other elements in thermally stratified lakes.
Numerical study on the kinematic response of piled foundations to a stationary or moving load
(2023)
The present numerical study focuses on the problem of dynamic interaction of piled foundations under harmonic excitation at high frequencies relevant for the vibration protection practice. The finite-element programs Plaxis (2D & 3D) and Abaqus are employed for time- and frequency-domain analyses, respectively.
As a first step, dynamic impedances of pile groups, piled rafts and embedded footings are derived for all oscillation modes in order to gain insight into the problem of inertial loading.
Emphasis is placed on the kinematic response of single piles, pile groups and piled rafts to a wave field emanating from a distant stationary or moving harmonic vertical point load acting on the surface of the soil. Transfer functions, which are ratios relating the response of the foundation to that of the free-field, quantify the kinematic interaction. Only the vertical component of the response is assessed, as it is the most critical for the selected excitation. It is shown that a stationary harmonic load is a good approximation for a moving harmonic load; this is true for a travelling speed of the load that is relatively low in comparison with the Rayleigh wave velocity in the soil, which is quite common in engineering practice. Analogously, a static load is a good approximation of a moving load of constant magnitude. Moreover, analytical solutions are presented for single pile and pile group response under Rayleigh wave excitation, which can also be employed in the near-field, as shown herein.
The extension of piled foundations by additional rows against the wave propagation direction is examined under the scope of vibration protection. Indeed, for a considerable frequency range, the further addition of pile rows to a piled foundation has a favorable effect on the reduction of the vibration level calculated at the furthest-back pile row or at the free-field behind the foundation. This is, however, not valid, as the excitation frequency increases further, and the interplay between the piles becomes more complex. On the other hand, the extension of the piled foundation by additional pile columns parallel to the wave propagation direction has a positive effect at high frequencies.
The accuracy of the results is assessed by verification against rigorous solutions. The importance of key aspects in finite-element modelling is also highlighted.
Ecotoxicology is the science that researches effects of toxicants on biological entities. Following the famous toxicological principle formulated in 1538 by von Hohenheim, known as Paracelsus, according to which essentially all chemicals are able to act as toxicants. Unlike human toxicology, which focuses on toxic effects on individuals and populations of one species, Homo sapiens, ecotoxicology is not constrained in its scope of biological entities. It is interested in toxic effects on individuals and populations of any species (excluding humans), and on communities and entire ecosystems (Walker et al., 2012; Köhler & Triebskorn, 2013; Newman 2014). One example of where the ecological foundation of ecotoxicology manifests itself is indirect effects, which are effects on biological entities that are not directly caused by chemicals but instead are mediated by ecological interactions and environmental conditions (Walker et al., 2012). With this large scope, ecotoxicology is an inter- and multidisciplinary science that links chemical, biological and environmental knowledge.
With millions of species and at least 100,000 chemicals that potentially interact with them in the environment (Wang et al., 2021), ecotoxicology has a large ground to cover. Among these sheer numbers, there are some groups that are of special importance regarding their potential environmental impact. Pesticides are one group of chemicals that have a large, if not the largest, ecotoxicological relevance: they are toxic for biological entities, sometimes at very low concentrations, and they are used in large amounts and globally (Bernhardt et al., 2017). The high toxicity of pesticides, much higher than that of most other groups of chemicals, is a result of their intended use: they are designed to reduce detrimental effects of, e.g., insects, plants or fungi on agriculture by controlling respective populations, often, and in the sense of their Latin name, through induced lethality (Walker et al., 2012). However, they do not act specifically enough to be toxic only to the intended species that are considered pests, but also show toxicity towards species living in habitats next to pesticide-treated areas. The widespread agricultural use of pesticides, on the other hand, is a result of their work- and cost-efficiency for securing yields, but also results in exposure of ecosystems at a global scale (Sharma et al., 2019). In summary, pesticides can be abstractly seen as toxicity intentionally applied to agricultural areas, unintentionally also exposing organisms in non-agricultural areas to toxicity.
The risks of pesticide use for ecosystems have led major jurisdictions, like the United States of America (US) and the European Union (EU), to enact elaborate regulatory processes that require a registration of pesticides prior to use (EFSA, 2013; EPA, 2011; Stehle & Schulz, 2015b). A by-product of these registration processes is regulatory threshold levels (RTLs), which can be used for scientific risk analysis outside the regulatory process (Stehle & Schulz, 2015a). The RTL for an organism group is essentially derived from the most sensitive effect concentrations found in standardized toxicity tests with species representative of the group, multiplied by a safety factor, although specifics differ among regulatory processes. Conceptually, they mark the threshold that separates environmental concentrations associated with acceptable risk (concentrations below the RTL) from concentrations associated with unacceptable risk (concentrations above the RTL).
Due to the high degree of procedural standardization in the derivation of RTLs, they have been found to be a good measure for making the toxicities of different pesticides comparable, and they were employed in a series of studies to characterize environmental pesticide concentrations (e.g., Stehle & Schulz, 2015a; Stehle et al., 2018; Wolfram et al., 2018; Wolfram et al., 2021; Schulz et al., 2021, also in Appendix B; Bub et al., 2023, also in Appendix C). RTLs reflect, for instance, that insecticides show regulatory unacceptable concentrations towards fish between 3 ng/L (deltamethrin, a pyrethroid) and 110 mg/L (imidacloprid, a neonicotinoid), a range of more than seven orders of magnitude. At the same time, imidacloprid is very toxic to pollinators (RTL of 1.52 ng/organism), while the RTLs of more than 95 % of all insecticides range up to 1.6 mg/organism, indicating toxicities up to six orders of magnitude lower than that of imidacloprid.
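Schematically (our simplified rendering; as noted, the specifics differ among regulatory processes), the derivation can be written as

\[
\mathrm{RTL} = \mathrm{SF} \cdot \min_j \mathrm{EC}_j, \qquad \mathrm{SF} \le 1,
\]

where the \(\mathrm{EC}_j\) are the effect concentrations of the representative test species and \(\mathrm{SF}\) is the safety factor. As a purely illustrative example, an effect concentration of 300 ng/L combined with a safety factor of 1/100 would yield an RTL of 3 ng/L.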
At large scales, ecotoxicology deals with pesticide impacts on a national (e.g., Bub et al., 2023; Douglas & Tooker, 2015; Hallmann et al., 2014; Schulz et al., 2021; Stehle et al., 2019; Wolfram et al., 2018), continental (Wolfram et al., 2021) or global scale (Stehle & Schulz, 2015a; Stehle et al., 2018). This maximization of the considered scale is in line with the general tendency of ecotoxicology towards larger scales, but generally requires new methodological and conceptual approaches. Historically, individual chemicals and groups of chemicals have been identified that, owing to their immense release into the environment, mark main disruptors of processes in the Earth system, like greenhouse gases for climate change, chlorofluorocarbons for the depletion of the atmosphere’s ozone layer, dichlorodiphenyltrichloroethane (DDT) and other organochlorides for bioaccumulation in food webs and declines in bird populations, etc. For other phenomena, like declines in biodiversity or in the numbers of insect species (Outhwaite et al., 2020; Seibold et al., 2019; Vörösmarty et al., 2010), the active part of chemical pollution is understood to a much lesser extent. There are indications that pesticides may play a major role here.
This dissertation contributes to the research of large-scale risks of pesticide use, and of large-scale ecotoxicology in general, in several ways (Figure 1). In Chapter 2, it presents a labeled property graph, the MAGIC graph (Meta-Analysis of the Global Impact of Chemicals graph), as a solution to the methodological issues that arise when increasing amounts of data from more and more sources are combined for analysis (Bub et al., 2019; also, in Appendix A). The MAGIC graph is able to link chemical information from different sources, even if these sources use different nomenclatures. This enables analyses that incorporate toxicological data, like thousands of RTLs (for different organism groups and jurisdictions) for hundreds of pesticides, and information on pesticide use and chemical classes. The MAGIC graph is implemented in a way that allows it to be organically extended by additional chemical, biological and environmental data, and eventually scaled to all chemicals of environmental interest.
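To convey the labeled-property-graph idea (a schematic of our own; node labels, identifiers, edge types, and values are placeholders, not the actual MAGIC graph schema), linking one substance across two nomenclatures and attaching an RTL might look like this:

```python
# Schematic labeled property graph: one substance node linked to two
# identifier nodes from different nomenclatures plus one RTL node.
# All identifiers and values are placeholders.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("substance:exampleInsecticide", label="Substance", chem_class="insecticide")
g.add_node("cas:0000-00-0", label="Identifier", scheme="CAS")
g.add_node("reg:PC-000000", label="Identifier", scheme="regulatory code")
g.add_node("rtl:fish:exampleInsecticide", label="RTL", group="fish", value_ng_per_L=3.0)

g.add_edge("cas:0000-00-0", "substance:exampleInsecticide", key="IDENTIFIES")
g.add_edge("reg:PC-000000", "substance:exampleInsecticide", key="IDENTIFIES")
g.add_edge("substance:exampleInsecticide", "rtl:fish:exampleInsecticide", key="HAS_RTL")

# A query arriving under either nomenclature resolves to the same node,
# so toxicity data can be joined across heterogeneous sources:
substance = next(iter(g.successors("reg:PC-000000")))
rtls = [v for _, v, k in g.out_edges(substance, keys=True) if k == "HAS_RTL"]
print(substance, rtls)
```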
Chapter 3 shows how the combination of the linked pesticide data with a systemic consideration of pesticide use supports the interpretation of pesticide risks in the US (Schulz et al., 2021; also in Appendix B). This systemic approach includes a new measure, the total applied toxicity (TAT), which integrates applied pesticide amounts and pesticide toxicities, and the consideration of pesticide use as a complex system whose state and evolution can be visualized in phase-space plots. The combination of the described methods and concepts led to a novel view of pesticide risks in the US and can provide a framework for future ecotoxicological research at large scales.
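One plausible formalization consistent with this description (our notation; see Schulz et al., 2021 for the exact definition) is

\[
\mathrm{TAT} = \sum_i \frac{A_i}{T_i},
\]

where \(A_i\) is the amount of pesticide \(i\) applied and \(T_i\) a toxicity threshold for the organism group under consideration, so that large applications of highly toxic compounds (small \(T_i\)) dominate the sum.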
Chapter 4 presents the results of applying the methods and concepts of the US pesticide risk analysis to Germany (Bub et al., 2023; also in Appendix C). A pesticide risk analysis of Germany is of special importance in the context of the EU’s goal to drastically reduce pesticide risks (European Commission, 2020), with Germany being one of the important agricultural producers in the EU. A comparison of the results for Germany with those for the US also allowed evaluating the impact of scale and of differing RTLs, information that can help other large-scale ecotoxicological assessments. Chapter 5 adds a conclusion and an outlook.
This work aims to study textile structures in the framework of linear elasticity to understand how
the structure and material parameters influence the macroscopic homogenized model. More
precisely, we are interested in how the textile design parameters, such as the ratio between
fibers’ distance and cross-section width, the strength of the contact sliding between yarns,
and the partial clamp on the textile boundaries determine the phenomena that one can see in
shear experiments with textiles. Among these phenomena: the warp and weft yarns first change their
in-plane angles and, once some critical shear angle is reached, the textile plate comes out
of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left
and bottom boundary, made of long thin fibers that cross each other in a periodic pattern.
The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption,
together with the partial clamp, adds new levels of complexity to the problem due to
the anisotropy in the yarn’s behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with respect to the yarns' thickness r and the distance e between them, parameters that are asymptotically related. The homogenization and dimension reduction are done via the unfolding method, which separates the macroscopic scale from the periodicity cell. In addition to the homogenization, a dimension reduction from a 3D to a 2D problem is applied. The adaptation of the classical unfolding results to the anisotropic context and to lattice grids (which are constructed starting from the center lines of the rods crossing each other) provides the main tools we developed to tackle this type of model. They represent the first part of the thesis and are published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
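For orientation, the periodic unfolding operator at the heart of the method has, in its classical form, the shape

\[ \mathcal{T}_{\varepsilon}(u)(x,y) = u\Big(\varepsilon\Big\lfloor \frac{x}{\varepsilon}\Big\rfloor + \varepsilon y\Big), \qquad x \in \Omega,\ y \in Y, \]

with \(\varepsilon\) the period of the structure (the inter-yarn distance e above) and \(Y\) the unit periodicity cell; it decouples the macroscopic variable \(x\) from the cell variable \(y\), so that two-scale limits can be studied on the fixed cell \(Y\). The thesis adapts this operator to the anisotropic setting and to lattice grids.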
Given the parameters mentioned above, we then proceed to classify different textile problems, incorporating the results from other works on the topic and thoroughly investigating several new cases. From this study, we draw conclusions and give a mathematical explanation concerning the expected approximation of the displacements, the expected solvability of the limit problems, and the phenomena mentioned above. The results can be found in "Asymptotic behavior for textiles with loose contact", which has been recently submitted.
Symplectic linear quotient singularities belong to the class of symplectic singularities introduced by Beauville in 2000.
They are linear quotients by a group preserving a symplectic form on the vector space and are necessarily singular by a classical theorem of Chevalley-Serre-Shephard-Todd.
We study \(\mathbb Q\)-factorial terminalizations of such quotient singularities, that is, crepant partial resolutions that are allowed to have mild singularities.
By a theorem of Verbitsky, the only symplectic linear quotients that can possibly admit a smooth \(\mathbb Q\)-factorial terminalization are those by symplectic reflection groups.
A smooth \(\mathbb Q\)-factorial terminalization is in this context referred to as a symplectic resolution, and over the past two decades there has been an ongoing effort to classify exactly which symplectic reflection groups give rise to quotients that admit symplectic resolutions.
We reduce this classification to finitely many, precisely 45, open cases by proving that for almost all quotients by symplectically primitive symplectic reflection groups no such resolution exists.
Concentrating on the groups themselves, we prove that a parabolic subgroup of a symplectic reflection group is generated by symplectic reflections as well.
This is a direct analogue of a theorem of Steinberg for complex reflection groups.
We further study divisor class groups of \(\mathbb Q\)-factorial terminalizations of linear quotients by finite subgroups \(G\) of the special linear group and prove that such a class group is completely controlled by the symplectic reflections, or more generally the junior elements, contained in \(G\).
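For context: if \(g \in G\) has order \(r\) and acts on \(V = \mathbb C^n\) with eigenvalues \(e^{2\pi i a_1/r}, \dots, e^{2\pi i a_n/r}\), where \(0 \le a_j < r\), its age is

\[ \operatorname{age}(g) = \frac{1}{r}\sum_{j=1}^{n} a_j, \]

and \(g\) is called junior if \(\operatorname{age}(g) = 1\); in particular, every symplectic reflection (with nontrivial eigenvalues \(e^{2\pi i a/r}\) and \(e^{2\pi i (r-a)/r}\)) is junior.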
We finally discuss our implementation of an algorithm by Yamagishi for the computation of the Cox ring of a \(\mathbb Q\)-factorial terminalization of a linear quotient in the computer algebra system OSCAR.
We use this algorithm to construct a generating system of the Cox ring corresponding to the quotient by a dihedral group of order \(2d\) with \(d\) odd acting by symplectic reflections.
Although our argument follows the algorithm, the proof does not logically depend on computer calculations.
We are able to derive the \(\mathbb Q\)-factorial terminalization itself from the Cox ring in this case.
Phycobilisomes (PBS) are the major light-harvesting complexes for the majority of cyanobacteria
and allow these organisms to absorb in the so-called green gap. They consist of smaller units called
phycobiliproteins (PBPs), which are composed of an α- and a β-subunit with covalently bound
linear tetrapyrroles (phycobilins). The latter are attached to the apo-PBPs by phycobiliprotein
lyases. Interestingly, cyanobacteria of the genus Prochlorococcus lack complete PBS and instead
use prochlorophyte chlorophyll-binding proteins (Pcbs), which effectively utilize the energy of the
blue light region. The low-light-adapted (LL) strain Prochlorococcus marinus SS120 has a single
PBP, phycoerythrin-III (PE-III). It has been postulated that PE-III is chromophorylated with the phycobilins phycourobilin (PUB) and phycoerythrobilin (PEB) in a 3:1 ratio. However, the function of PE-III remains unclear so far; both a light-harvesting and a photoreceptor function are under discussion.
The main goal of this work was to characterize the assembly of PE-III and thus the function of the
six putative phycobiliprotein lyases of P. marinus SS120. Previous work found that the individual
lyases could not be produced in soluble form, so we switched to a dual pDuet™ plasmid system in
E. coli, which was successfully established. Investigation of the binding of PEB to apo-PE
revealed that the CpeS lyase specifically chromophorylated Cys82 with 3Z-PEB. Unfortunately,
additional chromophorylation could not be observed using the pDuet system. Therefore, in a
second part of the work, the entire PE gene cluster from P. marinus SS120 was to be introduced
into E. coli and expressed. Although the gene cluster was successfully transcribed within E. coli,
no translation was observed, possibly due to incompatible translation initiation between
Prochlorococcus and E. coli. The introduction of a mini PE cluster (CpeAB) into the
cyanobacterium Synechococcus sp. PCC 7002 was also successfully performed, in which case
production of CpeB but not CpeA from Prochlorococcus was detected. Recombinant CpeB was
also detected together with intrinsic PBPs in Synechococcus sp. PCC 7002, indicating structural similarity and incorporation into the PBS of Synechococcus sp. PCC 7002. Overall, the obtained results suggest that a
cyanobacterial host is a good option for studies on the assembly of PE-III from P. marinus and,
based on this, future work could aim at generating an artificial operon using synthetic biology to
achieve efficient translation of all genes.
Tropical reservoirs are recognized as globally important sources of greenhouse gases (GHG), yet tropical mountainous areas with intense hydroelectric development have been poorly studied. The objective of this study is to understand GHG dynamics in tropical mountain reservoirs. Data on seasonal and diurnal GHG dynamics were collected during six field campaigns in the Porce III reservoir in the Colombian Andes; they evidenced the importance of oxic CH4 production for the variability of dissolved gas at the surface, as well as of water-level variation as a factor influencing GHG fluxes on a seasonal scale. The CO2 flux at the reservoir's water-atmosphere interface was monitored with a high-resolution technique over periods of several weeks; from these data, the importance of primary productivity in the diurnal cycling of the CO2 flux was inferred, with the reservoir alternating between sink and source, and synoptic-scale CO2 flux pulses were observed as a consequence of simultaneously increased surface concentrations and high wind speeds. In laboratory experiments, a relationship was found between rain rate, turbulent kinetic energy dissipation rate, and gas transfer rate, contributing to the modeling of this phenomenon with applicability to inland waters. Overall, the results obtained contribute to the understanding of GHG dynamics in eutrophic tropical reservoirs.
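A common way to express the link between dissipation and gas exchange (a hedged sketch; the fitted model in the study may differ) is the small-eddy formulation

\[ k \;\propto\; (\varepsilon\,\nu)^{1/4}\, Sc^{-1/2}, \]

where \(k\) is the gas transfer velocity, \(\varepsilon\) the turbulent kinetic energy dissipation rate, \(\nu\) the kinematic viscosity of water, and \(Sc\) the Schmidt number of the gas; rain then enters through its contribution to near-surface \(\varepsilon\).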
Solving probabilistic-robust optimization problems using methods from semi-infinite optimization
(2023)
Optimization under uncertainty is a field of mathematics that is strongly inspired by real-world problems. To handle uncertainties, several models have been developed. One of these is the probust model, where a combination of probabilistic and worst-case uncertainty is considered. So far, only problem instances with a special structure could be dealt with. In this thesis, we introduce solution techniques applicable to any probust optimization problem. On the one hand, we create upper bounds for the solution value by solving a sequence of chance constrained optimization problems. These bounds are based on discretization schemes inspired by semi-infinite optimization. On the other hand, we create lower bounds by solving a sequence of set-approximation problems, in which we substitute the original event set by an appropriate family of sets. We examine the performance of the corresponding algorithms on simple packing problems for which we can provide the probust solution analytically. Afterwards, we solve a water reservoir problem and a distillation problem and compare the probust solutions with solutions arising from other uncertainty models.
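In generic notation (assumed here for illustration, not taken verbatim from the thesis), a probust program couples a probability level \(p\) with a worst case over an uncertainty set \(Y\):

\[ \min_{x} f(x) \quad \text{s.t.} \quad \mathbb{P}\big(\{\xi : g(x,y,\xi) \le 0 \ \text{for all } y \in Y\}\big) \ge p. \]

Replacing the semi-infinite index set \(Y\) by finite subsets turns each subproblem into a chance constrained program, while replacing the event set by a simpler family of sets yields the set-approximation problems mentioned above.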
The area of the Baiturrahman Grand Mosque in Banda Aceh, Indonesia, is a trade and service area and, at the same time, a historical site with a rich heritage. Nevertheless, only a few people walk along the corridors of the town. People prefer driving to commute within the area and stop directly at their destination; some walk, but only for 20 to 30 meters. At the same time, the number of motor vehicles is growing significantly: a 2016 survey showed that 77% of 3,600 respondents travel by motorcycle for their daily trips, and traffic jams appear during rush hours.
A walkability concept is one approach to this problem because it offers social, economic, and environmental benefits. Before analyzing the case study of Banda Aceh, the author established a definition of walkability for the context of this research by comparing journal articles from Indonesia, Malaysia, and Thailand and examining how researchers in these three countries define walkability. The researchers describe it in three ways: by creating or adapting a definition, by building variables, or by first defining the elements that shape it. After building a definition, the author selected the parameters most often used by these researchers; the chosen parameters became the variables for evaluating the site conditions in the case study.
The absence of pedestrians and the trend toward motor vehicle use in the case study area are a time bomb that will endanger human survival in the future. To address this problem, the research investigates three aspects: the people, the physical environment, and policy. Structured questionnaires were distributed across the research site to learn why people do not walk. Observations on the research site gave a clear picture of the condition of the pedestrian system and its physical environment. For a deeper understanding, the author studied the official planning documents related to city spatial plans and pedestrian development.
Kaiserslautern in Germany serves as a comparative case in this research because it is one of the best-practice examples of pedestrian development. The local government has been building the pedestrian zone system since the 1960s, and it became fully successful 38 years after construction began. Moreover, the city has two planning tools for transport. Firstly, the Mobilitätsplan Klima+ 2030 provides information about people's mobility, standards, principles of transport development, and strategic guidelines for traffic development. Secondly, the Nahverkehrsplan Stadt Kaiserslautern covers service and trip performance, minimum standards, connection reliability, and the development of the local transportation network, including the various investment steps.
The questionnaire results show that two-thirds of the respondents who visited the old city center refrained from walking for personal reasons and because of the weather, largely because most of them own motor vehicles. Meanwhile, there are obstacles and damaged sections along the pedestrian lanes; the barriers include broken lane surfaces, traders' products, street vendors, street cafés, and plants. Nonetheless, Banda Aceh has a plan for pedestrian system development in its city spatial plan: the document designates four segments for pedestrian lane development.
This research adds knowledge to the field of urban pedestrian development. It can inform the study and planning of pedestrian system development in cities facing similar problems. Moreover, it helps promote healthy, sustainable towns that can protect people and the environment from pollution in the future.
The research deals with a question about architecture and its design strategies, combining historical information and digital tools. Design strategies are historically defined; they rely on geometry, context, building technologies, and other factors. The study of architecture's own history, particularly at the verge of technological advancements such as the introduction of new materials or tools, may shed some light on how to internalize digital tools like parametric design and digital fabrication.
From industrial fault detection to medical image analysis or financial fraud prevention: anomaly detection—the task of identifying data points that show significant deviations from the majority of data—is critical in industrial and technological applications. For efficient and effective anomaly detection, a rich set of semantic features must be extracted automatically from the complex data. For example, many recent advances in image anomaly detection are based on self-supervised learning, which learns rich features from large amounts of unlabeled complex image data by exploiting data augmentations. For image data, predefined transformations such as rotations are used to generate varying views of the data. Unfortunately, for data other than images, such as time series, tabular data, graphs, or text, it is unclear which transformations are suitable. This becomes an obstacle to successful self-supervised anomaly detection on other data types.
This thesis proposes Neural Transformation Learning, a self-supervised anomaly detection method that is applicable to general data types. In contrast to previous methods relying on hand-crafted transformations, neural transformation learning learns the transformations from data and uses them for detection. The key ingredient is a novel objective that encourages learning diverse transformations while preserving the relevant semantic content of the data. We show theoretically and empirically that it is better suited for transformation learning than existing objectives.
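A minimal sketch of an objective in this spirit (our illustration in PyTorch, not the thesis' exact formulation): each learned transformation of a sample should stay close to the original sample's embedding while differing from the other transformations, which can be scored contrastively.

    import torch
    import torch.nn.functional as F

    def transformation_learning_loss(z_orig, z_views, temperature=0.1):
        """z_orig: (B, D) embeddings of the untransformed samples.
        z_views: (B, K, D) embeddings of K learned transformations per sample."""
        z_orig = F.normalize(z_orig, dim=-1)
        z_views = F.normalize(z_views, dim=-1)
        K = z_views.shape[1]
        # Similarity of each transformed view to its original sample: (B, K).
        sim_orig = torch.einsum('bd,bkd->bk', z_orig, z_views) / temperature
        # Pairwise similarities among the transformed views: (B, K, K).
        sim_views = torch.einsum('bkd,bld->bkl', z_views, z_views) / temperature
        # Exclude each view's similarity to itself.
        mask = torch.eye(K, dtype=torch.bool).unsqueeze(0)
        sim_views = sim_views.masked_fill(mask, float('-inf'))
        # Each view should be close to the original (numerator) and far from
        # the other views (denominator): semantics plus diversity.
        logits = torch.cat([sim_orig.unsqueeze(-1), sim_views], dim=-1)
        log_prob = sim_orig - torch.logsumexp(logits, dim=-1)
        return -log_prob.mean()

    # Example with random embeddings: 8 samples, 4 transformations, dimension 32.
    loss = transformation_learning_loss(torch.randn(8, 32), torch.randn(8, 4, 32))

Minimizing a loss of this kind pushes the learned transformations to preserve sample identity while remaining mutually distinguishable.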
We also introduce extensions of neural transformation learning for anomaly detection within time series and for graph-level anomaly detection. The extensions combine transformation learning with other learning paradigms to incorporate vital prior knowledge about time series and graph data. Moreover, we propose a general training strategy for deep anomaly detection with contaminated data: the idea is to infer the unlabeled anomalies and to use them for updating the parameters in an alternating fashion. In setups where expert feedback is available, we present a diverse querying strategy based on the seeding algorithm of K-means++ for active anomaly detection.
Our extensive experiments and analysis demonstrate that neural transformation learning achieves remarkable and robust anomaly detection performance on various data types. Finally, we outline specific paths for future research.
Ecosystems are interconnected through the exchange of resources known as subsidies. Subsidies have the potential to affect the receiving ecosystem, altering its productivity and trophic cascades. The boundary between aquatic and terrestrial ecosystems provides a clear distinction between aquatic and terrestrial organisms and is a particularly interesting location for studying resource subsidies. Process-based models can aid in predicting the effects of anthropogenic stressors on food webs and in understanding the functioning of meta-ecosystems. The goal of this thesis is to contribute to the development of theories on how changes in subsidies affect recipient ecosystems, using the aquatic-terrestrial interface as a case study. In this thesis, a review of process-based food web models applied to the aquatic-terrestrial interface (aquatic-terrestrial models) and of theoretical meta-ecosystem models (theoretical models) was carried out (chapter 2). The results show that these models have enhanced our understanding of how terrestrial subsidies affect aquatic ecosystems, and the general understanding of how subsidies affect the stability and functions of meta-ecosystems was also improved. However, existing aquatic-terrestrial models focused primarily on how subsidies from terrestrial ecosystems affect aquatic ecosystems, with none considering reciprocal flows. Furthermore, the quality characteristics of subsidies were not taken into account, despite potential differences from alternative local resources. Therefore, chapters 3 and 4 developed theories using terrestrial ecosystems with aquatic subsidies as a case study. Chapter 3 focused on how changes in subsidy quality affect the recipient ecosystem and hypothesized that changes in subsidy quality have a cascading effect on the recipient ecosystem (subsidy quality hypothesis). However, the model predictions were most sensitive to the input rate of inorganic nutrients in the recipient ecosystem, indicating that ecosystems are controlled by both top-down (TD) and bottom-up (BU) processes. Chapter 4 shows that the TD and BU processes of ecosystems interact antagonistically. The generated theories can be integrated into empirical research by testing their predictions and assumptions, using the model equations, and adopting the framework. This thesis improves our understanding of the impacts of subsidies on recipient ecosystems. Future meta-ecosystem models may consider the cross-ecosystem flow of information to further enhance our understanding of meta-ecosystems. Additionally, aquatic-terrestrial models developed to predict algal blooms may consider trait-based approaches to improve predictions.
Light is an essential aspect of daily life, exerting a profound influence on various physiological and behavioral processes, including circadian rhythms, alertness, cognition, mood, and behavior. Technological advances, particularly the widespread adoption of light-emitting diodes (LEDs), have significantly accelerated the impact of lighting on the human experience. With the increasing global accessibility to electric and modern lighting systems, there is a pressing need to scientifically investigate the human-centered effects of lighting for the billions of people worldwide who encounter natural and electric lighting in their daily lives. Extensive interdisciplinary research across fields such as physics, engineering, psychology, medicine, business administration, and architecture has explored the biological and psychological effects of lighting, underscoring the immense potential for further advancements in this domain. Notably, innovative lighting technologies and strategies hold tremendous promise in enhancing human health, performance, and overall well-being.
Beyond physical spaces, three-dimensional virtual environments, including metaverse platforms, are becoming increasingly important. Simulated lighting in virtual spaces can have visual and non-visual effects on users. As technological progress and digitalization extend globally, more individuals will be exposed to virtual lighting scenarios. Consequently, exploring the human-centered lighting effects in virtual environments offers a compelling opportunity to improve the quality of user experiences. This thesis demonstrates the adaptability of established measurement methods from physical illumination and perception research for virtual environments.
This thesis comprises three parts. The first part reviews the current state of research on lighting and its influences on humans, examines research methods in lighting research, and identifies research gaps. The second part investigates the effects of lighting on complex emotional and behavioral constructs, specifically conflict handling. Elaborate laboratory experiments explore lighting as an independent variable, including realistic correlated color temperature (CCT) levels and enhanced CCT changes. Statistical analyses provide in-depth examination and critical discussion of the effects. The third part explores lighting in virtual spaces, considering literature, methodological approaches, and challenges. Two studies investigate visual and non-visual effects, and preferences in virtual environment design. Comparative analysis of the data yields implications for research and practice, including the interdisciplinary perspective of a novel approach called human-centric virtual lighting (HCVL).
In conclusion, this thesis comprehensively explores the impact of lighting on the human experience in both physical spaces and virtual environments. By addressing research gaps and employing contemporary methodologies, the findings contribute to our understanding of the effects of lighting on humans. Furthermore, the implications for research and practice offer valuable insights for the development of innovative lighting technologies and strategies aimed at enhancing the well-being and experiences of individuals worldwide. This work highlights the relevance of interdisciplinary research involving fields such as architecture, business management, event management, computer science, design, engineering, ergonomics, lighting research, medicine, physics, and psychology in advancing our understanding of visual and non-visual lighting effects.
Interactions between flow hydrodynamics and biofilm attributes and functioning in stream ecosystems
(2023)
Biofilms constitute an integral part of freshwater ecosystems and are central to regulating essential stream biogeochemical functions, such as nutrient uptake and metabolism. Understanding the environmental factors that dictate the composition of biofilm communities and their role in whole-system nutrient cycling remains challenging, given the large spatial and temporal variability of biofilm communities. Pristine mountain streams exhibit a heterogeneous streambed ranging from boulders to sand, provoking high spatiotemporal flow variability. Our current knowledge of the interactions between flow hydrodynamics and biofilm attributes stems from mesocosm studies, which are inherently limited in environmental realism. Moreover, the mechanism linking flow hydrodynamics to microbial biodiversity and ecosystem functioning has not yet been studied. My thesis aims to link streambed heterogeneity and the associated development of the flow field to biofilm attributes and nitrogen uptake based on a multidisciplinary field approach. It integrates several spatial and temporal scales ranging from millimeter-sized spots to stream reaches and from milliseconds to minutes (i.e., the hydraulic scale of velocity fluctuations), up to days, months and years (i.e., the hydrological scale of flow fluctuations). I demonstrate that the spatial niche variability of flow hydrodynamics was an essential driver of biofilm community composition, diversity and morphology, in line with the habitat heterogeneity hypothesis initially formulated for terrestrial ecosystems. Furthermore, hydraulic mass transfer associated with flow diversity and biofilm biomass determined biofilm areal nitrogen uptake at scales ranging from spots to the stream reach. At the whole-ecosystem level, flow diversity determined the quantitative role of biofilms compared to other nitrogen uptake compartments by sorting them according to prevailing flow conditions. The magnitude of the effects depended on the ambient nutrient background and the season, suggesting a hierarchy of the environmental controls on biofilms. In summary, my interdisciplinary research provided a mechanistic understanding of how hydromorphological diversity determines the diversity, morphology, and functional role of biofilms in streams. By improving the understanding of these relationships, my research improves our ability to predict and scale measurements of important stream biogeochemical functions. Moreover, it helps to face the challenges imposed by environmental changes and biodiversity loss.
Semi-structured data is a common data format in many domains.
It is characterized by a hierarchical structure and a schema that is not fixed.
Efficient and scalable processing of this data is therefore challenging, as many existing indexing and processing techniques are not well-suited for this data format.
This dissertation presents a novel approach to processing large JSON datasets.
We describe a new data processor, JODA, that is designed to process semi-structured data by using all available computing resources and state-of-the-art techniques.
Using a custom query language and a vertically scaling pipeline query execution engine, JODA can process large datasets with high throughput.
We further optimize JODA using delta trees, a novel technique for iterative query workloads that succinctly represents the changes between two documents.
This allows us to process iterative and exploratory queries efficiently.
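As a conceptual sketch of the delta idea (flat change records in Python, for illustration only; JODA's delta trees are a succinct tree-structured representation):

    def json_delta(old, new, path=""):
        """Return (path, old_value, new_value) records for all changed fields."""
        if isinstance(old, dict) and isinstance(new, dict):
            changes = []
            for key in old.keys() | new.keys():
                sub = f"{path}/{key}"
                if key not in old:
                    changes.append((sub, None, new[key]))
                elif key not in new:
                    changes.append((sub, old[key], None))
                else:
                    changes.extend(json_delta(old[key], new[key], sub))
            return changes
        return [] if old == new else [(path, old, new)]

    # Only the modified attribute is recorded, so an iterative query can be
    # re-evaluated on the delta instead of on the full document:
    print(json_delta({"a": 1, "b": {"c": 2}}, {"a": 1, "b": {"c": 3}}))
    # [('/b/c', 2, 3)]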
We improve the filtering performance of JODA by implementing a holistic adaptive indexing approach that creates and improves structural and content indices on the fly, depending on the query load.
No prior knowledge about the data is required, and the indices are automatically improved over time.
JODA is also modularized and can be extended with new user-defined predicates, functions, indices, import, and export functionalities.
These modules can be written in an external programming language and integrated into the query execution pipeline at runtime.
To evaluate this system against competitors, we introduce a benchmark generator, coined BETZE, which aims to simulate data scientists exploring unknown JSON datasets.
The generator can be tweaked to generate query workloads with different characteristics, or predefined presets can be used to quickly generate a benchmark.
Our evaluation shows that JODA outperforms its competitors in most tasks across a wide range of datasets and use cases.
Facing the demands of the energy transition, gas turbines require continuous development to improve thermal efficiency. Since this can be achieved by further increasing the turbine inlet temperature, advanced cooling techniques are required to protect the highly loaded turbine components. This includes the first nozzle guide vane, which is located just downstream of the combustion chamber. Film cooling, i.e., injecting coolant into the hot-gas path, has been a cornerstone of turbine cooling. While the coolant film is typically supplied through discrete cooling holes, design-related gaps, e.g., the purge slot between the transition duct and the vane platform, can be utilized for injecting coolant. Since the coolant is drawn from the compressor, potentially offsetting thermal efficiency gains from increased turbine inlet temperatures, efficient use of the coolant is critical. In this context, experimental data obtained under engine-like flow conditions, i.e., matching the Mach and Reynolds numbers that are present in the engine, are indispensable for assessing the film cooling performance. Existing research on upstream slot injection has a blind spot, as all high-speed studies were conducted in linear cascades. This approach neglects, by principle, the influence of the radial pressure gradient that naturally occurs in swirling flows and potentially affects coolant propagation. Therefore, a high-speed annular sector cascade has been developed: It allows testing the film cooling performance and aerodynamic effects of coolant flows from various upstream slot configurations, not only at engine-like Mach and Reynolds numbers but also considering the radial pressure gradient. The cascade is equipped with nozzle guide vanes with contoured endwalls representing state-of-the-art turbine design. The results to be expected from the test rig are, therefore, of great relevance.
The annular sector cascade is integrated into the existing high-speed turbine test facility at the Institute of Fluid Mechanics and Turbomachinery (University of Kaiserslautern-Landau), which was previously used for testing a linear cascade with the same nozzle guide vane design. It incorporates various measurement techniques such as five-hole probes, pressure-sensitive paint, and infrared thermography to investigate both the thermal and aerodynamic aspects of film cooling. This thesis provides a detailed description of the cascade development, starting from the aerodynamic design up to the structural implementation. It also includes the results of the previous measurements in the linear cascade, as they provided the basis for refining the measurement methods.
Research across virtually all subfields of psychology has suffered from construct proliferation, often resulting in redundant constructs that strongly overlap conceptually and/or empirically. Such cases of old wine in new bottles, i.e., established constructs with new labels, are instances of the jangle fallacy and are problematic because they lead to fragmented literatures and thereby considerably impede the accumulation of knowledge.
The present thesis aims at demonstrating how to scrutinize potential jangle fallacies in a theory-driven, deductive, and falsificationist way. Using the example of the common core of aversive traits, D, I discuss the ways one can find and test differences between more or less overlapping, competing constructs. Specifically, the first paper tests the plausibility of a potential jangle fallacy with respect to D and a Fast Life History Strategy, concluding that the latter is unlikely to represent the common core of aversive traits at all. The remaining three papers test the distinctness of D from FFM Agreeableness, HEXACO Honesty-Humility, and a blend of the two, AG+, all of which are conceptually and empirically remarkably similar to, but could nevertheless be dissociated from, D, thereby also refuting an instance of the jangle fallacy.
Although research often places emphasis on similarities, it is impossible to conclusively prove the equivalence of constructs. I therefore conclude that a falsificationist approach is more informative in that it allows testing whether any differences identified on a conceptual level can be confirmed empirically. Stated differently, if a new construct is dissociable both theoretically and empirically, one may assume that it is functionally distinct and not an instance of the jangle fallacy.
This thesis describes the synthesis and extensive characterization of mononuclear
cis-(carboxylato)(hydroxo)iron(III) and cis-(carboxylato)(aqua)iron(II) complexes
among others and illuminates their capability to engage in hydrogen atom transfer
reactions via reactivity studies with suitable substrates. The employed carboxylates
include benzoate, p-nitrobenzoate, and p-methoxybenzoate. Additionally, the first
example for a solution-stable mononuclear cis-di(hydroxo)iron(III) complex is
presented, the extensive characterization of which aims to contribute to the
identification of spectroscopic markers and a better understanding of the role of the
carboxylate ligand in the above-mentioned complexes.
The cis-(carboxylato)(hydroxo/aqua)iron(III/II) complexes match the coordination
environment and the electronic properties of the active iron site in the resting state of
rabbit lipoxygenase as well as of the reaction intermediates postulated for the
enzymatic mechanism. In addition to being excellent structural and electronic models,
the cis-(carboxylato)(hydroxo)iron(III) complexes display reactivity in abstracting
hydrogen atoms from (weak) O–H and C–H bonds of suitable substrates, thus proving
themselves to be worthy functional model complexes for lipoxygenases. The findings
are supported with extensive structural, spectroscopic, spectrometric, magnetic, and
electrochemical investigations as well as with quantified thermodynamic and kinetic
parameters to allow for an adequate comparison between the derivatives with varying
carboxylate ligands and to other works. Moreover, the reactivity investigation of the cis-(benzoato)(hydroxo)iron(III) complex (the first example found) was, as an exemplary case, accompanied by a thorough theoretical study (performed by external cooperation partners), which validates the experimental results and identifies an underlying concerted proton-coupled electron transfer (cPCET) mechanism for the cis-(carboxylato)(hydroxo)iron(III) complexes – analogous to the one suggested for the enzyme.
The synthesis and study of a functional structural model complex is extremely challenging and rarely successful. Thus, this result alone represents a significant scientific advancement for the field, as no precedent for such a lipoxygenase model existed prior to this project. The in-depth studies with derivatives of the initial cis-(benzoato)(hydroxo/aqua)iron(III/II) complexes further contribute to this advancement by illuminating structure-function relations.
Human pose in terms of 3D joint angles is needed for applications like activity recognition, musculoskeletal health, sports biomechanics, and ergonomics. Microelectromechanical systems (MEMS) based magnetic-inertial measurement units (MIMUs) can estimate 3D orientation. Due to their small size, MIMUs can be attached to the body as wearable sensors for obtaining the full 3D human pose; this system is termed inertial motion capture (i-Mocap). However, MIMUs suffer from sensor errors and disturbances, due to which the orientation estimated from individual MIMUs can be erroneous. Accurate sensor calibration is essential, and subsequently the alignment of these sensors to the body segments must also be precisely known, which is called sensor-to-segment calibration. Sensor fusion is employed to address the disturbances and noise in MIMUs. Many state-of-the-art inertial motion capture approaches ignore the magnetometer and only use IMUs to reduce the error arising from inhomogeneous magnetic fields. These algorithms rely on kinematic constraints and assumptions regarding the joints and are based on IMUs located on adjacent body segments. Full-body coverage requires 13 to 17 such units and can be quite obtrusive; setting up and calibrating so many wearable sensors also takes time.
This thesis focuses on 3D human pose estimation from a reduced number of MIMUs and deals with this problem systematically. First, we propose an accurate simultaneous calibration of multiple MIMUs, which also learns the uncertainty of the individual sensors. We then describe a novel sensor fusion algorithm for robust orientation estimation from an MIMU and for updating the sensor calibration online. The residual errors in both sensor calibration and fusion can result in drift error in the joint angles. Therefore, we present an anatomical (sensor-to-segment) calibration in which an orientation offset correction term is updated and used for online correction of residual drift in the individual joint angles. Subsequently, we demonstrate that 3D human joint angle constraints can be learned using a data-driven approach in a high-dimensional latent space. Owing to temporal and joint angle constraints, it is possible to use only a reduced set of sensors (as opposed to one sensor per segment) and still obtain the 3D human pose. However, the spatial and temporal prior learning from data is often limited due to the finite set of movement patterns in most datasets. This introduces uncertainty when estimating 3D human pose from sparse MIMU sensors. We propose a magnetometer-robust orientation parameterization and a data-driven deep learning framework to predict 3D human pose with associated uncertainty from sparse MIMUs. The model is evaluated on real MIMU data, and we show that the uncertainty predicted by the trained model is well correlated with the actual error and ambiguity.
Adult emerging aquatic insects can transfer micropollutants, accumulated during their aquatic development, from aquatic to terrestrial ecosystems. This process depends on both contaminant- and organism-specific properties and processes. The transfer of contaminants can result in the dietary exposure of terrestrial insectivores at the aquatic-terrestrial ecosystem boundary. It is, however, unknown whether this route of contaminant transfer is relevant for current-use pesticides, despite their ubiquity in freshwater ecosystems globally. Furthermore, empirical investigation of pesticides in terrestrial insectivores which consume emerging aquatic insects (e.g. riparian spiders) is lacking. In the present work, two laboratory batch-scale studies and a field study were conducted to investigate the transfer of current-use pesticides by emerging aquatic insects and the dietary exposure of riparian spiders preying on emerging insects. In the two laboratory studies, larvae of the model organism, Chironomus riparius, were exposed, either chronically to seven fungicides and two herbicides, or acutely (24 hours) to three individual insecticides during their development. The pesticides were all small organic molecules, selected to cover a low to moderate lipophilicity range (logKow 1.2 – 4.7). Exposure took place at three environmentally relevant concentrations for the fungicides and herbicides (1.2 – 2.5, 17.5 – 35.0 or 50.0 – 100.0 ng/mL) and two for the insecticides (0.1 and either 4 or 16 ng/mL). Eight of the nine fungicides and herbicides, as well as one of the three insecticides, were detected in the adult insects after metamorphosis. Concentrations of the pesticides decreased over metamorphosis. However, the transfer of individual pesticides was not well predicted using published models which are based on contaminant lipophilicity and were developed using other contaminant classes. In the present work, pesticide-specific differences in bioaccumulation by the larvae, retention through metamorphosis, and sex-specific bioamplification and elimination over the course of the terrestrial life stage were observed. The neonicotinoid, thiacloprid, was the only insecticide retained by the emerging insects, due to its slow elimination by the larvae. Thiacloprid also decreased insect emergence success. An approximately 30% higher survival to emergence at the low exposure level (0.1 ng/mL), however, resulted in a relatively higher insecticide flux from the aquatic to the terrestrial environment compared to the higher exposure (4 ng/mL). For the field study, a method for the analysis of 82 current-use pesticides by high-performance liquid chromatography coupled to triple quadrupole tandem mass spectrometry using small amounts (30 mg) of insect material was validated and applied to samples of emerging insects and Tetragnatha spp. spiders which were collected from stream sites impacted by agricultural activities. Emerging aquatic insects from three orders (Diptera, Ephemeroptera and Trichoptera) contained 27 pesticides, whereas 49 pesticides were found in the aquatic environment (water, sediment and aquatic leaf litter). This included mixtures of up to four neonicotinoid insecticides in the insects, with concentrations up to 12300 times greater than were found in the water. Furthermore, the web-building riparian spiders contained 29 pesticides, generally at low concentrations; however, concentrations of three neonicotinoids and one herbicide were biomagnified compared to the emerging insects.
The three studies included in this thesis thus reveal that the aquatic-terrestrial transfer of current-use pesticides occurs, even at very low environmentally relevant exposure concentrations. Furthermore, new knowledge was generated on the diverse interactions between current-use pesticides and organisms over their entire life cycles, which affect the propensity of individual pesticides to be transferred via insect emergence. A wide range of pesticides were found to be dietarily bioavailable to riparian spiders, and likely to many other riparian insectivores. The neonicotinoid insecticides stood out for their potential to negatively impact adjacent terrestrial food webs by reducing aquatic insect emergence (i.e. biomass flux), while still having a high propensity to be transferred by emerging insects and bioaccumulated in riparian spiders.
Efforts in decarbonization lead to electrification, not only for road vehicles but also in the sector of mobile machines. Aside from batteries, such machines are electrified via tethering systems, which nowadays feature low-voltage AC. These systems are applied, e.g., to underground load haul dumpers with short tethering lines and low machine power. To expand tethering to further markets such as agricultural machinery, this work proposes an HVDC tethering system that allows higher machine power and transmission length due to thinner, lighter tethering lines. The HVDC voltage is converted by distributing it over a number of series-connected DC/DC converters. The lower blocking voltage on the semiconductors allows faster switching technology, which reduces the converters' weight and volume. The concept's modularity allows flexible adaptation to various application scenarios. Since comparable concepts exist for the connectivity of offshore wind farms, the applicability of the proposed system to this area is discussed. A full-bridge inverter/rectifier LLC resonant DC/DC converter is presented for the modules. A switched LTI converter model is developed, and a Common Quadratic Lyapunov Function (CQLF) is computed to prove stability. The converter control features soft startup and voltage control over all modules. The concepts are validated by simulation and on a scaled prototype.
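For reference, for a switched LTI system \(\dot x = A_{\sigma(t)} x\) with modes \(A_1, \dots, A_m\), a CQLF is a single quadratic function \(V(x) = x^{\top} P x\) with \(P = P^{\top} \succ 0\) satisfying

\[ A_i^{\top} P + P A_i \prec 0 \quad \text{for all } i = 1, \dots, m, \]

which guarantees exponential stability under arbitrary switching; the conditions are linear matrix inequalities and can be checked numerically.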
In the context of distributed networked control systems, many issues affect the performance and functionality of the connected subsystems, mainly arising from the communication medium imposed on the system structure. The communication functionality must generally cope with the data exchange requirements between the system entities. Therefore, due to the limited communication resources, especially in wireless networks, an optimal algorithm for the assignment of the communication resources and a proper selection of the Medium Access Control (MAC) protocol are highly needed.
In this dissertation, we studied several problems raised by communication networks in wireless networked control systems, with a particular focus on the effect of standard Medium Access Control (MAC) protocols on the overall control system performance. We examined the effect of both the Time Division Multiple Access (TDMA) and the Orthogonal Frequency Division Multiple Access (OFDMA) protocols and developed a set of distributed algorithms that suit their specification requirements.
As a benchmark, we used a vehicle dynamics optimal control problem in which the objective of the optimization problem is to penalize the maximal utilization of the tires' adhesion forces for a given driving maneuver. The problem was decomposed into a distributed form using primal and dual decomposition techniques, and solution algorithms were derived using both primal and dual subgradient methods. The problem solver was tested with respect to a wireless networked system structure and evaluated for different communication topologies, such as unidirectional, bidirectional, and broadcasting topologies.
Later, the setup of the solution algorithms was extended with respect to the specifications of the TDMA and OFDMA protocols, and we introduced an event-triggered scheme into the solver algorithm. The proposed event-triggered scheme is mainly used to reduce the communication between concurrent computation subsystems, which primarily facilitates real-time efficiency.
Next, we investigated the effect of the data exchange between subsystems on the overall solver performance and adapted the sensitivity analysis concept within the event-based communication scheme. An adaptive sensitivity-based TDMA algorithm was developed to manage the extensive communication resource requests, and channel utilization was adapted to the behavior of the optimal solution.
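In spirit (an assumed simplification for illustration, not the thesis' actual algorithm), sensitivity-based slot assignment grants the scarce TDMA slots to the subsystems whose pending updates currently matter most for the global solution:

    # Grant the n_slots TDMA slots to the subsystems with the largest current
    # sensitivities, i.e., the largest expected effect on the global optimum.
    def allocate_slots(sensitivities, n_slots):
        ranked = sorted(sensitivities, key=sensitivities.get, reverse=True)
        return ranked[:n_slots]

    print(allocate_slots({"veh1": 0.9, "veh2": 0.2, "veh3": 0.5}, 2))
    # ['veh1', 'veh3']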
In the last part of the thesis, we extended our research direction to a multi-vehicle concept and investigated the communication resource allocation problem in the context of the OFDMA protocol. We developed an adaptive sensitivity-based OFDMA protocol that links the evolution of the application layer to the communication layer and assigns the communication resources according to the sensitivity analysis of the optimization problem at the application layer.
Augmented (AR), Virtual (VR) and Mixed Reality (MR) are on their way into everyday life. The recent emergence of consumer-friendly hardware to access this technology has greatly benefited the community. Research and application examples for AR, VR and MR can be found in many fields, such as medicine, sports, cultural heritage, teleworking, entertainment and gaming. Although this technology has been around for decades, immersive applications using it are still in their infancy. As manufacturers increase the accessibility of these technologies by introducing consumer-grade hardware with natural input modalities such as eye gaze or hand tracking, new opportunities but also problems and challenges arise. Researchers strive to develop and investigate new techniques for dynamic content creation or novel interaction techniques. It remains to be determined which interactions users can perform intuitively. A major issue is that the possibilities for easy prototyping and rapid testing of new interaction techniques are limited and largely unexplored.
In this thesis, different solutions are proposed to improve gesture-based interaction in immersive environments by introducing gesture authoring tools and developing novel applications. Specifically, hand gestures should be made more accessible to people outside this specialised domain. First, a survey is introduced which explores one of the largest and most promising application scenarios for AR, VR and MR, namely remote collaboration. Based on the results of this survey, the thesis focuses on several important issues to consider when developing and creating applications. At its core, the thesis is about rapid prototyping based on panorama images and the use of hand gestures for interaction. Therefore, a technique to create immersive applications with panorama-based virtual environments including hand gestures is introduced. A framework to rapidly design, prototype, implement, and create arbitrary one-handed gestures is presented. Based on a user study, the potential of the framework as well as the efficacy and usability of hand gestures are investigated. Next, the potential of hand gestures for locomotion tasks in VR is investigated. Additionally, it is analysed how lay people adapt to the use of hand tracking technology in this context. Lastly, the use of hand gestures for grasping virtual objects is explored and compared to state-of-the-art techniques. Within this thesis, different input modalities and techniques are compared in terms of usability, effort, accuracy, task completion time, user rating, and naturalness.
Though Computer Aided Design (CAD) and Simulation software are mature, well established, and in wide professional use, modern design and prototyping pipelines are challenging the limits of these tools. Advances in 3D printing have brought manufacturing capability to the general public. Moreover, advancements in Machine Learning and sensor technology are enabling enthusiasts and small companies to develop their own autonomous vehicles and machines. This means that many more users are designing (or customizing) 3D objects in CAD, and many are testing machine autonomy in Simulation. Though Graphical User Interfaces (GUIs) are the de-facto standard for these tools, we find that these interfaces are neither robust nor flexible. For example, designs made using a GUI often break when customized, and setting up large simulations in a GUI can be quite tedious. Though programmatic interfaces do not suffer from these limitations, they are generally quite difficult to use, and often do not provide appropriate abstractions and language constructs.
In this thesis, we present our work on bridging the ease of use of GUIs with the robustness and flexibility of programming. For CAD, we propose an interactive framework that automatically synthesizes robust programs from GUI-based design operations. Additionally, we apply program analysis to ensure customizations do not lead to invalid objects. Finally, for simulation, we propose a novel programmatic framework that simplifies the building of complex test environments, and a test generation mechanism that guarantees good coverage over the test parameters. Our contributions help bring some of the advantages of programming to traditionally GUI-dominant workflows. Through novel programmatic interfaces, and without sacrificing ease of use, we show that the design and customization of 3D objects can be made more robust, and that the creation of parameterized simulations can be simplified.
Faces deliver invaluable information about people. Machine-based perception can be of great benefit in extracting that underlying information from face images if the problem is properly modeled. Classical image processing algorithms may fail to handle the diverse data available today due to several challenges related to varying capture locations and conditions. Advanced machine learning methods and algorithms are now highly beneficial due to the rapid development of powerful hardware, enabling feasible advanced solutions based on learning from data and its summarization into powerful models. In this thesis, novel solutions are provided to the problems of head orientation estimation and gender prediction. Initially, classical machine learning algorithms were used to address head orientation estimation but were limited by their inability to handle large datasets and by poor generalization. To overcome these challenges, a new highly accurate head pose dataset was acquired, and novel deep neural networks with new architectures were trained that exploit the acquired data. The information about head pose is then represented in the network weights, thus allowing the prediction of head orientation angles for a new, unseen face. The acquired dataset, named AutoPOSE, opens the door for further studies in the field of computer vision and especially face analysis. The problem of gender prediction has also been explored; unlike humans, who can easily identify gender from a face, computers face difficulties due to facial similarities, so hand-crafted features are not effective for generalization. To address this, a new deep learning method was developed and evaluated on multiple public datasets, with identified challenges in both still images and videos addressed. Finally, the effect of facial appearance changes due to head orientation variation on gender prediction accuracy has been investigated. A novel orientation-guided feature map recalibration method is presented that significantly increases the accuracy of gender prediction.
In conclusion, two problems have been addressed in this thesis, independently and jointly. Existing methods have been enhanced with intelligent pre-processing, and new approaches have been introduced to tackle existing challenges that arise from pose, illumination, and occlusion variations. The proposed methods have been extensively evaluated, showing that head orientation and gender can be estimated with high accuracy using machine learning-based methods. The evaluations also showed that the use of head orientation information consistently improved gender prediction accuracy. Scientific contributions have been presented, and the newly acquired highly accurate dataset motivates the research community to push the state of the art forward.
Undocumented enterprise data can easily pile up in companies in the form of datasets and personal information. In the absence of a data management strategy, such data becomes rather messy and may not be fit for its intended use. Since there is often no documentation available, only a limited number of domain experts are aware of its contents. Therefore, it becomes increasingly difficult for companies to use such data to its full potential. To provide a solution, this PhD thesis investigates the construction of enterprise and personal knowledge graphs by semantically enriching messy data using semantic technologies. Since real-world entities and their interrelations are organized in a graph, knowledge graphs serve as a semantic bridge between domain conceptualization and raw data. Spreadsheets are a prominent example of such enterprise data, since they are widely used by knowledge workers in the industrial sector. Two distinct approaches to constructing knowledge graphs from them are investigated: a global extraction & annotation method and a local mapping technique. The latter is further complemented with a predictor of mapping rules on messy data. Different human-in-the-loop strategies are considered to include experts depending on their user group. Since non-technical users usually lack an understanding of semantic technologies, they need appropriate tools to be able to give feedback. In the case of developers, approaches are proposed to close the technology gap between industry and Semantic Web related concepts. Semantic Web practitioners participate with ontology modeling and linked data applications. Enterprise and personal data is typically confidential, which is why it cannot be shared with a research community to discuss its challenges. However, for evaluation and reproducibility reasons, publicly available datasets are mandatory. The thesis proposes ways to generate synthetic datasets with the goal of being as authentic as possible. Besides that, for internal evaluations, a crawler of personal data on desktops is implemented. There are further contributions related to this thesis in diverse domains. One concerns the motivation to support users in their daily work using personal knowledge assistants. Others concern the agricultural field and the data science domain, which also benefit from knowledge graph approaches. In conclusion, this PhD thesis contributes to the construction of knowledge graphs from especially messy enterprise data, while users from different groups take part in this process in various ways.
The present thesis describes the experimental performance determination and numerical modeling of an aerostatic porous bearing made of an orthotropically layered ceramic composite material (CMC). The high temperature resistance, low thermal expansion and high reusability of this material make it eminently suitable for use in highly stressed fluid-film bearing applications.
The work involves the development of an aerostatic journal bearing made of porous,
orthotropically layered carbon fiber-reinforced carbon composite (C/C) and the design
of a journal bearing test rig, which contained additional aerostatic support bearings and
six optical laser triangulation sensors. The sensor system enabled the measurement of
lubricant film thickness and shaft misalignment. As a result of the small air lubrication clearance of 30 μm, the focus was on low concentricity deviations and the determination of shaft misalignments.
The preliminary tests included the determination of the permeability of the porous material
and the applicability of Darcy’s law. A scan of the inner surface of the porous bushing
revealed a characteristic grooved structure, which can be attributed to the layered structure
of the material. Bearing tests were conducted up to a rotational speed of 8000 rpm and a
pressure ratio of 5 to 7. No significant effect of rotational speed on load-carrying capacity
and gas consumption was observed in this operating range. The examined operating points did not indicate any sign of the occurrence of pneumatic hammer. A temporary load of below 90 N on the bearing and an eccentricity ratio below 0.8 did not cause any significant wear on the shaft.
Four numerical models, based on Reynolds' lubricant film equation and Darcy's law, were developed. The models were gradually extended by considering shaft misalignment, the compressibility of the gas, the geometry of the pressure supply chamber, and the embedding of the groove structure. The models were validated against external publications and the performed tests.
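The basic coupling behind these models, stated here in a common textbook form (up to sign and geometry conventions; the extended models add misalignment, the supply chamber, and the groove structure), combines Darcy flow in the porous bushing with the compressible Reynolds equation in the lubricating film:

\[ \mathbf v = -\frac{\kappa}{\mu}\,\nabla p \quad \text{(Darcy)}, \qquad \nabla\cdot\big(p\,h^{3}\,\nabla p\big) = 6\mu U\,\frac{\partial (p h)}{\partial x} + 12\mu\,\frac{\partial (p h)}{\partial t} - 12\mu\,p\,v_w, \]

where \(\kappa\) is the permeability, \(\mu\) the dynamic viscosity, \(h\) the film thickness, \(p\) the film pressure, \(U\) the shaft surface speed, and \(v_w\) the Darcy inflow velocity at the porous interface.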
Numerous studies have investigated aerostatic porous bearings made of sintered metal and graphite. Current computational approaches for a fast preliminary design reach maximum deviations of approximately 20 to 24% compared to experimental tests. One of the central aims of this research was to extend this area of investigation to porous, orthotropically layered bearings made of C/C. The developed extended Full-Darcy model achieved a maximum deviation of 21.6% in the load-carrying capacity and of 23.5% in the gas consumption.
This study demonstrates the applicability of a resistant material from the aerospace field
(reusable thrust chambers made of CMC) for highly stressed and durable fluid-film bearings.
Furthermore, a numerical model for the computation and design of these bearings was
developed and validated.
This thesis focuses on novel methods to establish the utility of wearable devices, together with machine learning and pattern recognition methods, for formal education, and addresses the open research questions posed by existing methods. Firstly, state-of-the-art methods are proposed to analyse the cognitive activities in the learning process, i.e., reading, writing, and their correlation. Furthermore, this thesis presents real-time applications in the wearable space: an experimental tool for Physics education and an air-writing system.
There are two critical components in analysing reading behaviour, i.e., WHERE a person looks (gaze analysis) and WHAT a person looks at (content analysis). This thesis proposes novel methods to classify the reading content, addressing the WHAT component. The proposed methods are based on a hybrid approach which fuses traditional computer vision methods with deep neural networks. These methods, when evaluated on publicly available datasets, yield state-of-the-art results in recovering the structure of document images. Moreover, extensive efforts were made to refine and correct the ICDAR2017-POD dataset, and a completely new dataset, FFD, was created.
Traditionally, handwriting research focuses on character and number recognition without looking into the type of writing, i.e., text, math, or drawing. This thesis reports multiple contributions to on-line handwriting classification. First, it presents a public dataset for on-line handwriting classification, OnTabWriter, collected using an iPen and an iPad. In addition, a new feature set is introduced for on-line handwriting classification to establish a benchmark on the proposed dataset for classifying handwriting as plain text, mathematical expression, or plot/graph. An ablation study evaluates the performance of the proposed feature set in comparison to existing feature sets. Lastly, this thesis evaluates the importance of context for on-line handwriting classification.
Analysing reading and writing activities individually is not enough to identify a student's expertise unless their correlations are analysed. This thesis presents a study in which reading data from wearable eye-trackers and writing data from a sensor pen are analysed together to correlate users' expertise in Physics education with their actual knowledge. Initial results show a strong correlation between an individual's expertise and their understanding of the subject.
Augmented and virtual reality applications can play a vital role in making classroom environments more interactive and engaging for both teachers and learners. To validate this hypothesis, different applications are developed and evaluated. First, smart glasses are used as an experimental tool in Physics education: they help learners perform experiments by providing assistance and feedback on a head-mounted display for understanding acoustics concepts. Second, a real-time air-writing application, the FAirWrite system, is presented, which lets users write with a finger on an imaginary canvas using a single IMU. The FAirWrite system is further equipped with DL methods to classify the air-written characters.