A new class of amines that are promising solvents for reactive CO2-absorption processes was thoroughly investigated in a comprehensive experimental study. The amines are all derivatives of triacetoneamine and differ only in the substituent of the triacetoneamine ring structure. These amines are abbreviated by the acronym EvA with a consecutive number that designates the derivatives. About 50 EvAs were considered in the present study, of which 26 were actually synthesized and investigated as aqueous solvents. The investigated properties were: solubility of CO2, rate of absorption of CO2, liquid-liquid and solid-liquid equilibrium, speciation (qualitative and quantitative), pK-values, pH-values, foaming behavior, density, dynamic viscosity, vapor pressure, and liquid heat capacity. All 26 EvAs were assessed in an experimental screening. The results were compared with those of two standard solvents from industry: aqueous solutions of monoethanolamine (MEA) and a solvent blend of methyl-diethanolamine and piperazine (MDEA/PZ). Detailed studies were carried out for two EvAs that revealed significantly improved performance compared to MEA and MDEA/PZ: EvA34 combines favorable properties of MEA and MDEA/PZ in one molecule. EvA25 reveals a liquid-liquid phase split that reduces the solubility of CO2 in the solvent and shifts the CO2 into the aqueous phase. This allowed the design of a new CO2-absorption process that takes advantage of the liquid-liquid phase split. Finally, the chemical speciation in 16 EvAs was investigated by NMR spectroscopy. From the results, relationships between the chemical structure of the EvAs and the observed speciation, basicity, and application properties were established. This enabled giving guidelines for the design of new amines and proposing new types of amines, which were called ADAMs.
This thesis aims to establish a transient electro-thermomechanical model capable of characterizing the shape-morphing capabilities of shape memory alloy hybrid composites (SMAHCs). The particular SMAHC type examined in this study comprises a rigid substrate, a soft interlayer, and SMA wires sewn on top. The model was synthesized from the bottom up using well-established equations, methodologies, and solution procedures, taking into account appropriate simplifications and assumptions. The implementation was done with open-source solutions to ensure free availability. The model extends existing models to include aspects of external influences so that, for example, the efficiency and dynamics of the SMAHC can be predicted as a function of external mechanical loads and different ambient temperatures. Inputs to the model include geometric and material design factors, Joule heating, and ambient conditions, while outputs include the SMAHC’s deflection, load-carrying capacities, bandwidth, and energy consumption. Individual components of the SMAHC were characterized to create simulation input parameters, and methodologies for characterization were devised. The thermomechanical and electro-thermomechanical model was validated by comparing experimental and simulated data. Regardless of the various assumptions and simplifications, the findings demonstrate that the transient deformation behavior during the electrically induced thermal activation of a SMAHC at room temperature and external loads of less than 19.2 N can be predicted with deviations of less than 20 percent. With increasing mechanical stresses in the shape memory alloy attributable to external loads or rigid substrates, and at temperatures above the austenite start temperature or below -10°C, the model’s applicability may become questionable.
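The thesis couples electrical, thermal, and mechanical equations; as a minimal illustration of only the thermal core, the transient heating of a resistively activated SMA wire can be sketched with a textbook lumped-capacitance balance of Joule heating against convective loss. All parameter names and values here are illustrative assumptions, not the thesis's actual inputs.

```python
import math

def wire_temperature(t, current, resistance, h, area, mass, c_p, t_amb):
    """Lumped-capacitance temperature of a resistively heated wire.

    Solves dT/dt = (I^2 R - h A (T - T_amb)) / (m c_p) analytically,
    with T(0) = T_amb:
        T(t) = T_amb + (I^2 R / (h A)) * (1 - exp(-t / tau)),
    where tau = m c_p / (h A) is the thermal time constant.
    """
    steady_rise = current**2 * resistance / (h * area)  # steady-state temperature rise
    tau = mass * c_p / (h * area)
    return t_amb + steady_rise * (1.0 - math.exp(-t / tau))
```

Such a model predicts, for instance, why higher ambient temperatures or external cooling shift the activation dynamics, which is the kind of external influence the thesis's far richer model accounts for.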
The field of 3D reconstruction is one of the most important areas in computer
vision. It is not only of theoretical importance, but it is also increasingly
used in practical applications, be it in reverse engineering, quality control or
robotics. In practical applications, where high precision reconstructions are
required for a large variety of different objects, structured light reconstruction
is often the method of choice. It achieves accurate and dense
point correspondences over the entire scene, regardless of object texture or
features. Techniques that project phase-shifted sinusoidals are widely used
because, based on the harmonic addition theorem, they theoretically allow
surface encoding in full camera resolution invariant to the object’s coloring.
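As background for this encoding principle, the classical N-step phase-shifting formula (a textbook identity, not the specific pipeline of this thesis) recovers the wrapped phase at each pixel independently of the unknown intensity offset and amplitude, which is exactly why the method is insensitive to the object's coloring:

```python
import math

def decode_phase(intensities):
    """Recover the wrapped phase at one pixel from N phase-shifted
    sinusoidal patterns I_k = A + B * cos(phi - 2*pi*k/N), N >= 3.

    By the harmonic addition theorem, the two sums below isolate phi:
    the unknown offset A and amplitude B (i.e. the object's albedo)
    cancel out of the arctangent ratio.
    """
    n = len(intensities)
    s = sum(i_k * math.sin(2 * math.pi * k / n) for k, i_k in enumerate(intensities))
    c = sum(i_k * math.cos(2 * math.pi * k / n) for k, i_k in enumerate(intensities))
    return math.atan2(s, c)  # wrapped phase in (-pi, pi]
```

The wrapped phase must then still be unwrapped (e.g. with additional lower-frequency patterns) to obtain a unique code per pixel.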
In this thesis, a fully-automatic reconstruction pipeline based on the sinusoidal
structured light technique is presented. From the projection of the
fringe patterns for encoding the object’s surface, the robust matching of the
point correspondences with sub-pixel accuracy, the auto-calibration of the setup
including the active device, up to the fully-automatic alignment of the partial
reconstructions, all steps will be described and examined in detail. Along
the way, improvements will be achieved in the area of matching, obtaining highly
accurate and topologically consistent correspondences with sub-pixel precision
between all the devices used. Furthermore, the auto-calibration from point
correspondences, based on the epipolar geometry of the structured light system,
is improved. Weaknesses of previous methods in the extraction of focal
lengths from the fundamental matrices are discovered and addressed. The partial
point clouds, reconstructed from the auto-calibrated devices, are finally
pre-aligned using a neural network approach, based on light-resistant optical
flow estimation and subsequently refined using a global approach.
The weaknesses of the structured light method itself will also be addressed
and partially fixed during the course of this work. Since it is an active reconstruction method, certain surface properties can affect the quality of the
reconstruction. It will be shown how these problems can be eliminated or at
least be reduced using an iterative approach that combines fringe patterns with
an inverse texture. Another weakness of the method is its time-consuming acquisition procedure. Typically, a large number of horizontal and vertical fringe
patterns are projected onto the scene to achieve high-precision encoding despite
the limited dynamic range and resolution of the projector. Therefore, a
method will be presented which combines the horizontal and vertical
patterns for a simultaneous two-dimensional surface encoding.
The intensive use of pesticides is one of the main causes of global arthropod decline, which can subsequently affect ecosystem services such as pollination, natural pest control, and soil fertility and cascade to higher trophic levels including bats and birds. However, agriculture is in large parts strongly dependent on pesticides, and viticulture in particular is one of the major consumers of fungicides. Fungus-resistant grape varieties offer a very good opportunity to reduce fungicide applications by more than 80 % while maintaining healthy grapes. Here, the effects of fungicide reduction on arthropods and natural pest control were investigated on the one hand in a long-term study in an experimental vineyard and on the other hand in 32 commercially managed vineyards in southwestern Germany. In both designs, fungicide reduction resulted in mostly positive effects on arthropods and natural pest control. Particularly beneficial arthropods such as predatory mites and spiders were promoted by reduced fungicide applications. Contrastingly, potential vineyard pests such as phytophagous mites and leafhoppers decreased under fungicide reduction. Fungus-resistant grape varieties are thus a promising approach to foster resilient agroecosystems and a more sustainable viticulture.
Northwest Africa is predicted to undergo a climatic shift from a temperate to an arid climate resulting in increased aridity, water salinity, and river intermittency. These changes have the potential to impact freshwater communities, ecosystem functioning, and related ecosystem services. However, there is still limited data on the impact of climate change and salinity on river ecosystems and the people depending on them, particularly in understudied regions such as Northwest Africa. In this dissertation, I focus on the Draa River basin in southern Morocco to assess the primary factors shaping and altering macroinvertebrate communities. A particular focus is placed on the impacts of salt on the ecosystem and the consequences for human well-being. We conducted a meta-analysis covering 195 sites in Northwest Africa to examine the responses of insect communities and their trait profiles to climate change and anthropogenically induced stressors. To exclude large-scale geographic patterns such as variations in climate conditions, we conducted a confluence-based study focusing on tributaries and their joint downstream sections near three confluences in the Draa River basin. Additionally, we investigated the water and biological quality of 17 further sites, aiming to explore the relationship between human well-being and the ecosystem. Our approach involved conducting water measurements, biological monitoring, and household surveys to create water, biological, and human satisfaction indices. Our findings revealed that insect family richness in arid sites of Northwest Africa was, on average, 37 % lower than in temperate sites. Among the strongest factors contributing to reduced richness and low biological quality were low flow and high water salinity. Based on the results of the confluence study, only around five taxa comprised over 90 % of specimens per site, with a higher proportion of salt-tolerant generalist species in saline sites.
Resistance and resilience traits such as small body size, aerial dispersal, and air breathing were found to promote survival in arid and saline sites. However, low γ-diversity in the basin caused minimal differences in macroinvertebrate community composition, suggesting that the community was generally adapted to the arid climate. We observed positive associations between river water quality and biological quality indices. However, no significant associations were found between these indices and human satisfaction. Human satisfaction was particularly low in the Middle Draa, where 89 % of respondents reported emotional distress due to water salinity and scarcity. Inhabitants in areas characterized by higher levels of water salinity and scarcity generally rated drinking and irrigation water quality lower. Considering that large parts of Northwest Africa will become arid by the end of the century, we can expect a loss of macroinvertebrate diversity affecting the entire ecosystem, which might affect human well-being negatively. To protect the integrity of the ecosystem in the face of ongoing climate change, it is crucial to limit anthropogenic stressors such as secondary salinization and the pressures on water resources. Protecting both more and less saline rivers, preserving natural water flow, and maintaining connectivity between habitats will help maintain the Draa River's biodiversity, ensure ecosystem functioning, and benefit inhabitants through ecosystem services. Future policies and action plans should consider the interdependence between ecosystems and human inhabitants to enhance overall well-being.
In recent decades, there has been a strong global decline in biodiversity which is attributed, among other reasons, to intensified agriculture and the loss of habitats. Due to these significant ecological impacts, it is crucial to comprehensively understand how management practices and the surrounding landscape affect species, as well as how these factors influence their populations over the long term. We studied the influence of weather and trapping effort on multi-day Malaise trap sampling, examining their effects on long-term monitoring data. We further explored how vineyard management and the presence of semi-natural habitats (SNH) affect arthropods in the wine-growing region Palatinate in southwest Germany.
We evaluated the impact of ambient weather conditions and trapping effort during Malaise trap exposure on biomass and taxa richness using metabarcoding. Insect activity was highest when the weather was warm and dry. Taxa accumulation increased fourfold from three days of monthly trapping to continuous trap exposure and nearly sixfold from sampling at a single site to 32 sites. Common species are likely to be captured with short trapping durations and a small number of sampling sites, while it remains challenging to comprehensively sample rare species. Metabarcoding provides a valuable method for long-term monitoring. However, additional sequencing efforts are required to establish more comprehensive DNA databases.
Furthermore, we investigated how organic and conventional management, reduction of pesticides, and SNH in the surrounding landscape affect arthropod diversity in vineyards. Biodiversity was assessed in 32 vineyards in a crossed design of management (organic vs. conventional) and pesticide use (regular vs. reduced in fungus-resistant grape varieties). The pairs of vineyards were located in 16 landscapes, with increasing proportions of SNH in the surrounding area of the vineyards. We measured the biomass of captured specimens and used metabarcoding to assess the general arthropod biodiversity. Furthermore, we used morphological and acoustic species identification to investigate effects on wild bees and orthopterans. Biomass was almost one-third higher in conventional compared to organic vineyards, while organic vineyards had almost 50 % more bees. Densities of herb-dwelling orthopterans were 2.9 times higher in fungus-resistant compared to classic grape varieties under organic management. Higher proportions of SNH increased arthropod richness as well as abundance and richness of above-ground-nesting bees and further changed community composition of arthropods, including wild bees and orthopterans. Increased inter-row vegetation had positive effects on various groups of organisms. Our studies on the influence of vineyard management show that reducing pesticide use, particularly under organic management, can enhance sustainability in viticulture and promote biodiversity. Moreover, further species benefit from diverse inter-row vegetation and SNH in the surrounding landscape. We conclude that the cultivation of fungus-resistant grape varieties is of importance to minimize the need for non-specific pesticides, while it is also important to provide diverse vegetation in inter-rows and create a structurally rich environment with suitable SNH to conserve biodiversity in viticulture.
During our daily lives, we are confronted with vast amounts of data, the processing of which can dramatically influence our lives, both positively and negatively. The enormous amount of data (images, texts, tables, and time series), its variety, and its possible applications are not always obvious. Due to advancements in the Internet of Things (IoT), there exist billions of sensors that produce time series, which can be found everywhere, whether in medicine, the financial sector, or the agricultural economy. This incredible amount of time series data has many hidden features which are useful for industry as well as for daily use; e.g., improved cancer prediction can save human lives. Recently, several deep learning methods have been proposed for analyzing this time series data. However, due to their black-box nature, their applicability is limited in critical sectors like medicine, finance, and communication. In addition, it is now mandatory under the Artificial Intelligence (AI) Act and the General Data Protection Regulation (GDPR) to protect sensitive data and provide explanations in safety-critical domains. To enable the use of deep neural networks (DNNs) in a broader range of domains, this thesis presents TimeFrame, a framework for privacy-preserving and interpretable time series analysis. TimeFrame consists of four main components, namely, post-hoc interpretability, intrinsic interpretability, direct privacy, and indirect privacy. Interpretability is indispensable to avoid harming people or the infrastructure. In past years, development mostly focused on image data, which prevented the full potential of DNNs in time series processing from being exploited. To overcome this limitation, TimeFrame introduces five novel post-hoc interpretability components (Time to Focus, TSViz, TimeREISE, TSInsight, Data Lens) and two novel intrinsic interpretability components (PatchX, P2ExNet).
TimeFrame addresses multiple perspectives such as attribution, compression, visualization, influence, prototyping, and hierarchical splitting. Compared to existing methods, the components show better explanations, robustness, and scalability. Another crucial factor is privacy when dealing with sensitive data and deep learning. In this context, TimeFrame introduces two components for direct privacy (PPML, PPML x XAI) and one component for indirect privacy (From Private to Public). These components benchmark privacy approaches, their effect on interpretability, and the synthetic generation of data to overcome privacy concerns. TimeFrame offers a large set of interpretability and privacy components that can be combined and consider numerous different aspects. Furthermore, the novel approaches have been shown to consistently outperform twenty existing state-of-the-art methods across up to 20 different datasets. To guarantee fairness, various metrics were used, including performance change, Sensitivity, Infidelity, Continuity, runtime, model dependency, compression rate, and others. This broad set of metrics makes it possible to provide guidelines for a more appropriate use of existing state-of-the-art approaches as well as the novel components included in TimeFrame.
Individual thermal comfort in buildings, especially in office workplaces, is becoming increasingly
important in modern society. While technical devices for user-specific heating are well known and
implemented, only a few proven methods for individual cooling of a single person are available, most
of which are limited to convective heat transfer.
The primary goal of this research was the development of an effective and efficient cooling system
for individual building occupants based on longwave radiation exchange. To achieve this, the
technological concept of a thermoelectric cooling partition with latent heat storage (Thecla) was
developed. The system combines Peltier elements and heat storage based on a phase change
material to provide a tempered surface for directional radiative cooling of a person.
Thecla has been practically evaluated in the form of real prototypes in hardware tests and human
subject studies. In addition, the concept was evaluated theoretically through precise thermodynamic
analyses of each individual component and of the overall system. Based on these assessments, an
explicit computational model of Thecla was developed, which calculates the thermodynamic
behavior and energy balance of the system for varying environmental and operating parameters.
Coupled with measured and simulated building energy data, the overall energy efficiency of Thecla in
combination with central space cooling systems was assessed.
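The thesis's full thermodynamic model of Thecla accounts for each component; as a rough orientation, the steady-state behavior of the Peltier elements at its core follows the standard textbook balance of Peltier cooling, Joule self-heating, and back-conduction through the module. This sketch is a generic illustration with hypothetical parameter values, not the thesis's computational model.

```python
def peltier_cooling(current, seebeck, resistance, conductance, t_cold, t_hot):
    """Textbook steady-state thermoelectric cooler balance.

    q_cold = S*T_cold*I - 0.5*I^2*R - K*(T_hot - T_cold)  # net cooling power
    p_el   = S*(T_hot - T_cold)*I + I^2*R                 # electrical power drawn
    COP    = q_cold / p_el                                # coefficient of performance
    Temperatures in kelvin; S, R, K are module-level parameters.
    """
    dt = t_hot - t_cold
    q_cold = seebeck * t_cold * current - 0.5 * current**2 * resistance - conductance * dt
    p_el = seebeck * dt * current + current**2 * resistance
    cop = q_cold / p_el if p_el > 0 else float("inf")
    return q_cold, p_el, cop
```

The balance makes the design trade-off visible: raising the current increases the Peltier term only linearly while Joule losses grow quadratically, so there is an optimal operating current for a given temperature lift.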
The analysis suggests that the system concept of the thermoelectric partition is effective for
individual user cooling. Thecla provides a perceptible and measurable cooling effect associated with
a reduction in the overall thermal sensation. The applied technologies allow cooling operation over
relevant periods of time and, through latent heat storage, a temporal shift of cooling loads in
buildings. For realistic application scenarios in buildings with central air conditioning, an
energy-saving potential of using Thecla could be demonstrated and quantified.
Toxicology, the study of the adverse effects of chemicals and physical agents on living organisms, is a critical process in chemical and drug development. The low throughput, high costs, limited predictivity, and ethical concerns related to traditional animal-based toxicity studies render them impractical for assessing the growing number and complexity of both existing and new compounds and their formulations. These factors, together with the increasing implementation of more demanding regulations, evidence the current need to develop innovative, reliable, cost-effective, and high-throughput toxicological methods.
The use of metabolomics in vitro presents the powerful combination of a human-relevant system with a multiparametric approach that allows assessing multiple endpoints in a single biological sample. Applying metabolomics in a cell-based system offers an alternative both to the ethical concerns and limited relevance of animal testing and to the restrictive nature of the single-endpoint evaluations characteristic of conventional toxicological in vitro assays. However, there are still challenges that hamper the expansion of metabolomics beyond a research tool to a feasible and implementable technology for toxicological assessment.
The aim of this dissertation is to advance the applications of in vitro metabolomics in toxicology by addressing three major challenges that have limited its widespread implementation in the field. In Chapter 2, the restrictive high cost and low throughput of in vitro metabolomics were addressed through the development, standardization, and proof of concept of a high-throughput targeted LC-MS/MS in vitro metabolomics platform for the characterization of hepatotoxicity. In Chapter 3, the use of the developed in vitro metabolomics system was expanded beyond hazard identification to deriving dose- and time-response metrics that were shown to be useful for point-of-departure (PoD) estimations in human risk assessment. Finally, in Chapter 4, in order to increase the reliance on and confidence in in vitro metabolomics data for risk assessment, we attempted to improve the human relevance of the in vitro metabolomics assays through the implementation and evaluation of in vitro metabolomics in a hiPSC-derived 3D liver organoid system.
The work developed here demonstrates the suitability of in vitro metabolomics for mechanism-based hazard identification and risk assessment. By advancing the applications of metabolomics in toxicology, this work has significantly contributed to the aim of 21st-century toxicology of human-relevant, non-animal toxicological testing, supporting the toxicological task of protecting human health and the environment.
This dissertation contributes to the emerging research field on men’s underrepresentation in communal domains such as health care, elementary education, and the domestic sphere (HEED). Since these areas are traditionally associated with women and therefore counter-stereotypic for men, various barriers can hinder men’s higher participation. We explored these relations using the example of how men’s interest in parental leave – as a form of communal engagement – is shaped across different stages of the transition to fatherhood. Specifically, we focused on how gendered beliefs regarding masculinity and fatherhood, the possible selves men can imagine for their future, and the social support men receive from their normative environment relate to their intentions to take parental leave and their engagement in care more broadly. In Chapter 2, using experimental designs, we examined how different representations of a prototypical man, varying in stereotypic agentic and counter-stereotypic communal content, affect men’s hypothetical intentions to take leave and their communal possible selves. Findings suggested that a combined description of a prototypical man as agentic and communal tended to increase men’s parental leave-taking intentions as compared to a control condition. In line with contrast effects, an exclusively agentic male prototype also tended to push men towards more communal outcomes. In Chapter 3, in a cross-sectional examination of the parental leave-taking intentions of expectant fathers, we found first evidence for a link between male prototypes and men’s behavioral preferences to take parental leave after birth. Yet, the support that expectant fathers received from their partners for taking parental leave emerged as the strongest predictor of men’s leave-taking desire, intention, and expected duration.
In Chapter 4, using longitudinal data collected during men’s transition to fatherhood, we studied discrepancies between men’s prenatal caregiver and breadwinner possible selves and their actual postnatal engagement in each domain. Results suggested that fathers, on average, expected and desired to share childcare and breadwinning rather equally with their partners but had difficulties translating their intentions into behavior. The extent to which fathers experienced discrepancies was related to their attitudes towards the father role and the social support they received for taking parental leave and engaging in childcare. Moreover, experiencing a mismatch between their expected, desired, and actual division of labor had consequences for fathers’ intentions to take parental leave in the future. Across the empirical chapters, we found that men generally had high communal intentions and did not consider care engagement as nonnormative for their gender. However, men continue to face barriers that prevent them from translating their communal intentions into behavior. We outline strengths and limitations of the present research given the emerging nature of the research field. Moreover, we discuss implications for future research on men’s orientation towards care as well as implications for how to foster the realization of communal intentions into actual behavior.
Highly Automated Driving (HAD) vehicles represent complex and safety-critical systems. They are deployed in an open context, i.e., an intricate environment which undergoes continual changes. The complexity of these systems and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behaviour. The safety-critical nature of HAD vehicles requires modelling of root causes for unsafe behaviour and their mitigation to argue sufficient reduction of residual risk.
Standardization activities such as ISO 21448 provide guidelines on the Safety Of The Intended Functionality (SOTIF) and focus on the analysis of performance limitations under the influence of triggering conditions that can lead to hazardous behaviour. SOTIF references traditional safety analysis methods, e.g., Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA), to perform safety analysis. These analysis methods are based on certain assumptions, e.g., single-point failure in FMEA and independence of basic events in FTA. Moreover, these analyses are generally based on expert knowledge, i.e., data-based models or hybrid approaches (expert and data) are seldom practised. The resulting safety model is fixed, i.e., it is generally seen as a one-time artefact. The open context may contain triggering conditions which are not evident to the expert. The open context also evolves over time, and new phenomena may emerge.
This thesis explores the applicability of traditional safety analysis techniques to provide safety models for HAD vehicles operating in the open context, in light of the modelling assumptions made by those techniques. Moreover, incorporating uncertainties into safety analysis models is also explored. An explicit distinction between the inherent uncertainty of a probabilistic event (aleatory) and uncertainty due to lack of knowledge (epistemic) is made to formalize models for SOTIF analysis. A further distinction is made for conditions of complete ignorance, termed ontological uncertainty. This distinction is important because, for HAD vehicles operating in the open context, ontological uncertainty can never be completely disregarded.
This thesis proposes a novel SOTIF framework to model, estimate, and discover triggering conditions relevant to performance limitations. The framework provides the ability to model uncertainties while also offering a hybrid approach, i.e., supporting the inclusion of expert knowledge as well as data-driven engineering processes. Two representative algorithms are provided to support the framework; Bayesian Networks (BNs) and p-value hypothesis testing are utilised in this regard. The framework is demonstrated on a real-world case study in which a LIDAR-based perception system is used as the vehicle detection system.
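The hypothesis-testing side of such a framework can be illustrated in spirit (the thesis's actual procedure and data are richer) by a one-sided two-proportion z-test asking whether a candidate triggering condition raises the rate of observed performance limitations. All counts in the example are hypothetical.

```python
import math

def two_proportion_p_value(fail_with, n_with, fail_without, n_without):
    """One-sided two-proportion z-test.

    Tests whether the limitation rate is higher when the candidate
    triggering condition is present (alternative: p_with > p_without).
    Returns the p-value; a small value suggests the condition is a
    genuine trigger rather than noise.
    """
    p1 = fail_with / n_with
    p2 = fail_without / n_without
    p_pool = (fail_with + fail_without) / (n_with + n_without)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_with + 1 / n_without))
    z = (p1 - p2) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z) for standard normal Z
```

For instance, 30 limitations in 100 scenario runs with a condition present versus 10 in 100 without yields a p-value well below 0.01, flagging the condition for expert review.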
Inland waters, such as freshwater impoundments, are significant and variable sources of the greenhouse gas methane to the atmosphere. In water bodies, methane is mainly produced in the organic-matter-rich bottom sediment, where it can accumulate, form gas voids, and be transported to the atmosphere by gas bubbles escaping the sediment. This bubble-mediated transport of methane, known as methane ebullition, is a commonly dominant pathway of methane emissions in freshwater reservoirs. Ebullition results from a complex interplay of several simultaneous physical and biogeochemical processes acting at different timescales, leading to highly variable fluxes in both space and time. Although the sediment matrix is a hot spot for gas production and accumulation, there is a lack of in-situ data on free gas storage in reservoirs and on the interaction among sediment gas storage, methane budget, and methane ebullition. Several environmental variables are known to be ebullition drivers; however, simulating the temporal dynamics of ebullition and identifying the governing factors across different systems remains challenging. Therefore, the main goal of this thesis was to investigate the effect of different drivers on the spatial variability and temporal dynamics of methane ebullition in impoundments. Two contrasting reservoirs, one subtropical and one temperate, were investigated. High-frequency measurements of ebullition fluxes and environmental variables, as well as acoustic-based mapping of gas content in the sediment, were performed in both reservoirs, constituting the dataset for this study. The main findings were presented in three main scientific manuscripts. The spatial distribution of gas content in the sediment was primarily controlled by sediment deposition and water depth, with shallow regions of high sediment deposition being hot spots of free gas accumulation in the sediment.
Temporal changes in gas content in the sediment were linked to the methane budget components in the reservoir and further influenced by the temporal dynamics of ebullition. While the sediment could store days of accumulated potential methane production, which could sustain months of mean ebullition flux, periods of intensified ebullition led to a depletion of gas stored in the sediment. Large spatial scale ebullition drivers, such as pressure changes, resulted in the synchronization of ebullition events across different monitoring sites. Nevertheless, the degree of correlation between ebullition and environmental variables varied from one system to another and over time. Thermal stratification was an important modulator in the relationship between ebullition and other environmental variables, such as bottom currents and turbulence. The temporal dynamics of ebullition could be captured and reproduced by empirical models based on known environmental variables. However, these models failed to reproduce the sub-daily variabilities of ebullition and demonstrated poor performance when transferred from one system to another. Lastly, although some questions remain unanswered, the findings from this study contribute to advancing the understanding of the complex dynamics of methane ebullition and its controls in freshwater reservoirs.
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width at most 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Agricultural intensification has increased substantially in the last century to meet the globally growing demand for food, fodder, and bioenergy; as a result, agricultural cropland became the largest terrestrial biome globally. Pesticides became a central tool of this intensification strategy, and pesticide application rose drastically over the last sixty years to secure or increase crop yields. However, pesticides are by design biologically active and known to contaminate non-target ecosystems, thereby adversely affecting their function or structure. Even though ecotoxicological knowledge about probable fate and effects has grown, little remains known about the spatiotemporal occurrence, potential effects, and risk drivers of pesticides on larger, i.e., macro, scales.
Consequently, the thesis gathered pesticide exposure data primarily via meta-analysis and from public monitoring databases to describe (i) detailed risks in aquatic ecosystems, (ii) the underlying risk drivers, (iii) associated spatiotemporal trends, (iv) the effect of land use and land protection, and (v) the protectiveness of regulatory frameworks. First, a meta-analysis of insecticides occurring in US surface waters (n = 5,817, 259 studies) revealed large-scale risks for aquatic ecosystems based on the exceedance of regulatory threshold levels (RTL) and identified high-risk substances, particularly pyrethroids, with increasing application trends (publication I). Following this, spatiotemporal factors driving insecticide risks were identified via model building, demonstrating that toxicity-weighted pesticide use was the primary risk driver in surface waters; subsequent model application generated a spatially comprehensive risk assessment for the United States (publication II). The toxicity-weighted pesticide use was subsequently expanded in an ongoing project covering additional species groups and all pesticides used in the US from 1992–2016, highlighting a drastic shift of toxic pressures from vertebrates to aquatic invertebrates. Large-scale monitoring data from European surface waters (n > 8.3 million) on 352 organic chemicals identified pesticides as the main class of organic contaminants causing risks in aquatic ecosystems. Additional analyses established links between agricultural intensity and the resulting environmental risks for aquatic invertebrates and plants on this macro scale (publication III). Finally, high-resolution monitoring data from Saxony, Germany, provided, for the first time, detailed insights into the occurrence and resulting risks of organic contaminants (primarily pesticides) in protected surface waters of nature conservation areas (publication IV).
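The RTL-based risk screening mentioned above boils down to comparing measured concentrations against regulatory threshold levels. A minimal sketch, using entirely hypothetical substance names, concentrations, and thresholds rather than the thesis's monitoring data:

```python
def toxic_units(concentration, rtl):
    """Ratio of measured concentration to the regulatory threshold level
    (RTL); a value above 1 indicates a threshold exceedance."""
    return concentration / rtl

# Hypothetical measurements and RTLs in ug/L (illustrative values only).
samples = {"pyrethroid_a": (0.08, 0.02), "herbicide_b": (0.5, 1.0)}
for name, (conc, rtl) in samples.items():
    tu = toxic_units(conc, rtl)
    print(name, round(tu, 2), "exceeds RTL" if tu > 1 else "below RTL")
```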
In summary, the thesis gathered and used large-scale datasets to analyze the impact of agricultural intensification, and of anthropogenic land use more broadly, on ecosystems in order to reduce knowledge deficits in ecotoxicology on macro scales. Insecticides were shown to be important and spatially extensive agents of impairment of surface water quality, directly linked to their use in the respective landscapes. Changes in the composition of pesticide use over time shifted environmental risks from vertebrates to other central species groups (e.g., aquatic invertebrates), highlighting a new challenge to the integrity of aquatic environments. The thesis provided novel insights into the individual risk characteristics of contaminants, their interaction with various spatiotemporal drivers, and their relevance on various macro scales. Overall, a discrepancy remains evident between the environmental impacts of pesticides estimated during regulatory approval processes and a posteriori field measurements, which detail larger than assumed adverse exposures and effects. This discrepancy makes pesticides the most impactful chemical stressor for aquatic ecosystems compared to other organic contaminants on a continental scale, a threat that has even increased for some species groups. The extensive use of pesticides has reached levels where even strictly protected surface waters in Germany are regularly subject to adverse exposure, threatening the conservation areas’ function as ecological refugia. Taken together, the thesis provides new macro-scale evidence regarding the contribution of pesticides (and associated drivers) to the large-scale changes in biological systems evidenced over the last decades, underlining their likely contribution to the ongoing global freshwater biodiversity crisis. In particular, agricultural systems will require substantial changes going forward to protect or reestablish the integrity of aquatic ecosystems and their provision of vital ecological services.
More than 2.4 % of the continental surface area is covered by shallow aquatic systems such as ponds. Despite occupying only a tiny fraction of the earth's surface area, ponds are globally significant sites of carbon cycling. They receive carbon, process it, and emit large amounts of greenhouse gases into the atmosphere, most notably carbon dioxide (CO2) and methane (CH4). Tube-dwelling macroinvertebrates, such as chironomid larvae (Diptera: Chironomidae), change biogeochemical functions, particularly in shallow aquatic systems. Through bioturbation, involving burrow ventilation and sediment particle reworking, tube-dwelling macroinvertebrates enhance solute exchange between sediment and water, stimulate the benthic microbial community, and regulate organic matter decomposition. This doctoral project integrates aquatic carbon biogeochemistry with the research field of ecology to assess biogeochemical reaction dynamics upon application of the mosquito control biocide Bacillus thuringiensis israelensis (Bti), an entomopathogen that kills mosquito larvae but also reduces the abundance of chironomids. The interdisciplinary approach combines field measurements and laboratory experiments. First, an experiment was conducted in 12 outdoor floodplain pond mesocosms (FPMs), in which the effect of Bti application on carbon transformations, carbon pools, and carbon fluxes was monitored for one year. Half of the FPMs were Bti-treated and the remaining half were controls. The study revealed that seasonal variations governed the changes in the transformations, pools, and fluxes of the carbon components. Treated FPMs, for which companion studies reported a 26 % and 41 % reduction in the abundance of emerging merolimnic insects and macroinvertebrates, respectively, were higher CH4 emitters (137 % higher than the control mesocosms).
The higher CH4 emissions occurred specifically in the shallow zone, where the macroinvertebrate reduction was also significant. In the same treated FPMs, a tendency towards less dissolved organic carbon in the porewater (33 % lower than in the control mesocosms) was observed, potentially caused by the reduction in the bioturbation activities of chironomids, whereas the remaining measured components of the carbon budget were not affected by the treatment with Bti. Second, laboratory microcosm (LM) experiments that excluded environmental constraints were set up to clarify the findings of the FPM experiment. The 15 microcosms comprised five sets of three: a standard Bti dose, five times the standard Bti dose, chironomid larvae at low areal density, chironomid larvae at high areal density, and a control. The findings demonstrated that bioturbation increased the CH4 and CO2 efflux and the sediment oxygen (O2) consumption, while it did not affect the net production of CH4 and CO2. The negligible effect on net production rates in the treatments with chironomids indicates that the increase in emission rates was predominantly caused by bioturbation, which reduced the gas accumulation in the sediment. In the absence of chironomids, the application of either dose of Bti led to a higher net production rate of CH4 and CO2 (up to 2.7 times that of the control), due to the addition of bioavailable carbon through the Bti excipients. However, the sole addition of carbon through the Bti excipients could not account for the high net production rate, suggesting that the addition of Bti triggered a more vigorous carbon metabolism. The results of both the FPMs and the LMs suggest that the application of Bti may have functional implications for carbon biogeochemistry in affected aquatic systems beyond those mediated by changes in macroinvertebrate communities.
This doctoral dissertation comprises nine published articles covering different methods for ‘Fast, Robust Rigid and Non-Rigid Registration for Globally Consistent 3D Scene and Shape Reconstruction’. Overall, the contributing articles are organized and discussed in three parts. The first part of the thesis, i.e., Chapter 2, explains three novel method classes of rigid point set registration, namely the Gravitational Approach (GA), the Fast Gravitational Approach (FGA), and RPSRNet. GA was introduced as the first physics-based rigid point set registration method. It includes an elegant modeling of rigid-body dynamics using Newtonian mechanics, and it opened many new avenues for pattern matching tasks beyond point set registration. Next, the FGA method, published four years after GA, is presented as an extension that reduces the algorithmic complexity of GA from O(MN) to O(M log N) using a Barnes-Hut tree representation of the point cloud. It also eliminates GA's requirement of heuristically set optimization parameters and achieves state-of-the-art alignment accuracy on LiDAR odometry. Finally, RPSRNet presents a deep-learning version of FGA with custom convolution layers for hierarchical point feature embedding; it is robust and the fastest among state-of-the-art methods for LiDAR data registration. The second part of the thesis, i.e., Chapter 3, introduces NRGA as the first physics-based non-rigid point set registration method, which is computationally slow but robust against noisy and partial inputs. NRGA preserves structural consistency as it coherently regularizes the motion of deformable vertices. For articulated hand shape reconstruction, a tailored version of NRGA, Articulated-NRGA, is effective in refining the final hand shape. Collision and penetration avoidance between source and target surfaces is tackled by constrained optimization in NRGA; this setting improves the reconstruction of hand-object interactions. The next contribution, the FoldMatch method, remodels the shape deformation by introducing a wrinkle vector field (WVF) for capturing complex clothing and garment details while fitting body models onto 3D scans. Quantitative evaluation of FoldMatch and NRGA shows their effectiveness in geometrically consistent surface modeling and reconstruction tasks. Finally, the third part of the thesis explains globally consistent outdoor scene reconstruction, odometry estimation, and uncertainty-guided pose-graph optimization in a novel LiDAR-based localization and map building method called Deep Evidential LiDAR Odometry (DELO). This is the first odometry method to use predictive uncertainty modeling for its sensor pose prediction network.
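The O(MN) cost that FGA reduces stems from the all-pairs force accumulation of a gravitational registration scheme: every source point feels a softened inverse-square pull from every target point. The following 2D sketch illustrates only that accumulation step; it is not the published GA/FGA implementation, and the softening constant is an assumption:

```python
def net_attraction(source, target, eps=1e-6):
    """Naive O(M*N) accumulation of softened inverse-square attraction
    that each source point feels from all target points. This per-iteration
    cost is what Barnes-Hut tree variants reduce to O(M log N)."""
    forces = []
    for sx, sy in source:
        fx = fy = 0.0
        for tx, ty in target:
            dx, dy = tx - sx, ty - sy
            d2 = dx * dx + dy * dy + eps   # softened squared distance
            inv = 1.0 / (d2 * d2 ** 0.5)   # 1/d^3: unit direction over d^2
            fx += dx * inv
            fy += dy * inv
        forces.append((fx, fy))
    return forces

source = [(0.0, 0.0), (1.0, 0.0)]
target = [(5.0, 0.0), (6.0, 0.0)]  # target cloud lies to the right
fx, fy = net_attraction(source, target)[0]
print(fx > 0)  # True: net pull points toward the target cloud
```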
Molecular simulation is an important tool for investigating the behavior of fluids and solids. Nanoscopic processes and physical properties of a material can be studied predictively based on the description of the molecular interactions by force fields. This is used in the present work to tackle engineering questions that are hard to answer with other methods. First, mass transfer at fluid interfaces was investigated on the nanoscopic level. To this end, two distinct simulation methods were developed and used to systematically investigate the mass transfer in mixtures of simple model fluids described by the ‘Lennard-Jones truncated and shifted’ (LJTS) potential. The research question was whether the adsorption of components at the interface, which is observed also in many simple fluid mixtures, has an influence on the mass transfer. Such an influence was indeed found in both scenarios. Furthermore, explosions of nanodroplets caused by a spontaneous evaporation of the liquid phase were investigated with non-equilibrium molecular dynamics (NEMD) simulations. In these simulations, the interior of an LJTS droplet was superheated by a local thermostat, so that a vapor bubble nucleated inside the droplet. Depending on the degree of superheating, different phenomena were observed, ranging from a simple evaporation of the droplet, through oscillatory behavior of the bubble, to an immediate droplet explosion. For molecular simulations of real mixtures, suitable force fields are needed. In this work, a set of molecular models for the alkali nitrates was developed and systematically compared to experimental data from the literature on thermophysical and structural properties of aqueous alkali nitrate solutions. Lastly, the structure and clustering of 1:1 electrolytes in aqueous solution was investigated for a broad concentration range, from near infinite dilution up to high supersaturation.
Based on the simulation results, an empirical rule was proposed to provide estimates of the solubility of salts with standard molecular dynamics simulations without the need of elaborate calculation schemes or significant additional computational effort.
Drought is a significant environmental factor that can impair plant growth and development, leading to reduced crop productivity or even plant death. Maintaining sugar distribution from source to sink is crucial for increasing crop production under water-limited conditions. Numerous studies have suggested that nutrient fertilization, especially with potassium (K), can enhance plant growth and yield. To investigate the role of K in long-distance sugar transport under drought stress, we established a soil-based and a hydroponic growth system with varying amounts of potassium supplementation and analyzed the biochemical and molecular responses of Arabidopsis and potato plants under drought stress conditions. Our findings showed that excess potassium fertilization limited sucrose metabolism, leading to lower drought tolerance in Arabidopsis in both growth systems. However, higher potassium supplementation altered sugar relocation and potassium movement, resulting in an increase in starch yield in both potato plants with different sink-strength capacities. We also propose that a low amount of sodium increases the drought tolerance of Arabidopsis under low-potassium conditions, since a low amount of sodium improves the control of osmotic potential, leading to more water being retained in plant cells.
Silicon (Si) has recently received considerable attention for its potential to mitigate drought stress, although the effects vary among plant species. To investigate the mechanism of Si in drought stress tolerance, we applied monosilicic acid in hydroponic media and then applied PEG8000 to simulate drought stress. Our findings revealed that Si-dependent drought mitigation occurred more in the shoot than in the root of Arabidopsis, and we observed silicon accumulation in the shoot of Arabidopsis. In Si-treated plants, more glucose accumulated in the vacuole, leading to better control of the osmotic potential under drought stress. RNA sequencing analysis showed that Si altered the activity of sugar transporters and the sugar metabolism process, and increased photosynthesis. However, the Si-dependent regulation of sugar transporters showed different responses in potato; understanding the mechanism of Si in potato requires further studies. Overall, our dissertation provides important information for clarifying the mechanism of Si in drought stress, which forms the basis for further investigation.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinction between isotropic and anisotropic structures is defined based on our method. Finally, usefulness of the method is shown on 3d CT images in three different tasks on a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometry and characterize fiber length and orientation distributions in fiber-reinforced polymers produced by either 3d printing or by injection moulding.
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To demonstrate its scale invariance, the Riesz network is trained on a single fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is utilized to construct a scale equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. The mathematical foundations behind this representation are laid out and analyzed. We show that this representation, with four times fewer features than the original scattering networks of Mallat, performs comparably well on texture classification and gives superior performance when dealing with scales outside the training set distribution.
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively.
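The two fault models named above can be sketched on a plain integer register as follows. This is an illustrative simplification for intuition only, not part of the analysis framework itself:

```python
def single_event_upset(value, bit, width=32):
    """Transient fault: flip one bit of a register value once."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def stuck_at(value, bit, level, width=32):
    """Permanent fault: force one bit to a fixed level (0 or 1)."""
    mask = 1 << bit
    v = (value | mask) if level else (value & ~mask)
    return v & ((1 << width) - 1)

reg = 0b1010
print(bin(single_event_upset(reg, 0)))  # 0b1011
print(bin(stuck_at(reg, 3, 0)))         # 0b10
```

A second upset at the same bit restores the original value, whereas a stuck-at fault persists no matter how the register is rewritten, which is exactly why the two models require different analyses.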
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as from exploiting certain properties that are characteristic of embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138 k lines of C code.
Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA participants’ perceived burden, data quantity (i.e., compliance with the AA protocol), and data quality is still surprisingly restricted. The aims of this dissertation were to experimentally manipulate aspects of an AA study’s sampling strategy - sampling frequency (Study 1) and questionnaire length (Study 2) - and to investigate their impact on perceived burden, data quantity, and aspects of data quality in three papers. In Study 1, students (n = 313) received either 3 or 9 questionnaires per day for the first 7 days of the study. In Study 2, students (n = 282) received either a 33- or 82-item questionnaire 3 times a day for 14 days.
Paper 1 described that a higher sampling frequency (Study 1) led to a higher perceived participant burden, but did not affect other aspects of data quantity and quality. Furthermore, a longer questionnaire (Study 2) did not affect perceived participant burden or data quantity, but did lead to a lower within-person variability, and a lower within-person relationship between time-varying variables. Paper 2 investigated the effects of the sampling frequency (Study 1) on careless responding by identifying careless responding indices that could be applied to AA data and by extending the multilevel latent class analysis model to a multigroup multilevel latent class analysis model. Results indicated that a higher sampling frequency did not affect careless responding. Paper 3 investigated the effects of questionnaire length (Study 2) on (the relative impact of) response styles by extending the item response tree (IRTree) modeling approach to a multilevel data structure. Results indicated that a longer questionnaire led to a greater relative impact of response styles.
Although further validation of the results is essential, I hope that future researchers will integrate the results of this dissertation when designing an AA study.
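One widely used careless responding index in the survey literature is the longstring index, the longest run of identical consecutive answers; whether it is among the indices used in Paper 2 is an assumption here. A minimal sketch:

```python
def longstring(responses):
    """Longest run of identical consecutive answers; unusually high
    values may indicate careless (straightlining) responding."""
    best = run = 1 if responses else 0
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print(longstring([3, 3, 3, 3, 2, 5, 5]))  # 4
print(longstring([1, 2, 3, 4]))           # 1
```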
Thermodynamic Modeling of Poorly Specified Mixtures using NMR Fingerprinting and Machine Learning
(2023)
Poorly specified mixtures, i.e., mixtures of unknown or incompletely known composition, are common in many fields of process engineering. Dealing with such mixtures in process design is challenging as their properties cannot be described with classical thermodynamic models, which require a full specification. As a workaround, pseudo-components can be introduced, which are generally defined using ad-hoc assumptions. In the present thesis, a new framework is developed for the thermodynamic modeling of such mixtures using nuclear magnetic resonance (NMR) experiments in combination with machine-learning (ML) methods. In the framework, a characterization of a mixture in terms of structural groups (“NMR fingerprint”) is obtained by using the ML concept of support vector classification. Based on the group-specific fingerprint, quantum-chemical descriptors of the unknown part of the mixture as well as activity coefficients can already be predicted. Furthermore, a meaningful definition of pseudo-components is achieved by clustering the structural groups into pseudo-components with the K-medians algorithm based on their self-diffusion coefficients measured by pulsed-field gradient (PFG) NMR. It is demonstrated that the characterization of poorly specified mixtures in terms of pseudo-components can be combined with several thermodynamic group-contribution methods. The resulting thermodynamic models were applied to various poorly specified mixtures and used for solving two typical tasks from conceptual fluid separation process design: the solvent screening for liquid-liquid extraction processes and the simulation of open evaporation processes. The predictions with the methods developed here show very good agreement with the results obtained for the fully specified mixtures.
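The pseudo-component clustering step can be illustrated with a minimal 1D K-medians sketch over hypothetical self-diffusion coefficients; the thesis framework clusters structural groups from PFG-NMR measurements, and the values and initialization below are assumptions for illustration:

```python
from statistics import median

def k_medians_1d(values, k, iters=50):
    """Lloyd-style K-medians in 1D: assign each value to its nearest
    center, then move each center to the median of its cluster."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]  # spread-out init
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [median(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical self-diffusion coefficients (1e-9 m^2/s) of structural groups:
# two groups of slow diffusers and fast diffusers emerge as pseudo-components.
d = [0.55, 0.60, 0.58, 2.1, 2.3, 2.2]
centers, clusters = k_medians_1d(d, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```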
Internal waves are oscillating disturbances within a stably density-stratified fluid. In stratified water basins, these waves have been detected and pointed out as one of the most important processes of water movement and vertical mixing. A fraction of the wind momentum and energy that cross the water surface is responsible for generating large standing internal waves, also called basin-scale internal seiches, in stratified basins. Despite the large number of publications describing different mechanisms that can influence the dissipation rates and accelerate the damping of internal seiches in thermally stratified lakes and reservoirs, many details of their application to field observations are site-specific, and the mechanisms' combined effects have not been evaluated. This research paid particular attention to some mechanisms that may contribute to inhibiting the generation of internal seiches, through field measurements and numerical simulations. Our results underline the importance of bathymetry for energy dissipation, indicating that a gently sloping bottom may act as a primary mechanism inhibiting the formation of internal seiches. The basin shape (reservoir bends) and self-induced mixing near the wave crest act as secondary mechanisms extracting energy from upwelling events, which are responsible for triggering internal seiches in thermally stratified lakes. Numerical simulations indicate that a higher amount of energy is transferred from the wind to the internal seiche for an increasing deviation of the stratification from a two-layer structure, suggesting that the stratification profile is not responsible for inhibiting the occurrence of basin-scale internal waves but only modifies their structure, favoring the formation of internal waves with higher vertical modes.
The outcome of this study may be of great relevance in describing the biogeochemical cycle in lakes and reservoirs, since each mechanism may have different trigger effects on the cycle of nutrients and other elements in thermally stratified lakes.
Numerical study on the kinematic response of piled foundations to a stationary or moving load
(2023)
The present numerical study focuses on the problem of dynamic interaction of piled foundations under harmonic excitation at high frequencies relevant for the vibration protection practice. The finite-element programs Plaxis (2D & 3D) and Abaqus are employed for time- and frequency-domain analyses, respectively.
As a first step, dynamic impedances of pile groups, piled rafts and embedded footings are derived for all oscillation modes in order to gain insight into the problem of inertial loading.
Emphasis is placed on the kinematic response of single piles, pile groups and piled rafts to a wave field emanating from a distant stationary or moving harmonic vertical point load acting on the surface of the soil. Transfer functions, which are ratios relating the response of the foundation to that of the free-field, quantify the kinematic interaction. Only the vertical component of the response is assessed as mostly critical in the frame of the selected excitation. It is shown that a stationary harmonic load is a good approximation for a moving harmonic load; this is true for a travelling speed of the load that is relatively low in comparison with the Rayleigh wave velocity in the soil, which is quite common in engineering practice. Analogously, a static load is a good approximation of a moving load of constant magnitude. Moreover, analytical solutions are presented for single pile and pile group response under Rayleigh wave excitation, which can be also employed in the near-field, as shown herein.
The extension of piled foundations by additional rows against the wave propagation direction is examined with a view to vibration protection. Indeed, for a considerable frequency range, adding further pile rows to a piled foundation has a favorable effect on the reduction of the vibration level calculated at the rearmost pile row or at the free-field behind the foundation. This no longer holds, however, as the excitation frequency increases further and the interplay between the piles becomes more complex. On the other hand, extending the piled foundation by additional pile columns parallel to the wave propagation direction has a positive effect at high frequencies.
The accuracy of the results is assessed by verification against rigorous solutions. The importance of key aspects in finite-element modelling is also highlighted.
Ecotoxicology is the science that researches the effects of toxicants on biological entities. Following the famous toxicological principle formulated in 1538 by von Hohenheim, known as Paracelsus, generally all chemicals are able to act as toxicants. Unlike human toxicology, which focuses on toxic effects on individuals and populations of one species, Homo sapiens, ecotoxicology is not constrained in its scope of biological entities. It is interested in toxic effects on individuals and populations of any species (excluding humans), and on communities and entire ecosystems (Walker et al., 2012; Köhler & Triebskorn, 2013; Newman, 2014). One example of where the ecological foundation of ecotoxicology manifests itself is indirect effects, which are effects on biological entities that are not directly caused by chemicals but instead are mediated by ecological interactions and environmental conditions (Walker et al., 2012). With this large scope, ecotoxicology is an inter- and multidisciplinary science that links chemical, biological and environmental knowledge.
With millions of species and at least 100,000 chemicals that potentially interact with them in the environment (Wang et al., 2021), ecotoxicology has a large ground to cover. Among these sheer numbers, some groups are of special importance regarding their potential environmental impact. Pesticides are one group of chemicals with a large, if not the largest, ecotoxicological relevance: they are toxic for biological entities, sometimes at very low concentrations, and they are used in large amounts and globally (Bernhardt et al., 2017). The high toxicity of pesticides, much higher than that of most other groups of chemicals, is a result of their intended use: they are designed to reduce detrimental effects of, e.g., insects, plants or fungi on agriculture by controlling the respective populations, often, and in the sense of their Latin name, through induced lethality (Walker et al., 2012). However, they are not specific enough to be toxic only to the intended pest species; they also show toxicity towards species living in habitats adjacent to pesticide-treated areas. The widespread agricultural use of pesticides, on the other hand, is a result of their work and cost efficiency in securing yields, but also results in exposure of ecosystems at a global scale (Sharma et al., 2019). In summary, pesticides can be abstractly seen as toxicity intentionally applied to agricultural areas, which unintentionally also exposes organisms in non-agricultural areas.
The risks of pesticide use for ecosystems have led major jurisdictions, like the United States of America (US) and the European Union (EU), to enact elaborate regulatory processes that require a registration of pesticides prior to use (EFSA, 2013; EPA, 2011; Stehle & Schulz, 2015b). A by-product of these registration processes are regulatory threshold levels (RTLs), which can be used for scientific risk analysis outside the regulatory process (Stehle & Schulz, 2015a). The RTL for an organism group is basically derived from the most sensitive effect concentration found in standardized toxicity tests for species representative of the group, multiplied by a safety factor, although the specifics differ among regulatory processes. Conceptually, RTLs mark the threshold that separates environmental concentrations associated with acceptable risk (concentrations below the RTL) from concentrations associated with unacceptable risk (concentrations above the RTL).
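The derivation described above can be sketched as a toy computation. All species labels, effect concentrations, and the safety factor below are invented for illustration; they do not reproduce any jurisdiction's actual procedure.

```python
# Toy illustration of deriving a regulatory threshold level (RTL):
# take the most sensitive (lowest) effect concentration among the
# tested representative species and apply a safety factor.
# All values here are invented for illustration.

def derive_rtl(effect_concentrations_ug_per_l, safety_factor=0.01):
    """Return the RTL: lowest effect concentration times a safety factor."""
    most_sensitive = min(effect_concentrations_ug_per_l.values())
    return most_sensitive * safety_factor

# Hypothetical acute effect concentrations (µg/L) for three test species.
tests = {"species A": 12.0, "species B": 3.5, "species C": 48.0}

rtl = derive_rtl(tests)  # most sensitive value 3.5 µg/L times 0.01
print(f"RTL = {rtl} µg/L")

# Concentrations below the RTL are treated as acceptable risk,
# concentrations above it as unacceptable.
measured = 0.5
print("unacceptable" if measured > rtl else "acceptable")
```

The comparison in the last two lines is the risk-analysis use of an RTL mentioned above: a measured environmental concentration is simply classified relative to the threshold.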
Due to the high degree of procedural standardization in the derivation of RTLs, they have been found to be a good measure for making the toxicities of different pesticides comparable, and they were employed in a series of studies to characterize environmental pesticide concentrations (e.g., Stehle & Schulz, 2015a; Stehle et al., 2018; Wolfram et al., 2018; Wolfram et al., 2021; Schulz et al., 2021, also in Appendix B; Bub et al., 2023, also in Appendix C). RTLs reflect, for instance, that insecticides show regulatory unacceptable concentrations towards fish between 3 ng/L (deltamethrin, a pyrethroid) and 110 mg/L (imidacloprid, a neonicotinoid), a range of nine orders of magnitude. At the same time, imidacloprid is very toxic to pollinators (RTL of 1.52 ng/organism), while for more than 95% of all insecticides the regulatory unacceptable concentrations range up to 1.6 mg/organism, indicating toxicities up to six orders of magnitude lower than that of imidacloprid.
At large scales, ecotoxicology deals with pesticide impacts on a national (e.g., Bub et al., 2023; Douglas & Tooker, 2015; Hallmann et al., 2014; Schulz et al., 2021; Stehle et al., 2019; Wolfram et al., 2018), continental (Wolfram et al., 2021) or global scale (Stehle & Schulz, 2015a; Stehle et al., 2018). This maximization of the considered scale is in line with the general tendency of ecotoxicology towards larger scales, but generally requires new methodological and conceptual approaches. Historically, individual chemicals and groups of chemicals have been identified that, because of their immense release into the environment, are main disruptors of processes in the Earth system: greenhouse gases for climate change, chlorofluorocarbons for the depletion of the atmosphere's ozone layer, dichlorodiphenyltrichloroethane and other organochlorides for bioaccumulation in food webs and declines in bird populations, etc. For other phenomena, however, like declines in biodiversity or in the number of insect species (Outhwaite et al., 2020; Seibold et al., 2019; Vörösmarty et al., 2010), the contribution of chemical pollution is understood to a much lesser extent. There are indications that pesticides may play a major role.
This dissertation contributes to the research of large-scale risks of pesticide use, and of large-scale ecotoxicology in general, in several ways (Figure 1). In Chapter 2, it presents a labeled property graph, the MAGIC graph (Meta-Analysis of the Global Impact of Chemicals graph), as a solution to the methodological issues that arise when increasing amounts of data from more and more sources are combined for analysis (Bub et al., 2019; also, in Appendix A). The MAGIC graph is able to link chemical information from different sources, even if these sources use different nomenclatures. This enables analyses that incorporate toxicological data, like thousands of RTLs (for different organism groups and jurisdictions) for hundreds of pesticides, and information on pesticide use and chemical classes. The MAGIC graph is implemented in a way that allows it to be organically extended by additional chemical, biological and environmental data, and eventually scaled to all chemicals of environmental interest.
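The linking role of a labeled property graph described above can be illustrated with a minimal sketch: nodes carry labels and properties, edges carry relations, and a chemical node connects records that different sources address by different nomenclatures. This is a conceptual Python illustration only; the identifiers, property names, and the RTL value are invented (the real MAGIC graph is implemented on a graph database and differs in detail).

```python
# Minimal sketch of a labeled property graph linking chemical records
# from sources that use different nomenclatures. Identifiers and the
# RTL value below are invented for illustration.

nodes = {}   # node_id -> {"label": ..., "props": {...}}
edges = []   # (source_id, relation, target_id)

def add_node(node_id, label, **props):
    nodes[node_id] = {"label": label, "props": props}

def add_edge(src, relation, dst):
    edges.append((src, relation, dst))

# One chemical, referred to differently by two hypothetical sources:
# a use inventory knows it by name, a toxicity database by CAS number.
add_node("chem:1", "Chemical")
add_node("name:atrazine", "Name", source="use_inventory")
add_node("cas:1912-24-9", "CASNumber", source="toxicity_db")
add_node("rtl:42", "RTL", organism_group="fish", value_ug_per_l=0.5)
add_edge("chem:1", "HAS_NAME", "name:atrazine")
add_edge("chem:1", "HAS_CAS", "cas:1912-24-9")
add_edge("chem:1", "HAS_RTL", "rtl:42")

def rtl_for_name(name_id):
    """Follow edges name -> chemical -> RTL, bridging nomenclatures."""
    chems = [s for s, r, d in edges if r == "HAS_NAME" and d == name_id]
    return [nodes[d]["props"] for s, r, d in edges
            if r == "HAS_RTL" and s in chems]

print(rtl_for_name("name:atrazine"))
```

Because every record is just another node, the structure can be extended with additional chemical, biological or environmental data without changing existing queries, which is the extensibility property highlighted above.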
Chapter 3 shows how the combination of the linked pesticide data with a systemic consideration of pesticide use supports the interpretation of pesticide risks in the US (Schulz et al., 2021; also in Appendix B). This systemic approach includes a new measure, the total applied toxicity (TAT), which integrates applied pesticide amounts and pesticide toxicities, and the consideration of pesticide use as a complex system whose state and evolution can be visualized in phase-space plots. The combination of the described methods and concepts led to a novel view of pesticide risks in the US and can provide a framework for future ecotoxicological research at large scales.
Chapter 4 presents the results of applying the methods and concepts of the US pesticide risk analysis to Germany (Bub et al., 2023; also in Appendix C). A pesticide risk analysis of Germany is of special importance in the context of the EU's goal to drastically reduce pesticide risks (European Commission, 2020) and of Germany being one of the important agricultural producers in the EU. A comparison of the results for Germany with those for the US also allowed an evaluation of the impact of scale and of differing RTLs, information that can help other large-scale ecotoxicological assessments. Chapter 5 adds a conclusion and an outlook.
This work aims to study textile structures in the frame of linear elasticity to understand how
the structure and material parameters influence the macroscopic homogenized model. More
precisely, we are interested in how the textile design parameters, such as the ratio between
fibers’ distance and cross-section width, the strength of the contact sliding between yarns,
and the partial clamp on the textile boundaries determine the phenomena that one can see in
shear experiments with textiles. Among these phenomena, the warp and weft yarns first change
their in-plane angles and, after reaching some critical shear angle, the textile plate comes
out of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left
and bottom boundary, made of long thin fibers that cross each other in a periodic pattern.
The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption,
together with the partial clamp, adds new levels of complexity to the problem due to
the anisotropy in the yarn’s behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with
respect to the yarn’s thickness r and the distance between them e, parameters that are asymptotically
related. The homogenization and dimension reduction are done via the unfolding
method, which separates the macroscopic scale from the periodicity cell. In addition to the
homogenization, a dimension reduction from a 3D to a 2D problem is applied. Adapting
the classical unfolding results to both the anisotropic context and to lattice grids (which are
constructed starting from the center lines of the rods crossing each other) are the main tools
we developed to tackle this type of model. They represent the first part of the thesis and are
published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
Given the parameters mentioned above, we then proceed to classify different textile problems,
incorporating the results from other works on the topic and thoroughly investigating
some others. After the study is conducted, we draw conclusions and give a mathematical
explanation concerning the expected approximation of the displacements, the expected solvability
of the limit problems, and the phenomena mentioned above. The results can be found
in “Asymptotic behavior for textiles with loose contact”, which has been recently submitted.
Symplectic linear quotient singularities belong to the class of symplectic singularities introduced by Beauville in 2000.
They are linear quotients by a group preserving a symplectic form on the vector space and are necessarily singular by a classical theorem of Chevalley-Serre-Shephard-Todd.
We study \(\mathbb Q\)-factorial terminalizations of such quotient singularities, that is, crepant partial resolutions that are allowed to have mild singularities.
By a theorem of Verbitsky, the only symplectic linear quotients that can possibly admit a smooth \(\mathbb Q\)-factorial terminalization are those by symplectic reflection groups.
In this context, a smooth \(\mathbb Q\)-factorial terminalization is referred to as a symplectic resolution, and over the past two decades there has been an ongoing effort to classify exactly which symplectic reflection groups give rise to quotients that admit symplectic resolutions.
We reduce this classification to finitely many, precisely 45, open cases by proving that for almost all quotients by symplectically primitive symplectic reflection groups no such resolution exists.
Concentrating on the groups themselves, we prove that a parabolic subgroup of a symplectic reflection group is generated by symplectic reflections as well.
This is a direct analogue of a theorem of Steinberg for complex reflection groups.
We further study divisor class groups of \(\mathbb Q\)-factorial terminalizations of linear quotients by finite subgroups \(G\) of the special linear group and prove that such a class group is completely controlled by the symplectic reflections (or, more generally, the junior elements) contained in \(G\).
We finally discuss our implementation of an algorithm by Yamagishi for the computation of the Cox ring of a \(\mathbb Q\)-factorial terminalization of a linear quotient in the computer algebra system OSCAR.
We use this algorithm to construct a generating system of the Cox ring corresponding to the quotient by a dihedral group of order \(2d\) with \(d\) odd acting by symplectic reflections.
Although our argument follows the algorithm, the proof does not logically depend on computer calculations.
We are able to derive the \(\mathbb Q\)-factorial terminalization itself from the Cox ring in this case.
Phycobilisomes (PBS) are the major light-harvesting complexes for the majority of cyanobacteria
and allow these organisms to absorb in the so-called green gap. They consist of smaller units called
phycobiliproteins (PBPs), which are composed of an α- and a β-subunit with covalently bound
linear tetrapyrroles (phycobilins). The latter are attached to the apo-PBPs by phycobiliprotein
lyases. Interestingly, cyanobacteria of the genus Prochlorococcus lack complete PBS and instead
use prochlorophyte chlorophyll-binding proteins (Pcbs), which effectively utilize the energy of the
blue light region. The low-light-adapted (LL) strain Prochlorococcus marinus SS120 has a single
PBP, phycoerythrin-III (PE-III). It has been postulated that PE-III is chromophorylated with the
phycobilins phycourobilin (PUB) and phycoerythrobilin (PEB) in a 3:1 ratio. However, the function
of PE-III has so far remained unclear; both a light-harvesting and a photoreceptor function are
discussed.
The main goal of this work was to characterize the assembly of PE-III and thus the function of the
six putative phycobiliprotein lyases of P. marinus SS120. Previous work found that the individual
lyases could not be produced in soluble form, so we switched to a dual pDuet™ plasmid system in
E. coli, which was successfully established. Investigation of the binding of PEB to Apo-PE
revealed that the CpeS lyase specifically chromophorylated Cys82 with 3Z-PEB. Unfortunately,
additional chromophorylation could not be observed using the pDuet system. Therefore, in a
second part of the work, the entire PE gene cluster from P. marinus SS120 was to be introduced
into E. coli and expressed. Although the gene cluster was successfully transcribed within E. coli,
no translation was observed, possibly due to incompatible translation initiation between
Prochlorococcus and E. coli. The introduction of a mini PE cluster (CpeAB) into the
cyanobacterium Synechococcus sp. PCC 7002 was also successfully performed, in which case
production of CpeB but not CpeA from Prochlorococcus was detected. Recombinant CpeB was
also detected together with intrinsic PBPs in Synechococcus sp. PCC 7002, indicating structural similarity
and incorporation into PBS in Synechococcus sp. PCC 7002. Overall, the obtained results suggest that a
cyanobacterial host is a good option for the studies on the assembly of PE-III from P. marinus and,
based on this, future work could aim at generating an artificial operon using synthetic biology to
achieve efficient translation of all genes.
Tropical reservoirs are recognized as globally important sources of greenhouse gases (GHG). Tropical mountainous areas with high hydroelectric development have, however, been poorly studied. The objective of this study is to understand GHG dynamics in tropical mountain reservoirs. Data on seasonal and diurnal GHG dynamics were collected during six field campaigns in the Porce III reservoir in the Colombian Andes, demonstrating the importance of oxic CH4 production for the variability of dissolved gas at the surface, as well as the variation of water levels as a factor influencing GHG fluxes on a seasonal scale. The CO2 flux at the reservoir's water-atmosphere interface was monitored with a high-resolution technique over periods of several weeks, from which the importance of primary productivity in the diurnal cycling of the CO2 flux was inferred, showing an alternation between sink and source; pulses of CO2 flux at the synoptic scale were observed as a consequence of the simultaneous occurrence of increased surface concentrations and high wind speeds. In laboratory experiments, a relationship was found between rain rate, turbulent kinetic energy dissipation rate and gas transfer rate, contributing to the modeling of this phenomenon with applicability to inland waters. Overall, the results obtained contribute to the understanding of GHG dynamics in eutrophic tropical reservoirs.
Solving probabilistic-robust optimization problems using methods from semi-infinite optimization
(2023)
Optimization under uncertainty is one field of mathematics which is strongly inspired by real world problems. To handle uncertainties several models have arisen. One of these is the probust model where a combination of probabilistic and worst-case uncertainty is considered. So far, just problem instances with a special structure can be dealt with. In this thesis, we introduce solving techniques applicable for any probust optimization problem. On the one hand, we create upper bounds for the solution value by solving a sequence of chance constrained optimization problems. These bounds are based on discretization schemes which are inspired by semi-infinite optimization. On the other hand, we create lower bounds by solving a sequence of set-approximation problems. Here, we substitute the original event set by an appropriate family of sets. We examine the performance of the corresponding algorithms on simple packing problems where we can provide the probust solution analytically. Afterwards, we solve a water reservoir and a distillation problem and compare the probust solutions with solutions arising from other uncertainty models.
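Written out, a probust constraint combines a probability over the random parameter \(\xi\) with a worst case over an uncertainty set \(Z\) (the notation below is chosen for illustration and may differ from the thesis):

```latex
\mathbb{P}\left( g(x,\xi,z) \le 0 \quad \forall\, z \in Z \right) \;\ge\; p
```

Replacing the infinite index set \(Z\) by finite subsets \(Z_1 \subset Z_2 \subset \dots \subset Z\), as in discretization schemes from semi-infinite optimization, turns each approximation into an ordinary chance-constrained problem of the kind used for the bounds described above.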
The area of the Baiturrahman Grand Mosque in Banda Aceh, Indonesia, is a trade and service area and, at the same time, a historical site with a rich heritage. Nevertheless, only a few people walk along the corridors of the town. People prefer driving to commute within the area and stopping at their destination point. Some of them walk, but only for 20 to 30 meters. At the same time, the number of motor vehicles is growing significantly. Research in 2016 shows that 77% of 3,600 respondents use a motorcycle for daily trips. Traffic jams appear during rush hours.
A walkability concept is one approach to this problem because it provides social, economic, and environmental benefits. Before analyzing the case study of Banda Aceh, the writer determined a definition of walkability for the context of the research. Journals from Indonesia, Malaysia, and Thailand were compared to learn how researchers in these three countries define walkability. They describe it in three ways: creating or adapting a definition, building variables, or starting by defining the elements that shape it. After building a definition, the writer chose the parameters most often used by the researchers. The chosen parameters became the variables used to evaluate the site conditions in the case study.
The absence of pedestrians and the trend towards motor vehicle use in the case study area is a time bomb that will endanger human survival in the future. This research investigates three aspects to answer the problem: the people, the physical environment, and policy. Structured questionnaires were distributed across the research site to learn why people do not walk. Observations on the research site provided a clear picture of the condition of the pedestrian system and its physical environment. The writer studied the official planning documents related to city spatial plans and pedestrian development to gain a deeper understanding.
Kaiserslautern in Germany serves as a comparative case study in this research because it is one of the best-practice examples of pedestrian development. The local government has been building the pedestrian zone system since the 1960s, and it was entirely successful 38 years after construction began. Moreover, the city has two planning tools for transport. Firstly, the Mobilitätsplan Klima+ 2030 provides information about people's mobility, standards, principles of transport development, and strategic guidelines for traffic development. Secondly, the Nahverkehrsplan Stadt Kaiserslautern covers service and trip performance, minimum standards, connection reliability, and the development of the local transportation network, including various investment steps.
The questionnaire results show that two-thirds of respondents who visited the old city center declined to walk for personal reasons and because of the weather; most of them own motor vehicles. Meanwhile, there are obstacles and damaged sections along the pedestrian lanes. The barriers include broken lane material, traders' products, street vendors, street cafés, and plants. Nonetheless, Banda Aceh has a plan for pedestrian system development in its city spatial plan. The document plans four segments of pedestrian lane development.
This research contributes knowledge to the field of urban pedestrian development. It can inform the research and planning of pedestrian system development in cities facing similar problems. Moreover, it helps promote a healthy, sustainable town that can protect people and the environment from pollution in the future.
The research deals with a question about architecture and its design strategies, combining historical information and digital tools. Design strategies are historically defined; they rely on geometry, context, building technologies, and other factors. The study of architecture's own history, particularly at the verge of technological advancements such as the introduction of new materials or tools, may shed some light on how to internalize digital tools like parametric design and digital fabrication.
From industrial fault detection to medical image analysis or financial fraud prevention: Anomaly detection, the task of identifying data points that show significant deviations from the majority of data, is critical in industrial and technological applications. For efficient and effective anomaly detection, a rich set of semantic features must be automatically extracted from the complex data. For example, many recent advances in image anomaly detection are based on self-supervised learning, which learns rich features from a large amount of unlabeled complex image data by exploiting data augmentations. For image data, predefined transformations such as rotations are used to generate varying views of the data. Unfortunately, for data other than images, such as time series, tabular data, graphs, or text, it is unclear which transformations are suitable. This becomes an obstacle to successful self-supervised anomaly detection on other data types.
This thesis proposes Neural Transformation Learning, a self-supervised anomaly detection method that is applicable to general data types. In contrast to previous methods relying on hand-crafted transformations, neural transformation learning learns the transformations from data and uses them for detection. The key ingredient is a novel objective that encourages learning diverse transformations while preserving the relevant semantic content of the data. We prove theoretically and empirically that it is more suited than existing objectives for transformation learning.
We also introduce the extensions of neural transformation learning for anomaly detection within time series and graph-level anomaly detection. The extensions combine transformation learning and other learning paradigms to incorporate vital prior knowledge about time series and graph data. Moreover, we propose a general training strategy for deep anomaly detection with contaminated data. The idea is to infer the unlabeled anomalies and utilize them for updating parameters alternatively. In setups where expert feedback is available, we present a diverse querying strategy based on the seeding algorithm of K-means++ for active anomaly detection.
Our extensive experiments and analysis demonstrate that neural transformation learning achieves remarkable and robust anomaly detection performance on various data types. Finally, we outline specific paths for future research.
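The objective described above, which encourages transformations that are diverse yet semantics-preserving, can be sketched in a few lines of NumPy. This is a simplified, hedged illustration: the random linear "transformations" and the identity "encoder" stand in for the neural networks the method would actually train, and the contrastive form shown is one plausible instantiation of such an objective, not necessarily the thesis's exact loss.

```python
import numpy as np

def transformation_learning_loss(x, transforms, tau=0.1):
    """Contrastive loss over K learned transformations of a sample x.

    Each transformed view x_k is pulled toward the original x
    (semantics preserved) and pushed away from the other views
    (diversity). Lower loss means the transformations are both
    faithful and mutually diverse.
    """
    views = [T @ x for T in transforms]          # K transformed views

    def sim(a, b):                               # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    loss = 0.0
    for k, xk in enumerate(views):
        pos = np.exp(sim(xk, x) / tau)           # similarity to original
        neg = sum(np.exp(sim(xk, xl) / tau)      # similarity to other views
                  for l, xl in enumerate(views) if l != k)
        loss += -np.log(pos / (pos + neg))
    return loss / len(views)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# In the actual method these K transformations would be neural networks
# trained by gradient descent on this loss; random matrices here merely
# make the sketch runnable.
transforms = [rng.normal(size=(8, 8)) for _ in range(4)]
print(transformation_learning_loss(x, transforms))
```

At detection time, such a loss evaluated on a test sample can serve directly as an anomaly score: samples whose learned transformations no longer behave consistently receive high loss.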
Ecosystems are interconnected through the exchange of resources known as subsidies. Subsidies have the potential to affect the receiving ecosystem, altering its productivity and trophic cascade. The boundary between aquatic and terrestrial ecosystems provides a clear distinction between aquatic and terrestrial organisms and is a particularly interesting location for studying resource subsidies. Process-based models can aid in predicting the effects of anthropogenic stressors on food webs and understanding the functioning of meta-ecosystems. The goal of this thesis is to contribute to the development of theories on how changes in subsidies affect recipient ecosystems using aquatic-terrestrial interface as a case study. In this thesis, a review of process-based food web models applied to the aquatic-terrestrial interface
(aquatic-terrestrial models) and to theoretical meta-ecosystems (theoretical models) was carried out (chapter 2). The results show that the models have enhanced our understanding of how terrestrial subsidies affect aquatic ecosystems. The general understanding of how subsidies affect the stability and
functions of meta-ecosystems was also enhanced. However, existing aquatic-terrestrial models focused primarily on how subsidies from terrestrial ecosystems affect aquatic ecosystems, with none considering reciprocal flows. Furthermore, the quality characteristics of subsidies were not taken into account, despite potential differences from alternative local resources. Therefore, chapters 3 and 4 developed theories using terrestrial ecosystems with aquatic subsidies as a case study. Chapter 3 focused on how changes in subsidy quality affect the recipient ecosystem and hypothesized that changes in subsidy quality have a cascading effect on the recipient ecosystem (subsidy quality hypothesis). However, the model predictions were most sensitive to the input rate of inorganic nutrients in the recipient ecosystem, indicating that ecosystems are controlled by both top-down (TD) and bottom-up (BU) processes. Chapter 4 shows that the TD and BU processes of ecosystems interact antagonistically. The generated theories can be integrated into empirical research by testing their predictions and assumptions, using the model equations, and adopting the framework. This thesis improves our understanding of the impacts of subsidies on recipient ecosystems. Future meta-ecosystem models may consider the cross-ecosystem flow of information to further enhance our understanding of
meta-ecosystems. Additionally, aquatic-terrestrial models developed to predict algae blooms may consider developing trait-based models to improve predictions.
Light is an essential aspect of daily life, exerting a profound influence on various physiological and behavioral processes, including circadian rhythms, alertness, cognition, mood, and behavior. Technological advances, particularly the widespread adoption of light-emitting diodes (LEDs), have significantly accelerated the impact of lighting on the human experience. With the increasing global accessibility to electric and modern lighting systems, there is a pressing need to scientifically investigate the human-centered effects of lighting for the billions of people worldwide who encounter natural and electric lighting in their daily lives. Extensive interdisciplinary research across fields such as physics, engineering, psychology, medicine, business administration, and architecture has explored the biological and psychological effects of lighting, underscoring the immense potential for further advancements in this domain. Notably, innovative lighting technologies and strategies hold tremendous promise in enhancing human health, performance, and overall well-being.
Beyond physical spaces, three-dimensional virtual environments, including metaverse platforms, are becoming increasingly important. Simulated lighting in virtual spaces can have visual and non-visual effects on users. As technological progress and digitalization extend globally, more individuals will be exposed to virtual lighting scenarios. Consequently, exploring the human-centered lighting effects in virtual environments offers a compelling opportunity to improve the quality of user experiences. This thesis demonstrates the adaptability of established measurement methods from physical illumination and perception research for virtual environments.
This thesis comprises three parts. The first part reviews the current state of research on lighting and its influences on humans, examines research methods in lighting research, and identifies research gaps. The second part investigates the effects of lighting on complex emotional and behavioral constructs, specifically conflict handling. Elaborate laboratory experiments explore lighting as an independent variable, including realistic correlated color temperature (CCT) levels and enhanced CCT changes. Statistical analyses provide in-depth examination and critical discussion of the effects. The third part explores lighting in virtual spaces, considering literature, methodological approaches, and challenges. Two studies investigate visual and non-visual effects, and preferences in virtual environment design. Comparative analysis of the data yields implications for research and practice, including the interdisciplinary perspective of a novel approach called human-centric virtual lighting (HCVL).
In conclusion, this thesis comprehensively explores the impact of lighting on the human experience in both physical spaces and virtual environments. By addressing research gaps and employing contemporary methodologies, the findings contribute to our understanding of the effects of lighting on humans. Furthermore, the implications for research and practice offer valuable insights for the development of innovative lighting technologies and strategies aimed at enhancing the well-being and experiences of individuals worldwide. This work highlights the relevance of interdisciplinary research involving fields such as architecture, business management, event management, computer science, design, engineering, ergonomics, lighting research, medicine, physics, and psychology in advancing our understanding of visual and non-visual lighting effects.
Interactions between flow hydrodynamics and biofilm attributes and functioning in stream ecosystems
(2023)
Biofilms constitute an integral part of freshwater ecosystems and are central to regulating essential stream biogeochemical functions, such as nutrient uptake and metabolism. Understanding the environmental factors that dictate the composition of biofilm communities and their role in whole-system nutrient cycling remains challenging, given the large spatial and temporal variability of biofilm communities. Pristine mountain streams exhibit a heterogeneous streambed ranging from boulders to sand, provoking high spatiotemporal flow variability. Our current knowledge of the interactions between flow hydrodynamics and biofilm attributes stems from mesocosm studies, which are inherently limited in environmental realism. Moreover, the mechanism linking flow hydrodynamics to microbial biodiversity and ecosystem functioning has not yet been studied. My thesis aims to link streambed heterogeneity and the associated development of the flow field to biofilm attributes and nitrogen uptake based on a multidisciplinary field approach. It integrates several spatial and temporal scales ranging from millimeter-sized spots to stream reaches and from milliseconds to minutes (i.e., the hydraulic scale of velocity fluctuations), up to days, months and years (i.e., the hydrological scale of flow fluctuations). I demonstrate that the spatial niche variability of flow hydrodynamics was an essential driver of biofilm community composition, diversity and morphology, in line with the habitat heterogeneity hypothesis initially formulated for terrestrial ecosystems. Furthermore, hydraulic mass transfer associated with flow diversity and biofilm biomass determined biofilm areal nitrogen uptake at scales ranging from spots to the stream reach. At the whole-ecosystem level, flow diversity determined the quantitative role of biofilms compared to other nitrogen uptake compartments by sorting them according to prevailing flow conditions.
The magnitude of effects depended on ambient nutrient background and season, suggesting a hierarchy of the environmental controls on biofilms. In summary, my interdisciplinary research provided a mechanistic understanding of how hydromorphological diversity determines the diversity, morphology, and functional role of biofilms in streams. By improving the understanding of these relationships, my research improves our ability to predict and scale measurements of important stream biogeochemical functions. Moreover, it helps address the challenges imposed by environmental changes and biodiversity loss.
Semi-structured data is a common data format in many domains.
It is characterized by a hierarchical structure and a schema that is not fixed.
Efficient and scalable processing of this data is therefore challenging, as many existing indexing and processing techniques are not well-suited for this data format.
This dissertation presents a novel approach to processing large JSON datasets.
We describe a new data processor, JODA, that is designed to process semi-structured data by using all available computing resources and state-of-the-art techniques.
Using a custom query language and a vertically-scaling pipeline query execution engine, JODA can process large datasets with high throughput.
We optimize JODA by using a novel optimization for iterative query workloads called delta trees, which succinctly represent the changes between two documents.
This allows us to process iterative and exploratory queries efficiently.
We improve the filtering performance of JODA by implementing a holistic adaptive indexing approach that creates and improves structural and content indices on the fly, depending on the query load.
No prior knowledge about the data is required, and the indices are automatically improved over time.
JODA is also modularized and can be extended with new user-defined predicates, functions, indices, import, and export functionalities.
These modules can be written in an external programming language and integrated into the query execution pipeline at runtime.
To evaluate this system against competitors, we introduce a benchmark generator, coined BETZE, which aims to simulate data scientists exploring unknown JSON datasets.
The generator can be tweaked to generate query workloads with different characteristics, or predefined presets can be used to quickly generate a benchmark.
We see that JODA outperforms competitors in most tasks over a wide range of datasets and use-cases.
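The delta idea behind JODA's iterative-query optimization can be sketched minimally in Python. The flat-dict diff below is an illustrative assumption; JODA's actual delta trees are hierarchical and more compact, and the function names are not JODA's API:

```python
# Illustrative sketch (not JODA's API): store only the fields that changed
# between two JSON documents, and rebuild the new document from the old one
# plus that delta, so iterative queries need not copy unchanged content.
def make_delta(old, new):
    delta = {k: v for k, v in new.items() if old.get(k, object()) != v}
    delta["__removed__"] = [k for k in old if k not in new]
    return delta

def apply_delta(old, delta):
    doc = {k: v for k, v in old.items() if k not in delta["__removed__"]}
    doc.update({k: v for k, v in delta.items() if k != "__removed__"})
    return doc

a = {"id": 1, "name": "x", "tmp": True}
b = {"id": 1, "name": "y"}
d = make_delta(a, b)          # only "name" and the removal of "tmp" are stored
assert apply_delta(a, d) == b
```

Because the delta records only what changed, a chain of exploratory queries can be represented as a chain of small deltas over one base document.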
Facing the demands of the energy transition, gas turbines require continuous development to improve thermal efficiency. Since this can be achieved by further increasing the turbine inlet temperature, advanced cooling techniques are required to protect the highly loaded turbine components. This includes the first nozzle guide vane, which is located just downstream of the combustion chamber. Film cooling, i.e., injecting coolant into the hot-gas path, has been a cornerstone of turbine cooling. While the coolant film is typically supplied through discrete cooling holes, design-related gaps, e.g., the purge slot between the transition duct and the vane platform, can be utilized for injecting coolant. Since the coolant is drawn from the compressor, potentially offsetting thermal efficiency gains from increased turbine inlet temperatures, efficient use of the coolant is critical. In this context, experimental data obtained under engine-like flow conditions, i.e., matching the Mach and Reynolds numbers that are present in the engine, are indispensable for assessing the film cooling performance. Existing research on upstream slot injection has a blind spot, as all high-speed studies were conducted in linear cascades. This approach neglects, by principle, the influence of the radial pressure gradient that naturally occurs in swirling flows and potentially affects coolant propagation. Therefore, a high-speed annular sector cascade has been developed: It allows testing the film cooling performance and aerodynamic effects of coolant flows from various upstream slot configurations, not only at engine-like Mach and Reynolds numbers but also considering the radial pressure gradient. The cascade is equipped with nozzle guide vanes with contoured endwalls representing state-of-the-art turbine design. The results to be expected from the test rig are, therefore, of great relevance.
The annular sector cascade is integrated into the existing high-speed turbine test facility at the Institute of Fluid Mechanics and Turbomachinery (University of Kaiserslautern-Landau), which was previously used for testing a linear cascade with the same nozzle guide vane design. It incorporates various measurement techniques such as five-hole probes, pressure-sensitive paint, and infrared thermography to investigate both the thermal and aerodynamic aspects of film cooling. This thesis provides a detailed description of the cascade development, starting from the aerodynamic design up to the structural implementation. It also includes the results of the previous measurements in the linear cascade, as they provided the basis for refining the measurement methods.
Research across virtually all subfields of psychology has suffered from construct proliferation, often resulting in redundant constructs that strongly overlap conceptually and/or empirically. Such cases of old wine in new bottles, i.e., established constructs with new labels, are instances of the jangle fallacy and are problematic because they lead to fragmented literatures and thereby considerably impede the accumulation of knowledge.
The present thesis aims at demonstrating how to scrutinize potential jangle fallacies in a theory-driven, deductive, and falsificationist way. Using the example of the common core of aversive traits, D, I discuss the ways one can find and test differences between more or less overlapping, competing constructs. Specifically, the first paper tests the plausibility of a potential jangle fallacy with respect to D and a Fast Life History Strategy, concluding that the latter is unlikely to represent the common core of aversive traits at all. The remaining three papers test the distinctness of D from FFM Agreeableness, HEXACO Honesty-Humility, and a blend of the two, AG+, all of which are conceptually and empirically remarkably similar to, but could nevertheless be dissociated from, D, thereby also refuting an instance of the jangle fallacy.
Although research often places emphasis on similarities, it is impossible to conclusively prove the equivalence of constructs. I therefore conclude that a falsificationist approach is more informative in that it allows one to test whether any differences identified on a conceptual level can be confirmed empirically. Stated differently, if a new construct is dissociable both theoretically and empirically, one may assume that it is functionally distinct and not an instance of the jangle fallacy.
This thesis describes the synthesis and extensive characterization of mononuclear
cis-(carboxylato)(hydroxo)iron(III) and cis-(carboxylato)(aqua)iron(II) complexes
among others and illuminates their capability to engage in hydrogen atom transfer
reactions via reactivity studies with suitable substrates. The employed carboxylates
include benzoate, p-nitrobenzoate, and p-methoxybenzoate. Additionally, the first
example for a solution-stable mononuclear cis-di(hydroxo)iron(III) complex is
presented, the extensive characterization of which aims to contribute to the
identification of spectroscopic markers and a better understanding of the role of the
carboxylate ligand in the above-mentioned complexes.
The cis-(carboxylato)(hydroxo/aqua)iron(III/II) complexes match the coordination
environment and the electronic properties of the active iron site in the resting state of
rabbit lipoxygenase as well as of the reaction intermediates postulated for the
enzymatic mechanism. In addition to being excellent structural and electronic models,
the cis-(carboxylato)(hydroxo)iron(III) complexes display reactivity in abstracting
hydrogen atoms from (weak) O–H and C–H bonds of suitable substrates, thus proving
themselves to be worthy functional model complexes for lipoxygenases. The findings
are supported with extensive structural, spectroscopic, spectrometric, magnetic, and
electrochemical investigations as well as with quantified thermodynamic and kinetic
parameters to allow for an adequate comparison between the derivatives with varying
carboxylate ligands and to other works. Moreover, the reactivity investigation for the
cis-(benzoato)(hydroxo)iron(III) complex (the first example found) was exemplarily
accompanied by a thorough theoretical study (performed by external cooperation
partners), which validates the experimental results and identifies an underlying
concerted proton-coupled electron transfer (cPCET) mechanism for the
cis-(carboxylato)(hydroxo)iron(III) complexes – analogous to the one suggested for the
enzyme.
The synthesis and study of a functional structural model complex is extremely
challenging and rarely successful. Thus, this result alone represents a significant
scientific advancement for the field, as no such model for lipoxygenases had been
reported prior to this project. The in-depth studies with derivatives of the initial
cis-(benzoato)(hydroxo/aqua)iron(III/II) complexes further contribute to this
advancement by illuminating structure-function relations.
Human pose in terms of 3D joint angles is needed for applications such as activity recognition, musculoskeletal health, sports biomechanics, and ergonomics. Microelectromechanical systems (MEMS) based magnetic-inertial measurement units (MIMUs) can estimate 3D orientation. Owing to their small size, MIMUs can be attached to the body as wearable sensors to obtain the full 3D human pose; such a system is termed inertial motion capture (i-Mocap). However, MIMUs suffer from sensor errors and disturbances, so the orientation estimated from individual MIMUs can be erroneous. Accurate sensor calibration is essential, and the alignment of these sensors to the body segments must subsequently also be precisely known, which is called sensor-to-segment calibration. Sensor fusion is employed to address the disturbances and noise in MIMUs. Many state-of-the-art inertial motion capture approaches ignore the magnetometer and use only IMUs to reduce the error arising from inhomogeneous magnetic fields. These algorithms rely on kinematic constraints and assumptions regarding joints and are based on IMUs located on adjacent body segments. Full-body coverage requires 13-17 such units, which can be quite obtrusive, and setting up and calibrating so many wearable sensors also takes time.
This thesis focuses on 3D human pose estimation from a reduced number of MIMUs and deals with this problem systematically. First, we propose an accurate simultaneous calibration of multiple MIMUs, which also learns the uncertainty of individual sensors. We then describe a novel sensor fusion algorithm for robust orientation estimation from a MIMU and for updating the sensor calibration online. Residual errors in both sensor calibration and fusion can result in drift in the joint angles. Therefore, we present an anatomical (sensor-to-segment) calibration in which an orientation offset correction term is updated and used for online correction of residual drift in individual joint angles. Subsequently, we demonstrate that 3D human joint angle constraints can be learned using a data-driven approach in a high-dimensional latent space. Owing to temporal and joint angle constraints, it is possible to use only a reduced set of sensors (as opposed to one sensor per segment) and still obtain the 3D human pose. However, learning spatial and temporal priors from data is often limited by the finite set of movement patterns in most datasets. This introduces uncertainty when estimating 3D human pose from sparse MIMU sensors. We propose a magnetometer-robust orientation parameterization and a data-driven deep learning framework to predict 3D human pose with associated uncertainty from sparse MIMUs. The model is evaluated on real MIMU data, and we show that the uncertainty predicted by the trained model is well correlated with the actual error and ambiguity.
Adult emerging aquatic insects can transfer micropollutants, accumulated during their aquatic development, from aquatic to terrestrial ecosystems. This process depends on both contaminant- and organism-specific properties and processes. The transfer of contaminants can result in the dietary exposure of terrestrial insectivores at the aquatic-terrestrial ecosystem boundary. It is, however, unknown whether this route of contaminant transfer is relevant for current-use pesticides, despite their ubiquity in freshwater ecosystems globally. Furthermore, empirical investigation of pesticides in terrestrial insectivores which consume emerging aquatic insects (e.g. riparian spiders) is lacking. In the present work, two laboratory batch-scale studies and a field study were conducted to investigate the transfer of current-use pesticides by emerging aquatic insects and the dietary exposure of riparian spiders preying on emerging insects. In the two laboratory studies, larvae of the model organism, Chironomus riparius, were exposed either chronically to seven fungicides and two herbicides, or acutely (24 hours) to three individual insecticides during their development. The pesticides were all small organic molecules, selected to cover a low to moderate lipophilicity range (logKow 1.2 – 4.7). Exposure took place at three environmentally relevant concentrations for the fungicides and herbicides (1.2 – 2.5, 17.5 – 35.0 or 50.0 – 100.0 ng/mL) and two for the insecticides (0.1 and either 4 or 16 ng/mL). Eight of the nine fungicides and herbicides, as well as one of the three insecticides, were detected in the adult insects after metamorphosis. Concentrations of the pesticides decreased over metamorphosis. However, the transfer of individual pesticides was not well predicted by published models, which are based on contaminant lipophilicity and were developed using other contaminant classes.
In the present work, pesticide-specific differences in bioaccumulation by the larvae, retention through metamorphosis, and sex-specific bioamplification and elimination over the course of the terrestrial life stage were observed. The neonicotinoid thiacloprid was the only insecticide retained by the emerging insects, due to its slow elimination by the larvae. Thiacloprid also decreased insect emergence success. An approximately 30 % higher survival to emergence at the low exposure level (0.1 ng/mL), however, resulted in a relatively higher insecticide flux from the aquatic to the terrestrial environment compared to the higher exposure (4 ng/mL). For the field study, a method for the analysis of 82 current-use pesticides by high-performance liquid chromatography coupled to triple-quadrupole tandem mass spectrometry using small amounts (30 mg) of insect material was validated and applied to samples of emerging insects and Tetragnatha spp. spiders that were collected from stream sites impacted by agricultural activities. Emerging aquatic insects from three orders (Diptera, Ephemeroptera and Trichoptera) contained 27 pesticides, whereas 49 pesticides were found in the aquatic environment (water, sediment and aquatic leaf litter). This included mixtures of up to four neonicotinoid insecticides in the insects, with concentrations up to 12300 times greater than those found in the water. Furthermore, the web-building riparian spiders contained 29 pesticides, generally at low concentrations; however, concentrations of three neonicotinoids and one herbicide were biomagnified compared to the emerging insects. The three studies included in this thesis thus reveal that the aquatic-terrestrial transfer of current-use pesticides occurs, even at very low environmentally relevant exposure concentrations.
Furthermore, new knowledge was generated on the diverse interactions between current-use pesticides and organisms over their entire lifecycles, affecting the propensities for individual pesticides to be transferred via insect emergence. A wide range of pesticides were found to be dietarily bioavailable to riparian spiders, and likely many other riparian insectivores. The neonicotinoid insecticides stood out for their potential to negatively impact adjacent terrestrial food webs through negative impacts on aquatic insect emergence (i.e. biomass flux), while still having a high propensity to be transferred by emerging insects and bioaccumulated in riparian spiders.
Efforts in decarbonization lead to electrification, not only of road vehicles but also in the sector of mobile machines. Aside from batteries, such machines are electrified by tethering systems, which nowadays feature an AC low-voltage system. These systems are applied, e.g., to underground load-haul-dump machines with short tethering lines and low machine power. To expand tethering to further markets such as agricultural machinery, this work proposes an HVDC tethering system that allows higher machine power and transmission length due to thinner, lighter tethering lines. The HVDC voltage is converted by distributing it over a number of series-connected DC/DC converters. Less blocking voltage on the semiconductors allows faster switching technology to reduce the converters' weight and volume. The concept's modularity allows for flexible adaptation to various application scenarios. Since comparable concepts exist for offshore wind farm connectivity, the applicability of the concept there is also discussed. A full-bridge inverter/rectifier LLC resonant DC/DC converter is presented for the modules. A switched LTI converter model is developed, and a Common Quadratic Lyapunov Function (CQLF) is computed to prove stability. The converter control features soft startup and voltage control over all modules. The concepts are validated by simulation and on a scaled prototype.
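The CQLF-based stability argument for a switched LTI model can be illustrated on a toy two-mode system; the matrices below are arbitrary Hurwitz-stable examples, not the converter model from the thesis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two hypothetical Hurwitz-stable modes of a switched LTI system.
A1 = np.array([[-1.0, 0.5], [-0.5, -1.0]])
A2 = np.array([[-2.0, 0.0], [0.2, -1.5]])

# Solve A1^T P + P A1 = -Q for P, then check whether the same P also
# certifies mode 2; if so, P defines a common quadratic Lyapunov
# function V(x) = x^T P x for arbitrary switching between the modes.
Q = np.eye(2)
P = solve_continuous_lyapunov(A1.T, -Q)

def is_neg_def(M):
    # negative definiteness via the eigenvalues of the symmetric part
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) < 0))

assert is_neg_def(A1.T @ P + P @ A1)  # holds by construction
assert is_neg_def(A2.T @ P + P @ A2)  # same P works for mode 2 -> CQLF
```

A single P that makes both Lyapunov inequalities negative definite guarantees stability under arbitrary switching, which is the property the thesis establishes for its converter model.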
In the context of distributed networked control systems, many issues affect the performance and functionality of the connected subsystems, mainly raised by the communication medium imposed on the system structure. The communication functionality must generally cope with the data-exchange requirements between system entities. Therefore, due to the limited communication resources, especially in wireless networks, an optimal algorithm for the assignment of the communication resources and a proper selection of the right Medium Access Control (MAC) protocol are highly needed.
In this dissertation, we studied several problems raised by communication networks in wireless networked control systems, with a particular focus on the effect of standard Medium Access Control (MAC) protocols on the overall control system performance. We examined the effect of both the Time Division Multiple Access (TDMA) and the Orthogonal Frequency Division Multiple Access (OFDMA) protocols and developed a set of distributed algorithms that suit their specification requirements.
As a benchmark, we used a vehicle dynamics optimal control problem in which the objective of the optimization is to penalize the maximal utilization of the tire's adhesion forces for a given driving maneuver. The problem was decomposed into a distributed form using primal and dual decomposition techniques, and solution algorithms were derived using both primal and dual subgradient methods. The problem solver was tested with respect to a wireless networked system structure and evaluated for different communication topologies, such as unidirectional, bidirectional, and broadcasting topologies.
Later, the setup of the solution algorithms was extended concerning the specification of the TDMA and OFDMA protocols, and we introduced an event-triggered scheme into the solver algorithm. The proposed event-triggered scheme is mainly utilized to reduce communication between concurrent computation subsystems, which is primarily intended to facilitate real-time efficiency.
Next, we investigated the effect of the data exchange between subsystems on the overall solver performance and adapted the sensitivity analysis concept within the event-based communication scheme. An adaptive sensitivity-based TDMA algorithm was developed to manage the extensive communication resource requests, and channel utilization was adapted for the optimal solution behavior.
In the last part of the thesis, we extended our research direction to the multi-vehicle concept and investigated the communication resource allocation problem in the context of the OFDMA protocol. We developed an adaptive sensitivity-based OFDMA protocol based on linking the evolution of the application layer to the communication layer and assigning the communication resources concerning the sensitivity analysis of the optimization problem at the application layer.
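The primal/dual decomposition with subgradient updates used throughout the thesis can be sketched on a toy consensus problem; the quadratic costs, step size, and iteration count below are hypothetical, not the vehicle-dynamics problem studied here:

```python
# Toy problem: minimize 0.5*(x1-4)^2 + 0.5*x2^2 subject to the coupling
# constraint x1 = x2. Each subsystem solves its local problem in closed
# form; only the dual variable lam would be exchanged over the network.
lam, alpha = 0.0, 0.25
for _ in range(60):
    x1 = 4.0 - lam            # local argmin of 0.5*(x1-4)^2 + lam*x1
    x2 = lam                  # local argmin of 0.5*x2^2      - lam*x2
    lam += alpha * (x1 - x2)  # dual subgradient step on x1 - x2 = 0
# Both local solutions converge to the consensus value 2.0.
```

The constraint violation x1 - x2 that drives the dual update is exactly the quantity an event-triggered scheme can monitor: when it is small, the subsystems can skip a communication round.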
Augmented (AR), Virtual (VR) and Mixed Reality (MR) are on their way into everyday life. The recent emergence of consumer-friendly hardware to access this technology has greatly benefited the community. Research and application examples for AR, VR and MR can be found in many fields, such as medicine, sports, cultural heritage, teleworking, entertainment and gaming. Although this technology has been around for decades, immersive applications using it are still in their infancy. As manufacturers increase accessibility to these technologies by introducing consumer-grade hardware with natural input modalities such as eye gaze or hand tracking, new opportunities but also problems and challenges arise. Researchers strive to develop and investigate new techniques for dynamic content creation or novel interaction techniques. It has yet to be determined which interactions users find intuitive. A major issue is that the possibilities for easy prototyping and rapid testing of new interaction techniques are limited and largely unexplored.
In this thesis, different solutions are proposed to improve gesture-based interaction in immersive environments by introducing gesture authoring tools and developing novel applications. Specifically, hand gestures should be made more accessible to people outside this specialised domain. First, a survey is introduced which explores one of the largest and most promising application scenarios for AR, VR and MR, namely remote collaboration. Based on the results of this survey, the thesis focuses on several important issues to consider when developing and creating applications. At its core, the thesis is about rapid prototyping based on panorama images and the use of hand gestures for interaction. Therefore, a technique to create immersive applications with panorama-based virtual environments, including hand gestures, is introduced. A framework to rapidly design, prototype, implement, and create arbitrary one-handed gestures is presented. Based on a user study, the potential of the framework as well as the efficacy and usability of hand gestures are investigated. Next, the potential of hand gestures for locomotion tasks in VR is investigated. Additionally, it is analysed how lay people adapt to the use of hand tracking technology in this context. Lastly, the use of hand gestures for grasping virtual objects is explored and compared to state-of-the-art techniques. Within this thesis, different input modalities and techniques are compared in terms of usability, effort, accuracy, task completion time, user rating, and naturalness.
Though Computer Aided Design (CAD) and Simulation software are mature, well established, and in wide professional use, modern design and prototyping pipelines are challenging the limits of these tools. Advances in 3D printing have brought manufacturing capability to the general public. Moreover, advancements in Machine Learning and sensor technology are enabling enthusiasts and small companies to develop their own autonomous vehicles and machines. This means that many more users are designing (or customizing) 3D objects in CAD, and many are testing machine autonomy in Simulation. Though Graphical User Interfaces (GUIs) are the de-facto standard for these tools, we find that these interfaces lack robustness and flexibility. For example, designs made using a GUI often break when customized, and setting up large simulations in a GUI can be quite tedious. Though programmatic interfaces do not suffer from these limitations, they are generally quite difficult to use and often do not provide appropriate abstractions and language constructs.
In this Thesis, we present our work on bridging the ease of use of GUI with the robustness and flexibility of programming. For CAD, we propose an interactive framework that automatically synthesizes robust programs from GUI-based design operations. Additionally, we apply program analysis to ensure customizations do not lead to invalid objects. Finally, for simulation, we propose a novel programmatic framework that simplifies building of complex test environments, and a test generation mechanism that guarantees good coverage over test parameters. Our contributions help bring some of the advantages of programming to traditionally GUI-dominant workflows. Through novel programmatic interfaces, and without sacrificing ease of use, we show that the design and customization of 3D objects can be made more robust, and that the creation of parameterized simulations can be simplified.
Faces deliver invaluable information about people. Machine-based perception can be of great benefit in extracting the underlying information in face images if the problem is properly modeled. Classical image processing algorithms may fail to handle the diverse data available today due to several challenges related to varying capture locations and conditions. Advanced machine learning methods and algorithms are now highly beneficial due to the rapid development of powerful hardware, enabling feasible advanced solutions based on learning from data and summarizing it into powerful models. In this thesis, novel solutions are provided to the problems of head orientation estimation and gender prediction. Initially, classical machine learning algorithms were used to address head orientation estimation but were limited by their inability to handle large datasets and by poor generalization. To overcome these challenges, a new, highly accurate head pose dataset was acquired. Novel deep neural networks with new architectures were trained on the acquired data. The information about head pose is then represented in the network weights, allowing prediction of the head orientation angles for a new, unseen face. The acquired dataset, named AutoPOSE, opens the door for further studies in the field of computer vision and especially face analysis. The problem of gender prediction has also been explored; unlike humans, who can easily identify gender from a face, computers face difficulties due to facial similarities. Hand-crafted features are therefore not effective for generalization. To address this, a new deep learning method was developed and evaluated on multiple public datasets, with identified challenges in both still images and videos addressed. Finally, the effect of facial appearance changes due to head orientation variation on gender prediction accuracy has been investigated.
A novel orientation-guided feature-map recalibration method is presented that significantly increases the accuracy of gender prediction.
In conclusion, two problems have been addressed in this thesis, both independently and jointly. Existing methods have been enhanced with intelligent pre-processing, and new approaches have been introduced to tackle existing challenges that arise from pose, illumination, and occlusion variations. The proposed methods have been extensively evaluated, showing that head orientation and gender can be estimated with high accuracy using machine learning-based methods. The evaluations also showed that the use of head orientation information consistently improved gender prediction accuracy. Scientific contributions have been presented, and the newly acquired, highly accurate dataset motivates the research community to push the state of the art forward.
Undocumented enterprise data can easily pile up in companies in the form of datasets and personal information. In the absence of a data management strategy, such data becomes rather messy and may not be fit for its intended use. Since there is often no documentation available, only a limited number of domain experts are aware of its contents. Therefore, it becomes increasingly difficult for companies to use such data to its full potential. To provide a solution, this PhD thesis investigates the construction of enterprise and personal knowledge graphs by semantically enriching messy data with meaning using semantic technologies. Since real-world entities and their interrelations are organized in a graph, knowledge graphs serve as a semantic bridge between domain conceptualization and raw data. Spreadsheets are a prominent example of such enterprise data, since they are widely used by knowledge workers in the industrial sector. Two distinct approaches are investigated to construct knowledge graphs from them: a global extraction and annotation method and a local mapping technique. The latter is further complemented with a predictor of mapping rules on messy data. Different human-in-the-loop strategies are considered to include experts depending on their user group. Since non-technical users usually lack an understanding of semantic technologies, they need appropriate tools to be able to give feedback. In the case of developers, approaches are proposed to close the technology gap between industry and Semantic Web related concepts. Semantic Web practitioners participate with ontology modeling and linked data applications. Enterprise and personal data is typically confidential, which is why it cannot be shared with a research community to discuss its challenges. However, for evaluation and reproducibility reasons, publicly available datasets are mandatory. The thesis proposes ways to generate synthetic datasets with the goal of being as authentic as possible.
Besides that, for internal evaluations a crawler of personal data on desktops is implemented. There are further contributions related to this thesis in diverse domains. One is about the motivation to support users in their daily work using personal knowledge assistants. Others are the agricultural field and the data science domain which also benefit from knowledge graph approaches. In conclusion, this PhD thesis contributes to the construction of knowledge graphs from especially messy enterprise data, while users from different groups take part in this process in various ways.
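The local mapping technique for spreadsheets can be sketched as a rule that turns one row into knowledge-graph triples. The namespace, column names, and function below are hypothetical illustrations, not the mapping language developed in the thesis:

```python
# Hypothetical illustration: map one spreadsheet row to (subject,
# predicate, object) triples using a per-column mapping rule.
EX = "http://example.org/"

def row_to_triples(row, key_col, mapping):
    # The key column identifies the entity; every mapped column
    # becomes one predicate/object pair for that entity.
    subject = EX + str(row[key_col])
    return [(subject, EX + pred, row[col]) for col, pred in mapping.items()]

row = {"EmpID": "e42", "Name": "Ada", "Dept": "R&D"}
triples = row_to_triples(row, "EmpID", {"Name": "hasName", "Dept": "inDept"})
# triples now links http://example.org/e42 to its name and department
```

Applying such a rule to every row yields a graph in which rows about the same entity merge on the shared subject IRI, which is what makes the mapping approach robust to messy, partially overlapping spreadsheets.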
The present thesis describes the experimental performance determination and numerical
modeling of an aerostatic porous bearing made of an orthotropically layered ceramic
composite material (CMC). The high temperature resistance, low thermal expansion and
high reusability of this material make it eminently suitable for use in highly stressed
fluid-film bearing applications.
The work involves the development of an aerostatic journal bearing made of porous,
orthotropically layered carbon fiber-reinforced carbon composite (C/C) and the design
of a journal bearing test rig, which contained additional aerostatic support bearings and
six optical laser triangulation sensors. The sensor system enabled the measurement of
lubricant film thickness and shaft misalignment. Owing to the small air-lubrication
clearance of 30 μm, the focus was on low concentricity and the determination of shaft
misalignments.
The preliminary tests included the determination of the permeability of the porous material
and the applicability of Darcy’s law. A scan of the inner surface of the porous bushing
revealed a characteristic grooved structure, which can be attributed to the layered structure
of the material. Bearing tests were conducted up to a rotational speed of 8000 rpm and a
pressure ratio of 5 to 7. No significant effect of rotational speed on load-carrying capacity
and gas consumption was observed in this operating range. The examined operating points
did not indicate any sign of pneumatic hammer instability. A temporary load of
below 90 N on the bearing and an eccentricity ratio below 0.8 did not cause any significant
wear on the shaft.
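The permeability determination mentioned above follows directly from Darcy's law, which relates volumetric flow rate through the porous wall to the pressure difference across it. The sketch below solves Darcy's law for the permeability; the numerical values are illustrative assumptions, not measured data from the thesis.

```python
def darcy_permeability(flow_rate, viscosity, thickness, area, delta_p):
    """Permeability k from Darcy's law, Q = k * A * dp / (mu * L),
    solved for k. All quantities in SI units."""
    return flow_rate * viscosity * thickness / (area * delta_p)

# Illustrative numbers (not measured values from the thesis):
Q = 1.0e-5    # volumetric flow rate through the wall, m^3/s
mu = 1.82e-5  # dynamic viscosity of air, Pa*s
L = 5.0e-3    # wall thickness of the porous bushing, m
A = 1.0e-2    # flow-through area, m^2
dp = 4.0e5    # pressure difference across the wall, Pa

k = darcy_permeability(Q, mu, L, A, dp)  # permeability in m^2
```

For an orthotropically layered material such as C/C, the permeability is direction-dependent, so this measurement would be repeated along the principal material directions.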
Four numerical models, based on Reynolds' lubricant film equation and Darcy's law, were
developed. The models were gradually extended with consideration of shaft misalignment,
the compressibility of the gas, the geometry of the pressure supply chamber and the
embedding of the groove structure. The models were validated with external publications
and the performed tests.
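For orientation, one common textbook form of the coupling described above (compressible, isothermal gas film fed through a porous wall) reads as follows; notation and sign conventions vary between publications, and the thesis's models extend this baseline with misalignment, supply-chamber geometry, and the groove structure.

```latex
\frac{\partial}{\partial x}\!\left(p\,h^{3}\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial z}\!\left(p\,h^{3}\frac{\partial p}{\partial z}\right)
= 6\mu U\,\frac{\partial (p h)}{\partial x}
+ 12\mu\,\frac{\partial (p h)}{\partial t}
+ 12\,k_{y}\,p\left.\frac{\partial p}{\partial y}\right|_{y=0},
```

where \(p\) is the film pressure, \(h\) the local film thickness, \(\mu\) the gas viscosity, \(U\) the shaft surface speed, and \(k_{y}\) the wall-normal permeability; the last term is the Darcy feed flow from the porous bushing, whose internal pressure field satisfies Darcy's law.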
Numerous studies have investigated aerostatic porous bearings made of sintered metal
and graphite. Current computational approaches for fast preliminary design
reach maximum deviations of approximately 20-24% compared to experimental tests. One
of the central aims of this research was to extend this area of investigation to porous,
orthotropically layered bearings made of C/C. The developed extended Full-Darcy model
achieved a maximum deviation in the load-carrying capacity of 21.6% and in the gas
consumption of 23.5%.
This study demonstrates the applicability of a resistant material from the aerospace field
(reusable thrust chambers made of CMC) for highly stressed and durable fluid-film bearings.
Furthermore, a numerical model for the computation and design of these bearings was
developed and validated.
This thesis focuses on novel methods that establish the utility of wearable devices, together with machine learning and pattern recognition, for formal education, and it addresses open research questions posed by existing methods. First, state-of-the-art methods are proposed to analyse the cognitive activities in the learning process, i.e., reading, writing, and their correlation. Furthermore, this thesis presents real-time applications in the wearable space: an experimental tool for Physics education and an air-writing system.
There are two critical components in analysing reading behaviour: WHERE a person looks (gaze analysis) and WHAT a person looks at (content analysis). This thesis proposes novel methods to classify the reading content, addressing the WHAT component. The proposed methods are based on a hybrid approach that fuses traditional computer vision methods with deep neural networks. When evaluated on publicly available datasets, these methods yield state-of-the-art results in defining the structure of document images. Moreover, extensive efforts were made to refine and correct the ICDAR2017-POD dataset, and a completely new FFD dataset was created.
Traditionally, handwriting research has focused on character and number recognition without considering the type of writing, i.e., text, math, and drawing. This thesis reports multiple contributions to on-line handwriting classification. First, it presents OnTabWriter, a public dataset for on-line handwriting classification collected using an iPen and an iPad. In addition, a new feature set is introduced for on-line handwriting classification, establishing a benchmark on the proposed dataset for classifying handwriting as plain text, mathematical expression, or plot/graph. An ablation study evaluates the performance of the proposed feature set in comparison to existing feature sets. Lastly, this thesis evaluates the importance of context for on-line handwriting classification.
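To illustrate what stroke-level features for on-line handwriting classification can look like, the sketch below computes a few simple ones (path length, average speed, mean turning angle) from a stroke given as (x, y, t) samples. These are generic examples of such features, not the specific feature set proposed in the thesis.

```python
import math

def stroke_features(points):
    """Return path length, average speed, and mean absolute turning angle
    for a stroke given as a list of (x, y, t) samples."""
    length = 0.0
    turning = []
    for (x0, y0, _), (x1, y1, _) in zip(points, points[1:]):
        length += math.hypot(x1 - x0, y1 - y0)
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        d = abs(a2 - a1)
        turning.append(min(d, 2 * math.pi - d))  # wrap to [0, pi]
    duration = points[-1][2] - points[0][2]
    speed = length / duration if duration > 0 else 0.0
    curvature = sum(turning) / len(turning) if turning else 0.0
    return {"length": length, "speed": speed, "curvature": curvature}

# A straight horizontal stroke: zero turning angle, speed = length / duration.
stroke = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (2.0, 0.0, 0.2)]
feats = stroke_features(stroke)
```

Intuitively, mathematical expressions and plots tend to differ from plain text in such statistics (e.g. more direction changes and pen lifts), which is what makes features of this kind discriminative for the three-class task.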
Analysing reading and writing activities individually is not enough to identify a student's expertise unless their correlations are analysed. This thesis presents a study in which reading data from wearable eye-trackers and writing data from a sensor pen are analysed together to correlate users' expertise in Physics education with their actual knowledge. Initial results show a strong correlation between an individual's expertise and understanding of the subject.
Augmented and virtual reality applications can play a vital role in making classroom environments more interactive and engaging for both teachers and learners. To validate this hypothesis, different applications were developed and evaluated. First, smart glasses are used as an experimental tool in Physics education, helping learners perform experiments by providing assistance and feedback on a head-mounted display to support their understanding of acoustics concepts. Second, a real-time air-writing application, the FAirWrite system, is presented, in which the user writes with a finger on an imaginary canvas using a single IMU. The FAirWrite system is further equipped with deep learning methods to classify the air-written characters.