Weak memory consistency models capture the outcomes of concurrent programs that appear in practice and yet cannot be explained by thread interleavings. Such outcomes pose two major challenges to formal methods. First, establishing that a memory model satisfies its intended properties (e.g., supports a certain compilation scheme) is extremely error-prone: most proposed language models were initially broken and required multiple iterations to achieve soundness. Second, weak memory models make verification of concurrent programs much harder, as a result of which there are no scalable verification techniques beyond a few that target very simple models.
This thesis presents solutions to both of these problems. First, it shows that the relevant metatheory of weak memory models can be effectively decided (sparing years of manual proof efforts), and presents Kater, a tool that can answer metatheoretic queries in a matter of seconds. Second, it presents GenMC, the first (and only) scalable stateless model checker that is parametric in the choice of the memory model, often improving the prior state of the art by orders of magnitude.
This thesis outlines the development of thermoplastic-graphite based plate heat exchangers from material screening to operation, including performance evaluation and fouling investigations. Polypropylene and polyphenylene sulfide as matrix and graphite as filler were chosen as feedstock materials, as they possess a low density and excellent corrosion resistance at a comparatively low price.
For the purpose of material screening, custom-made polymer composite plates with a plate thickness of 1-2 mm and a filler content of up to 80 wt.% were investigated for their thermal and mechanical suitability with regard to their use in plate heat exchangers. Three-point flexural tests show that the loading of polypropylene with graphite leads to mechanical properties that allow the composites to be applied as corrugated heat exchanger plates. The simulated maximum overpressure is greater than 7 bar, depending on the wall thickness. The thermal conductivity of the composites was increased by a factor of 12.5 compared to pure polypropylene, resulting in thermal conductivities of up to 2.74 W/mK.
The fabrication of the developed corrugated heat exchanger plates, with a thickness between 0.85 mm and 2.5 mm and a heat transfer surface area of 11.13·10⁻³ m², was carried out via processes that can be automated, namely extrusion and embossing. With the manufactured plate heat exchanger, overall heat transfer coefficients are determined over a wide range of operating conditions (Re = 200 - 1600), which are used to validate a plate heat exchanger model and consequently to compare the composites with conventional materials. The embossing, which seems to result in a shift of the internal graphite structure, leads to a further improvement of the thermal conductivity by 7-20 %, in addition to the impact of the filler. With low plate thicknesses, overall heat transfer coefficients of up to 1850 W/m²K could be obtained. Considering the low density of the manufactured thermal plates, this ensures comparable performance to metallic materials over a wide range of process conditions (Re = 200 - 4000).
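For context, the overall heat transfer coefficient of a plate can be viewed as a series of thermal resistances; the generic textbook relation below (not taken from the thesis) illustrates why a thin wall combined with a higher composite conductivity raises U:

\[
\frac{1}{U} \;=\; \frac{1}{h_\mathrm{hot}} \;+\; \frac{s}{\lambda_\mathrm{wall}} \;+\; \frac{1}{h_\mathrm{cold}},
\]

where s is the plate thickness, \lambda_\mathrm{wall} its thermal conductivity, and h_\mathrm{hot}, h_\mathrm{cold} the convective film coefficients; for example, reducing s from 2.5 mm to 0.85 mm cuts the wall resistance s/\lambda by roughly a factor of three.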
The fouling kinetics and amounts of calcium sulfate and calcium carbonate, respectively, on different polypropylene/graphite composites in a flat plate heat exchanger and the developed chevron-type plate heat exchanger are determined and compared to the reference material stainless steel. For an unbiased evaluation of the fouling susceptibility of the materials, the formation of bubbles on the materials is either monitored by optical imaging or excluded by a degasser. The results are interpreted using the surface free energy and roughness of the surfaces. The results show that if bubble formation is avoided, the polymer composites have a very low fouling tendency compared to stainless steel, which is attributed to the low surface free energies of approximately 25 mN/m. This is particularly the case when turbulent flows are present, as in plate heat exchangers, or when sandblasted specimens are used. Sandblasting furthermore increases heat transfer compared to untreated samples by increasing thermal conductivity and creating local turbulence. Depending on the test conditions, the fouling resistance formed on the stainless steel surface is an order of magnitude greater than on the flat plate polymer composites. In addition, the fouling layers adhere only weakly to the composites, which indicates easy cleaning in place after the formation of deposits. The fouling investigations in the plate heat exchanger reveal a sensitivity to calcium sulfate fouling; however, CFD simulations indicate that this is due to flow maldistribution and not to the polymer composite materials themselves.
Plant-specific factors affecting short-range attraction and oviposition of European grapevine moths
(2024)
The spread of pests and pathogens is increasingly intensified by climate change and globalization. Two of the most serious insect pests threatening European viticulture are the European grape berry moth, Eupoecilia ambiguella (Hübner), and the European grapevine moth, Lobesia botrana (Denis & Schiffermüller). Larvae feed on the fructiferous organs of the grapevine Vitis vinifera, resulting in high yield and quality losses. Under the aspects of integrated pest management, insecticide measures are only reasonable when other control strategies become ineffective. In order to support the development of a novel decision support system for the application of insecticides, the aim of this thesis was to decipher the plant-specific factors which affect the short-range attraction and oviposition of L. botrana and E. ambiguella.
The focus was set on the visual, volatile, tactile and gustatory stimuli provided by the host plant after settlement. The use of artificial surfaces as model plants showed that oviposition of both species is affected by the color, shape and texture of the oviposition site. To explain the susceptibility of certain grapevine cultivars and phenological stages of the berries to egg infestation, we analysed and compared the chemical composition of the epicuticular waxes of the berry surface as well as the volatile organic compounds emitted by the berries. It turned out that the attractiveness of the wax extracts decreased during ripening of the berries, highlighting a preference for earlier phenological stages of the berries for oviposition. In addition, grapevine cultivars exhibited variations in their volatile composition. The principal components perceived by the females' antennae could not explain the differentiation between cultivars, suggesting that volatiles do not trigger orientation to certain cultivars. Furthermore, a method was developed to measure the real-time behavioural response of female moths to volatiles. The setup allowed quantifying the orientation to a volatile source as well as movements of the antennae and ovipositor, which could be linked to the olfactory and gustatory perception of volatiles during the evaluation of suitable host plants for oviposition. In addition, the risk posed by potential alternative host plants in the vicinity of the vineyard was investigated. This confirmed that L. botrana in particular prefers the stimuli provided by some plants to those of grapevine. Overall, the results suggest that during oviposition, volatiles emitted by the plants and the composition of the plant surface are the most important factors for host plant differentiation.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source of bioactive compounds, several of which also show antibiotic activity. In view of the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance and unexploited sources should be explored. This thesis therefore initially dealt with the identification of productive strains, especially within the group of terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Among these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant, unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the identification of a productive cyanobacterium alone is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of the prerequisites for compound synthesis. As a result, nitrogen limitation was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. In this way, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a) cells and a high temperature tolerance of up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted to the work with cyanobacteria, were established: first, a method for a quick and easy in vivo vitality estimation of phototrophic cells, and second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when cultivated mixotrophically with fructose or glucose. Interestingly, this effect was even further enhanced when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed a clearly decreased cell growth. This effect might be extremely useful for a quick and economical preparation of precultures.
The ability to sense and respond to different environmental conditions allows living organisms to adapt quickly to their surroundings. In order to use light as a source of information, plants, fungi, and bacteria employ phytochromes. With their ability to detect far-red and red light, phytochromes constitute a major photoreceptor family. Bacterial phytochromes (BphPs) are composed of an apo-phytochrome and an open-chain tetrapyrrole, the chromophore biliverdin IXα, which mediates the photosensory properties. Depending on the photoexcitation and the quality of the incident light, phytochromes interconvert between two photoconvertible parental states: the red light-absorbing Pr-form and the far-red light-absorbing Pfr-form. In contrast to prototypical phytochromes, with a thermally stable Pr ground state, there is a group of bacterial phytochromes that exhibit dark reversion from the Pr- to the Pfr-form. These special proteins are classified as bathy phytochromes and are found across different classes of bacteria. Moreover, the majority of BphPs act as sensor histidine kinases in two-component regulatory systems. The light-triggered conformational change results in the autophosphorylation of the histidine kinase domain and the transphosphorylation of an associated response regulator, inducing a cellular response. Spectroscopic analysis utilizing homologously produced protein identified PaBphP, the histidine kinase of the human opportunistic pathogen Pseudomonas aeruginosa, as a bathy phytochrome. Intensive research on PaBphP revealed evidence that the interconversion between its physiologically active and inactive states is influenced by light and darkness rather than far-red and red light. In order to conduct a comprehensive systematic analysis, further bacterial phytochromes were investigated regarding their biochemical and spectroscopic behavior, as well as their autokinase activity. In addition to PaBphP, this work employs the bathy phytochromes AtBphP2, AvBphP2, and XccBphP from the non-photosynthetic plant pathogens Agrobacterium tumefaciens, Allorhizobium vitis, and Xanthomonas campestris, as well as RtBphP2 from the soil bacterium Ramlibacter tataouinensis. All investigated BphPs displayed a bathy-typical behavior by developing a distinct Pr-form under far-red light conditions and undergoing dark reversion to their Pfr-form. Different Pr/Pfr-fractions can be identified among the BphP populations under varying natural light conditions, including red or blue light. The Pr-form is considered the active form due to autophosphorylation activity in the heterologously produced phytochromes when exposed to light. In the absence of light, associated with the development of the Pfr-form, the phytochromes exhibited abolished or strongly reduced autokinase activity. Additionally, light-triggered phosphorylation was observed for the response regulator PaAlgB, which is linked to the phytochrome of P. aeruginosa. This study presents the first comparative investigation of numerous bathy phytochromes under identical conditions. The work addressed a gap in the literature by providing a quantitative correlation between kinase activity and calculated Pr/Pfr-fractions obtained from spectroscopic measurements. The biological role of PaBphP was partially elucidated through phenotypic characterization employing P. aeruginosa mutant and overexpression strains. A functional model could be generated by considering the postulated functions of the other phytochromes found in the literature.
In summary, bathy BphPs are hypothesized to modulate bacterial virulence according to the circadian day/night rhythm of their hosts. The pathogens are believed to reduce their virulence during daylight hours to evade immune and defense reactions, while increasing their virulence during the evening and night, enabling more effective infections.
Functional structures as well as materials provided by nature have always been a great source of inspiration for new technologies. Adapting and improving the discovered concepts, however, demands a detailed understanding of their working principles, while employing natural materials for fabrication tasks requires suitable functionalization and modification.
In this thesis, the white scales of the beetle Cyphochilus are examined in order to reveal unknown aspects of their light transport properties. In addition, the monomer of the material they are made of is utilized for 3D microfabrication.
White beetle scales have been fascinating scientists for more than a decade because they display brilliant whiteness despite their small thickness and the low refractive index contrast. Their optical properties arise from highly efficient light scattering within the disordered intra-scale network structure.
To gain a better understanding of the scattering properties, several previous studies have investigated the light transport and its connection to the structural anisotropy with the aid of diffusion theory. While this framework allows relating the light scattering to macroscopic transport properties, an accurate determination of the effective refractive index of the structure is required. Due to its simplicity, the Maxwell-Garnett mixing rule is frequently used for this task, although its restriction to particle and feature sizes much smaller than the wavelength is clearly violated for the scales.
To provide a correct calculation of the effective refractive index, here, finite-difference time-domain simulations are used to systematically examine the impact of size effects on the effective refractive index. Deploying this simulation approach, the Maxwell-Garnett mixing rule is shown to break down for large particles. In contrast, it is found that a quadratic polynomial function describes the effective refractive index in close approximation, while its coefficients can be obtained from an empirical linear function. As a result, a simple mixing rule is reported that unambiguously surpasses classical mixing rules when composite media containing large feature sizes are considered. This is important not only for the accurate description of white beetle scales, but also for other turbid media, such as biological tissues in opto-biomedical diagnostics.
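For reference, the classical Maxwell-Garnett mixing rule discussed above is usually stated in terms of permittivities (with n² = ε); the expression below is the standard textbook form, not the corrected rule derived in the thesis:

\[
\varepsilon_\mathrm{eff} \;=\; \varepsilon_m\,
\frac{\varepsilon_p + 2\varepsilon_m + 2 f\,(\varepsilon_p - \varepsilon_m)}
     {\varepsilon_p + 2\varepsilon_m - f\,(\varepsilon_p - \varepsilon_m)},
\qquad
n_\mathrm{eff} = \sqrt{\varepsilon_\mathrm{eff}},
\]

where \varepsilon_p and \varepsilon_m are the particle and matrix permittivities and f is the particle volume fraction. Its derivation assumes feature sizes much smaller than the wavelength, which is precisely the constraint violated by the intra-scale network and probed by the simulations described above.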
Describing light transport by means of diffusion theory moreover neglects any coherent effects, such as interference. Hence, their impact on the generation of brilliant whiteness is currently unknown. To shed light on their role, spatial- and time-resolved light scattering spectromicroscopy is applied to investigate the scales and a model structure of them based on disordered Bragg stacks. For both structures the occurrence of weakly localized photonic modes, i.e., closed scattering loops, is observed, which is further verified in accompanying simulations. As shown in this thesis, leakage from these random photonic modes contributes at least 20% to the overall reflected light. This reveals the importance of coherent effects for a complete description of the underlying light transport properties, an aspect that is entirely missing in the purely diffusive transport presumed so far. Identifying the importance of weak localization for the generation of brilliant whiteness paves the way to further enhance the design of efficient optical scattering media, an issue that has recently drawn great attention.
Unlike their plant-based counterparts, rigid carbohydrates, such as chitin, are currently unavailable for 3D microfabrication via direct laser writing, despite their great significance in the animal kingdom for the construction of functional microstructures. To overcome this gap, the monomeric unit of chitin, N-acetyl-D-glucosamine, is here functionalized to serve as a photo-crosslinkable monomer in a non-hydrogel photoresist. Since all previous photoresists based on animal carbohydrates are in the form of hydrogel formulations, a new group of photoresists is established for direct laser writing.
Moreover, it is shown that the sensitization effect, previously used only in the context of UV curing, can be successfully transferred to direct laser writing to increase the maximum writing speed. This effect is based on the beneficial combination of two photoinitiators.
Here, one photoinitiator is an efficient crosslinking agent for the monomer used but a rather poor two-photon absorber. The other photoinitiator (called the sensitizer) possesses, conversely, a much higher two-photon absorption coefficient at the applied wavelength but is not well suited as a crosslinking agent. In combination, the energy absorbed by the sensitizer is passed to the photoinitiator, resulting in the formation of the radicals needed to start the polymerization. As this greatly increases the rate at which the photoinitiator is radicalized, resists containing both a photoinitiator and a sensitizer are shown to outperform resists containing only one of the components. Deploying the sensitization effect in direct laser writing therefore offers a simple way to individually tune the crosslinking ability and the two-photon absorption properties by combining existing compounds, compared to the costly chemical synthesis of novel, customized photoinitiators.
In contrast to motorbike tyres, whose friction during cornering has to be as high as possible, the desired effect in skiing is the opposite, that of low friction. The reduced friction between skis and ice or snow is made possible by a film of meltwater that forms as a function of friction power. To support this friction mechanism, skis are waxed with different waxes in both hobby and professional sports, depending on a variety of conditions. Waxes with fluorine additives show best performance in most conditions, corresponding to the lowest friction coefficients. However, for health and environmental reasons, the International Ski Federation (FIS) and the Biathlon Union (IBU) have imposed a complete ban on fluorine additives at all FIS races and IBU events with effect from the 2023/2024 season. As a result, wax manufacturers are required to develop and extensively test fluorine-free waxes in order to remain competitive.
Traditional tests take place either indoors or outdoors in the field. Athletes complete a particular distance, their time is measured, and they also note the impressions that the prepared skis provide. The time and cost involved in numerous individual tests is a drawback, and the presence of only a single type of snow in the hall or field, air resistance, changing environmental conditions and variations in the athlete's movement limit the depth of information. To reduce the time-consuming procedure of indoor and outdoor tests, a tribometer offers a solution in which friction measurements can be performed on a laboratory scale. Due to the consistently adjustable conditions such as temperature, speed and load applied to the friction partners, scientific studies can be carried out with reduced disturbance variables. At present, the tribometric results of laboratory instruments for predicting friction values do not translate into application in practice. The reasons for this are the compromises that have to be made in the design of the tribometers.
This work reviews the existing tribometers with regard to their operating conditions and confirms the need for a scientific method of characterising different waxes. In order to fill the gap between friction results obtained in laboratory tests, which cannot yet be used in the selection of waxes, and traditional field tests, this thesis is dedicated to the methodical design and manufacture of a linear tribometer capable of measuring friction between a ski base made of UHMWPE (ultra-high molecular weight polyethylene) and an ice sample. The tribometer provides, for the first time, results that allow differentiating between different modified waxes with regard to their running performance. Friction-influencing factors such as speed, temperature and the surface pressure below the ski base can be adjusted within the range relevant for ski sports. Furthermore, the laboratory-scale test stand, which is located in a cold chamber, is capable of accommodating not only typical ski jumping base lengths and widths, but also cross-country and alpine ski bases. To verify the tribometer, a ski base is treated with three waxes of different fluorine content and measured comparatively. With a minimum of 95% confidence, the friction differences between the tested waxes depending on their fluorine content are validated at the end of this work.
Pervasive human impacts are rapidly changing freshwater biodiversity. Frequently recorded exceedances of regulatory acceptable thresholds by pesticide concentrations suggest that pesticide pollution is a relevant contributor to broad-scale trends in freshwater biodiversity. A more precise pre-release Ecological Risk Assessment (ERA) might increase its protectiveness, consequently reducing the likelihood of unacceptable effects on the environment. European ERA currently neglects possible differences in sensitivity between exposed ecosystems. If the taxonomic composition of assemblages differed systematically among certain types of ecosystems, so might their sensitivity toward pesticides. In that case, a single regulatory threshold would be over- or underprotective.
In this thesis, we evaluate (1) whether the assemblage composition of macroinvertebrates, diatoms, fishes, and aquatic macrophytes differs systematically between the types of a European river typology system, and (2) whether these taxonomic differences engender differences in sensitivity toward pesticides. While a selection of ecoregions is available for Europe, only a single typology system that classifies individual river segments is available at this spatial scale - the Broad River Types (BRT).
In the first two papers of this thesis, we compiled and prepared large databases of macroinvertebrate (paper one), diatom, fish, and aquatic macrophyte (paper two) occurrences throughout Europe to evaluate whether assemblages are more similar within than among BRT types. Additionally, we compared the performance of the BRT to that of different ecoregion systems. We employed multiple tests to evaluate the performances, two of which were also designed in these studies. All typology systems failed to reach common quality thresholds for the evaluated metrics for most taxa. Nonetheless, performance differed markedly between typology systems and taxa, with the BRT often performing worst. We showed that currently available European freshwater typology systems are not well suited to capture differences in biotic communities and suggest several possible ameliorations.
In the third study, we evaluated whether ecologically meaningful differences in sensitivity exist between BRT types. To this end, we predicted the sensitivity of macroinvertebrate assemblages across Europe toward atrazine, copper, and imidacloprid using a hierarchical species sensitivity distribution model. The predicted assemblage sensitivities differed only marginally between BRT types. The largest difference between median river type sensitivities was a factor of 2.6, which is far below the assessment factor suggested for such models (6), as well as the factor of variation commonly observed between toxicity tests of the same species-compound pair (7.5 for copper). Our results do not support the notion that a type-specific ERA would improve the accuracy of thresholds. However, in addition to the taxonomic composition, the bioavailability of chemicals, the interaction with other stressors, and the sensitivity of a given species might differ between river types.
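As background on the sensitivity metric, a standard (non-hierarchical) log-normal species sensitivity distribution summarizes assemblage sensitivity through the hazard concentration protecting 95% of species,

\[
\mathrm{HC}_5 \;=\; \exp\!\big(\mu - 1.645\,\sigma\big),
\]

where \mu and \sigma are the mean and standard deviation of the log-transformed toxicity endpoints of the assemblage; comparing such percentile estimates between river types is one way a "factor of 2.6" difference in median sensitivity can be expressed. The hierarchical model used in the thesis is more involved and is not reproduced here.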
Mechanistic disease spread models for different vector-borne diseases have been studied since the 19th century. The relevance of mathematical modeling and numerical simulation of disease spread is increasing nowadays. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans. An example of an arboviral disease that falls under this category is the Zika virus disease. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed. The analytical results that are mathematically proven for this model are numerically verified. Another SIRUV model is presented by considering a different formulation of the model parameters, and the newly obtained model is shown to clearly incorporate the dependence of the disease spread on the ratio of mosquito population size to human population size. In order to incorporate the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model was developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities like administrative areas, districts, provinces, cities, states or even countries. The research focused only on the short-term movements or commuting behavior of humans across the patches. This is incorporated in the multi-patch meta-population model using a matrix of residence time fractions of humans in each patch. Simplified analytical results are deduced, by which it is shown that, for a numerically studied exemplary scenario, the multi-patch model also admits the threshold properties that the single-patch SIRUV model holds. The relevance of the commuting behavior of humans in the disease spread is demonstrated using numerical results from this model. Local and non-local commuting are incorporated into the meta-population model in a numerical example. Later, a PDE model is developed from the multi-patch model.
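As an illustration of this model class (the exact formulation and notation in the thesis may differ), a generic SIRUV system with both vector-borne and direct human-to-human transmission can be written as

\[
\begin{aligned}
\dot S &= \mu_h (N_h - S) - \beta_{hv}\,\frac{S V}{N_h} - \beta_{hh}\,\frac{S I}{N_h},\\
\dot I &= \beta_{hv}\,\frac{S V}{N_h} + \beta_{hh}\,\frac{S I}{N_h} - (\gamma + \mu_h)\, I,\\
\dot R &= \gamma I - \mu_h R,\\
\dot U &= \mu_v N_v - \beta_{vh}\,\frac{U I}{N_h} - \mu_v U,\\
\dot V &= \beta_{vh}\,\frac{U I}{N_h} - \mu_v V,
\end{aligned}
\]

where S, I, R are susceptible, infectious and recovered humans, U and V are uninfected and infectious vectors, and the ratio N_v/N_h enters the basic reproduction number through the vector-borne transmission path.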
Cancer, a complex and multifaceted disease, continues to challenge the boundaries of biomedical research. In this dissertation, we explore the complexity of cancer genesis, employing multiscale modeling, abstract mathematical concepts such as stability analysis, and numerical simulations as powerful tools to decipher its underlying mechanisms. Through a series of comprehensive studies, we mainly investigate the cell cycle dynamics, the delicate balance between quiescence and proliferation, the impact of mutations, and the co-evolution of healthy and cancer stem cell lineages. The introductory chapter provides a comprehensive overview of cancer and the critical importance of understanding its underlying mechanisms. Additionally, it establishes the foundation by elucidating key definitions and presenting various modeling perspectives on cancer genesis. Next, cell cycle dynamics are explored, revealing the temporal oscillatory dynamics that govern the progression of cells through the cell cycle.
The first half of the thesis investigates the cell cycle dynamics and evolution of cancer stem cell lineages by incorporating feedback regulation mechanisms. Thereby, the pivotal role of feedback loops in driving the expansion of cancer stem cells has been thoroughly studied, offering new perspectives on cancer progression. Furthermore, the mathematical rigor of the model has been addressed by deriving well-posedness conditions, thereby strengthening the reliability of our findings and conclusions. Then, expanding our modeling scope, we explore the interplay between quiescent and proliferating cell populations, shedding light on the importance of their equilibrium in cancer biology. The models developed in this context offer potential avenues for targeted cancer therapies, addressing the respective cell populations critical for cancer progression. The second half of the thesis focuses on multiscale modeling of proliferating and quiescent cell populations incorporating cell cycle dynamics, and the extension thereof with mutation acquisition. Following rigorous mathematical analysis, the well-posedness of the proposed modeling frameworks has been studied along with steady-state solutions and stability criteria.
In a nutshell, this thesis represents a significant stride in our understanding of cancer genesis, providing a comprehensive view of the complex interplay between cell cycle dynamics, quiescence, proliferation, mutation acquisition, and cancer stem cells. The journey towards conquering cancer is far from over. However, this research provides valuable insights and directions for future investigation, bringing us closer to the ultimate goal of mitigating the impact of this formidable disease.
In this thesis, material removal mechanisms in grinding are investigated considering both grit-workpiece interaction and grinding wheel-workpiece interaction. For grit-workpiece interaction on the micrometer scale, single-grit scratch experiments were performed to investigate the material removal mechanisms in grinding, namely rubbing, plowing, and cutting. The experiments were analyzed based on material removal, process forces and specific energy. A finite element model is developed to simulate a single-grit scratch process. As part of the development of the finite element scratch model, both a 2D and a 3D model are developed. The 2D model is used to test material parameters and various mesh discretization approaches. A 3D model adopting the material parameters tested in the 2D model is developed and validated against experimental results for various mesh discretizations. The simulation model is validated based on process forces and ground topography from experiments. The model is further scaled to simulate multiple grit-workpiece interactions and validated against experimental results. As a final step, simulation models are developed to simulate material removal due to the interaction of grinding wheel and workpiece. A virtual grinding wheel topography model is employed to demonstrate an approach for upscaling the grinding process from grit-workpiece interaction to wheel-workpiece interaction. In conclusion, practical conclusions and the scope for future studies are derived based on the developed simulation models.
Climate change will have severe consequences for Eastern Boundary Upwelling Systems (EBUS). Due to their tremendous primary production, they host the largest fisheries in the world, supporting the livelihoods of millions of people. Therefore, it is of utmost importance to better understand predicted impacts like alternating upwelling intensities and light impediment on the structure and the trophic role of protistan plankton communities, as they form the basis of the food web. Numerical models predict an intensification of the frequency of eddy formation. These ocean features are of particular importance due to their influence on the distribution and diversity of plankton communities and the access to resources, which are still not well understood even to the present day. My PhD thesis comprises two subjects conducted within the large-scale cooperation projects REEBUS (Role of Eddies in Eastern Boundary Upwelling Systems) and CUSCO (Coastal Upwelling System in a Changing Ocean).
Subject I of my study was conducted within the multidisciplinary framework REEBUS to investigate the influence of eddies on the biological carbon pump in the Canary Current System (CanCS). More specifically, the aim was to find out how mesoscale cyclonic eddies affect the regional diversity, structure, and trophic role of protistan plankton communities in a subtropical oligotrophic oceanic offshore region.
Samples were taken during the M156 and M160 cruises in the Atlantic Ocean around Cape Verde during July and December 2019, respectively. Three eddies with varying ages of emergence and three water layers (deep chlorophyll maximum DCM, right beneath the DCM and oxygen minimum zone OMZ) were sampled. Additional stations without eddy perturbation were analyzed as references. The effect of oceanic mesoscale cyclonic eddies on protistan plankton communities was analyzed by implementing three approaches. (i) V9 18S rRNA gene amplicons were examined to analyze the diversity and structure of the plankton communities and to infer their role in the biological carbon pump. (ii) By assigning functional traits to taxonomically assigned eDNA sequences, functional richness and ecological strategies (ES) were determined. (iii) Grazing experiments were conducted to assess abundance and carbon transfer from prokaryotes to phagotrophic protists.
All three eddies examined in this study differed in their ASV abundance, diversity, and taxonomic composition, with the most pronounced differences in the DCM. Dinoflagellates were the most abundant taxa in all three depth layers. Other dominating taxa were radiolarians, Discoba and haptophytes. The trait approach could only assign ~15% of all ASVs and revealed in general a relatively high functional richness, but no unique ES was determined within a specific eddy. This indicates pronounced functional redundancy, which is recognized to be correlated with ecosystem resilience and robustness by providing a degree of buffering capacity in the face of biodiversity loss. Elevated microbial abundances as well as bacterivory were clearly associated with mesoscale eddy features, albeit with remarkable seasonal fluctuations. Since eddy activity is expected to increase on a global scale in future climate change scenarios, cyclonic eddies could counteract climate change by enhancing carbon sequestration to abyssal depths. The findings demonstrate that cyclonic eddies are unique, heterogeneous, and abundant ecosystems with trapped water masses in which characteristic protistan plankton develop as the eddies age and migrate westward into subtropical oligotrophic offshore waters. Therefore, eddies influence regional protistan plankton diversity qualitatively and quantitatively.
Subject II of my PhD project contributed to the CUSCO field campaign to identify the influence of varying upwelling intensities in combination with distinct light treatments on the whole food web structure and carbon pump in the Humboldt Current System (HCS) off Peru. To accomplish such a task, eight offshore-mesocosms were deployed and two light scenarios (low light, LL; high light, HL) were created by darkening half of the mesocosms. Upwelling was simulated by injecting distinct proportions (0%, 15%, 30% and 45%) of collected deep-water (DW) into each of the moored mesocosms. My aim was to examine the changes in diversity, structure, and trophic role of protistan plankton communities for the induced manipulations by analyzing the V9 18S rRNA gene amplicons and performing short-term grazing experiments.
The upwelling simulations induced a significant increase in alpha diversity under both light conditions. In austral summer, reflected by HL conditions, a generally higher alpha diversity was recorded compared to the austral winter simulation, induced by the LL treatment. Significant alterations of the protistan plankton community structure could likewise be observed. Diatoms were associated with increased levels of DW addition in the mimicked austral winter situation. Under nutrient depletion, chlorophytes exhibited high relative abundances in the simulated austral winter scenario. Dinoflagellates dominated the austral summer condition in all upwelling simulations. Tendencies of reduced unicellular eukaryote abundances and increased prokaryotic abundances were determined under light impediment. Protistan-mediated mortality of prokaryotes also decreased by ~30% in the mimicked austral winter scenario.
The findings indicate that the microbial loop is a more relevant factor in the structure of the food web in austral summer and is more focused on the utilization of diatoms in austral winter in the HCS off Peru. It was evident that distinct light intensities coupled with multiple upwelling scenarios could lead to alterations in biochemical cycles, trophic interactions, and ecosystem services. Considering the threat of climate change, the predicted relocation of EBUS could limit primary production and lengthen the food web structure with severe socio-economic consequences.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the well-posedness of the problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific paradigm for discretization, combining geometric representations based on Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations. Secondly, we primarily concentrate on mixed formulations that exhibit a saddle-point structure and are generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application extends to linear elasticity theory, extensively discussing mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
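For orientation, the basic mixed weak formulation of the abstract Hodge–Laplace problem in the FEEC setting (stated here in its standard form with vanishing harmonic forms; the second-order complexes treated in the thesis go beyond this case) reads: find (\sigma, u) \in V^{k-1} \times V^{k} such that

\[
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle &= 0 && \forall\, \tau \in V^{k-1},\\
\langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle &= \langle f, v \rangle && \forall\, v \in V^{k},
\end{aligned}
\]

which exhibits the saddle-point structure whose discrete well-posedness and a priori error behavior are the central concern of the analysis above.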
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
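A minimal sketch of the insured's decision problem underlying such an equilibrium model (illustrative notation, not taken from the thesis): an agent with initial wealth w, random loss L, and utility function u chooses the coverage fraction \alpha at premium rate \pi,

\[
\alpha^{\ast}(\pi) \;=\; \arg\max_{\alpha \in [0,1]}\;
\mathbb{E}\!\left[\, u\big(w - \alpha\,\pi - (1-\alpha)\,L\big) \right],
\]

and the insurer's premium is then fixed either by a zero-profit condition (perfect competition) or by margin maximization (monopoly), which closes the equilibrium.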
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can be expected and when not.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study on the use of neural networks for the regression of equilibrium insurance premiums, it is shown that this regression is quite robust and the risk of overfitting can almost be excluded, as long as the regression is performed on at least a few thousand data points.
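A minimal sketch of such a premium-regression experiment (hypothetical feature names and synthetic data; it uses scikit-learn rather than any specific framework from the thesis):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one row per risk class / market configuration,
# target is the equilibrium premium computed by the market model.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 4))  # e.g. age, loss probability, loss size, wealth
y = 0.8 * X[:, 1] * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.01, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network; a few thousand samples suffice for a stable fit.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))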
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study different realistic scenarios beyond the limitations of studies via controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
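A generic form of such a coupled system (illustrative; the thesis's precise force terms and scaling may differ) for pedestrian i with position x_i and velocity v_i is

\[
\begin{aligned}
\dot x_i &= v_i, \qquad
\dot v_i \;=\; \frac{v^{\mathrm{des}}\, e(x_i) - v_i}{\tau}
\;+\; \sum_{j \neq i} F\big(x_i - x_j,\, v_i - v_j\big),\\
e(x) &= -\frac{\nabla \phi(x)}{\lVert \nabla \phi(x) \rVert},
\qquad \lVert \nabla \phi(x) \rVert = \frac{1}{f(x)},
\end{aligned}
\]

where F collects the pairwise social forces, \tau is a relaxation time, and the Eikonal equation for \phi (with local speed f and \phi = 0 at the target) yields the optimal-path direction e.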
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We appropriately model the interactions of vehicles following lane traffic based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable-speed-based rational behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios, repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least square approximations. The meshfree numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
Lubricated tribological contact processes are important in both nature and in many technical applications. Fluid lubricants play an important role in contact processes, e.g. they reduce friction and cool the contact zone. The fundamentals of lubricated contact processes on the atomistic scale are, however, today not fully understood. A lubricated contact process is defined here as a process, where two solid bodies that are in close proximity and eventually in parts in direct contact, carry out a relative motion, whereat the remaining volume is submersed by a fluid lubricant. Such lubricated contact processes are difficult to examine experimentally. Atomistic simulations are an attractive alternative for investigating the fundamentals of such processes. In this work, molecular dynamics simulations were used for studying different elementary processes of lubricated tribological contacts. A simplified, yet realistic simulation setup was developed in this work for that purpose using classical force fields. In particular, the two solid bodies were fully submersed in the fluid lubricant such that the squeeze-out was realistically modeled. The velocity of the relative motion of the two solid bodies was imposed as a boundary condition. Two types of cases were considered in this work: i) a model system based on synthetic model substances, which enables a direct, but generic, investigation of molecular interaction features on the contact process; and ii) real substance systems, where the force fields describe specific real substances. Using the model system i), also the reproducibility of the findings obtained from the computer experiments was critically assessed. In most cases, also the dry reference case was studied. Both mechanical and thermodynamic properties were studied -- focusing on the influence of lubrication. The following properties were studied: The contact forces, the coefficient of friction, the dislocation behavior in the solid, the chip formation and the formation of the groove, the squeeze-out behavior of the fluid in the contact zone, the local temperature and the energy balance of the system, the adsorption of fluid particles on the solid surfaces, as well as the formation of a tribofilm. Systematic studies were carried out for elucidating the influence of the wetting behavior, the influence of the molecular architecture of the lubricant, and the influence of the lubrication gap height on the contact process. As expected, the presence of a fluid lubricant reduces the temperature in the vicinity of the contact zone. The presence of the lubricant is, moreover, found to have a significant influence on the friction and on the energy balance of the process. The presence of a lubricant reduces the coefficient of friction compared to a dry case in the starting phase of a contact process, while lubricant molecules remain in the contact zone between the two solid bodies. This is a result of an increased normal and slightly decreased tangential force in the starting phase. When the fluid molecules are squeezed out with ongoing contact time and the contact zone is essentially dry, the coefficient of friction is increased by the presence of a fluid compared to a dry case. This is attributed to an imprinting of individual fluid particles into the solid surface, which is energetically unfavorable. By studying the contact process in a wide range of gap height, the entire range of the Stribeck curve is obtained from the molecular simulations. 
Thereby, the three main lubrication regimes of the Stribeck curve and their transition regions are covered, namely boundary lubrication (significant elastic and plastic deformation of the substrate), mixed lubrication (adsorbed fluid layers dominate the process), and hydrodynamic lubrication (shear flow is set up between the surface and the asperity). The atomistic effects in the different lubrication regimes are elucidated. Notably, the formation of a tribofilm is observed, in which lubricant molecules are immersed into the metal surface. The formation of a tribofilm is found to have important consequences for the contact process. The work done by the relative motion is found to mainly dissipate and thereby heat up the system. Only a minor part of the work causes plastic deformation. Finally, the assumptions, simplifications, and approximations applied in the simulations are critically discussed, which highlights possible future work.
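For reference, the coefficient of friction discussed above is the ratio of tangential to normal force on the moving asperity,

\[
\mu \;=\; \frac{F_\mathrm{t}}{F_\mathrm{n}},
\]

so the observation that the lubricant initially increases the normal force while slightly decreasing the tangential force directly explains the lower coefficient of friction in the starting phase of the lubricated contact.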
Reactive absorption with amines is the most important technique for the removal of CO2 from gas streams, e.g. from flue gas, natural gas or off-gas from the cement industry. In this work a rigorous simulation model for the absorption and desorption of CO2 with an amine-containing solvent is validated using data from pilot plants of various sizes. This model was then coupled with a detailed simulation of a coal-fired power plant. The power generation efficiency drop with CO2 capture was determined and process parameters in the power plant and separation process were optimized. It was shown that the high energy demand of CO2 separation significantly reduces power generation efficiencies, which underlines the need for improvements. This can be achieved by better solvents or by advanced process designs. In this work such improved CO2 separation processes are described and evaluated by detailed simulation studies.
In order to develop detailed rigorous simulation models for reactive absorption with novel solvent systems, a precise knowledge of the liquid phase reaction kinetics is necessary. There are well-established techniques for measuring species distributions in equilibrated aqueous amine solutions by NMR spectroscopy. However, the existing NMR techniques cannot be used for monitoring fast reactions in these solutions. Therefore, in this work a novel temperature-controlled micro-reactor NMR probe head was developed which enables studying reaction kinetics with time constants in the range of seconds.
On this basis, modern solvent systems for CO2 absorption can be characterized, and the scale-up of separation processes for future plants can be supported by rigorous process simulation.
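As background on the species that such NMR measurements resolve, the dominant reaction of CO2 with a primary amine is carbamate formation; the overall stoichiometry below is the standard textbook reaction (the specific solvent and kinetic model of this work are not reproduced here):

\[
\mathrm{CO_2} + 2\,\mathrm{RNH_2} \;\rightleftharpoons\; \mathrm{RNHCOO^-} + \mathrm{RNH_3^+},
\]

alongside bicarbonate formation in aqueous solution, so the measured species distribution over time gives direct access to the liquid-phase reaction kinetics.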
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems, which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints, the system-wide problem can be decomposed into individual subproblems. These resulting subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate function of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of different benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
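To make the setting concrete, a constraint-coupled problem and its dual function can be sketched as follows (generic notation; QADA and QNDA differ in how they construct the quadratic surrogate of q, which is not reproduced here):

\[
\min_{x_1,\dots,x_N}\; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} g_i(x_i) \le c,
\qquad
q(\lambda) \;=\; -\lambda^{\top} c + \sum_{i=1}^{N}\,
\min_{x_i}\;\Big( f_i(x_i) + \lambda^{\top} g_i(x_i) \Big),
\]

so each subproblem in the sum can be solved locally for a given \lambda \ge 0, while the coordinator updates \lambda by maximizing a quadratic model of the concave, generally nonsmooth dual function q.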
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increased degree of digitalization and the personalization of goods and products require an enhanced and flexible robot deployment by operating several multi-robot systems along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced consisting of two decoupled optimization-based control policies, a binary problem for task planning, and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg-1. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil with half-lives of 20 - 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
Modeling and Simulation of Internet of Things Infrastructures for Cyber-Physical Energy Systems
(2024)
This dissertation presents a novel approach to the model-based development and simulation-based validation of Internet of Things (IoT) infrastructures within the context of Cyber-Physical Energy Systems (CPES). CPES represents an evolution in energy management, seamlessly blending physical and cyber components for efficient, secure, and dependable energy distribution. However, the intricate interplay of these components demands innovative modeling and simulation strategies.
The work begins by establishing a robust foundation, exploring essential background elements such as requirements engineering, model-based systems engineering, digitalization approaches, and the intricacies of IoT platforms. It introduces homomorphic encryption as a critical enabler for securing IoT data within CPES.
In the exploration of the state of the art, the dissertation delves into the multifaceted landscape of IoT simulation, emphasizing the significance of versatility, community support, scalability, and synchronization.
The core contribution emerges in the chapter on simulating IoT networks. It introduces a sophisticated framework that encompasses hardware-in-the-loop, software-in-the-loop, and human-in-the-loop simulation. This innovative framework extends the boundaries of conventional simulation, enabling holistic evaluations of IoT systems.
A practical case study on smart energy usage showcases the application of the framework. Detailed SysML models, including requirements, package diagrams, block definition diagrams, internal block diagrams, state machine diagrams, and activity diagrams, are meticulously examined. The performance evaluation encompasses diverse aspects, from hardware and software validation to human interaction.
In conclusion, this dissertation represents a significant leap forward in the integration of IoT infrastructures within CPES. Its contributions extend from a comprehensive understanding of foundational elements to the practical implementation of a holistic simulation framework. This work not only addresses the current challenges but also outlines a path for future research, shaping the landscape of IoT integration within the dynamic realm of CPES. It offers invaluable insights for researchers, engineers, and stakeholders working towards resilient, secure, and energy-efficient infrastructures.
With the expansion of electromobility and wind energy, the number of frequency inverter-controlled electric motors and generators is increasing. In parallel, the number of rolling bearing failures caused by inverter-induced parasitic currents also shows an increasing trend. In order to determine the electrical state of the rolling bearing, to develop preventive measures against damage caused by parasitic currents and to support system-level calculations, electrical rolling bearing models have been developed. The models are based on the electrical insulating ability of the lubricant film that develops in the rolling contacts. For the capacitance calculation of the rolling contacts, different correction factors were developed to simplify the complex tribological and electrical interactions of this region. The state-of-the-art correction factors vary widely, and their validity ranges also differ significantly, which leads to uncertainty in their general application and to the demand for further investigations in this field. In the present work, a combined simulation method is developed that can determine the capacitance of axially loaded rolling bearings. The simulation consists of an electrically extended EHL simulation for calculating the capacitance of the rolling contact, and an electrical FEM simulation for the capacitance calculation of the non-contact regions. By combining the resulting capacitance values of the two simulation methods, the total rolling bearing capacitance can be determined with high accuracy and without using correction factors. In addition, the different capacitance sources of the rolling bearing are identified through experimental investigations. After the validation of the combined simulation method, it can be applied to the investigation of the different capacitance sources, i.e., to determine their significance compared to the total rolling bearing capacitance. The developed simulation method allows a detailed analysis of the rolling bearing capacitances, taking into account influencing factors that could not be considered before (e.g., the oil quantity in the environment of the rolling bearing). As a result, the accurate calculation of the rolling bearing capacitance can improve the prediction of harmful parasitic currents and help to develop preventive measures against them.
This thesis focuses on the operation of reliability-constrained routes in wireless ad-hoc networks. A complete communication protocol that is capable of guaranteeing a statistical minimum reliability level would have to support several functionalities: first, routes that are capable of supporting the specified Quality of Service requirement have to be discovered. During operation of discovered routes, the current Quality of Service level has to be monitored continuously. Whenever significant deviations are detected and the required level of Quality of Service is endangered, route maintenance has to ensure continuous operation. All four functionalities, route discovery, route operation, route maintenance and collection and distribution of network status information, will be addressed in this thesis.
In the first part of the thesis, we propose a new approach for Quality-of-Service routing in wireless ad-hoc networks called rmin-routing, with the provision of a statistical minimum route reliability as the main route selection criterion. To achieve specified minimum route reliabilities, we improve the reliability of individual links by well-directed retransmissions, to be applied during the operation of routes. To select among a set of candidate routes, we define and apply route quality criteria concerning network load.
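The basic relation behind this idea can be sketched as follows (illustrative only; the thesis's actual allocation of retransmissions and its route quality criteria are more elaborate): with link delivery probability p and up to k transmissions, the effective link reliability is 1 - (1 - p)^k, and the route reliability is, assuming independent links, the product over all links.

    import math

    def min_transmissions(p_link, r_target):
        # smallest k with 1 - (1 - p_link)**k >= r_target
        return math.ceil(math.log(1.0 - r_target) / math.log(1.0 - p_link))

    def plan_route(link_probs, route_target):
        per_link_target = route_target ** (1.0 / len(link_probs))   # split the route target evenly
        ks = [min_transmissions(p, per_link_target) for p in link_probs]
        route_rel = math.prod(1.0 - (1.0 - p) ** k for p, k in zip(link_probs, ks))
        return ks, route_rel

    print(plan_route([0.9, 0.8, 0.95], route_target=0.99))
    # -> ([3, 4, 2], ~0.995): a few transmissions per link already meet the route target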
High-quality information about the network status is essential for the discovery and operation of routes and clusters in wireless ad-hoc networks. This requires permanent observation and assessment of nodes, links, and link metrics, and the exchange of gathered status data. In the second part of the thesis, we present cTEx, a configurable topology explorer for wireless ad-hoc networks that efficiently detects and exchanges high-quality network status information during operation.
In the third part, we propose a decentralized algorithm for the discovery and operation of reliability-constrained routes in wireless ad-hoc networks called dRmin-routing. The algorithm uses locally available network status information about network topology and link properties that is collected proactively in order to discover a preliminary route candidate. This is followed by a distributed, reactive search along this preselected route to remove imprecisions of the locally recorded network status before making a final route selection. During route operation, dRmin-routing monitors routes and performs different kinds of route repair actions to maintain route reliability in order to overcome varying link reliabilities.
Knowledge workers face an ever increasing flood of information in their daily work. They live in a “multi-tasking craziness”, involving activities like creating, finding, processing, assessing or organizing information while constantly switching from one context to another, each being associated with different tasks, documents, mails, etc. Hence, their personal information sphere consisting of file, mail and bookmark folders as well as their content, calendar entries, etc. is cluttered with information that has become irrelevant. Finding important information thus gets harder and much of previously gained knowledge is practically lost.
This thesis explores new ways of solving this problem by investigating the potential of self-(re)organizing and especially forgetting-enabled personal knowledge assistants in the given scenario. It utilizes so-called Managed Forgetting, which is an escalating set of measures to overcome the binary keep-or-delete paradigm, ranging from temporal hiding, to condensation, to adaptive reorganization, synchronization, archiving and deletion. Managed Forgetting is combined with two other major ideas: First, it uses the Semantic Desktop as an ecosystem, which brings Semantic Web and thus knowledge graph technologies to a user’s desktop, making it possible to capture and represent major parts of a user’s personal mental model in a machine-understandable way and exploit it in many different applications. Second, the system uses explicated context information – so-called Context Spaces: context is seen as an explicit interaction element users can work with (i.e. a “tangible” object similar to a folder) and in (immersion). The thesis is structured according to the basic interaction cycle with such a system, ranging from evidence collection to information extraction and context elicitation, followed by information value assessment and the actual support measures consisting of self-(re)organization decisions (back-end) and user interface updates (front-end). The system’s data foundation are personal or group knowledge graphs as well as native data. This work makes contributions to all of these aspects, whereas several of them have been investigated and developed in interdisciplinary research with cognitive scientists. On a more general level, searching and trust in such highly autonomous assistants have also been investigated.
In summary, a self-(re)organizing and especially forgetting-enabled support system for information management and knowledge work has been realized. Its different features vary in maturity: the most mature ones are already in practical use (also in industry), while the latest are just well elaborated (position papers) or rough ideas. Different evaluation strategies have been applied ranging from mere data-driven experiments to various user studies. Some of them were rather short-term with controlled laboratory conditions, others less controlled but spanning several months. Different benefits of working with such a system could be quantified, e.g. cognitive offloading effects and reduced task switching/resumption time. Other benefits were gathered qualitatively, e.g. tidiness of the information sphere and its better alignment with the user’s mental model. The presented approach has been shown to hold a lot of potential. In some aspects, however, only first steps have been taken towards tapping it, e.g. several support measures can be further refined and automation further increased.
In recent years, there has been a growing need for accurate 3D scene reconstruction. Recent developments in the automotive industry have led to the increased use of advanced driver assistance systems (ADAS), where 3D reconstruction techniques are used, for example, as part of a collision detection system. For such applications, scene geometry reconstruction is usually performed in the form of depth estimation, where distances to scene objects are obtained.
In general, depth estimation systems can be divided into active and passive. Both systems have their advantages and disadvantages, but passive systems are usually cheaper to produce and easier to assemble and integrate than active systems. Passive systems can be stereo- or multiple-view based. Up to a certain limit, increasing the number of views in multi-view systems usually results in improved depth estimation accuracy.
One potential problem for ensuring the reliability of multi-view systems is the need to accurately estimate the orientation of their optical sensors. One way to ensure sensor placement for multi-view systems is to rigidly fix the sensors at the manufacturing stage. Unlike arbitrary sensor placement, the use of a simplified and known sensor placement geometry further simplifies the depth estimation.
This leads us to the concept of the light field, which parameterizes all visible light passing through a set of viewpoints by its intersection with an angular and a spatial plane. Applied to computer vision, this gives a 2D set of 2D images, where the physical distances between the images are fixed and proportional to each other.
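For orientation, in the common two-plane parameterization (a standard formulation, not specific to this work) the light field is written as \(L = L(u,v,s,t)\), where \((u,v)\) indexes the viewpoint on the angular plane and \((s,t)\) the pixel position on the spatial plane. Assuming a calibrated pinhole setup with focal length \(f\) and baseline \(b\) between neighbouring viewpoints, a scene point at depth \(Z\) appears with a disparity of approximately
\[
d \approx \frac{f\,b}{Z}
\]
between adjacent views, which is the relation that geometric depth estimation from light fields exploits.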
Existing light field depth estimation methods provide good accuracy, which is suitable for industrial applications. However, the main problems of these methods are related to their running time and resource requirements. Most of the algorithms presented in the literature are typically tuned for accuracy, can only be run on high-performance machines and often require a significant amount of time to process the data and obtain results.
Real-world applications often have running time requirements. In addition, there is often a power-consumption limitation. In this dissertation, we investigate the problem of building a depth estimation system with a light field camera that satisfies the operating time and power consumption constraints without significant loss of estimation accuracy.
First, an algorithm for calibrating light field cameras is proposed, together with an algorithm for automatic calibration refinement that works on arbitrarily captured scenes. An algorithm for classical geometric depth estimation using light field cameras is then proposed. Ways to optimize the algorithm for real-time use without significant loss of accuracy are presented. Finally, it is shown how the presented depth estimation methods can be extended using modern deep learning paradigms under the two previously mentioned constraints.
Virtual Possibilities: Exploring the Role of Emerging Technologies in Work and Learning Environments
(2024)
The present work aims to investigate whether virtual reality can support learning as well as vocational work environments. To this end, four studies were conducted, with the first set investigating the demands on vocational workers and the impact of input methods on participant performance. These studies laid the foundation needed to create studies incorporating virtual reality research. The second set of studies was concerned with the impact of virtual reality on learning performance as well as the influence of binaural stimuli presentation on task performance. Results of each study are discussed individually and in conjunction with one another. The four studies are supplemented with additional research conducted by the author as well as an analysis of the growing field of virtual reality-based research. The thesis closes by embedding the discussed work into the scientific landscape and gives an outlook on virtual reality-based use cases in the future.
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day ahead market allows smaller companies without their own trading department to participate since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. Here we present existing algorithms for scenario tree generation as well as our own extensions and adaptations. These are based on the nested distance, which measures the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context the methods are appropriate.
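As a toy illustration of the combination of scenario trees and dynamic programming (deliberately simplistic: a two-stage binomial tree instead of the thesis's multi-factor model and nested-distance-based tree construction), consider valuing a 1 MWh storage that may sell its energy at one of the auction dates:

    # Tiny two-period binomial scenario tree for the day-ahead price (illustrative numbers only).
    p0 = 50.0                              # price today (EUR/MWh)
    up, down, prob_up = 1.2, 0.85, 0.5     # branching factors and branch probability

    def price(path):
        # price at a tree node, given the path of up/down moves taken so far
        return p0 * up ** path.count("u") * down ** path.count("d")

    def value(path, has_energy):
        # expected revenue from this node onward, deciding optimally whether to sell now
        if len(path) == 2:                                   # last stage: sell if still possible
            return price(path) * has_energy
        hold = (prob_up * value(path + ("u",), has_energy)
                + (1 - prob_up) * value(path + ("d",), has_energy))
        if not has_energy:
            return hold
        sell = price(path) + (prob_up * value(path + ("u",), 0)
                              + (1 - prob_up) * value(path + ("d",), 0))
        return max(hold, sell)

    print(value((), has_energy=1))   # optimal expected revenue of the flexibility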
To increase the situational awareness of the crane operator, the aim of this thesis is to develop vision-based deep learning object detection from the crane load view, using adaptive perception in the construction area. Conventional worker detection methods are based on simple shape or color features of the workers' appearance. Nonetheless, these methods can fail to recognize workers who do not wear protective gear. Finding a suitable image representation of the object from the top view is crucial, and doing so manually with handcrafted features is difficult; we therefore employed deep learning methods to automatically learn those features.
To yield optimal results, deep learning methods require massive amounts of data.
Due to the data deficit, especially in the construction domain, we developed a photorealistic virtual world to create data in addition to our samples collected from the real construction area. The simulated platform benefits not only from diverse data types but also from concurrent research developments, which speed up the pipeline at a low cost.
Our research findings indicate that the combination of synthetic and real training samples improves the state-of-the-art detector. In line with previous studies on bridging the gap between synthetic and real data, results obtained with preprocessed synthetic images are substantially better than with the raw data, by approximately 10%.
Finding the right deep learning model for load-view detection is challenging.
By investigating our training data, it becomes evident that the majority of bounding boxes are very small and appear against a complex background.
In addition, we gave priority to speed over accuracy based on the construction safety criteria. Finally, RetinaNet was chosen out of the three primary object detection models.
Nevertheless, a data-driven detection algorithm can fail to handle large scale variations, especially for detectors whose input size changes over an extremely wide range.
An adaptive zoom feature can enhance the quality of the worker detection.
To avoid further data gathering and extensive retraining, the proposed automatic zoom method for the load-view crane camera supports the deep learning algorithm, specifically for the problem of high scale variation. A finite state machine is employed as the control strategy to adapt the zoom level, coping not only with inconsistent detections but also with abrupt camera movement during the lifting operation. Consequently, the detector is able to detect small objects through smooth, continuous zoom control without additional training.
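A hypothetical sketch of such a zoom controller as a finite state machine is given below (state names, thresholds and the interface are illustrative and not the controller used in this work):

    class ZoomFSM:
        def __init__(self):
            self.state = "OVERVIEW"              # zoomed out, searching for workers

        def step(self, num_detections, bbox_area_ratio, camera_moving):
            if camera_moving:                    # abrupt camera movement during lifting
                self.state = "HOLD"              # freeze the zoom until the view stabilizes
            elif self.state in ("OVERVIEW", "HOLD") and num_detections > 0:
                self.state = "ZOOM_IN"           # enlarge small, hard-to-detect objects
            elif self.state == "ZOOM_IN" and bbox_area_ratio > 0.05:
                self.state = "TRACK"             # object now large enough for stable detection
            elif num_detections == 0:
                self.state = "OVERVIEW"          # detection lost, zoom back out
            return self.state

    fsm = ZoomFSM()
    print(fsm.step(num_detections=1, bbox_area_ratio=0.01, camera_moving=False))   # -> ZOOM_IN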
The adaptive zoom control not only enhances the performance of the top-view object detection but also reduces the interaction of the crane operator with the camera system, lowering the risk of fatalities during load lifting operations.
Aquatic habitats are closely linked to the adjacent riparian area. Fluxes of nutrients, energy and matter through emerging aquatic insects are a key component of the aquatic subsidy to terrestrial systems. In fact, adult insects serve as high-quality prey for riparian predators. Stressors impacting the aquatic subsidy can thus translate to consequences for the receiving terrestrial food web, while mechanistic knowledge is extremely limited. Against this background, this thesis aimed at (i) assessing the impact of a model stressor specifically targeting insect emergence, that is the mosquito control agent Bacillus thuringiensis var. israelensis, on quantity, temporal dynamics and (ii) quality of emerging aquatic insects. For this purpose, outdoor floodplain pond mesocosms (n = 6) were employed. Since emergence is, in most cases, not a point event but occurs over a longer period, it was monitored over 3.5 months. The model stressor, i.e., Bti applied three times during spring at 2.88 × 10^9 ITU/ha, shifted the emergence time of aquatic insects, especially of non-biting midges (Diptera: Chironomidae), by ten days with a 26% reduced peak, while the nutrient content was not altered. On this basis, (ii) the propagation of the effects in aquatic subsidy emergence to riparian predators was investigated. Stable isotope analyses were used to assess the diet of a model predator, that is the web-building riparian spider Tetragnatha extensa. Results suggested changes in the composition of the spider’s diet to replace missing Chironomidae by other aquatic and terrestrial prey organisms, pointing to further negative consequences. Finally, the thesis aimed at (iii) the understanding of processes underlying an altered emergence of aquatic subsidy mainly consisting of chironomids. Using a laboratory-based test design, populations of Chironomus riparius (n = 6) were assessed for their sensitivity towards Bti under different food qualities (highly and poorly nutritious) before and after a long-term (six months) Bti exposure. Signs of phenotypic adaptation were observed in emergence time and nutrient content over multiple generations, resulting in changes in chironomids’ quantity and quality as a food source. Overall, it can be concluded that direct and indirect effects of an aquatic stressor, as well as the adaptive response to it, can alter ecosystems at different levels, including the individual, population and community level. Furthermore, this thesis highlights the importance of a temporal perspective when investigating the impact of aquatic stressors beyond ecosystem boundaries. It illustrates potential bottom-up effects on riparian predators through altered emergence of aquatic insects, deepening our understanding of meta-ecosystems and how stressors and their effects are transferred across systems. These insights will support efforts to protect and conserve natural ecosystems.
Distributed message-passing systems have become ubiquitous and essential for our daily lives. Hence, designing and implementing them correctly is of utmost importance. This is, however, very challenging at the same time. In fact, it is well-known that verifying such systems is algorithmically undecidable in general due to the interplay of asynchronous communication (messages are buffered) and concurrency. When designing communication in a system, it is natural to start with a global protocol specification of the desired communication behaviour. In such a top-down approach, the implementability problem asks, given such a global protocol, if the specified behaviour can be implemented in a distributed setting without additional synchronisation. This problem has been studied from two perspectives in the literature. On the one hand, there are Multiparty Session Types (MSTs) from process algebra, with global types to specify protocols. Key to the MST approach is a so-called projection operator, which takes a global type and tries to project it onto every participant: if successful, the local specifications are safe to use. This approach is efficient but brittle. On the other hand, High-level Message Sequence Charts (HMSCs) study the implementability problem from an automata-theoretic perspective. They employ very few restrictions on protocol specifications, making the implementability problem for HMSCs undecidable in general. The work in this thesis is the first to formally build a bridge between the world of MSTs and HMSCs. To start, we present a generalised projection operator for sender-driven choice. This allows a sender to send to different receivers when branching, which is crucial to handle common communication patterns from distributed computing. Despite this first step, we also show that the classical MST projection approach is inherently incomplete. We present the first formal encoding from global types to HMSCs. With this, we prove decidability of the implementability problem for global types with sender-driven choice. Furthermore, we develop the first direct and complete projection operator for global types with sender-driven choice, using automata-theoretic techniques, and show its effectiveness with a prototype implementation. We are the first to provide an upper bound for the implementability problem for global types with sender-driven (or directed) choice and show it to be in PSPACE. We also provide a session type system that uses the results from our projection operator. Last, we introduce protocol state machines (PSMs) – an automata-based protocol specification formalism – that subsume both global types from MSTs and HMSCs with regard to expressivity. We use transformations on PSMs to show that many of the syntactic restrictions of global types are not restrictive in terms of protocol expressivity. We prove that the implementability problem for PSMs with mixed choice, which requires no dedicated sender for a branch but solely all labels to be distinct, is undecidable in general. With our results on expressivity, this answers an open question: the implementability problem for mixed-choice global types is undecidable in general.
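As a schematic illustration of sender-driven choice (the notation is generic and simplified, not necessarily that of the thesis), consider the global protocol
\[
G \;=\; \mathtt{p}\to\mathtt{q}:a.\;\mathtt{q}\to\mathtt{r}:go.\;\mathbf{end} \;\;+\;\; \mathtt{p}\to\mathtt{r}:b.\;\mathtt{r}\to\mathtt{q}:stop.\;\mathbf{end}.
\]
Here the sender p decides the branch but addresses different receivers in the two branches, which classical directed choice forbids. Projecting G onto q yields an external choice between receiving a from p (and then sending go to r) and receiving stop from r; a projection operator for sender-driven choice must handle such branches, in which a participant first learns about the chosen branch from different partners.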
Coastal port-industrial areas are becoming increasingly significant due to urban shrinkage, population
decline, and climate change. To address social and economic issues and enhance climate resilience, it
is crucial to anticipate urban shrinkage in both stable and growing coastal areas that are undergoing
economic transformation. Urban planning can better understand the dynamics of planning for urban
shrinkage and climate resilience, as port-industrial areas have a large economic impact on nearby
coastal communities.
This dissertation examines the long-term implications of urban shrinkage in coastal port-industrial
areas in the context of climate change and sea level rise in England. The research problem is that
current urban policy does not adequately address the challenges of urban shrinkage and climate
resilience in these areas. The research questions are: What are the population changes in local areas
in England? What effect does population decline have on changing urbanisation patterns in older
industrial areas? What type of adaptation efforts were made in North East Lincolnshire, England, and
Bremerhaven, Germany, in response to the 2013 tidal surge, and how did this affect urban
shrinkage?
The dissertation applies an integrated concept of Shrinkage-Resilience as a framework for analysis.
The methodology includes a review of existing models and frameworks, as well as case studies of
international and local contexts. The findings suggest that between 2013 and 2019, 68% of older
industrial areas (including coastal ports) in England were undergoing changing urbanisation patterns
relative to population, land use, and green belt areas, and are key areas for urban policy, such as the
Levelling Up agenda. One of these areas, North East Lincolnshire, is discussed and compared to
Bremerhaven. These examples demonstrate the link between Shrinkage-Resilience approaches and
their practical implementation in coastal port-industrial areas affected by urban shrinkage.
This research advances the scientific practice of urban planning and policy-making for shrinking cities
by introducing the approach of Shrinkage-Resilience, which emphasises the importance of
considering long-term social, economic, and environmental impacts in urban shrinkage contexts. This
approach is crucial in the transition to a more sustainable and inclusive society, where the welfare of
present and future generations, the environment, and economic development are taken into
account. The dissertation provides recommendations for urban planning to incorporate policy
changes for shrinking cities and coastal port-industrial areas worldwide, to include disaster risk
reduction and climate change adaptation approaches.
Living systems incessantly engage in the regulation of their cellular processes to fulfill their biological functions. Beyond development-related adjustments or cell cycle oscillations, environmental fluctuations compel the system to reorganize metabolic pathways, structural components, or molecular repair and reconstitution mechanisms. These responses manifest across diverse temporal scales, necessitating an intricate regulatory orchestration. Time series experiments have become increasingly popular for charting the chronological order and elucidating the underlying mechanisms. In the era of high-throughput technologies, the majority of cellular molecules can be analyzed in one fell swoop, generating a comprehensive snapshot of the status quo of most present molecules. Methodological advancements also permit the monitoring not only of molecular abundances but also the functional status of transcripts and proteins. However, due to the still high efforts associated with such experiments, the number of measured time points and the replication of measurements remains limited. Resulting datasets contain signals from thousands of molecules, yet they are sparse in temporal resolution and are often imprecise due to biological variability and technical measurement inaccuracies.
This thesis explores the complexities arising from the examination of short time series data and introduces pioneering tools that offer fresh insights into the realm of biological time series analysis. The broad spectrum of analytic possibilities ranges from a molecule-centric investigation of individual time courses to a holistic aggregation of the system’s response to its main characteristics. By creating a modeling framework that applies domain-specific constraints, time-course signals can be transformed from a series of discrete data points into a continuous curve. These curves align with current biological conjectures about molecule kinetics being smooth and devoid of superfluous oscillations. Noise present at individual time points is judiciously accounted for during curve fitting, mitigating the impact of time points with high variance on the curve. Subsequent classification is based on the features of these curves (extreme points and inflection points) and ensures a reduction in data amount and complexity. Succinct labels assigned to each molecule's kinetics encapsulate the signal's most notable features. Besides this modeling approach, an innovative enrichment strategy is introduced, that is independent of prior data partitioning and capable of segregating the temporal response into its thermodynamically relevant components. This approach allows for a continuous assessment of each molecule's contribution to these components, obviating the need for exclusive allocation. The application of various analytical approaches to heat acclimation experiments in Chlamydomonas highlights the relevance and potential of time series experiments and specifically tailored analysis techniques. The integration of different system levels has led to the identification of regulatory peculiarities, such as an increased correlation between transcripts and corresponding proteins during acclimation responses. These and other insights may herald new avenues of research that could ultimately enhance plant robustness in the face of increasing environmental perturbations.
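As a schematic illustration of this modeling idea (with made-up numbers and a generic smoothing spline in place of the thesis's constrained fitting procedure), a short time course can be turned into a smooth curve and classified by the sign changes of its derivatives:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    t = np.array([0, 15, 30, 60, 120, 240, 480, 960], dtype=float)   # time points (minutes)
    y = np.array([1.0, 1.4, 2.1, 2.8, 2.6, 1.9, 1.3, 1.1])           # measured relative abundance
    w = 1.0 / np.array([0.1, 0.2, 0.15, 0.3, 0.25, 0.2, 0.1, 0.1])   # weight = 1 / standard deviation

    # Smoothing spline: the smoothing factor s suppresses superfluous oscillations and the
    # weights let time points with high variance influence the curve less.
    spline = UnivariateSpline(t, y, w=w, k=3, s=len(t))

    # Classify the curve by the sign changes of its first and second derivative
    # (extreme points and inflection points), evaluated on a dense grid.
    tt = np.linspace(t[0], t[-1], 2000)
    d1, d2 = spline.derivative(1)(tt), spline.derivative(2)(tt)
    extrema = tt[np.where(np.diff(np.sign(d1)) != 0)[0]]
    inflections = tt[np.where(np.diff(np.sign(d2)) != 0)[0]]

    label = "transient induction" if len(extrema) == 1 and d1[0] > 0 else "other"
    print(extrema, inflections, label)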
The growing popularity of time series experiments necessitates dedicated analytical approaches that empower researchers and analysts to decipher patterns, discern trends, and unravel the underlying structures within the data, facilitating predictions and the derivation of meaningful conclusions that could potentially build bridges between the interweaved systems levels.
Many amphibians and insects have a biphasic life cycle, linking aquatic and terrestrial ecosystems. In temperate wetlands, insect communities are largely dominated by midges, such as non-biting chironomids and mosquitoes. Particularly chironomids and their aquatic larvae play a key role for both aquatic and terrestrial predators, e.g., dragonflies and damselflies (Odonata), birds, riparian spiders and amphibians. Therefore, adverse effects on chironomid larvae induced by pesticides or biocides can have implications on food webs across ecosystem boundaries.
In floodplains of the Upper Rhine Valley in southwest Germany, the biocide Bacillus thuringiensis var. israelensis (Bti) has been applied for over 40 years to reduce nuisance by mass emergence of mosquitoes. Due to its specific mode of action, Bti is presumed to be a more environmentally friendly alternative to non-selective, highly toxic pesticides used in the past. However, research on indirect effects of Bti on non-target organisms inhabiting these wetlands is still relatively scarce. The aim of this thesis was the investigation of direct and indirect effects of Bti on non-target organisms and, consequently, bottom-up effects on aquatic food webs and propagation to the terrestrial ecosystem. Effects were examined in outdoor floodplain pond mesocosms (FPMs) with natural flora and fauna communities.
Benthic macroinvertebrate communities were significantly altered in Bti-treated FPMs, largely due to the reduction of chironomid density by over 40% compared to untreated FPMs. Sampling of exuviae indicated that the emergence of Libellulidae (Odonata) was reduced by Bti, while larger Aeshnidae were not affected. This finding suggested increased intraguild predation (predation of competing predators) in Bti-treated FPMs as a result of decreased prey availability, i.e. chironomid larvae. This conclusion was partly confirmed in food web analyses using stable isotopes of C and N and fatty acids, with Aeshnidae experiencing a slight diet shift towards larger prey (i.e., newts, Aeshnidae) in Bti-treated FPMs. In contrast, the diet proportions of newt larvae were not affected by Bti treatment, but showed a marginal trend in lower omega-6 fatty acid content. Analyses of oxidative stress biomarkers did not reveal any direct effects of Bti on common frog tadpoles under natural climatic conditions.
This thesis emphasizes that adverse effects of Bti on the base of aquatic-terrestrial food webs, i.e., reduction of larval chironomids, can have implications for higher trophic levels and cascade to terrestrial ecosystems. Affected organisms also include species of concern, such as protected Odonata species. In view of the global insect and amphibian decline, the large-scale use of Bti in (partially protected) wetlands should be carefully considered.
Gliomas are one of the most common types of primary brain tumors. Among
those, high grade astrocytomas - so-called glioblastoma multiforme - are the
most aggressive type of cancer originating in the brain, leaving patients with a median survival time of 15 to 20 months after diagnosis. The invasive behavior
of the tumor leads to considerable difficulties regarding the localization of all
tumor cells, and thus impedes successful therapy. Here, mathematical models
can help to enhance the assessment of the tumor’s extent.
In this thesis, we set up a multiscale model for the evolution of a glioblastoma.
Starting on the microscopic level, we model subcellular binding processes and
velocity dynamics of single cancer cells. From the resulting mesoscopic equation, we derive a macroscopic equation via scaling methods. Combining this
equation with macroscopic descriptions of the tumor environment, a nonlinear
PDE-ODE-system is obtained. We consider several variations of the derived
model, amongst others introducing a new model for therapy with Gliadel wafers,
a treatment approach indicated, inter alia, for recurrent glioblastoma.
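A generic prototype of such a PDE-ODE system (illustrative only; the concrete model in the thesis contains further terms and precise coefficient functions) couples the tumor cell density \(M\) with the healthy tissue density \(Q\) via degenerate diffusion and flux-limited taxis:
\[
\partial_t M \;=\; \nabla\cdot\big(D(M)\,\nabla M\big) \;-\; \nabla\cdot\Big(M\,g(Q)\,\frac{\nabla Q}{\sqrt{1+|\nabla Q|^2}}\Big) \;+\; \mu\,M\,(1-M-Q), \qquad
\partial_t Q \;=\; -\,\delta\,M\,Q,
\]
where \(D(M)\) may degenerate as \(M\to 0\), the square-root factor limits the taxis flux, and the ODE describes the degradation of tissue by the tumor.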
We prove global existence of a weak solution to a version of the developed
PDE-ODE-system, containing degenerate diffusion and flux limitation in the
taxis terms of the tumor equation. The nonnegativity and boundedness of all
components of the solution by their biological carrying capacities is shown.
Finally, 2D-simulations are performed, illustrating the influence of different
parts of the model on tumor evolution. The effects of treatment by Gliadel
wafers are compared to the therapy outcomes of classical chemotherapy in different settings.
An Efficient Automated Machine Learning Framework for Genomics and Proteomics Sequence Analysis
(2023)
Genomics and Proteomics sequence analyses are the scientific studies of understanding the language of Deoxyribonucleic Acid (DNA), Ribonucleic Acid (RNA) and protein biomolecules with an objective of controlling the production of proteins and understanding their core functionalities. It helps to detect chronic diseases in early stages, root causes of clinical changes, key genetic targets for pharmaceutical development and optimization of therapeutics for various age groups. Most Genomics and Proteomics sequence analysis work is performed using typical wet lab experimental approaches that make use of different genetic diagnostic technologies. However, these approaches are costly, time consuming, skill and labor intensive. Hence, these approaches slow down the process of developing an efficient and economical sequence analysis landscape essential to demystify a variety of cellular processes and functioning of biomolecules in living organisms. To empower manual wet lab experiment driven research, many machine learning based approaches have been developed in recent years. However, these approaches cannot be used in practical environment due to their limited performance. Considering the sensitive and inherently demanding nature of Genomics and Proteomics sequence
analysis which can have very far-reaching as well as serious repercussions on account of misdiagnosis, the main
objective of this research is to develop an efficient automated computational framework for Genomics and Proteomics sequence analysis using the predictive and prescriptive analytical powers of Artificial Intelligence (AI) to significantly improve healthcare operations.
The proposed framework comprises three main components, namely sequence encoding, feature engineering and
discrete or continuous value predictor. The sequence encoding module is equipped with a variety of existing and newly developed sequence encoding algorithms that are capable of generating a rich statistical representation of DNA, RNA and protein raw sequences. The feature engineering module has diverse types of feature selection and dimensionality reduction approaches which can be used to generate the most effective feature space. Furthermore, the discrete and/or continuous value predictor module of the proposed framework contains a wide range of existing machine learning and newly developed deep learning regressors and classifiers. To evaluate the integrity and generalizability of the proposed framework, we have performed a large-scale experimentation over diverse types of Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
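As a concrete example of the kind of statistical representation such an encoding module can produce (a simple, generic k-mer frequency encoding; the framework's own encoders are richer), consider:

    from itertools import product
    import numpy as np

    def kmer_frequency_vector(seq: str, k: int = 3, alphabet: str = "ACGT") -> np.ndarray:
        """Encode a DNA sequence as normalized k-mer frequencies (a simple, generic encoding)."""
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        index = {kmer: i for i, kmer in enumerate(kmers)}
        counts = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in index:                      # skip ambiguous bases such as 'N'
                counts[index[kmer]] += 1
        total = counts.sum()
        return counts / total if total > 0 else counts

    vec = kmer_frequency_vector("ACGTACGTTGCA", k=3)   # 64-dimensional statistical representation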
In Genomics analysis, epigenetic modification detection is one of the key components. It helps clinical researchers and practitioners to distinguish normal cellular activities from malfunctioning ones, which can lead to diverse genetic disorders such as metabolic disorders, cancers, etc. To support this analysis, the proposed framework is used to solve the problem of DNA and histone modification prediction, where it achieved state-of-the-art performance on 27 publicly available benchmark datasets of 17 different species with a best accuracy of 97%. RNA sequence analysis is another vital component of Genomics sequence analysis, where the identification of different coding and non-coding RNAs as well as their subcellular localization patterns helps to demystify the functions of diverse RNAs, identify root causes of clinical changes, develop precision medicine and optimize therapeutics. To support this analysis, the proposed framework is utilized for non-coding RNA classification and multi-compartment RNA subcellular localization prediction, where it achieved state-of-the-art performance on 10 publicly available benchmark datasets of the Homo sapiens and Mus musculus species with a best accuracy of 98%.
Proteomics sequence analysis is essential to demystify the virus pathogenesis, host immunity responses, the way
proteins affect or are affected by cell processes, their structure and core functionalities. To support this analysis, the proposed framework is used for host protein-protein and virus-host protein-protein interaction prediction. It achieved state-of-the-art performance on 2 publicly available protein-protein interaction datasets of the Homo sapiens and Mus musculus species with a best accuracy of 96%, and on 7 viral host protein-protein interaction datasets of multiple hosts and viruses with a best accuracy of 94%. Considering the performance and practical significance of the proposed framework, we believe it will help researchers in developing cutting-edge practical applications for diverse Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
Emission trading systems (ETS) represent a widely used instrument to control greenhouse
gas emissions, while minimizing reduction costs. In an ETS, the desired amount of emissions in
a predefined time period is fixed in advance; corresponding to this amount, tradeable allowances
are handed out or auctioned to companies which are subject to the system. Emissions which are not
covered by an allowance are subject to a penalty at the end of the time period.
Emissions depend on non-deterministic parameters such as weather and the state of the
economy. Therefore, it is natural to view emissions as a stochastic quantity. This introduces a
challenge for the companies involved: In planning their abatement actions, they need to avoid
penalty payments without knowing their total amount of emissions. We consider a stochastic control approach to address this problem: In a continuous-time model, we use the rate of
emission abatement as a control in minimizing the costs that arise from penalty payments and
abatement costs. In a simplified variant of this model, the resulting Hamilton-Jacobi-Bellman
(HJB) equation can be solved analytically.
Taking the viewpoint of a regulator of an ETS, our main interest is to determine the resulting
emissions and to evaluate their compliance with the given emission target. Additionally, as an
incentive for investments in low-emission technologies, a high allowance price with low variability
is desirable. Both the resulting emissions and the allowance price are not directly given by the
solution to the stochastic control problem. Instead we need to solve a stochastic differential
equation (SDE), where the abatement rate enters as the drift term. Due to the nature of the
penalty function, the abatement rate is not continuous. This means that classical results on
existence and uniqueness of a solution as well as convergence of numerical methods, such as the
Euler-Maruyama scheme, do not apply. Therefore, we prove similar results under assumptions
suitable for our case. By applying a standard verification theorem, we show that the stochastic
control approach delivers an optimal abatement rate.
We extend the model by considering several consecutive time periods. This enables us to
model the transfer of unused allowances to the subsequent time period. In formulating the
multi-period model, we pursue two different approaches: In the first, we assume the value that
the company anticipates for an unused allowance to be constant throughout one time period.
We proceed similarly to the one-period model and again obtain an analytical solution. In the
second approach, we introduce an additional stochastic process to simulate the evolution of the
anticipated price for an unused allowance.
The model so far assumes that allowances are allocated for free. Therefore, we construct
another model extension to incorporate the auctioning of allowances. Then, additionally the
problem of choosing the optimal demand at the auction needs to be solved. We find that
the auction price equals the allowance price at the beginning of the respective time period.
Furthermore, we show that the resulting emissions as well as the allowance price are unaffected
by the introduction of auctioning in the setting of our model.
To perform numerical simulations, we first solve the characteristic partial differential equation
derived from the HJB equation by applying the method of lines. Then we apply the Euler-
Maruyama scheme to solve the SDE, delivering realizations of the resulting emissions and the
allowance price paths.
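A minimal sketch of the second step is given below (with a deliberately simple placeholder drift; in the thesis the drift contains the optimal, possibly discontinuous abatement rate obtained from the HJB equation, and convergence of the scheme is proven under correspondingly weakened assumptions):

    import numpy as np

    # Euler-Maruyama for an SDE dE_t = a(t, E_t) dt + sigma dW_t, where the drift a
    # would contain the (possibly discontinuous) optimal abatement rate.
    def euler_maruyama(e0, drift, sigma, T, n_steps, n_paths, seed=0):
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        e = np.full(n_paths, e0, dtype=float)
        for i in range(n_steps):
            t = i * dt
            dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            e = e + drift(t, e) * dt + sigma * dW
        return e

    # Placeholder drift: business-as-usual emission rate minus a threshold-type abatement rate.
    drift = lambda t, e: 1.0 - 0.8 * (e > 0.9)
    emissions_T = euler_maruyama(e0=0.0, drift=drift, sigma=0.2, T=1.0, n_steps=1000, n_paths=10_000)
    print("P(non-compliance) ~", np.mean(emissions_T > 1.0))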
Simulation results indicate that, under realistic settings, the probability of non-compliance
with the emission target is quite large. It can be reduced for instance by an increase of the
penalty. In the multi-period model, we observe that by allowing the transfer of allowances to the
subsequent time period, the probability of non-compliance decreases considerably.
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, development of materials that are favorable regarding for example weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly described. One method to do so is by so called in situ tests, where the mechanical test is combined with an image acquisition technique such as Computed Tomography.
The resulting time series of volume images capture the delicate and individual nature of each material. The objective of this thesis is to present and develop methods to unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made a big effort to solve the problems we are facing. Digital Volume Correlation (DVC), on the one hand, has been developed by material scientists and was applied in many different contexts in mechanical testing, but almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR), on the other hand, does produce voxel-precise estimates, but is limited to very smooth motion estimates.
The unification of both families, DVC and MIR, under one roof will therefore be illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family is sufficient to deal with all of the problems that come with motion estimation in in situ tests. We then proceed by presenting a third community in motion estimation, namely Optical flow, which is normally only applied in two dimensions. Nevertheless, within this community algorithms have been developed that meet many of our requirements. Strategies for large displacements exist, as well as methods that resolve jumps, and on top of that the displacement is always calculated at the pixel level. This thesis therefore proceeds by extending some of the most successful methods to 3D.
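As one classical representative of this family (shown here only to fix ideas; the thesis evaluates several, partly jump-preserving variants), the Horn–Schunck model carries over directly to three dimensions: for two consecutive volume images, the displacement field \(u:\Omega\subset\mathbb R^3\to\mathbb R^3\) is obtained as the minimizer of
\[
E(u) \;=\; \int_\Omega \big(\nabla I\cdot u + \partial_t I\big)^2\,dx \;+\; \alpha \sum_{k=1}^{3}\int_\Omega |\nabla u_k|^2\,dx,
\]
where the first term is the linearized brightness constancy assumption and the second a smoothness regularizer; jump-preserving methods replace the quadratic regularizer by, e.g., total variation.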
To ensure the competitiveness of our approach, the last part of this thesis deals with a detailed evaluation of proposed extensions. We focus on three types of materials, foam, fibre systems and concrete, and use simulated and real in situ tests to compare the Optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields - a novelty in this area. We conclude this thesis by two specialized applications of our algorithm, which show how the voxel-precise displacement fields serve as useful information to engineers in investigating their materials.
In tribology laboratories, the management of material samples and test specimens, the planning and execution of experiments, the evaluation of test data and the long-term storage of results are critical processes. However, despite their criticality, they are carried out manually and typically at a low level of computerization and standardization. Therefore, formats for primary data and aggregated results are wildly different between laboratories, and the interoperability of research data is low. Even within laboratories, low levels of standardization, in combination with ambiguous or non-unique identifiers for data files, test specimens and analysis results, greatly reduce data integrity and quality. As a consequence, productivity is low, error rates are high, and the lack or low quality of metadata causes the value of produced data to deteriorate very quickly, which makes the re-use of data, e.g. for data mining and meta studies, practically impossible.
In other fields of science, these issues are mitigated by the use of Laboratory Information Management Systems (LIMS). However, at the moment, such systems do not exist
in tribological research. The main challenge for the implementation of such a system is that it requires extensive interdisciplinary knowledge from otherwise very
disparate fields: tribology, data and process modelling, quality management, databases and programming. So far, existing solutions are either proprietary, very limited
in their scope or focused on merely storing aggregated results without any support for laboratory operations.
Therefore, this thesis describes fundamentals of information technology, data modelling and programming that are required to build a LIMS for tribology laboratories.
Based on an analysis of a typical workflow of a tribology laboratory, a data model for all relevant entities and processes is designed using object-relational data modelling and object-oriented programming and a relational database is used to provide a reference implementation of such a LIMS. It provides critical functionalities
like a materials database, test specimen management, the planning, execution and evaluation of friction and wear tests, automated procedures for tribometer
parameterization, data transmission, storage and evaluation, and for aggregating individual tests into test sets and projects. It improves the quality and long-term usability of data by replacing error-prone manual processes with automated variants, e.g. the automated collection of metadata and automated data file transmission, homogenization and storage. The usefulness of the developed LIMS is demonstrated by applying it to Transfer Film Luminance Analysis (TLA), a newly developed advanced method for analyzing the formation and stability of transfer films and their impact on friction and wear. TLA produces so much data and requires such a large amount of metadata during evaluation that it can only be performed safely, quickly and reliably when integrated into the presented LIMS.
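To make the kind of data model described above more concrete, the following is a minimal sketch of how a few core laboratory entities could be mapped onto a relational database with an object-relational mapper; the entity names, attributes and the use of SQLAlchemy are illustrative assumptions and do not reproduce the actual schema developed in the thesis.

from sqlalchemy import Column, Integer, String, Float, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Material(Base):
    __tablename__ = "materials"
    id = Column(Integer, primary_key=True)               # unique identifier avoids ambiguous naming
    name = Column(String, nullable=False, unique=True)
    specimens = relationship("Specimen", back_populates="material")

class Specimen(Base):
    __tablename__ = "specimens"
    id = Column(Integer, primary_key=True)
    material_id = Column(Integer, ForeignKey("materials.id"), nullable=False)
    geometry = Column(String)                            # e.g. "pin" or "disc"
    material = relationship("Material", back_populates="specimens")
    tests = relationship("Test", back_populates="specimen")

class Test(Base):
    __tablename__ = "tests"
    id = Column(Integer, primary_key=True)
    specimen_id = Column(Integer, ForeignKey("specimens.id"), nullable=False)
    normal_load_N = Column(Float)                        # metadata collected automatically
    sliding_speed_mps = Column(Float)
    friction_coefficient = Column(Float)                 # aggregated result of the evaluation
    specimen = relationship("Specimen", back_populates="tests")

engine = create_engine("sqlite:///tribology_lims.db")    # reference implementation on a relational backend
Base.metadata.create_all(engine)

Foreign keys and uniqueness constraints of this kind are what replace ambiguous, manually assigned identifiers and keep materials, specimens, tests and results unambiguously linked.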
Regulation of sucrose transport between source and sink tissues is critical for plant development and properties. In cells, dynamic vacuolar sugar homeostasis is maintained by the controlled regulation of the activities of sugar importers and exporters residing in the tonoplast. We show here that the EARLY RESPONSE TO DEHYDRATION6-LIKE4 protein, the closest homolog of the proton/glucose symporter ERDL6, resides in the vacuolar membrane. We present both molecular expression data and data from non-aqueous fractionation studies indicating that ERDL4 is involved in glucose and fructose allocation across the tonoplast. Surprisingly, overexpression of ERDL4 increased total sugar levels in leaves, which is due to a concomitantly induced stimulation of TST2 expression, coding for the major vacuolar sugar loader. This conclusion is supported by the observation that tst1-2 knockout lines overexpressing ERDL4 lack increased cellular sugar levels. That ERDL4 activity contributes to the coordination of cellular sugar homeostasis is further indicated by two observations: firstly, ERDL4 and TST genes exhibit opposite regulation during the diurnal rhythm; secondly, the ERDL4 gene is markedly expressed during cold acclimation, a situation in which TST activity needs to be upregulated. Moreover, ERDL4-overexpressing plants show larger rosettes and roots, delayed flowering and increased total seed yield. In summary, we identified a novel factor influencing the source-to-sink transfer of sucrose and thereby governing plant organ development.
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
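For example, for \(d=1\) the set under the minimum reduces to \(\{1,\,1+x,\,1-x\}\), so that
\[\chi^{(1)}(x)=\big(\min\{1,\,1+x,\,1-x\}\big)_+=\big((-x+1)\land(x+1)\big)\lor 0,\]
recovering the archetype tent function defined above.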
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves as a preliminary step the restriction of those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\tfrac{\,\cdot\,}{r}-\alpha)\mid\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence, \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\), can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{d x}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let \(E\) be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space \(E\) and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
This thesis deals with the simulation of large insurance portfolios. On the one hand, we need to model the contracts' development and the insured collective's structure and dynamics. On the other hand, an important task is the forward projection of the given balance sheet. Questions that are interesting in this context, such as the question of the default probability up to a certain time or the question of whether interest rate promises can be kept in the long term, cannot be answered analytically without strong simplifications. Reasons for this are high dependencies between the insurer's assets and liabilities, interactions between existing and new contracts due to claims on a collective reserve, potential policy features such as a guaranteed interest rate, and individual surrender options of the insured. As a consequence, we need numerical calculations, and especially the volatile financial markets require stochastic simulations. Despite the fact that advances in technology with increasing computing capacities allow for faster computations, a contract-specific simulation of all policies is often an impossible task. This is due to the size and heterogeneity of insurance portfolios, long time horizons, and the number of necessary Monte Carlo simulations. Instead, suitable approximation techniques are required.
In this thesis, we therefore develop compression methods in which the insured collective is grouped into cohorts based on selected contract-related criteria, so that only a drastically reduced number of representative contracts needs to be simulated. We also show how to efficiently integrate new contracts into the existing insurance portfolio. Our grouping schemes are flexible, can be applied to any insurance portfolio, and maintain the existing structure of the insured collective. Furthermore, we investigate the efficiency of the compression methods and the quality with which they approximate the original life insurance portfolio.
For the simulation of the insurance business, we introduce a stochastic asset-liability management (ALM) model. Starting with an initial insurance portfolio, our aim is the forward projection of a given balance sheet structure. We investigate conditions for a long-term stability or stationarity corresponding to the idea of a solid and healthy insurance company. Furthermore, a main result is the proof that our model satisfies the fundamental balance sheet equation at the end of every period, which is in line with the principle of double-entry bookkeeping. We analyze several strategies for investing in the capital market and for financing the due obligations. Motivated by observed weaknesses, we develop new, more sophisticated strategies. In extensive simulation studies, we illustrate the short- and long-term behavior of our ALM model and show impacts of different business forms, the predicted new business, and possible capital market crashes on the profitability and stability of a life insurer.
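In its most reduced, purely illustrative form (the actual balance sheet in the thesis is considerably more detailed), the identity maintained at the end of every period \(t\) reads
\[ A_t = L_t + E_t, \]
i.e. total assets equal total liabilities plus equity, so that every simulated transaction is booked on both sides of the balance sheet, in line with double-entry bookkeeping.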
The fifth generation (5G) of wireless networks promises to bring new advances, such as a huge increase in mobile data rates, a plunge in communications latency, and an increase in the quality of experience perceived by users, so that networks can cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy neighbor problem in a vRAN deployment.
Scientific research plays a crucial role in the development of a society. The ever-increasing volume of scientific publications now makes it extremely challenging to analyze and maintain insights into scientific communities, such as collaboration or citation trends and the evolution of research interests. This thesis is an effort towards using scientific publications to provide detailed insights into a scientific community from a range of aspects. The contribution of this thesis is five-fold.
Firstly, this thesis proposes approaches for automatic information extraction from scientific publications. The proposed layout-based approach is inspired by how human beings perceive individual references, relying only on visual cues. It significantly outperforms existing text-based techniques and is independent of domain and language.
Secondly, this thesis tackles the problem of identifying meaningful topics for a given publication, as the keywords provided in the publication are not always accurate representatives of its topic. To rectify this problem, this thesis proposes a state-of-the-art keyword extraction approach that employs a domain ontology along with the detected keywords to perform topic modeling for a given set of publications.
Thirdly, this thesis analyzes the disposition of each citation to understand its true essence. For this purpose, we propose a transformer-based approach for analyzing the impact of each citation appearing in a scientific publication. The impact of a citation can be determined by its inherent sentiment and intent, which refer to the assessment and motive of an author when citing a scientific publication.
Furthermore, this thesis quantifies the influence of a research contributor by introducing a new semantic index for researchers that takes both quantitative and qualitative aspects of a citation into account to better represent the prestige of a researcher in a scientific community. The semantic index is also evaluated for conformity with the guidelines and recommendations of various research funding organizations for assessing the impact of a researcher.
In this thesis, all of the aforementioned aspects are packaged together in a single framework called Academic Community Explorer (ACE) 2.0, which automatically extracts and analyzes information from scientific publications and visualizes the insights using several interactive visualizations. These visualizations provide an instant glimpse into the scientific communities from a wide range of aspects with different granularity levels.
Human interferences within the Earth System are accelerating, leading to major impacts and feedbacks that we are just beginning to understand. Summarized under the term 'global change', these impacts put human and natural systems under ever-increasing stress and pose a threat to human well-being, particularly in the Global South. Global governance bodies have acknowledged that decisive measures have to be taken to mitigate the causes and to adapt to these new conditions. Nevertheless, neither current international nor national pledges and measures reach the effectiveness needed to sustain global human well-being under accelerating global change. On the contrary, competing interests are not only paralyzing the international debate but also playing an increasingly important role in debates over social fragmentation and societal polarization at national and local scales. This interconnectedness of the natural and the social system, and its impact on social phenomena such as cooperation and conflict, needs to be understood better in order to strengthen social resilience to future disturbances and drive societal transformation towards socially desirable futures, while at the same time avoiding path dependencies along continuing colonial continuities. As a case example, this thesis provides insights into southwestern Amazonia, where the intertwined challenges of human contributions to global change in all its dimensions, as well as human adaptation and mitigation attempts in response to the imposed changes, become strikingly visible. As such, southwestern Amazonia, with its high social, economic, and biological diversity, is a good example to study the deep interrelations of humans with nature and the consequences these relations have on social cohesion amid an ecological crisis.
Therefore, this thesis takes a social-ecological perspective on conflicts and social cohesion. Social cohesion is in a wider sense understood as the way "how members of a society, group, or organization relate to each other and work together" (Dany and Dijkzeul 2022, p. 12). Particularly in contexts of violence, conflict, and fragility, little research has investigated the role of social cohesion in governing public goods and building resilience for (future) environmental crises. At the same time, governments and international decision-makers increasingly acknowledge the role of social cohesion, comprising both relations between social groups and relations between groups and the state, in building resilience against crises. Facing uncertainty in how natural and social systems react to certain disturbances and shocks, the governance of potential tipping points is an additional challenge for the governance of social-ecological systems (SES). Therefore, this thesis asks: "How does governance shape pathways towards cooperative or conflictive social-ecological tipping points?" The results of this thesis can be distinguished into theoretical/conceptual results and empirical results. An initial systematic literature review on the nexus of climate change, land use, and conflict revealed an extensive body of literature on direct effects, for example drought-related land use conflicts, with diverging opinions on whether global warming increases the risk for conflicts or not. Adding the perspective of indirect implications, we further identified research gaps, as well as a lack of policy recognition, concerning the negative externalities of climate mitigation and adaptation measures on land use and conflict. On a conceptual note, taking a social cohesion perspective into the analysis is beneficial to shift the focus from a problem-oriented perspective of vulnerabilities to global change and potential resulting conflicts to a solution-oriented perspective of enhancing agency and resilience to strengthen collaboration. The developed Social Cohesion Conceptual Model and the related analytical framework facilitate the incorporation of societal dynamics into the analysis of SES dynamics. In addition, the elaborated Tipping Multiverse Framework took up this idea and enhanced it with a more detailed perspective on the soil ecosystem and the household livelihood system to identify entry points to potential social-ecological tipping cascades. As such, the Tipping Multiverse Framework offers two matrices that can advance the understanding of regional SES by identifying core processes, functioning, and links in each TE and thus provide entry points to identify potential tipping cascades across SES sub-systems. The exemplified application of these two frameworks to southwestern Amazonia shows the analytical potential of both proposed frameworks in advancing the understanding of social-ecological tipping points and potential tipping cascades in a regional SES.
On an empirical note, zooming in on questions of governance by applying a political ecology lens to human security, we find that 'glocal' resource governance often reproduces, amplifies, or creates power imbalances and divisions on and between different scales. Our results show that the winners of resource extraction are mostly found at the national and international scales, while local communities receive little benefit and are left vulnerable to externalities. Hence, our study contributes to the existing research by stressing the importance of one underlying question: "governance by whom and for whom?" This question raised the demand to understand the underlying dynamics of resource governance and resulting conflicts. Therefore, we aimed at analyzing how (environmental) institutions influence the major drivers of social-ecological conflicts over land in and around three protected areas, Tambopata (Peru), the Extractive Reserve Chico Mendes (Brazil), and Manuripi (Bolivia). We found that state institutions in particular affect key conflict drivers through overlapping responsibilities of governance institutions and limited enforcement of regulations protecting and empowering rural and disadvantaged populations, which enables external actors to (illegally) access and control resources in the protected areas. Consequently, the already fragile social contract between the residents of the protected areas and their surroundings and the central state is further weakened by the expanding influence of criminal organizations that oppose the state's authority. For state institutions to avoid aggravating these conflict drivers and instead better manage them or even contribute to conflict prevention and mitigation, a transformation from reactive to reflexive institutions and the development of new reflexive governance competencies is needed.
This need for reflexive governance becomes particularly visible when sudden disturbances or shocks impact the SES. Our analysis of the impacts of the COVID-19 pandemic on the interconnections of land use change, ecosystem services, human agency, conflict, and cooperation shows that the pandemic has had a severe influence on the human security of marginalized social groups in southwestern Amazonia. Civil society actions have been an essential strategy in the fight against COVID-19, not just in the health sector but also in the economic, political, social, and cultural realms. However, our research also showed that the pandemic has consolidated and partly renewed criminal structures, while the already weak state has fallen further behind due to the additional tasks of managing the pandemic and other disasters such as floods.
In conclusion, the reflexivity of governance is crucial for fostering cooperation and preventing conflicts in social-ecological systems. By not only reacting to changes that have already occurred but also reflecting upon potential future changes, governance can shape transformation pathways away from detrimental and towards life-sustaining trajectories. It can do so by exercising agency across scales to avoid crossing detrimental social-ecological tipping points and instead to trigger life-sustaining tipping points that contribute to global social-ecological well-being.
This work is concerned with two often separated disciplines. First, experimental studies are carried out in which the effect of the cooling rate on the martensite transformation and the resulting microstructure in a low-alloy steel is investigated. From this, a possible transformation mechanism is derived. Second, a simulation model is developed which describes the martensitic morphology and its evolution. In this context, a phase field model is presented that introduces order parameters to describe the material state, namely austenite and martensite. The evolution of the order parameters is assumed to follow the time-dependent Ginzburg-Landau equation. A major extension to previous models is the consideration of twelve crystallographic martensite variants corresponding to the Nishiyama-Wassermann orientation relationship. To describe the ordered displacement of atoms during transformation and to account for the martensitic substructure, the well-known phenomenological theory of martensite crystallography is employed. The presented experiments as well as thermodynamic calculations serve as the basis for the identification of the model parameters. With the presented model, basic features of the martensitic transformation can be reproduced. These include the martensite start temperature and the hierarchical microstructure consisting of blocks and packets. The computed block sizes are in good agreement with the sizes observed in the experimental database.
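As an illustration of the assumed evolution law (the concrete free-energy functional used in the thesis is not reproduced here), the time-dependent Ginzburg-Landau equation for the order parameters \(\eta_i\) of the twelve martensite variants takes the generic form
\[ \frac{\partial \eta_i}{\partial t} = -M\,\frac{\delta F}{\delta \eta_i}, \qquad i=1,\dots,12, \]
where \(M\) is a kinetic mobility coefficient and \(F\) denotes the total free energy of the system, which in phase field models of martensite typically combines chemical, gradient and elastic strain energy contributions.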
In recent years, deep learning has made substantial improvements in various fields like image understanding, Natural Language Processing (NLP), etc. These huge advancements have led to the release of many commercial applications which aim to help users carry out their daily tasks. Personal digital assistants are one such successful application of NLP, with a diverse user base from all age groups. NLP tasks like Natural Language Understanding (NLU) and Natural Language Generation (NLG) are core components for building these assistants. However, like any other deep learning model, the performance of NLU & NLG models is directly coupled to the availability of tremendous amounts of training examples, which are expensive to collect due to annotator costs. Therefore, this work investigates methodologies to build NLU and NLG systems in a data-constrained setting.
We evaluate the problem of limited training data in multiple scenarios like limited or no data available when building a new system, availability of a few labeled examples when adding a new feature to an existing system, and changes in the distribution of test data during the lifetime of a deployed system.
Motivated by the standard methods for handling data-constrained settings, we propose novel approaches to generate data and exploit latent representations to overcome performance drops arising from limited training data. We propose a framework to generate high-quality synthetic data when few training examples are available for a newly added feature of a dialogue agent. Our interpretation-to-text model uses existing training data for bootstrapping new features and improves the accuracy of the downstream tasks of intent classification and slot labeling. Next, we study a few-shot setting and observe that generation systems face a low semantic coverage problem. Hence, we present an unsupervised NLG algorithm that ensures that all relevant semantic information is present in the generated text.
We also study whether all training examples are really needed for learning a generalized model. We propose a data selection method that selects the most informative training examples for training Visual Question Answering (VQA) models without eroding accuracy. We exploit the already available inter-annotator agreement and design a diagnostic tool, called EaSe, that leverages the entropy and semantic similarity of answer patterns.
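As an illustration of the entropy component (the exact way EaSe combines entropy with semantic similarity is part of the thesis and not reproduced here), the ambiguity of a VQA question \(q\) can be scored from the empirical distribution \(p(a\mid q)\) of its annotator answers as
\[ H(q) = -\sum_{a} p(a\mid q)\,\log p(a\mid q), \]
so that low-entropy questions correspond to high annotator agreement and high-entropy questions to ambiguous answer patterns.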
Finally, we discuss two empirical studies to understand the feature space of VQA models and show how language model pre-training and exploiting the multimodal embedding space allow for building data-constrained models with minimal or no accuracy losses.
This thesis concerns itself with the long-term behavior of generalized Langevin dynamics with multiplicative noise,
i.e. the solutions to a class of two-component stochastic differential equations in \( \mathbb{R}^{d_1}\times\mathbb{R}^{d_2} \)
subject to outer influence induced by potentials \( \Phi \) and \( \Psi \),
where the stochastic term is only present in the second component, on which it is dependent.
In particular, convergence to an equilibrium defined by an invariant initial distribution \( \mu \) is shown
for weak solutions to the generalized Langevin equation obtained via generalized Dirichlet forms,
and the convergence rate is estimated by applying hypocoercivity methods relying on weak or classical Poincaré inequalities.
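For orientation, the classical Poincaré inequality entering these hypocoercivity estimates is of the form
\[ \int \big(f-\mu(f)\big)^2\,d\mu \;\le\; C_P \int |\nabla f|^2\,d\mu, \]
where \(\mu(f)\) denotes the mean of \(f\) with respect to the invariant measure \(\mu\); weak Poincaré inequalities replace the constant \(C_P\) by a function \(\alpha(s)\) together with a remainder term \(s\,\|f\|_\infty^2\) for \(s>0\), which leads to subexponential instead of exponential convergence rates.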
As a prerequisite, the space of compactly supported smooth functions is proven to be a domain of essential m-dissipativity
for the associated Kolmogorov backward operator on \(L^2(\mu)\).
In the second part of the thesis, similar Langevin dynamics are considered, however defined on a product of infinite-dimensional separable Hilbert spaces.
The set of finitely based smooth bounded functions is shown to be a domain of essential m-dissipativity for the corresponding Kolmogorov operator \( L \) on \( L^2(\mu) \)
for a Gaussian measure \( \mu \), by applying the previous finite-dimensional result to appropriate restrictions of \( L \).
Under further bounding conditions on the diffusion coefficient relative to the covariance operators of \( \mu \),
hypocoercivity of the generated semigroup is proved, as well as the existence of an associated weakly continuous Markov process
which provides a weak solution, in the analytically weak sense, to the considered Langevin equation.
The generally unsupervised nature of autoencoder models implies that the main training metric is formulated as the error between input images and their corresponding reconstructions. Different reconstruction loss variations and latent space regularizations have been shown to improve model performance depending on the task to solve and to induce new desirable properties like disentanglement. Nevertheless, measuring success in, or enforcing properties through, the input pixel space is a challenging endeavor. In this work, we want to make more efficient use of the available data and provide design choices to be considered in the recording or generation of future datasets to implicitly induce desirable properties during training. To this end, we propose a new sampling technique which matches semantically important parts of the image while randomizing the other parts, leading to salient feature extraction and a neglect of unimportant details. Further, we propose to recursively apply a previously trained autoencoder model, which can then be interpreted as a dynamical system with desirable properties for generalization and uncertainty estimation.
The proposed methods can be combined with any existing reconstruction loss. We give a detailed analysis of the resulting properties on various datasets and show improvements on several computer vision tasks: image and illumination normalization, invariances, synthetic to real generalization, uncertainty estimation and improved classification accuracy by means of simple classifiers in the latent space.
These investigations are applied in the automotive setting of vehicle interior rear seat occupant classification. For the latter, we release a synthetic dataset with several fine-grained extensions such that all the aforementioned topics can be investigated in isolation, or together, in a single application environment. We provide quantitative evidence that machine learning, and in particular deep learning methods, cannot readily be used in industrial applications when only a limited amount of variation is available for training. Such limited variation is, however, often unavoidable because of constraints imposed by the application under consideration and because of financial limitations.
The rising demand for machine learning (ML) models has become a growing concern for stakeholders who depend on automatic decisions. In today's world, black-box solutions (in particular deep neural networks) are being continuously implemented for more and more high-stake scenarios like medical diagnosis or autonomous vehicles. Unfortunately, when these opaque models make predictions that do not align with our expectations, finding a valid justification is simply not possible.
Explainable Artificial Intelligence (XAI) has emerged in response to our need for finding reasons that justify what a machine sees, but we don't. However, contributions in this field are mostly centered around local structures such as individual neurons or single input samples. Global characteristics that govern the behavior of a model are still poorly understood or have not been explored yet. An aggravating factor is the lack of a standard terminology to contextualize and compare contributions in this field. Such a lack of consensus prevents the ML community from ultimately moving away from black boxes and from starting to create systematic methods to design models that are interpretable by design.
So, what are the global patterns that govern the behavior of modern neural networks, and what can we do to make these models more interpretable from the start?
This thesis delves into both issues, unveiling patterns about existing models, and establishing strategies that lead to more interpretable architectures. These include biases coming from imbalanced datasets, quantification of model capacity, and robustness against adversarial attacks. When looking for new models that are interpretable by design, this work proposes a strategy to add more structure to neural networks, based on auxiliary tasks that are semantically related to the main objective. This strategy is the result of applying a novel theoretical framework proposed as part of this work. The XAI framework is meant to contextualize and compare contributions in XAI by providing actionable definitions for terms like "explanation" and "interpretation."
Altogether, these contributions address dire demands for understanding more about the global behavior of modern deep neural networks. More importantly, they can be used as a blueprint for designing novel, and more interpretable architectures. By tackling issues from the present and the future of XAI, results from this work are a firm step towards more interpretable models for computer vision.
In recent years, the formal methods community has made significant progress towards the development of industrial-strength static analysis tools that can check properties of real-world production code. Such tools can help developers detect potential bugs and security vulnerabilities in critical software before deployment. While the potential benefits of static analysis tools are clear, their usability and effectiveness in mainstream software development workflows often come into question, which can prevent software developers from using these tools to their full potential. In this dissertation, we focus on two major challenges that can limit their ability to be incorporated into software development workflows.
The first challenge is unintentional unsoundness. Static program analyzers are complicated tools, implementing sophisticated algorithms and performance heuristics. This makes them highly susceptible to undetected unintentional soundness issues. Such issues in program analyzers can cause false negatives and have disastrous consequences, e.g., when analyzing safety-critical software. In this dissertation, we present novel techniques to detect unintentional unsoundness bugs in two foundational program analysis tools, namely SMT solvers and Datalog engines. These tools are used extensively by the formal methods community, for instance in software verification, systematic testing, and program synthesis. We implemented these techniques as easy-to-use open source tools that are publicly available on GitHub. With the proposed techniques, we were able to detect more than 55 unique and confirmed critical soundness bugs in popular and widely used SMT solvers and Datalog engines in only a few months of testing.
The second challenge is finding the right balance between soundness, precision, and performance. In an ideal world, a static analyzer should be as precise as possible while maintaining soundness and being sufficiently fast. However, to overcome undecidability issues, these tools have to employ a variety of techniques to be practical, for example compromising on the soundness of the analysis or approximating code behavior. Static analyzers are therefore not trivial to integrate into an arbitrary usage scenario with different program sizes, resource constraints and SLAs. Most of the time, these tools also don't scale to large industrial code bases containing millions of lines of code. This makes it extremely challenging to get the most out of these analyzers and integrate them into everyday development activities, especially for average software development teams with little to no knowledge or understanding of advanced static analysis techniques. In this dissertation, we present an approach to automatically tailor an abstract interpreter to the code under analysis and any given resource constraints. We implemented our technique as an open source framework, which is publicly available on GitHub. The second contribution of this dissertation in this challenge area is a technique to horizontally scale analysis tools in cloud-based static analysis platforms by splitting the input to the analyzer into partitions and analyzing the partitions independently. The technique was developed in collaboration with Amazon Web Services and is now being used in production in their CodeGuru service.
Formaldehyde is an important intermediate in the chemical industry. In technical processes, formaldehyde is used in aqueous or methanolic solutions, in which it is bound in oligomers that are formed in reversible reactions. These reactions and also the vapor-liquid equilibria of mixtures containing formaldehyde, water, and methanol have been thoroughly studied in the literature. This is, however, not the case for the solid-liquid equilibria of these mixtures, even though the precipitation of solids poses important problems in many technical processes. Therefore, in the present thesis, a fundamental study on the formation of solid phases in the system (formaldehyde + water + methanol) was carried out. Based on the experiments, a physico-chemical model of the solid-liquid equilibrium was developed. Furthermore, kinetic effects, which are important in practice, were also described. The results make it possible, for the first time, to understand the solid formation in these mixtures, which was previously considered hard to predict.
The studies on solid formation in formaldehyde-containing systems were carried out as part of a project dealing with the production of poly(oxymethylene) dimethyl ethers (OME). OME are formaldehyde-based synthetic fuels that show cleaner combustion than fossil diesel. Different aspects of OME production were studied. First, a conceptual design for an OME production process based on dimethyl ether (DME) was developed using process simulation. This study revealed that the DME route is attractive in principle. However, basic data on the formation of OME from DME were missing and had to be estimated for the conceptual design study. Therefore, in a second step, an experimental study on the formation of OME from DME was carried out. In this reaction, trioxane, a cyclic trimer of formaldehyde, is used as a water-free formaldehyde source. Trioxane is currently produced from aqueous formaldehyde solution in energy-intensive processes. Therefore, a new trioxane production process was developed in which trioxane is obtained from a crystallization step. In process simulations, the new process was compared to the best previously available process and was found to be promising.
While OME are excellent synthetic fuels, it is also attractive to use them in blends with hydrogenated vegetable oil (HVO), which is available on a large scale. However, blends of OME and HVO that are initially homogenous tend to demix after a while in technical applications. This phenomenon was poorly understood previously. Therefore, in this work, liquid-liquid equilibria in mixtures of individual components of the two fuels in combination with water were systematically studied and a corresponding model was developed.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like \( \textit{Computer Vision} \) (CV), \( \textit{Natural Language Processing} \) (NLP), and \( \textit{Reinforcement Learning} \) (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors corresponding to different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and of second-order information during neural network training. Additionally, the thesis presents, for the first time, the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points.
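Concretely, such a two-dimensional visualization can be sketched as follows (a generic description, not necessarily the exact plotting pipeline of the thesis): given two orthonormal Hessian eigenvectors \(v_1, v_2\) and a reference point \(\theta^*\), one plots the surface and the trajectory coordinates
\[ (\alpha,\beta) \mapsto \mathcal L(\theta^* + \alpha v_1 + \beta v_2), \qquad \alpha_t = \langle \theta_t - \theta^*, v_1\rangle, \quad \beta_t = \langle \theta_t - \theta^*, v_2\rangle, \]
so that the optimizer iterates \(\theta_t\) appear as a curve on the projected loss landscape.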
Secondly, this thesis presents a regularization method for \( \textit{Generative Adversarial Networks} \) (GANs) that uses second-order information. The gradient during training is modified by subtracting the direction of the eigenvector corresponding to the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
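In its most basic form (the ProxSGD update developed in the thesis additionally incorporates the features needed to recover popular optimizers), the proximal-gradient iteration for a smooth loss \(f\) and a non-smooth regularizer \(r\) reads
\[ \theta_{t+1} = \operatorname{prox}_{\eta_t r}\big(\theta_t - \eta_t \nabla f(\theta_t)\big), \qquad \operatorname{prox}_{\eta r}(z) = \arg\min_{\theta}\Big(r(\theta) + \tfrac{1}{2\eta}\|\theta - z\|_2^2\Big), \]
and for \(r=\lambda\|\cdot\|_1\) the proximal operator has the closed-form soft-thresholding solution \(\operatorname{prox}_{\eta r}(z)_i=\operatorname{sign}(z_i)\max(|z_i|-\eta\lambda,0)\), which is what produces exact zeros and hence sparsity.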
Lastly, this thesis unifies sparsity and \( \textit{neural architecture search} \) (NAS) through the framework of group sparsity. Group sparsity is achieved through \( \ell_{2,1} \)-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
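The \( \ell_{2,1} \)-regularizer referred to here is, in its standard group-lasso form,
\[ \Omega(W) = \lambda \sum_{g\in\mathcal G} \|W_g\|_2, \]
where each group \(W_g\) collects, for example, all weights of one convolutional filter or of one candidate operation (the concrete grouping used in the thesis may differ); because the \(\ell_2\) norm of a group enters unsquared, entire groups are driven exactly to zero, which is what enables filter pruning and, when groups correspond to operations, architecture search.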
The present thesis describes the experimental performance determination and numerical
modeling of an aerostatic porous bearing made of an orthotropically layered ceramic
composite material (CMC). The high temperature resistance, low thermal expansion and
high reusability of this material make it eminently suitable for use in highly stressed
fluid-film bearing applications.
The work involves the development of an aerostatic journal bearing made of porous,
orthotropically layered carbon fiber-reinforced carbon composite (C/C) and the design
of a journal bearing test rig, which contained additional aerostatic support bearings and
six optical laser triangulation sensors. The sensor system enabled the measurement of
lubricant film thickness and shaft misalignment. Owing to the small air lubrication clearance of 30 μm, the focus was on low concentricity deviations and on the determination of shaft misalignment.
The preliminary tests included the determination of the permeability of the porous material
and the applicability of Darcy’s law. A scan of the inner surface of the porous bushing
revealed a characteristic grooved structure, which can be attributed to the layered structure
of the material. Bearing tests were conducted up to a rotational speed of 8000 rpm and a
pressure ratio of 5 to 7. No significant effect of rotational speed on load-carrying capacity
and gas consumption was observed in this operating range. The examined operating points
did not indicate any sign of the pneumatic hammer effect. A temporary load of below 90 N on the bearing and an eccentricity ratio below 0.8 did not cause any significant
wear on the shaft.
Four numerical models, based on Reynolds' lubricant film equation and Darcy's law, were
developed. The models were gradually extended with consideration of shaft misalignment,
the compressibility of the gas, the geometry of the pressure supply chamber and the
embedding of the groove structure. The models were validated against results from external publications and against the tests performed in this work.
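For reference, Darcy's law as used in such models relates the seepage velocity of the gas in the porous bushing to the pressure gradient,
\[ \mathbf v = -\frac{\mathbf K}{\mu}\,\nabla p, \]
where \(\mu\) is the dynamic viscosity and \(\mathbf K\) the permeability, which for an orthotropically layered C/C material is in general direction-dependent (tensor-valued); the precise form adopted in the four models is detailed in the thesis.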
Numerous studies have investigated aerostatic porous bearings made of sintered metal
and graphite. Current computational approaches for fast preliminary design reach maximum deviations of approximately 20-24% compared to experimental tests. One of the central aims of this research was to extend this area of investigation to porous, orthotropically layered bearings made of C/C. The developed extended Full-Darcy model achieved a maximum deviation of 21.6% in the load-carrying capacity and of 23.5% in the gas consumption.
This study demonstrates the applicability of a resistant material from the aerospace field
(reusable thrust chambers made of CMC) for highly stressed and durable fluid-film bearings.
Furthermore, a numerical model for the computation and design of these bearings was
developed and validated.
This thesis focuses on the development and analysis of Stochastic Model Predictive Control (SMPC) strategies for both distributed stochastic systems and centralized stochastic systems with partially known distributional information. The first part deals with the development of distributed SMPC schemes that can be synthesized and operated in a fully distributed manner, establishing rigorous theoretical guarantees such as recursive feasibility, stability and closed-loop chance constraint satisfaction. We study several control problems of practical interest, such as the output-feedback regulation problem or the state-feedback tracking problem under additive stochastic noise, and the regulation problem under multiplicative noise. In the second part of this thesis, a novel research topic known as distributionally robust MPC (DR-MPC) is explored, which enhances the applicability of SMPC to real-world problems. DR-MPC is advantageous as it solely necessitates partial knowledge in the form of samples of the uncertainty, which is usually available in practical scenarios, while SMPC mandates exact knowledge of the (unknown) distributional information. We investigate different so-called ambiguity sets to immunize the DR-MPC optimization problem against sampling inaccuracies, leading to tractable optimization problems with strong theoretical guarantees. Altogether, both parts provide rigorous theoretical guarantees with practical design procedures demonstrated by numerical examples, which are the main contributions of this thesis.
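A typical example of such an ambiguity set (whether this particular choice coincides with those studied in the thesis is not stated here) is the Wasserstein ball of radius \(\varepsilon\) around the empirical distribution of the \(N\) available uncertainty samples \(\xi^{(1)},\dots,\xi^{(N)}\),
\[ \mathcal P_\varepsilon = \Big\{ Q \;:\; W_p\Big(Q,\ \tfrac{1}{N}\sum_{i=1}^{N}\delta_{\xi^{(i)}}\Big) \le \varepsilon \Big\}, \]
and the DR-MPC constraints are then required to hold for all distributions \(Q\in\mathcal P_\varepsilon\), which is what immunizes the resulting optimization problem against sampling inaccuracies.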
This thesis is primarily motivated by a project with Deutsche Bahn about offer preparation in rail freight transport. At its core, a customer should be offered three train paths to choose from in response to a freight train request. As part of this cooperation with DB Netz AG, we investigated how to compute these train paths efficiently. They should all be "good" but also "as different as possible". We solved this practical problem using combinatorial optimization techniques.
At the beginning of this thesis, we describe the practical aspects of our research collaboration. The more theoretical problems, which we consider afterwards, are divided into two parts.
In Part I, we deal with a dual pair of problems on directed graphs with two designated end-vertices. The Almost Disjoint Paths (ADP) problem asks for a maximum number of paths between the end-vertices any two of which have at most one arc in common. In comparison, for the Separating by Forbidden Pairs (SFP) problem we have to select as few arc pairs as possible such that every path between the end-vertices contains both arcs of a chosen pair. The main results of this more theoretical part are the classifications of ADP as an NP-complete problem and of SFP as a \(\Sigma_2^p\)-complete problem.
In Part II, we address a simplified version of the practical project: the Fastest Path with Time Profiles and Waiting (FPTPW) problem. In a directed acyclic graph with durations on the arcs and time windows at the vertices, we search for a fastest path from a source to a target vertex. We are only allowed to be at a vertex within its time windows, and we are only allowed to wait at specified vertices. After introducing departure-duration functions, we develop solution algorithms based on them. We consider special cases that significantly reduce the complexity or are of practical relevance. Furthermore, we show that even this simplified problem is NP-hard in general and investigate its complexity status more closely.
Processing data streams is a classical and ubiquitous problem.
A query is registered against a potentially endless data stream and continuously delivers results as tuples stream in.
Modern stream processing systems allow users to express queries in different ways.
However, when a query involves joins between multiple input streams, the order of these joins is not transparently optimized.
In this thesis, we explore ways to optimize multi-way theta joins, where the join predicates are not limited to equality and multiple inputs are referenced.
We put forward a novel operator, MultiStream, which joins multiple input streams using iterative probing while incurring minimal materialization effort.
The order in which tuples are sent inside a MultiStream operator is optimized using a cost-based model.
Further, a query can be answered using a multi-way tree comprising multiple MultiStream operators, where each inner operator represents a materialized intermediate result.
We integrate equi-joins into MultiStream to reduce communication, such that mixed queries with both theta and equality predicates are supported.
Streaming queries are long-running, and thus multiple queries might be registered with the system at the same time.
Hence, we research the joint answering of multiple multi-way join queries and optimize the global ordering using integer linear programming.
All these approaches are implemented in CLASH, a system for generating Apache Storm topologies, including runtime components, that enables users to pose queries in a declarative way and lets the system craft a suitable topology.
Adult emerging aquatic insects can transfer micropollutants, accumulated during their aquatic development, from aquatic to terrestrial ecosystems. This process depends on both contaminant- and organism-specific properties and processes. The transfer of contaminants can result in the dietary exposure of terrestrial insectivores at the aquatic-terrestrial ecosystem boundary. It is, however, unknown whether this route of contaminant transfer is relevant for current-use pesticides, despite their ubiquity in freshwater ecosystems globally. Furthermore, empirical investigation of pesticides in terrestrial insectivores which consume emerging aquatic insects (e.g. riparian spiders) is lacking. In the present work, two laboratory batch-scale studies and a field study were conducted to investigate the transfer of current-use pesticides by emerging aquatic insects and the dietary exposure of riparian spiders preying on emerging insects. In the two laboratory studies, larvae of the model organism, Chironomus riparius, were exposed either chronically to seven fungicides and two herbicides, or acutely (24 hours) to three individual insecticides, during their development. The pesticides were all small organic molecules, selected to cover a low to moderate lipophilicity range (logKow 1.2 – 4.7). Exposure took place at three environmentally relevant concentrations for the fungicides and herbicides (1.2 – 2.5, 17.5 – 35.0 or 50.0 – 100.0 ng/mL) and two for the insecticides (0.1 and either 4 or 16 ng/mL). Eight of the nine fungicides and herbicides, as well as one of the three insecticides, were detected in the adult insects after metamorphosis. Concentrations of the pesticides decreased over metamorphosis. However, the transfer of individual pesticides was not well predicted using published models which are based on contaminant lipophilicity and were developed using other contaminant classes. In the present work, pesticide-specific differences in bioaccumulation by the larvae, retention through metamorphosis and sex-specific bioamplification and elimination over the course of the terrestrial life stage were observed. The neonicotinoid thiacloprid was the only insecticide retained by the emerging insects, due to its slow elimination by the larvae. Thiacloprid also decreased insect emergence success. An approximately 30 % higher survival to emergence at the low exposure level (0.1 ng/mL), however, resulted in a relatively higher insecticide flux from the aquatic to the terrestrial environment compared to the higher exposure (4 ng/mL). For the field study, a method for the analysis of 82 current-use pesticides by high-performance liquid chromatography coupled to triple quadrupole tandem mass spectrometry using small amounts (30 mg) of insect material was validated and applied to samples of emerging insects and Tetragnatha spp. spiders, which were collected from stream sites impacted by agricultural activities. Emerging aquatic insects from three orders (Diptera, Ephemeroptera and Trichoptera) contained 27 pesticides, whereas 49 pesticides were found in the aquatic environment (water, sediment and aquatic leaf litter). This included mixtures of up to four neonicotinoid insecticides in the insects, with concentrations up to 12300 times greater than were found in the water. Furthermore, the web-building riparian spiders contained 29 pesticides, generally at low concentrations; however, concentrations of three neonicotinoids and one herbicide were biomagnified compared to the emerging insects.
The three studies included in this thesis thus reveal that the aquatic-terrestrial transfer of current-use pesticides occurs even at very low, environmentally relevant exposure concentrations. Furthermore, new knowledge was generated on the diverse interactions between current-use pesticides and organisms over their entire life cycles, which affect the propensity of individual pesticides to be transferred via insect emergence. A wide range of pesticides was found to be dietarily bioavailable to riparian spiders, and likely to many other riparian insectivores. The neonicotinoid insecticides stood out for their potential to affect adjacent terrestrial food webs through negative impacts on aquatic insect emergence (i.e. reduced biomass flux), while still having a high propensity to be transferred by emerging insects and bioaccumulated in riparian spiders.
From industrial fault detection to medical image analysis or financial fraud prevention: Anomaly detection—the task of identifying data points that show significant deviations from the majority of data—is critical in industrial and technological applications. For efficient and effective anomaly detection, a rich set of semantic features are required to be automatically extracted from the complex data. For example, many recent advances in image anomaly detection are based on self-supervised learning, which learns rich features from a large amount of unlabeled complex image data by exploiting data augmentations. For image data, predefined transformations such as rotations are used to generate varying views of the data. Unfortunately, for data other than images, such as time series, tabular data, graphs, or text, it is unclear what are suitable transformations. This becomes an obstacle to successful self-supervised anomaly detection on other data types.
This thesis proposes Neural Transformation Learning, a self-supervised anomaly detection method that is applicable to general data types. In contrast to previous methods relying on hand-crafted transformations, neural transformation learning learns the transformations from data and uses them for detection. The key ingredient is a novel objective that encourages learning diverse transformations while preserving the relevant semantic content of the data. We show theoretically and empirically that it is better suited for transformation learning than existing objectives.
We also introduce extensions of neural transformation learning for anomaly detection within time series and for graph-level anomaly detection. The extensions combine transformation learning with other learning paradigms to incorporate vital prior knowledge about time series and graph data. Moreover, we propose a general training strategy for deep anomaly detection with contaminated data. The idea is to infer the unlabeled anomalies and to use them for updating the parameters in an alternating fashion. In setups where expert feedback is available, we present a diverse querying strategy based on the seeding algorithm of K-means++ for active anomaly detection.
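As a minimal sketch of the kind of diversity-driven selection meant here (function and variable names are illustrative and not taken from the thesis), a K-means++-style seeding rule picks each next query with probability proportional to its squared distance from the points already selected:

import numpy as np

def diverse_queries(X, n_queries, seed=0):
    """Select n_queries rows of X, spread out in a K-means++-like fashion."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]               # first query picked uniformly at random
    for _ in range(n_queries - 1):
        # squared distance of every candidate to its nearest already-chosen point
        d2 = np.min(((X[:, None, :] - X[chosen][None, :, :]) ** 2).sum(axis=-1), axis=1)
        probs = d2 / d2.sum()                          # K-means++ rule: probability proportional to d^2
        chosen.append(int(rng.choice(len(X), p=probs)))
    return chosen

Points that have already been chosen have zero distance to the selected set and thus cannot be picked again, which spreads the queries over the data and avoids redundant expert feedback.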
Our extensive experiments and analysis demonstrate that neural transformation learning achieves remarkable and robust anomaly detection performance on various data types. Finally, we outline specific paths for future research.
The research deals with a question about architecture and its design strategies, combining historical information and digital tools. Design strategies are historically defined; they rely on geometry, context, building technologies and other factors. The study of architecture's own history, particularly at moments of technological advancement such as the introduction of new materials or tools, may shed some light on how to internalize digital tools like parametric design and digital fabrication.
Human pose in terms of 3D joint angles is needed for applications like activity recognition, musculoskeletal health, sports biomechanics and ergonomics. Microelectromechanical systems (MEMS) based magnetic-inertial measurement units (MIMUs) can estimate 3D orientation. Due to their small size, MIMUs can be attached to the body as wearable sensors for obtaining the full 3D human pose; such a system is termed inertial motion capture (i-Mocap). However, MIMUs suffer from sensor errors and disturbances, due to which the orientation estimated from individual MIMUs can be erroneous. Accurate sensor calibration is essential, and subsequently the alignment of these sensors to the body segments must also be precisely known, which is called sensor-to-segment calibration. Sensor fusion is employed to address the disturbances and noise in MIMUs. Many state-of-the-art inertial motion capture approaches ignore the magnetometer and only use IMUs to reduce the error arising from inhomogeneous magnetic fields. These algorithms rely on kinematic constraints and assumptions regarding the joints and are based on IMUs located on adjacent body segments. Full-body coverage requires 13-17 such units and can be quite obtrusive. Setting up and calibrating so many wearable sensors also takes time.
This thesis focuses on 3D human pose estimation from a reduced number of MIMUs and deals with this problem systematically. First, we propose an accurate simultaneous calibration of multiple MIMUs, which also learns the uncertainty of the individual sensors. We then describe a novel sensor fusion algorithm for robust orientation estimation from an MIMU and for updating the sensor calibration online. The residual errors in both sensor calibration and fusion can result in drift error in the joint angles. Therefore, we present an anatomical (sensor-to-segment) calibration in which an orientation offset correction term is updated and used for online correction of the residual drift in individual joint angles. Subsequently, we demonstrate that 3D human joint angle constraints can be learned using a data-driven approach in a high-dimensional latent space. Owing to temporal and joint angle constraints, it is possible to use only a reduced set of sensors (as opposed to one sensor per segment) and still obtain the 3D human pose. However, spatial and temporal prior learning from data is often limited by the finite set of movement patterns in most datasets. This introduces uncertainty when estimating 3D human pose from sparse MIMU sensors. We propose a magnetometer-robust orientation parameterization and a data-driven deep learning framework to predict 3D human pose with associated uncertainty from sparse MIMUs. The model is evaluated on real MIMU data, and we show that the uncertainty predicted by the trained model is well correlated with the actual error and ambiguity.
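For background, the generic principle behind orientation estimation from such sensors is to blend the drift-prone integration of gyroscope rates with drift-free but noisy gravity-based corrections from the accelerometer. The following textbook complementary filter is only a minimal illustration of this principle and is not the novel fusion algorithm proposed in the thesis; all parameter values are illustrative.

import numpy as np

def complementary_filter(gyro, acc, dt, alpha=0.98):
    # gyro: (N, 3) angular rates in rad/s; acc: (N, 3) accelerations in m/s^2.
    # Returns (N, 2) roll/pitch estimates in rad. Generic textbook filter only.
    roll, pitch = 0.0, 0.0
    out = np.zeros((len(gyro), 2))
    for i, (w, a) in enumerate(zip(gyro, acc)):
        # propagate with the gyroscope (accurate short-term, drifts long-term)
        roll += w[0] * dt
        pitch += w[1] * dt
        # gravity-based tilt from the accelerometer (noisy, but drift-free)
        roll_acc = np.arctan2(a[1], a[2])
        pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # blend the two estimates
        roll = alpha * roll + (1 - alpha) * roll_acc
        pitch = alpha * pitch + (1 - alpha) * pitch_acc
        out[i] = (roll, pitch)
    return out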
This thesis describes the synthesis and extensive characterization of mononuclear
cis-(carboxylato)(hydroxo)iron(III) and cis-(carboxylato)(aqua)iron(II) complexes
among others and illuminates their capability to engage in hydrogen atom transfer
reactions via reactivity studies with suitable substrates. The employed carboxylates
include benzoate, p-nitrobenzoate, and p-methoxybenzoate. Additionally, the first
example for a solution-stable mononuclear cis-di(hydroxo)iron(III) complex is
presented, the extensive characterization of which aims to contribute to the
identification of spectroscopic markers and a better understanding of the role of the
carboxylate ligand in the above-mentioned complexes.
The cis-(carboxylato)(hydroxo/aqua)iron(III/II) complexes match the coordination
environment and the electronic properties of the active iron site in the resting state of
rabbit lipoxygenase as well as of the reaction intermediates postulated for the
enzymatic mechanism. In addition to being excellent structural and electronic models,
the cis-(carboxylato)(hydroxo)iron(III) complexes display reactivity in abstracting
hydrogen atoms from (weak) O–H and C–H bonds of suitable substrates, thus proving
themselves to be worthy functional model complexes for lipoxygenases. The findings
are supported with extensive structural, spectroscopic, spectrometric, magnetic, and
electrochemical investigations as well as with quantified thermodynamic and kinetic
parameters to allow for an adequate comparison between the derivatives with varying
carboxylate ligands and to other works. Moreover, the reactivity investigation of the
cis-(benzoato)(hydroxo)iron(III) complex (the first example found) was accompanied, as an exemplary case,
by a thorough theoretical study (performed by external cooperation partners), which
validates the experimental results and identifies an underlying concerted proton-coupled electron-transfer (cPCET) mechanism for the
cis-(carboxylato)(hydroxo)iron(III) complexes – analogous to the one suggested for the
enzyme.
The synthesis and study of a functional structural model complex is extremely
challenging and rarely successful. Thus, this result alone represents a significant
scientific advancement for the field, as no such model for lipoxygenases had been
reported prior to this project. The in-depth studies with derivatives of the initial cis-(benzoato)(hydroxo/aqua)iron(III/II) complexes further contribute to this
advancement by illuminating structure-function relations.
Semi-structured data is a common data format in many domains.
It is characterized by a hierarchical structure and a schema that is not fixed.
Efficient and scalable processing of this data is therefore challenging, as many existing indexing and processing techniques are not well-suited for this data format.
This dissertation presents a novel approach to processing large JSON datasets.
We describe a new data processor, JODA, that is designed to process semi-structured data by using all available computing resources and state-of-the-art techniques.
Using a custom query language and a vertically-scaling pipeline query execution engine, JODA can process large datasets with high throughput.
We optimize JODA by using a novel optimization for iterative query workloads called delta trees, which succinctly represent the changes between two documents.
This allows us to process iterative and exploratory queries efficiently.
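As an illustration of the delta idea only (this is not JODA's actual delta-tree data structure or query language), the following sketch collects the paths at which two JSON documents, represented as nested Python dictionaries, differ; a delta like this is much more compact than re-storing the changed document.

def json_delta(old, new, path=""):
    # Recursively collect paths whose values differ between two JSON documents.
    # Lists and scalars are compared atomically in this simplified sketch.
    if type(old) is not type(new):
        return {path or "/": new}
    if isinstance(old, dict):
        delta = {}
        for key in old.keys() | new.keys():
            p = f"{path}/{key}"
            if key not in new:
                delta[p] = None              # deletion
            elif key not in old:
                delta[p] = new[key]          # insertion
            else:
                delta.update(json_delta(old[key], new[key], p))
        return delta
    if old != new:
        return {path or "/": new}
    return {}

# Example: only the changed attribute appears in the delta.
doc_v1 = {"user": {"id": 1, "name": "alice"}, "tags": ["a", "b"]}
doc_v2 = {"user": {"id": 1, "name": "bob"}, "tags": ["a", "b"]}
print(json_delta(doc_v1, doc_v2))   # {'/user/name': 'bob'}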
We improve the filtering performance of JODA by implementing a holistic adaptive indexing approach that creates and improves structural and content indices on the fly, depending on the query load.
No prior knowledge about the data is required, and the indices are automatically improved over time.
JODA is also modularized and can be extended with new user-defined predicates, functions, indices, import, and export functionalities.
These modules can be written in an external programming language and integrated into the query execution pipeline at runtime.
To evaluate this system against competitors, we introduce a benchmark generator, coined BETZE, which aims to simulate data scientists exploring unknown JSON datasets.
The generator can be tweaked to generate query workloads with different characteristics, or predefined presets can be used to quickly generate a benchmark.
We see that JODA outperforms competitors in most tasks over a wide range of datasets and use-cases.
Facing the demands of the energy transition, gas turbines require continuous development to improve thermal efficiency. Since this can be achieved by further increasing the turbine inlet temperature, advanced cooling techniques are required to protect the highly loaded turbine components. This includes the first nozzle guide vane, which is located just downstream of the combustion chamber. Film cooling, i.e., injecting coolant into the hot-gas path, has been a cornerstone of turbine cooling. While the coolant film is typically supplied through discrete cooling holes, design-related gaps, e.g., the purge slot between the transition duct and the vane platform, can be utilized for injecting coolant. Since the coolant is drawn from the compressor, potentially offsetting thermal efficiency gains from increased turbine inlet temperatures, efficient use of the coolant is critical. In this context, experimental data obtained under engine-like flow conditions, i.e., matching the Mach and Reynolds numbers that are present in the engine, are indispensable for assessing the film cooling performance. Existing research on upstream slot injection has a blind spot, as all high-speed studies were conducted in linear cascades. This approach neglects, by principle, the influence of the radial pressure gradient that naturally occurs in swirling flows and potentially affects coolant propagation. Therefore, a high-speed annular sector cascade has been developed: It allows testing the film cooling performance and aerodynamic effects of coolant flows from various upstream slot configurations, not only at engine-like Mach and Reynolds numbers but also considering the radial pressure gradient. The cascade is equipped with nozzle guide vanes with contoured endwalls representing state-of-the-art turbine design. The results to be expected from the test rig are, therefore, of great relevance.
The annular sector cascade is integrated into the existing high-speed turbine test facility at the Institute of Fluid Mechanics and Turbomachinery (University of Kaiserslautern-Landau), which was previously used for testing a linear cascade with the same nozzle guide vane design. It incorporates various measurement techniques such as five-hole probes, pressure-sensitive paint, and infrared thermography to investigate both the thermal and aerodynamic aspects of film cooling. This thesis provides a detailed description of the cascade development, starting from the aerodynamic design up to the structural implementation. It also includes the results of the previous measurements in the linear cascade, as they provided the basis for refining the measurement methods.
Light is an essential aspect of daily life, exerting a profound influence on various physiological and behavioral processes, including circadian rhythms, alertness, cognition, mood, and behavior. Technological advances, particularly the widespread adoption of light-emitting diodes (LEDs), have significantly accelerated the impact of lighting on the human experience. With the increasing global accessibility to electric and modern lighting systems, there is a pressing need to scientifically investigate the human-centered effects of lighting for the billions of people worldwide who encounter natural and electric lighting in their daily lives. Extensive interdisciplinary research across fields such as physics, engineering, psychology, medicine, business administration, and architecture has explored the biological and psychological effects of lighting, underscoring the immense potential for further advancements in this domain. Notably, innovative lighting technologies and strategies hold tremendous promise in enhancing human health, performance, and overall well-being.
Beyond physical spaces, three-dimensional virtual environments, including metaverse platforms, are becoming increasingly important. Simulated lighting in virtual spaces can have visual and non-visual effects on users. As technological progress and digitalization extend globally, more individuals will be exposed to virtual lighting scenarios. Consequently, exploring the human-centered lighting effects in virtual environments offers a compelling opportunity to improve the quality of user experiences. This thesis demonstrates the adaptability of established measurement methods from physical illumination and perception research for virtual environments.
This thesis comprises three parts. The first part reviews the current state of research on lighting and its influences on humans, examines research methods in lighting research, and identifies research gaps. The second part investigates the effects of lighting on complex emotional and behavioral constructs, specifically conflict handling. Elaborate laboratory experiments explore lighting as an independent variable, including realistic correlated color temperature (CCT) levels and enhanced CCT changes. Statistical analyses provide in-depth examination and critical discussion of the effects. The third part explores lighting in virtual spaces, considering literature, methodological approaches, and challenges. Two studies investigate visual and non-visual effects, and preferences in virtual environment design. Comparative analysis of the data yields implications for research and practice, including the interdisciplinary perspective of a novel approach called human-centric virtual lighting (HCVL).
In conclusion, this thesis comprehensively explores the impact of lighting on the human experience in both physical spaces and virtual environments. By addressing research gaps and employing contemporary methodologies, the findings contribute to our understanding of the effects of lighting on humans. Furthermore, the implications for research and practice offer valuable insights for the development of innovative lighting technologies and strategies aimed at enhancing the well-being and experiences of individuals worldwide. This work highlights the relevance of interdisciplinary research involving fields such as architecture, business management, event management, computer science, design, engineering, ergonomics, lighting research, medicine, physics, and psychology in advancing our understanding of visual and non-visual lighting effects.
Interactions between flow hydrodynamics and biofilm attributes and functioning in stream ecosystems
Biofilms constitute an integral part of freshwater ecosystems and are central to regulating essential stream biogeochemical functions, such as nutrient uptake and metabolism. Understanding the environmental factors that dictate the composition of biofilm communities and their role in whole-system nutrient cycling remains challenging, given the large spatial and temporal variability of biofilm communities. Pristine mountain streams exhibit a heterogeneous streambed ranging from boulders to sand, provoking high spatiotemporal flow variability. Our current knowledge of the interactions between flow hydrodynamics and biofilm attributes stems from mesocosm studies, which are inherently limited in environmental realism. Moreover, the mechanism linking flow hydrodynamics to microbial biodiversity and ecosystem functioning has so far not been studied. My thesis aims to link streambed heterogeneity and the associated development of the flow field to biofilm attributes and nitrogen uptake based on a multidisciplinary field approach. It integrates several spatial and temporal scales ranging from millimeter-sized spots to stream reaches and from milliseconds to minutes (i.e., the hydraulic scale of velocity fluctuations), up to days, months and years (i.e., the hydrological scale of flow fluctuations). I demonstrate that the spatial niche variability of flow hydrodynamics was an essential driver of biofilm community composition, diversity and morphology, in line with the habitat heterogeneity hypothesis initially formulated for terrestrial ecosystems. Furthermore, hydraulic mass transfer associated with flow diversity and biofilm biomass determined biofilm areal nitrogen uptake at scales ranging from spots to the stream reach. At the whole-ecosystem level, flow diversity determined the quantitative role of biofilms compared to other nitrogen uptake compartments by sorting them according to prevailing flow conditions. The magnitude of effects depended on ambient nutrient background and season, suggesting a hierarchy of the environmental controls on biofilms. In summary, my interdisciplinary research provided a mechanistic understanding of how hydromorphological diversity determines the diversity, morphology, and functional role of biofilms in streams. By improving the understanding of these relationships, my research improves our ability to predict and scale measurements of important stream biogeochemical functions. Moreover, it helps to face the challenges imposed by environmental changes and biodiversity loss.
Chromosomal aberrations are manifold changes in the configuration of the DNA. Each cell in a tumor
may accumulate different karyotype changes, making it challenging to determine the causes and
consequences of this instability. Therefore, model systems have been developed in the past to
generate and study specific genome alterations. In this thesis, I present the results of my studies on
three types of chromosomal aberrations, all of which may contribute to tumor development or
progression.
Chromothripsis is a phenomenon that describes a one-off massive chromosomal disruption and
reassembly, perhaps arising via DNA damage in micronuclei (MN). MN are small DNA-packed nuclear
envelopes. I tested potential causes of DNA damage in MN and found that the rupture of the MN
envelope and the entry of cytosolic fractions increase DNA damage in MN. Furthermore, I addressed
the question of what physiological consequences cell lines with an additional rearranged chromosome
have compared to those with an intact extra chromosome. Strikingly, the cells with more
rearrangements showed a functional advantage resulting in an improved fitness potential.
However, the engineering of polysomic cell lines with fully intact additional chromosomes increases
various cellular stress responses and reduces the proliferation capacity. To investigate how cancer cells
overcome the detrimental consequences of aneuploidy, I explored physiological adaptations of model
cells with a defined additional chromosome that underwent in vivo and in vitro evolution. Interestingly,
unfavorable phenotypes of aneuploid cells, such as replication stress, were mitigated upon
evolution. Furthermore, I examined replication at single-molecule resolution, revealing alterations
after evolution that might underlie the bypass or tolerance of replication stress.
In contrast to these unbalanced forms of genomic aberrations, whole genome doubling (WGD) leads
to a fully doubled chromosome set, which was shown to evolve into aneuploid karyotypes by
chromosomal instability (CIN), frequently by losing chromosomes. Cells that underwent WGD
accumulate DNA damage in the S phase. I performed a single-molecule analysis of the DNA during the
first cell cycle after WGD to elucidate how the DNA damage arises and found that the number of active
origins is not sufficient to faithfully replicate the doubled amount of DNA in the first S phase after WGD.
This starts a genome-destabilizing cascade that eventually promotes tumorigenesis, metastasis, and
poor patient outcome.
Taken together, these studies provide insights into the causes and consequences of three types of
genomic aberrations: chromothripsis, polysomy, and WGD. However different these phenomena may
be, they share one common feature – they contribute to tumor development and progression.
Therefore, elucidating the aberrant cell functions caused by genomic aberrations contributes to a
better understanding of a cancer cell's nature and will perhaps help to find new cancer therapy targets.
This dissertation presents a generalization of the generalized grey Brownian motion with componentwise independence, called a vector-valued generalized grey Brownian motion (vggBm), and builds a framework of mathematical analysis around this process with the aim of solving stochastic differential equations with respect to this process. Similar to the one-dimensional case, the construction of vggBm starts with selecting the appropriate nuclear triple and constructing the corresponding probability measure on the co-nuclear space. Since independence of the components is essential in constructing vggBm, a natural way to achieve this is to use the nuclear triple of product spaces: \[ \mathcal{S}_d(\mathbb{R}) \subset L^2_d(\mathbb{R}) \subset \mathcal{S}_d'(\mathbb{R}), \]
where \( L^2_d(\mathbb{R}) \) is the real separable Hilbert space of \( \mathbb{R}^d \)-valued square integrable functions on \( \mathbb{R} \) with respect to the Lebesgue measure, \( \mathcal{S}_d(\mathbb{R}) \) is the external direct sum of \(d\) copies of the nuclear space \(\mathcal{S}(\mathbb{R})\) of Schwartz test functions, and \(\mathcal{S}_d'(\mathbb{R})\) is the dual space of \(\mathcal{S}_d(\mathbb{R})\).
The probability measure used is the \(d\)-fold product measure of the Mittag-Leffler measure, denoted by \(\mu_{\beta}^{\otimes d}\), whose characteristic function is given by \[ \int_{\mathcal{S}_d'(\mathbb{R})} e^{i\langle\omega,\varphi\rangle}\,\text{d}\mu_{\beta}^{\otimes d}(\omega) = \prod_{k=1}^{d}E_\beta\left(-\frac{1}{2}\langle\varphi_k,\varphi_k\rangle\right),\qquad \varphi\in \mathcal{S}_d(\mathbb{R}), \]
where \( \beta\in(0,1] \), and \( E_\beta \) is the Mittag-Leffler function. Vector-valued generalized grey Brownian motion, denoted by \( B^{\beta,\alpha}_{d}:=(B^{\beta,\alpha}_{d,t})_{t\geq 0}\), is then defined as a process taking values in \( L^2(\mu_{\beta}^{\otimes d};\mathbb{R}^d) \) given by
\[ B^{\beta,\alpha}_{d,t}(\omega) := (\langle\omega_1,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle,\dots,\langle\omega_d,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle),\quad \omega\in\mathcal{S}_d'(\mathbb{R}), \]
where \( M^{\alpha/2} \) is an appropriate fractional operator indexed by \( \alpha\in(0,2) \) and \( 1\!\!1_{[0,t)} \) is the indicator function on the interval \( [0,t) \). This process is, in general, not the aforementioned \(d\)-dimensional analogue of ggBm for \(d\geq 2\), since componentwise independence of the latter process holds only in the Gaussian case.
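For illustration, the componentwise structure can be made explicit via the characteristic function of the process at a fixed time \( t \): choosing \( \varphi = (\lambda_1 M^{\alpha/2}_{-}1\!\!1_{[0,t)},\dots,\lambda_d M^{\alpha/2}_{-}1\!\!1_{[0,t)}) \) in the characteristic function of \( \mu_{\beta}^{\otimes d} \) above gives, for \( \lambda\in\mathbb{R}^d \),
\[ \int_{\mathcal{S}_d'(\mathbb{R})} e^{i\lambda\cdot B^{\beta,\alpha}_{d,t}(\omega)}\,\text{d}\mu_{\beta}^{\otimes d}(\omega) = \prod_{k=1}^{d}E_\beta\left(-\frac{\lambda_k^2}{2}\langle M^{\alpha/2}_{-}1\!\!1_{[0,t)},M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle\right), \]
so the characteristic function factorizes over the components, reflecting the componentwise independence built into the construction.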
The study of analysis around vggBm starts with accessibility to Appell systems, so that characterizations and tools for the analysis of the corresponding distribution spaces are established. Then, explicit examples of the use of these characterizations and tools are given: the construction of Donsker's delta function, the existence of local times and self-intersection local times of vggBm, the existence of the derivative of vggBm in the sense of distributions, and the existence of solutions to linear stochastic differential equations with respect to vggBm.
This thesis focuses on novel methods to establish the utility of wearable devices along with machine learning and pattern recognition methods for formal education and address the open research questions posed by existing methods. Firstly, state-of-the-art methods are proposed to analyse the cognitive activities in the learning process, i.e., reading, writing, and their correlation. Furthermore, this thesis presents real-time applications in wearable space as an experimental tool in Physics education, and an air-writing system.
There are two critical components in analysing reading behaviour, i.e., WHERE a person looks (gaze analysis) and WHAT a person looks at (content analysis). This thesis proposes novel methods to classify the reading content in order to address the WHAT component. The proposed methods are based on a hybrid approach, which fuses traditional computer vision methods with deep neural networks. These methods, when evaluated on publicly available datasets, yield state-of-the-art results for defining the structure of document images. Moreover, extensive efforts were made to refine and correct the ICDAR2017-POD dataset, complemented by a completely new FFD dataset.
Traditionally, handwriting research focuses on character and number recognition without looking into the type of writing, i.e., text, math, and drawing. This thesis reports multiple contributions to on-line handwriting classification. First, it presents a public dataset for on-line handwriting classification, OnTabWriter, collected using an iPen and an iPad. In addition, a new feature set is introduced for on-line handwriting classification to establish a benchmark on the proposed dataset, classifying handwriting as plain text, mathematical expressions, and plots/graphs. An ablation study evaluates the performance of the proposed feature set in comparison to existing feature sets. Lastly, this thesis evaluates the importance of context for on-line handwriting classification.
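To illustrate what features for on-line handwriting can look like (these are generic toy features for a single pen stroke, not the feature set proposed in the thesis), a sketch operating on (x, y, t) samples:

import numpy as np

def stroke_features(points):
    # points: (N, 3) array of (x, y, t) samples of one pen stroke.
    xy = points[:, :2]
    d = np.diff(xy, axis=0)
    seg = np.linalg.norm(d, axis=1)
    path_len = seg.sum()
    width, height = xy.max(axis=0) - xy.min(axis=0)
    straightness = np.linalg.norm(xy[-1] - xy[0]) / max(path_len, 1e-9)
    angles = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    total_turning = np.abs(np.diff(angles)).sum()   # tends to be high for plots and formulas
    duration = points[-1, 2] - points[0, 2]
    return {"path_length": path_len, "width": width, "height": height,
            "straightness": straightness, "total_turning": total_turning, "duration": duration}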
Analysing reading and writing activities individually is not enough to identify a student's expertise unless their correlations are analysed. This thesis presents a study in which reading data from wearable eye-trackers and writing data from a sensor pen are analysed together to correlate the users' expertise in Physics education with their actual knowledge. Initial results show a strong correlation between an individual's expertise and their understanding of the subject.
Augmented and virtual reality applications can play a vital role in making classroom environments more interactive and engaging, both for teachers and learners. To validate this hypothesis, different applications are developed and evaluated. First, smart glasses are used as an experimental tool in Physics education to help learners perform experiments, providing assistance and feedback on a head-mounted display for understanding acoustics concepts. Second, a real-time application of air-writing with the finger on an imaginary canvas using a single IMU, the FAirWrite system, is also presented. The FAirWrite system is further equipped with deep learning methods to classify the air-written characters.
Faces deliver invaluable information about people. Machine-based perception can be of great benefit in extracting the underlying information in face images if the problem is properly modeled. Classical image processing algorithms may fail to handle the diverse data available today due to several challenges related to varying capturing locations and conditions. Advanced machine learning methods and algorithms are now highly beneficial due to the rapid development of powerful hardware, enabling advanced solutions based on learning from data and summarizing it into powerful models. In this thesis, novel solutions are provided to the problems of head orientation estimation and gender prediction. Initially, classical machine learning algorithms were used to address head orientation estimation but were limited by their inability to handle large datasets and by poor generalization. To overcome these challenges, a new highly accurate head pose dataset was acquired. Novel deep neural network architectures were trained on the acquired data; the information about head pose is then represented in the network weights, allowing the head orientation angles to be predicted for a new, unseen face. The acquired dataset, named AutoPOSE, opens the door for further studies in the field of computer vision and especially face analysis. The problem of gender prediction has also been explored; unlike humans, who can easily identify gender from a face, computers face difficulties due to facial similarities, and hand-crafted features do not generalize well. To address this, a new deep learning method was developed and evaluated on multiple public datasets, with identified challenges in both still images and videos addressed. Finally, the effect of facial appearance changes due to head orientation variation on gender prediction accuracy has been investigated. A novel orientation-guided feature-map recalibration method is presented that significantly increases the accuracy of gender prediction.
In conclusion, two problems have been addressed in this thesis, both independently and jointly. Existing methods have been enhanced with intelligent pre-processing, and new approaches have been introduced to tackle challenges that arise from pose, illumination, and occlusion variations. The proposed methods have been extensively evaluated, showing that head orientation and gender can be estimated with high accuracy using machine learning-based methods. The evaluations also showed that using head orientation information consistently improved gender prediction accuracy. The scientific contributions presented, together with the newly acquired, highly accurate dataset, motivate the research community to push the state of the art forward.
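A minimal sketch of what an orientation-guided recalibration of feature maps could look like (names and architecture are assumptions for illustration; the method developed in the thesis may differ): the estimated head-pose angles drive per-channel gates that re-weight the convolutional feature maps before gender classification.

import torch
import torch.nn as nn

class PoseGuidedRecalibration(nn.Module):
    # Re-weights CNN feature-map channels using the estimated head orientation.
    def __init__(self, channels, pose_dim=3, hidden=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, feats, pose):
        # feats: (B, C, H, W) feature maps; pose: (B, 3) yaw/pitch/roll angles
        scale = self.gate(pose).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return feats * scale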
Though Computer Aided Design (CAD) and simulation software are mature, well established, and in wide professional use, modern design and prototyping pipelines are challenging the limits of these tools. Advances in 3D printing have brought manufacturing capability to the general public. Moreover, advancements in machine learning and sensor technology are enabling enthusiasts and small companies to develop their own autonomous vehicles and machines. This means that many more users are designing (or customizing) 3D objects in CAD, and many are testing machine autonomy in simulation. Though graphical user interfaces (GUIs) are the de-facto standard for these tools, we find that these interfaces are neither robust nor flexible. For example, designs made using a GUI often break when customized, and setting up large simulations in a GUI can be quite tedious. Though programmatic interfaces do not suffer from these limitations, they are generally quite difficult to use, and often do not provide appropriate abstractions and language constructs.
In this thesis, we present our work on bridging the ease of use of GUIs with the robustness and flexibility of programming. For CAD, we propose an interactive framework that automatically synthesizes robust programs from GUI-based design operations. Additionally, we apply program analysis to ensure customizations do not lead to invalid objects. Finally, for simulation, we propose a novel programmatic framework that simplifies the construction of complex test environments, and a test generation mechanism that guarantees good coverage over test parameters. Our contributions help bring some of the advantages of programming to traditionally GUI-dominant workflows. Through novel programmatic interfaces, and without sacrificing ease of use, we show that the design and customization of 3D objects can be made more robust, and that the creation of parameterized simulations can be simplified.
Ecosystems are interconnected through the exchange of resources known as subsidies. Subsidies have the potential to affect the receiving ecosystem, altering its productivity and trophic cascades. The boundary between aquatic and terrestrial ecosystems provides a clear distinction between aquatic and terrestrial organisms and is a particularly interesting location for studying resource subsidies. Process-based models can aid in predicting the effects of anthropogenic stressors on food webs and in understanding the functioning of meta-ecosystems. The goal of this thesis is to contribute to the development of theories on how changes in subsidies affect recipient ecosystems, using the aquatic-terrestrial interface as a case study. In this thesis, a review of process-based food web models applied to the aquatic-terrestrial interface (aquatic-terrestrial models) and of theoretical meta-ecosystem models (theoretical models) was carried out (Chapter 2). Results show that these models have enhanced our understanding of how terrestrial subsidies affect aquatic ecosystems; the general understanding of how subsidies affect the stability and functions of meta-ecosystems was also enhanced. However, existing aquatic-terrestrial models focused primarily on how subsidies from terrestrial ecosystems affect aquatic ecosystems, with none considering reciprocal flows. Furthermore, the quality characteristics of subsidies were not taken into account, despite potential differences from alternative local resources. Therefore, Chapters 3 and 4 developed theories using terrestrial ecosystems with aquatic subsidies as a case study. Chapter 3 focused on how changes in subsidy quality affect the recipient ecosystem and hypothesized that changes in subsidy quality have a cascading effect on the recipient ecosystem (subsidy quality hypothesis). However, the model predictions were most sensitive to the input rate of inorganic nutrients in the recipient ecosystem, indicating that ecosystems are controlled by both top-down (TD) and bottom-up (BU) processes. Chapter 4 shows that the TD and BU processes of ecosystems interact antagonistically. The generated theories can be integrated into empirical research by testing predictions and assumptions, using the model equations, and adopting the framework. This thesis improves our understanding of the impacts of subsidies on recipient ecosystems. Future meta-ecosystem models may consider the cross-ecosystem flow of information to further enhance our understanding of meta-ecosystems. Additionally, aquatic-terrestrial models developed to predict algal blooms may consider developing trait-based models to improve predictions.
Machine Learning (ML) is expected to become an integrated part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge of their input data, which in many scenarios is delivered to them through wireless links. Transmission of a massive amount of such input data can impose a huge burden on the mobile network. On the other hand, it is known that ML algorithms can tolerate different levels of distortion on their input components while the quality of their predictions remains unaffected. Therefore, utilization of the conventional approaches implies a waste of radio resources, since they target an exact reconstruction of the transmitted data, i.e., the input of the ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of the final ML outputs instead of such syntax-based reconstruction of the transmitted inputs. To this end, we quantify the semantics or relevancy of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies less relevancy. A lower relevance level is translated into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundations for the efficient support of ML models with their required data in the inference phase, while wireless resources are employed efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of an ML algorithm. The ML model is seen as a black box, which is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is thus easily adjustable for application to other problems. Furthermore, an extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevancy level of each input attribute changes instantaneously. In another extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. The novel quality-of-service metric takes the outputs of ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
The introduced relevance-based framework is comprehensively investigated by considering various case studies, real and synthetic data, regression and classification, different estimators for the KLD, various ML models and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of the relevance-based compression. In all of the simulations, the relevance-based solutions deliver the best outcome in terms of the carefully chosen key performance indicators. In most of them, significantly high gains are also achieved compared to the conventional techniques, motivating further research on the subject.
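As an illustration of the general idea only (the bit-allocation strategy developed in the dissertation is more elaborate and explicitly accounts for the rate-distortion tradeoff; the model interface and feature ranges below are assumptions), the following sketch greedily assigns quantization bits to the input features of a black-box classifier so that the KLD between its outputs on quantized inputs and on full-precision inputs stays small:

import numpy as np

def kld(p, q, eps=1e-12):
    # Kullback-Leibler divergence D(p || q) between two discrete distributions.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def quantize(x, lo, hi, bits):
    # Uniform scalar quantizer with 2**bits reconstruction levels over [lo, hi].
    if bits == 0:
        return np.full_like(x, (lo + hi) / 2.0)
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

def relevance_based_allocation(model, X, ranges, budget, max_bits=8):
    # Greedily give one more bit to the feature whose refinement most reduces the
    # average KLD between model outputs on quantized and on full-precision inputs.
    # `model` is any black box returning class probabilities of shape (n, n_classes).
    d = X.shape[1]
    bits = np.zeros(d, dtype=int)
    p_ref = model(X)                                   # outputs on undistorted inputs

    def distortion(b):
        Xq = np.column_stack([quantize(X[:, j], *ranges[j], b[j]) for j in range(d)])
        pq = model(Xq)
        return np.mean([kld(p_ref[i], pq[i]) for i in range(len(X))])

    for _ in range(budget):
        current = distortion(bits)
        gains = []
        for j in range(d):
            if bits[j] >= max_bits:
                gains.append(-np.inf)
                continue
            trial = bits.copy(); trial[j] += 1
            gains.append(current - distortion(trial))
        if not np.isfinite(max(gains)):
            break                                      # every feature already at max_bits
        bits[int(np.argmax(gains))] += 1
    return bits                                        # fewer bits = less relevant feature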
Ambulatory assessment (AA) is becoming an increasingly popular research method in the fields of psychology and life science. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., number of items per questionnaire), have on AA participants’ perceived burden, data quantity (i.e., compliance with the AA protocol), and data quality is still surprisingly restricted. The aims of this dissertation were to experimentally manipulate aspects of an AA study’s sampling strategy - sampling frequency (Study 1) and questionnaire length (Study 2) - and to investigate their impact on perceived burden, data quantity, and aspects of data quality in three papers. In Study 1, students (n = 313) received either 3 or 9 questionnaires per day for the first 7 days of the study. In Study 2, students (n = 282) received either a 33- or 82-item questionnaire 3 times a day for 14 days.
Paper 1 described that a higher sampling frequency (Study 1) led to a higher perceived participant burden, but did not affect other aspects of data quantity and quality. Furthermore, a longer questionnaire (Study 2) did not affect perceived participant burden or data quantity, but did lead to lower within-person variability and a lower within-person relationship between time-varying variables. Paper 2 investigated the effects of the sampling frequency (Study 1) on careless responding by identifying careless responding indices that could be applied to AA data and by extending the multilevel latent class analysis model to a multigroup multilevel latent class analysis model. Results indicated that a higher sampling frequency did not affect careless responding. Paper 3 investigated the effects of questionnaire length (Study 2) on (the relative impact of) response styles (RS) by extending the item response tree (IRTree) modeling approach to a multilevel data structure. Results indicated that a longer questionnaire led to a greater relative impact of RS.
Although further validation of the results is essential, I hope that future researchers will integrate the results of this dissertation when designing an AA study.
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels, where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively.
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as exploiting certain properties that are characteristic for embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138k lines of C code.
Molecular simulation is an important tool for investigating the behavior of fluids and solids. Nanoscopic processes and physical properties of materials can be studied predictively based on the description of the molecular interactions by force fields. This is used in the present work to tackle engineering questions that are hard to answer with other methods. First, mass transfer at fluid interfaces was investigated on the nanoscopic level. For this purpose, two distinct simulation methods were developed and used to systematically investigate the mass transfer in mixtures of simple model fluids, described by the ‘Lennard-Jones truncated and shifted’ (LJTS) potential. The research question was whether the adsorption of components at the interface, which is observed also in many simple fluid mixtures, has an influence on the mass transfer. Such an influence was indeed found in the studies with both scenarios. Furthermore, explosions of nanodroplets caused by a spontaneous evaporation of the liquid phase were investigated with non-equilibrium molecular dynamics (NEMD) simulations. In these simulations, the interior of an LJTS droplet was superheated by a local thermostat, so that a vapor bubble nucleated inside the droplet. Depending on the degree of superheating, different phenomena were observed, ranging from a simple evaporation of the droplet over oscillatory behavior of the bubble to an immediate droplet explosion. For molecular simulations of real mixtures, suitable force fields are needed. In this work, a set of molecular models for the alkali nitrates was developed and systematically compared to experimental data on thermophysical and structural properties of aqueous alkali nitrate solutions from the literature. Lastly, the structure and clustering of 1:1 electrolytes in aqueous solution were investigated over a broad concentration range, from near infinite dilution up to high supersaturation. Based on the simulation results, an empirical rule was proposed to provide estimates of the solubility of salts with standard molecular dynamics simulations, without the need for elaborate calculation schemes or significant additional computational effort.
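For reference, the LJTS potential is the standard Lennard-Jones potential cut off at a radius \( r_c \) (commonly \( r_c = 2.5\,\sigma \)) and shifted so that it vanishes at the cutoff:
\[ u_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right], \qquad u_{\mathrm{LJTS}}(r) = \begin{cases} u_{\mathrm{LJ}}(r) - u_{\mathrm{LJ}}(r_c), & r \le r_c,\\ 0, & r > r_c, \end{cases} \]
where \( \varepsilon \) and \( \sigma \) are the energy and size parameters of the model fluid.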
This work aims to study textile structures within the framework of linear elasticity to understand how
the structure and material parameters influence the macroscopic homogenized model. More
precisely, we are interested in how the textile design parameters, such as the ratio between
fibers’ distance and cross-section width, the strength of the contact sliding between yarns,
and the partial clamp on the textile boundaries determine the phenomena that one can see in
shear experiments with textiles. One such phenomenon: the warp and weft yarns first change their
in-plane angles and, after some critical shear angle is reached, the textile plate comes out
of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left
and bottom boundary, made of long thin fibers that cross each other in a periodic pattern.
The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption,
together with the partial clamp, adds new levels of complexity to the problem due to
the anisotropy in the yarn’s behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with
respect to the yarn’s thickness r and the distance between them e, parameters that are asymptotically
related. The homogenization and dimension reduction are done via the unfolding
method, which separates the macroscopic scale from the periodicity cell. In addition to the
homogenization, a dimension reduction from a 3D to a 2D problem is applied. Adapting
the classical unfolding results to both the anisotropic context and to lattice grids (which are
constructed starting from the center lines of the rods crossing each other) are the main tools
we developed to tackle this type of model. They represent the first part of the thesis and are
published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
Given the parameters mentioned above, we then proceed to classify different textile problems,
incorporating the results from other works on the topic and thoroughly investigating
some others. After the study is conducted, we draw conclusions and give a mathematical
explanation concerning the expected approximation of the displacements, the expected solvability
of the limit problems, and the phenomena mentioned above. The results can be found
in “Asymptotic behavior for textiles with loose contact”, which has been recently submitted.
Ecotoxicology is the science that researches the effects of toxicants on biological entities. Following the famous toxicological principle formulated in 1538 by von Hohenheim, known as Paracelsus, in principle any chemical can act as a toxicant. Unlike human toxicology, which focuses on toxic effects on individuals and populations of one species, Homo sapiens, ecotoxicology is not constrained in its scope of biological entities. It is interested in toxic effects on individuals and populations of any species (excluding humans), and on communities and entire ecosystems (Walker et al., 2012; Köhler & Triebskorn, 2013; Newman, 2014). One example of where the ecological foundation of ecotoxicology manifests itself is indirect effects, which are effects on biological entities that are not directly caused by chemicals but instead are mediated by ecological interactions and environmental conditions (Walker et al., 2012). With this large scope, ecotoxicology is an inter- and multidisciplinary science that links chemical, biological and environmental knowledge.
With millions of species and at least 100,000 chemicals that potentially interact with them in the environment (Wang et al., 2021), ecotoxicology has a large ground to cover. Among these sheer numbers, there are some groups that are of special importance regarding their potential environmental impact. Pesticides are one group of chemicals that have a large, if not the largest, ecotoxicological relevance: they are toxic for biological entities, sometimes at very low concentrations, and they are used in large amounts and globally (Bernhardt et al., 2017). The high toxicity of pesticides, much higher than that of most other groups of chemicals, is a result of their intended use: they are designed to reduce detrimental effects of, e.g., insects, plants or fungi on agriculture by controlling the respective populations, often, and in the sense of their Latin name, through induced lethality (Walker et al., 2012). However, they do not act specifically enough to be toxic only to the intended species that are considered pests, but also show toxicity towards species living in habitats next to pesticide-treated areas. The widespread agricultural use of pesticides, on the other hand, is a result of their work- and cost-efficiency for securing yields, but also results in the exposure of ecosystems at a global scale (Sharma et al., 2019). In summary, pesticides can be abstractly seen as toxicity intentionally applied to agricultural areas, unintentionally also exposing organisms in non-agricultural areas to toxicity.
The risks of pesticide use for ecosystems have led major jurisdictions, like the United States of America (US) and the European Union (EU), to enact elaborate regulatory processes that require a registration of pesticides prior to their use (EFSA, 2013; EPA, 2011; Stehle & Schulz, 2015b). Regulatory threshold levels (RTLs) are a by-product of these registration processes and can be used for scientific risk analysis outside the regulatory process (Stehle & Schulz, 2015a). The RTL for an organism group is basically derived from the most sensitive effect concentrations found in standardized toxicity tests for species representative of the group, multiplied by a safety factor, although specifics differ among regulatory processes. Conceptually, RTLs mark the threshold that separates environmental concentrations associated with acceptable risk (concentrations below the RTL) from concentrations associated with unacceptable risk (concentrations above the RTL).
Due to the high degree of procedural standardization in the derivation of RTLs, they have been found to be a good measure for making the toxicities of different pesticides comparable, and they were employed in a series of studies to characterize environmental pesticide concentrations (e.g., Stehle & Schulz, 2015a; Stehle et al., 2018; Wolfram et al., 2018; Wolfram et al., 2021; Schulz et al., 2021, also in Appendix B; Bub et al., 2023, also in Appendix C). RTLs reflect, for instance, that insecticides show regulatory unacceptable concentrations towards fish between 3 ng/L (deltamethrin, a pyrethroid) and 110 mg/L (imidacloprid, a neonicotinoid), a range of more than seven orders of magnitude. At the same time, imidacloprid is very toxic to pollinators (RTL of 1.52 ng/organism), while more than 95% of all insecticides are less toxic to pollinators, with regulatory unacceptable concentrations among insecticides ranging as high as 1.6 mg/organism, a toxicity six orders of magnitude lower than that of imidacloprid.
At large scales, ecotoxicology deals with pesticide impacts on a national (e.g., Bub et al., 2023; Douglas & Tooker, 2015; Hallmann et al., 2014; Schulz et al., 2021; Stehle et al., 2019; Wolfram et al., 2018), continental (Wolfram et al., 2021) or global scale (Stehle & Schulz, 2015a; Stehle et al., 2018). This maximization of the considered scale is in line with the general tendency of ecotoxicology towards larger scales, but generally requires new methodological and conceptual approaches. Historically, individual chemicals and groups of chemicals have been identified that, owing to their immense release into the environment, mark main disruptors of processes in the Earth system, like greenhouse gases for climate change, chlorofluorocarbons for the depletion of the atmosphere's ozone layer, and dichlorodiphenyl-trichloroethane and other organochlorides for bioaccumulation in food webs and declines in bird populations. For other phenomena, however, like declines in biodiversity or in the numbers of insect species (Outhwaite et al., 2020; Seibold et al., 2019; Vörösmarty et al., 2010), the active part of chemical pollution is understood to a much lesser extent. There are indicators that pesticides may play a major role in these declines.
This dissertation contributes to the research of large-scale risks of pesticide use, and of large-scale ecotoxicology in general, in several ways (Figure 1). In Chapter 2, it presents a labeled property graph, the MAGIC graph (Meta-Analysis of the Global Impact of Chemicals graph), as a solution to the methodological issues that arise when increasing amounts of data from more and more sources are combined for analysis (Bub et al., 2019; also, in Appendix A). The MAGIC graph is able to link chemical information from different sources, even if these sources use different nomenclatures. This enables analyses that incorporate toxicological data, like thousands of RTLs (for different organism groups and jurisdictions) for hundreds of pesticides, and information on pesticide use and chemical classes. The MAGIC graph is implemented in a way that allows it to be organically extended by additional chemical, biological and environmental data, and eventually scaled to all chemicals of environmental interest.
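To illustrate the labeled-property-graph idea (the node labels, identifiers and values below are hypothetical and do not reproduce the actual MAGIC graph schema), a small sketch using networkx, where nodes carry properties, edges carry relationship types, and records from different nomenclatures are linked to one chemical node:

import networkx as nx

g = nx.MultiDiGraph()

# Chemical node plus records from two hypothetical sources with their own identifiers
g.add_node("chem:example_pesticide", label="Chemical", preferred_name="example pesticide")
g.add_node("src_a:record:42", label="SourceRecord", source="nomenclature A", identifier="A-42")
g.add_node("src_b:record:xyz", label="SourceRecord", source="nomenclature B", identifier="B-xyz")
g.add_node("rtl:fish:acute:1", label="RTL", organism_group="fish", value_ug_per_L=0.1)

# Labeled edges (relationship types) link the nomenclatures and the toxicity data
g.add_edge("src_a:record:42", "chem:example_pesticide", key="REFERS_TO")
g.add_edge("src_b:record:xyz", "chem:example_pesticide", key="REFERS_TO")
g.add_edge("chem:example_pesticide", "rtl:fish:acute:1", key="HAS_THRESHOLD")

# Example query: all RTL nodes reachable from the chemical, regardless of source nomenclature
rtls = [v for _, v, k in g.out_edges("chem:example_pesticide", keys=True) if k == "HAS_THRESHOLD"]
print(rtls)   # ['rtl:fish:acute:1']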
Chapter 3 shows how the combination of the linked pesticide data with a systemic consideration of pesticide use supports the interpretation of pesticide risks in the US (Schulz et al., 2021; also in Appendix B). This systemic approach includes a new measure, the total applied toxicity (TAT), which integrates used pesticide amounts and pesticide toxicities, and the consideration of pesticide use as a complex system whose state and evolution can be visualized in phase-space plots. The combination of the described methods and concepts led to a novel view on pesticide risks in the US and can provide a framework for future ecotoxicological research at large scales.
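A schematic way to read "integrates used pesticide amounts and pesticide toxicities" is as applied amounts expressed in threshold-normalized units and summed over all pesticides (the precise definition of TAT is given in Schulz et al., 2021, and may differ in detail):
\[ \mathrm{TAT} \;=\; \sum_{i}\frac{m_i}{\mathrm{RTL}_i}, \]
where \( m_i \) is the amount of pesticide \( i \) applied in the region and period considered and \( \mathrm{RTL}_i \) is its regulatory threshold level for the organism group under consideration.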
Chapter 4 presents the results of applying the methods and concepts of the US pesticide risk analysis to Germany (Bub et al., 2023; also in Appendix C). A pesticide risk analysis of Germany is of special importance in the context of the EU's goal to drastically reduce pesticide risks (European Commission, 2020) and Germany being one of the important agricultural producers in the EU. A comparison of the results for Germany to those for the US also allowed an evaluation of the impact of scale and of differing RTLs, information that can support other large-scale ecotoxicological assessments. Chapter 5 adds a conclusion and an outlook.
Phycobilisomes (PBS) are the major light-harvesting complexes for the majority of cyanobacteria
and allow these organisms to absorb in the so-called green gap. They consist of smaller units called
phycobiliproteins (PBPs), which are composed of an α- and a β-subunit with covalently bound
linear tetrapyrroles (phycobilins). The latter are attached to the apo-PBPs by phycobiliprotein
lyases. Interestingly, cyanobacteria of the genus Prochlorococcus lack complete PBS and instead
use prochlorophyte chlorophyll-binding proteins (Pcbs), which effectively utilize the energy of the
blue light region. The low-light-adapted (LL) strain Prochlorococcus marinus SS120 has a single
PBP, phycoerythrin-III (PE-III). It has been postulated that PE-III is chromophorylated with the
phycobilins phycourobilin (PUB) and phycoerythrobilin (PEB) in a 3:1 ratio. Thereby, the function
of PE-III remains unclear so far, so that light-gathering function and also photoreceptor function
are discussed.
The main goal of this work was to characterize the assembly of PE-III and thus the function of the
six putative phycobiliprotein lyases of P. marinus SS120. Previous work found that the individual
lyases could not be produced in soluble form, so we switched to a dual pDuet™ plasmid system in
E. coli, which was successfully established. Investigation of the binding of PEB to Apo-PE
revealed that the CpeS lyase specifically chromophorylated Cys82 with 3Z-PEB. Unfortunately,
additional chromophorylation could not be observed using the pDuet system. Therefore, in a
second part of the work, the entire PE gene cluster from P. marinus SS120 was to be introduced
into E. coli and expressed. Although the gene cluster was successfully transcribed within E. coli,
no translation was observed, possibly due to incompatible translation initiation between
Prochlorococcus and E. coli. The introduction of a mini PE cluster (CpeAB) into the
cyanobacterium Synechococcus sp. PCC 7002 was also successfully performed, in which case
production of CpeB but not CpeA from Prochlorococcus was detected. Recombinant CpeB was
also detected together with intrinsic PBP in Synechococcus sp. 7002, indicating structural similarity
and incorporation into PBS in Synechococcus sp. 7002. Overall, the obtained results suggest that a
cyanobacterial host is a good option for the studies on the assembly of PE-III from P. marinus and,
based on this, future work could aim at generating an artificial operon using synthetic biology to
achieve efficient translation of all genes.
Biodiversity has declined by approximately 70% in the last 50 years for vertebrate and invertebrate species. This loss in biodiversity is strongly connected with anthropogenic activities, such as agricultural intensification and pollution. Currently, pesticides are needed to secure the growing global food demand, although they are recognized as one of the main drivers of biodiversity loss, mainly in agricultural areas.
In the European Union, pesticides are regulated within the risk assessment framework, which aims to protect both the environment and human health from undesirable effects. The effects on non-target organisms are mostly assessed following a “one-size-fits-all” approach, focused on sensitive species tests. However, it has been recognized that the current methodology can be improved in order to minimize undesirable effects. Aiming to provide valuable data to inform future risk assessment, this thesis focused on two terrestrial organism groups that play beneficial roles, especially in agroecosystems: earthworms and spiders.
Although the earthworm Eisenia fetida is included in pesticide regulation, its use as the only earthworm representative may lead to uncertainties for the risk assessment. Therefore, we collected ecotoxicological data on field-captured earthworm species via acute exposure to imidacloprid and copper. In addition, we investigated the relationships between earthworm chemical sensitivity, biological traits and habitat preferences, and potential links with their ecosystem services (Chapter 2). We found that earthworms sampled from extremely acidic soils were less sensitive to copper than earthworms from neutral soils. Moreover, anecic and endogeic earthworms were more sensitive to imidacloprid than epigeic earthworms.
Spiders have, thus far, been understudied in regulatory risk assessment in comparison to other non-target arthropods. Thus, we aimed to collect ecotoxicological data of spider species sampled in different European climates via acute exposure to lambda-cyhalothrin. Moreover, we explored relationships between spider chemical sensitivity, phylogeny, biological traits and habitat preferences, as well as potential links with their ecosystem services (Chapter 3). Spiders showed a high sensitivity to lambda-cyhalothrin. Furthermore, our results showed that spider sensitivity varies depending on climate. We confirmed this relationship by incorporating different rearing and test temperatures into the toxicity testing protocol (Chapter 4).
The outcomes of this thesis contribute to informing pesticide regulatory practices, allowing for improved protection and conservation of terrestrial organism groups and the ecosystem services they provide. The consideration of ecological traits, habitat variability and related plasticity, key species, and ecological network structure could improve the risk assessment framework and minimize the effects of pesticides and other stressors at the ecosystem level.
The vast majority of mitochondrial proteins are synthesized in the cytosol. These proteins carry characteristic targeting motifs within their sequence, which allow for the binding of chaperones that in turn usher precursors to the mitochondrial surface for import and assembly. Although our understanding of these early reactions is still incomplete, recent efforts have shown that the ER surface can facilitate the import of mitochondrial proteins (ER-SURF) with the help of the J-protein Djp1. Close cooperation of organelles in the form of membrane contact sites is crucial for cellular function. The aim of my work was to investigate whether ER-mitochondria contact sites are critical for the transfer of proteins from the ER to mitochondria.
Several contact sites between the ER and mitochondria have been characterized in S. cerevisiae. One contact site is called the ER mitochondria encounter structure (ERMES) and another is partly formed by Tom70. Owing to the high propensity of ERMES mutants to acquire suppressor mutations, I employed a knockdown approach to deplete this contact site. Using an inducible CRISPR interference (CRISPRi) system, I could rapidly and efficiently deplete Mdm34, which is a part of ERMES. I could show that depletion of Mdm34 had a synthetic negative effect in combination with a deletion of TOM70. Loss of both contact sites led to a strong decrease of many mitochondrial proteins in the whole cell proteome. Using affinity purification of ER and mitochondria in conjunction with mass spectrometry, I could demonstrate that a specific set of mitochondrial proteins is enriched on the ER upon loss of Mdm34 and Tom70; these were mainly proteins of the inner membrane, e.g. Oxa1 and Cox5A. Moreover, I was able to validate that the import of these proteins was hampered upon loss of both contact sites. In vivo, the biogenesis of Oxa1 was impeded upon loss of either Mdm34 or Tom70 and strongly impaired if both were lost. Analysis of the maximum hydrophobicity of inner membrane proteins in the ER-SURF set revealed, on average, a significantly higher peak compared to other inner membrane proteins. Using an in vitro import assay, I could show that deleting the transmembrane domain of Cox5A made its import independent of contact sites, whereas swapping it made the import reliant on contact sites.
In this study I was able to demonstrate the involvement of membrane contact sites in ER-SURF and identify a list of putative clients. Furthermore, I could show that hydrophobicity of the transmembrane segment of inner membrane proteins is one determinant for ER-SURF dependence.
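To illustrate the kind of peak-hydrophobicity metric referred to above, the following is a minimal sketch; the Kyte-Doolittle scale, the window length of 19 residues, and the example sequence are assumptions for illustration and not necessarily the scoring used in this work.

    # Kyte-Doolittle hydropathy values for the 20 standard amino acids
    KD = {
        "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
        "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
        "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
        "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
    }

    def max_hydropathy(seq, window=19):
        """Return the maximum mean hydropathy over all sliding windows."""
        scores = [KD[aa] for aa in seq]
        return max(
            sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)
        )

    # Hypothetical hydrophobic transmembrane-like stretch in a polar context
    print(max_hydropathy("DDEEKK" + "LIVLLAVILFFVLLIVAGL" + "KKEEDD"))

A higher peak value of such a sliding-window score corresponds to a more hydrophobic transmembrane segment, the property associated here with contact site dependence.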
The booming global market of nanomaterials in the last few decades has led to the inevitable emission of these materials into aquatic environments; hence, understanding their physical, chemical, and biological transformations has become a major concern for environmental scientists. Despite a great deal of effort made to understand the mobility, fate, and risk assessment of, e.g., TiO2 nanoparticles, it is still unclear whether results obtained under lab-controlled conditions can be generalized to realistically released nanoparticles in aquatic environments, since the complex dynamics of environmental conditions are not completely reproducible under controlled conditions.
In the present study, we proposed a new approach to expose TiO2 nanoparticles to the environmental conditions of natural surface waters by making use of dialysis membranes as passive reactors. The function of these reactors is based on the permeability of the membrane to the dissolved matter of surface waters, while TiO2 nanoparticles do not pass through the membrane. These systems benefit from the fact that the complexity and temporal variability of most environmental parameters of the surface waters are reproduced inside the reactors, while colloidal and particulate interferences remain excluded. Furthermore, no significant reduction in pore size, i.e. membrane fouling, was observed in the dialysis bags after exposure to surface waters, which validates the efficiency of the system.
Taking advantage of these reactors to expose nanoparticles to surface waters, we investigated the physicochemical parameters of the surface waters that influence the formation of natural coatings on nanoparticles. Hence, dialysis bags were used to expose TiO2 nanoparticles, in situ, to ten different surface waters in the spring and summer of 2019. Due to the complexity of the natural dissolved matter of the surface waters as well as its low natural concentrations, we needed to use a combination of analytical techniques and multivariate data analysis to investigate the coatings. The initial findings were similar to the lab-controlled exposure studies in the literature, showing pH, electrical conductivity, and Ca2+/Mg2+ concentration as the three most important parameters of surface waters controlling the formation of coatings. Nonetheless, we came across a phenomenon that is overlooked under lab-controlled conditions: natural coatings are composed not only of organics (DOM: dissolved organic matter) but also of inorganics (carbonate), which implies that realistic coatings are more complex than what previous studies described.
The second part of this thesis focused on investigating the interactions of more realistic nanoparticles (TiO2 nanoparticles extracted from 11 sunscreens) with DOM. Using ToF-SIMS combined with high-dimensional data analysis, we tried to find a general DOM-sorption pattern among TiO2 nanoparticles, since finding this pattern could ultimately have opened a way to assess the fate of (more) realistic nanoparticles in aquatic environments. Contrary to our expectations, the results showed a unique sorption pattern for each sunscreen, controlled by the composition of the sunscreens, implying that the sorption pattern of each sunscreen should be investigated individually. In the next step of this study, we used random forest to extract the most important fragments of DOM sorbed onto each sunscreen, followed by an effort to assign these important masses to chemical fragments.
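As a sketch of the feature-ranking step described above, the snippet below ranks mass fragments by random forest importance; the synthetic data, fragment indices, and target variable are placeholders, not the ToF-SIMS data analysed in this work.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_samples, n_fragments = 40, 200
    X = rng.random((n_samples, n_fragments))       # synthetic fragment intensities
    y = 2.0 * X[:, 10] - 1.5 * X[:, 42] + 0.1 * rng.standard_normal(n_samples)  # synthetic response

    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    ranking = np.argsort(model.feature_importances_)[::-1]
    print("Most important fragments (indices):", ranking[:5])

In practice, the highest-ranked fragments are then inspected manually and tentatively assigned to chemical structures, as described above.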
Trying to provide a comprehensive understanding of the interactions of released n-TiO2 in aquatic environments, in future studies we are going to expand our coating research to different types of TiO2 nanoparticles, such as particles extracted from paint, where the reaction media (surface waters) cover a wide range of water parameters representative of various ecosystems. Making use of state-of-the-art techniques as well as multivariate data analysis, we will try to achieve a model describing the sorption mechanisms of dissolved matter of surface waters onto nanoparticles. Such studies can eventually lead us to a better understanding of the fate of released nanoparticles under natural conditions.
The massive use of chemicals by humans is increasing pollution of the world’s ecosystems. Yet, knowledge about exposure and effects of chemicals in real-world ecosystems remains limited. Prediction of chemical effects in the context of ecotoxicological research and chemical regulation continues to focus on organism- or population-level responses established under simplified conditions while aiming to protect the functioning of ecosystems. A unified, comprehensive framework for the prediction of chemical effects in real-world ecosystems is still lacking. A major limitation of ecotoxicological studies considered in predictive modelling is that they rarely consider spatial dynamics (e.g. gene flow or species dispersal) as relevant processes influencing the trajectory of populations or communities, respectively. For instance, the spatial propagation of pesticide effects from polluted to least impacted sites has been predicted in several modelling studies but has not yet been characterised in the field.
The thesis starts in Chapter 1 with a brief introduction to chemical pollution in ecosystems, chemical effect prediction in ecotoxicology, and pesticides in freshwater ecosystems, then outlines the main objectives of the thesis. Subsequently, Chapter 2 presents a conceptual study about the current prediction of chemical effects in ecotoxicology and potential future avenues to improve the ecological relevance of effect predictions by addressing the integration of different levels of biological organisation (termed biological levels). The study shows that approaches and tools that currently contribute to the prediction of chemical effects can be attributed to three idealised perspectives: the suborganismal, organismal and ecological perspective. The perspectives focus on different biological levels and are associated with distinct scientific concepts and communities. They complement each other, so theoretical and empirical links between them may enhance prediction by capturing the entire phenomenon of chemical effects, from chemical uptake to ecosystem effects. Complex experimental studies accounting for eco-evolutionary dynamics are needed to cross barriers between biological levels as well as spatiotemporal scales. Overall, the conclusions of Chapter 2 may help to develop overarching frameworks for predicting chemical effects in ecosystems, including for untested species. Chapters 3 and 4 present a field study combined with laboratory analyses on the potential propagation of pesticides and their effects from agricultural stream sections to the edge of least impacted upstream sections, which can serve as refuges for many species. The study examines exposure and effects for different biological levels at three site types, namely pesticide-polluted agricultural sites (termed agriculture), least impacted upstream sites (termed refuge) and transitional sites (termed edge), in six small streams of south-west Germany. The results in Chapter 3 show that regional transport of pesticides can lead to ecologically relevant pesticide exposure in forested sections within a few kilometres upstream of agricultural areas (i.e. at both edge and refuge sites). As further demonstrated in Chapter 3, the tested indicators of community responses (Jaccard Index, taxonomic richness, total abundance, SPEARpesticides) together suggest a species turnover from upstream refuge to downstream agricultural sites and a potential influence of adjacent agriculture on the edge sites. In contrast, Chapter 4 does not identify any particular edge effect that distinguishes organisms and populations at edge sites from those at more upstream refuge sites. Gammarus fossarum populations at edges show levels of imidacloprid tolerance, energy reserves (i.e. lipid content) and genetic diversity equal to those of populations further upstream. Gammarus spp. from agricultural sites exhibit a lower imidacloprid tolerance compared to edge and refuge sites, potentially due to energy trade-offs in a multiple stressor environment, but related effects do not propagate to the edges (Chapter 4). Notwithstanding, the results of Chapter 4 indicate bidirectional gene flow between site types, supporting the hypothesis that adapted genotypes – if present at locally polluted sites – could spread to populations at least impacted sites. Taken together, Chapters 3 and 4 illustrate that pesticides and their effects can potentially propagate to least impacted upstream sections, findings that are, to our knowledge, empirically novel.
The results of this thesis can help in predicting or explaining population and community dynamics in least impacted habitats and can ultimately inform pesticide management as well as freshwater restoration and the protection of biodiversity.
Streams and their adjacent terrestrial ecosystems are tightly linked via the flux of organisms and matter. Emergent aquatic insects can be an important food source for riparian predators such as bats, birds, spiders, and lizards. Information about the quality, quantity and phenology of emergent aquatic insects is necessary to estimate how riparian predators can benefit from them as a food source. Though intensive agriculture is a globally dominant land use, little is known about how agricultural land use affects the quantity, quality and phenology of emergent aquatic insects. Typically, emergent aquatic insects contain more long-chain polyunsaturated fatty acids (PUFA) than terrestrial insects. Long-chain PUFA in particular were shown to enhance the growth and immune response of spiders and birds.
In chapter 2, the PUFA transfer to spiders and the effect of food sources differing in their PUFA profiles on spiders were examined in outdoor microcosms under environmentally realistic conditions (i.e., normal weather conditions and the possibility to construct orb webs as in their natural habitat). The environmental context determined how PUFA can affect the spiders. For instance, besides the PUFA profiles of the food sources, environmental variables such as temperature were important for the growth and body condition of spiders.
In the third chapter, the effect of agricultural land use on the quantity (in terms of biomass and abundance), phenology and composition of emergent aquatic insects was assessed. Previous studies were limited to single seasons or single time points, which hampered determining annual biomass export and shifts in phenology. Therefore, emergent aquatic insects were sampled continuously over the primary emergence period of one year and environmental variables associated with agricultural land use were monitored. Total biomass and abundance were higher (by 61–68% and 79–86%, respectively) in agricultural than in forested sites. In addition, a turnover of emergent aquatic insect assemblages and a shift in the phenology of aquatic insects were identified. In agricultural sites, 71% of aquatic insect families emerged earlier than in forested sites. Pesticide toxicity was associated with the biomass and abundance of different aquatic insect orders. During the same experiment, spiders were sampled in spring, summer, and autumn. Additionally, the fatty acid (FA) content of the spiders and emergent aquatic insects was determined. These results are presented in chapter 4. The FA export via emergent aquatic insects was higher (by 26–29%) in forested than in agricultural sites, which indicates a reduced quality of aquatic insects as a food source for riparian predators in agricultural sites. The FA profiles of mayflies, flies and caddisflies differed between land-use types, but those of spiders did not. Shading and pool habitats were the most important environmental variables for the FA profiles, though environmental variables explained only little of the variation in FA profiles. Overall, the quantity, quality and phenology of emergent aquatic insects differed between land-use types, which can affect population dynamics in the adjacent terrestrial ecosystem. Our results can be used in modeling food-web dynamics or meta-ecosystems to improve the understanding of linked ecosystems.
Research across virtually all subfields of psychology has suffered from construct proliferation, often resulting in redundant constructs that strongly overlap conceptually and/or empirically. Such cases of old wine in new bottles, i.e., established constructs with new labels, are instances of the jangle fallacy and are problematic because they lead to fragmented literatures and thereby considerably impede the accumulation of knowledge.
The present thesis aims at demonstrating how to scrutinize potential jangle fallacies in a theory-driven, deductive, and falsificationist way. Using the example of the common core of aversive traits, D, I discuss the ways one can find and test differences between more or less overlapping, competing constructs. Specifically, the first paper tests the plausibility of a potential jangle fallacy with respect to D and a Fast Life History Strategy, concluding that the latter is unlikely to represent the common core of aversive traits at all. The remaining three papers test the distinctness of D from FFM Agreeableness, HEXACO Honesty-Humility, and a blend of the two, AG+, all of which are conceptually and empirically remarkably similar to D but could nevertheless be dissociated from it, thereby also refuting an instance of the jangle fallacy.
Although research often places emphasis on similarities, it is impossible to conclusively prove the equivalence of constructs. I therefore conclude that a falsificationist approach is more informative in that it allows testing whether any differences identified on a conceptual level can be confirmed empirically. Stated differently, if a new construct is dissociable both theoretically and empirically, one may assume that it is functionally distinct and not an instance of the jangle fallacy.
Industrial robots are vital in automation technology, but their limitations become evident in applications requiring high path accuracy. This research focuses on improving the dynamic path accuracy of industrial robots by integrating additional sensor technology and employing intelligent feed-forward control. Specifically, the inclusion of secondary encoder sensors enables explicit measurement and compensation of robot gear deformations. Three types of model-based feed-forward controllers, namely physics-based, data-based, and hybrid, are developed to effectively counteract dynamic effects.
Firstly, a physics-based feed-forward control method is proposed, explicitly modeling joint deformations, hydraulic weight compensation, and other relevant features. Nonlinear friction parameters are accurately identified using a globally optimized design of experiments. The resulting physics-based model is fully continuously differentiable, facilitating its transformation into a code-optimized flatness-based feed-forward control.
Secondly, a data-based feed-forward control approach is introduced, leveraging a continuous-time neural network. The continuous-time approach demonstrates enhanced model generalization capabilities even with limited data. Furthermore, a time domain normalization method is introduced, significantly improving numerical properties by concurrently normalizing measurement timelines, robot states, and state derivatives. Based on previous work, a method ensuring input-to-state and global asymptotic stability is presented, employing a Lyapunov function. Model stability is enforced already during training using constrained optimization techniques. Moreover, the data-based methods are evaluated on public benchmarks, extending their applicability beyond the field of robotics.
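One possible reading of such a time domain normalization is the joint rescaling below; the notation is schematic and not necessarily the exact scheme used in this work:
\[
\tau = \frac{t - t_0}{T}, \qquad \tilde{x} = \frac{x - \mu_x}{\sigma_x}
\quad\Longrightarrow\quad
\frac{d\tilde{x}}{d\tau} = \frac{T}{\sigma_x}\,\frac{dx}{dt},
\]
so that normalizing the measurement timeline by \(T\) rescales the state derivatives accordingly and keeps states and derivatives on comparable numerical ranges during training.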
Both the physics-based and data-based models are combined into a hybrid model. Comparative analysis of the three models reveals that the continuous-time neural network yields the highest model accuracy, while the physics-based model delivers the best safety properties. The effectiveness of all three models is experimentally validated using an industrial robot.
Symplectic linear quotient singularities belong to the class of symplectic singularities introduced by Beauville in 2000.
They are linear quotients by a group preserving a symplectic form on the vector space and are necessarily singular by a classical theorem of Chevalley-Serre-Shephard-Todd.
We study \(\mathbb Q\)-factorial terminalizations of such quotient singularities, that is, crepant partial resolutions that are allowed to have mild singularities.
The only symplectic linear quotients that can possibly admit a smooth \(\mathbb Q\)-factorial terminalization are, by a theorem of Verbitsky, those by symplectic reflection groups.
A smooth \(\mathbb Q\)-factorial terminalization is in this context referred to as a symplectic resolution, and over the past two decades there has been an ongoing effort to classify exactly which symplectic reflection groups give rise to quotients that admit symplectic resolutions.
We reduce this classification to finitely many, precisely 45, open cases by proving that for almost all quotients by symplectically primitive symplectic reflection groups no such resolution exists.
Concentrating on the groups themselves, we prove that a parabolic subgroup of a symplectic reflection group is generated by symplectic reflections as well.
This is a direct analogue of a theorem of Steinberg for complex reflection groups.
We further study divisor class groups of \(\mathbb Q\)-factorial terminalizations of linear quotients by finite subgroups \(G\) of the special linear group and prove that such a class group is completely controlled by the symplectic reflections - or more generally junior elements - contained in \(G\).
We finally discuss our implementation of an algorithm by Yamagishi for the computation of the Cox ring of a \(\mathbb Q\)-factorial terminalization of a linear quotient in the computer algebra system OSCAR.
We use this algorithm to construct a generating system of the Cox ring corresponding to the quotient by a dihedral group of order \(2d\) with \(d\) odd acting by symplectic reflections.
Although our argument follows the algorithm, the proof does not logically depend on computer calculations.
We are able to derive the \(\mathbb Q\)-factorial terminalization itself from the Cox ring in this case.
Solving probabilistic-robust optimization problems using methods from semi-infinite optimization
(2023)
Optimization under uncertainty is one field of mathematics which is strongly inspired by real world problems. To handle uncertainties, several models have arisen. One of these is the probust model, where a combination of probabilistic and worst-case uncertainty is considered. So far, only problem instances with a special structure could be dealt with. In this thesis, we introduce solution techniques applicable to any probust optimization problem. On the one hand, we create upper bounds for the solution value by solving a sequence of chance constrained optimization problems. These bounds are based on discretization schemes which are inspired by semi-infinite optimization. On the other hand, we create lower bounds by solving a sequence of set-approximation problems. Here, we substitute the original event set by an appropriate family of sets. We examine the performance of the corresponding algorithms on simple packing problems for which we can provide the probust solution analytically. Afterwards, we solve a water reservoir and a distillation problem and compare the probust solutions with solutions arising from other uncertainty models.
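For orientation, a generic probust constraint combines a probability level over the random parameter with a worst case over the uncertainty set; in schematic notation (assumed here, not taken verbatim from the thesis):
\[
\mathbb{P}\Bigl(\, g(x,u,\xi) \le 0 \ \ \text{for all } u \in U \,\Bigr) \;\ge\; p,
\]
where \(x\) is the decision variable, \(\xi\) the random parameter, \(U\) the worst-case uncertainty set, and \(p\) the prescribed probability level. Replacing \(U\) by finitely many points, as in semi-infinite discretization schemes, yields the sequence of chance constrained problems used for the upper bounds.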
Efforts in decarbonization lead to electrification, not only for road vehicles but also in the sector of mobile machines. Aside from batteries, those machines are electrified by tethering systems, nowadays featuring an AC low-voltage system. Such systems are applied, e.g., to underground load haul dumpers with short tethering lines and low machine power. To expand tethering to further markets such as agricultural machinery, this work proposes an HVDC tethering system allowing higher machine power and transmission length due to thinner, lighter tethering lines. The HVDC voltage is converted by distribution over a number of series-connected DC/DC converters. Lower blocking voltage on the semiconductors allows faster switching technology to reduce the converters' weight and volume. The concept's modularity allows for flexible adaptation to various application scenarios. Since comparable concepts exist for offshore wind farm connectivity, the applicability of the concept in this context is discussed. A full-bridge inverter/rectifier LLC resonant DC/DC converter is presented for the modules. A switched LTI converter model is developed and a Common Quadratic Lyapunov Function (CQLF) is computed to prove stability. The converter control features soft startup and voltage control over all modules. The concepts are validated by simulation and on a scaled prototype.
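For reference, the standard common quadratic Lyapunov function condition for a switched LTI model with subsystem matrices \(A_i\) asks for a single symmetric matrix \(P\) satisfying
\[
P \succ 0, \qquad A_i^{\top} P + P A_i \prec 0 \quad \text{for all modes } i,
\]
so that \(V(x) = x^{\top} P x\) decreases along every subsystem and hence under arbitrary switching; the converter-specific system matrices are derived in the thesis and are not reproduced here.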
Augmented (AR), Virtual (VR) and Mixed Reality (MR) are on their way into everyday life. The recent emergence of consumer-friendly hardware to access this technology has greatly benefited the community. Research and application examples for AR, VR and MR can be found in many fields, such as medicine, sports, cultural heritage, teleworking, entertainment and gaming. Although this technology has been around for decades, immersive applications using it are still in their infancy. As manufacturers increase the accessibility of these technologies by introducing consumer-grade hardware with natural input modalities such as eye gaze or hand tracking, new opportunities but also problems and challenges arise. Researchers strive to develop and investigate new techniques for dynamic content creation or novel interaction techniques. It remains to be determined which interactions users find intuitive. A major issue is that the possibilities for easy prototyping and rapid testing of new interaction techniques are limited and largely unexplored.
In this thesis, different solutions are proposed to improve gesture-based interaction in immersive environments by introducing gesture authoring tools and developing novel applications. Specifically, hand gestures should be made more accessible to people outside this specialised domain. First, a survey is introduced which explores one of the largest and most promising application scenarios for AR, VR and MR, namely remote collaboration. Based on the results of this survey, the thesis focuses on several important issues to consider when developing and creating applications. At its core, the thesis is about rapid prototyping based on panorama images and the use of hand gestures for interaction. Therefore, a technique to create immersive applications with panorama-based virtual environments including hand gestures is introduced. A framework to rapidly design, prototype, implement, and create arbitrary one-handed gestures is presented. Based on a user study, the potential of the framework as well as the efficacy and usability of hand gestures are investigated. Next, the potential of hand gestures for locomotion tasks in VR is investigated. Additionally, it is analysed how lay people can adapt to the use of hand tracking technology in this context. Lastly, the use of hand gestures for grasping virtual objects is explored and compared to state-of-the-art techniques. Within this thesis, different input modalities and techniques are compared in terms of usability, effort, accuracy, task completion time, user rating, and naturalness.
In the context of distributed networked control systems, many issues affect the performance and functionality of the connected subsystems, mainly arising from the communication medium imposed on the system structure. The communication functionality must generally cope with the data exchange requirements between system entities. Therefore, due to the limited communication resources, especially in wireless networks, an optimal algorithm for the assignment of communication resources and a proper selection of the right Medium Access Control (MAC) protocol are highly needed.
In this dissertation, we studied several problems raised by communication networks in wireless networked control systems, with a particular focus on the effect of standard Medium Access Control (MAC) protocols on the overall control system performance. We examined the effect of both the Time Division Multiple Access (TDMA) and the Orthogonal Frequency Division Multiple Access (OFDMA) protocols and developed a set of distributed algorithms that suit their specification requirements.
As a benchmark, we used a vehicle dynamics optimal control problem in which the objective of the optimization problem is to penalize the maximal utilization of the tires' adhesion forces for a given driving maneuver. The problem was decomposed into a distributed form using primal and dual decomposition techniques, and solution algorithms were derived using both primal and dual subgradient methods. The problem solver was tested with respect to a wireless networked system structure and evaluated for different communication topologies, such as unidirectional, bidirectional, and broadcasting topologies.
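As a generic reminder of the dual subgradient mechanics behind such a decomposition (the concrete vehicle dynamics problem in this work differs in its details), assume a coupling constraint \(\sum_i A_i x_i = b\) and local Lagrangian terms \(L_i\); the iteration then reads
\[
x_i^{k+1} \in \operatorname*{arg\,min}_{x_i} \; L_i\bigl(x_i, \lambda^{k}\bigr), \qquad
\lambda^{k+1} = \lambda^{k} + \alpha_k \Bigl(\sum_i A_i x_i^{k+1} - b\Bigr),
\]
where each subsystem solves its local problem in parallel and only the multiplier update requires communication over the network, which is exactly where the MAC protocol enters.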
Later, the setup of the solution algorithms was extended concerning the specification of the TDMA and OFDMA protocols, and we introduced an event-triggered scheme into the solver algorithm. The proposed event-triggered scheme is mainly utilized to reduce communication between concurrent computation subsystems, which is primarily intended to facilitate real-time efficiency.
Next, we investigated the effect of the data exchange between subsystems on the overall solver performance and adapted the sensitivity analysis concept within the event-based communication scheme. An adaptive sensitivity-based TDMA algorithm was developed to manage the extensive communication resource requests, and channel utilization was adapted for the optimal solution behavior.
In the last part of the thesis, we extended our research direction to the multi-vehicle concept and investigated the communication resource allocation problem in the context of the OFDMA protocol. We developed an adaptive sensitivity-based OFDMA protocol based on linking the evolution of the application layer to the communication layer and assigning the communication resources concerning the sensitivity analysis of the optimization problem at the application layer.
Tropical reservoirs are recognized as globally important sources of greenhouse gases (GHG). Tropical mountainous areas with high hydroelectric development have been poorly studied. The objective of this study is to understand GHG dynamics in tropical mountain reservoirs. Data on seasonal and diurnal GHG dynamics were collected during six field campaigns in the Porce III reservoir in the Colombian Andes, which evidenced the importance of oxic CH4 production for the variability of dissolved gases at the surface, as well as the role of water level variations as a factor influencing GHG fluxes at the seasonal scale. CO2 fluxes at the reservoir water-atmosphere interface were monitored with a high-resolution technique over periods of several weeks, from which the importance of primary productivity in the diurnal cycling of the CO2 flux was inferred, showing an alternation between sink and source, and pulses of CO2 flux at the synoptic scale were observed as a consequence of the simultaneous occurrence of increased surface concentrations and high wind speed. In laboratory experiments, a relationship was found between rain rate, turbulent kinetic energy dissipation rate and gas transfer rate, contributing to the modeling of this phenomenon with applicability to inland waters. In general, the results obtained contribute to the understanding of GHG dynamics in eutrophic tropical reservoirs.
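A commonly used small-eddy formulation that links the gas transfer velocity to near-surface turbulence, given here only for orientation (the fitted relation in this study may differ), is
\[
k_L = c \,(\varepsilon\,\nu)^{1/4}\, Sc^{-1/2},
\]
where \(\varepsilon\) is the turbulent kinetic energy dissipation rate, \(\nu\) the kinematic viscosity of water, \(Sc\) the Schmidt number of the gas, and \(c\) an empirical constant; rainfall increases \(\varepsilon\) at the surface and thereby enhances \(k_L\).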
This dissertation contributes to the emerging research field on men’s underrepresentation in communal domains such as health care, elementary education, and the domestic sphere (HEED). Since these areas are traditionally associated with women and therefore counter-stereotypic for men, various barriers can hinder men’s higher participation. We explored these relations using the example of how men’s interest in parental leave – as a form of communal engagement – is shaped across different stages of the transition to fatherhood. Specifically, we focused on how gendered beliefs regarding masculinity and fatherhood, the possible selves men can imagine for their future, and the social support men receive from their normative environment relate to their intentions to take parental leave and their engagement in care more broadly. In Chapter 2, using experimental designs, we examined how different representations of a prototypical man, varying in stereotypic agentic and counter-stereotypic communal content, affect men’s hypothetical intentions to take leave and their communal possible selves. Findings suggested that a combined description of a prototypical man as agentic and communal tended to increase men’s parental leave-taking intentions as compared to a control condition. In line with contrast effects, also an exclusively agentic male prototype tended to push men towards more communal outcomes. In Chapter 3, in a cross-sectional examination of the parental leave-taking intentions of expectant fathers, we found first evidence for a link between male prototypes and men’s behavioral preferences to take parental leave after birth. Yet, the support that expectant fathers received from their partners for taking parental leave emerged as the strongest predictor of men’s leave-taking desire, intention, and expected duration. In Chapter 4, using longitudinal data collected during men’s transition to fatherhood, we studied discrepancies between men’s prenatal caregiver and breadwinner possible selves and their actual postnatal engagement in each domain. Results suggested that fathers, on average, expected and desired to share childcare and breadwinning rather equally with their partners but had difficulties translating their intentions into behavior. The extent to which fathers experienced discrepancies was related to their attitudes towards the father role and the social support they received for taking parental leave and engaging in childcare. Moreover, experiencing a mismatch between their expected, desired, and actual division of labor had consequences for fathers’ intentions to take parental leave in the future. Across the empirical chapters, we found that men generally had high communal intentions and did not consider care engagement as nonnormative for their gender. However, men continue to face barriers that prevent them from translating their communal intentions into behavior. We outline strengths and limitations of the present research given the emerging nature of the research field. Moreover, we discuss implications for future research on men’s orientation towards care as well as implications for how to foster the realization of communal intentions into actual behavior.
Toxicology, the study of the adverse effects of chemicals and physical agents on living organisms, is a critical process in chemical and drug development. The low throughput, high costs, limited predictivity and ethical concerns related to traditional animal-based toxicity studies render them impractical to assess the growing number and complexity of both existing and new compounds and their formulations. These factors together with the increasing implementation of more demanding regulations, evidence the current need to develop innovative, reliable, cost effective and high throughput toxicological methods.
The use of metabolomics in vitro presents the powerful combination of a human-relevant system with a multiparametric approach that allows assessing multiple endpoints in a single biological sample. Applying metabolomics in a cell-based system offers an alternative both to the ethical concerns and limited relevance of animal testing and to the restrictive nature of single-endpoint evaluations characteristic of conventional toxicological in vitro assays. However, there are still challenges that hamper the expansion of metabolomics beyond a research tool to a feasible and implementable technology for toxicology assessment.
The aim of this dissertation is to advance the applications of in vitro metabolomics in toxicology by addressing three major challenges that have limited its widespread implementation in the field. In chapter 2, the high cost and low throughput of in vitro metabolomics were addressed through the development, standardization and proof of concept of a high-throughput targeted LC-MS/MS in vitro metabolomics platform for the characterization of hepatotoxicity. In chapter 3, the use of the developed in vitro metabolomics system was expanded beyond hazard identification to deriving dose- and time-response metrics that were shown to be useful for point of departure (PoD) estimations in human risk assessment. Finally, in chapter 4, in order to increase the reliability of and confidence in using in vitro metabolomics data for risk assessment, we attempted to improve the human relevance of the metabolomics in vitro assays by implementing and evaluating in vitro metabolomics in a hiPSC-derived 3D liver organoid system.
The work developed here demonstrates the suitability of in vitro metabolomics for mechanism-based hazard identification and risk assessment. By advancing the applications of metabolomics in toxicology, this work has significantly contributed to the aim of 21st-century toxicology of human-relevant, non-animal toxicological testing, supporting the task of toxicology to protect human health and the environment.
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
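In symbols, the conjecture asserts that for every cubic graph \(G\) there is a partition of the edge set
\[
E(G) = E(T)\ \dot\cup\ E(C)\ \dot\cup\ E(M),
\]
where \(T\) is a spanning tree of \(G\), \(C\) is a disjoint union of cycles, and \(M\) is a matching, with \(C\) and \(M\) possibly empty.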
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
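The core containment test behind this question can be illustrated as follows; this minimal sketch only checks a single given graph against a set of forbidden graphs (as induced subgraphs, using networkx) and does not perform the path-width-bounded enumeration developed in the thesis.

    import networkx as nx
    from networkx.algorithms.isomorphism import GraphMatcher

    def contains_forbidden(G, forbidden):
        """True if G contains some graph from `forbidden` as an induced subgraph."""
        return any(GraphMatcher(G, H).subgraph_is_isomorphic() for H in forbidden)

    # Example: the complete graph K4 contains a triangle, the 6-cycle does not.
    forbidden = [nx.cycle_graph(3)]
    print(contains_forbidden(nx.complete_graph(4), forbidden))  # True
    print(contains_forbidden(nx.cycle_graph(6), forbidden))     # False

The algorithmic difficulty addressed in the thesis lies not in this single check, but in certifying the answer for all graphs of a given class and bounded path-width at once.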
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that verifying the existence of an \(st\)-path in a local certification setting cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Inland waters, such as freshwater impoundments, are significant and variable sources of the greenhouse gas methane to the atmosphere. In water bodies, methane is mainly produced in the organic-matter rich bottom sediment, where it can accumulate, form gas voids, and be transported to the atmosphere by gas bubbles escaping the sediment. The bubble-mediated transport of methane, known as methane ebullition, is a commonly dominant pathway of methane emissions in freshwater reservoirs. Ebullition results from a complex interplay of several simultaneous physical and bio-geochemical processes acting at different timescales, leading to highly variable fluxes in both space and time. Although the sediment matrix is a hot spot for gas production and accumulation, there is a lack of in-situ data on free gas storage in reservoirs and on the interaction among sediment gas storage, methane budget, and methane ebullition. Several environmental variables are known to be ebullition drivers; however, simulating the temporal dynamics of ebullition and identifying the governing factors across different systems remains challenging. Therefore, the main goal of this thesis was to investigate the effect of different drivers on the spatial variability and temporal dynamics of methane ebullition in impoundments. Two contrasting reservoirs, one subtropical and one temperate, were investigated. High-frequency measurements of ebullition fluxes and environmental variables, and acoustic-based mapping of gas content in the sediment were performed in both reservoirs, constituting the dataset for this study. The main findings are presented in three scientific manuscripts. The spatial distribution of gas content in the sediment was primarily controlled by sediment deposition and water depth, with shallow regions of high sediment deposition being hot spots of free gas accumulation in the sediment. Temporal changes in gas content in the sediment were linked to the methane budget components in the reservoir and further influenced by the temporal dynamics of ebullition. While the sediment could store the equivalent of days of potential methane production, enough to sustain months of mean ebullition flux, periods of intensified ebullition led to a depletion of the gas stored in the sediment. Large spatial scale ebullition drivers, such as pressure changes, resulted in the synchronization of ebullition events across different monitoring sites. Nevertheless, the degree of correlation between ebullition and environmental variables varied from one system to another and over time. Thermal stratification was an important modulator in the relationship between ebullition and other environmental variables, such as bottom currents and turbulence. The temporal dynamics of ebullition could be captured and reproduced by empirical models based on known environmental variables. However, these models failed to reproduce the sub-daily variability of ebullition and demonstrated poor performance when transferred from one system to another. Lastly, although some questions remain unanswered, the findings from this study contribute to advancing the understanding of the complex dynamics of methane ebullition and its controls in freshwater reservoirs.
The area of the Baiturrahman Grand Mosque in Banda Aceh, Indonesia, is a trade and service area and, at the same time, a historical site with a rich historical heritage. Nevertheless, only a few people walk along the corridors of the town. People prefer driving to commute within the area and to stop at their destination point. Some of them walk, but only for 20 to 30 meters. At the same time, the number of motor vehicles is growing significantly. Research in 2016 shows that 77% of 3600 respondents use a motorcycle for daily trips. Traffic jams appear during rush hours.
A walkability concept is an approach to this problem because it provides social, economic, and environmental benefits. Before analyzing the case study of Banda Aceh, the writer determined a definition of walkability for the context of the research. Journals from Indonesia, Malaysia, and Thailand were compared to learn how researchers in the three countries define walkability. The researchers describe it in three ways: creating or adapting a definition, building variables, or starting by defining the elements that shape it. After building a definition, the writer chose the parameters most often used by the researchers. The chosen parameters become the variables used to evaluate the site conditions in the case study.
The absence of pedestrians and the trend of using motor vehicles in the case study area are a time bomb that will endanger human survival in the future. This research investigates three aspects to address the problem: the people, the physical environment, and policy. Structured questionnaires were distributed across the research site to capture people's reasons for not walking. Observations at the research site provide a clear picture of the condition of the pedestrian system and its physical environment. To gain a deeper understanding, the writer studied the official planning documents related to city spatial plans and pedestrian development.
Kaiserslautern in Germany serves as a comparative study in this research because it is one of the best-practice examples of pedestrian development. The local government has been building the pedestrian zone system since the 1960s, and it was entirely successful 38 years after construction began. Moreover, the city has two planning tools for the transport plan. Firstly, the Mobilitätsplan Klima+ 2030 provides information about people's mobility, standards, principles of transport development, and strategic guidelines for traffic development. Secondly, the Nahverkehrsplan Stadt Kaiserslautern covers service and trip performance, minimum standards, connection reliability, and the development of the local transportation network, including various investment steps.
The questionnaire results show that two-thirds of the respondents who visited the old city center declined to walk due to personal reasons and the weather. This is because most of them own motor vehicles. Meanwhile, there are obstacles and damaged sections along the pedestrian lanes. The barriers are broken lane material, traders' products, street vendors, street cafés, and plants. Nonetheless, Banda Aceh has a plan for pedestrian system development in its city spatial plan. The document designates four segments for pedestrian lane development.
This research adds knowledge to the field of urban pedestrian development. It can serve as a consideration in researching and planning pedestrian system development for cities that face a similar problem. Moreover, it helps promote a healthy, sustainable town that can protect people and the environment from pollution in the future.
This dissertation project aims to examine the potential of network modelling, an increasingly popular methodology in emotion research (e.g., Fried et al., 2016), to better comprehend age-related differences in structural connections between cognitive processes such as fluid intelligence and executive control functions. Furthermore, it aims to identify the key variables that link self-regulation to executive control functions and age-related discrepancies. Lastly, it seeks to delve into the key variables and correlations between executive control functions, self-regulation, and affect utilizing a longitudinal design in combination with machine learning as a data-driven method.
In study 1, differences between the cognitive performance networks of younger (M = 38.0 years of age, SD = 9.9) and older (M = 64.1 years of age, SD = 7.7) adults were explored. Network modelling showed that while speeded attention is essential throughout the life-span, connections between fluid intelligence and working memory were stronger, and intelligence was more central in the older group. Additionally, confirmatory factor modelling demonstrated that latent correlations were highest between working memory and intelligence, particularly in older adults, whereas inhibition had the lowest correlations with other abilities. This research suggests that the relations of cognitive abilities may differ between younger and older adults, indicating process-specific changes in the cognitive performance network.
In study 2, we investigated the connections of self-regulation (SR) and executive control functions (EF), which are theoretical concepts encompassing various cognitive abilities supporting the regulation of behavior, thoughts, and emotions (Inzlicht et al., 2021; Wiebe & Karbach, 2017). Evidence, however, implies that correlations between self-report measures and performance-based tasks are often difficult to observe (e.g., Eisenberg et al., 2019). We investigated connections and overlap between different aspects of SR and EF in a life-span sample (14-82 years). Participants completed several self-report measures and behavioral tasks, such as sensation seeking, mindfulness, grit, or eating behavior questionnaires and working memory, inhibition, and shifting tasks. Network models for a youth, middle-aged, and older-aged group were estimated to identify key variables that are well connected in the SR and EF construct space. In general, stronger connections were observed within the clusters of SR and EF than between them, and older adults appeared to have more connections between SR and EF than younger individuals, probably because of declining cognitive resources.
In study 3, we analyzed the intricate links between EF, SR and affect, as well as individual differences in these relations. Bridgett et al. (2013) proposed that EF and SR are psychological constructs that support the regulation of cognition and affect. A total of 315 participants, aged 14 to 80, answered questionnaires and took part in behavioral tasks which evaluated EF, SR, and both positive and negative affect twice (one month apart). Combined X-means and deep learning algorithms aided in the separation of two distinct groups that featured different EF performances, SR tendencies, and affective experiences. Network model analysis was then utilized to confirm the connections between the EF, SR, and affect variables in each of the two groups. The two groups displayed maximal centrality for variables linked to SR and positive affect. Group membership remained mostly consistent (85%) across both measurement occasions. Logistic regression indicated that age and personality (conscientiousness, neuroticism, and agreeableness) predicted group membership. This sheds light on stable individual differences in the complex relations of EF, SR, and affect.
This dissertation project utilized a combination of standard approaches (such as confirmatory factor analysis; CFA) and advanced approaches (such as network models, machine learning algorithms, and deep learning) to explore the connections between cognitive abilities, EF, SR, and affect. Our findings are in line with the theory of process-specific changes and age-related dedifferentiation. Findings suggested that connections between SR and EF were stronger within than between clusters, and that positive affect was better connected to SR than to EF measures. Lastly, age and personality traits were found to predict the clusters. These findings suggest that computational modelling is an effective exploratory tool for understanding how cognitive abilities and other psychological constructs may interact. Further research is necessary to gain deeper insights into the mechanisms behind differences in network structures.
This thesis deals with the modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continuing with the components to set up a graph-based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single-consumer single-producer network, to a generalized problem suitable to model simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real world DHN and optimize their performance by optimal control, as indicated in a case study.
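For orientation, the thermal energy transport along a single pipe is typically modelled by a linear advection equation with a heat-loss term; in schematic form (coefficients lumped, not the exact formulation of the thesis):
\[
\partial_t T(x,t) + v(t)\,\partial_x T(x,t) = -\lambda \bigl(T(x,t) - T_{\mathrm{ext}}\bigr),
\]
where \(v\) is the flow velocity determined by the network hydraulics, \(T_{\mathrm{ext}}\) the ambient temperature, and \(\lambda\) a lumped heat-loss coefficient; coupling conditions at the network nodes then mix the temperatures of incoming pipes.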
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities, in certain cases, tend to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with increasing the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that the usage of the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
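To fix notation for the discussion above: the discretized Stokes problem is a saddle point system, and Uzawa-type methods are preconditioned iterations on its pressure Schur complement. In schematic form (a sketch of the standard formulation, not of the thesis-specific discretization):
\[
\begin{pmatrix} A & B^{\top} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad
S = B A^{-1} B^{\top},
\]
so that the pressure satisfies \(S p = B A^{-1} f\). Uzawa-type methods are efficient when \(S\) is well conditioned, i.e. spectrally close to the identity, which is exactly the property that breaks down for geometries with a large surface-to-volume ratio.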
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
A new class of amines that are promising solvents for reactive CO2-absorption processes was thoroughly investigated in a comprehensive experimental study. The amines are all derivatives of triacetoneamine and differ only in the substituent of the triacetoneamine ring structure. These amines are abbreviated by the acronym EvA with a consecutive number that designates the derivatives. About 50 EvAs were considered in the present study, from which 26 were actually synthesized and investigated as aqueous solvents. The investigated properties were: solubility of CO2, rate of absorption of CO2, liquid-liquid and solid-liquid equilibrium, speciation (qualitative and quantitative), pK-values, pH-values, foaming behavior, density, dynamic viscosity, vapor pressure, and liquid heat capacity. All 26 EvAs were assessed in an experimental screening. The results were compared with the results of two standard solvents from industry: aqueous solvents of monoethanolamine (MEA) and a solvent blend of methyl-diethanolamine and piperazine (MDEA/PZ). Detailed studies were carried out for two EvAs that revealed significantly improved performance compared to MEA and MDEA/PZ: EvA34 combines favorable properties of MEA and MDEA/PZ in one molecule. EvA25 reveals a liquid-liquid phase split that reduces the solubility of CO2 in the solvent and shifts the CO2 into the aqueous phase. This allowed the design of a new CO2-absorption process that takes advantage of the liquid-liquid phase split. Finally, the chemical speciation in 16 EvAs was investigated by NMR spectroscopy. From the results, relationships between the chemical structure of the EvAs and the observed speciation, basicity, and application properties were established. This enabled giving guidelines for the design of new amines and proposing new types of amines, which were called ADAMs.