Many amphibians and insects have a biphasic life cycle, linking aquatic and terrestrial ecosystems. In temperate wetlands, insect communities are largely dominated by midges, such as non-biting chironomids and mosquitoes. In particular, chironomids and their aquatic larvae play a key role for both aquatic and terrestrial predators, e.g., dragonflies and damselflies (Odonata), birds, riparian spiders and amphibians. Therefore, adverse effects on chironomid larvae induced by pesticides or biocides can have implications for food webs across ecosystem boundaries.
In floodplains of the Upper Rhine Valley in southwest Germany, the biocide Bacillus thuringiensis var. israelensis (Bti) has been applied for over 40 years to reduce the nuisance caused by mass emergence of mosquitoes. Due to its specific mode of action, Bti is presumed to be a more environmentally friendly alternative to the non-selective, highly toxic pesticides used in the past. However, research on indirect effects of Bti on non-target organisms inhabiting these wetlands is still relatively scarce. The aim of this thesis was to investigate direct and indirect effects of Bti on non-target organisms and, consequently, bottom-up effects on aquatic food webs and their propagation to the terrestrial ecosystem. Effects were examined in outdoor floodplain pond mesocosms (FPMs) with natural flora and fauna communities.
Benthic macroinvertebrate communities were significantly altered in Bti-treated FPMs, largely due to a reduction of chironomid density by over 40% compared to untreated FPMs. Sampling of exuviae indicated that the emergence of Libellulidae (Odonata) was reduced by Bti, while larger Aeshnidae were not affected. This finding suggested increased intraguild predation (predation among competing predators) in Bti-treated FPMs as a result of decreased prey availability, i.e., chironomid larvae. This conclusion was partly confirmed by food web analyses using stable isotopes of C and N and fatty acids, with Aeshnidae showing a slight diet shift towards larger prey (i.e., newts, Aeshnidae) in Bti-treated FPMs. In contrast, the diet proportions of newt larvae were not affected by the Bti treatment, but showed a marginal trend towards lower omega-6 fatty acid content. Analyses of oxidative stress biomarkers did not reveal any direct effects of Bti on common frog tadpoles under natural climatic conditions.
This thesis emphasizes that adverse effects of Bti on the base of aquatic-terrestrial food webs, i.e., reduction of larval chironomids, can have implications for higher trophic levels and cascade to terrestrial ecosystems. Affected organisms also include species of concern, such as protected Odonata species. In view of the global insect and amphibian decline, the large-scale use of Bti in (partially protected) wetlands should be carefully considered.
Living systems incessantly engage in the regulation of their cellular processes to fulfill their biological functions. Beyond development-related adjustments or cell cycle oscillations, environmental fluctuations compel the system to reorganize metabolic pathways, structural components, or molecular repair and reconstitution mechanisms. These responses manifest across diverse temporal scales, necessitating an intricate regulatory orchestration. Time series experiments have become increasingly popular for charting the chronological order of these responses and elucidating the underlying mechanisms. In the era of high-throughput technologies, the majority of cellular molecules can be analyzed in one fell swoop, generating a comprehensive snapshot of the system's current state. Methodological advancements also permit the monitoring not only of molecular abundances but also of the functional status of transcripts and proteins. However, due to the still high effort associated with such experiments, the number of measured time points and the replication of measurements remain limited. The resulting datasets contain signals from thousands of molecules, yet they are sparse in temporal resolution and often imprecise due to biological variability and technical measurement inaccuracies.
This thesis explores the complexities arising from the examination of short time series data and introduces pioneering tools that offer fresh insights into biological time series analysis. The broad spectrum of analytic possibilities ranges from a molecule-centric investigation of individual time courses to a holistic aggregation of the system's response into its main characteristics. By creating a modeling framework that applies domain-specific constraints, time-course signals can be transformed from a series of discrete data points into a continuous curve. These curves align with current biological conjectures about molecule kinetics being smooth and devoid of superfluous oscillations. Noise present at individual time points is judiciously accounted for during curve fitting, mitigating the impact of time points with high variance on the curve. Subsequent classification is based on the features of these curves (extreme points and inflection points) and ensures a reduction in data volume and complexity. Succinct labels assigned to each molecule's kinetics encapsulate the signal's most notable features. Besides this modeling approach, an innovative enrichment strategy is introduced that is independent of prior data partitioning and capable of segregating the temporal response into its thermodynamically relevant components. This approach allows for a continuous assessment of each molecule's contribution to these components, obviating the need for exclusive allocation. The application of various analytical approaches to heat acclimation experiments in Chlamydomonas highlights the relevance and potential of time series experiments and specifically tailored analysis techniques. The integration of different system levels has led to the identification of regulatory peculiarities, such as an increased correlation between transcripts and corresponding proteins during acclimation responses. These and other insights may herald new avenues of research that could ultimately enhance plant robustness in the face of increasing environmental perturbations.
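To illustrate this kind of constrained curve fitting, the following is a minimal sketch that uses a weighted smoothing spline as a stand-in for the thesis' modeling framework; the time points, values, variances, and smoothing parameter are made-up examples, not data or code from the thesis.

```python
# Minimal sketch (illustrative only): fit a smooth curve to a short, noisy
# time course with inverse-variance weights, then read off extreme and
# inflection points of the fitted curve on a fine grid.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])         # measurement times (h)
y = np.array([1.0, 1.4, 2.1, 2.6, 2.4, 1.8, 1.2])           # mean signal per time point
var = np.array([0.05, 0.02, 0.10, 0.03, 0.20, 0.04, 0.05])  # replicate variance

# Noisy time points get lower weight, so they pull the curve less.
spline = UnivariateSpline(t, y, w=1.0 / np.sqrt(var), k=3, s=len(t))

grid = np.linspace(t[0], t[-1], 500)
d1 = spline.derivative(1)(grid)
d2 = spline.derivative(2)(grid)

extrema = grid[np.where(np.diff(np.sign(d1)) != 0)[0]]      # sign change of slope
inflections = grid[np.where(np.diff(np.sign(d2)) != 0)[0]]  # sign change of curvature
print("extreme points near t =", np.round(extrema, 2))
print("inflection points near t =", np.round(inflections, 2))
```

Labels such as "early peak" or "monotone decline" could then be derived from the number and position of these curve features.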
The growing popularity of time series experiments necessitates dedicated analytical approaches that empower researchers and analysts to decipher patterns, discern trends, and unravel the underlying structures within the data, facilitating predictions and the derivation of meaningful conclusions that could potentially build bridges between the interwoven system levels.
Distributed message-passing systems have become ubiquitous and essential for our daily lives. Hence, designing and implementing them correctly is of utmost importance. This is, however, very challenging at the same time. In fact, it is well-known that verifying such systems is algorithmically undecidable in general due to the interplay of asynchronous communication (messages are buffered) and concurrency. When designing communication in a system, it is natural to start with a global protocol specification of the desired communication behaviour. In such a top-down approach, the implementability problem asks, given such a global protocol, if the specified behaviour can be implemented in a distributed setting without additional synchronisation. This problem has been studied from two perspectives in the literature. On the one hand, there are Multiparty Session Types (MSTs) from process algebra, with global types to specify protocols. Key to the MST approach is a so-called projection operator, which takes a global type and tries to project it onto every participant: if successful, the local specifications are safe to use. This approach is efficient but brittle. On the other hand, High-level Message Sequence Charts (HMSCs) study the implementability problem from an automata-theoretic perspective. They employ very few restrictions on protocol specifications, making the implementability problem for HMSCs undecidable in general. The work in this thesis is the first to formally build a bridge between the world of MSTs and HMSCs. To start, we present a generalised projection operator for sender-driven choice. This allows a sender to send to different receivers when branching, which is crucial to handle common communication patterns from distributed computing. Despite this first step, we also show that the classical MST projection approach is inherently incomplete. We present the first formal encoding from global types to HMSCs. With this, we prove decidability of the implementability problem for global types with sender-driven choice. Furthermore, we develop the first direct and complete projection operator for global types with sender-driven choice, using automata-theoretic techniques, and show its effectiveness with a prototype implementation. We are the first to provide an upper bound for the implementability problem for global types with sender-driven (or directed) choice and show it to be in PSPACE. We also provide a session type system that uses the results from our projection operator. Last, we introduce protocol state machines (PSMs) – an automata-based protocol specification formalism – that subsume both global types from MSTs and HMSCs with regard to expressivity. We use transformations on PSMs to show that many of the syntactic restrictions of global types are not restrictive in terms of protocol expressivity. We prove that the implementability problem for PSMs with mixed choice, which requires no dedicated sender for a branch but solely all labels to be distinct, is undecidable in general. With our results on expressivity, this answers an open question: the implementability problem for mixed-choice global types is undecidable in general.
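To make the notions of global type and projection concrete, consider the following toy example in standard MST notation (illustrative only, not taken from the thesis):

$$G \;=\; \mathtt{p} \to \mathtt{q} : \{\, \ell_1 .\ \mathtt{q} \to \mathtt{r} : m_1 .\ \mathsf{end}\ ,\ \ \ell_2 .\ \mathtt{q} \to \mathtt{r} : m_2 .\ \mathsf{end} \,\}$$

Projection yields one local specification per participant: p makes an internal choice towards q, q offers an external choice to p and forwards the outcome to r, and r offers an external choice between the two messages from q,

$$G{\upharpoonright}\mathtt{p} = \mathtt{q} \oplus \{\ell_1 . \mathsf{end},\ \ell_2 . \mathsf{end}\}, \qquad G{\upharpoonright}\mathtt{q} = \mathtt{p} \,\&\, \{\ell_1 . \mathtt{r} \oplus m_1 . \mathsf{end},\ \ell_2 . \mathtt{r} \oplus m_2 . \mathsf{end}\}, \qquad G{\upharpoonright}\mathtt{r} = \mathtt{q} \,\&\, \{m_1 . \mathsf{end},\ m_2 . \mathsf{end}\}.$$

Sender-driven choice additionally allows the choosing participant to address different receivers in different branches (e.g. p sends to q in one branch and to r in the other), a pattern that classical plain-merge projection operators reject.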
Coastal port-industrial areas are becoming increasingly significant due to urban shrinkage, population decline, and climate change. To address social and economic issues and enhance climate resilience, it is crucial to anticipate urban shrinkage in both stable and growing coastal areas that are undergoing economic transformation. Urban planning can better understand the dynamics of planning for urban shrinkage and climate resilience, as port-industrial areas have a large economic impact on nearby coastal communities.
This dissertation examines the long-term implications of urban shrinkage in coastal port-industrial areas in the context of climate change and sea level rise in England. The research problem is that current urban policy does not adequately address the challenges of urban shrinkage and climate resilience in these areas. The research questions are: What are the population changes in local areas in England? What effect does population decline have on changing urbanisation patterns in older industrial areas? What type of adaptation efforts were made in North East Lincolnshire, England, and Bremerhaven, Germany, in response to the 2013 tidal surge, and how did this affect urban shrinkage?
The dissertation applies an integrated concept of Shrinkage-Resilience as a framework for analysis. The methodology includes a review of existing models and frameworks, as well as case studies of international and local contexts. The findings suggest that between 2013 and 2019, 68% of older industrial areas (including coastal ports) in England underwent changing urbanisation patterns relative to population, land use, and green belt areas, and are key areas for urban policy, such as the Levelling Up agenda. One of these areas, North East Lincolnshire, is discussed and compared to Bremerhaven. These examples demonstrate the link between Shrinkage-Resilience approaches and their practical implementation in coastal port-industrial areas affected by urban shrinkage.
This research advances the scientific practice of urban planning and policy-making for shrinking cities by introducing the approach of Shrinkage-Resilience, which emphasises the importance of considering long-term social, economic, and environmental impacts in urban shrinkage contexts. This approach is crucial in the transition to a more sustainable and inclusive society, in which the welfare of present and future generations, the environment, and economic development are taken into account. The dissertation provides recommendations for urban planning to incorporate policy changes for shrinking cities and coastal port-industrial areas worldwide, including disaster risk reduction and climate change adaptation approaches.
To increase the situational awareness of the crane operator, the aim of this thesis is to develop vision-based deep learning object detection from the crane's load view, using adaptive perception in the construction area. Conventional worker detection methods are based on simple shape or color features of the workers' appearance. However, these methods can fail to recognize workers who do not wear protective gear. Finding a suitable image representation of the object from the top view is crucial, and doing so manually with handcrafted features is difficult. We therefore employed deep learning methods to learn those features automatically.
To yield optimal results, deep learning methods require large amounts of data. Due to the data deficit, especially in the construction domain, we developed a photorealistic virtual world to generate data in addition to the samples collected from real construction sites. The simulation platform benefits not only from diverse data types but also from concurrent research developments, which speed up the pipeline at low cost.
Our research findings indicate that the combination of synthetic and real training samples improved the state-of-the-art detector. In line with previous studies on bridging the gap between synthetic and real data, using preprocessed synthetic images yields results that are better than using the raw data by approximately 10%.
Finding the right deep learning model for load-view detection is challenging.
By investigating our training data, it becomes evident that the majority of bounding boxes are very small and appear against complex backgrounds.
In addition, we gave priority to speed over accuracy based on construction safety criteria. Finally, RetinaNet was chosen out of the three primary object detection models.
Nevertheless, the data-driven detection algorithm can fail to handle scale invariance, especially for detectors whose input varies over an extremely wide range of scales.
An adaptive zoom feature can enhance the quality of worker detection.
To avoid further data gathering and extensive retraining, the proposed automatic zoom method for the load-view crane camera supports the deep learning algorithm, specifically for the problem of high scale variance. A finite state machine is employed as the control strategy, adapting the zoom level to cope not only with inconsistent detections but also with abrupt camera movement during lifting operations. Consequently, the detector is able to detect small objects through smooth, continuous zoom control without additional training.
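As an illustration of such a control strategy, the following is a minimal, hypothetical sketch of a detection-driven zoom state machine; the states, thresholds, and zoom limits are illustrative assumptions, not the implementation from the thesis.

```python
# Minimal sketch (assumed parameters): a finite state machine that adapts the
# zoom level of a load-view camera based on detection feedback.
from enum import Enum

class ZoomState(Enum):
    SEARCH = 0   # no reliable detection -> zoom out to widen the view
    TRACK = 1    # stable detections -> keep zoom level
    REFINE = 2   # detections too small -> zoom in gradually

MIN_BOX, MAX_BOX = 0.02, 0.25            # detected box height relative to image height
ZOOM_STEP, ZOOM_MIN, ZOOM_MAX = 0.1, 1.0, 4.0

def update(state, zoom, detections, camera_moved):
    """One FSM step: returns (next_state, next_zoom)."""
    if camera_moved:                      # abrupt crane/camera motion: reset the view
        return ZoomState.SEARCH, max(ZOOM_MIN, zoom - 2 * ZOOM_STEP)
    if not detections:                    # lost the workers: widen the field of view
        return ZoomState.SEARCH, max(ZOOM_MIN, zoom - ZOOM_STEP)
    mean_size = sum(detections) / len(detections)
    if mean_size < MIN_BOX:               # boxes too small for the detector: zoom in
        return ZoomState.REFINE, min(ZOOM_MAX, zoom + ZOOM_STEP)
    if mean_size > MAX_BOX:               # zoomed in too far: back off
        return ZoomState.TRACK, max(ZOOM_MIN, zoom - ZOOM_STEP)
    return ZoomState.TRACK, zoom          # detections in the comfortable range

# Example: relative box heights per frame, plus a camera-motion flag.
state, zoom = ZoomState.SEARCH, 1.0
for boxes, moved in [([], False), ([0.01], False), ([0.015], False), ([0.05], True)]:
    state, zoom = update(state, zoom, boxes, moved)
    print(state.name, round(zoom, 2))
```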
The adaptive zoom control not only enhances the performance of top-view object detection but also reduces the crane operator's interaction with the camera system, lowering the risk of fatal accidents during load lifting operations.
Aquatic habitats are closely linked to the adjacent riparian area. Fluxes of nutrients, energy and matter through emerging aquatic insects are a key component of the aquatic subsidy to terrestrial systems. In fact, adult insects serve as high-quality prey for riparian predators. Stressors impacting the aquatic subsidy can thus translate into consequences for the receiving terrestrial food web, while mechanistic knowledge is extremely limited. Against this background, this thesis aimed at (i) assessing the impact of a model stressor specifically targeting insect emergence, that is the mosquito control agent Bacillus thuringiensis var. israelensis (Bti), on the quantity, temporal dynamics and (ii) quality of emerging aquatic insects. For this purpose, outdoor floodplain pond mesocosms (n = 6) were employed. Since emergence is in most cases not a point event but occurs over a longer period, it was monitored over 3.5 months. The model stressor, i.e., Bti applied three times during spring at 2.88 × 10^9 ITU/ha, shifted the emergence time of aquatic insects, especially of non-biting midges (Diptera: Chironomidae), by ten days with a 26% reduced peak, while the nutrient content was not altered. On this basis, (ii) the propagation of the effects on aquatic subsidy emergence to riparian predators was investigated. Stable isotope analyses were used to assess the diet of a model predator, the web-building riparian spider Tetragnatha extensa. The results suggested changes in the composition of the spider's diet to replace missing Chironomidae with other aquatic and terrestrial prey organisms, pointing to further negative consequences. Finally, the thesis aimed at (iii) understanding the processes underlying an altered emergence of aquatic subsidy mainly consisting of chironomids. Using a laboratory-based test design, populations of Chironomus riparius (n = 6) were assessed for their sensitivity towards Bti under different food qualities (high and low nutritious) before and after a long-term (six months) Bti exposure. Signs of phenotypic adaptation were observed in emergence time and nutrient content over multiple generations, resulting in changes in the chironomids' quantity and quality as a food source. Overall, it can be concluded that direct and indirect effects of an aquatic stressor, as well as the adaptive response to it, can alter ecosystems at different levels, including the individual, population and community level. Furthermore, this thesis highlights the importance of a temporal perspective when investigating the impact of aquatic stressors beyond ecosystem boundaries. It illustrates potential bottom-up effects on riparian predators through the altered emergence of aquatic insects, informing our understanding of meta-ecosystems and how stressors and their effects are transferred across systems. These insights will support efforts to protect and conserve natural ecosystems.
[Semi-]dry in the under-vine area?
"Investigations of meteorological-hydrological measurement variables in viticulture as an adaptation strategy to climate change and for sustainable water use of Vitis vinifera [cv. Riesling]."
Christian Ihrig & Sascha Henninger
RPTU Kaiserslautern
Human-induced climate change influences both long-term climate processes and current, short-term weather events in all regions of the earth. It manifests itself in a multitude of phenomena that differ between climate zones and bring different consequences. This research deals with the water balance of grapevines in the context of recent climate change. The aim of this project is to use meteorological-hydrological measurement variables to generate an adaptation strategy that can be transferred to all wine-growing regions in Rhineland-Palatinate, giving winegrowers the opportunity to make water available to the vine in a natural way.
Due to the increase in abiotic damage (e.g., precipitation), changes in the growing season and the increase in invasive pests, the vulnerability of the vineyard ecosystem is becoming increasingly apparent. Because of the growing number of extreme weather events (heat and drought periods), winegrowers are being forced to irrigate their vineyards in the long term. Large quantities of water are already being pumped into some vineyard regions, which in the long term is a fatal mistake with regard to falling groundwater levels. The resource-conserving management of the water balance should therefore be placed at the centre of viticultural research. Winegrowers are interested in regional and local climatic solutions and adaptation strategies in order to reduce risks for the crop and to respond to the local climatic effects of climate change. To counter this risk and minimise production losses, the adaptability of the vines' water balance must be strengthened. Accordingly, the microclimate in the Rheinhessen wine-growing region is investigated using a Scholander pressure chamber. Determining the water status for precise irrigation control of grapevines has proven effective via the pre-dawn leaf water potential (Ψpd) and the midday stem water potential (Ψstem). Physiological processes such as the stomatal conductance of the leaf guard cells, vegetative growth and photosynthesis are directly or indirectly coupled to Ψpd and Ψstem. In addition, the water balance can be significantly improved by a soil management system adapted to dry sites, for example full-area soil cover with wood chips. Furthermore, the microclimate in the vineyard is co-determined by the canopy structure, which is ensured by increased photosynthetic performance of the canopy as well as optimal aeration and light exposure. In practical viticulture, this is realised through the height of the canopy. In order to offer an alternative to herbicides in the under-vine area in view of the upcoming glyphosate ban, the agricultural machinery industry is already developing alternative implements that offer a way of counteracting weed growth in the under-vine area.
It is therefore of particular interest to analyse how soil cover in the under-vine area differs from full-area cover or from moderate drip irrigation on flat sites. In addition, this project tests options for reducing water consumption and delaying ripening (reducing Botrytis infestation, extending the ripening period, avoiding excessive alcohol content) by means of a lower canopy height for Riesling on flat sites. Four experimental variants are used in order to obtain distinct and unambiguous results (V1: drip irrigation; V2: under-vine cover with wood chips; V3: full-area wood chips; V4: control).
The German energy mix, which provides an overview of the sources of electricity available in Germany, is changing as a result of the expansion of renewable energy sources. With this shift towards sustainable energy sources such as wind and solar power, the electricity market situation is also in flux. Whereas in the past there were few uncertainties in electricity generation and only demand was subject to stochastic uncertainties, generation is now subject to stochastic fluctuations as well, especially due to weather dependency. To provide a supportive framework for this different situation, the electricity market has introduced, among other things, the intraday market, products with half-hourly and quarter-hourly time slices, and a modified balancing energy market design. As a result, both electricity price forecasting and optimization issues remain topical.
In this thesis, we first address intraday market modeling and intraday index forecasting. To do so, we move to the level of individual bids in the intraday market and use them to model the limit order books of intraday products. Based on statistics of the modeled limit order books, we present a novel estimator for the intraday indices. Especially for less liquid products, the order book statistics contain relevant information that allows for significantly more accurate predictions in comparison to the benchmark estimator.
Unlike the intraday market, the day-ahead market allows smaller companies without their own trading department to participate, since it is operated as a market with daily auctions. We optimize the flexibility offer of such a small company in the day-ahead market and model the prices with a stochastic multi-factor model already used in the industry. To make this model accessible for stochastic optimization, we discretize it in time and space using scenario trees. Here we present existing algorithms for scenario tree generation as well as our own extensions and adaptations. These are based on the nested distance, which measures the distance between two distributions of stochastic processes. Based on the resulting scenario trees, we apply the stochastic optimization methods of stochastic programming, dynamic programming, and reinforcement learning to illustrate in which context each method is appropriate.
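As an illustration of the scenario tree idea (not the nested-distance-based algorithms developed in the thesis), the following minimal sketch simulates paths of an assumed mean-reverting one-factor price model and builds a tree by stage-wise clustering into equal-probability branches; the model, parameters, and branching factor are illustrative assumptions.

```python
# Minimal sketch (illustrative only): quantile-based scenario tree construction.
import numpy as np

rng = np.random.default_rng(0)
T, n_paths, branches = 4, 5000, 3            # stages, Monte Carlo paths, children per node
kappa, mu, sigma, p0 = 0.5, 50.0, 8.0, 45.0  # assumed mean-reverting price model

# Simulate price paths (Euler step of an Ornstein-Uhlenbeck-type process).
paths = np.empty((n_paths, T + 1)); paths[:, 0] = p0
for t in range(T):
    paths[:, t + 1] = paths[:, t] + kappa * (mu - paths[:, t]) + sigma * rng.standard_normal(n_paths)

def build(bundle, t):
    """Recursively build a tree node from the path bundle reaching this node."""
    node = {"t": t, "value": float(bundle[:, t].mean()), "prob_children": [], "children": []}
    if t == bundle.shape[1] - 1 or len(bundle) < branches:
        return node
    # Sort by next-stage value and split into equal-probability chunks.
    order = np.argsort(bundle[:, t + 1])
    for chunk in np.array_split(order, branches):
        node["prob_children"].append(len(chunk) / len(bundle))
        node["children"].append(build(bundle[chunk], t + 1))
    return node

tree = build(paths, 0)
print("root value:", round(tree["value"], 2),
      "| branch probabilities:", [round(p, 3) for p in tree["prob_children"]])
```

A nested-distance-based method would instead choose node values and probabilities so that the tree stays close to the original process distribution in that metric.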
Virtual Possibilities: Exploring the Role of Emerging Technologies in Work and Learning Environments
(2024)
The present work aims to investigate whether virtual reality can support learning as well as vocational work environments. To this end, four studies were conducted, with the first set investigating the demands on vocational workers and the impact of input methods on participant performance. These studies laid the foundation needed to create studies incorporating virtual reality research. The second set of studies was concerned with the impact of virtual reality on learning performance as well as the influence of binaural stimuli presentation on task performance. The results of each study are discussed individually and in conjunction with one another. The four studies are supplemented with additional research conducted by the author as well as an analysis of the growing field of virtual reality-based research. The thesis closes by embedding the discussed work into the scientific landscape and giving an outlook on future virtual reality-based use cases.
In recent years, there has been a growing need for accurate 3D scene reconstruction. Recent developments in the automotive industry have led to the increased use of ADAS where 3D reconstruction techniques are used, for example, as part of a collision detection system. For such applications, scene geometry reconstruction is usually performed in the form of depth estimation, where distances to scene objects are obtained.
In general, depth estimation systems can be divided into active and passive. Both systems have their advantages and disadvantages, but passive systems are usually cheaper to produce and easier to assemble and integrate than active systems. Passive systems can be stereo- or multiple-view based. Up to a certain limit, increasing the number of views in multi-view systems usually results in improved depth estimation accuracy.
One potential problem for ensuring the reliability of multi-view systems is the need to accurately estimate the orientation of their optical sensors. One way to ensure a known sensor placement for multi-view systems is to rigidly fix the sensors at the manufacturing stage. Unlike arbitrary sensor placement, using a simplified and known sensor placement geometry further simplifies the depth estimation.
This leads to the concept of the light field, which parameterizes all visible light rays passing through a set of viewpoints by their intersections with an angular and a spatial plane. Applied to computer vision, this gives us a 2D set of 2D images, where the physical distances between the images are fixed and proportional to each other.
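In the common two-plane parameterization (a standard formulation, given here only for orientation and not taken verbatim from the thesis), a ray is indexed by its intersection $(u,v)$ with the camera (angular) plane and $(s,t)$ with the image (spatial) plane, giving the 4D light field $L(u,v,s,t)$. For a camera grid with focal length $f$ and baseline $b$ between adjacent views, a Lambertian scene point at depth $Z$ shifts between adjacent views by the disparity

$$d = \frac{f\,b}{Z} \qquad\Longleftrightarrow\qquad Z = \frac{f\,b}{d},$$

so depth estimation essentially reduces to measuring the slope of the corresponding line in an epipolar-plane image.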
Existing light field depth estimation methods provide good accuracy, which is suitable for industrial applications. However, the main problems of these methods are related to their running time and resource requirements. Most of the algorithms presented in the literature are tuned for accuracy, can only be run on high-performance machines and often require a significant amount of time to process the data and obtain results.
Real-world applications often have running time requirements, and often there is also a power-consumption limitation. In this dissertation, we investigate the problem of building a depth estimation system with a light field camera that satisfies the operating time and power consumption constraints without significant loss of estimation accuracy.
First, an algorithm for calibrating light field cameras is proposed, together with an algorithm for automatic calibration refinement that works on arbitrary captured scenes. An algorithm for classical geometric depth estimation using light field cameras is then proposed. Ways to optimize the algorithm for real-time use without significant loss of accuracy are presented. Finally, it is shown how the presented depth estimation methods can be extended using modern deep learning paradigms under the two previously mentioned constraints.
With the expansion of electromobility and wind energy, the number of frequency-inverter-controlled electric motors and generators is increasing. In parallel, the number of rolling bearing failures caused by inverter-induced parasitic currents also shows an increasing trend. In order to determine the electrical state of the rolling bearing, to develop preventive measures against damage caused by parasitic currents and to support system-level calculations, electrical rolling bearing models have been developed. The models are based on the electrical insulating ability of the lubricant film that develops in the rolling contacts. For the capacitance calculation of the rolling contacts, different correction factors were developed to simplify the complex tribological and electrical interactions of this region. The state-of-the-art correction factors vary widely, and their validity ranges also differ significantly, which leads to uncertainty in their general application and to a demand for further investigations in this field. In the present work, a combined simulation method is developed that can determine the capacitance of axially loaded rolling bearings. The simulation consists of an electrically extended EHL simulation for calculating the capacitance of the rolling contact, and an electrical FEM simulation for the capacitance calculation of the non-contact regions. By combining the resulting capacitance values of the two simulation methods, the total rolling bearing capacitance can be determined with high accuracy and without using correction factors. In addition, the different capacitance sources of the rolling bearing are identified through experimental investigations. After validation of the combined simulation method, it can be applied to investigate the different capacitance sources, i.e., to determine their significance compared to the total rolling bearing capacitance. The developed simulation method allows a detailed analysis of the rolling bearing capacitances, taking into account influencing factors that could not be considered before (e.g., the oil quantity in the environment of the rolling bearing). As a result, the accurate calculation of the rolling bearing capacitance can improve the prediction of harmful parasitic currents and help to develop preventive measures against them.
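For orientation, the correction-factor approach from the literature (the baseline that this work replaces with the combined EHL/FEM simulation) can be sketched as follows: the Hertzian contact is approximated as a plate capacitor and scaled by a correction factor $k_C$ accounting for the surrounding regions, and in a simple network view the inner- and outer-ring contact capacitances of each of the $Z$ loaded rolling elements act in series while the elements act in parallel:

$$C_{\mathrm{contact}} = k_C\,\varepsilon_0 \varepsilon_r \frac{A_{\mathrm{Hertz}}}{h_0}, \qquad C_{\mathrm{bearing}} \approx Z \cdot \frac{C_i\,C_o}{C_i + C_o},$$

where $A_{\mathrm{Hertz}}$ is the Hertzian contact area, $h_0$ the central lubricant film thickness, and $C_i$, $C_o$ the inner- and outer-ring contact capacitances.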
Knowledge workers face an ever increasing flood of information in their daily work. They live in a “multi-tasking craziness”, involving activities like creating, finding, processing, assessing or organizing information while constantly switching from one context to another, each being associated with different tasks, documents, mails, etc. Hence, their personal information sphere consisting of file, mail and bookmark folders as well as their content, calendar entries, etc. is cluttered with information that has become irrelevant. Finding important information thus gets harder and much of previously gained knowledge is practically lost.
This thesis explores new ways of solving this problem by investigating the potential of self-(re)organizing and especially forgetting-enabled personal knowledge assistants in the given scenario. It utilizes so-called Managed Forgetting, which is an escalating set of measures to overcome the binary keep-or-delete paradigm, ranging from temporal hiding, to condensation, to adaptive reorganization, synchronization, archiving and deletion. Managed Forgetting is combined with two other major ideas: First, it uses the Semantic Desktop as an ecosystem, which brings Semantic Web and thus knowledge graph technologies to a user’s desktop, making it possible to capture and represent major parts of a user’s personal mental model in a machine-understandable way and exploit it in many different applications. Second, the system uses explicated context information – so-called Context Spaces: context is seen as an explicit interaction element users can work with (i.e. a “tangible” object similar to a folder) and in (immersion). The thesis is structured according to the basic interaction cycle with such a system, ranging from evidence collection to information extraction and context elicitation, followed by information value assessment and the actual support measures consisting of self-(re)organization decisions (back-end) and user interface updates (front-end). The system’s data foundation consists of personal or group knowledge graphs as well as native data. This work makes contributions to all of these aspects, several of which have been investigated and developed in interdisciplinary research with cognitive scientists. On a more general level, searching and trust in such highly autonomous assistants have also been investigated.
In summary, a self-(re)organizing and especially forgetting-enabled support system for information management and knowledge work has been realized. Its different features vary in maturity: the most mature ones are already in practical use (also in industry), while the latest are just well elaborated (position papers) or rough ideas. Different evaluation strategies have been applied ranging from mere data-driven experiments to various user studies. Some of them were rather short-term with controlled laboratory conditions, others less controlled but spanning several months. Different benefits of working with such a system could be quantified, e.g. cognitive offloading effects and reduced task switching/resumption time. Other benefits were gathered qualitatively, e.g. tidiness of the information sphere and its better alignment with the user’s mental model. The presented approach has been shown to hold a lot of potential. In some aspects, however, only first steps have been taken towards tapping it, e.g. several support measures can be further refined and automation further increased.
This thesis focuses on the operation of reliability-constrained routes in wireless ad-hoc networks. A complete communication protocol that is capable of guaranteeing a statistical minimum reliability level would have to support several functionalities: first, routes that are capable of supporting the specified Quality of Service requirement have to be discovered. During operation of discovered routes, the current Quality of Service level has to be monitored continuously. Whenever significant deviations are detected and the required level of Quality of Service is endangered, route maintenance has to ensure continuous operation. All four functionalities, route discovery, route operation, route maintenance and collection and distribution of network status information, will be addressed in this thesis.
In the first part of the thesis, we propose a new approach for Quality-of-Service routing in wireless ad-hoc networks called rmin-routing, with the provision of a statistical minimum route reliability as the main route selection criterion. To achieve specified minimum route reliabilities, we improve the reliability of individual links by well-directed retransmissions, to be applied during the operation of routes. To select among a set of candidate routes, we define and apply route quality criteria concerning network load.
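The underlying reliability arithmetic can be illustrated with a small, hypothetical sketch (not the rmin-routing algorithm itself): assuming independent transmission attempts, a link with raw delivery probability p and up to k retransmissions succeeds with probability 1 - (1 - p)^(k + 1), and the route reliability is the product over its links. A greedy budget assignment then looks like this; the link probabilities and target are illustrative values.

```python
# Minimal sketch (illustrative only): greedily add retransmissions until a
# target route reliability r_min is met.
from math import prod

def link_reliability(p, k):
    """Probability that at least one of 1 + k transmission attempts succeeds."""
    return 1.0 - (1.0 - p) ** (k + 1)

def retransmissions_for_target(link_probs, r_min, k_max=5):
    ks = [0] * len(link_probs)
    while prod(link_reliability(p, k) for p, k in zip(link_probs, ks)) < r_min:
        candidates = [j for j in range(len(ks)) if ks[j] < k_max]
        if not candidates:
            return None  # target not reachable within the retransmission budget
        # Spend the next retransmission on the weakest improvable link.
        i = min(candidates, key=lambda j: link_reliability(link_probs[j], ks[j]))
        ks[i] += 1
    return ks

links = [0.9, 0.8, 0.95]          # per-hop delivery probabilities (assumed values)
print(retransmissions_for_target(links, r_min=0.95))
```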
High-quality information about the network status is essential for the discovery and operation of routes and clusters in wireless ad-hoc networks. This requires permanent observation and assessment of nodes, links, and link metrics, and the exchange of gathered status data. In the second part of the thesis, we present cTEx, a configurable topology explorer for wireless ad-hoc networks that efficiently detects and exchanges high-quality network status information during operation.
In the third part, we propose a decentralized algorithm for the discovery and operation of reliability-constrained routes in wireless ad-hoc networks called dRmin-routing. The algorithm uses locally available network status information about network topology and link properties that is collected proactively in order to discover a preliminary route candidate. This is followed by a distributed, reactive search along this preselected route to remove imprecisions of the locally recorded network status before making a final route selection. During route operation, dRmin-routing monitors routes and performs different kinds of route repair actions to maintain route reliability in order to overcome varying link reliabilities.
Modeling and Simulation of Internet of Things Infrastructures for Cyber-Physical Energy Systems
(2024)
This dissertation presents a novel approach to the model-based development and simulation-based validation of Internet of Things (IoT) infrastructures within the context of Cyber-Physical Energy Systems (CPES). CPES represents an evolution in energy management, seamlessly blending physical and cyber components for efficient, secure, and dependable energy distribution. However, the intricate interplay of these components demands innovative modeling and simulation strategies.
The work begins by establishing a robust foundation, exploring essential background elements such as requirements engineering, model-based systems engineering, digitalization approaches, and the intricacies of IoT platforms. It also introduces homomorphic encryption, a critical enabler for securing IoT data within CPES.
In the exploration of the state of the art, the dissertation delves into the multifaceted landscape of IoT simulation, emphasizing the significance of versatility, community support, scalability, and synchronization.
The core contribution emerges in the chapter on simulating IoT networks. It introduces a sophisticated framework that encompasses hardware-in-the-loop, software-in-the-loop, and human-in-the-loop simulation. This innovative framework extends the boundaries of conventional simulation, enabling holistic evaluations of IoT systems.
A practical case study on smart energy usage showcases the application of the framework. Detailed SysML models, including requirements, package diagrams, block definition diagrams, internal block diagrams, state machine diagrams, and activity diagrams, are meticulously examined. The performance evaluation encompasses diverse aspects, from hardware and software validation to human interaction.
In conclusion, this dissertation represents a significant leap forward in the integration of IoT infrastructures within CPES. Its contributions extend from a comprehensive understanding of foundational elements to the practical implementation of a holistic simulation framework. This work not only addresses the current challenges but also outlines a path for future research, shaping the landscape of IoT integration within the dynamic realm of CPES. It offers invaluable insights for researchers, engineers, and stakeholders working towards resilient, secure, and energy-efficient infrastructures.
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and to scrutinize the mechanisms and extent of the occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompass rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20–65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by them, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately supporting more effective strategies for aflatoxin management in agriculture.
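To relate the reported half-lives to dissipation rate constants, assuming simple first-order kinetics (a common description of such dissipation curves, not necessarily the exact model fitted in this work):

$$C(t) = C_0\, e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2} = 20\text{–}65~\mathrm{d} \;\Rightarrow\; k \approx 0.011\text{–}0.035~\mathrm{d}^{-1}.$$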
Velocity Based Training is an approach to load management in resistance training that uses the volitionally maximal mean concentric velocity against a given load to control training intensity, and the extent of the intra-set reduction in concentric velocity to control intra-set muscular fatigue. However, the basic prerequisite inherent in this approach, namely moving at volitionally maximal concentric velocities, means that controlling muscular fatigue on the basis of the relative velocity reduction is not feasible when resistance training is performed at volitionally submaximal velocities. This doctoral project therefore addressed the overarching research question of the extent to which an adapted approach to velocity-based load management in resistance training based on the Minimum Velocity Threshold (MVT), which uses a "Relative Stopping Velocity Threshold" ([RSVT], calculated as a multiple of the MVT in percent) for the objective autoregulation of set duration, is suitable for controlling the degree of muscular fatigue within a training set performed at volitionally submaximal concentric movement velocity.
To answer this overarching research question, an explanatory, prospective study with a quasi-experimental design was conducted. At a first session, the individual dynamic maximum strength (1-RM) for the barbell exercises bench press and deadlift was determined for all participants; the actual testing took place at a second session. At this second session, one test set with volitionally maximal and one test set with volitionally submaximal concentric movement velocity was performed per exercise at a standardized load intensity of 75% 1-RM, while the concentric velocity of each repetition was recorded with an inertial sensor unit in order to examine the fatigue-related velocity reduction of the repetitions at the end of a set performed to exhaustion.
In answer to the overarching research question of this study, it can be stated that the RSVT is in principle suitable for controlling intra-set muscular fatigue in resistance training performed at volitionally submaximal concentric movement velocities. For fitness- and health-oriented individuals, an RSVT target corridor of RSVT = 171.4–186.6% MVT was derived. If a set of barbell bench press is performed at a load intensity of 75% 1-RM and volitionally submaximal concentric movement velocity until the mean concentric velocity (MV) of a repetition drops into this target corridor due to fatigue, two to three further repetitions should still be possible before the point of momentary concentric muscular failure is reached. For performance-oriented, trained individuals, an RSVT target corridor of RSVT = 183.8–211.3% MVT was derived. If the measured MV of a repetition drops into this target corridor due to fatigue, it can be assumed with reasonable certainty that one to two further repetitions can be performed before the point of momentary concentric muscular failure is reached.
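As a worked example of how such a corridor translates into an absolute stopping velocity (the MVT value here is purely illustrative and not taken from the study):

$$v_{\mathrm{stop}} = \frac{\mathrm{RSVT}}{100\,\%}\cdot \mathrm{MVT}; \qquad \mathrm{MVT} = 0.17~\mathrm{m/s} \;\Rightarrow\; v_{\mathrm{stop}} \approx 0.29\text{–}0.32~\mathrm{m/s}\ (171.4\text{–}186.6\,\%~\mathrm{MVT}), \quad \approx 0.31\text{–}0.36~\mathrm{m/s}\ (183.8\text{–}211.3\,\%~\mathrm{MVT}).$$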
Through this further development of Velocity Based Training, the present dissertation provides an adapted control approach that, for the first time, makes it possible to apply velocity-based load management in resistance training meaningfully even at volitionally submaximal concentric movement velocities. Due to existing limitations of the study, however, further scientific studies are required to further investigate the validity, transferability and effectiveness of the MVT-based control approach.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study different realistic scenarios beyond the limitations of studies via controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
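One common way to write this class of coupled models (stated for orientation; the exact force terms and parameters in the thesis may differ) is

$$\dot{x}_i = v_i, \qquad \dot{v}_i = \frac{1}{\tau}\bigl(w(x_i) - v_i\bigr) + \sum_{j \neq i} F(x_i - x_j),$$

where the pairwise terms $F$ are the social (repulsive) forces and the desired velocity $w$ points along the negative gradient of the travel-time potential $\phi$ obtained from the Eikonal equation,

$$\lVert \nabla \phi(x) \rVert = \frac{1}{f(\rho(x))}, \qquad w(x) = -\,v_{\mathrm{des}}\,\frac{\nabla \phi(x)}{\lVert \nabla \phi(x) \rVert},$$

with $f$ a density-dependent walking speed and $v_{\mathrm{des}}$ the desired speed.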
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered - "passive", which move in their predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic", which have a feedback interaction with pedestrians and have their trajectories changing dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic. We appropriately model the interactions of vehicles, following lane traffic, based on the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable speed based rational behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios, collision avoidance forces arising from the behavioural heuristics give valid results. Whereas in high-density scenarios, repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least squares approximations. The mesh-free numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
Mechanistic disease spread models for different vector-borne diseases have been studied since the 19th century, and the relevance of mathematical modeling and numerical simulation of disease spread is increasing nowadays. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans. An example of an arboviral disease that falls into this category is the Zika virus disease. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed. The analytical results that are mathematically proven for this model are numerically verified. Another SIRUV model is presented by considering a different formulation of the model parameters, and the newly obtained model is shown to clearly incorporate the dependence of the disease spread on the ratio of mosquito population size to human population size. In order to incorporate the spatial as well as temporal dynamics of the disease spread, a meta-population model based on the SIRUV model was developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities like administrative areas, districts, provinces, cities, states or even countries. The research focused only on the short-term movements or commuting behavior of humans across the patches. This is incorporated in the multi-patch meta-population model using a matrix of residence time fractions of humans in each patch. Mathematically simplified analytical results are deduced, showing for a numerically studied exemplary scenario that the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of the commuting behavior of humans for the disease spread is presented using numerical results from this model. Local and non-local commuting are incorporated into the meta-population model in a numerical example. Finally, a PDE model is developed from the multi-patch model.
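For orientation, a generic SIRUV formulation from the literature with an additional direct human-to-human transmission term reads as follows (the exact parameterization used in the thesis may differ):

$$\dot S = \mu (N - S) - \frac{\beta}{M} S V - \frac{\beta_h}{N} S I, \quad \dot I = \frac{\beta}{M} S V + \frac{\beta_h}{N} S I - (\gamma + \mu) I, \quad \dot R = \gamma I - \mu R, \quad \dot U = \nu (M - U) - \frac{\vartheta}{N} U I, \quad \dot V = \frac{\vartheta}{N} U I - \nu V,$$

with human population $N = S + I + R$, vector population $M = U + V$, human and vector birth/death rates $\mu$ and $\nu$, recovery rate $\gamma$, and transmission rates $\beta$ (vector to human), $\beta_h$ (human to human) and $\vartheta$ (human to vector); the ratio $M/N$ then enters the basic reproduction number explicitly.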
In this thesis, material removal mechanisms in grinding are investigated considering grit-workpiece interaction as well as grinding-wheel-workpiece interaction. For the grit-workpiece interaction at the micrometer scale, single-grit scratch experiments were performed to investigate the material removal mechanisms in grinding, namely rubbing, plowing, and cutting. The experiments were analyzed based on material removal, process forces and specific energy. A finite element model is developed to simulate a single-grit scratch process. As part of the development of the finite element scratch model, 2D and 3D models are created. The 2D model is utilized to test material parameters and various mesh discretization approaches. A 3D model adopting the material parameters tested in the 2D model is developed and compared against experimental results for various mesh discretizations. The simulation model is validated based on process forces and ground topography from experiments. The model is further scaled to simulate multiple grit-workpiece interactions and validated against experimental results. As a final step, simulation models are developed to simulate material removal due to the interaction of the grinding wheel and the workpiece. A virtual grinding wheel topography model is employed to demonstrate an approach for upscaling the grinding process from grit-workpiece interaction to wheel-workpiece interaction. In conclusion, practical conclusions and the scope for future studies are derived based on the developed simulation models.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect. When customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect was already shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products such as life, health and disability insurance contracts using real-life data from different sources. In a concluding chapter, we formulate indicators for when a push-out can be expected and when not.
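The mechanism can be illustrated with a small numerical sketch (the wealth, loss, risk-aversion and probability parameters as well as the CARA utility are assumptions for illustration, not the calibrated model of the thesis): each insured chooses the coverage share that maximises expected utility, and under a pooled premium the low-risk type's optimal coverage collapses to (almost) zero.

```python
# Minimal sketch (illustrative parameters): optimal coverage choice and the
# push-out effect under a pooled premium.
import numpy as np

w, L, a = 150.0, 100.0, 0.01                    # wealth, loss size, risk aversion (assumed)
types = {"low risk": 0.05, "high risk": 0.25}   # loss probabilities (assumed)

def optimal_coverage(p, premium_full):
    """Grid search for the expected-utility-maximising coverage share q in [0, 1]."""
    q = np.linspace(0.0, 1.0, 1001)
    u = lambda x: -np.exp(-a * x)               # CARA utility
    eu = p * u(w - q * premium_full - (1 - q) * L) + (1 - p) * u(w - q * premium_full)
    return q[np.argmax(eu)]

pooled_premium = np.mean([p * L for p in types.values()])   # same price for everyone
for name, p in types.items():
    fair = p * L                                            # type-specific fair premium
    print(f"{name}: coverage {optimal_coverage(p, fair):.2f} at fair premium, "
          f"{optimal_coverage(p, pooled_premium):.2f} at pooled premium")
```

With these assumed parameters the low-risk type, priced at the pooled premium, chooses essentially no coverage, which is the push-out intuition in miniature.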
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. In our feasibility study about the use of neural networks in the regression of equilibrium insurance premiums it is shown that this regression is quite robust and the risk of overfitting can almost be excluded -- as long as the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Climate change will have severe consequences for Eastern Boundary Upwelling Systems (EBUS). Owing to their tremendous primary production, they host the largest fisheries in the world, supporting the livelihoods of millions of people. It is therefore of utmost importance to better understand predicted impacts such as varying upwelling intensities and light impediment on the structure and the trophic role of protistan plankton communities, as they form the basis of the food web. Numerical models predict an intensification in the frequency of eddy formation. These ocean features are of particular importance because of their influence on the distribution and diversity of plankton communities and on the access to resources, which are still not well understood. My PhD thesis comprises two subjects conducted within the large-scale cooperation projects REEBUS (Role of Eddies in Eastern Boundary Upwelling Systems) and CUSCO (Coastal Upwelling System in a Changing Ocean).
Subject I of my study was conducted within the multidisciplinary framework REEBUS to investigate the influence of eddies on the biological carbon pump in the Canary Current System (CanCS). More specifically, the aim was to find out how mesoscale cyclonic eddies affect the regional diversity, structure, and trophic role of protistan plankton communities in a subtropical oligotrophic oceanic offshore region.
Samples were taken during the M156 and M160 cruises in the Atlantic Ocean around Cape Verde in July and December 2019, respectively. Three eddies differing in age since their emergence and three water layers (the deep chlorophyll maximum (DCM), directly beneath the DCM, and the oxygen minimum zone (OMZ)) were sampled. Additional stations without eddy perturbation were analyzed as references. The effect of oceanic mesoscale cyclonic eddies on protistan plankton communities was analyzed using three approaches: (i) V9 18S rRNA gene amplicons were examined to analyze the diversity and structure of the plankton communities and to infer their role in the biological carbon pump; (ii) by assigning functional traits to taxonomically assigned eDNA sequences, functional richness and ecological strategies (ES) were determined; and (iii) grazing experiments were conducted to assess abundances and the carbon transfer from prokaryotes to phagotrophic protists.
All three eddies examined in this study differed in their ASV abundance, diversity, and taxonomic composition, with the most pronounced differences in the DCM. Dinoflagellates were the most abundant taxon in all three depth layers; other dominant taxa were radiolarians, Discoba, and haptophytes. The trait approach could only assign ~15% of all ASVs and revealed an overall relatively high functional richness, but no unique ES was determined within a specific eddy. This indicates pronounced functional redundancy, which is recognized to be correlated with ecosystem resilience and robustness by providing a degree of buffering capacity in the face of biodiversity loss. Elevated microbial abundances as well as bacterivory were clearly associated with mesoscale eddy features, albeit with remarkable seasonal fluctuations. Since eddy activity is expected to increase on a global scale in future climate change scenarios, cyclonic eddies could counteract climate change by enhancing carbon sequestration to abyssal depths. The findings demonstrate that cyclonic eddies are unique, heterogeneous, and abundant ecosystems with trapped water masses in which characteristic protistan plankton communities develop as the eddies age and migrate westward into subtropical oligotrophic offshore waters. Eddies therefore influence regional protistan plankton diversity both qualitatively and quantitatively.
Subject II of my PhD project contributed to the CUSCO field campaign and aimed to identify the influence of varying upwelling intensities, in combination with distinct light treatments, on the food web structure and the carbon pump in the Humboldt Current System (HCS) off Peru. To this end, eight offshore mesocosms were deployed and two light scenarios (low light, LL; high light, HL) were created by darkening half of the mesocosms. Upwelling was simulated by injecting distinct proportions (0%, 15%, 30%, and 45%) of collected deep water (DW) into each of the moored mesocosms. My aim was to examine the changes in diversity, structure, and trophic role of the protistan plankton communities in response to the induced manipulations by analyzing V9 18S rRNA gene amplicons and performing short-term grazing experiments.
The upwelling simulations induced a significant increase in alpha diversity under both light conditions. In the austral summer simulation, reflected by the HL conditions, a generally higher alpha diversity was recorded than in the austral winter simulation, represented by the LL treatment. Significant alterations of the protistan plankton community structure were likewise observed. Diatoms were associated with increased levels of DW addition in the mimicked austral winter situation. Under nutrient depletion, chlorophytes exhibited high relative abundances in the simulated austral winter scenario. Dinoflagellates dominated the austral summer condition in all upwelling simulations. Tendencies of reduced unicellular eukaryote and increased prokaryote abundances were determined under light impediment. Protistan-mediated mortality of prokaryotes also decreased by ~30% in the mimicked austral winter scenario.
The findings indicate that in the HCS off Peru the microbial loop plays a more relevant role in the food web structure in austral summer, while the food web is more focused on the utilization of diatoms in austral winter. It became evident that distinct light intensities coupled with multiple upwelling scenarios can lead to alterations in biogeochemical cycles, trophic interactions, and ecosystem services. Considering the threat of climate change, the predicted relocation of EBUS could limit primary production and lengthen the food web structure, with severe socio-economic consequences.
To promote local mobility, and in particular the basic mode of walking, the opportunity to participate in public street space must be available to all people, and especially to people with mobility impairments (needs groups). Participation for all can only be achieved through a barrier-free built environment. In this context, it is necessary to create a continuously barrier-free pedestrian network, which means designing the pedestrian facilities (walking areas, crossings, stairs, ramps, and elevators) accordingly. However, a comprehensible and practice-oriented method for assessing the accessibility of pedestrian networks does not currently exist. This is where the present research starts. By developing a method for assessing the existing accessibility of pedestrian networks by means of quality levels, a practical application tool is created. It is aimed at responsible persons, e.g. from planning, politics, and administration, to enable the prioritization and implementation of measures to remove barriers.
The assessment method is based on interviews and surveys of experts and needs groups, focusing on people with motor and visual impairments. The surveys addressed the degree of difficulty experienced by each needs group when using pedestrian facilities in public space that do not comply with the technical regulations. The assessment method translates accessibility into an understandable and comprehensible quantity by converting the difficulties into a perceived additional distance. In addition to the perceived distance, the actual additional distance caused by detours is also taken into account. Building on the assessment of individual pedestrian facilities, routes and connections as well as entire pedestrian networks can be evaluated. The basic procedure of the assessment method is the same for all needs groups. It consists of four essential steps and results in one of six quality levels of accessibility (QSB, levels A to F). Within this research, the transition from level D to level E is defined, for the majority of the considered needs groups, as the boundary between independence and the need for outside assistance when using pedestrian facilities. The developed assessment method provides a sound basis for evaluating the accessibility of pedestrian networks. Thanks to its modularity and flexibility, further aspects as well as further needs groups can be integrated. Continuous application of the method and the consideration of accessibility from the outset of every planning process are important. A legal integration of the step-by-step barrier-free redesign based on recognized technical regulations is also necessary. Only in this way can a continuously barrier-free network be created, enabling all people, with or without mobility impairments, to participate in public street space without outside assistance. Increasing the attractiveness of walking also promotes local mobility and can convince people to walk or to use their wheelchair for short distances. Ultimately, a reduction of CO2 emissions is conceivable if the car is not used, or is used less often, for short trips. Walking is the most sustainable and environmentally friendly mode of transport, and a barrier-free environment thus ultimately contributes to climate protection.
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increasing degree of digitalization and the personalization of goods and products require enhanced and flexible robot deployment, with multiple multi-robot systems operating along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced consisting of two decoupled optimization-based control policies, a binary problem for task planning, and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
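As a much-simplified illustration of time optimality along a fixed path (assuming symmetric velocity and acceleration limits and rest-to-rest motion; this is not one of the MPC formulations developed in the thesis), the minimum traversal time follows from a trapezoidal velocity profile:

import math

def min_time_along_path(path_length, v_max, a_max):
    """Minimum rest-to-rest traversal time of a path of given length under
    velocity and acceleration limits (illustrative trapezoidal profile)."""
    d_ramp = v_max ** 2 / a_max          # distance spent accelerating plus braking
    if path_length >= d_ramp:
        # Trapezoidal profile: accelerate, cruise at v_max, decelerate.
        return 2.0 * v_max / a_max + (path_length - d_ramp) / v_max
    # Triangular profile: the path is too short to ever reach v_max.
    v_peak = math.sqrt(a_max * path_length)
    return 2.0 * v_peak / a_max

print(min_time_along_path(path_length=1.2, v_max=0.8, a_max=2.0))   # -> 1.9 s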
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Worm gear units are usually made of a steel worm and a bronze worm wheel. They are used for the single-stage transmission of rotary motion at high gear ratios. One disadvantage of worm gear units is the relatively high wear caused by the high sliding friction in the tooth contact. Suitable lubrication can reduce friction and wear, which limits the temperature rise during operation and thus extends the service life of the gear unit. Owing to their pronounced cooling effect, lubricating oils are predominantly used for worm gear units in practice. Grease-type lubricants are also used, but provide less cooling than liquid lubricants. In vacuum applications or under extreme operating conditions, such as high- or low-temperature applications and low hydrodynamic velocities, these conventional lubricants lose their lubricating effect. Solid lubricants are used as an alternative.
Solid lubricants can generally be applied to the contact points of machine elements in various ways. In this work, the principle of transfer lubrication via a sacrificial component is used. Compounds of radiation-modified polytetrafluoroethylene (PTFE) and polyamide (PA) serve as the sacrificial component in the worm gear unit, so that the steel worm meshes simultaneously with the bronze worm wheel and with the sacrificial wheel made of PA-PTFE compound. Loading the sacrificial wheel with a relatively small torque causes it to wear, releasing the PTFE solid lubricant and depositing it on the steel surface. This leads to the formation of a transfer film that lubricates the contact between the steel worm and the bronze worm wheel. The mechanisms of build-up and degradation of such transfer films in worm gear units are currently unknown and are investigated in this work by means of experimental studies. Tribological tests were carried out on model test rigs to examine the friction and wear behavior of steel-bronze contacts. The model test rigs used were a block-on-ring, a block-on-two-discs, and a three-disc test rig. Subsequently, component tests were performed on a worm gear test rig to validate the findings obtained from the model tests. Surface-analytical techniques were used to examine the test specimens on the microscale in order to determine the quality and quantity of the transfer film formed.
Cancer, a complex and multifaceted disease, continues to challenge the boundaries of biomedical research. In this dissertation, we explore the complexity of cancer genesis, employing multiscale modeling, abstract mathematical concepts such as stability analysis, and numerical simulations as powerful tools to decipher its underlying mechanisms. Through a series of comprehensive studies, we mainly investigate cell cycle dynamics, the delicate balance between quiescence and proliferation, the impact of mutations, and the co-evolution of healthy and cancer stem cell lineages. The introductory chapter provides a comprehensive overview of cancer and the critical importance of understanding its underlying mechanisms. Additionally, it establishes the foundation by elucidating key definitions and presenting various modeling perspectives to address cancer genesis. Next, cell cycle dynamics are explored, revealing the temporal oscillatory dynamics that govern the progression of cells through the cell cycle.
The first half of the thesis investigates the cell cycle dynamics and evolution of cancer stem cell lineages by incorporating feedback regulation mechanisms. Thereby, the pivotal role of feedback loops in driving the expansion of cancer stem cells has been thoroughly studied, offering new perspectives on cancer progression. Furthermore, the mathematical rigor of the model has been addressed by deriving well-posedness conditions, thereby strengthening the reliability of our findings and conclusions. Expanding our modeling scope, we then explore the interplay between quiescent and proliferating cell populations, shedding light on the importance of their equilibrium in cancer biology. The models developed in this context offer potential avenues for targeted cancer therapies addressing the respective cell populations critical for cancer progression. The second half of the thesis focuses on multiscale modeling of proliferating and quiescent cell populations incorporating cell cycle dynamics, and on its extension with mutation acquisition. Following rigorous mathematical analysis, the well-posedness of the proposed modeling frameworks has been studied along with steady-state solutions and stability criteria.
In a nutshell, this thesis represents a significant stride in our understanding of cancer genesis, providing a comprehensive view of the complex interplay between cell cycle dynamics, quiescence, proliferation, mutation acquisition, and cancer stem cells. The journey towards conquering cancer is far from over. However, this research provides valuable insights and directions for future investigation, bringing us closer to the ultimate goal of mitigating the impact of this formidable disease.
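For illustration only, a strongly reduced caricature of the quiescence-proliferation balance discussed above (not one of the structured multiscale models of the thesis) is the planar ODE system

\frac{dP}{dt} = \beta(N)\,Q - \gamma P + r P, \qquad \frac{dQ}{dt} = \gamma P - \beta(N)\,Q - \mu Q, \qquad \beta(N) = \frac{\beta_0}{1 + kN}, \quad N = P + Q,

where P and Q denote the proliferating and quiescent populations, \gamma the rate of entry into quiescence, r the net proliferation rate, \mu the loss rate of quiescent cells, and \beta a feedback-inhibited re-activation rate; steady states and their stability are then analyzed in the same spirit as in the thesis, only in far fewer dimensions.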
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics in describing numerous physical processes. In numerical computations for PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as the discretization paradigm, which combines geometry representations based on Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations. Secondly, we primarily concentrate on mixed formulations that exhibit a saddle-point structure and are generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or the elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, where we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
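For orientation, the mixed weak formulation of the abstract Hodge–Laplace problem underlying the saddle-point structures mentioned above reads, in standard FEEC notation and with harmonic forms omitted for brevity: find (\sigma, u) \in V^{k-1} \times V^{k} such that

\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle = 0 \quad \forall\, \tau \in V^{k-1}, \qquad \langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle = \langle f, v \rangle \quad \forall\, v \in V^{k},

where \mathrm{d} denotes the differential of the Hilbert complex; the thesis studies discretizations of such formulations for second-order complexes obtained via the BGG construction.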
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems, which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints the system-wide problem can be decomposed into individual subproblems. These resulting subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate function of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of different benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
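As a minimal sketch of the constraint-coupled setting addressed here (a toy example, not QADA or QNDA themselves), two subsystems with private costs share one resource budget and are coordinated by plain dual ascent on the multiplier of the coupling constraint; the thesis replaces the fixed-step gradient update below with updates derived from a quadratic surrogate of the dual function.

import numpy as np

# Toy problem: minimize (x1 - 4)^2 + (x2 - 3)^2  subject to  x1 + x2 <= b.
b = 5.0
targets = np.array([4.0, 3.0])

def solve_subproblem(target, lam):
    # Each subsystem independently minimizes (x - target)^2 + lam * x.
    return target - lam / 2.0

lam, step = 0.0, 0.4
for _ in range(100):
    x = np.array([solve_subproblem(t, lam) for t in targets])
    # Projected dual gradient step on the coupling constraint.
    lam = max(0.0, lam + step * (x.sum() - b))

print("allocation:", np.round(x, 3), "multiplier:", round(lam, 3))
# Converges to x = (3, 2) and lam = 2, where the shared budget is exactly met.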
Lubricated tribological contact processes are important both in nature and in many technical applications. Fluid lubricants play an important role in contact processes, e.g. they reduce friction and cool the contact zone. The fundamentals of lubricated contact processes at the atomistic scale are, however, not yet fully understood. A lubricated contact process is defined here as a process in which two solid bodies that are in close proximity, and eventually partly in direct contact, carry out a relative motion, while the remaining volume is filled with a fluid lubricant. Such lubricated contact processes are difficult to examine experimentally. Atomistic simulations are an attractive alternative for investigating the fundamentals of such processes.
In this work, molecular dynamics simulations were used to study different elementary processes of lubricated tribological contacts. A simplified, yet realistic simulation setup using classical force fields was developed for this purpose. In particular, the two solid bodies were fully submersed in the fluid lubricant so that the squeeze-out was realistically modeled. The velocity of the relative motion of the two solid bodies was imposed as a boundary condition. Two types of cases were considered: (i) a model system based on synthetic model substances, which enables a direct, but generic, investigation of the influence of molecular interaction features on the contact process; and (ii) real-substance systems, where the force fields describe specific real substances. Using model system (i), the reproducibility of the findings obtained from the computer experiments was also critically assessed. In most cases, the dry reference case was also studied.
Both mechanical and thermodynamic properties were studied, focusing on the influence of lubrication. The following properties were examined: the contact forces, the coefficient of friction, the dislocation behavior in the solid, the chip formation and the formation of the groove, the squeeze-out behavior of the fluid in the contact zone, the local temperature and the energy balance of the system, the adsorption of fluid particles on the solid surfaces, as well as the formation of a tribofilm. Systematic studies were carried out to elucidate the influence of the wetting behavior, the molecular architecture of the lubricant, and the lubrication gap height on the contact process.
As expected, the presence of a fluid lubricant reduces the temperature in the vicinity of the contact zone. The presence of the lubricant moreover has a significant influence on the friction and on the energy balance of the process. In the starting phase of a contact process, while lubricant molecules remain in the contact zone between the two solid bodies, the lubricant reduces the coefficient of friction compared to a dry case. This is a result of an increased normal and slightly decreased tangential force in the starting phase. When the fluid molecules are squeezed out with ongoing contact time and the contact zone is essentially dry, the coefficient of friction is increased by the presence of a fluid compared to a dry case. This is attributed to an imprinting of individual fluid particles into the solid surface, which is energetically unfavorable. By studying the contact process over a wide range of gap heights, the entire range of the Stribeck curve is obtained from the molecular simulations.
Thereby, the three main lubrication regimes of the Stribeck curve and their transition regions are covered, namely boundary lubrication (significant elastic and plastic deformation of the substrate), mixed lubrication (adsorbed fluid layers dominate the process), and hydrodynamic lubrication (a shear flow is set up between the surface and the asperity). The atomistic effects in the different lubrication regimes are elucidated. Notably, the formation of a tribofilm is observed, in which lubricant molecules are incorporated into the metal surface. The formation of a tribofilm is found to have important consequences for the contact process. The work done by the relative motion is found to mainly dissipate and thereby heat up the system; only a minor part of the work causes plastic deformation. Finally, the assumptions, simplifications, and approximations applied in the simulations are critically discussed, which highlights possible future work.
This work investigates co-consolidation during thermoforming between continuously fiber-reinforced, partially consolidated CF/PEEK tape preforms and continuously fiber-reinforced, fully consolidated CF/PEEK tape laminates. Co-consolidation refers to the creation of a welded joint between two or more thermoplastics by heating them separately, bringing the joining surfaces together, and rapidly cooling them under pressure in an isothermal mold. The targeted application is the welding of stiffeners onto tape preforms during thermoforming, so that downstream joining processes for such stiffeners become obsolete and the cycle time of the thermoforming process remains unchanged.
The results show that the degree of partial consolidation of the tape preforms, irrespective of the chosen mold pressure settings, has no influence on the consolidation of the tape laminates after thermoforming. In the region of a stiffener, a comparatively higher mold pressure is required to consolidate the partially consolidated tape preform in order to achieve the same properties there as away from the co-consolidation zone. The lap shear strengths measured between tape laminate and stiffener produced by co-consolidation during thermoforming are lower than those achieved by co-consolidation in an autoclave.
The group 4 tri(tert-butyl)cyclopentadienyl trichlorides [Cp'''MCl3] (M = Ti, Zr, Hf), first obtained by Zhou in 1994, were reproduced, crystallized, and structurally characterized. In addition, new di- and tri(tert-butyl)cyclopentadienyl zirconium bromides and iodides were synthesized. Crystals of [Cp''ZrI3] suitable for X-ray diffraction were obtained, allowing the structure of the compound to be elucidated. Substitution experiments with further ligands yielded hydrido clusters. Structural investigations revealed a cluster complex with the formula (Cp''Zr)4(μ-H)8(μ-Cl)2, a tetranuclear zirconium cluster bridged by eight hydrido and two chlorido ligands, in which each zirconium atom additionally bears a di(tert-butyl)cyclopentadienyl ligand. While investigating the course of the reaction, a further Zr cluster was found: crystals of tris{di(tert-butyl)cyclopentadienyl-di(μ-hydrido)zirconium} {chloridotri(μ-hydrido)aluminate} suitable for X-ray diffraction were obtained. This cluster consists of three zirconium atoms arranged in a triangle, each pair bridged by two hydrido ligands. Each zirconium is connected via a hydrido bridge to an aluminum chloride fragment and is additionally coordinated by one di(tert-butyl)cyclopentadienyl ligand. Furthermore, experiments were carried out to prepare alkyl derivatives of the hitherto unknown parent zirconocene Cp2Zr. For this purpose, zirconium tetrachloride was reduced with n-butyllithium to the dichloride ZrCl2(THF)2. The reduction product was treated with sodium tetra(isopropyl)cyclopentadienide, sodium tri(tert-butyl)cyclopentadienide, or lithium penta(isopropyl)cyclopentadienide. The results do not show unambiguous formation of zirconocenes; however, a tri(tert-butyl)cyclopentadienyllithium salt was obtained and structurally characterized.
Reactive absorption with amines is the most important technique for the removal of CO2 from gas streams, e.g. from flue gas, natural gas, or off-gas from the cement industry. In this work, a rigorous simulation model for the absorption and desorption of CO2 with an amine-containing solvent is validated using data from pilot plants of various sizes. This model was then coupled with a detailed simulation of a coal-fired power plant. The drop in power generation efficiency caused by CO2 capture was determined, and process parameters in the power plant and in the separation process were optimized. It was shown that the high energy demand of CO2 separation significantly reduces power generation efficiency, which underlines the need for improvements. These can be achieved by better solvents or by advanced process designs. In this work, such improved CO2 separation processes are described and evaluated in detailed simulation studies.
In order to develop detailed rigorous simulation models for reactive absorption with novel solvent systems, precise knowledge of the liquid-phase reaction kinetics is necessary. There are well-established techniques for measuring species distributions in equilibrated aqueous amine solutions by NMR spectroscopy. However, the existing NMR techniques cannot be used for monitoring fast reactions in these solutions. Therefore, in this work a novel temperature-controlled micro-reactor NMR probe head was developed which enables studying reaction kinetics with time constants in the range of seconds. On this basis, modern solvent systems for CO2 absorption can be characterized, and the scale-up of separation processes for future plants can be accompanied by rigorous process simulation.
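To illustrate the kind of data such a probe head yields, the sketch below fits a pseudo-first-order rate constant to a hypothetical time-resolved concentration trace on a seconds time scale; the reaction, the rate constant, and the noise level are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time-resolved signal of a species consumed in a fast reaction.
t = np.linspace(0.0, 20.0, 40)                                   # time in s
c_obs = 1.0 * np.exp(-0.35 * t)                                  # "true" decay
c_obs += np.random.default_rng(1).normal(scale=0.02, size=t.size)

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0_fit, k_fit), _ = curve_fit(first_order, t, c_obs, p0=(1.0, 0.1))
print(f"fitted rate constant k = {k_fit:.3f} 1/s (time constant {1.0 / k_fit:.1f} s)")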
In 2022, the building and transport sectors missed the climate protection targets in Germany. In contrast to the transport sector, long service lives in the building sector stand in the way of rapid technology changes, which is why strategies must be implemented particularly early. In addition, the building stock is characterized by high investment costs with comparatively low greenhouse gas savings per euro invested. In combination, these obstacles considerably impede reaching the climate targets for the residential building stock.
The aim of this work is to develop a residential building stock model in order to simulate and analyze transformation pathways under varying economic conditions, such as the influence of different CO2 price trajectories and the reinvestment of the CO2 tax in the modernization of buildings.
In a first step, a residential building stock model is developed and applied under the assumption that the economic conditions of the starting year persist. For this purpose, important parameters of the building stock are identified and analyzed on the basis of their past development, and scenarios and forecasts are considered. The result is a set of initial conditions and influencing factors for the further development, which are used for the modeling. In a second step, a methodology is developed to compute modernization rates endogenously under varying economic conditions.
This work presents a model that dynamically accounts for the economic conditions and the coupling principle when simulating full modernization rates. The results show that full modernization rates of 2 %/a over longer periods require extreme conditions and are unrealistic. The main obstacles are the need for refurbishment (coupling principle), decreasing energy-saving potentials of the younger building age classes, and windfall effects under improved subsidies. Since reaching the climate targets by adjusting the CO2 tax alone (even with reinvestment) is not possible in the model within realistic tax levels, a package of economic and legislative measures for reaching the targets is presented instead.
Pervasive human impacts rapidly change freshwater biodiversity. Frequently recorded exceedances of regulatory acceptable thresholds by pesticide concentrations suggest that pesticide pollution is a relevant contributor to broad-scale trends in freshwater biodiversity. A more precise pre-release Ecological Risk Assessment (ERA) might increase its protectiveness, consequently reducing the likelihood of unacceptable effects on the environment. European ERA currently neglects possible differences in sensitivity between exposed ecosystems. If the taxonomic composition of assemblages would differ systematically among certain types of ecosystems, so might their sensitivity toward pesticides. In that case, a single regulatory threshold would be over- or underprotective.
In this thesis, we evaluate (1) whether the assemblage composition of macroinvertebrates, diatoms, fishes, and aquatic macrophytes differs systematically between the types of a European river typology system, and (2) whether these taxonomical differences engender differences in sensitivity toward pesticides. While a selection of ecoregions is available for Europe, only a single typology system that classifies individual river segments is available at this spatial scale - the Broad River Types (BRT).
In the first two papers of this thesis, we compiled and prepared large databases of macroinvertebrate (paper one), diatom, fish, and aquatic macrophyte (paper two) occurrences throughout Europe to evaluate whether assemblages are more similar within than among BRT types. Additionally, we compared the performance of the BRT to that of different ecoregion systems. We employed multiple tests to evaluate the performances, two of which were also designed in these studies. All typology systems failed to reach common quality thresholds for the evaluated metrics for most taxa. Nonetheless, performance differed markedly between typology systems and taxa, with the BRT often performing worst. We showed that currently available European freshwater typology systems are not well suited to capture differences in biotic communities and suggest several possible improvements.
In the third study, we evaluated whether ecologically meaningful differences in sensitivity exist between BRT types. To this end, we predicted the sensitivity of macroinvertebrate assemblages across Europe toward atrazine, copper, and imidacloprid using a hierarchical species sensitivity distribution model. The predicted assemblage sensitivities differed only marginally between BRT types. The largest difference between median river type sensitivities was a factor of 2.6, which is far below the assessment factor suggested for such models (6), as well as the factor of variation commonly observed between toxicity tests of the same species-compound pair (7.5 for copper). Our results do not support the notion that a type-specific ERA might improve the accuracy of thresholds. However, in addition to the taxonomic composition, the bioavailability of chemicals, the interaction with other stressors, and the sensitivity of a given species might differ between river types.
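For readers unfamiliar with species sensitivity distributions, the following sketch shows the non-hierarchical textbook variant: a log-normal distribution is fitted to per-species toxicity values and the HC5 (the concentration affecting 5 % of species) is read off. The toxicity values are placeholders, and the thesis itself uses a hierarchical model instead.

import numpy as np
from scipy import stats

# Hypothetical toxicity values (e.g., EC50 in µg/L) for the species of one assemblage.
ec50 = np.array([3.2, 5.1, 8.7, 12.0, 20.5, 33.0, 41.0, 77.0, 120.0, 310.0])

log_vals = np.log10(ec50)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} µg/L")
# Dividing the HC5 by an assessment factor (e.g., 6) would yield a regulatory threshold.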
This dissertation describes the implementation of a RAMI 4.0-compliant marketplace for machining. The goal is to define a solution approach in which cross-company process chains for small lot sizes are identified automatically and the manufacturing of an individualized product is realized. The extraction of product information, the manufacturing of an individualized product, and the description of the information in asset administration shells are validated. Above all, defining a common semantics for the description of capabilities emerges as a challenge for the future; such a semantics would enable matching between proprietary product information and skills.
Weak memory consistency models capture the outcomes of concurrent programs that appear in practice and yet cannot be explained by thread interleavings. Such outcomes pose two major challenges to formal methods. First, establishing that a memory model satisfies its intended properties (e.g., supports a certain compilation scheme) is extremely error-prone: most proposed language models were initially broken and required multiple iterations to achieve soundness. Second, weak memory models make verification of concurrent programs much harder, as a result of which there are no scalable verification techniques beyond a few that target very simple models.
This thesis presents solutions to both of these problems. First, it shows that the relevant metatheory of weak memory models can be effectively decided (sparing years of manual proof efforts), and presents Kater, a tool that can answer metatheoretic queries in a matter of seconds. Second, it presents GenMC, the first (and only) scalable stateless model checker that is parametric in the choice of the memory model, often improving the prior state of the art by orders of magnitude.
This thesis outlines the development of thermoplastic-graphite based plate heat exchangers, from material screening to operation, including performance evaluation and fouling investigations. Polypropylene and polyphenylene sulfide as matrix and graphite as filler were chosen as feedstock materials, as they possess a low density and excellent corrosion resistance at a comparatively low price.
For the purpose of material screening, custom-made polymer composite plates with a plate thickness of 1-2 mm and a filler content of up to 80 wt.% were investigated for their thermal and mechanical suitability with regard to their use in plate heat exchangers. Three-point flexural tests show that the loading of polypropylene with graphite leads to mechanical properties that allow the composites to be applied as corrugated heat exchanger plates. The simulated maximum overpressure is greater than 7 bar, depending on the wall thickness. The thermal conductivity of the composites was increased by a factor of 12.5 compared to pure polypropylene, resulting in thermal conductivities of up to 2.74 W/mK.
The fabrication of the developed corrugated heat exchanger plates, with a thickness between 0.85 mm and 2.5 mm and a heat transfer surface area of 11.13·10⁻³ m², was carried out via processes that can be automated, namely extrusion and embossing. With the manufactured plate heat exchanger, overall heat transfer coefficients were determined over a wide range of operating conditions (Re = 200 - 1600), which are used to validate a plate heat exchanger model and consequently to compare the composites with conventional materials. The embossing, which seems to result in a shift of the internal graphite structure, leads to a further improvement of the thermal conductivity by 7-20 %, in addition to the impact of the filler. With low plate thicknesses, overall heat transfer coefficients of up to 1850 W/m²K could be obtained. Considering the low density of the manufactured thermal plates, this ensures comparable performance with metallic materials over a wide range of process conditions (Re = 200 - 4000).
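The overall heat transfer coefficients reported above follow the usual series-resistance relation for a plate separating two fluids; the sketch below evaluates it for illustrative film coefficients and the composite conductivity mentioned earlier (the film coefficients are assumed values, not measurements from this work).

# Overall heat transfer coefficient from convective and conductive resistances in series.
h_hot = 2500.0      # hot-side film coefficient in W/m^2K (assumed)
h_cold = 2500.0     # cold-side film coefficient in W/m^2K (assumed)
s_wall = 1.0e-3     # plate thickness in m
k_wall = 2.74       # plate conductivity in W/mK (graphite-filled polypropylene)

U = 1.0 / (1.0 / h_hot + s_wall / k_wall + 1.0 / h_cold)
print(f"U = {U:.0f} W/m^2K")   # thinner plates and higher film coefficients raise U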
The fouling kinetics and amounts of calcium sulfate and calcium carbonate, respectively, on different polypropylene/graphite composites in a flat plate heat exchanger and in the developed chevron-type plate heat exchanger are determined and compared to the reference material stainless steel. For a straightforward evaluation of the fouling susceptibility of the materials, the formation of bubbles on the materials is either considered by optical imaging or excluded by a degasser. The results are interpreted using the surface free energy and roughness of the surfaces. They show that if bubble formation is avoided, the polymer composites have a very low fouling tendency compared to stainless steel, which is attributed to the low surface free energies of approximately 25 mN/m. This is particularly the case when turbulent flows are present, as in plate heat exchangers or when sandblasted specimens are used. Sandblasting also increases heat transfer compared to untreated samples by increasing thermal conductivity and creating local turbulence. Depending on the test conditions, the fouling resistance formed on the stainless steel surface is an order of magnitude greater than on the flat plate polymer composites. In addition, the fouling layers adhere only weakly to the composites, which indicates easy cleaning in place after the formation of deposits. The fouling investigations in the plate heat exchanger reveal a sensitivity to calcium sulfate fouling; however, CFD simulations indicate that this is due to flow maldistribution and not to the polymer composite materials themselves.
Zeolites have been used for decades as catalysts in the chemical industry and as ion exchangers in detergents. They can also serve as support materials for metals applied by ion exchange or impregnation. A novel field of application for zeolites is their use as antimicrobial fillers in polymers. For this purpose, the zeolites must first be loaded with an antimicrobially active metal such as silver. The filled polymer can then be processed into filaments for 3D printing. A possible field of application for the resulting composite materials lies in dentistry, in the form of crowns or three-unit bridges. The aim of this doctoral project was the modification of the zeolites Beta and ZSM-5 with silver in order to use the resulting materials as antimicrobial components in a polymer composite. The two zeolites were to be loaded with silver ions by ion exchange. In addition to the reaction temperature and the counterion in the zeolite framework, the experimental procedure of the ion exchange (duration and number of exchange cycles) was varied in order to achieve the highest possible silver loading. The retention of the zeolite structure after ion exchange was confirmed by combining characterization methods such as powder X-ray diffraction (PXRD) and solid-state NMR spectroscopy (MAS NMR). The amount of silver in the zeolite framework was determined by atomic absorption spectroscopy (AAS). Since zeolite ZSM-5 is cheaper to purchase than zeolite Beta, the silver-exchanged zeolite AgZSM-5 was used in the subsequent steps. Next, zeolite AgZSM-5 was modified by various methods to enable a temporally controllable release of the silver ions from the zeolite framework. For the surface passivation by silylation, a weakening of the acid sites was demonstrated by temperature-programmed desorption of ammonia (NH3-TPD). Furthermore, zeolite AgZSM-5 was modified by impregnation with calcium or magnesium and by reduction of the silver in a stream of H2 at different temperatures. For the reduction of the silver in the H2 stream, the influence of the reduction temperature on the crystallite size of the silver was demonstrated.
Machine Learning (ML) is expected to become an integrated part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge from their input data, which in many scenarios is delivered to them over wireless links. Transmission of a massive amount of such input data can impose a huge burden on the mobile network. On the other hand, it is known that ML algorithms can tolerate different levels of distortion on their input components while the quality of their predictions remains unaffected. Therefore, the utilization of conventional approaches implies a waste of radio resources, since they target an exact reconstruction of the transmitted data, i.e., the input of the ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of the final ML outputs instead of such a syntax-based reconstruction of the transmitted inputs. To this end, we quantify the semantics or relevancy of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies lower relevancy. A lower relevance level is translated into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundations for efficiently supplying ML models with their required data in the inference phase while employing wireless resources efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of an ML algorithm. The ML model is seen as a black box, which is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is hence easily adjustable for application to other problems. An extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevancy level of each input attribute changes instantaneously. In a further extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. The novel quality-of-service metric takes the outputs of ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
The introduced relevance-based framework is comprehensively investigated by considering various case studies, real and synthetic data, regression and classification, different estimators for the KLD, various ML models, and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of the relevance-based compression. In all of the simulations, the relevance-based solutions deliver the best outcome in terms of the carefully chosen key performance indicators. In most of them, substantial gains are also achieved compared to the conventional techniques, motivating further research on the subject.
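To make the bit-allocation idea tangible, the sketch below uses the classical reverse water-filling rule for independent Gaussian components: each component receives R_i = max(0, 0.5*log2(sigma_i^2 / D)) bits, with the water level D set by the rate budget. Treating down-weighted (less relevant) variances as the input mimics the effect that more distortion-tolerant features receive fewer bits; this is a generic textbook allocation, not the KLD-based scheme of the thesis.

import numpy as np

def reverse_waterfilling(variances, total_rate, tol=1e-9):
    """Bisection on the water level D so that sum of max(0, 0.5*log2(var/D))
    meets the rate budget; components below the water level get zero bits."""
    variances = np.asarray(variances, dtype=float)
    lo, hi = tol, variances.max()
    while hi - lo > tol:
        d = 0.5 * (lo + hi)
        rates = np.maximum(0.0, 0.5 * np.log2(variances / d))
        lo, hi = (d, hi) if rates.sum() > total_rate else (lo, d)
    return rates

# Hypothetical relevance-weighted variances: less relevant features are down-weighted.
weighted_vars = [4.0, 2.0, 0.5, 0.1]
print(np.round(reverse_waterfilling(weighted_vars, total_rate=4.0), 2))   # -> [2. 1.5 0.5 0.]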
Cyber-physical production systems (CPPS) enable the manufacturing of customer-specific products in small lot sizes by exploiting current developments in information and communication technologies. In the material flow within CPPS, however, the risk of physically induced disturbances is increased due to the differing physical properties of the conveyed goods and dynamic process assignments. This work investigates the use of physics simulation as the basis of a digital twin of conveyor equipment in order to address these challenges. The goal is to reduce the negative effects of disturbances by simulating the physical phenomena of individual material flow processes and thus to increase the performance of the production system. To this end, the digital twin is first developed conceptually, comprising an analysis of the systems involved, a definition of requirements, the specification of its structural and procedural design, and a formalization of the individual functional components. The digital twin is then implemented in software, connected to an exemplary conveyor, and put into operation as a prototype. The results demonstrate the suitability of physics simulation for the described purpose and the effectiveness of its use at the production system level: material flow processes can be carried out faster, monitored, and, in the event of disturbances, subsequently investigated by simulation.
In this dissertation, a tool for creating fully digital, internally differentiated worksheets for regular chemistry lessons was evaluated and further developed; it shows potential for fostering motivation and interest. Relationships to the usability of the application and to cognitive load could be established. The results thus support the findings in the field of learning with digital media. The integration of digital tools into the learning process is justified: they show a motivating potential for students on the one hand and practical advantages for teachers on the other, since information can be presented in diverse ways, for example for differentiation. With HyperDocSystems, internally differentiated digital worksheets can be created and edited. These so-called HyperDocs can be enriched by teachers with learning aids in various forms of representation and can be worked on by learners fully digitally in the browser using a stylus or the keyboard.
In a quasi-experimental field study, the use of these novel HyperDocs was analyzed for the first time with respect to intrinsic motivation and interest, usability, and the use of the multimedia differentiation options. The study took place over four lessons in regular chemistry classes at the lower secondary level (Gymnasium / comprehensive school) and the upper secondary level (Gymnasium). Cognitive load and the learners' tablet-related competencies were also taken into account. The results suggest that HyperDocs have a motivating potential compared to analog worksheets. Differences between the genders emerge, which can partly be attributed to cognitive load and depend on the age of the learners (lower vs. upper secondary level). In this context, the learning aids are frequently used out of interest and curiosity. Students primarily use learning aids in the form of text and images. However, the frequency of use of the differentiation options does not directly indicate the learners' motivation or cognitive load. Usability is an important criterion for the use of digital learning programs, since, among other things, it is related to the variables of intrinsic motivation and to cognitive load when learning with HyperDocs. Usability, however, depends on the time of measurement. HyperDocs exhibit high usability and can therefore be used without restriction at the lower and upper secondary levels.
Esse aut non esse - Affirmation und Subversion intergeschlechtlicher Existenzen in der Schule
(2024)
On 10 October 2017, the Federal Constitutional Court in Karlsruhe decided to introduce a so-called third gender for the entry in the birth register. This was intended to enable intersex people to have their gender identity registered and thus to participate in social life. The court based its reasoning on the right of personality protected by the Basic Law. The regulation in force at the time was held to be incompatible with constitutional requirements insofar as it offered no third option for registering a gender besides "female" or "male". The legislature was required to create a new regulation by the end of 2018 that includes a designation for a third gender: "divers" (diverse).
Schools, as important social institutions, are now called upon if the guiding perspectives of diversity in education, and thus in society, are to be maintained. Schools constitute a workplace, a living environment, and a learning environment for many generations and therefore always have a societal role-model function, with diversity having become an ever-present imperative. As an avant-garde, schools must therefore lead the way, particularly on societal questions, and at the same time take responsibility for the development and resolution of important ethical questions, without abandoning the transmission of traditional values and norms as one of their central functions. Performing this demanding balancing act remains a constant challenge of school development.
In the school context, dealing with diversity means, in addition to mutual recognition and respect, above all that the coexistence of people is enriched by opening up alternative ways of perceiving, thinking, and acting. The decision of the Federal Constitutional Court is therefore addressed to schools in a particular way.
But how can this path be taken successfully and sustainably?
Considering the numerous publications on the topic of gender and school, as well as the few developments in recent years, it becomes apparent that the German school system is neither systemically nor structurally prepared to implement the decision of the Federal Constitutional Court of 10 October 2017 (1BvR 2019/16).
This leads to the research questions of this doctoral thesis:
- How does school relate to the discourse on the third gender?
- From the perspective of school actors, what are the conditions for successfully making the third gender visible in schools?
By means of empirical studies, this thesis aims to clarify in detail which factors contribute to the success or failure of implementing a third gender and under which preconditions school as an organization is prepared at all for making intersex children and adolescents visible.
In this work, the CASOCI program[1], whose implementation was already the subject of the dissertation of Dr. Tilmann Bodenstein and which is continuously being developed further in the groups of Fink (Karlsruhe Institute of Technology) and van Wüllen, was parallelized using a hybrid MPI/OpenMP scheme. It was then used to investigate the magnetic properties of the pentanuclear [Ni(tmphen)2]3[Os(CN)6]2 complex (tmphen = 3,4,7,8-tetramethyl-1,10-phenanthroline), which had been studied experimentally in the group of Kim R. Dunbar by means of χT measurements[2,3]. By diamagnetic substitution, variants of this complex with only one and two active centers were generated. CASOCI calculations were performed on these, and g-tensors, exchange couplings, D-tensors, and tensors for the anisotropic exchange were determined. Using these tensors, a χT curve could be calculated that shows good agreement with the one from Dunbar's work. It could be shown that the anisotropic exchange governs the shape of the curve, whereas the single-ion zero-field splitting plays practically no role.
[1] T. Bodenstein, A. Heimermann, K. Fink, C. van Wüllen, ChemPhysChem 2022, 23, e202100648.
[2] M. G. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, Chem. Commun. 2008, 5752–5754.
[3] A. V. Palii, O. S. Reu, S. M. Ostrovsky, S. I. Klokishner, B. S. Tsukerblat, M. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, J. Phys. Chem. A 2009, 113, 6886–6890.
The ability to sense and respond to different environmental conditions allows living organisms to adapt quickly to their surroundings. In order to use light as a source of information, plants, fungi, and bacteria employ phytochromes. With their ability to detect far-red and red light, phytochromes constitute a major photoreceptor family. Bacterial phytochromes (BphPs) are composed of an apo-phytochrome and an open-chain tetrapyrrole, the chromophore biliverdin IXα, which mediates the photosensory properties. Depending on the photoexcitation and the quality of the incident light, phytochromes interconvert between two photoconvertible parental states: the red light-absorbing Pr-form and the far-red light-absorbing Pfr-form. In contrast to prototypical phytochromes, with a thermally stable Pr ground state, there is a group of bacterial phytochromes that exhibit dark reversion from the Pr- to the Pfr-form. These special proteins are classified as bathy phytochromes and occur across different classes of bacteria. Moreover, the majority of BphPs act as sensor histidine kinases in two-component regulatory systems. The light-triggered conformational change results in the autophosphorylation of the histidine kinase domain and the transphosphorylation of an associated response regulator, inducing a cellular response. Spectroscopic analysis utilizing homologously produced protein identified PaBphP, the histidine kinase of the human opportunistic pathogen Pseudomonas aeruginosa, as a bathy phytochrome. Intensive research on PaBphP revealed evidence that the interconversion between its physiologically active and inactive states is influenced by light and darkness rather than by far-red and red light. In order to conduct a comprehensive systematic analysis, further bacterial phytochromes were investigated regarding their biochemical and spectroscopic behavior, as well as their autokinase activity. In addition to PaBphP, this work examines the bathy phytochromes AtBphP2, AvBphP2 and XccBphP from the non-photosynthetic plant pathogens Agrobacterium tumefaciens, Allorhizobium vitis and Xanthomonas campestris, as well as RtBphP2 from the soil bacterium Ramlibacter tataouinensis. All investigated BphPs displayed bathy-typical behavior by developing a distinct Pr-form under far-red light conditions and undergoing dark reversion to their Pfr-form. Different Pr/Pfr-fractions can be identified among the BphP populations under varying natural light conditions, including red or blue light. The Pr-form is considered the active form because the heterologously produced phytochromes showed autophosphorylation activity when exposed to light. In the absence of light, associated with the development of the Pfr-form, the phytochromes exhibited abolished or strongly reduced autokinase activity. Additionally, light-triggered phosphorylation was observed for the response regulator PaAlgB, which is linked to the phytochrome of P. aeruginosa. This study presents the first comparative investigation of numerous bathy phytochromes under identical conditions. The work addressed a gap in the literature by providing a quantitative correlation between kinase activity and calculated Pr/Pfr-fractions obtained from spectroscopic measurements. The biological role of PaBphP was partially elucidated through phenotypic characterization employing P. aeruginosa mutant and overexpression strains. The generation of a functional model was possible by considering the postulated functions of the other phytochromes found in the literature.
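The abstract does not spell out how the Pr/Pfr-fractions were computed from the spectra. One common approach for two-state photoreceptors is linear unmixing of a measured absorbance spectrum against pure-state reference spectra; the sketch below illustrates this with purely synthetic Gaussian bands and is an assumption, not the procedure used in the thesis.

```python
import numpy as np

# Estimate Pr/Pfr fractions of a measured absorbance spectrum by least-squares
# linear unmixing against pure-state reference spectra. Illustrative sketch only.
def pr_pfr_fractions(measured, ref_pr, ref_pfr):
    """Return (f_pr, f_pfr) that best reproduce `measured` as a linear mixture."""
    A = np.column_stack([ref_pr, ref_pfr])        # wavelengths x 2 components
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    coeffs = np.clip(coeffs, 0, None)             # fractions cannot be negative
    return tuple(coeffs / coeffs.sum())           # normalise so they sum to 1

# Synthetic reference bands with arbitrary band positions and widths
wl = np.linspace(600, 800, 201)
ref_pr  = np.exp(-((wl - 700) / 25) ** 2)
ref_pfr = np.exp(-((wl - 750) / 30) ** 2)
mix = 0.7 * ref_pr + 0.3 * ref_pfr
print(pr_pfr_fractions(mix, ref_pr, ref_pfr))     # recovers approximately (0.7, 0.3)
```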
In summary, bathy BphPs are hypothesized to modulate bacterial virulence according to the circadian day/night rhythm of their hosts. The pathogens are believed to reduce their virulence during daylight hours to evade immune and defense reactions, while increasing their virulence during the evening and night, enabling more effective infections.
In contrast to motorbike tyres, where friction during cornering has to be as high as possible, the desired effect in skiing is the opposite: low friction. The reduced friction between skis and ice or snow is made possible by a film of meltwater that forms as a function of friction power. To support this friction mechanism, skis are waxed with different waxes in both hobby and professional sports, depending on a variety of conditions. Waxes with fluorine additives show the best performance under most conditions, corresponding to the lowest friction coefficients. However, for health and environmental reasons, the International Ski Federation (FIS) and the International Biathlon Union (IBU) have imposed a complete ban on fluorine additives at all FIS races and IBU events with effect from the 2023/2024 season. As a result, wax manufacturers are required to develop and extensively test fluorine-free waxes in order to remain competitive.
Traditional tests take place either indoors or outdoors in the field. Athletes complete a particular distance, their times are measured, and they additionally report the impressions the prepared skis give them. The time and cost involved in numerous individual tests are a drawback, and the presence of only a single type of snow in the hall or field, air resistance, changing environmental conditions and variations in the athlete's movement limit the depth of information. To reduce the time-consuming procedure of indoor and outdoor tests, a tribometer offers a solution in which friction measurements can be performed on a laboratory scale. Thanks to consistently adjustable conditions such as temperature, speed and the load applied to the friction partners, scientific studies can be carried out with fewer disturbance variables. At present, the tribometric results of laboratory instruments for predicting friction values do not translate into practical application. The reason for this lies in the compromises that have to be made in the design of the tribometers.
This work reviews the existing tribometers with respect to their operating conditions and confirms the need for a scientific method of characterising different waxes. In order to fill the gap between friction results obtained in laboratory tests, which cannot yet be used in the selection of waxes, and traditional field tests, this thesis is dedicated to the methodical design and manufacture of a linear tribometer capable of measuring friction between a ski base made of UHMWPE (ultra-high molecular weight polyethylene) and an ice sample. The tribometer provides, for the first time, results that allow differentiation between different modified waxes with regard to their running performance. Friction-influencing factors such as speed, temperature and the surface pressure below the ski base can be adjusted within the range relevant for ski sports. Furthermore, the laboratory-scale test stand, which is located in a cold chamber, is capable of accommodating not only typical ski jumping base lengths and widths, but also cross-country and alpine ski bases. To verify the tribometer, a ski base is treated with three waxes of different fluorine content and measured comparatively. With a minimum of 95% confidence, the friction differences between the tested waxes depending on their fluorine content are validated at the end of this work.
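The abstract does not detail the statistical procedure behind the 95% confidence statement. As one plausible way such a comparison could be carried out, the sketch below applies Welch's t-test (no equal-variance assumption) to hypothetical repeated friction-coefficient measurements for two waxes; the data are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated friction-coefficient measurements for two waxes
# (illustrative numbers; the thesis does not report its raw data here).
mu_wax_a = np.array([0.041, 0.043, 0.040, 0.042, 0.044, 0.041])
mu_wax_b = np.array([0.047, 0.049, 0.046, 0.048, 0.050, 0.047])

# Welch's t-test; reject the null hypothesis of equal means at the 95% level
t_stat, p_value = stats.ttest_ind(mu_wax_a, mu_wax_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 95%: {p_value < 0.05}")
```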
Functional structures as well as materials provided by nature have always been a great source of inspiration for new technologies. Adapting and improving the discovered concepts, however, demands a detailed understanding of their working principles, while employing natural materials for fabrication tasks requires suitable functionalization and modification.
In this thesis, the white scales of the beetle Cyphochilus are examined in order to reveal unknown aspects of their light transport properties. In addition, the monomer of the material they are made of is utilized for 3D microfabrication.
White beetle scales have been fascinating scientists for more than a decade because they display brilliant whiteness despite their small thickness and the low refractive index contrast. Their optical properties arise from highly efficient light scattering within the disordered intra-scale network structure.
To gain a better understanding of the scattering properties, several previous studies have investigated the light transport and its connection to the structural anisotropy with the aid of diffusion theory. While this framework makes it possible to relate the light scattering to macroscopic transport properties, an accurate determination of the effective refractive index of the structure is required. Due to its simplicity, the Maxwell-Garnett mixing rule is frequently used for this task, although its restriction to particle and feature sizes much smaller than the wavelength is clearly violated for the scales.
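For reference, the standard two-phase form of the Maxwell-Garnett rule, valid only for inclusions much smaller than the wavelength, relates the effective permittivity ε_eff (and thus n_eff = √ε_eff) of spherical inclusions with permittivity ε_i and volume fraction f in a host medium of permittivity ε_m as

\[
\frac{\varepsilon_{\mathrm{eff}}-\varepsilon_m}{\varepsilon_{\mathrm{eff}}+2\varepsilon_m}
= f\,\frac{\varepsilon_i-\varepsilon_m}{\varepsilon_i+2\varepsilon_m}
\quad\Longrightarrow\quad
\varepsilon_{\mathrm{eff}}
= \varepsilon_m\,\frac{\varepsilon_i+2\varepsilon_m+2f\,(\varepsilon_i-\varepsilon_m)}{\varepsilon_i+2\varepsilon_m-f\,(\varepsilon_i-\varepsilon_m)},
\qquad n_{\mathrm{eff}}=\sqrt{\varepsilon_{\mathrm{eff}}}.
\]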
To provide a correct calculation of the effective refractive index, finite-difference time-domain simulations are used here to systematically examine the impact of size effects on the effective refractive index. With this simulation approach, the Maxwell-Garnett mixing rule is shown to break down for large particles. Instead, it is found that a quadratic polynomial function describes the effective refractive index in close approximation, while its coefficients can be obtained from an empirical linear function. As a result, a simple mixing rule is reported that clearly surpasses classical mixing rules when composite media containing large feature sizes are considered. This is important not only for the accurate description of white beetle scales, but also for other turbid media, such as biological tissues in opto-biomedical diagnostics.
Describing light transport by means of diffusion theory moreover neglects any coherent effects, such as interference. Hence, their impact on the generation of brilliant whiteness is currently unknown. To shed light on their role, spatial- and time-resolved light scattering spectromicroscopy is applied to investigate the scales and a model structure of them based on disordered Bragg stacks. For both structures the occurrence of weakly localized photonic modes, i.e., closed scattering loops, is observed, which is further verified in accompanying simulations. As shown in this thesis, leakage from these random photonic modes contributes at least 20% to the overall reflected light. This reveals the importance of coherent effects for a complete description of the underlying light transport properties, an aspect that is entirely missing from the purely diffusive transport picture presumed so far. Identifying the importance of weak localization for the generation of brilliant whiteness paves the way to further enhancing the design of efficient optical scattering media, an issue that has recently drawn great attention.
Unlike their plant-based counterparts, rigid carbohydrates such as chitin are currently unavailable for 3D microfabrication via direct laser writing, despite their great significance in the animal kingdom for the construction of functional microstructures. To close this gap, the monomeric unit of chitin, N-acetyl-D-glucosamine, is functionalized here to serve as a photo-crosslinkable monomer in a non-hydrogel photoresist. Since all previous photoresists based on animal carbohydrates are hydrogel formulations, this establishes a new group of photoresists for direct laser writing.
Moreover, it is shown that the sensitization effect, previously used only in the context of UV curing, can be successfully transferred to direct laser writing to increase the maximum writing speed. This effect is based on the beneficial combination of two photoinitiators.
Here, one photoinitiator is an efficient crosslinking agent for the monomer used but a rather poor two-photon absorber. The other photoinitiator (called the sensitizer) conversely possesses a much higher two-photon absorption coefficient at the applied wavelength but is not well suited as a crosslinking agent. In combination, the energy absorbed by the sensitizer is passed to the photoinitiator, resulting in the formation of the radicals needed to start the polymerization. As this greatly increases the rate at which the photoinitiator is radicalized, resists containing both a photoinitiator and a sensitizer are shown to outperform resists containing only one of the components. Deploying the sensitization effect in direct laser writing therefore offers a simple way to tune the crosslinking ability and the two-photon absorption properties individually by combining existing compounds, compared to the costly chemical synthesis of novel, customized photoinitiators.
In today's working world, organizations face the challenge of continuously adapting to change. Demographic change and rising numbers of work absences due to psychological strain are bringing the well-being and satisfaction of employees in the workplace into focus. The employee survey, as an instrument of organizational development, is one way of designing change processes so that business goals and humanistic goals can be achieved at the same time. When implementing employee surveys, the follow-up processes matter most, since this is where conclusions are drawn from the results of a survey and translated into actions. A look at practice, however, shows that expectations of follow-up processes, and thus of employee surveys, are often disappointed, both on the part of companies and on the part of employees.
Previous research does demonstrate the positive effect of employee surveys and follow-up processes in general, but it remains unclear how individual elements of a follow-up process, and above all the quality of their execution, take effect. This is the first starting point of the present work. In addition, the role of managers in follow-up processes is examined, because among the many considerations and studies of which aspects influence change processes, the special role of managers frequently stands out. Managers are expected to show behavior that goes beyond a classical rational-functional understanding of leadership and encourages employees to engage openly and actively in change processes. One approach to achieving this is Positive Leadership. Here, leaders display behaviors that emphasize the meaningfulness of work, foster positive relationships with employees, show recognition and appreciation, practice a strengths orientation, ensure a positive working climate, include positive communication, support employees in their development and, in particular, enable participation and empowerment. Even though the concept of Positive Leadership is enjoying growing popularity, there is as yet no clear conception of the construct and no established measurement instrument. Furthermore, the concept has not yet been applied in the context of change processes in general or of follow-up processes of employee surveys in particular.
The main goal of the present work is to investigate Positive Leadership in the context of follow-up processes of an employee survey. To this end, four studies were conducted. In Study 1, semi-structured expert interviews (N = 22) explored which steps a follow-up process of an employee survey comprises and how high quality in carrying out these steps can be recognized. In Study 2, a measurement instrument for Positive Leadership was developed and validated in three sub-studies (N1 = 194, N2 = 201, N3 = 124).
In Study 3, a questionnaire study with a sample of employees (N = 1302) and managers (N = 266) demonstrated the importance of the individual steps of the follow-up process and of the quality of their execution. Furthermore, the influence of Positive Leadership on the quality of the follow-up process as well as on work engagement and job satisfaction was established. This applied both to employees and to the managers themselves. Both adherence to and the quality of the follow-up process, as well as Positive Leadership, also affected the change in work engagement and job satisfaction between two employee surveys, in part mediated indirectly via satisfaction with the follow-up process. In addition, using a sample of 242 manager-employee dyads, the effects of discrepancy and congruence in the ratings of Positive Leadership or of the follow-up process were demonstrated. Finally, it was examined to what extent the attribution of successes and failures in the follow-up process is influenced by Positive Leadership.
Study 4 confirmed, in an experimental design (N = 420) using video vignettes, the positive effects of a high-quality follow-up process and of Positive Leadership on work engagement and job satisfaction. In addition, the previous findings were extended by statements about interactions of the investigated factors. It was shown that positive leadership behavior can buffer the effects of poor quality in the follow-up process or of low adherence to the steps of the follow-up process. Moreover, high adherence to the steps of the follow-up process had a positive effect on satisfaction with the follow-up process only if the quality of the steps carried out was high. Study 4 also demonstrated the effect of assumed differences in satisfaction with the follow-up process between employees and managers on the intention to participate in a subsequent employee survey, as well as on job satisfaction and work engagement. Finally, the effects of Positive Leadership on the attribution of successes and failures in the follow-up process were analyzed again, and further effects of this attribution on the intention to participate in future employee surveys were examined.
The studies presented in this dissertation are discussed from a theoretical and methodological perspective. Based on the results, practical recommendations for an improved handling of follow-up processes of employee surveys and of Positive Leadership are derived.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source of bioactive compounds, several of which also show antibiotic activity. With regard to the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance, and unexploited sources should be explored. Therefore, this thesis initially dealt with the identification of productive strains, especially within the group of terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Amongst these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the mere identification of a productive cyanobacterium is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of the prerequisites for compound synthesis. As a result, nitrogen limitation was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. In this way, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a) cells and a high temperature tolerance up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted to working with cyanobacteria, were established: first, a method for quick and easy in vivo vitality estimation of phototrophic cells, and second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when cultivated mixotrophically with fructose or glucose. Interestingly, this effect was enhanced even further when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed clearly decreased growth. This effect might be extremely useful for quick and economical preparation of precultures.
Plant-specific factors affecting short-range attraction and oviposition of European grapevine moths
(2024)
The spread of pests and pathogens is increasingly intensified by climate change and globalization. Two of the most serious insect pests threatening European viticulture are the European grape berry moth, Eupoecilia ambiguella (Hübner), and the European grapevine moth, Lobesia botrana (Denis & Schiffermüller). Larvae feed on the fructiferous organs of grapevine, Vitis vinifera, resulting in high yield and quality losses. Under the aspects of integrated pest management, insecticide measures are only reasonable when other control strategies become ineffective. In order to support the development of a novel decision support system for the application of insecticides, the aim of this thesis was to decipher the plant-specific factors that affect the short-range attraction and oviposition of L. botrana and E. ambiguella.
The focus was placed on the visual, volatile, tactile and gustatory stimuli provided by the host plant after settlement. The use of artificial surfaces as model plants showed that oviposition of both species is affected by the color, the shape and the texture of the oviposition site. To explain the susceptibility of certain grapevine cultivars and phenological stages of the berries to egg infestations, we analysed and compared the chemical composition of the epicuticular waxes of the berry surface as well as the volatile organic compounds emitted by the berries. It turned out that the attractiveness of the wax extracts decreased during ripening of the berries, highlighting a preference for earlier phenological stages of the berries for oviposition. In addition, grapevine cultivars exhibited variations in their volatile composition. The principal components perceived by the females' antennae could not explain the differentiation between cultivars, suggesting that volatiles do not trigger orientation towards certain cultivars. Furthermore, a method was developed to measure the real-time behavioural response of female moths to volatiles. The setup made it possible to quantify the orientation towards a volatile source as well as movements of the antennae and ovipositor, which could be linked to the olfactory and gustatory perception of volatiles during the evaluation of suitable host plants for oviposition. In addition, the risk posed by potential alternative host plants in the vicinity of the vineyard was investigated. This confirmed that L. botrana in particular prefers the stimuli provided by some plants to those of grapevine. Overall, the results suggest that during oviposition, volatiles emitted by the plants and the composition of the plant surface are the most important factors for host plant differentiation.
VR is a steadily growing field of research that expands the perspectives and possibilities of human-computer interaction (Hassan & Hossain, 2022). Even before its active use in everyday school life, studies demonstrated a variety of positive effects of using VR on the learning process (Chavez & Bayona, 2018). So-called immersive learning thus represents a key area of the digital transformation in education. In order to use VR in school lessons, however, learning environments are needed that are adapted to the local circumstances and everyday needs of practical classroom teaching. Such design principles do not yet exist in the education sector (Johnson-Glenberg, 2018). This work is concerned with deriving principles from theory, combining them with design components and, building on this, designing and investigating a VR learning environment. To ensure practical relevance during development and investigation, a design-based research approach was chosen. The design components were evaluated in successive micro-cycles, and design principles were derived from them. The learning materials were designed across subjects for chemistry and geography and evaluated in a practice-oriented manner with participants from four tenth-grade classes of a Gymnasium in Rhineland-Palatinate. The carbon cycle was chosen as the learning content and located in the respective curricula of the two subjects. The main focus was on chemistry, topic area eleven, "Stoffe im Fokus von Umwelt und Klima" (substances in the focus of environment and climate). A reconstruction of a section of the out-of-school learning site "Reallabor Queichland" was chosen as the virtual location. The components were divided into a total of seven micro-cycles, numbered from zero to six. Micro-cycle zero is used to familiarize the participants with the VR system and to mitigate the novelty effect. Micro-cycle one evaluates the base area of the VR learning environment with a focus on the realism of the environment. Micro-cycle two deals with the radius of movement to be chosen within VR. Micro-cycle three investigates the effect of realistic background sounds. Micro-cycles four to six consist of three learning stations with different interaction possibilities: realistic interactions, non-realistic interactions, and a mixture of both. The scales recorded were spatial presence experience, current motivation, realism, perceived usability, perceived learning effectiveness, and the VR scale. The data were analyzed with ANOVAs and path analyses as well as an overarching analysis at the end of the study. Through the design of the components, a very high sense of spatial presence and a very high perceived realism were achieved. In the learning stations, the participants rated the perceived learning effectiveness and usability, as well as the connection between 3D models, their manipulability in VR and the associated effect on learning effectiveness, as very high. In total, twelve design principles could be generated from the available data. These can be used to create new VR learning environments for practical use in school lessons. Theoretical assumptions for a respecification of the process model of spatial presence experience were formulated and tested with the collected data. The adaptation of the model to modern VR headsets and cognitively demanding VR learning environments was the focus here and yielded very good model-fit values. In further studies, these assumptions should be tested with larger samples.
On the basis of normative regulations, current research projects and their findings (Kuhlmann et al. 2008 and 2012), experimental and numerical investigations were carried out on large anchor plates with more headed studs than currently permitted by the standards. The aim of the investigations was to develop an approach for the load-bearing capacity of large, flexible anchor plates with headed studs. By varying the governing parameters, such as the anchor plate thickness, the length of the headed studs, the amount of supplementary reinforcement and the condition of the concrete, a component model could be verified on the basis of the experimental investigations. Possible failure mechanisms, such as steel failure of the headed studs in tension, yielding of the anchor plate due to T-stub formation, concrete cone failure and steel failure of the supplementary reinforcement, could be represented by means of these parameters. Furthermore, for the failure mode 'concrete cone failure', the surface reinforcement turned out to be an additional parameter in the post-peak load range.
The model developed on the basis of DIN EN 1993-1-8, together with the consideration of the component stiffnesses, enables the design of rigid and flexible anchor plates. By including the stiffnesses of the individual components, the overall stiffness of a connection configuration can be calculated in order to obtain ductile load-bearing behaviour. In addition to various possible yield zones on the anchor plate resulting from different geometries and arrangements of the fasteners, concrete cone failure is taken into account in the model as a function of possible additional supplementary reinforcement.
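As a minimal illustration of the spring-model idea behind the component method (the tension-side components acting as springs in series), the sketch below combines a few stiffness coefficients into an overall stiffness. The component names and values are placeholders chosen for illustration only and are not the calibrated model developed in the thesis.

```python
# Component-method idea: the effective tension-side stiffness follows from the
# stiffness coefficients k_i of the basic components acting as springs in series,
# 1/k_total = sum(1/k_i). Placeholder values, not the calibrated thesis model.
def series_stiffness(stiffnesses):
    """Effective stiffness of components coupled in series (e.g. in kN/mm)."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)

components = {
    "headed stud in tension": 900.0,                        # kN/mm (illustrative)
    "anchor plate in bending (T-stub)": 350.0,              # kN/mm (illustrative)
    "concrete cone / supplementary reinforcement": 600.0,   # kN/mm (illustrative)
}
k_total = series_stiffness(components.values())
print(f"k_total = {k_total:.0f} kN/mm")
```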
The model described in this work for the tension side that develops in rigid as well as flexible anchor plates with more fasteners than currently permitted by the standard could be verified by means of experimental and numerical tests. The plastic design approach shows good agreement with the experimental investigations and the numerical parameter studies across all test series.
In a second step, the effects of short-term relaxation of the concrete due to restraint on large anchor plates in combination with headed studs were investigated. With the spring model developed following the component method of DIN EN 1993-1-8, time-dependent deformations of concrete due to creep and shrinkage can be taken into account. Using this model, which was verified by means of experimental and numerical tests, it is possible to investigate the effects of restraint on anchor plates.