Doctoral Thesis
Aflatoxins, a group of mycotoxins produced by various mold species of the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, their natural habitat, remains a relatively unexplored area of aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges of detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method for monitoring aflatoxins in soil, and to scrutinize the mechanisms and extent of their occurrence, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. Using an efficient extraction solvent mixture of acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected with this procedure, underscoring the complexities of field monitoring, including rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20 to 65 days. The rate of dissipation was influenced by soil properties, most notably soil texture, and by the initial aflatoxin concentration. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome in terms of microbial biomass, activity, and catabolic functionality.
This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. Several critical questions nevertheless remain open, underlining the need for further research toward a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges of field monitoring, elucidate the mechanisms of microbial and photochemical dissipation in soil, and investigate the ecological consequences of aflatoxins in heavily affected regions, taking into account interactions with environmental and anthropogenic stressors. Addressing these questions will contribute to a comprehensive understanding of the environmental impact of aflatoxins in soil and, ultimately, to more effective strategies for aflatoxin management in agriculture.
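The reported half-lives translate directly into first-order dissipation rates. The following is an illustrative sketch only, assuming simple exponential decay, a common default for soil dissipation that the abstract does not explicitly state:

```python
import math

def residual_fraction(t_days, half_life_days):
    """Fraction of the initial aflatoxin concentration remaining after t days,
    assuming first-order dissipation: C(t)/C0 = exp(-k t) with k = ln 2 / t_half."""
    k = math.log(2) / half_life_days
    return math.exp(-k * t_days)

def days_to_fraction(fraction, half_life_days):
    """Time until the concentration drops to a given fraction of C0."""
    return -half_life_days * math.log(fraction) / math.log(2)
```

Under this assumption, the reported half-life range of 20 to 65 days would leave roughly 4% to 38% of the initial concentration after 90 days, illustrating why detection windows in field monitoring are narrow.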
Velocity Based Training is an approach to load management in resistance training that uses the volitionally maximal mean concentric velocity against a given load to control training intensity, and the extent of the intra-set concentric velocity loss to control intra-set muscular fatigue. However, the premise inherent in this approach, namely moving at volitionally maximal concentric velocities, means that fatigue control based on relative velocity loss is not feasible when resistance training is performed at volitionally submaximal velocities. This doctoral project therefore addressed the overarching research question of how well an adapted velocity-based load-management approach built on the Minimum Velocity Threshold (MVT), which uses a "Relative Stopping Velocity Threshold" (RSVT, calculated as a multiple of the MVT in percent) for the objective autoregulation of set duration, can control the degree of muscular fatigue within a training set performed at volitionally submaximal concentric movement velocity.
To answer this overarching research question, an explanatory, prospective study with a quasi-experimental design was conducted. At a first session, each participant's individual one-repetition maximum (1-RM) was determined for the barbell bench press and deadlift; the actual testing took place at a second session. There, one test set at volitionally maximal and one at volitionally submaximal concentric movement velocity were performed per exercise at a standardized intensity of 75% 1-RM, while the concentric velocity of every repetition was recorded with an inertial measurement unit in order to examine the fatigue-induced velocity loss of the repetitions at the end of an exhaustive test set.
In answer to the overarching research question, the RSVT proved fundamentally suitable for controlling intra-set muscular fatigue in resistance training at volitionally submaximal concentric movement velocity. For fitness- and health-oriented individuals, a target corridor of RSVT = 171.4 to 186.6% MVT was derived. If a barbell bench press set at 75% 1-RM and volitionally submaximal concentric velocity is continued until the mean concentric velocity (MV) of a repetition drops into this corridor due to fatigue, two to three further repetitions should remain possible before the point of momentary concentric muscle failure is reached. For performance-oriented, trained individuals, a target corridor of RSVT = 183.8 to 211.3% MVT was derived. Once the measured MV of a repetition drops into this corridor due to fatigue, it can be assumed with reasonable certainty that one to two further repetitions can still be performed before momentary concentric muscle failure.
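The corridor logic amounts to a simple threshold check on the measured mean concentric velocity. In the sketch below, the MVT value of 0.15 m/s is a hypothetical example, not a figure from this study; only the corridor percentages come from the abstract:

```python
def rsvt_stop_velocity(mvt, rsvt_percent):
    """Absolute stopping velocity (m/s) from the individual MVT and an RSVT in % of MVT."""
    return mvt * rsvt_percent / 100.0

def in_target_corridor(mv, mvt, lo_percent, hi_percent):
    """True once the measured mean concentric velocity (MV) has dropped into
    the corridor [lo_percent, hi_percent] of the MVT."""
    return rsvt_stop_velocity(mvt, lo_percent) <= mv <= rsvt_stop_velocity(mvt, hi_percent)
```

For an assumed MVT of 0.15 m/s, the fitness-oriented corridor of 171.4 to 186.6% MVT corresponds to roughly 0.26 to 0.28 m/s; a set would be terminated two to three repetitions after MV falls into this band.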
Through this extension of Velocity Based Training, the present dissertation provides an adapted control approach that, for the first time, makes velocity-based load management in resistance training meaningfully applicable at volitionally submaximal concentric movement velocities. Owing to the limitations of the study, however, further research is needed to establish the validity, transferability, and effectiveness of the MVT-based approach.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics with differential equations is an indispensable approach to unraveling the complex dynamics of such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
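A minimal sketch of the social-force idea for a single pedestrian follows. The parameter values (relaxation time tau, repulsion strength A, range B) are illustrative defaults, not the calibration used in the thesis, and the Eikonal-based optimal-path term is omitted:

```python
import math

def social_force(x, v, neighbours, v_des, tau=0.5, A=2.0, B=0.3):
    """Acceleration of one pedestrian at position x with velocity v:
    relaxation toward the desired velocity v_des plus exponential
    repulsion from neighbouring pedestrians."""
    ax = (v_des[0] - v[0]) / tau          # drive term toward desired velocity
    ay = (v_des[1] - v[1]) / tau
    for nx, ny in neighbours:
        dx, dy = x[0] - nx, x[1] - ny
        dist = math.hypot(dx, dy)
        if dist > 1e-9:
            mag = A * math.exp(-dist / B) / dist   # repulsion decays with distance
            ax += mag * dx
            ay += mag * dy
    return ax, ay
```

Integrating these accelerations with, e.g., an explicit Euler step for each pedestrian yields the coupled ODE system described above; the optimal-path computation would replace the fixed desired velocity.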
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and interact with pedestrians only one-way, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies involving changes in speed and path. We extend this model to the interaction between pedestrians and vehicular traffic, modelling the interactions of vehicles in lane traffic with a car-following approach. We observe how the deceleration and braking of vehicles is executed at pedestrian crossings depending on the right of way.
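The car-following idea can be illustrated with an optimal-velocity-type rule, in which each vehicle relaxes toward a gap-dependent target speed. This is a generic sketch with illustrative parameters, not the specific model formulated in the thesis:

```python
def car_following_accel(v, gap, v_max=13.9, tau=1.0, d_min=2.0):
    """Optimal-velocity-type car-following: relax toward a target speed that
    grows with the gap to the leading vehicle (or a pedestrian crossing),
    and brake when the gap closes to the minimum distance d_min."""
    v_opt = v_max * (1.0 - d_min / gap) if gap > d_min else 0.0
    return (v_opt - v) / tau
```

With this rule a vehicle approaching an occupied pedestrian crossing sees a shrinking gap, so its target speed and hence its acceleration turn negative, reproducing the deceleration-and-braking behaviour described above.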
As a second objective, we study disease contagion in moving crowds, considering the influence of crowd motion in a complex dynamical environment on the course of infection among pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from kinetic equations based on a social force model. It is coupled, along with an Eikonal equation, to a non-local SEIS contagion model for disease spread. Apart from the description of local contacts, the influence of contact times is also modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density, which affect the contact time and, consequently, the rate of spread of infection.
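The compartmental backbone of an SEIS model (without the non-local contact and contact-time terms of the coupled model above) can be sketched as follows; the parameter names are generic, not the thesis's notation:

```python
def seis_rhs(y, beta, sigma, gamma):
    """SEIS right-hand side: susceptible -> exposed -> infectious -> susceptible.
    Unlike SEIR, recovered individuals return to S, i.e. no lasting immunity."""
    S, E, I = y
    N = S + E + I
    infection = beta * S * I / N     # contact-driven transmission
    dS = -infection + gamma * I      # recovery returns individuals to S
    dE = infection - sigma * E       # exposed become infectious at rate sigma
    dI = sigma * E - gamma * I
    return dS, dE, dI
```

In the coupled model, the transmission term additionally depends on local crowd density and accumulated contact time, which is how the flow geometry feeds back into the infection rate.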
Finally, the social force model is compared to a variable-speed, rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios the collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all models are carried out using a mesh-free particle method based on least-squares approximations. This mesh-free numerical framework provides an efficient and elegant way to handle complex geometries involving boundaries and stationary or moving obstacles.
Mechanistic models of vector-borne disease spread have been studied since the 19th century, and the relevance of mathematical modeling and numerical simulation of disease spread continues to grow. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans; the Zika virus disease is an example of an arboviral disease in this category. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed, and the analytical results proven for this model are verified numerically. Another SIRUV model is presented with a different formulation of the model parameters; the resulting model explicitly incorporates the dependence of disease spread on the ratio of mosquito to human population size. To capture the spatial as well as temporal dynamics of disease spread, a meta-population model based on the SIRUV model was developed. The spatial domain is divided into patches, which may denote mutually exclusive spatial entities such as administrative areas, districts, provinces, cities, states, or countries. The research focuses on short-term movements, i.e., the commuting behavior of humans across patches, which is incorporated into the multi-patch meta-population model through a matrix of residence-time fractions of humans in each patch. Simplified analytical results show that, for an exemplary scenario studied numerically, the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of human commuting behavior to disease spread is demonstrated with numerical results from this model.
Local and non-local commuting are incorporated into the meta-population model in a numerical example. Finally, a PDE model is developed from the multi-patch model.
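A plausible single-patch SIRUV right-hand side, with both vector-borne and direct human-to-human transmission, can be sketched as follows; the exact formulation and parameter names in the thesis may differ:

```python
def siruv_rhs(y, beta_v, beta_h, gamma, theta, mu):
    """SIRUV sketch: humans S, I, R; vectors U (susceptible), V (infectious).
    Humans are infected by infectious vectors (beta_v) and directly by
    infectious humans (beta_h); vectors are born/die at rate mu and are
    infected by biting infectious humans (theta)."""
    S, I, R, U, V = y
    N = S + I + R                    # human population size
    M = U + V                        # vector population size
    infection = (beta_v * V / N + beta_h * I / N) * S
    dS = -infection
    dI = infection - gamma * I
    dR = gamma * I
    dU = mu * M - theta * U * I / N - mu * U
    dV = theta * U * I / N - mu * V
    return dS, dI, dR, dU, dV
```

Both population sizes are conserved by construction, which is the basic sanity check for such a compartmental formulation; the ratio M/N enters the transmission terms, reflecting the dependence on mosquito-to-human population size highlighted above.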
In this thesis, material removal mechanisms in grinding are investigated, considering both grit-workpiece and grinding wheel-workpiece interactions. For the grit-workpiece interaction at the micrometer scale, single-grit scratch experiments were performed to investigate the material removal mechanisms in grinding, namely rubbing, plowing, and cutting. The experiments were analyzed in terms of material removal, process forces, and specific energy. A finite element model is developed to simulate the single-grit scratch process. As part of this development, 2D and 3D models are built: the 2D model is used to test material parameters and various mesh discretization approaches, while the 3D model, adopting the material parameters tested in the 2D model, is compared against experimental results for various mesh discretizations. The simulation model is validated against process forces and ground topography from experiments, and is further scaled to simulate multiple grit-workpiece interactions, again validated against experimental results. As a final step, simulation models are developed to simulate material removal due to the interaction of grinding wheel and workpiece. A virtual grinding wheel topography model is employed to demonstrate an approach for upscaling a grinding process from grit-workpiece to wheel-workpiece interaction. In conclusion, practical implications and the scope for future studies are derived from the developed simulation models.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured will choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While the premiums in markets with perfect competition are calculated in order to make no profit at all, insurers try to maximize their margins in a monopolistic market.
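The insured's coverage choice can be illustrated with a small expected-utility maximization. Exponential utility and all numerical parameters below are assumptions made for this sketch, not the specification used in the thesis:

```python
import math

def expected_utility(a, premium_rate, p, loss, wealth, gamma=1.0):
    """Expected exponential utility when insuring a fraction a of a possible loss
    that occurs with probability p; the premium is proportional to coverage."""
    u = lambda w: -math.exp(-gamma * w)
    premium = a * premium_rate * loss
    return (1 - p) * u(wealth - premium) + p * u(wealth - premium - (1 - a) * loss)

def optimal_coverage(premium_rate, p, loss, wealth, steps=1000):
    """Grid search over coverage fractions in [0, 1] for the utility maximiser."""
    return max((i / steps for i in range(steps + 1)),
               key=lambda a: expected_utility(a, premium_rate, p, loss, wealth))
```

The sketch reproduces the classical Mossin-type pattern: at an actuarially fair premium (premium_rate = p) full coverage is optimal, while a loaded premium pushes the optimum to partial coverage, the same mechanism that drives the push-out effect discussed below when premiums become too expensive for one customer type.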
In markets modeled in this way, several phenomena become evident. Perhaps the most important is the so-called push-out effect: when customers with different attributes are insured together, insurance may become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has previously been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products, such as life, health, and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can and cannot be expected.
Machine learning regression approaches such as neural networks have gained vast popularity in recent years, as the exponential growth of computing power has enabled larger and more evolved networks that perform increasingly complex tasks. Our feasibility study on the use of neural networks for regressing equilibrium insurance premiums shows that this regression is quite robust and that the risk of overfitting is negligible, provided the regression is performed on at least a few thousand data points.
Grouping customers of different risk types into contracts is important for the stability and robustness of an insurance market. This motivates the study of the optimal assignment of risk classes to contracts, also known as rating classes. We provide a theoretical framework drawing on techniques from non-linear optimization, convex analysis, herding theory, game theory, and combinatorics. We further show that the market specifications have a large impact on the insurer's optimal allocation of risk classes to contracts; however, an optimal risk class assignment need not exist for every specification.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Climate change will have severe consequences for Eastern Boundary Upwelling Systems (EBUS). Owing to their tremendous primary production, they host the largest fisheries in the world, supporting the livelihoods of millions of people. It is therefore of utmost importance to better understand predicted impacts, such as changing upwelling intensities and light limitation, on the structure and trophic role of protistan plankton communities, which form the basis of the food web. Numerical models predict an increase in the frequency of eddy formation. These ocean features are of particular importance because of their still poorly understood influence on the distribution and diversity of plankton communities and on access to resources. My PhD thesis comprises two subjects conducted within the large-scale cooperation projects REEBUS (Role of Eddies in Eastern Boundary Upwelling Systems) and CUSCO (Coastal Upwelling System in a Changing Ocean).
Subject I of my study was conducted within the multidisciplinary framework REEBUS to investigate the influence of eddies on the biological carbon pump in the Canary Current System (CanCS). More specifically, the aim was to find out how mesoscale cyclonic eddies affect the regional diversity, structure, and trophic role of protistan plankton communities in a subtropical oligotrophic oceanic offshore region.
Samples were taken during the M156 and M160 cruises in the Atlantic Ocean around Cape Verde in July and December 2019, respectively. Three eddies of different ages and three water layers (the deep chlorophyll maximum (DCM), directly beneath the DCM, and the oxygen minimum zone (OMZ)) were sampled; additional stations without eddy perturbation served as references. The effect of oceanic mesoscale cyclonic eddies on protistan plankton communities was analyzed with three approaches: (i) V9 18S rRNA gene amplicons were examined to analyze the diversity and structure of the plankton communities and to infer their role in the biological carbon pump; (ii) by assigning functional traits to taxonomically assigned eDNA sequences, functional richness and ecological strategies (ES) were determined; and (iii) grazing experiments were conducted to assess abundance and carbon transfer from prokaryotes to phagotrophic protists.
All three eddies examined differed in ASV abundance, diversity, and taxonomic composition, with the most pronounced differences in the DCM. Dinoflagellates were the most abundant taxon in all three depth layers; other dominant taxa were radiolarians, Discoba, and haptophytes. The trait approach could assign only ~15% of all ASVs but revealed a generally high functional richness, and no unique ES was determined within any specific eddy. This indicates pronounced functional redundancy, which is recognized to correlate with ecosystem resilience and robustness by providing a degree of buffering capacity in the face of biodiversity loss. Elevated microbial abundances as well as bacterivory were clearly associated with mesoscale eddy features, albeit with remarkable seasonal fluctuations. Since eddy activity is expected to increase globally under future climate change scenarios, cyclonic eddies could counteract climate change by enhancing carbon sequestration to abyssal depths. The findings demonstrate that cyclonic eddies are unique, heterogeneous, and abundant ecosystems with trapped water masses, in which characteristic protistan plankton develop as the eddies age and migrate westward into subtropical oligotrophic offshore waters. Eddies thus influence regional protistan plankton diversity both qualitatively and quantitatively.
Subject II of my PhD project contributed to the CUSCO field campaign to identify the influence of varying upwelling intensities in combination with distinct light treatments on the whole food web structure and carbon pump in the Humboldt Current System (HCS) off Peru. To accomplish such a task, eight offshore-mesocosms were deployed and two light scenarios (low light, LL; high light, HL) were created by darkening half of the mesocosms. Upwelling was simulated by injecting distinct proportions (0%, 15%, 30% and 45%) of collected deep-water (DW) into each of the moored mesocosms. My aim was to examine the changes in diversity, structure, and trophic role of protistan plankton communities for the induced manipulations by analyzing the V9 18S rRNA gene amplicons and performing short-term grazing experiments.
The upwelling simulations induced a significant increase in alpha diversity under both light conditions. In the austral summer simulation, reflected by the HL conditions, a generally higher alpha diversity was recorded than in the austral winter simulation, represented by the LL treatment. Significant alterations of the protistan plankton community structure were likewise observed. Diatoms were associated with increased levels of DW addition in the mimicked austral winter situation, while under nutrient depletion chlorophytes exhibited high relative abundances in that scenario. Dinoflagellates dominated the austral summer condition in all upwelling simulations. Tendencies toward fewer unicellular eukaryotes and increased prokaryotic abundances were determined under light limitation, and protistan-mediated mortality of prokaryotes decreased by ~30% in the mimicked austral winter scenario.
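Alpha diversity statements of this kind are typically based on indices such as the Shannon index computed from ASV read counts; the following is a minimal sketch (the specific index used in the study is not stated above):

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum p_i ln p_i over ASV read counts,
    where p_i is the relative abundance of ASV i in the sample."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)
```

An evenly distributed community of n ASVs attains the maximum H' = ln n, so both richness and evenness contribute to the diversity differences reported between the light and upwelling treatments.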
The findings indicate that the microbial loop is a more relevant factor in the structure of the food web in austral summer and is more focused on the utilization of diatoms in austral winter in the HCS off Peru. It was evident that distinct light intensities coupled with multiple upwelling scenarios could lead to alterations in biochemical cycles, trophic interactions, and ecosystem services. Considering the threat of climate change, the predicted relocation of EBUS could limit primary production and lengthen the food web structure with severe socio-economic consequences.
To promote local mobility, and in particular walking as the most basic form of mobility, the opportunity to participate in public street space is essential for everyone, and especially for mobility-impaired user groups. Participation for all can be achieved only through a barrier-free environment. In this context it is necessary to establish a continuously barrier-free pedestrian network, with pedestrian facilities (walking areas, crossings, stairs, ramps, and elevators) designed accordingly. However, no comprehensible and practice-oriented method for assessing the accessibility of pedestrian networks currently exists. This is where the present research comes in. By developing a method for assessing the existing accessibility of pedestrian networks by means of quality levels, a practical application tool is created. It is aimed at responsible actors, among others in planning, politics, and administration, enabling them to prioritize and implement measures for removing barriers.
The assessment method is based on interviews and surveys of experts and the affected user groups, with a focus on persons with motor and visual impairments. The surveys addressed the degree of difficulty, per user group, in using pedestrian facilities in public space when these do not comply with the technical design standards. The assessment method translates accessibility into a comprehensible and traceable quantity by converting the difficulties into a perceived additional distance; in addition to this perceived distance, the actual additional distance caused by detours is also taken into account. Building on the assessment of individual pedestrian facilities, routes, connections, and entire pedestrian networks can then be assessed. The basic procedure is the same for all user groups: it consists of four essential steps and yields one of six accessibility quality levels (QSB, levels A to F). Within this research, the transition from level D to level E is defined, for the majority of the user groups considered, as the boundary between independent use of pedestrian facilities and the need for outside assistance. The developed method provides a sound basis for assessing pedestrian networks with respect to accessibility. Thanks to its modularity and flexibility, further aspects as well as further user groups can be integrated. Continuous application of the method and consideration of accessibility from the outset in every planning process are important, as is the legal anchoring of step-by-step barrier-free redesign in accordance with recognized technical standards. Only in this way can a continuously barrier-free network emerge, enabling all people, with or without mobility impairments, to participate in public street space without outside help. Moreover, the increased attractiveness promotes local mobility and can persuade people to walk, or to use a wheelchair, for short distances. Ultimately, a reduction in CO2 emissions is also conceivable if cars are not used, or are used less often, for short trips. Walking is the most sustainable and environmentally friendly mode of transport, and a barrier-free environment thus ultimately contributes to climate protection.
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increased degree of digitalization and the personalization of goods and products require an enhanced and flexible robot deployment by operating several multi-robot systems along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced consisting of two decoupled optimization-based control policies, a binary problem for task planning, and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Worm gears are usually made of a steel worm and a bronze worm wheel and are used for the single-stage transmission of rotary motion at high gear ratios. A disadvantage of worm gears is the relatively high wear caused by the high sliding friction in the tooth contact. Suitable lubrication can reduce friction and wear, limiting the temperature rise in operation and thus extending the service life of the gearbox. Because of their pronounced cooling effect, worm gears are predominantly lubricated with oils in practice. Grease-type lubricants are also used but have a lower cooling effect than liquid lubricants. In vacuum applications or under extreme operating conditions, such as high- or low-temperature applications and low hydrodynamic speeds, these conventional lubricants lose their lubricating effect. Solid lubricants are used as an alternative.
Festschmierstoffe können im Allgemeinen auf verschiedene Weise in den Kontaktstellen von Maschinenelementen verwendet werden. In dieser Arbeit wird das Prinzip der Transferschmierung durch ein Opferbauteil eingesetzt. Hierbei werden Compounds aus strahlenmodifiziertem Polytetrafluorethylen (PTFE) und Polyamid (PA) als Opferbauteil im Schneckengetriebe verwendet, sodass die Stahlschnecke zeitgleich mit dem Bronze-Schneckenrad und dem Opferrad aus PA-PTFE-Compound im Zahneingriff steht. Durch die Belastung des Opferrades mit einem relativ kleinen Drehmoment verschleißt das Opferrad, wodurch der PTFE-Festschmierstoff freigesetzt und an der Stahloberfläche deponiert wird. Dies führt zur Bildung eines Transferfilms, welcher zur Schmierung des Kontakts
zwischen der Stahlschnecke und dem Bronze-Schneckenrad führt. Die Mechanismen des Auf- und Abbaus solcher Transferfilme in Schneckengetrieben sind derzeit unbekannt und werden in dieser Arbeit anhand experimenteller Untersuchungen erforscht. Hierzu wurden tribologische Versuche an Modellprüfständen durchgeführt, wodurch das reib- und Verschleißverhalten an Stahl-Bronze-Kontakten untersucht wurde. Als Modellprüfstände kamen der Block-auf-Ring-, der Block-Zwei-Scheiben- und der Drei-Scheiben-Prüfstand zum Einsatz. Anschließend wurden Bauteilversuche auf einem Schneckengetriebeprüfstand durchgeführt, um die aus den Modellversuchen gewonnenen Erkenntnisse zu validieren. Mit Hilfe von oberflächenanalytischen Techniken wurden die Prüfkörper auf der Mikroskala untersucht, um die Qualität und Quantität des aufgebauten Transferfilms zu bestimmen.
Cancer, a complex and multifaceted disease, continues to challenge the boundaries of biomedical research. In this dissertation, we explore the complexity of cancer genesis, employing multiscale modeling, abstract mathematical concepts such as stability analysis, and numerical simulations as powerful tools to decipher its underlying mechanisms. Through a series of comprehensive studies, we mainly investigate cell cycle dynamics, the delicate balance between quiescence and proliferation, the impact of mutations, and the co-evolution of healthy and cancer stem cell lineages. The introductory chapter provides a comprehensive overview of cancer and the critical importance of understanding its underlying mechanisms. Additionally, it establishes the foundation by elucidating key definitions and presenting various modeling perspectives on cancer genesis. Next, cell cycle dynamics are explored, revealing the temporal oscillations that govern the progression of cells through the cell cycle.
The first half of the thesis investigates the cell cycle dynamics and evolution of cancer stem cell lineages by incorporating feedback regulation mechanisms. The pivotal role of feedback loops in driving the expansion of cancer stem cells is thoroughly studied, offering new perspectives on cancer progression. Furthermore, the mathematical rigor of the model is addressed by deriving well-posedness conditions, thereby strengthening the reliability of our findings and conclusions. Expanding our modeling scope, we then explore the interplay between quiescent and proliferating cell populations, shedding light on the importance of their equilibrium in cancer biology. The models developed in this context offer potential avenues for targeted cancer therapies aimed at the cell populations critical for cancer progression. The second half of the thesis focuses on multiscale modeling of proliferating and quiescent cell populations incorporating cell cycle dynamics, and its extension with mutation acquisition. Following rigorous mathematical analysis, the well-posedness of the proposed modeling frameworks is studied along with steady-state solutions and stability criteria.
In a nutshell, this thesis represents a significant stride in our understanding of cancer genesis, providing a comprehensive view of the complex interplay between cell cycle dynamics, quiescence, proliferation, mutation acquisition, and cancer stem cells. The journey towards conquering cancer is far from over. However, this research provides valuable insights and directions for future investigation, bringing us closer to the ultimate goal of mitigating the impact of this formidable disease.
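To give the feedback-regulated stem cell models described above a concrete flavor, consider a minimal two-compartment system in which differentiated cells D down-regulate the self-renewal probability p of stem cells S. The Hill-type feedback p(D) = p0/(1 + kD), the rates, and all parameter values below are illustrative assumptions for a sketch, not the thesis's equations:

```python
# Minimal stem-cell model with negative feedback on self-renewal:
#   dS/dt = (2*p(D) - 1) * v * S
#   dD/dt = 2*(1 - p(D)) * v * S - d * D,   with p(D) = p0 / (1 + k*D)
# Integrated with explicit Euler; all parameters are illustrative.
p0, k, v, d = 0.7, 0.01, 1.0, 0.1

def p(D):
    """Self-renewal probability, suppressed by differentiated cells."""
    return p0 / (1.0 + k * D)

S, D, dt = 1.0, 0.0, 0.01
for _ in range(100_000):  # integrate to t = 1000
    dS = (2.0 * p(D) - 1.0) * v * S
    dD = 2.0 * (1.0 - p(D)) * v * S - d * D
    S, D = S + dt * dS, D + dt * dD

# The population settles where p(D) = 1/2, i.e. D* = (2*p0 - 1)/k = 40,
# and dD/dt = 0 then fixes S* = d*D*/v = 4.
```

The steady state illustrates the qualitative point made in the thesis summary: weakening the feedback (smaller k, larger p0) raises the stem cell equilibrium, which is the kind of mechanism by which disturbed feedback loops can drive cancer stem cell expansion.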
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as a specific paradigm for discretization, combining geometric representations with Non-Uniform Rational B-Splines (NURBS) and Finite Element discretizations. Secondly, we primarily concentrate on mixed formulations exhibiting a saddle-point structure and generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes.
The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application extends to linear elasticity theory, extensively discussing mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
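For orientation, the mixed formulation in the classical de Rham case, which is the starting point that the thesis generalizes to second-order complexes, can be stated as follows (harmonic forms are assumed trivial here for brevity):

```latex
% Mixed weak formulation of the abstract Hodge--Laplace problem with
% source $f$: find $\sigma \in H\Lambda^{k-1}$, $u \in H\Lambda^{k}$ with
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle &= 0
  && \forall\, \tau \in H\Lambda^{k-1},\\
\langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle &= \langle f, v \rangle
  && \forall\, v \in H\Lambda^{k}.
\end{aligned}
```

Discrete well-posedness then hinges on choosing subspaces that form a subcomplex admitting bounded cochain projections; the spline spaces studied in the thesis play this role for the second-order complexes.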
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints, the system-wide problem can be decomposed into individual subproblems. These subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
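The quadratic-surrogate idea can be sketched on a toy constraint-coupled problem: two subsystems with local quadratic costs share a resource budget, the dual function is sampled around the current multiplier, a quadratic surrogate is fitted by regression (in the spirit of QADA), and the surrogate's maximizer becomes the next multiplier. The problem data, the sampling offsets, and the omission of cutting planes (the toy dual is smooth) are all simplifying assumptions of this sketch:

```python
import numpy as np

# Two subsystems i with local costs f_i(x) = (x - a_i)^2, coupled by the
# shared resource limit x_1 + x_2 <= c.  All numbers are illustrative.
a, c = np.array([3.0, 2.0]), 4.0

def dual(lam):
    """Evaluate the dual function q(lam) = min_x sum_i f_i(x_i) + lam*(sum x - c).
    For these quadratic subproblems the local minimizers are closed-form."""
    x = a - lam / 2.0  # minimizer of f_i(x_i) + lam * x_i per subsystem
    return x, np.sum((x - a) ** 2) + lam * (np.sum(x) - c)

lam = 0.0
for _ in range(20):
    # Sample the dual function near the current multiplier and fit a
    # quadratic surrogate q(l) ~ p[0]*l^2 + p[1]*l + p[2] by regression.
    ls = np.array([lam, lam + 0.5, lam + 1.0])
    qs = np.array([dual(l)[1] for l in ls])
    p = np.polyfit(ls, qs, 2)
    if abs(p[0]) < 1e-12:
        break
    lam = max(0.0, -p[1] / (2.0 * p[0]))  # surrogate maximizer, projected

x_opt, _ = dual(lam)  # coordinated solution satisfies x_1 + x_2 = c
```

Because the toy dual is exactly quadratic, the surrogate maximizer lands on the dual optimum in one step; on general nonsmooth duals, the cutting planes used by QADA and QNDA guard against overshooting.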
Lubricated tribological contact processes are important both in nature and in many technical applications. Fluid lubricants play an important role in contact processes, e.g. they reduce friction and cool the contact zone. The fundamentals of lubricated contact processes on the atomistic scale are, however, not yet fully understood. A lubricated contact process is defined here as a process in which two solid bodies that are in close proximity, and possibly in places in direct contact, carry out a relative motion while the remaining volume is submerged in a fluid lubricant. Such lubricated contact processes are difficult to examine experimentally. Atomistic simulations are an attractive alternative for investigating the fundamentals of such processes. In this work, molecular dynamics simulations were used to study different elementary processes of lubricated tribological contacts. A simplified, yet realistic simulation setup using classical force fields was developed for this purpose. In particular, the two solid bodies were fully submerged in the fluid lubricant, such that the squeeze-out was realistically modeled. The velocity of the relative motion of the two solid bodies was imposed as a boundary condition. Two types of cases were considered: i) a model system based on synthetic model substances, which enables a direct, but generic, investigation of the influence of molecular interaction features on the contact process; and ii) real-substance systems, where the force fields describe specific real substances. Using the model system i), the reproducibility of the findings obtained from the computer experiments was also critically assessed. In most cases, the dry reference case was studied as well. Both mechanical and thermodynamic properties were investigated, focusing on the influence of lubrication.
The following properties were studied: the contact forces, the coefficient of friction, the dislocation behavior in the solid, the formation of the chip and the groove, the squeeze-out behavior of the fluid in the contact zone, the local temperature and the energy balance of the system, the adsorption of fluid particles on the solid surfaces, and the formation of a tribofilm. Systematic studies were carried out to elucidate the influence of the wetting behavior, the molecular architecture of the lubricant, and the lubrication gap height on the contact process. As expected, the presence of a fluid lubricant reduces the temperature in the vicinity of the contact zone. The lubricant is, moreover, found to have a significant influence on the friction and on the energy balance of the process. A lubricant reduces the coefficient of friction compared to the dry case in the starting phase of a contact process, while lubricant molecules remain in the contact zone between the two solid bodies. This is a result of an increased normal force and a slightly decreased tangential force in the starting phase. When the fluid molecules are squeezed out with ongoing contact time and the contact zone is essentially dry, the coefficient of friction is increased by the presence of a fluid compared to the dry case. This is attributed to the imprinting of individual fluid particles into the solid surface, which is energetically unfavorable. By studying the contact process over a wide range of gap heights, the entire range of the Stribeck curve is obtained from the molecular simulations.
The three main lubrication regimes of the Stribeck curve and their transition regions are thereby covered, namely boundary lubrication (significant elastic and plastic deformation of the substrate), mixed lubrication (adsorbed fluid layers dominate the process), and hydrodynamic lubrication (shear flow is set up between the surface and the asperity). The atomistic effects in the different lubrication regimes are elucidated. Notably, the formation of a tribofilm is observed, in which lubricant molecules are immersed into the metal surface. The formation of a tribofilm is found to have important consequences for the contact process. The work done by the relative motion is found to mainly dissipate and thereby heat up the system; only a minor part of the work causes plastic deformation. Finally, the assumptions, simplifications, and approximations applied in the simulations are critically discussed, which highlights possible future work.
This work investigates co-consolidation during thermoforming between continuously fiber-reinforced, partially consolidated CF/PEEK tape preforms and continuously fiber-reinforced, fully consolidated CF/PEEK tape laminates. Co-consolidation denotes the creation of a welded joint between two or more thermoplastics by heating them separately, bringing the joining surfaces together, and cooling rapidly under pressure in an isothermal mold. The targeted application is the welding of stiffeners onto tape preforms during thermoforming, so that downstream joining processes for such stiffeners become obsolete and the cycle time of the thermoforming process remains unchanged.
The results show that the degree of partial consolidation of the tape preforms, independent of the chosen mold pressure settings, has no influence on the consolidation of the tape laminates after thermoforming. In the region of a stiffener, a comparatively higher mold pressure is required to consolidate the partially consolidated tape preform, so that the same properties are produced there as away from the co-consolidation zone. The lap shear strengths measured between tape laminate and stiffener produced by co-consolidation during thermoforming are lower than those produced by co-consolidation in an autoclave.
The group-4 tri(tert-butyl)cyclopentadienyl trichlorides [Cp'''MCl3] (M = Ti, Zr, Hf), first obtained by Zhou in 1994, were reproduced, crystallized, and structurally characterized. New di- and tri(tert-butyl)cyclopentadienyl zirconium bromides and iodides were also synthesized. Crystals of [Cp''ZrI3] suitable for X-ray diffraction were obtained, from which the structure of the compound was elucidated. In substitution experiments with further ligands, hydrido clusters were obtained. Structural investigations revealed a cluster complex with the formula (Cp''Zr)4(μ-H)8(μ-Cl)2, a tetranuclear zirconium cluster bridged by eight hydrido and two chlorido ligands. Each zirconium atom additionally bears a di(tert-butyl)cyclopentadienyl ligand. While investigating the course of the reaction, a further Zr cluster was found: crystals of tris{di(tert-butyl)cyclopentadienyl-di(μ-hydrido)zirconium} {chlorido-tri(μ-hydrido)aluminate} suitable for X-ray diffraction were obtained. This cluster consists of three zirconium atoms arranged in a triangle, each pair bridged by two hydrido ligands. Each zirconium is connected via a hydrido bridge to an aluminum chloride fragment, and one di(tert-butyl)cyclopentadienyl ligand is coordinated to each zirconium atom. Furthermore, experiments were undertaken to prepare alkyl derivatives of the hitherto unknown parent zirconocene Cp2Zr. For this purpose, zirconium tetrachloride was reduced with n-butyllithium to the dichloride ZrCl2(THF)2. The reduction product was reacted with sodium tetra(isopropyl)cyclopentadienide, sodium tri(tert-butyl)cyclopentadienide, or lithium penta(isopropyl)cyclopentadienide. The results show no unambiguous formation of zirconocenes, but a tri(tert-butyl)cyclopentadienyllithium salt was obtained and structurally characterized.
Reactive absorption with amines is the most important technique for the removal of CO2 from gas streams, e.g. from flue gas, natural gas or off-gas from the cement industry.
In this work a rigorous simulation model for the absorption and desorption of CO2 with an amine-containing solvent is validated using data from pilot plants of various sizes. This model was then coupled with a detailed simulation of a coal-fired power plant. The power generation efficiency drop with CO2 capture was determined and process parameters in the power plant and separation process were optimized. It was shown that the high energy demand of CO2 separation significantly reduces power generation efficiency, which underlines the need for improvements. This can be achieved by better solvents or by advanced process designs. In this work such improved CO2 separation processes are described and evaluated by detailed simulation studies.
In order to develop detailed rigorous simulation models for reactive absorption with novel solvent systems, precise knowledge of the liquid-phase reaction kinetics is necessary. There are well-established techniques for measuring species distributions in equilibrated aqueous amine solutions by NMR spectroscopy. However, the existing NMR techniques cannot be used for monitoring fast reactions in these solutions. Therefore, in this work a novel temperature-controlled micro-reactor NMR probe head was developed which enables studying reaction kinetics with time constants in the range of seconds. On this basis, modern solvent systems for CO2 absorption can be characterized, and the scale-up of separation processes for future plants can be accompanied by rigorous process simulation.
In 2022, the building and transport sectors missed Germany's climate protection targets. Unlike the transport sector, the building sector is characterized by long service lives that stand in the way of rapid technology changes, which is why strategies must be implemented particularly early. In addition, the building stock is marked by high investment costs combined with comparatively small greenhouse gas savings per euro invested. Together, these obstacles make it considerably harder to reach the climate protection targets for the residential building stock.
The aim of this work is to develop a residential building stock model in order to simulate and analyze transformation pathways under varying economic conditions, such as the influence of different CO2 price trajectories and the reinvestment of the CO2 tax in the modernization of buildings.
In a first step, a residential building stock model is developed and applied under the assumption that the economic conditions of the start year persist. For this purpose, important parameters of the building stock are identified and analyzed on the basis of their historical development, and scenarios and forecasts are considered. The result is a set of initial conditions and influencing factors for the further trajectory, which are used for the modeling. In a second step, a methodology is developed to compute modernization rates endogenously under varying economic conditions.
This work presents a model that dynamically accounts for the economic conditions and the coupling principle when simulating full-modernization rates. The results show that full-modernization rates of 2 %/a over longer periods require extreme conditions and are unrealistic. The main obstacles are the need for renovation (coupling principle), decreasing energy-saving potentials of the younger building age classes, and windfall effects under improved subsidies. Since reaching the climate targets by adjusting the CO2 tax alone (even with reinvestment) is not possible within realistic tax levels in the model, a package of economic and legislative measures for reaching the targets is presented instead.
Pervasive human impacts are rapidly changing freshwater biodiversity. Frequently recorded exceedances of regulatory acceptable thresholds by pesticide concentrations suggest that pesticide pollution is a relevant contributor to broad-scale trends in freshwater biodiversity. A more precise pre-release Ecological Risk Assessment (ERA) might increase its protectiveness, consequently reducing the likelihood of unacceptable effects on the environment. European ERA currently neglects possible differences in sensitivity between exposed ecosystems. If the taxonomic composition of assemblages differed systematically among certain types of ecosystems, so might their sensitivity toward pesticides. In that case, a single regulatory threshold would be over- or underprotective.
In this thesis, we evaluate (1) whether the assemblage composition of macroinvertebrates, diatoms, fishes, and aquatic macrophytes differs systematically between the types of a European river typology system, and (2) whether these taxonomic differences engender differences in sensitivity toward pesticides. While a selection of ecoregions is available for Europe, only a single typology system that classifies individual river segments is available at this spatial scale: the Broad River Types (BRT).
In the first two papers of this thesis, we compiled and prepared large databases of macroinvertebrate (paper one), diatom, fish, and aquatic macrophyte (paper two) occurrences throughout Europe to evaluate whether assemblages are more similar within than among BRT types. Additionally, we compared its performance to that of different ecoregion systems. We employed multiple tests to evaluate the performances, two of which were also designed in these studies. All typology systems failed to reach common quality thresholds for the evaluated metrics for most taxa. Nonetheless, performance differed markedly between typology systems and taxa, with the BRT often performing worst. We showed that currently available European freshwater typology systems are not well suited to capture differences in biotic communities, and we suggest several possible improvements.
In the third study, we evaluated whether ecologically meaningful differences in sensitivity exist between BRT types. To this end, we predicted the sensitivity of macroinvertebrate assemblages across Europe toward atrazine, copper, and imidacloprid using a hierarchical species sensitivity distribution model. The predicted assemblage sensitivities differed only marginally between BRT types. The largest difference between median river type sensitivities was a factor of 2.6, which is far below the assessment factor suggested for such models (6), as well as the factor of variation commonly observed between toxicity tests of the same species-compound pair (7.5 for copper). Our results do not support the notion that a type-specific ERA might improve the accuracy of thresholds. However, in addition to the taxonomic composition, the bioavailability of chemicals, the interaction with other stressors, and the sensitivity of a given species might differ between river types.
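The between-type comparison of median sensitivities can be illustrated with a log-normal species sensitivity distribution (SSD), the standard model behind HC5 derivation. The toxicity values below are synthetic illustrations, not data from the thesis, and the simple moment fit stands in for the hierarchical model actually used:

```python
import numpy as np

Z05 = -1.6449  # 5th percentile of the standard normal distribution

def ssd_stats(ec50s):
    """Fit a log-normal species sensitivity distribution by moments and
    return (median sensitivity, HC5), where HC5 is the concentration
    hazardous to 5% of species."""
    logs = np.log10(ec50s)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return 10.0 ** mu, 10.0 ** (mu + Z05 * sigma)

# Synthetic EC50 values (mg/L) for two hypothetical river-type assemblages
type_a = np.array([0.8, 1.5, 3.2, 6.0, 12.0, 25.0])
type_b = np.array([1.6, 3.0, 7.0, 13.0, 26.0, 50.0])

med_a, hc5_a = ssd_stats(type_a)
med_b, hc5_b = ssd_stats(type_b)
ratio = med_b / med_a  # between-type difference in median sensitivity
```

A ratio of about 2 between the two synthetic assemblages would, as in the thesis, fall below both the suggested assessment factor of 6 and the typical between-test variability of 7.5.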
This dissertation describes the implementation of a RAMI 4.0-compliant marketplace for machining. The goal is to define a solution approach in which cross-company process chains for small lot sizes are identified automatically and the manufacturing of an individualized product is realized. The extraction of product information, the manufacturing of an individualized product, and the description of the information in the asset administration shells are validated. Above all, defining a common semantics for the description of capabilities emerges as a challenge for the future; such a semantics would enable matching between proprietary product information and skills.
Weak memory consistency models capture the outcomes of concurrent programs that appear in practice and yet cannot be explained by thread interleavings. Such outcomes pose two major challenges to formal methods. First, establishing that a memory model satisfies its intended properties (e.g., supports a certain compilation scheme) is extremely error-prone: most proposed language models were initially broken and required multiple iterations to achieve soundness. Second, weak memory models make verification of concurrent programs much harder, as a result of which there are no scalable verification techniques beyond a few that target very simple models.
This thesis presents solutions to both of these problems. First, it shows that the relevant metatheory of weak memory models can be effectively decided (sparing years of manual proof efforts), and presents Kater, a tool that can answer metatheoretic queries in a matter of seconds. Second, it presents GenMC, the first (and only) scalable stateless model checker that is parametric in the choice of the memory model, often improving the prior state of the art by orders of magnitude.
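The kind of outcome meant above can be made concrete with the classic store-buffering litmus test: thread 0 runs x:=1; r0:=y and thread 1 runs y:=1; r1:=x. Enumerating all sequentially consistent interleavings, as the sketch below does (this is an illustration of the phenomenon, not of how Kater or GenMC work internally), shows that r0 = r1 = 0 never occurs under interleaving semantics, yet hardware models such as x86-TSO allow it because stores are buffered:

```python
from itertools import permutations

# Store-buffering litmus test events: (thread id, operation).
# Thread 0: write x, then read y.  Thread 1: write y, then read x.
EVENTS = [(0, 'Wx'), (0, 'Ry'), (1, 'Wy'), (1, 'Rx')]

def outcomes_sc():
    """All (r0, r1) outcomes reachable under sequential consistency,
    i.e. over interleavings that respect each thread's program order."""
    results = set()
    for order in permutations(range(4)):
        seq = [EVENTS[i] for i in order]
        if seq.index((0, 'Wx')) > seq.index((0, 'Ry')):
            continue  # violates thread 0's program order
        if seq.index((1, 'Wy')) > seq.index((1, 'Rx')):
            continue  # violates thread 1's program order
        mem, regs = {'x': 0, 'y': 0}, {}
        for tid, op in seq:
            if op[0] == 'W':
                mem[op[1]] = 1
            else:
                regs[tid] = mem[op[1]]
        results.add((regs[0], regs[1]))
    return results

sc = outcomes_sc()  # (0, 0) is not among the SC outcomes
```

Observing (0, 0) on real hardware is exactly an outcome "that cannot be explained by thread interleavings", which is what a weak memory model must account for.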
This thesis outlines the development of thermoplastic-graphite based plate heat exchangers from material screening to operation, including performance evaluation and fouling investigations. Polypropylene and polyphenylene sulfide as matrix and graphite as filler were chosen as feedstock materials, as they possess a low density and excellent corrosion resistance at a comparatively low price.
For the purpose of material screening, custom-made polymer composite plates with a plate thickness of 1-2 mm and a filler content of up to 80 wt.% were investigated for their thermal and mechanical suitability with regard to their use in plate heat exchangers. Three-point flexural tests show that the loading of polypropylene with graphite leads to mechanical properties that allow the composites to be applied as corrugated heat exchanger plates. The simulated maximum overpressure is greater than 7 bar, depending on the wall thickness. The thermal conductivity of the composites was increased by a factor of 12.5 compared to pure polypropylene, resulting in thermal conductivities of up to 2.74 W/mK.
The fabrication of the developed corrugated heat exchanger plates, with a thickness between 0.85 mm and 2.5 mm and a heat transfer surface area of 11.13·10⁻³ m², was carried out via processes that can be automated, namely extrusion and embossing. With the manufactured plate heat exchanger, overall heat transfer coefficients were determined over a wide range of operating conditions (Re = 200 - 1600), which are used to validate a plate heat exchanger model and consequently to compare the composites with conventional materials. The embossing, which appears to result in a shift of the internal graphite structure, leads to a further improvement of the thermal conductivity by 7-20 %, in addition to the impact of the filler. With low plate thicknesses, overall heat transfer coefficients of up to 1850 W/m²K could be obtained. Considering the low density of the manufactured thermal plates, this ensures comparable performance with metallic materials over a wide range of process conditions (Re = 200 - 4000).
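The plate's contribution to the overall heat transfer coefficient can be gauged with the usual series-resistance model. The convective film coefficients below are assumed round numbers for illustration, not measurements from this work; the plate thickness and conductivity are the values quoted above:

```python
# Overall heat transfer coefficient of a thin composite plate from the
# series-resistance model: 1/U = 1/h_hot + t/k + 1/h_cold.
t = 0.85e-3              # plate thickness in m (thinnest plate)
k = 2.74                 # composite thermal conductivity in W/(m K)
h_hot = h_cold = 3000.0  # assumed water-side film coefficients in W/(m^2 K)

R_wall = t / k                                        # conduction resistance
U = 1.0 / (1.0 / h_hot + R_wall + 1.0 / h_cold)       # overall coefficient
# With these assumptions the wall carries roughly a third of the total
# resistance, so raising k (e.g. via embossing) or thinning the plate
# feeds through almost directly into U.
```

Under these assumptions U comes out near the order of magnitude reported above, which illustrates why the combination of high filler loading and sub-millimeter plates is what makes the polymer composite competitive with metals.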
The fouling kinetics and deposited amounts of calcium sulfate and calcium carbonate, respectively, on different polypropylene/graphite composites in a flat plate heat exchanger and the developed chevron-type plate heat exchanger are determined and compared to the reference material stainless steel. For a straightforward evaluation of the fouling susceptibility of the materials, the formation of bubbles on the materials is either accounted for by optical imaging or excluded by a degasser. The results are interpreted using the surface free energy and roughness of the surfaces. They show that if bubble formation is avoided, the polymer composites have a very low fouling tendency compared to stainless steel, which is attributed to their low surface free energies of approximately 25 mN/m. This is particularly the case when turbulent flows are present, as in plate heat exchangers or when sandblasted specimens are used. Sandblasting also increases heat transfer compared to untreated samples by increasing thermal conductivity and creating local turbulence. Depending on the test conditions, the fouling resistance formed on the stainless steel surface is an order of magnitude greater than on the flat plate polymer composites. In addition, the fouling layers adhere only weakly to the composites, which indicates easy cleaning in place after the formation of deposits. The fouling investigations in the plate heat exchanger reveal a sensitivity to calcium sulfate fouling; however, CFD simulations indicate that this is due to flow maldistribution and not to the polymer composite materials themselves.
Zeolites have been used for decades as catalysts in the chemical industry and as ion exchangers in detergents. Zeolites can also serve as support materials for metals applied by ion exchange or impregnation. A novel field of application for zeolites is their use as an antimicrobial filler in plastics. For this purpose, the zeolites must first be loaded with an antimicrobially active metal such as silver. The filled plastic can then be processed into filaments for 3D printing. A possible field of application for the resulting composite materials is dentistry, in the form of crowns or three-unit bridges. The aim of this PhD project was to modify the zeolites Beta and ZSM-5 with silver in order to use the resulting materials as antimicrobial components in a polymer composite. The two zeolites were loaded with silver ions by ion exchange. In addition to the reaction temperature and the counterion in the zeolite framework, the experimental procedure of the ion exchange (duration and number of exchange cycles) was varied to achieve the highest possible silver loading. By combining different characterization methods such as powder X-ray diffraction (PXRD) and solid-state NMR spectroscopy (MAS NMR), the preservation of the zeolite structure after ion exchange was confirmed. The amount of silver in the zeolite framework was determined by atomic absorption spectroscopy (AAS). Since zeolite ZSM-5 is cheaper to purchase than zeolite Beta, further work was carried out with the silver-exchanged zeolite AgZSM-5. In the next step, zeolite AgZSM-5 was modified by various methods to ensure a temporally controllable release of the silver ions from the zeolite framework.
For the surface passivation by silylation, a weakening of the acid sites was demonstrated by temperature-programmed desorption of ammonia (NH3-TPD). In addition, zeolite AgZSM-5 was modified by impregnation with calcium or magnesium, and by reduction of the silver in a H2 stream at different temperatures. For the reduction of the silver in the H2 stream, the influence of the reduction temperature on the crystallite size of the silver was shown.
Machine Learning (ML) is expected to become an integral part of future mobile networks due to its capacity for solving complex problems. During inference, ML algorithms extract the hidden knowledge of their input data, which in many scenarios is delivered to them through wireless links. Transmission of such massive amounts of input data can impose a huge burden on the mobile network. On the other hand, ML algorithms are known to tolerate certain levels of distortion in their input components without the quality of their predictions being affected. The conventional approaches therefore waste radio resources, since they target an exact reconstruction of the transmitted data, i.e., the input of the ML algorithms. In this thesis, we propose a novel relevance-based framework that focuses on the quality of the final ML outputs instead of such syntax-based reconstruction of transmitted inputs. To this end, we quantify the semantics or relevancy of input components in terms of the bit allocation aspect of data compression, where a higher tolerance for distortion implies less relevancy. A lower relevance level is translated into the allocation of fewer radio resources, e.g., bandwidth. The introduced formulation provides the foundations for efficiently supplying ML models with their required data in the inference phase while employing wireless resources efficiently.
In this dissertation, a generic relevance-based framework utilizing the Kullback-Leibler Divergence (KLD) is developed that is applicable to many realistic scenarios. The system model under study contains multiple sources transmitting correlated multivariate input components of an ML algorithm. The ML model is treated as a black box that is trained and has fixed parameters while operating in the inference phase. Our proposed bit allocation accounts for the rate-distortion tradeoff and is therefore easily adjustable for application to other problems. An extended version of the proposed bit allocation strategy is introduced for signaling overhead reduction, in which the relevancy level of each input attribute changes instantaneously. In a further extension, to take the effect of dynamic channel states into account, a resource allocation approach for ML-based centralized control systems is proposed. The novel quality-of-service metric takes the outputs of ML algorithms into consideration and, in combination with the designed greedy algorithm, provides significantly improved end-to-end performance for a network of cart inverted pendulums.
The introduced relevance-based framework is comprehensively investigated by considering various case studies, real and synthetic data, regression and classification, different estimators for the KLD, and various ML models and codebook designs. Furthermore, the reliability of the proposed solution is explored in the presence of packet drops, indicating the robustness of the relevance-based compression. In all of the simulations, the relevance-based solutions deliver the best outcome in terms of the carefully chosen key performance indicators. In most of them, substantial gains are also achieved compared to the conventional techniques, motivating further research on the subject.
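The rate-distortion-motivated bit allocation described above can be illustrated with a toy greedy scheme. The exponential distortion decay per bit and the relevance weights below are illustrative assumptions borrowed from high-rate quantization theory, not the thesis's actual KLD formulation:

```python
# Greedy relevance-based bit allocation sketch.
# Each input component i has a relevance weight w[i]; its contribution to an
# end-to-end distortion proxy is modeled as w[i] * 2**(-2*b[i]) (the classic
# high-rate quantization decay). More relevant components receive more bits.

def allocate_bits(weights, total_bits):
    bits = [0] * len(weights)
    for _ in range(total_bits):
        # Marginal distortion reduction of granting one extra bit to component i
        gain = [w * (2**(-2*b) - 2**(-2*(b+1))) for w, b in zip(weights, bits)]
        i = max(range(len(weights)), key=gain.__getitem__)
        bits[i] += 1
    return bits

# Three input components: the first is most relevant to the ML output
weights = [8.0, 2.0, 0.5]
print(allocate_bits(weights, total_bits=12))  # → [5, 4, 3]
```

The most relevant component receives the most bits, while low-relevance components are compressed more aggressively, which is the core idea behind translating relevance into radio resource allocation.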
Cyber-physical production systems (CPPS) enable the manufacture of customer-specific products in small lot sizes by exploiting recent developments in information and communication technology. In the material flow within a CPPS, however, the risk of physically induced disturbances is increased due to the differing physical properties of the conveyed goods and dynamic process assignments. This thesis investigates the use of physics simulation as the basis of a digital twin of conveyor systems in order to address these challenges. The goal is to reduce the negative influence of disturbances, and thus to increase the performance of the production system, by simulating the physical phenomena of individual material flow processes. To this end, the digital twin is first developed conceptually, comprising an analysis of the systems involved, a definition of requirements, a specification of the structural and procedural architecture, and a formalization of the individual functional components. The digital twin is then implemented in software, connected to an exemplary conveyor system, and put into operation as a prototype. The results demonstrate the suitability of physics simulation for the described purpose and the effectiveness of its use at the production system level: material flow processes can be executed at accelerated speed, monitored, and, in the event of disturbances, subsequently investigated by simulation.
This dissertation evaluated and further developed a tool for creating fully digital, internally differentiated worksheets for regular chemistry lessons, a tool that shows potential for fostering motivation and interest. Relationships to the usability of the application and to cognitive load could be established. The results thus support existing findings in the field of learning with digital media. The integration of digital tools into the learning process is justified: they show a motivation-fostering potential for students on the one hand and practical advantages for teachers on the other, since information can be presented in a variety of ways, for example for differentiation. With HyperDocSystems, internally differentiated digital worksheets can be created and edited. These so-called HyperDocs can be enriched by teachers with learning aids in various forms of representation and can be completed by learners entirely digitally in the browser with a stylus or keyboard.
In a quasi-experimental field study, the use of these novel HyperDocs was analyzed for the first time with respect to intrinsic motivation and interest, usability, and the use of the multimedia differentiation offering. The study took place over four lessons of regular chemistry instruction at the lower secondary level (Gymnasium / comprehensive school) and upper secondary level (Gymnasium). Cognitive load and the learners' tablet-related competencies were also taken into account. The results suggest a motivation-fostering potential of HyperDocs compared with analog worksheets. Differences between the sexes emerged, which can partly be attributed to cognitive load and which depend on the learners' age (lower vs. upper secondary level). In this context, the learning aids were frequently used out of interest and curiosity. Students made particular use of learning aids in the form of text and images. The frequency of use of the differentiation offering, however, does not directly reveal the learners' motivation or cognitive load. Usability is an important criterion in the use of digital learning programs, since, among other things, a relationship to the variables of intrinsic motivation and to cognitive load when learning with HyperDocs can be established. The usability, however, depends on the time of measurement. HyperDocs exhibit high usability and can therefore be used without restriction at the lower and upper secondary levels.
Esse aut non esse - Affirmation and Subversion of Intersex Existences in School
(2024)
On 10 October 2017, the Federal Constitutional Court in Karlsruhe ruled that a so-called third gender should be introduced for entries in the birth register. This was intended to enable intersex people to have their gender identity registered and thus to participate in social life. The court based its reasoning on the right of personality protected by the Basic Law. The regulation in force at the time was incompatible with constitutional requirements insofar as it offered no third option for registering a gender besides "female" or "male". The legislature was thus required to create a new regulation by the end of 2018 that includes a designation for a third gender: "divers".
Schools, as important social institutions, are now called upon to act if the guiding principles of diversity in education, and thus in society, are to be upheld. Schools constitute a workplace, a living environment, and a learning environment for many generations and therefore always serve as a social role model, with diversity having become an ever-present imperative. As an avant-garde, schools must therefore lead the way precisely on social questions and at the same time take responsibility for the development and resolution of important ethical questions, without giving up the transmission of traditional values and norms as one of their central functions. Performing this demanding balancing act remains a constant challenge for school development.
In the school context, dealing with diversity means, above all, not only mutual recognition and respect but also that people's coexistence is enriched by opening up alternative ways of perceiving, thinking, and acting. The ruling of the Federal Constitutional Court is therefore addressed to schools in a particular way.
But how can this path be taken successfully and sustainably?
A look at the numerous publications on the topic of gender and school, and at the few developments in recent years, makes it apparent that the German school system is systemically and structurally unprepared to implement the Federal Constitutional Court's decision of 10 October 2017 (1 BvR 2019/16).
From this, the research questions of this doctoral thesis can be formulated:
- How does school relate to the discourse of the third gender?
- From the perspective of school actors, what are the conditions for successfully making the third gender visible in schools?
By means of empirical investigations, this thesis aims to clarify in detail which factors contribute to the success or failure of implementing a third gender, and under which preconditions school as an organization is prepared at all for making intersex children and adolescents visible.
In this thesis, the CASOCI program[1], whose implementation was already the subject of the dissertation of Dr. Tilmann Bodenstein and which is under continuous further development in the groups of Fink (Karlsruhe Institute of Technology) and van Wüllen, was parallelized in a hybrid MPI/OpenMP scheme. It was then used to investigate the magnetic properties of the pentanuclear [Ni(tmphen)2]3[Os(CN)6]2 complex (tmphen = 3,4,7,8-tetramethyl-1,10-phenanthroline). This complex had been studied experimentally by χT measurements in the group of Kim R. Dunbar[2,3]. By diamagnetic substitution, variants of this complex with only one or two active centers were generated. CASOCI calculations were performed on these variants, and g-tensors, exchange couplings, D-tensors, and tensors for the anisotropic exchange were determined. With the help of these tensors, a χT curve could be calculated that shows good agreement with the one from Dunbar's work. It was shown that the anisotropic exchange is decisive for the shape of the curve, while the single-ion zero-field splitting plays practically no role.
[1] T. Bodenstein, A. Heimermann, K. Fink, C. van Wüllen, Chem. Phys. Chem. 2022, 23, e202100648.
[2] M. G. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, Chem. Commun. 2008, 5752–5754.
[3] A. V. Palii, O. S. Reu, S. M. Ostrovsky, S. I. Klokishner, B. S. Tsukerblat, M. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, J. Phys. Chem. A 2009, 113, 6886–6890.
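To illustrate how exchange coupling shapes a χT curve, the following sketch uses the textbook Bleaney-Bowers expression for an isotropic S = 1/2 dimer. This is a deliberate simplification (the thesis treats a five-center complex with anisotropic exchange via CASOCI-derived tensors), and the coupling constant below is arbitrary:

```python
import math

N_MUB2_OVER_K = 0.37513  # N_A * mu_B^2 / k_B in cm^3 K mol^-1 (cgs-emu)

def chi_T_dimer(T, J_kelvin, g=2.0):
    """chiT (cm^3 K mol^-1) of an isotropic S = 1/2 dimer (Bleaney-Bowers).

    Convention H = -J S1.S2: J > 0 ferromagnetic (triplet ground state);
    J is given in Kelvin (J / k_B). For J = 0 the Curie value of two
    uncoupled spins (~0.75 for g = 2) is recovered.
    """
    return 2 * g**2 * N_MUB2_OVER_K / (3 + math.exp(-J_kelvin / T))

# Arbitrary antiferromagnetic coupling (J/k_B = -20 K): chiT drops on cooling
for T in (300, 100, 20, 5):
    print(T, round(chi_T_dimer(T, J_kelvin=-20.0), 3))
```

For antiferromagnetic coupling, χT falls toward zero at low temperature as the singlet ground state is depopulated of magnetic character, which is the qualitative fingerprint read off experimental χT curves.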
The ability to sense and respond to different environmental conditions allows living organisms to adapt quickly to their surroundings. In order to use light as a source of information, plants, fungi, and bacteria employ phytochromes. With their ability to detect far-red and red light, phytochromes constitute a major photoreceptor family. Bacterial phytochromes (BphPs) are composed of an apo-phytochrome and an open-chain tetrapyrrole, the chromophore biliverdin IXα, which mediates the photosensory properties. Depending on the photoexcitation and the quality of the incident light, phytochromes interconvert between two photoconvertible parental states: the red light-absorbing Pr-form and the far-red light-absorbing Pfr-form. In contrast to prototypical phytochromes, with a thermally stable Pr ground state, there is a group of bacterial phytochromes that exhibit dark reversion from the Pr- to the Pfr-form. These special proteins are classified as bathy phytochromes and occur across different classes of bacteria. Moreover, the majority of BphPs act as sensor histidine kinases in two-component regulatory systems. The light-triggered conformational change results in the autophosphorylation of the histidine kinase domain and the transphosphorylation of an associated response regulator, inducing a cellular response. Spectroscopic analysis utilizing homologously produced protein identified PaBphP, the histidine kinase of the human opportunistic pathogen Pseudomonas aeruginosa, as a bathy phytochrome. Intensive research on PaBphP revealed evidence that the interconversion between its physiologically active and inactive states is influenced by light and darkness rather than by far-red and red light. In order to conduct a comprehensive systematic analysis, further bacterial phytochromes were investigated regarding their biochemical and spectroscopic behavior, as well as their autokinase activity.
In addition to PaBphP, this work employs the bathy phytochromes AtBphP2, AvBphP2, and XccBphP from the non-photosynthetic plant pathogens Agrobacterium tumefaciens, Allorhizobium vitis, and Xanthomonas campestris, as well as RtBphP2 from the soil bacterium Ramlibacter tataouinensis. All investigated BphPs displayed bathy-typical behavior by developing a distinct Pr-form under far-red light conditions and undergoing dark reversion to their Pfr-form. Different Pr/Pfr-fractions can be identified among the BphP populations under varying natural light conditions, including red or blue light. The Pr-form is considered the active form due to the autophosphorylation activity of the heterologously produced phytochromes when exposed to light. In the absence of light, associated with the development of the Pfr-form, the phytochromes exhibited disabled or strongly reduced autokinase activity. Additionally, light-triggered phosphorylation was observed for the response regulator PaAlgB, which is linked to the phytochrome of P. aeruginosa. This study presents the first comparative investigation of numerous bathy phytochromes under identical conditions. The work addressed a gap in the literature by providing a quantitative correlation between kinase activity and calculated Pr/Pfr-fractions obtained from spectroscopic measurements. The biological role of PaBphP was partially elucidated through phenotypic characterization employing P. aeruginosa mutant and overexpression strains. The generation of a functional model was possible by considering the postulated functions of the other phytochromes found in the literature. In summary, bathy BphPs are hypothesized to modulate bacterial virulence according to the circadian day/night rhythm of their hosts. The pathogens are believed to reduce their virulence during daylight hours to evade immune and defense reactions, while increasing their virulence during the evening and night, enabling more effective infections.
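Pr/Pfr-fractions of the kind correlated with kinase activity here are commonly obtained by linear unmixing of absorbance spectra against pure-form references. Since the exact procedure is not given in the abstract, the following two-wavelength sketch with invented extinction values is illustrative only:

```python
def pr_pfr_fractions(a_red, a_farred, eps):
    """Estimate Pr/Pfr fractions from absorbances at two wavelengths.

    a_red, a_farred: measured absorbances at a red and a far-red wavelength
    eps: {'Pr': (e_red, e_farred), 'Pfr': (e_red, e_farred)} reference
         extinction coefficients of the pure forms (arbitrary units)
    """
    (p1, p2), (f1, f2) = eps['Pr'], eps['Pfr']
    det = p1 * f2 - p2 * f1                     # 2x2 system determinant
    c_pr = (a_red * f2 - a_farred * f1) / det   # Cramer's rule
    c_pfr = (p1 * a_farred - p2 * a_red) / det
    total = c_pr + c_pfr
    return c_pr / total, c_pfr / total

# Hypothetical reference spectra: Pr absorbs mainly red, Pfr mainly far-red
eps = {'Pr': (1.0, 0.2), 'Pfr': (0.3, 0.9)}
# A mixture of 70% Pr and 30% Pfr would yield these absorbances:
a_red, a_farred = 0.7 * 1.0 + 0.3 * 0.3, 0.7 * 0.2 + 0.3 * 0.9
print(pr_pfr_fractions(a_red, a_farred, eps))  # → approximately (0.7, 0.3)
```

In practice the unmixing is done over full spectra by least squares rather than at two wavelengths, but the principle of decomposing a measured spectrum into pure-form contributions is the same.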
In contrast to motorbike tyres, whose friction during cornering has to be as high as possible, the desired effect in skiing is the opposite: low friction. The reduced friction between skis and ice or snow is made possible by a film of meltwater that forms as a function of friction power. To support this friction mechanism, skis are waxed with different waxes in both hobby and professional sports, depending on a variety of conditions. Waxes with fluorine additives show the best performance in most conditions, corresponding to the lowest friction coefficients. However, for health and environmental reasons, the International Ski Federation (FIS) and the International Biathlon Union (IBU) have imposed a complete ban on fluorine additives at all FIS races and IBU events with effect from the 2023/2024 season. As a result, wax manufacturers are required to develop and extensively test fluorine-free waxes in order to remain competitive.
Traditional tests take place either indoors or outdoors in the field. Athletes complete a particular distance, their time is measured, and they also note the impressions that the prepared skis convey. The time and cost involved in numerous individual tests is a drawback, and the presence of only a single type of snow in the hall or field, air resistance, changing environmental conditions, and variations in the athlete's movement limit the depth of information. To reduce the time-consuming procedure of indoor and outdoor tests, a tribometer offers a solution in which friction measurements can be performed on a laboratory scale. Thanks to consistently adjustable conditions such as temperature, speed, and the load applied to the friction partners, scientific studies can be carried out with reduced disturbance variables. At present, the tribometric results of laboratory instruments for predicting friction values do not translate into practical application. The reasons for this are the compromises that have to be made in the design of the tribometers.
This work reviews the existing tribometers with regard to their operating conditions and confirms the need for a scientific method of characterising different waxes. In order to fill the gap between friction results obtained in laboratory tests, which cannot yet be used in the selection of waxes, and traditional field tests, this thesis is dedicated to the methodical design and manufacture of a linear tribometer capable of measuring friction between a ski base made of UHMWPE (ultra-high-molecular-weight polyethylene) and an ice sample. The tribometer provides, for the first time, results that allow differentiation between modified waxes with regard to their running performance. Friction-influencing factors such as speed, temperature and the surface pressure below the ski base can be adjusted within the range relevant for ski sports. Furthermore, the laboratory-scale test stand, which is located in a cold chamber, is capable of accommodating not only typical ski jumping base lengths and widths, but also cross-country and alpine ski bases. To verify the tribometer, a ski base is treated with three waxes of different fluorine content and measured comparatively. With a minimum of 95% confidence, the friction differences between the tested waxes as a function of their fluorine content are validated at the end of this work.
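The statistical procedure behind the 95% confidence statement is not specified in the abstract. One common choice for comparing the mean friction coefficients of two waxes is Welch's t-test, sketched below with invented measurement data and a normal approximation to the p-value:

```python
# Hedged sketch of a two-sample comparison of friction coefficients:
# Welch's t statistic with a normal approximation to the two-sided p-value
# (adequate for the moderate sample sizes assumed here).
from statistics import NormalDist, mean, variance

def welch_test(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    se2 = variance(sample_a) / na + variance(sample_b) / nb
    t = (mean(sample_a) - mean(sample_b)) / se2 ** 0.5
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # two-sided, normal approximation
    return t, p

# Hypothetical friction coefficients from repeated runs of two waxes
wax_fluor = [0.041, 0.043, 0.040, 0.042, 0.039, 0.041, 0.040, 0.042]
wax_free  = [0.048, 0.050, 0.047, 0.049, 0.051, 0.048, 0.049, 0.050]
t, p = welch_test(wax_fluor, wax_free)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> difference at 95% confidence
```

With real data and small sample sizes one would use the t-distribution with Welch-Satterthwaite degrees of freedom (e.g. via `scipy.stats.ttest_ind(equal_var=False)`) instead of the normal approximation.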
Functional structures as well as materials provided by nature have always been a great source of inspiration for new technologies. Adapting and improving the discovered concepts, however, demands a detailed understanding of their working principles, while employing natural materials for fabrication tasks requires suitable functionalization and modification.
In this thesis, the white scales of the beetle Cyphochilus are examined in order to reveal unknown aspects of their light transport properties. In addition, the monomer of the material they are made of is utilized for 3D microfabrication.
White beetle scales have been fascinating scientists for more than a decade because they display brilliant whiteness despite their small thickness and the low refractive index contrast. Their optical properties arise from highly efficient light scattering within the disordered intra-scale network structure.
To gain a better understanding of the scattering properties, several previous studies have investigated the light transport and its connection to the structural anisotropy with the aid of diffusion theory. While this framework allows the light scattering to be related to macroscopic transport properties, an accurate determination of the effective refractive index of the structure is required. Due to its simplicity, the Maxwell-Garnett mixing rule is frequently used for this task, although its constraint to particle and feature sizes much smaller than the wavelength is clearly violated for the scales.
To provide a correct calculation of the effective refractive index, here, finite-difference time-domain simulations are used to systematically examine the impact of size effects on the effective refractive index. Deploying this simulation approach, the Maxwell-Garnett mixing rule is shown to break down for large particles. In contrast, it is found that a quadratic polynomial function describes the effective refractive index in close approximation, while its coefficients can be obtained from an empirical linear function. As a result, a simple mixing rule is reported that unambiguously surpasses classical mixing rules when composite media containing large feature sizes are considered. This is important not only for the accurate description of white beetle scales, but also for other turbid media, such as biological tissues in opto-biomedical diagnostics.
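For reference, the classical Maxwell-Garnett rule that the simulations show to break down for large features can be evaluated as follows; the refractive index of chitin (~1.56) and the filling fractions are illustrative values, not the thesis's fitted coefficients:

```python
# Classical Maxwell-Garnett mixing rule sketch, for comparison purposes only.
# The thesis's improved quadratic mixing rule and its empirically determined
# coefficients are not reproduced here.

def maxwell_garnett_n(n_inclusion, n_matrix, fill_fraction):
    """Effective refractive index of small spherical inclusions in a matrix."""
    ei, em, f = n_inclusion**2, n_matrix**2, fill_fraction
    eff = em * (ei * (1 + 2*f) + 2*em * (1 - f)) / (ei * (1 - f) + em * (2 + f))
    return eff ** 0.5

# Illustrative case: a chitin network (n ~ 1.56) in air at several
# filling fractions
for f in (0.3, 0.4, 0.5):
    print(f, round(maxwell_garnett_n(1.56, 1.0, f), 3))
```

The rule correctly reproduces the limits f = 0 (pure matrix) and f = 1 (pure inclusion material), but, as the simulations in this work show, its predictions in between become unreliable once feature sizes approach the wavelength.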
Describing light transport by means of diffusion theory moreover neglects any coherent effects, such as interference. Hence, their impact on the generation of brilliant whiteness is currently unknown. To shed light on their role, spatially and time-resolved light scattering spectromicroscopy is applied to investigate the scales and a model structure of them based on disordered Bragg stacks. For both structures the occurrence of weakly localized photonic modes, i.e., closed scattering loops, is observed, which is further verified in accompanying simulations. As shown in this thesis, leakage from these random photonic modes contributes at least 20% to the overall reflected light. This reveals the importance of coherent effects for a complete description of the underlying light transport properties; an aspect that is entirely missing in the purely diffusive transport presumed so far. Identifying the importance of weak localization for the generation of brilliant whiteness paves the way to further enhance the design of efficient optical scattering media, an issue that has recently drawn great attention.
Unlike their plant-based counterparts, rigid carbohydrates, such as chitin, are currently unavailable for 3D microfabrication via direct laser writing, despite their great significance in the animal kingdom for the construction of functional microstructures. To close this gap, the monomeric unit of chitin, N-acetyl-D-glucosamine, is here functionalized to serve as a photo-crosslinkable monomer in a non-hydrogel photoresist. Since all previous photoresists based on animal carbohydrates are in the form of hydrogel formulations, a new group of photoresists is established for direct laser writing.
Moreover, it is shown that the sensitization effect, previously used only in the context of UV curing, can be successfully transferred to direct laser writing to increase the maximum writing speed. This effect is based on the beneficial combination of two photoinitiators.
Here, one photoinitiator is an efficient crosslinking agent for the monomer used, but a rather poor two-photon absorber. The other photoinitiator (called the sensitizer) conversely possesses a much higher two-photon absorption coefficient at the applied wavelength but is not well suited as a crosslinking agent. In combination, the energy absorbed by the sensitizer is passed to the photoinitiator, resulting in the formation of the radicals needed to start the polymerization. As this greatly increases the rate at which the photoinitiator is radicalized, resists containing both a photoinitiator and a sensitizer are shown to outperform resists containing only one of the components. Deploying the sensitization effect in direct laser writing therefore offers a simple way to individually tune the crosslinking ability and the two-photon absorption properties by combining existing compounds, compared to the costly chemical synthesis of novel, customized photoinitiators.
In today's working world, organizations face the challenge of continuously adapting to change. Demographic change and rising numbers of work absences due to psychological strain bring the well-being and satisfaction of employees in the workplace into focus. The employee survey, as an instrument of organizational development, is one way of shaping change processes so that economic and, at the same time, humanistic goals can be achieved. When implementing employee surveys, the follow-up processes are what matter most, since it is here that conclusions are drawn from the survey results and translated into actions. A look at practice, however, shows that expectations of follow-up processes, and thus of employee surveys, are often disappointed, both on the part of companies and on the part of employees.
Previous research has demonstrated the positive effect of employee surveys and follow-up processes in general, but it remains unclear how individual components of a follow-up process, and above all the quality of their execution, take effect. This is the first starting point of the present work. In addition, the role of leaders in follow-up processes is examined, since among the many considerations and studies of which aspects influence change processes, the special role of leaders often stands out. Leaders are expected to show behavior that goes beyond a classic rational-functional understanding of leadership and encourages employees to engage openly and actively in change processes. One approach to achieving this is Positive Leadership. Here, leaders display behaviors that emphasize the meaningfulness of work, foster positive relationships with employees, show recognition and appreciation, practice a strengths orientation, ensure a positive working climate, involve positive communication, support employees in their development, and, in particular, enable participation and empowerment. Although the concept of Positive Leadership is enjoying ever greater popularity, there is as yet no clear conception of the construct and no established measurement instrument. Moreover, the concept has not yet been applied in the context of change processes in general or of the follow-up processes of employee surveys in particular.
The main goal of the present work is to investigate Positive Leadership in the context of the follow-up processes of an employee survey. To this end, four studies were conducted. In Study 1, semi-structured expert interviews (N = 22) explored which steps a follow-up process of an employee survey comprises and how high quality in carrying out these steps can be recognized. In Study 2, a measurement instrument for Positive Leadership was developed and validated in three sub-studies (N1 = 194, N2 = 201, N3 = 124).
Study 3, a questionnaire study with a sample of employees (N = 1302) and leaders (N = 266), demonstrated the importance of the individual steps of the follow-up process and of the quality of their execution. Furthermore, the influence of Positive Leadership on the quality of the follow-up process, as well as on work engagement and job satisfaction, was demonstrated, for employees and for leaders themselves alike. Both adherence to the follow-up process and its quality, as well as Positive Leadership, also affected (in part indirectly, mediated by satisfaction with the follow-up process) the change in work engagement and job satisfaction between two employee surveys. In addition, using a sample of 242 leader-employee dyads, the effects of discrepancy and congruence in the assessments of Positive Leadership or of the follow-up process were demonstrated. Finally, it was examined to what extent the attribution of successes and failures in the follow-up process is influenced by Positive Leadership.
Study 4 confirmed, in an experimental design (N = 420) using video vignettes, the positive effects of high follow-up process quality and of Positive Leadership on work engagement and job satisfaction. Moreover, the previous findings were extended by statements about interactions of the investigated factors. It emerged that positive leadership behavior can buffer the effects of poor quality in the follow-up process or of low adherence to its steps. High adherence to the steps of the follow-up process, in turn, only had a positive effect on satisfaction with the follow-up process when the quality of the steps carried out was high. Study 4 also demonstrated the effect of assumed differences between employees and leaders in satisfaction with the follow-up process on the intention to participate in a subsequent employee survey, as well as on job satisfaction and work engagement. Finally, the effects of Positive Leadership on the attribution of successes and failures in the follow-up process were analyzed again, and further effects of this attribution on the intention to participate in subsequent employee surveys were examined.
The studies presented in this dissertation are discussed theoretically and methodologically. On the basis of the results, practical recommendations are derived for improved handling of the follow-up processes of employee surveys and of Positive Leadership.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source of bioactive compounds, several of which also show antibiotic activity. In view of the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance, and unexploited sources should be explored. This thesis therefore initially dealt with the identification of productive strains, especially within the group of terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Amongst these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the sole identification of a productive cyanobacterium is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of the prerequisites for compound synthesis. As a result, nitrogen limitation was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. By this means, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a) cells and a high temperature tolerance up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted to work with cyanobacteria, were established.
First, a method for quick and easy in vivo vitality estimation of phototrophic cells and, second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped advance the main objectives of this work, the first by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when mixotrophically cultivated with fructose or glucose. Interestingly, this effect was even further enhanced when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed clearly decreased cell growth. This effect might be extremely useful for a quick and economic preparation of precultures.
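The growth-rate comparison between photo- and mixotrophic cultures rests on the standard exponential-growth relation μ = ln(X2/X1)/(t2 − t1); the OD readings in the following sketch are invented for illustration:

```python
import math

def specific_growth_rate(x1, x2, t1, t2):
    """Specific growth rate (per day) between two biomass measurements
    taken during exponential growth."""
    return math.log(x2 / x1) / (t2 - t1)

def doubling_time(mu):
    """Doubling time (days) for a given specific growth rate."""
    return math.log(2) / mu

# Hypothetical OD750 readings on day 2 and day 6 of exponential growth
mu_photo = specific_growth_rate(0.20, 0.55, 2, 6)   # phototrophic culture
mu_mixo  = specific_growth_rate(0.20, 1.10, 2, 6)   # glucose-supplemented
print(f"photo: mu = {mu_photo:.3f}/d, mixo: mu = {mu_mixo:.3f}/d")
```

A higher μ for the supplemented culture, as in this invented example, is the quantitative signature of the mixotrophic growth advantage reported above.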
Plant-specific factors affecting short-range attraction and oviposition of European grapevine moths
(2024)
The spread of pests and pathogens is increasingly intensified by climate change and globalization. Two of the most serious insect pests threatening European viticulture are the European grape berry moth, Eupoecilia ambiguella (Hübner), and the European grapevine moth, Lobesia botrana (Denis & Schiffermüller). Larvae feed on the fructiferous organs of grapevine, Vitis vinifera, resulting in high yield and quality losses. Under integrated pest management, insecticide applications are only reasonable when other control strategies become ineffective. To support the development of a novel decision support system for the application of insecticides, the aim of this thesis was to decipher the plant-specific factors that affect short-range attraction and oviposition of L. botrana and E. ambiguella.
The focus was set on the visual, volatile, tactile and gustatory stimuli provided by the host plant after settlement. The use of artificial surfaces as a model plant showed that oviposition of both species is affected by the color, shape and texture of the oviposition site. To explain the susceptibility of certain grapevine cultivars and phenological stages of the berries to egg infestation, we analysed and compared the chemical composition of the epicuticular waxes of the berry surface as well as the volatile organic compounds emitted by the berries. It emerged that the attractiveness of the wax extracts decreased as the berries ripened, indicating a preference for earlier phenological stages of the berries for oviposition. In addition, grapevine cultivars exhibited variations in their volatile composition. The principal components perceived by the females' antennae could not explain the differentiation between cultivars, suggesting that volatiles do not trigger orientation toward particular cultivars. Furthermore, a method was developed to measure the real-time behavioural response of female moths to volatiles. The setup allowed quantification of orientation toward a volatile source as well as of movements of the antennae and ovipositor, which could be linked to the olfactory and gustatory perception of volatiles during the evaluation of suitable host plants for oviposition. In addition, the risk posed by potential alternative host plants in the vicinity of the vineyard was investigated. This confirmed that L. botrana in particular prefers the stimuli provided by some plants to those of grapevine. Overall, the results suggest that during oviposition, volatiles emitted by the plants and the composition of the plant surface are the most important factors for host plant differentiation.
VR is a steadily growing field of research that expands the perspectives and possibilities of human-computer interaction (Hassan & Hossain, 2022). Even before its active use in everyday schooling, studies had demonstrated a multitude of positive effects of VR use on the learning process (Chavez & Bayona, 2018). So-called immersive learning thus represents a key area of the digital transformation of education. To use VR in school lessons, however, learning environments are needed that are adapted to the local conditions and everyday needs of practical classroom teaching. Such design principles do not yet exist in the education sector (Johnson-Glenberg, 2018). This thesis is concerned with deriving principles from theory, combining them with design components, and on this basis designing and investigating a VR learning environment. To ensure practical relevance during development and investigation, a design-based research approach was chosen. In successive micro-cycles, the design components were evaluated and design principles derived from them. The learning materials were designed across subjects for chemistry and geography and evaluated under practical conditions with participants from four tenth-grade classes of a Gymnasium in Rhineland-Palatinate. The carbon cycle was chosen as the learning content and located in the respective subject curricula. The main focus was on chemistry, topic area eleven, "Substances in the focus of environment and climate". As the virtual location, a replica of a section of the out-of-school learning site "Reallabor Queichland" was chosen. The components were divided into a total of seven micro-cycles, numbered zero to six. Micro-cycle zero is used to familiarize the participants with the VR system and to mitigate the novelty effect.
Micro-cycle one evaluates the base area of the VR learning environment with a focus on the realism of the environment. Micro-cycle two addresses the range of movement to be chosen within VR. Micro-cycle three examines the effect of realistic background sounds. Micro-cycles four to six consist of three learning stations with different interaction options: realistic interactions, non-realistic interactions, and a mixture of both. The scales spatial presence, current motivation, realism, perceived usability, perceived learning effectiveness, and the VR scale were recorded. The data were analyzed with ANOVAs and path analyses as well as an overarching analysis at the end of the study. The design of the components produced a very high sense of spatial presence and very high perceived realism. In the learning stations, participants rated the perceived learning effectiveness and usability, as well as the relationship between 3D models, their manipulability in VR, and the associated effect on learning effectiveness, as very high. In total, twelve design principles could be generated from the data. These can be used to create new VR learning environments for practical use in school lessons. Theoretical assumptions for respecifying the process model of spatial presence were formulated and tested against the collected data. The focus was on adapting the model to modern VR headsets and cognitively demanding VR learning environments, and the adaptation yielded very good model-fit values. Future studies should test these assumptions with larger samples.
Based on normative regulations, current research projects, and their findings (Kuhlmann et al. 2008 and 2012), experimental and numerical investigations were carried out on large anchor plates with more than the currently normatively permitted number of headed studs. The aim of the investigations was to develop an approach for the load-bearing capacity of large flexible anchor plates with headed studs. By varying the governing parameters, such as the anchor plate thickness, the headed stud length, the degree of supplementary reinforcement, and the condition of the concrete, a component model could be verified on the basis of the experimental investigations. Possible failure mechanisms, such as steel failure of the headed studs in tension, yielding of the anchor plate due to T-stub formation, concrete cone failure, and steel failure of the supplementary reinforcement, could be captured with these parameters. Furthermore, for the failure mode 'concrete cone failure', the surface reinforcement emerged as an additional parameter in the post-peak load range.
The model developed on the basis of DIN EN 1993-1-8, together with the consideration of component stiffnesses, enables the design of rigid and flexible anchor plates. By including the stiffnesses of individual components, the overall stiffness of a connection configuration can be calculated in order to obtain ductile load-bearing behavior. In addition to various possible yield zones on the anchor plate resulting from different geometries and arrangements of the fasteners, the model accounts for concrete cone failure as a function of possible additional supplementary reinforcement.
The model described in this work for the tension side of rigid and flexible anchor plates with more fasteners than currently permitted by the standard could be verified by means of experimental and numerical tests. The plastic design approach shows good agreement with the experimental investigations and the numerical parameter studies across all test series.
In a second step, the effects of short-term relaxation of the concrete due to restraint on large anchor plates with headed studs were investigated. With the spring model developed following the component method of DIN EN 1993-1-8, time-dependent deformations of concrete due to creep and shrinkage can be taken into account. Using this model, verified by experimental and numerical tests, it is possible to investigate the effects of restraint on anchor plates.
This thesis investigates the shear behavior of precast wall elements made of lightweight aggregate concrete with an open structure (LAC).
Lightweight concrete with an open, no-fines structure provides good thermal insulation and is resource-efficient, which is why its use in facade elements continues to gain importance. However, insufficient knowledge of the shear behavior of self-supporting wall components with and without shear reinforcement leads to normative restrictions on the detailing and production of the precast elements.
The aim of this work is to develop a design model for self-supporting wall elements made of LAC that, with suitable reinforcement constructions, both enables flawless production in the usual roller compaction process and ensures the load-bearing capacity of the elements.
First, experimental investigations on the compaction of elements made of LAC are carried out. Building on this, pull-out tests with a modified beam-end test provide information about the bond and anchorage behavior of reinforcing bars in LAC.
Based on the results of the pull-out tests, large-scale tests on wall components are designed and carried out, allowing the assessment of the shear capacity of components with and without shear reinforcement. From the experimental investigations, a design proposal based on Eurocode 2 and DIN EN 1520:2011-06 is developed, which among other things takes into account the shear slenderness of the elements and the anchorage of the shear reinforcement as influencing parameters.
Parts of this work were developed within the research project "Innovative Konstruktions- und Bemessungsregeln zur Optimierung der Querkraft- und Torsionstragfähigkeit von freitragenden Wandplatten aus LAC". This is a cooperation project between the Technische Universität Kaiserslautern (Prof. Dr.-Ing. Matthias Pahn, Prof. Dr.-Ing. Jürgen Schnell), the Hochschule Koblenz (Prof. Dr.-Ing. Ralf Zeitler), and the Bundesverband Leichtbeton e.V. (Dieter Heller). The project was funded by the Federal Ministry for Economic Affairs and Energy.
Because their constituent materials are arranged in the system according to their respective advantageous properties, composite beams generally represent very economical constructions. In addition to their enormous stiffness and load-bearing capacity compared to loosely stacked cross-sections, and the considerable savings in self-weight compared to solid constructions, composite beams allow fast construction progress owing to high degrees of prefabrication.
Since the shear joint between beam and concrete slab generally cannot be made infinitely rigid, relative displacements occur between the partial cross-sections under load, which can decisively influence the load-bearing and deformation behavior of the overall system. For a detailed representation of the real load-deformation behavior, the flexibility of the shear joint must therefore be taken into account. Since the economy of steel-concrete composite beams, the "classical composite beams", depends decisively on exploiting plastic load-bearing reserves in the ultimate limit state, this work investigates calculation methods that consider both the nonlinear material behavior of the partial cross-sections and the nonlinear load-deformation behavior of the shear joint.
The focus of this work is on statically indeterminate systems, since the external internal forces and the stiffness distribution along the beam length influence each other. Specifically, modeling is carried out with the segment-lamella method, which builds on the differential equation of elastic composite action and, via numerous elastic load steps with stepwise reduction of the stiffness of individual cross-section regions, can represent the nonlinear load-bearing behavior with good approximation. In addition, the truss models common in timber construction are transferred to steel-concrete composite beams by taking nonlinear material behavior into account.
To calibrate the models and to gain additional insights into the load-bearing and deformation behavior of composite beams, especially in the shear joint, three large-scale, statically indeterminate steel-concrete composite beams with different degrees of shear connection are investigated experimentally. A special feature is that, in deviation from the current code provisions, partial shear connection is present in both the span and support regions. In addition, three tests on large-scale, statically determinate timber-concrete composite beams are carried out.
Comparing the test results with the model calculations shows that the global load and deformation behavior of composite beams can be represented with very good approximation. Both model calculations are therefore suitable as the basis of a deformation-oriented design, in which the load-bearing capacity of the system is reached when limit strains of the partial cross-sections or the limit deformation in the shear joint are attained. This design approach has the advantage that, on the one hand, both nonlinear material behavior and the flexibility of the shear joint can be taken into account and, on the other hand, the deformation state can be determined directly.
Streams and their adjacent terrestrial ecosystems are tightly linked via the flux of organisms and matter. Emergent aquatic insects can be an important food source for riparian predators like bats, birds, spiders, and lizards. Information about the quality, quantity and phenology of emergent aquatic insects is necessary to estimate how riparian predators can benefit from them as a food source. Though intensive agriculture is a globally dominant land use, little is known about how agricultural land use affects the quantity, quality and phenology of emergent aquatic insects. Typically, emergent aquatic insects contain more long-chain polyunsaturated fatty acids (PUFA) than terrestrial insects. Long-chain PUFA in particular have been shown to enhance the growth and immune response of spiders and birds.
In chapter 2, the PUFA transfer to spiders and the effects of food sources differing in their PUFA profiles on spiders were examined in outdoor microcosms under environmentally realistic conditions (i.e., normal weather conditions, the possibility to construct orb webs as in their natural habitat). The environmental context determined how PUFA can affect the spiders. For instance, besides the PUFA profiles of food sources, environmental variables like temperature were important for the growth and body condition of spiders.
In the third chapter, the effect of agricultural land use on the quantity (in terms of biomass and abundance), phenology and composition of emergent aquatic insects was assessed. Previous studies were limited to single seasons or single time points, which hampered determining annual biomass export and shifts in phenology. Therefore, emergent aquatic insects were sampled continuously over the primary emergence period of one year, and environmental variables associated with agricultural land use were monitored. Total biomass and abundance were higher (61–68% and 79–86%, respectively) in agricultural than in forested sites. In addition, a turnover of emergent aquatic insect assemblages and a shift in the phenology of aquatic insects were identified. In agricultural sites, 71% of aquatic insect families emerged earlier than in forested sites. Pesticide toxicity was associated with differences in the biomass and abundance of aquatic insect orders. During the same experiment, spiders were sampled in spring, summer, and autumn. Additionally, the fatty acid (FA) content of the spiders and emergent aquatic insects was determined. These results are presented in chapter 4. The FA export via emergent aquatic insects was higher (26–29%) in forested than in agricultural sites, indicating a reduced quality of aquatic insects as a food source for riparian predators in agricultural sites. The FA profiles of mayflies, flies and caddisflies differed between land-use types, but those of spiders did not. Shading and pool habitats were the most important environmental variables for the FA profiles, though environmental variables explained only little variation in FA profiles. Overall, the quantity, quality and phenology of emergent aquatic insects differed between land-use types, which can affect population dynamics in the adjacent terrestrial ecosystem. Our results can be used in modeling food-web dynamics or meta-ecosystems to improve understanding of linked ecosystems.
Biodiversity has declined by approximately 70% in the last 50 years for vertebrate and invertebrate species. This loss in biodiversity is strongly connected with anthropogenic activities, such as agricultural intensification and pollution. Currently, pesticides are needed to secure the growing global food demand, although they are recognized as one of the main drivers of biodiversity loss, mainly in agricultural areas.
In the European Union, pesticides are regulated within the risk assessment framework, which aims to protect both the environment and human health from undesirable effects. The effects on non-target organisms are mostly assessed following a “one-size-fits-all” approach, focused on sensitive species tests. However, it has been recognized that the current methodology can be improved in order to minimize undesirable effects. Aiming to provide valuable data to inform future risk assessment, this thesis focused on two terrestrial organism groups that play beneficial roles, especially in agroecosystems: earthworms and spiders.
Although the earthworm Eisenia fetida is included in pesticide regulation, its use as the only earthworm representative may lead to uncertainties for the risk assessment. Therefore, we collected ecotoxicological data on field-captured earthworm species via acute exposure to imidacloprid and copper. In addition, we investigated the relationships between earthworm chemical sensitivity, biological traits and habitat preferences, and potential links with their ecosystem services (Chapter 2). We found that earthworms sampled from extremely acidic soils were less sensitive to copper than earthworms from neutral soils. Moreover, anecic and endogeic earthworms were more sensitive to imidacloprid than epigeic earthworms.
Spiders have, thus far, been understudied in regulatory risk assessment in comparison to other non-target arthropods. Thus, we aimed to collect ecotoxicological data of spider species sampled in different European climates via acute exposure to lambda-cyhalothrin. Moreover, we explored relationships between spider chemical sensitivity, phylogeny, biological traits and habitat preferences, as well as potential links with their ecosystem services (Chapter 3). Spiders showed a high sensitivity to lambda-cyhalothrin. Furthermore, our results showed that spider sensitivity varies depending on climate. We confirmed this relationship by incorporating different rearing and test temperatures into the toxicity testing protocol (Chapter 4).
The outcomes of this thesis contribute to informing pesticide regulatory practices, allowing for improved protection and conservation of terrestrial organism groups and the ecosystem services they provide. The consideration of ecological traits, habitat variability and related plasticity, key species, and ecological network structure could improve the risk assessment framework and minimize the effects of pesticides and other stressors at the ecosystem level.
The vast majority of mitochondrial proteins are synthesized in the cytosol. These proteins carry characteristic targeting motifs within their sequence, which allow the binding of chaperones that in turn usher precursors to the mitochondrial surface for import and assembly. Though our understanding of these early reactions is still incomplete, recent efforts have shown that the ER surface can facilitate the import of mitochondrial proteins (ER-SURF) with the help of the J-protein Djp1. Close cooperation of organelles in the form of membrane contact sites is crucial for cellular function. The aim of my work was to investigate whether ER-mitochondria contact sites are critical for the transfer of proteins from the ER to mitochondria.
Several contact sites between the ER and mitochondria have been characterized in S. cerevisiae. One is the ER-mitochondria encounter structure (ERMES); another is partly formed by Tom70. Owing to the high propensity of ERMES mutants to acquire suppressor mutations, I employed a knockdown approach to deplete this contact site. Using an inducible CRISPR interference (CRISPRi) system, I could rapidly and efficiently deplete Mdm34, a component of ERMES. I could show that depletion of Mdm34 had a synthetic negative effect in combination with a deletion of TOM70. Loss of both contact sites led to a strong decrease of many mitochondrial proteins in the whole-cell proteome. Using affinity purification of ER and mitochondria in conjunction with mass spectrometry, I could demonstrate that a specific set of mitochondrial proteins is enriched on the ER upon loss of Mdm34 and Tom70; these were mainly inner membrane proteins, e.g., Oxa1 and Cox5A. Moreover, I was able to validate that the import of these proteins was hampered upon loss of both contact sites. In vivo, the biogenesis of Oxa1 was impeded upon loss of either Mdm34 or Tom70 and strongly impaired if both were lost. Analysis of the maximum hydrophobicity of inner membrane proteins in the ER-SURF set revealed, on average, a significantly higher peak compared to other inner membrane proteins. Finally, an in vitro import assay showed that deleting or swapping the transmembrane domain of Cox5A rendered its import contact-site independent or contact-site dependent, respectively.
In this study I was able to demonstrate the involvement of membrane contact sites in ER-SURF and identify a list of putative clients. Furthermore, I could show that hydrophobicity of the transmembrane segment of inner membrane proteins is one determinant for ER-SURF dependence.
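The maximum-hydrophobicity analysis mentioned above is commonly performed as a sliding-window hydropathy scan. The sketch below uses the standard Kyte-Doolittle scale; the window size of 19 residues and the toy sequences are illustrative assumptions, not the parameters used in this work.

```python
# Sliding-window hydropathy scan (Kyte-Doolittle scale) locating the
# maximum-hydrophobicity peak of a protein sequence.
KD = {
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}

def max_hydrophobicity(seq: str, window: int = 19) -> float:
    """Return the highest mean Kyte-Doolittle score over any window."""
    if len(seq) < window:
        raise ValueError("sequence shorter than window")
    scores = [KD[aa] for aa in seq]
    running = sum(scores[:window])   # score of the first window
    best = running
    for i in range(window, len(scores)):
        running += scores[i] - scores[i - window]   # slide by one residue
        best = max(best, running)
    return best / window

# A poly-leucine stretch embedded in a polar context scores a strong peak,
# a purely polar sequence does not:
tm_like = "SSSS" + "L" * 19 + "SSSS"
assert max_hydrophobicity(tm_like) > max_hydrophobicity("S" * 27)
```

A transmembrane-helix candidate is typically flagged when this peak exceeds a scale-specific threshold; the comparison of ER-SURF clients against other inner membrane proteins amounts to comparing these per-protein peak values.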
Chemical pollution is a ubiquitous stressor affecting streams and their linkages to riparian forests. Contaminants act by altering the emergence of aquatic insects from streams. Emergent insects can also take up contaminants and transfer them into the terrestrial ecosystem. Emergent insects are an important source of prey for riparian insectivores, and changes in the emergence flux or in the contamination of insects can affect the riparian food web. However, little is known about the implications of emerging contaminants such as agricultural pesticides and wastewater effluent for the terrestrial food web. In this dissertation, I address possible consequences of agricultural and wastewater stream pollution for riparian insectivores, namely bats and spiders.
The contribution of aquatic prey to riparian spider diets has mainly been determined by stable isotope analysis, but DNA metabarcoding, a highly sensitive method of identifying consumed prey using DNA, promises to further disentangle changes in these trophic interactions. In Chapter 2, we tested a bleaching decontamination protocol to determine the suitability of using metabarcoding on spiders contaminated during sampling. We confirmed the applicability of metabarcoding, but also found that the wolf spiders (Lycosidae) collected in riparian areas did not appear to rely strongly on aquatic prey. This informed our choice of Tetragnatha montana, which is highly reliant on aquatic prey, for the field study in Chapter 3.
We then conducted three field studies. Chapters 3 and 4 evaluate indirect trophic effects of chemical stream pollution on spiders and bats, respectively. Chapter 5 quantifies the accumulation of pesticides from the stream to riparian spiders via emergent insects. We found that riparian bats foraged more and that spiders consumed more Chironomidae at more polluted sites, indicating that there was no overall decrease in emergence due to chemical pollution. We also found that certain pesticides accumulated in emergent insects and riparian spiders. Together, this suggests that chemical stream pollution resulted in an increased dietary exposure of riparian insectivores to contaminants, rather than a decrease in prey availability.
These results demonstrate the role of streams and aquatic-terrestrial linkages in propagating stressors across ecosystem boundaries. They also show the benefit of using sensitive methods like DNA metabarcoding to unveil trophic effects of chemical pollution. Future studies should focus on quantifying the risk of contaminant uptake and potential effects for riparian bats, as well as considering how the observed drivers change in different contamination scenarios and ecosystems. This knowledge is important to protect the functionality of the riparian ecosystem and its inhabitants.
Since the turn of the millennium, character research has been on the rise among psychological researchers. In 2004, the field of positive psychology introduced the Values in Action (VIA) framework, encompassing 24 theoretically justified and empirically supported character strengths intended for the measurement of good character. Their assignment to six "core virtues" according to Linnaean principles links the 24 character strengths to philosophical and religious theories of virtue. However, the originally developed proprietary VIA Inventory of Strengths (VIA-IS) for measuring the 24 character strengths and its public-domain counterpart, the IPIP-VIA, are based on a relatively crude scale development approach. Yet the VIA-IS and the IPIP-VIA dominated (applied) character research for a long time. While researchers recently refined the proprietary VIA instruments, no character strength scales developed according to the state of the art are available in the public domain, thwarting progress in character research. Furthermore, most factor-analytic studies on the hierarchical structure of the 24 VIA character strengths yielded inconsistent results regarding the number and nature of global VIA constructs due to differing methodological standards and strategies. Only recently has a growing body of research consistently suggested that three global constructs span the VIA trait space. Consequently, there is only one proprietary inventory for measuring global VIA constructs and none available in the public domain. Against this backdrop, this dissertation addressed three methodological challenges in character assessment, taking an open-science approach, a (cross-country) replicability approach, and an integrative approach (i.e., integrating the results into the larger picture of personality science, particularly linking the VIA character traits to the Big Five and value traits).
Study 1 revised the English-language IPIP-VIA and concurrently translated/adapted it to German to yield character strength scales especially suitable for cross-cultural large-scale assessment: The 96-item IPIP-VIA-R measures each character strength with four balanced-keyed, content-valid, and cross-culturally adaptable items building scales that showed satisfactory reliability, (partial) scalar measurement invariance across Germany and the UK, and evidence of construct and criterion validity. Study 2 applied the IPIP-VIA-R and a rigorous factor-analytic approach to revisit the hierarchical structure of the 24 VIA character strengths, revealing three well-interpretable global “core strengths” that were replicable across Germany and the UK: positivity, dependability, and mastery. Study 3 applied an Ant Colony Optimization algorithm to select an optimal 18-item subset of the IPIP-VIA-R to measure each core strength with a balanced-keyed, content-valid six-item scale that again showed satisfactory reliability, scalar measurement invariance across Germany and the UK, and evidence of construct and criterion validity.
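The Ant Colony Optimization item selection used in Study 3 follows a general scheme that can be sketched as follows. Everything concrete here is an illustrative assumption: the fitness function (coefficient alpha on synthetic responses), the pheromone update rule, and all parameter values stand in for the study's actual, more elaborate configuration (which also weighed content validity and model fit).

```python
import random

def cronbach_alpha(data, items):
    """Coefficient alpha for the selected item columns (rows = respondents)."""
    k = len(items)
    cols = [[row[i] for row in data] for i in items]
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(c) for c in cols)
    totals = [sum(row[i] for i in items) for row in data]
    return k / (k - 1) * (1 - item_vars / var(totals))

def aco_select(data, n_items, subset_size, ants=20, iters=50,
               evaporation=0.1, seed=1):
    """Pheromone-guided search for an item subset maximizing the fitness."""
    rng = random.Random(seed)
    pheromone = [1.0] * n_items          # selection desirability per item
    best_subset, best_fit = None, float('-inf')
    for _ in range(iters):
        for _ in range(ants):
            # Each ant samples a subset without replacement,
            # weighted by pheromone.
            remaining = list(range(n_items))
            subset = []
            for _ in range(subset_size):
                weights = [pheromone[i] for i in remaining]
                pick = rng.choices(remaining, weights=weights, k=1)[0]
                remaining.remove(pick)
                subset.append(pick)
            fit = cronbach_alpha(data, subset)
            if fit > best_fit:
                best_subset, best_fit = subset, fit
        # Evaporate, then reinforce the items of the best subset so far.
        pheromone = [(1 - evaporation) * p for p in pheromone]
        for i in best_subset:
            pheromone[i] += evaporation * max(best_fit, 0.0)
    return sorted(best_subset), best_fit
```

Run on responses where only some items share a latent trait, the loop concentrates pheromone on the internally consistent items; in the real application, the fitness would combine reliability with validity constraints rather than alpha alone.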
Taken as a whole, the dissertation advanced the measurement of VIA character traits in the public domain, the understanding of the VIA character trait space (especially its intersection with Big Five personality and basic human values), and the establishment of the VIA trait hierarchy. To address its research questions framed as methodological challenges, the dissertation introduced and elaborated methodological approaches that researchers might adapt to other individual differences constructs. Even though there remain challenges to be taken up in future work (e.g., adapting the IPIP-VIA-R character and core strength scales for use in a more diverse set of cultures; multi-informant assessment), researchers and survey programs can readily apply the character scales developed as part of this dissertation.
One of the main tasks of molecular biology is understanding the mechanisms of molecular biological processes. This entails constructing regulatory networks and thus finding key regulators. To do so, it is important to have a representation of the data that can reveal distinct patterns within large groups. On one side, there is abundant experimentally determined kinetic information about changes in molecular abundance in the observed system. On the other side, there is evidence, documented over the years, of the involvement of molecules in different biological processes. Both sources of information have their drawbacks: experimental data reflect only a fleeting molecular state of each individual organism and are therefore often highly variable and noisy; functional groups are generalizations of the known roles of molecules in biological processes and may therefore be incomplete and only partially relevant to specific experimental conditions and individual organisms. Our goal is to obtain an overview of the experimentally observed molecules and extract knowledge from both sources, avoiding the constraints of noise and generalization bias. The resulting optimal representation of the experimental data would then help pinpoint potential regulators.
The proposed method is called the Signature Topology (ST) approach, as it uses the functional topology as the prior knowledge source and creates a specific signature for the given experimental data. The ST approach is based on a knowledge- and data-driven machine learning algorithm implemented via dynamic programming. Drawing on both prior knowledge and learning from the data, the proposed approach combines supervised and unsupervised machine learning. The resulting network structure copes with data abundance, avoids an over-detailed description that may lead to misinterpretation, and is able to pick out elements with minor behavior patterns.
The method is tested on artificial data and applied to real-world mass-spectrometry proteome data and NGS transcriptome data of Chlamydomonas reinhardtii. The proposed approach helps to identify potential regulatory genes whose roles are not explicitly provided in the functional ontology used. Moreover, it achieves a successful reduction in data complexity while preserving all individual molecular information reported in the literature and stored in the functional ontology. When different experimental data are analyzed with the same ontology, the resulting networks are uniform and can therefore be compared. This makes it possible to compare across a great variety of experimental conditions, from different organisms to different system levels.
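As a toy illustration of the idea of picking out elements with minor behavior patterns, the sketch below correlates each molecule's expression profile with the mean profile of its ontology group and flags weakly correlated members. This is a simplified stand-in for the ST approach, not the thesis algorithm, and all gene names, group names, and values are synthetic.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def flag_minor_patterns(profiles, groups, r_min=0.5):
    """Correlate each molecule's time profile with its ontology group's
    mean profile; members below r_min follow a minor behaviour pattern
    and are candidates for closer inspection as potential regulators."""
    flagged = []
    for group, members in groups.items():
        rows = [profiles[m] for m in members]
        mean = [sum(col) / len(col) for col in zip(*rows)]
        for m, row in zip(members, rows):
            if pearson(row, mean) < r_min:
                flagged.append((group, m))
    return flagged

# synthetic time-course profiles; geneC runs against its group's trend
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [1.1, 2.1, 2.9, 4.2],
    "geneC": [4.0, 3.0, 2.0, 1.0],
}
groups = {"photosynthesis": ["geneA", "geneB", "geneC"]}
hits = flag_minor_patterns(profiles, groups)
```

Here geneC would be flagged because its profile anticorrelates with the group mean, mimicking how an element with a minor behavior pattern stands out against its annotated functional group.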
Industrial robots are vital in automation technology, but their limitations become evident in applications requiring high path accuracy. This research focuses on improving the dynamic path accuracy of industrial robots by integrating additional sensor technology and employing intelligent feed-forward control. Specifically, the inclusion of secondary encoder sensors enables explicit measurement and compensation of robot gear deformations. Three types of model-based feed-forward controllers, namely physics-based, data-based, and hybrid, are developed to effectively counteract dynamic effects.
Firstly, a physics-based feed-forward control method is proposed, explicitly modeling joint deformations, hydraulic weight compensation, and other relevant features. Nonlinear friction parameters are accurately identified using a globally optimized design of experiments. The resulting physics-based model is fully continuously differentiable, facilitating its transformation into a code-optimized flatness-based feed-forward control.
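To illustrate why continuous differentiability matters for a flatness-based feed-forward, the sketch below shows a minimal single-joint inverse-dynamics feed-forward in which the discontinuous sign() of Coulomb friction is replaced by a smooth tanh. The structure and all parameter values are illustrative assumptions, not the identified robot model of the thesis.

```python
from math import tanh, cos

def feedforward_torque(q, qd, qdd, J=0.8, m_g=12.0, f_c=1.5, f_v=0.3, eps=0.05):
    """Illustrative inverse-dynamics feed-forward for one robot joint.
    tanh(qd/eps) replaces the discontinuous sign() of Coulomb friction,
    so the model stays continuously differentiable, as required for a
    flatness-based feed-forward. All parameter values are made up.
      q, qd, qdd: joint position [rad], velocity, acceleration
    """
    inertia = J * qdd                            # rigid-body term
    gravity = m_g * cos(q)                       # gravity load
    friction = f_c * tanh(qd / eps) + f_v * qd   # smooth Coulomb + viscous
    return inertia + gravity + friction
```

At standstill only the gravity term remains, while any motion adds the smooth friction contribution; in the thesis the corresponding model additionally covers joint deformations and hydraulic weight compensation.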
Secondly, a data-based feed-forward control approach is introduced, leveraging a continuous-time neural network. The continuous-time approach demonstrates enhanced model generalization capabilities even with limited data. Furthermore, a time domain normalization method is introduced, significantly improving numerical properties by concurrently normalizing measurement timelines, robot states, and state derivatives. Building on previous work, a method ensuring input-to-state stability and global asymptotic stability is presented, employing a Lyapunov function. Model stability is enforced already during training using constrained optimization techniques. Moreover, the data-based methods are evaluated on public benchmarks, extending their applicability beyond the field of robotics.
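The idea behind concurrent time domain normalization can be sketched as follows: if time is mapped to tau = (t - t0)/T and the state to x_hat = (x - mu)/sigma, the chain rule requires the state derivative to be rescaled by T/sigma so that the normalized triple stays mutually consistent. This is a minimal illustration of the concept, not the thesis implementation.

```python
def normalize_trajectory(t, x, dx):
    """Jointly rescale time, state, and state derivative so a
    continuous-time network sees well-conditioned inputs. With
    tau = (t - t0)/T and x_hat = (x - mu)/sigma, the chain rule gives
    dx_hat/dtau = (T / sigma) * dx/dt. Illustrative sketch only."""
    t0, T = t[0], t[-1] - t[0]
    mu = sum(x) / len(x)
    sigma = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5 or 1.0
    tau = [(ti - t0) / T for ti in t]
    x_hat = [(v - mu) / sigma for v in x]
    dx_hat = [T / sigma * d for d in dx]   # chain-rule rescaling
    return tau, x_hat, dx_hat

# a short synthetic trajectory with constant slope 2
tau, x_hat, dx_hat = normalize_trajectory(
    [0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [2.0, 2.0, 2.0])
```

The consistency check is that the finite-difference slope of the normalized state equals the normalized derivative, which is exactly what the joint rescaling guarantees.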
Both the physics-based and data-based models are combined into a hybrid model. Comparative analysis of the three models reveals that the continuous-time neural network yields the highest model accuracy, while the physics-based model delivers the best safety properties. The effectiveness of all three models is experimentally validated using an industrial robot.
The present thesis reports on studies of atomically precise, size-selected tantalum
cluster ions \(Ta_n^±\) under cryogenic conditions in an FT-ICR mass spectrometer with respect to surface adsorbate interactions at the fundamental level, focusing on \(N_2\) and \(H_2\) adsorption and activation. The wealth of results presented here stems from systematic studies that have revealed valuable kinetic, spectroscopic, and quantum chemical information, which together paint a comprehensive and detailed picture of the elementary adsorption steps and mechanisms.
The \(N_2\) and \(H_2\) adsorption processes on \(Ta_n^+\) clusters exhibit dependencies on cluster size n and on adsorbate load. For \(N_2\) adsorption, there is evidence for spontaneous \(N_2\) activation and cleavage by \(Ta_2^+\) - \(Ta_4^+\), while it appears to be suppressed by \(Ta_5^+\) - \(Ta_8^+\). The activation and cleavage of \(N_2\) molecules proceed across surmountable barriers and along intricate multidimensional reaction paths.
Underlying reaction processes and the intermediates involved are elucidated. Two different processes are characteristic of \(H_2\) adsorption: fast adsorption without competing desorption at low \(H_2\) loadings, indicating dissociative adsorption, followed by slow adsorption accompanied by multiple desorption reactions at high \(H_2\) loadings, indicating molecular \(H_2\) adsorption. The threshold is the completion of the first adsorbate shell. The \(N_2\) adsorption study of \(Ta_n^-\) clusters revealed that the \(N_2\) adsorption ability of anionic tantalum clusters depends strongly on cluster size n. The cluster size n = 9 is the minimum size for \(N_2\) adsorption onto \(Ta_n^-\) clusters to yield stable and detectable cluster adsorbate species \([Ta_n(N_2)_m]^-\).
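Kinetic information of this kind is typically extracted from pseudo-first-order fits of the bare-cluster signal decay under constant N2 partial pressure. The sketch below performs such a fit on synthetic data; it illustrates the general evaluation scheme, not the actual data treatment of the thesis.

```python
from math import log, exp

def pseudo_first_order_k(times, intensities):
    """Least-squares fit of ln(I) = ln(I0) - k*t; returns k.
    Under constant N2 partial pressure, the bare-cluster signal of an
    ion stored in the ICR cell decays pseudo-first order. The data
    below are synthetic, not measured values."""
    n = len(times)
    y = [log(i) for i in intensities]
    tm = sum(times) / n
    ym = sum(y) / n
    slope = (sum((t - tm) * (v - ym) for t, v in zip(times, y))
             / sum((t - tm) ** 2 for t in times))
    return -slope

t = [0.0, 1.0, 2.0, 3.0]           # trapping time / s
I = [exp(-0.4 * ti) for ti in t]   # synthetic decay, k = 0.4 s^-1
k = pseudo_first_order_k(t, I)
```

Comparing such rate constants across cluster sizes n is what reveals size-dependent adsorption behaviour like the n = 9 onset for \(Ta_n^-\).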
The booming global market for nanomaterials in the last few decades has led to the inevitable emission of these materials into aquatic environments; hence, understanding their physical, chemical, and biological transformations has become a major concern for environmental scientists. Despite a great deal of effort to understand the mobility, fate, and risk of, e.g., TiO2 nanoparticles, it remains unclear whether results obtained under lab-controlled conditions can be generalized to nanoparticles actually released into aquatic environments, since the complex dynamics of environmental conditions are not completely reproducible under controlled conditions.
In the present study, we proposed a new approach for exposing TiO2 nanoparticles to the environmental conditions of natural surface waters by using dialysis membranes as passive reactors. These reactors rely on the permeability of the membrane to the dissolved matter of surface waters, while the TiO2 nanoparticles themselves cannot pass through the membrane. The systems thus benefit from the fact that the complexity and temporal variability of most environmental parameters of the surface waters are reproduced inside the reactors, while colloidal and particulate interferences remain excluded. Furthermore, no significant reduction in pore size, i.e. no membrane fouling, was observed in the dialysis bags after exposure to surface waters, which validates the efficiency of the system.
Taking advantage of these reactors to expose nanoparticles to surface waters, we investigated which physicochemical parameters of the surface waters govern the formation of natural coatings on nanoparticles. Dialysis bags were used to expose TiO2 nanoparticles, in situ, to ten different surface waters in the spring and summer of 2019. Owing to the complexity of the natural dissolved matter of the surface waters as well as its low natural concentrations, we needed a combination of analytical techniques and multivariate data analysis to investigate the coatings. The initial findings were similar to those of lab-controlled exposure studies in the literature, identifying pH, electrical conductivity, and Ca2+/Mg2+ concentration as the three most important parameters of surface waters controlling the formation of coatings. Nonetheless, we came across a phenomenon that had been overlooked under lab-controlled conditions: natural coatings are composed not only of organics (DOM: dissolved organic matter) but also of inorganics (carbonate), which implies that realistic coatings are more complex than previously described.
The second part of this thesis focused on the interactions of more realistic nanoparticles (TiO2 nanoparticles extracted from 11 sunscreens) with DOM. Using ToF-SIMS combined with high-dimensional data analysis, we tried to find a general DOM sorption pattern among TiO2 nanoparticles, since such a pattern could ultimately open a way to assess the fate of (more) realistic nanoparticles in aquatic environments. Contrary to our expectations, the results showed a unique sorption pattern for each sunscreen, controlled by its composition, implying that the sorption pattern of each sunscreen has to be investigated individually. In the next step, we used a random forest to extract the most important fragments of the DOM sorbed onto each sunscreen, followed by an effort to assign these important masses to chemical fragments.
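To illustrate the fragment-ranking step on synthetic data, the sketch below ranks ToF-SIMS fragment masses by their absolute correlation with a sorption response. The thesis used random-forest importances; plain correlation is a deliberately simple, dependency-free stand-in for the same ranking idea, and all masses, labels, and values here are invented.

```python
from math import sqrt

def rank_fragments(spectra, response):
    """Rank ToF-SIMS fragment masses by |Pearson r| with the measured
    DOM sorption response, most relevant fragment first. Stand-in for
    random-forest feature importances; data are synthetic."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    scores = {m: abs(pearson(col, response)) for m, col in spectra.items()}
    return sorted(scores, key=scores.get, reverse=True)

# fragment mass -> intensity per sample (synthetic, illustrative labels)
spectra = {
    "m/z 26 (CN-)":  [0.1, 0.2, 0.3, 0.4],   # tracks the response
    "m/z 42 (CNO-)": [0.5, 0.4, 0.9, 0.1],   # weakly related
}
response = [1.0, 2.1, 2.9, 4.2]              # sorbed DOM per sample
ranking = rank_fragments(spectra, response)
```

In the thesis workflow, the top-ranked masses from the random forest were then tentatively assigned to chemical fragments.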
To provide a comprehensive understanding of the interactions of released n-TiO2 in aquatic environments, future studies will expand our coating research to different types of TiO2 nanoparticles, such as particles extracted from paint, with reaction media (surface waters) covering a wide range of water parameters representative of various ecosystems. Using state-of-the-art techniques as well as multivariate data analysis, we will aim for a model describing the sorption of dissolved matter of surface waters onto nanoparticles. Such studies can eventually lead to a better understanding of the fate of released nanoparticles under natural conditions.
The massive use of chemicals by humans is increasing pollution of the world’s ecosystems. Yet, knowledge about exposure and effects of chemicals in real-world ecosystems remains limited. Prediction of chemical effects in the context of ecotoxicological research and chemical regulation continues to focus on organism- or population-level responses established under simplified conditions while aiming to protect the functioning of ecosystems. A unified, comprehensive framework for the prediction of chemical effects in real-world ecosystems is still lacking. A major limitation of ecotoxicological studies considered in predictive modelling is that they rarely consider spatial dynamics (e.g. gene flow or species dispersal) as relevant processes influencing the trajectory of populations or communities, respectively. For instance, the spatial propagation of pesticide effects from polluted to least impacted sites has been predicted in several modelling studies but has not yet been characterised in the field.
The thesis starts in Chapter 1 with a brief introduction to chemical pollution in ecosystems, chemical effect prediction in ecotoxicology, and pesticides in freshwater ecosystems, then outlines the main objectives of the thesis. Subsequently, Chapter 2 presents a conceptual study about the current prediction of chemical effects in ecotoxicology and potential future avenues to improve the ecological relevance of effect predictions by addressing the integration of different levels of biological organisation (termed biological levels). The study shows that approaches and tools that currently contribute to the prediction of chemical effects can be attributed to three idealised perspectives: the suborganismal, organismal and ecological perspective. The perspectives focus on different biological levels and are associated with distinct scientific concepts and communities. They complement each other, so theoretical and empirical links between them may enhance prediction by capturing the entire phenomenon of chemical effects, from chemical uptake to ecosystem effects. Complex experimental studies accounting for eco-evolutionary dynamics are needed to cross barriers between biological levels as well as spatiotemporal scales. Overall, the conclusions of Chapter 2 may help to develop overarching frameworks for predicting chemical effects in ecosystems, including for untested species. Chapters 3 and 4 present a field study combined with laboratory analyses on the potential propagation of pesticides and their effects from agricultural stream sections to the edge of least impacted upstream sections, which can serve as refuges for many species. The study examines exposure and effects at different biological levels at three site types, the pesticide-polluted agricultural sites (termed agriculture), least impacted upstream sites (termed refuge) and transitional sites (termed edge), in six small streams of south-west Germany.
The results in Chapter 3 show that regional transport of pesticides can lead to ecologically relevant pesticide exposure in forested sections within a few kilometres upstream of agricultural areas (i.e. at both edge and refuge sites). As further demonstrated in Chapter 3, the tested indicators of community responses (Jaccard index, taxonomic richness, total abundance, SPEARpesticides) together suggest a species turnover from upstream refuge to downstream agricultural sites and a potential influence of adjacent agriculture on the edge sites. In contrast, Chapter 4 does not identify any particular edge effect that distinguishes organisms and populations at edge sites from those at more upstream refuge sites. Gammarus fossarum populations at edges show levels of imidacloprid tolerance, energy reserves (i.e. lipid content) and genetic diversity equal to those of populations further upstream. Gammarus spp. from agricultural sites exhibit a lower imidacloprid tolerance than those from edge and refuge sites, potentially due to energy trade-offs in a multiple-stressor environment, but the related effects do not propagate to the edges (Chapter 4). Notwithstanding, the results of Chapter 4 indicate bidirectional gene flow between site types, supporting the hypothesis that adapted genotypes – if present at locally polluted sites – could spread to populations at least impacted sites. Taken together, Chapters 3 and 4 illustrate that pesticides and their effects can potentially propagate to least impacted upstream sections, findings that are, to our knowledge, empirically novel. The results of this thesis can help in predicting or explaining population and community dynamics in least impacted habitats and can ultimately inform pesticide management as well as freshwater restoration and the protection of biodiversity.
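The Jaccard index used as a community-response indicator is simply the shared fraction of taxa between two sites: a low value between upstream refuge and downstream agricultural communities indicates species turnover. A minimal sketch with invented taxa lists:

```python
def jaccard(site_a, site_b):
    """Jaccard similarity of two presence/absence taxa lists:
    size of the intersection divided by size of the union."""
    a, b = set(site_a), set(site_b)
    return len(a & b) / len(a | b)

# illustrative taxa only, not the surveyed communities
refuge = ["Gammarus fossarum", "Baetis", "Leuctra", "Sericostoma"]
agriculture = ["Gammarus fossarum", "Baetis", "Chironomus", "Erpobdella"]
j = jaccard(refuge, agriculture)   # 2 shared taxa out of 6 in total
```

In a field data set, the same index would be computed pairwise between refuge, edge, and agricultural sites of each stream.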
The interaction between process, tool, spindle, and machine can influence the achievable machining accuracy of cutting processes. In micro-machining, however, the size and force relationships between chip, tool, and machine tool differ fundamentally from those in machining with tool sizes above one millimetre. Findings obtained there can therefore not readily be transferred to micro-machining, and it must be identified separately for micro-machining which effects and factors influence the achievable machining accuracy. The altered size relationships, engagement conditions, and machine components used, however, complicate experimental investigation. A simulation-based analysis of the process and the machine components can therefore contribute substantially to understanding the interaction between process, tool, spindle, and machine in micro-machining.
This thesis presents simulation-based methods for analysing the interaction between process, tool, spindle, and machine in micro-machining. Building on these methods, the interaction between spindle shaft and electric motor as well as the interaction between process, tool, spindle, and machine are investigated for micro-milling and micro-grinding. No interaction can be identified between the spindle shaft and the electric motor; instead, there is a non-negligible unidirectional influence of the electric motor on the spindle shaft. Likewise, a unidirectional influence of the tool spindle on the tool was determined. An interaction does occur between the process and the tool; however, it is confined to the tool, i.e. the spindle shaft is not affected by the tool. Overall, it becomes apparent that in micro-machining not only is the separation of the machine tool and the spindle-tool system expedient, but the tool and the tool spindle must also be considered as separate aspects.
Single-phase flows are attracting significant attention in Digital Rock Physics (DRP), primarily for the computation of permeability of rock samples. Despite the active development of algorithms and software for DRP, pore-scale simulations for tight reservoirs — typically characterized by low multiscale porosity and low permeability — remain challenging. The term "multiscale porosity" means that, despite the high imaging resolution, unresolved porosity regions may appear in the image in addition to pure fluid regions. Due to the enormous complexity of pore space geometries, physical processes occurring at different scales, large variations in coefficients, and the extensive size of computational domains, existing numerical algorithms cannot always provide satisfactory results.
Even without unresolved porosity, conventional Stokes solvers designed for computing permeability at higher porosities tend, in certain cases, to stagnate for images of tight rocks. If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, having a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behavior, we perform a spectral analysis of the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for the non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with increasing surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much more efficient and robust for such geometries. Furthermore, we show that using the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
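The Uzawa iteration discussed above can be sketched on a toy saddle-point system with a diagonal velocity block, so that the velocity solve is trivial. This illustrates only the update structure of the algorithm, not the staggered finite-difference Stokes solver of the thesis.

```python
def uzawa(A_diag, B, f, g, omega=1.0, iters=50):
    """Uzawa iteration for the saddle-point system
        [A  B^T][u]   [f]
        [B  0  ][p] = [g]
    with diagonal A, so A^{-1} is trivial. Each sweep solves
    A u = f - B^T p, then updates p <- p + omega*(B u - g).
    Convergence is fast when the Schur complement B A^{-1} B^T is
    close to the identity, which is exactly the property that breaks
    down for geometries with a large surface-to-volume ratio."""
    n, m = len(A_diag), len(B)
    u, p = [0.0] * n, [0.0] * m
    for _ in range(iters):
        # velocity solve: u = A^{-1} (f - B^T p)
        u = [(f[i] - sum(B[j][i] * p[j] for j in range(m))) / A_diag[i]
             for i in range(n)]
        # pressure update with the divergence residual B u - g
        p = [p[j] + omega * (sum(B[j][i] * u[i] for i in range(n)) - g[j])
             for j in range(m)]
    return u, p

# toy system: A = diag(2, 2), one "divergence" constraint u1 + u2 = 0
u, p = uzawa(A_diag=[2.0, 2.0], B=[[1.0, 1.0]], f=[1.0, 3.0], g=[0.0])
```

For this toy system the Schur complement equals 1, so omega = 1 converges essentially immediately; an ill-conditioned Schur complement would instead force many sweeps, which is the stagnation the thesis analyses.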
As a central part of the work, a reliable workflow has been developed which includes robust and efficient Stokes-Brinkman and Darcy solvers tailored for low-porosity multiclass samples and is accompanied by a sample classification tool. Extensive studies have been conducted to validate and assess the performance of the workflow. The simulation results illustrate the high accuracy and robustness of the developed flow solvers. Their superior efficiency in computing permeability of tight rocks is demonstrated in comparison with the state-of-the-art commercial solver for DRP.
Additionally, the Navier-Stokes solver for binary images from tight sandstones is discussed.
The functional interplay between geometric surface properties and the resulting static friction coefficient is investigated in this thesis using machined steel surfaces. In addition to an extensive analysis of the influencing factors, the focus is placed on surface characterization. Based on pressing tests and the examination of surface deformation, a method for the functionally relevant description of the surface is developed. The surface portions contributing to the static friction coefficient are described by the parameters island count, projected average surface area, and average material volume. These characteristics enter into a mathematical calculation of a theoretical static friction coefficient, which is then compared with the static friction coefficient determined in a statistical test series. Statistical analyses as well as the establishment of a measurement uncertainty budget support the research results. This thesis thus contributes not only to function-oriented surface description but also to methodical correlation and regression analysis and to the integration of geometric surface parameters into static friction investigations.
Following Bourdieu, the study conceives of school and classroom teaching as a linguistic market on which the value and legitimacy of language(s) and speaking are negotiated. Through the ascription of language-biographical characteristics and linguistic competencies, as well as the assessment of linguistic performance, school actors, teachers as well as students, construct differences within school language markets. Speakers are thereby assigned social positions in a complexly structured (class)room space characterized by competing discourses about correct and appropriate speaking (Spotti 2013; Fürstenau & Niedrig 2011). To illuminate the phenomenon of language-related difference construction in school and classroom from multiple perspectives and to uncover the language policies at work (Shohamy 2006; Spolsky 2010), the study uses Nexus Analysis (Scollon & Scollon 2004) as a meta-methodology uniting several analytical dimensions: it considers and links school actors' experiences with language(s) and language use inside and outside school, school discourses about linguistic diversity and the legitimacy of language(s), the design and perception of linguistic schoolscapes, and constructions of language-related difference together with the accompanying self- and other-positionings in classroom interactions. The investigation draws on data from six classes at three primary schools in Rhineland-Palatinate: 100 students from the six classes took part in a questionnaire survey, and three school principals and seven teachers were interviewed in guided interviews. Across the six classrooms, 1,797 written-language artefacts were documented photographically and analysed with a coding manual in the tradition of linguistic landscape research.
In all six classes, lessons were recorded audiovisually; in total, recordings of 40 lessons are available. The recordings were examined using interaction analysis, partly multimodally, to reveal microanalytically how teachers and students contribute to the construction of difference in their classroom actions, for instance by exposing students as persons with migration biographies and/or as speakers of German as a second language. Bringing together the diverse approaches and perspectives highlights the tensions and ambivalences of dynamic school language markets and provides data-based evidence of the complexity of the school as a field of demands and action.
Particulate matter has long been considered an indicator of the pollution of urban stormwater runoff. Only a few studies have investigated the contamination with organic micropollutants and metals both in the dissolved and particulate phase as well as across different particle size classes. Yet this distribution plays an important role in better understanding and optimising urban stormwater treatment measures. This work therefore aimed at assessing the composition of particulate matter in urban stormwater in terms of physico-chemical properties (particle size distribution and organic content), as well as the occurrence of organic micropollutants and metals, their association with particulate matter and their removal from urban runoff. An intensive long-term monitoring campaign was conducted at a centralised stormwater treatment facility of an industrial area. The stormwater runoff was sampled with large-volume sampling tanks filled volume-proportionally to the runoff at the two outlets of the facility. This allowed the determination of event mean concentrations as well as load-related removal efficiencies of the treatment facility for different parameters. Within each sample, the concentrations of total suspended solids across different particle size fractions (< 63 µm, 63 – 125 µm, 125 – 250 µm, 250 – 2000 µm) were measured, as well as their organic content. Furthermore, the concentrations and the phase distribution of 5 metals (chromium, copper, zinc, cadmium, lead) and 29 organic micropollutants including polycyclic aromatic hydrocarbons, industrial chemicals (e.g. organophosphates, alkylphenols) and biocides were analysed across the different particle size fractions. Over a period of almost 2.5 years, a total of 36 sampling events were recorded and investigated within two sampling periods (2015 – 2016 and 2017 – 2019) at the stormwater treatment facility in Freiburg Haid.
The occurrence of organic micropollutants was determined in 22 of these events and the occurrence of metals in 17. The evaluation of the event mean concentration of total suspended solids showed that the fine fraction of the solids is of particular importance, as it showed an event mean concentration more than twice as high (34 mg \(L^{-1}\)) as the coarser particle fraction (14.9 mg \(L^{-1}\)). Regarding the occurrence of total suspended solids in terms of the transported solid load, the solids < 63 µm accounted for a mean proportion of 61 %, the fraction 63 – 125 µm for 13 %, the fraction 125 – 250 µm for 6 % and the fraction 250 – 2000 µm for 9 % of the total solid mass. In terms of the organic content of the solids, the results showed a clear increase of the organic content with increasing particle size (measured as loss on ignition).
As with the solids, the highest concentrations of the investigated organic micropollutants and metals were found in the particle size fraction < 63 µm. This fine fraction also accounted for the largest load of organic micropollutants and metals. Therefore, the particle loading with organic micropollutants and metals, i.e. the particle-bound micropollutant or metal concentration, was calculated in this study. For most substances, a rather even distribution over the smallest three particle size fractions was found. A certain correlation of the organic content with the occurrence of organic micropollutants and metals could be shown, so it can be assumed that the particle-bound concentration is influenced by the organic content of the particulate matter. However, since, among other things, the largest particle-bound pollutant loads are transported with particles < 63 µm, the fine fraction represents the relevant particle size in urban stormwater runoff. Regarding the total treatment efficiency (including sedimentation efficiency and volume retention), the facility investigated in this study was able to reduce the load of fine particles by only a quarter, whereas the larger particle size classes were reduced by far more than half in most cases. If total suspended solids over the entire particle size range were used as a proxy to estimate the removal efficiency of metals and organic micropollutants, the efficiency would be overestimated and the actual pollutant load released into the environment would thus be underestimated. However, examining whether the particle size fraction < 63 µm would be a more suitable proxy showed that even for substances with a high tendency to adsorb onto particles (e.g. Cr, Cu, IND, GHI), the total treatment efficiency was still overestimated by the fine fraction.
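The load-related removal efficiency used throughout the monitoring campaign can be sketched as follows: each event contributes a load of event mean concentration times volume, and the efficiency compares the summed outlet load with the summed inlet load. All concentrations and volumes below are synthetic, not measured values from the campaign.

```python
def load_based_removal(inlet, outlet):
    """Load-related removal efficiency of a treatment facility over a
    set of events. Each event is (event mean concentration [mg/L],
    volume [m^3]); efficiency = 1 - outlet load / inlet load."""
    load_in = sum(c * v for c, v in inlet)
    load_out = sum(c * v for c, v in outlet)
    return 1.0 - load_out / load_in

# synthetic events: (event mean concentration, event volume)
inlet = [(34.0, 100.0), (20.0, 250.0)]
outlet = [(25.0, 90.0), (16.0, 230.0)]
eta = load_based_removal(inlet, outlet)
```

Computing this per particle size fraction (or per substance) is what reveals, for example, that fine particles < 63 µm are removed far less efficiently than the coarser fractions.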
Previous research has shown the importance of early science, technology, engineering, and math education for children’s knowledge, as it establishes a groundwork for their later learning and academic achievement. However, preschool teachers engage in science learning activities in particular only infrequently, and some teachers still voice the belief that science education is inappropriate for the early childhood years. Furthermore, there is a lack of clarity regarding the connections between teachers' attitudes (including their knowledge, beliefs, and willingness) towards teaching early science and their actual teaching practice, as well as the subsequent effects of teacher practice on children's learning outcomes. This dissertation primarily aims to clarify these associations. Block play offers the possibility to link scientific concepts (e.g., stability) to children’s everyday activities and thus represents an age-appropriate way to examine young children’s STEM learning. The present dissertation encompasses three research articles, focusing specifically on the interplay between preschool teachers’ dispositions and practice in block play and 4- to 6-year-old children’s knowledge. The first article focused on the validation of a self-developed instrument to assess preschool teachers’ willingness to engage in science teaching and examined the predictive power of teachers’ willingness for their practice. Results suggested that the instrument measured teachers’ willingness reliably and validly; however, teachers’ willingness did not predict their practice in block play. The second article examined the relationship between preschool teachers’ instructional quality during block play and various aspects of children's knowledge. Specifically, the study explored how instructional quality in block play influenced children's knowledge of stability, math, and spatial language.
Additionally, children’s academic self-concept and cognitive aspects (i.e., intelligence, working memory) were considered. Results implied that preschool teachers’ scaffolding activities were related to children’s stability knowledge in block play. Moreover, teachers’ instructional quality was positively correlated with children’s academic self-concept in block play. The primary focus of the third article was on implementing a block play curriculum. Study 3 therefore employed a longitudinal design to assess the effectiveness of a teacher training on teachers’ practice with the curriculum, which included both guided and free play. Teachers were randomly assigned to either a control group or an experimental group; the experimental group received training with the block play curriculum, while the control group did not receive any training. Results showed no change in teachers’ knowledge before and after training. Nonetheless, teachers in the experimental group applied more scaffolding after the training. Furthermore, preschool teachers applied more scaffolding during guided than during free play. Children’s math scores in the experimental group, but not in the control group, significantly improved from pre- to post-test. In the general discussion, the findings of the three articles are reflected in the light of the interplay between teachers’ dispositions and their teaching practice as well as the impact of teacher practice on children’s knowledge. In addition, the discussion reflects on methodological difficulties of empirical studies in early childcare settings, providing a prospective view on multimethod approaches for future research. Taken together, the present dissertation contributes to a more profound understanding of how teacher practices and children's knowledge interact.
Further, the research holds great relevance for practical application as it illustrates the differential effects of teacher training on preschool teachers’ knowledge and their teaching practice.
Nitrogen removal from wastewater is increasingly important to protect natural water sources and has proven a challenge for wastewater treatment plants in different countries. Strict discharge norms for nitrogen components and unfavourable wastewater quality are among the main challenges observed.
An example WWTP (450,000 PECOD,120), representative of these challenges (i.e. strict discharge norm for NH4-N and TN, partially unfavourable wastewater composition for upstream denitrification) was modelled with the software SIMBA. The model was calibrated, and validated, using different statistical parameters. The model was used for dynamic simulation to test different operational and automation strategies, to improve nitrogen removal.
The tested strategies considered the bypass of primary clarifiers, changes in the anaerobic, anoxic, and aerobic reactors configuration, changes in the aeration system (DO setpoint, the inclusion of online sensors and different control approaches in the aeration loop), the adjustment of the internal recirculation rate, the implementation of intermittent denitrification, among others. The addition of an anaerobic digestion stage, considering the adjustment of the sludge age in the biological treatment and the treatment of the centrate (including nitrogen backload), was tested as well.
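The strategy of including an NH4-N online sensor in the aeration loop can be sketched as a simple cascade: an outer proportional controller trims the dissolved-oxygen (DO) setpoint from the measured NH4-N. The setpoints, gain, and limits below are invented for illustration and are not values from the SIMBA model:

```python
# Illustrative sketch (not the SIMBA model): a proportional NH4-N feedback
# that trims the DO setpoint of the inner aeration control loop.
def do_setpoint(nh4_measured, nh4_target=1.5, do_base=1.5,
                gain=0.5, do_min=0.5, do_max=3.0):
    """Return a dissolved-oxygen setpoint (mg/L) from the NH4-N error."""
    setpoint = do_base + gain * (nh4_measured - nh4_target)
    return max(do_min, min(do_max, setpoint))  # clamp to a safe aeration range

print(do_setpoint(4.0))  # high NH4-N -> raise DO setpoint, more aeration
print(do_setpoint(0.5))  # low NH4-N -> lower DO setpoint, saving energy
```

Such a cascade aerates only as much as nitrification currently demands, which is the mechanism behind the reduced air requirements reported below.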
To evaluate the strategies' performance, an evaluation criteria chart was created to select the best strategies from an overall perspective, considering the improvements or deterioration in norm compliance, aeration requirements, pollutant emissions to the environment, and biogas production (if applicable).
The best overall results were obtained with strategies that aimed to improve the denitrification capacity (e.g. increasing the anoxic volume by reducing the aerobic volume), adjusted the air requirements (e.g. inclusion of an NH4-N online measurement in the aeration control loop), and provided flexibility (e.g. intermittent denitrification). With the right combination of strategies, norm compliance was significantly improved, e.g. the number of norm exceedances was reduced from 31 to 4 per year, and the emissions to the environment were reduced as well.
The inclusion of an anaerobic digestion stage for sewage sludge treatment challenges the nitrogen removal even further, but similar optimisation strategies, based on the same approach were able to improve norm compliance.
However, none of the combinations, with or without anaerobic digestion, achieved total norm compliance. Therefore, a treatment stage based on a different technology than A2/O, a sequencing batch reactor (SBR), was designed, providing increased operational flexibility. The A2/O system in the computer model was replaced by an SBR process. This showed the best results based on the criteria previously defined, with total norm compliance.
Based on the learnings from the design, redesign, and strategies tested, a guideline for an integral optimisation of nitrogen removal was developed. It rests on six pillars: a detailed WWTP operational analysis; the use of dynamic simulation as a tool; the testing of known and simple optimisation approaches; the definition of clear and objective evaluation criteria; the consideration of anaerobic digestion (and its nitrogen backload); and finally the re-evaluation of the type of technology used for biological wastewater treatment.
This dissertation project aims to examine the potential of network modelling, an increasingly popular methodology in emotion research (e.g., Fried et al., 2016), to better comprehend age-related differences in structural connections between cognitive processes such as fluid intelligence and executive control functions. Furthermore, it aims to identify the key variables that link self-regulation to executive control functions and the age-related differences therein. Lastly, it seeks to delve into the key variables and correlations between executive control functions, self-regulation, and affect utilizing a longitudinal design in combination with machine learning as a data-driven method.
In study 1, differences between the cognitive performance networks of younger (M = 38.0 years of age, SD = 9.9) and older (M = 64.1 years of age, SD = 7.7) adults were explored. Network modelling showed that while speeded attention is essential throughout the life-span, connections between fluid intelligence and working memory were stronger, and intelligence was more central in the older group. Additionally, confirmatory factor modelling demonstrated that latent correlations were highest between working memory and intelligence, particularly in older adults, whereas inhibition had the lowest correlations with other abilities. This research suggests that the relations of cognitive abilities may differ between younger and older adults, indicating process-specific changes in the cognitive performance network.
In study 2, we investigated the connections of self-regulation (SR) and executive control functions (EF), which are theoretical concepts encompassing various cognitive abilities supporting the regulation of behavior, thoughts, and emotions (Inzlicht et al., 2021; Wiebe & Karbach, 2017). Evidence, however, implies that correlations between self-report measures and performance-based tasks are often difficult to observe (e.g., Eisenberg et al., 2019). We investigated connections and overlap between different aspects of SR and EF in a life-span sample (14-82 years). Participants completed several self-report measures and behavioral tasks, such as sensation seeking, mindfulness, grit, or eating behavior questionnaires and working memory, inhibition, and shifting tasks. Network models for a younger, a middle-aged, and an older age group were estimated to identify key variables that are well connected in the SR and EF construct space. In general, stronger connections were observed within the clusters of SR and EF than between them, and older adults appeared to have more connections between SR and EF than younger individuals, probably because of declining cognitive resources.
In study 3, we analyzed the intricate links between EF, SR, and affect, as well as individual differences in these relations. Bridgett et al. (2013) proposed that EF and SR are psychological constructs that support the regulation of cognition and affect. A total of 315 participants, aged 14 to 80, answered questionnaires and took part in behavioral tasks evaluating EF, SR, and both positive and negative affect twice (one month apart). Combined X-means and deep learning algorithms aided in the separation of two distinct groups featuring different EF performances, SR tendencies, and affective experiences. Network model analysis was then utilized to confirm the connections between the EF, SR, and affect variables in each of the two groups. The two groups displayed maximal centrality for variables linked to SR and positive affect. Group membership remained mostly consistent (85%) across both measurement occasions. Logistic regression indicated that age and personality (conscientiousness, neuroticism, and agreeableness) predicted group membership. This sheds light on stable individual differences in the complex relations of EF, SR, and affect.
This dissertation project utilized a combination of standard approaches (such as confirmatory factor analysis; CFA) and advanced approaches (such as network models, machine learning algorithms, and deep learning) to explore the connections between cognitive abilities, EF, SR, and affect. Our findings are in line with the theory of process specific changes in age-dedifferentiation. Findings suggested that connections between SR and EF were stronger within clusters, and positive affect was better connected to SR than EF measures. Lastly, age and personality traits were found to predict the clusters. These findings suggest that computational modelling is an effective exploratory tool in understanding how cognitive abilities and other psychological constructs may interact. Further research is necessary to gain further insights on the mechanisms behind differences in network structures.
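The notion of centrality used in these studies can be illustrated with a minimal sketch: strength centrality is simply the sum of absolute edge weights per node in the estimated network. The node labels and edge weights below are invented placeholders, not results from the studies:

```python
# Sketch: "strength" centrality in a psychometric network.
# The symmetric edge-weight matrix (e.g. partial correlations) is made up.
labels = ["WM", "Gf", "Inhibition", "Shifting"]
edges = [
    [0.00, 0.55, 0.10, 0.30],
    [0.55, 0.00, 0.05, 0.25],
    [0.10, 0.05, 0.00, 0.15],
    [0.30, 0.25, 0.15, 0.00],
]

def strength(matrix):
    """Sum of absolute edge weights per node."""
    return [sum(abs(w) for w in row) for row in matrix]

centrality = dict(zip(labels, strength(edges)))
print(max(centrality, key=centrality.get))  # most central node
```

In a real analysis the edge weights would come from a regularized network estimate rather than being specified by hand.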
This thesis deals with the modeling and simulation of district heating networks (DHN) and the mathematical analysis of the proposed DHN model. We provide a detailed derivation of the complete system of governing equations, starting from a brief exposition of the physical quantities of interest, continued with the components to set up a graph-based network model accounting for fluxes and coupling conditions, the transport equations for water and thermal energy in pipelines, and the terms representing consumers and producers. On this basis, we perform an analysis of the solvability of the model equations, starting from the scalar advection problem in a single-consumer single-producer network, up to a generalized problem suitable for modeling simple networks without loops. We also derive an abstract formulation of the problem, which serves as a rigorous mathematical model that can be utilized for optimization problems. The theoretical results can be utilized to perform transient simulations of real-world DHN and optimize their performance by optimal control, as indicated in a case study.
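The scalar advection problem at the heart of such models can be sketched numerically: the toy discretization below (illustrative grid, velocity, and temperatures, not values from the thesis) propagates a supply-temperature front through a single pipe with a first-order upwind scheme:

```python
# Minimal sketch of thermal-energy transport in one pipe:
# dT/dt + v * dT/dx = 0, first-order upwind in space, explicit Euler in time.
nx, dx, v, dt = 50, 1.0, 1.0, 0.5        # CFL number = v*dt/dx = 0.5
T = [60.0] * nx                           # pipe initially at 60 degrees C
T_inlet = 80.0                            # producer raises the supply temperature

for _ in range(40):                       # march 20 s in time
    T_new = T[:]
    T_new[0] = T_inlet                    # boundary condition at the producer
    for i in range(1, nx):                # upwind difference (v > 0)
        T_new[i] = T[i] - v * dt / dx * (T[i] - T[i - 1])
    T = T_new

print(round(T[10], 1))  # the warm front has passed node 10 by now
```

The upwind scheme is stable for CFL numbers up to one but adds numerical diffusion, which is one reason the thesis treats the transport equations analytically as well.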
In one-dimensional (1-D) Ultrasound (US) measurements, signals are acquired that form the basis of more sophisticated two-dimensional (2-D) or three-dimensional (3-D) US imaging. These 1-D signals contain a lot of raw information about the US wave propagation and interaction with the medium that is only processed in parts during image generation. While image representations are easy to interpret for humans, the analysis of US wave signals is hard to perform without applying algorithms to extract desired features.
This work investigates reliable and fast 1-D US signal classifications to distinguish between different stages or states in biomedical US scenarios and shows how the new field of Machine Learning (ML) on raw US wave data provides advantages and different applications. To achieve good results, the input signals are treated as time series, which requires the deployment of comparatively complex Time Series Classification (TSC) algorithms.
The literature shows that previous research efforts have largely tackled only the classification and segmentation of US Brightness mode (B-Mode) images, while approaches to classify 1-D signals have been neglected to a large extent.
This research contributes by developing, deploying and evaluating classification approaches for three distinct biomedical US classification tasks and finds that the respective signal classifications are possible for different scenarios with varying degrees of accuracy. It entails the comparison of several combinations of data types (e.g. temporal, spectral and statistical features or raw signals), ML models and pre-processing steps to provide a strong foundation for robust, binary classifications of 1-D US signals in scenarios based on low-cost wearable, mobile and stationary devices. This research addresses previously unanswered scientific questions by providing detailed descriptions of beneficial domain-specific knowledge (DSK), achieved accuracies and the times needed for training and evaluation of the examined ML models.
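The feature-based side of this comparison can be sketched in miniature: simple statistical features feed a nearest-centroid binary classifier. The signals, features and class names below are synthetic stand-ins, not the clinical or experimental US data and ML models examined in this work:

```python
# Hedged sketch of feature-based 1-D signal classification:
# statistical features + a nearest-centroid rule on synthetic signals.
import math

def features(signal):
    """Standard deviation and normalized zero-crossing rate."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a - mean) * (b - mean) < 0
    )
    return (std, crossings / n)

def nearest_centroid(train, labels, query):
    """Assign query to the class with the closest mean feature vector."""
    cents = {}
    for lab in set(labels):
        feats = [features(s) for s, l in zip(train, labels) if l == lab]
        cents[lab] = tuple(sum(f) / len(f) for f in zip(*feats))
    q = features(query)
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(cents[lab], q)))

# Two synthetic classes: slow oscillations ("relaxed") vs fast ("contracted").
slow = [[math.sin(2 * math.pi * 2 * t / 100) for t in range(100)] for _ in range(3)]
fast = [[math.sin(2 * math.pi * 10 * t / 100) for t in range(100)] for _ in range(3)]
train, labels = slow + fast, ["relaxed"] * 3 + ["contracted"] * 3
query = [math.sin(2 * math.pi * 9 * t / 100) for t in range(100)]
print(nearest_centroid(train, labels, query))
```

Real TSC pipelines of the kind compared in this work replace both the hand-picked features and the centroid rule with far richer feature sets and models, but the train/extract/classify structure is the same.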
The resulting ML pipelines include solutions based on data acquired from custom experimental setups or clinical trials. Possible real-world applications might include muscle contraction trackers, muscle fatigue detectors, epiphyseal radius bone closure detectors or devices providing information about advanced liver disease stages.
Automated machine-assisted classifications requiring as little DSK as possible from the end user enable application scenarios ranging from fitness or rehabilitation trackers as consumer devices to solutions providing diagnostic support without requiring extensive knowledge from professional medical practitioners, for example decision support systems for bone age assessment in clinical use or liver health assessment systems for gastroenterologists.
This work shows that reliable, robust and fast classifications based on 1-D US signals are possible with high accuracy depending on the examined scenario, with achieved F1-scores ranging from ≈ 70% to ≈ 87%. These results show that real-life applications for recreational purposes are already possible and that critical applications for clinical use are highly likely to become feasible once the presented approaches are further optimized.
A new class of amines that are promising solvents for reactive CO2-absorption processes was thoroughly investigated in a comprehensive experimental study. The amines are all derivatives of triacetoneamine and differ only in the substituent of the triacetoneamine ring structure. These amines are abbreviated by the acronym EvA with a consecutive number that designates the derivatives. About 50 EvAs were considered in the present study, of which 26 were actually synthesized and investigated as aqueous solvents. The investigated properties were: solubility of CO2, rate of absorption of CO2, liquid-liquid and solid-liquid equilibrium, speciation (qualitative and quantitative), pK-values, pH-values, foaming behavior, density, dynamic viscosity, vapor pressure, and liquid heat capacity. All 26 EvAs were assessed in an experimental screening. The results were compared with those of two standard solvents from industry: aqueous solvents of monoethanolamine (MEA) and a solvent blend of methyl-diethanolamine and piperazine (MDEA/PZ). Detailed studies were carried out for two EvAs that revealed significantly improved performance compared to MEA and MDEA/PZ: EvA34 combines favorable properties of MEA and MDEA/PZ in one molecule. EvA25 reveals a liquid-liquid phase split that reduces the solubility of CO2 in the solvent and shifts the CO2 into the aqueous phase. This allowed the design of a new CO2-absorption process that takes advantage of the liquid-liquid phase split. Finally, the chemical speciation in 16 EvAs was investigated by NMR spectroscopy. From the results, relationships between the chemical structure of the EvAs and the observed speciation, basicity, and application properties were established. This enabled giving guidelines for the design of new amines and proposing new types of amines, which were called ADAMs.
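Since pK-values govern the acid-base speciation investigated here, a minimal illustration via the Henderson-Hasselbalch relation may help: the fraction of protonated amine as a function of pH. The pKa below is an invented placeholder, not a measured EvA property:

```python
# Illustrative acid-base speciation sketch for a monoprotic amine base.
def protonated_fraction(pH, pKa):
    """[AmineH+] / ([AmineH+] + [Amine]) from the Henderson-Hasselbalch relation."""
    ratio = 10 ** (pKa - pH)          # [AmineH+] / [Amine]
    return ratio / (1.0 + ratio)

pKa = 9.5                              # hypothetical amine pKa
for pH in (7.0, 9.5, 12.0):
    print(pH, round(protonated_fraction(pH, pKa), 3))
```

At pH = pKa the amine is half protonated; real CO2-loaded solvents additionally contain carbamate and (bi)carbonate species, which is why NMR speciation was needed here.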
This thesis aims to establish a transient electro-thermomechanical model capable of characterizing the shape-morphing capabilities of shape memory alloy hybrid composites (SMAHCs). The particular SMAHC type examined in this study comprises a rigid substrate, a soft interlayer, and SMA wires sewn on top. The model was synthesized from the bottom up using well-established equations, methodologies, and solution procedures, taking into account appropriate simplifications and assumptions. The implementation was done with open-source solutions to ensure free availability. The model extends existing models to include aspects of external influences so that, for example, the efficiency and dynamics of the SMAHC can be predicted as a function of external mechanical loads and different ambient temperatures. Inputs to the model include geometric and material design factors as well as Joule heating and ambient conditions, while outputs include the SMAHC’s deflection, load-carrying capacity, bandwidth, and energy consumption. Individual components of the SMAHC were characterized to create simulation input parameters, and methodologies for characterization were devised. The thermomechanical and electro-thermomechanical model was validated by comparing experimental and simulated data. Despite the various assumptions and simplifications, the findings demonstrate that the transient deformation behavior during the electrically induced thermal activation of a SMAHC at room temperature and external loads of less than 19.2 N can be predicted with deviations of less than 20 percent. With increasing mechanical stresses in the shape memory alloy attributable to external loads or rigid substrates, and at temperatures above the austenite start temperature or below -10°C, the model’s applicability may become questionable.
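The electro-thermal part of such a model can be sketched, under strong simplification, as a lumped-capacitance energy balance for a Joule-heated SMA wire. All parameter values below are illustrative assumptions, not the characterized SMAHC properties:

```python
# Rough lumped-capacitance sketch of electro-thermal SMA wire heating:
# m*c*dT/dt = I**2 * R - h*A*(T - T_amb), integrated with explicit Euler.
m, c = 1e-4, 450.0        # wire mass (kg), specific heat (J/(kg K))
R, I = 5.0, 0.5           # electrical resistance (ohm), current (A)
h, A = 80.0, 3e-4         # convection coefficient (W/(m2 K)), surface area (m2)
T_amb, T, dt = 20.0, 20.0, 0.05

for _ in range(2000):      # 100 s of time stepping
    dTdt = (I**2 * R - h * A * (T - T_amb)) / (m * c)
    T += dTdt * dt

T_steady = T_amb + I**2 * R / (h * A)   # analytic steady state
print(round(T, 1), round(T_steady, 1))  # both approach the same temperature
```

In the actual model this energy balance is coupled to the phase-transformation kinetics and the mechanics of the laminate, which this sketch omits entirely.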
The field of 3D reconstruction is one of the most important areas in computer vision. It is not only of theoretical importance, but is also increasingly used in practical applications, be it in reverse engineering, quality control or robotics. In practical applications, where high-precision reconstructions are required for a large variety of different objects, structured light reconstruction is often the method of choice. It makes it possible to achieve accurate and dense point correspondences over the entire scene, regardless of object texture or features. Techniques that project phase-shifted sinusoids are widely used because, based on the harmonic addition theorem, they theoretically allow surface encoding at full camera resolution invariant to the object’s coloring.
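The phase-shift principle can be sketched compactly: with N equally shifted sinusoidal intensities per pixel, the harmonic addition theorem lets the encoded phase be recovered independently of the surface's offset and amplitude. A minimal sketch with invented pixel values:

```python
# Sketch of N-step phase-shift decoding for one pixel.
import math

def decode_phase(intensities):
    """Recover the encoded phase from N >= 3 equally phase-shifted samples."""
    N = len(intensities)
    s = sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(intensities))
    return math.atan2(-s, c) % (2 * math.pi)

# A pixel with unknown offset 0.6 and amplitude 0.3 (the object's coloring):
true_phase = 1.234
samples = [0.6 + 0.3 * math.cos(true_phase + 2 * math.pi * n / 4) for n in range(4)]
print(round(decode_phase(samples), 3))  # recovers 1.234
```

Because offset and amplitude cancel in the sine/cosine sums, the decoded phase is invariant to the object's coloring, which is exactly the property exploited above.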
In this thesis, a fully automatic reconstruction pipeline based on the sinusoidal structured light technique is presented. From the projection of the fringe patterns for encoding the object’s surface, through the robust matching of point correspondences with sub-pixel accuracy and the auto-calibration of the setup including the active device, up to the fully automatic alignment of the partial reconstructions, all steps will be described and examined in detail. In the process, improvements will be achieved in the area of matching, obtaining highly accurate and topologically consistent correspondences with sub-pixel precision between all the devices used. Furthermore, the auto-calibration from point correspondences, based on the epipolar geometry of the structured light system, is improved: weaknesses of previous methods in the extraction of focal lengths from the fundamental matrices are discovered and addressed. The partial point clouds, reconstructed from the auto-calibrated devices, are finally pre-aligned using a neural network approach based on light-resistant optical flow estimation and subsequently refined using a global approach.
The weaknesses of the structured light method itself will also be addressed and partially fixed in the course of this work. Since it is an active reconstruction method, certain surface properties can affect the quality of the reconstruction. It will be shown how these problems can be eliminated or at least reduced using an iterative approach that combines fringe patterns with an inverse texture. Another weakness of the method is its time-consuming acquisition procedure: typically, a large number of horizontal and vertical fringe patterns are projected onto the scene to achieve high-precision encoding despite the limited dynamic range and resolution of the projector. Therefore, a method will be presented that combines the horizontal and vertical patterns into a simultaneous two-dimensional surface encoding.
Climate change requires the expansion of urban blue-green infrastructure, which, however, entails a considerable additional demand for water. Centralized wastewater infrastructures do not meet the requirements of resource efficiency and sustainability. A new approach to water in the urban context is therefore necessary. The separate collection of lightly polluted greywater from showers and hand-wash basins provides a nearly continuous, little-polluted water resource for reuse. Nature-based processes such as soil filters can be used for greywater treatment; however, their large area requirement has so far limited their use in densely populated areas. This thesis presents technology-based and conceptual approaches. Eight vertical-flow soil filters for use-oriented greywater treatment were investigated at laboratory and pilot scale, and an Excel-based tool was additionally developed to assess the effects of greywater separation on conventional centralized wastewater treatment plants. The results show fluctuating greywater compositions and volumes. Owing to the limited data available in the literature, it is recommended to use the 85th-percentile values determined here, 13 g COD (chemical oxygen demand) per person equivalent (PE) and day and 55 L/(PE·d), for the design of plants treating screened, lightly polluted greywater. The nitrogen loads and concentrations determined were 60 – 130 % higher than previously assumed owing to urine contamination, while phosphorus concentrations were about 60 % lower for regulatory reasons. All vertical filters mostly achieved effluent concentrations of < 2.0 mg/L total suspended solids (TSS) and < 10 mg/L COD (i.e., removal rates mostly > 98 % TSS and > 97 % COD). The elevated Rhine sand filter showed limited nitrification below 12 °C, whereas the lava sand filter nitrified completely above 5 °C.
The vertical filters removed up to 50 – 70 % of nitrogen with drainage impoundment and nitrate recirculation. The lava sand filter largely retained phosphorus. The reduction of Escherichia coli, enterococci, and total coliforms was > 3 log units, while organic micropollutants were mostly removed by > 85 %. Through targeted adaptations of design and operation, qualities suitable for various uses (irrigation, infiltration, and toilet flushing) were achieved. The area required for soil filters treating lightly polluted greywater was determined to be 0.4 m²/PE (based on the 85th-percentile values), assuming an areal COD load of 32 g/(m²·d) and an areal hydraulic load of 130 L/(m²·d). The use of lava sand filters in an elevated design proved practicable, promoting the extension of the soil filter process to urban areas. The balance calculations show that separating up to 17 % of the greywater connected to the wastewater treatment plant is beneficial for plant operation. At higher separation rates, however, nitrogen recovery or removal from nitrogen-rich sludge streams could become necessary. The separation and decentralized treatment of greywater offers advantages such as evaporative cooling and water reuse, and supports the transition to resource-oriented sanitation systems. Overall, operationally and structurally adapted soil filters can play an important role in this transition and make a substantial contribution to the sustainable use of water in urban areas.
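The stated area requirement of 0.4 m²/PE follows directly from the 85th-percentile loads and the design areal loads; a minimal check using only the numbers given in the abstract:

```python
# Checking the soil-filter sizing from the abstract's design values.
cod_load_per_pe = 13.0   # g COD per person equivalent (PE) and day (85th percentile)
hyd_load_per_pe = 55.0   # L per PE and day (85th percentile)
areal_cod_load = 32.0    # design areal COD load, g/(m2*d)
areal_hyd_load = 130.0   # design areal hydraulic load, L/(m2*d)

area_cod = cod_load_per_pe / areal_cod_load   # m2/PE from the COD criterion
area_hyd = hyd_load_per_pe / areal_hyd_load   # m2/PE from the hydraulic criterion
area_required = max(area_cod, area_hyd)       # the stricter criterion governs
print(round(area_cod, 2), round(area_hyd, 2), round(area_required, 1))
```

Both criteria land at roughly 0.4 m²/PE, so neither COD nor hydraulic loading dominates the design.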
The intensive use of pesticides is one of the main causes of global arthropod decline, which can subsequently affect ecosystem services such as pollination, natural pest control, and soil fertility, and cascade to higher trophic levels including bats and birds. However, agriculture in large parts is strongly dependent on pesticides, and viticulture in particular is one of the major consumers of fungicides. Fungus-resistant grape varieties offer a very good opportunity to reduce fungicide applications by more than 80 % while maintaining healthy grapes. Here, the effects of fungicide reduction on arthropods and natural pest control were investigated on the one hand in a long-term study in an experimental vineyard and on the other hand in 32 commercially managed vineyards in southwestern Germany. In both designs, fungicide reduction resulted in mostly positive effects on arthropods and natural pest control. Particularly beneficial arthropods such as predatory mites and spiders were promoted by reduced fungicide applications. Contrastingly, potential vineyard pests such as phytophagous mites and leafhoppers decreased under fungicide reduction. Fungus-resistant grape varieties are thus a promising approach to foster resilient agroecosystems and a more sustainable viticulture.
Northwest Africa is predicted to undergo a climatic shift from a temperate to an arid climate, resulting in increased aridity, water salinity, and river intermittency. These changes have the potential to impact freshwater communities, ecosystem functioning, and related ecosystem services. However, there is still limited data on the impact of climate change and salinity on river ecosystems and the people depending on them, particularly in understudied regions such as Northwest Africa. In this dissertation, I focus on the Draa River basin in southern Morocco to assess the primary factors shaping and altering macroinvertebrate communities. A particular focus is placed on the impacts of salt on the ecosystem and the consequences for human well-being. We conducted a meta-analysis covering 195 sites in Northwest Africa to examine the responses of insect communities and their trait profiles to climate change and anthropogenically induced stressors. To exclude large-scale geographic patterns such as variations in climate conditions, we conducted a confluence-based study focusing on tributaries and their joint downstream sections near three confluences in the Draa River basin. Additionally, we investigated the water and biological quality of 17 further sites, aiming to explore the relationship between human well-being and the ecosystem. Our approach involved conducting water measurements, biological monitoring, and household surveys to create water, biological, and human satisfaction indices. Our findings revealed that insect family richness in arid sites of Northwest Africa was, on average, 37 % lower than in temperate sites. Among the strongest factors contributing to reduced richness and low biological quality were low flow and high water salinity. Based on the results of the confluence study, only around five taxa comprised over 90 % of specimens per site, with a higher proportion of salt-tolerant generalist species in saline sites.
Resistance and resilience traits such as small body size, aerial dispersal, and air breathing were found to promote survival in arid and saline sites. However, low γ-diversity in the basin caused minimal differences in macroinvertebrate community composition, suggesting that the community was generally adapted to the arid climate. We observed positive associations between river water quality and biological quality indices. However, no significant associations were found between these indices and human satisfaction. Human satisfaction was particularly low in the Middle Draa, where 89 % of respondents reported emotional distress due to water salinity and scarcity. Inhabitants of areas characterized by higher levels of water salinity and scarcity generally rated drinking and irrigation water quality lower. Considering that large parts of Northwest Africa will become arid by the end of the century, we can expect a loss of macroinvertebrate diversity affecting the entire ecosystem, which might affect human well-being negatively. To protect the integrity of the ecosystem in the face of ongoing climate change, it is crucial to limit anthropogenic stressors such as secondary salinization and the pressures on water resources. Protecting both more and less saline rivers, preserving natural water flow, and maintaining connectivity between habitats will make it possible to maintain the Draa River's biodiversity, ensure ecosystem functioning, and benefit inhabitants through ecosystem services. Future policies and action plans should consider the interdependence between ecosystems and human inhabitants to enhance overall well-being.
In recent decades, there has been a strong global decline in biodiversity which is attributed, among other reasons, to intensified agriculture and the loss of habitats. Due to the significant ecological impacts it is crucial to comprehensively understand how management practices and the surrounding landscape affect species, as well as how these factors influence their populations over the long term. We studied the influence of weather and trapping effort on multi-day Malaise trap sampling, examining their effects on long-term monitoring data. We further explored how vineyard management and the presence of semi-natural habitats (SNH) affect arthropods in the wine-growing region Palatinate in southwest Germany.
We evaluated the impact of ambient weather conditions and trapping effort during Malaise trap exposure on biomass and taxa richness using metabarcoding. Insect activity was highest when the weather was warm and dry. Taxa accumulation increased fourfold from three days of monthly trapping to continuous trap exposure and nearly sixfold from sampling at a single site to 32 sites. Common species are likely to be captured with short trapping durations and a small number of sampling sites, while it remains challenging to comprehensively sample rare species. Metabarcoding provides a valuable method for long-term monitoring. However, additional sequencing efforts are required to establish more comprehensive DNA databases.
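The taxa-accumulation comparisons above rest on sample-based accumulation curves: the expected cumulative richness as more sampling events are pooled. A minimal sketch with invented site lists (not the metabarcoding data):

```python
# Sketch of a sample-based taxa accumulation curve, averaged over
# random orderings of the sampling events.
import random

samples = [  # invented per-sample taxon sets
    {"Apis", "Bombus"}, {"Apis", "Lasius"}, {"Bombus", "Vespa"},
    {"Apis"}, {"Formica", "Lasius"}, {"Vespa", "Formica", "Apis"},
]

def accumulation_curve(samples, permutations=200, seed=1):
    rng = random.Random(seed)
    totals = [0.0] * len(samples)
    for _ in range(permutations):
        order = samples[:]
        rng.shuffle(order)
        seen = set()
        for i, s in enumerate(order):
            seen |= s                    # pool one more sampling event
            totals[i] += len(seen)
    return [t / permutations for t in totals]

curve = accumulation_curve(samples)
print([round(x, 1) for x in curve])  # non-decreasing, saturating at 5 taxa
```

The flattening of such a curve is what indicates that common taxa are captured early, while rare taxa keep appearing only with substantially more sampling effort.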
Furthermore, we investigated how organic and conventional management, reduction of pesticides, and SNH in the surrounding landscape affect arthropod diversity in vineyards. Biodiversity was assessed in 32 vineyards in a crossed design of management (organic vs. conventional) and pesticide use (regular vs. reduced in fungus-resistant grape varieties). The pairs of vineyards were located in 16 landscapes, with increasing proportions of SNH in the surrounding area of the vineyards. We measured the biomass of captured specimens and used metabarcoding to assess the general arthropod biodiversity. Furthermore, we used morphological and acoustic species identification to investigate effects on wild bees and orthopterans. Biomass was almost one-third higher in conventional compared to organic vineyards, while organic vineyards had almost 50 % more bees. Densities of herb-dwelling orthopterans were 2.9 times higher in fungus-resistant compared to classic grape varieties under organic management. Higher proportions of SNH increased arthropod richness as well as abundance and richness of above-ground-nesting bees and further changed community composition of arthropods, including wild bees and orthopterans. Increased inter-row vegetation had positive effects on various groups of organisms. Our studies on the influence of vineyard management show that reducing pesticide use, particularly under organic management, can enhance sustainability in viticulture and promote biodiversity. Moreover, further species benefit from diverse inter-row vegetation and SNH in the surrounding landscape. We conclude that the cultivation of fungus-resistant grape varieties is of importance to minimize the need for non-specific pesticides, while it is also important to provide diverse vegetation in inter-rows and create a structurally rich environment with suitable SNH to conserve biodiversity in viticulture.
During our daily lives, we are confronted with vast amounts of data, the processing of which can dramatically influence our lives, both positively and negatively. The enormous amount of data (images, texts, tables, and time series), its variety, and its possible applications are not always obvious. Due to advancements in the internet of things (IoT), there exist billions of sensors that produce time series, which can be found everywhere, whether in medicine, the financial sector or the agricultural economy. This incredible amount of time series data has many hidden features which are useful for industry as well as for daily use; e.g. improved cancer prediction can save human lives. Recently, several deep learning methods have been proposed for analyzing this time series data. However, due to their black-box nature, their applicability is limited in critical sectors like medicine, finance, and communication. In addition, it is now mandatory under the artificial intelligence (AI) Act and the General Data Protection Regulation (GDPR) to protect sensitive data and provide explanations in safety-critical domains. To enable the use of deep neural networks (DNNs) in a broader domain scope, this thesis presents TimeFrame, a framework for privacy-preserving and interpretable time series analysis. TimeFrame consists of four main components, namely post-hoc interpretability, intrinsic interpretability, direct privacy, and indirect privacy. Interpretability is indispensable to avoid damaging people or the infrastructure. In the past years, development mostly focused on image data, which prevented the full potential of DNNs in time series processing from being exploited. To overcome this limitation, TimeFrame introduces five novel post-hoc interpretability components (Time to Focus, TSViz, TimeREISE, TSInsight, Data Lens) and two novel intrinsic interpretability components (PatchX, P2ExNet).
TimeFrame addresses multiple perspectives such as attribution, compression, visualization, influence, prototyping, and hierarchical splitting. Compared to existing methods, the components show better explanations, robustness, and scalability. Another crucial factor is privacy when dealing with sensitive data and deep learning. In this context, TimeFrame introduces two components (PPML, PPML x XAI) for direct and one component (From Private to Public) for indirect privacy. These components benchmark privacy approaches, their effect on interpretability, and the synthetic generation of data to overcome privacy concerns. TimeFrame offers a large set of interpretability and privacy components that can be combined and that consider numerous different aspects. Furthermore, the novel approaches have been shown to consistently outperform twenty existing state-of-the-art methods across up to 20 different datasets. To guarantee fairness, various metrics were used, including performance change, Sensitivity, Infidelity, Continuity, runtime, model dependency, compression rate, and others. This broad set of metrics makes it possible to provide guidelines for a more appropriate use of existing state-of-the-art approaches as well as the novel components included in TimeFrame.
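What a post-hoc attribution component does can be illustrated, in a heavily simplified form, by occlusion: score each time window by the drop in a model's output when that window is masked. This is a generic sketch in the spirit of attribution methods, not the actual algorithm behind TimeREISE or the other components:

```python
# Generic occlusion-based attribution sketch for a 1-D time series.
def occlusion_attribution(model, signal, window=4, baseline=0.0):
    """Importance of each window = score drop when the window is masked."""
    full = model(signal)
    scores = []
    for start in range(0, len(signal), window):
        masked = signal[:]
        for i in range(start, min(start + window, len(signal))):
            masked[i] = baseline
        scores.append(full - model(masked))   # big drop = important window
    return scores

# Toy "model": scores the mean of samples 8..11 (the informative region).
model = lambda s: sum(s[8:12]) / 4.0
signal = [0.0] * 8 + [1.0] * 4 + [0.0] * 8
attr = occlusion_attribution(model, signal)
print(attr.index(max(attr)))  # window 2, i.e. samples 8-11, is most important
```

Real attribution methods refine this idea with smarter perturbations and evaluation metrics such as the Sensitivity and Infidelity measures mentioned above.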
Individual thermal comfort in buildings, especially in office workplaces, is becoming increasingly
important in modern society. While technical devices for user-specific heating are well known and
implemented, only a few proven methods for individual cooling of a single person are available, most
of which are limited to convective heat transfer.
The primary goal of this research was the development of an effective and efficient cooling system
for individual building occupants based on longwave radiation exchange. To achieve this, the
technological concept of a thermoelectric cooling partition with latent heat storage (Thecla) was
developed. The system combines Peltier elements and heat storage based on a phase change
material to provide a tempered surface for directional radiative cooling of a person.
Thecla has been practically evaluated in the form of real prototypes in hardware tests and human
subject studies. In addition, the concept was evaluated theoretically through precise thermodynamic
analyses of each individual component and of the overall system. Based on these assessments, an
explicit computational model of Thecla was developed, which calculates the thermodynamic
behavior and energy balance of the system for varying environmental and operating parameters.
Coupled with measured and simulated building energy data, the overall energy efficiency of Thecla in
combination with central space cooling systems was assessed.
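The physical basis of such a system can be checked with a back-of-the-envelope estimate; the following sketch (my own simplification, not the thesis's computational model) uses the standard two-surface grey-body formula with an assumed parallel-plate geometry and assumed emissivities and temperatures:

```python
# Rough estimate of the net longwave radiation exchanged per unit area
# between a person's clothing surface and a cooled partition, using the
# grey-body parallel-plate formula q = sigma*(T1^4 - T2^4)/(1/e1 + 1/e2 - 1).
# Geometry, view factor, and all numerical values are assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiative_flux(t_person_c, t_panel_c, eps_person=0.95, eps_panel=0.9):
    t1 = t_person_c + 273.15
    t2 = t_panel_c + 273.15
    return SIGMA * (t1**4 - t2**4) / (1 / eps_person + 1 / eps_panel - 1)

# clothing surface at 31 degC facing a partition cooled to 18 degC
q = net_radiative_flux(31.0, 18.0)
assert 50 < q < 90   # tens of W/m^2, i.e. a perceptible cooling effect
```

Even this crude estimate shows that a moderately cooled surface exchanges a physiologically relevant heat flux by longwave radiation alone, which is the premise of the partition concept.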
The analysis suggests that the system concept of the thermoelectric partition is effective for individual user cooling. Thecla provides a perceptible and measurable cooling effect associated with a reduction in overall thermal sensation. The applied technologies allow cooling operation over relevant periods of time and, through latent heat storage, a temporal shift of cooling loads in buildings. For realistic application scenarios in buildings with central air conditioning, the energy-saving potential of using Thecla was demonstrated and quantified.
Thermoplastic fiber-reinforced polymer composites (TP-FRPs) are increasingly used together with metals in multi-material structures because of their lightweight potential [1, 2]. The use of TP-FRPs enables thermal joining, in which the wetting of the metal surface with polymer, and thereby the tensile shear strength of the joint, increases with increasing joining displacement. In this work, the joining displacement was validated as an indicator for quality-assured joining by induction heating.
The joining displacement is determined primarily by the joining temperature, the joining pressure, and the materials used. In order to use the joining displacement measured in-process as a quality assurance criterion, mechanical and in particular thermal strains must also be taken into account when evaluating the measured curves. These influences were captured and assessed by analytical methods and could thus be subtracted out when evaluating the joining displacement.
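The idea of subtracting thermal strains from the measured displacement can be sketched with a first-order correction; the formula and all numerical values below are my own illustrative assumptions, not the thesis's analytical model:

```python
# Hedged sketch: first-order correction of a measured joining displacement
# for thermal expansion. Each heated member contributes alpha * L * dT of
# apparent displacement, which is subtracted before the curve is used as a
# quality criterion. Values are assumptions for illustration.

def corrected_joining_displacement(measured_mm, members):
    """members: list of (alpha_per_K, heated_length_mm, delta_T_K)."""
    thermal_mm = sum(a * L * dT for a, L, dT in members)
    return measured_mm - thermal_mm

# assumed example: a steel member (alpha ~ 12e-6 1/K) heated by 250 K
steel = (12e-6, 20.0, 250.0)
m = corrected_joining_displacement(0.30, [steel])
assert abs(m - 0.24) < 1e-9   # 0.30 mm measured minus 0.06 mm thermal
```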
For quantitative evaluation, TP-FRP/steel joints were produced by induction joining, and the influence of surface pretreatment, process parameters, weathering, and cyclic loading on joint strength was investigated. It was shown that for all investigated TP-FRPs in combination with a laser-structured steel surface, joint strengths in the range of reference adhesive bonds were achieved. Good joint strengths were likewise achieved by applying adhesion promoters optimized for the respective matrix polymer to the steel surfaces. The damage behavior of the TP-FRP/steel joints after weathering or cyclic loading was analyzed by means of micrograph analyses and simulations. The work concludes with the implementation of induction joining in a production cell.
A large subgroup of soft robotics comprises multi-chamber pneumatic bending actuators. This thesis addresses the modeling of this type of actuator by means of a beam model and specifically works out the advantages gained by taking the mutual dependencies of design and modeling into account. Using a beam model is numerically far more efficient than simulating a three-dimensional body, yet the Cosserat beam employed still captures the degrees of freedom of axial strain, bending, shear, and torsion. First, a suitable design for multi-chamber bending actuators is derived through a systematic study of design aspects, taking modeling considerations into account. The individual chambers of the bending actuator are modeled via the principle of virtual work. The resulting model notably allows conclusions about the sensitivity of the chambers to external axial forces. This is an important aspect that is taken up in the modeling of the multi-chamber bending actuator with Cosserat beams. This model in turn exploits the actuator design, which lends itself particularly well to modeling, to establish a relationship between extensional and bending stiffness that, in contrast to previous models, also accounts for the axial elongation of the actuator. Three-dimensional simulations can thus be carried out solely on the basis of axial parameter identification. The relationships between design and modeling derived in this work, and the methods developed from them, form an important foundation for more complex applications in the future.
Sub-zero metalworking fluids: mechanisms of action and application behavior exemplified by turning operations
(2023)
The technology used for cooling and lubrication is of great importance for the performance of manufacturing processes, both for productivity and process stability and with regard to the quality of the machined workpieces. In turning, emulsions are usually employed for this purpose. Cryogenic media offer great potential for improvement but have so far rarely been used industrially. In this work, based on an extensive analysis of the fluid-mechanical, thermodynamic, and tribological phenomena of different cooling lubrication strategies, a novel sub-zero cooling lubrication strategy is motivated and developed. To this end, sub-zero metalworking fluids are formulated that have a freezing point far below 0 °C and can therefore be supplied at low temperatures in a stable liquid state. The sub-zero cooling lubrication strategy is investigated holistically with regard to jet generation, the supply method, and the cooling and lubrication effects. Based on these findings, an optimized sub-zero cooling lubrication strategy is analyzed using the turning of titanium alloys and steels as examples. Compared with dry machining and with cryogenic and conventional metalworking fluids, the novel sub-zero approach shows great potential. The sub-zero strategy is universally applicable and combines the advantages of cryogenic machining with those of conventional emulsions. Largely independent of the machined material, the workpiece geometry, the cutting tools, and the process parameters, the turning process is improved by the combination of strong cooling and lubrication effects.
Iron-sulfur (Fe/S) cluster-containing proteins are found in all domains of life and are among the most important metallocofactors. One of the main tasks of Fe/S proteins is electron transport, but they are also involved in a multitude of enzymatic and regulatory functions. To date, about 90 Fe/S proteins are known in humans, and several severe diseases are associated with defects in these proteins or their biogenesis. In baker's yeast S. cerevisiae, 39 protein entries are annotated as Fe/S cluster-binding in the yeast genome database, yet roughly 10% of the yeast proteome remains uncharacterized to this day. Studies of Fe/S proteins in the model organism yeast can contribute decisively to understanding complex metabolic pathways in higher eukaryotes. In this work, the two [2Fe-2S] proteins Apd1 and Aim32 from S. cerevisiae were investigated in detail using various in vivo and in vitro methods. In a first step, growth experiments were carried out to phenotype Δaim32 and Δapd1 strains. With the help of a chemogenomic database, growth conditions were found for Δapd1 yeast cells, and after further modifications also for Δaim32, under which these genes are essential. Using a mutagenesis study combined with the adapted chemogenomic screen, the ligands of the [2Fe-2S] cluster of Aim32 and Apd1 were identified in vivo.
To date, the connection between the CIA machinery and the insertion of [2Fe-2S] clusters has not been elucidated. The results of this work provided the first evidence, obtained by EPR spectroscopy, for CIA machinery-dependent insertion of the [2Fe-2S] cluster.
The protein Aim32 was investigated spectroscopically after heterologous expression in E. coli. Its Mössbauer and EPR spectra were highly similar to those of Apd1, and the two proteins became founding members of a new class of [2Fe-2S] proteins. The cluster ligands of Aim32 were identified in vitro; exchanging the cluster-binding histidines for cysteines changed the EPR parameters, which, together with the Mössbauer spectra of the protein, constitutes independent evidence for two histidine residues in the cluster coordination.
In the last part of the work, a protein from Thermomonospora curvata was examined in more detail. This protein features the HXGGH sequence motif of Aim32/Apd1, but here the second histidine is replaced by an aspartate. UV/Vis, EPR, and Mössbauer spectroscopy demonstrate the uniqueness of this protein. By mutating the cluster-binding amino acids to alternative ligands, the coordination of a [2Fe-2S] cluster by two cysteines, one aspartate, and one histidine was verified spectroscopically. This work provided fundamental insights into a new class of [2Fe-2S] proteins with flexible coordination.
Toxicology, the study of the adverse effects of chemicals and physical agents on living organisms, is a critical process in chemical and drug development. The low throughput, high costs, limited predictivity, and ethical concerns of traditional animal-based toxicity studies render them impractical for assessing the growing number and complexity of both existing and new compounds and their formulations. These factors, together with the increasing implementation of more demanding regulations, evidence the current need to develop innovative, reliable, cost-effective, and high-throughput toxicological methods.
The use of metabolomics in vitro presents the powerful combination of a human-relevant system with a multiparametric approach that allows multiple endpoints to be assessed in a single biological sample. Applying metabolomics in a cell-based system offers an alternative both to the ethical concerns and limited relevance of animal testing and to the restrictive nature of the single-endpoint evaluations characteristic of conventional toxicological in vitro assays. However, challenges remain that hamper the expansion of metabolomics beyond a research tool into a feasible, implementable technology for toxicological assessment.
The aim of this dissertation is to advance the applications of in vitro metabolomics in toxicology by addressing three major challenges that have limited its widespread implementation in the field. In Chapter 2, the high cost and low throughput of in vitro metabolomics were addressed through the development, standardization, and proof of concept of a high-throughput targeted LC-MS/MS in vitro metabolomics platform for the characterization of hepatotoxicity. In Chapter 3, the use of the developed in vitro metabolomics system was expanded beyond hazard identification to the derivation of dose- and time-response metrics that proved useful for point-of-departure (PoD) estimations in human risk assessment. Finally, in Chapter 4, to increase reliability of and confidence in using in vitro metabolomics data for risk assessment, the human relevance of the metabolomics in vitro assays was addressed by implementing and evaluating in vitro metabolomics in a hiPSC-derived 3D liver organoid system.
The work developed here demonstrates the suitability of in vitro metabolomics for mechanism-based hazard identification and risk assessment. By advancing the applications of metabolomics in toxicology, this work contributes significantly to the goal of 21st-century toxicology of human-relevant, non-animal toxicological testing, supporting toxicology's task of protecting human health and the environment.
This dissertation contributes to the emerging research field on men's underrepresentation in communal domains such as health care, elementary education, and the domestic sphere (HEED). Since these areas are traditionally associated with women and therefore counter-stereotypic for men, various barriers can hinder men's greater participation. We explored these relations using the example of how men's interest in parental leave, as a form of communal engagement, is shaped across different stages of the transition to fatherhood. Specifically, we focused on how gendered beliefs regarding masculinity and fatherhood, the possible selves men can imagine for their future, and the social support men receive from their normative environment relate to their intentions to take parental leave and their engagement in care more broadly. In Chapter 2, using experimental designs, we examined how different representations of a prototypical man, varying in stereotypic agentic and counter-stereotypic communal content, affect men's hypothetical intentions to take leave and their communal possible selves. Findings suggested that a combined description of a prototypical man as agentic and communal tended to increase men's parental leave-taking intentions compared to a control condition. In line with contrast effects, an exclusively agentic male prototype also tended to push men towards more communal outcomes. In Chapter 3, in a cross-sectional examination of the parental leave-taking intentions of expectant fathers, we found initial evidence for a link between male prototypes and men's behavioral preferences to take parental leave after birth. Yet, the support that expectant fathers received from their partners for taking parental leave emerged as the strongest predictor of men's leave-taking desire, intention, and expected duration.
In Chapter 4, using longitudinal data collected during men’s transition to fatherhood, we studied discrepancies between men’s prenatal caregiver and breadwinner possible selves and their actual postnatal engagement in each domain. Results suggested that fathers, on average, expected and desired to share childcare and breadwinning rather equally with their partners but had difficulties translating their intentions into behavior. The extent to which fathers experienced discrepancies was related to their attitudes towards the father role and the social support they received for taking parental leave and engaging in childcare. Moreover, experiencing a mismatch between their expected, desired, and actual division of labor had consequences for fathers’ intentions to take parental leave in the future. Across the empirical chapters, we found that men generally had high communal intentions and did not consider care engagement as nonnormative for their gender. However, men continue to face barriers that prevent them from translating their communal intentions into behavior. We outline strengths and limitations of the present research given the emerging nature of the research field. Moreover, we discuss implications for future research on men’s orientation towards care as well as implications for how to foster the realization of communal intentions into actual behavior.
This thesis describes the synthesis and characterization of octahedral iron(II) and cobalt(II) complexes of the form [M(L-N4R2)(Y)](X)2 (M = Fe, Co) with the tetradentate diazapyridinophane ligands L-N4R2 (R = Me, tBu), bidentate pyridine- or quinoline-imidazole ligands (Y), and various counterions X (X = ClO4-, PF6-, BPh4-, OTf-). It was first shown that the introduction of substituents affects the ligand field strength, so the complexes can exhibit different magnetic properties; both low-spin and high-spin complexes were obtained. In addition, spin-crossover behavior was observed in many cases, usually as a thermally induced transition from the low-spin to the high-spin state, which is why such compounds are also referred to as molecular switches. Of particular interest were photochemically switchable ligands, characterized in this work by two 2,5-dimethylthienyl substituents in the 4- and 5-positions of the imidazole ring of the co-ligand. Upon irradiation with UV light of a wavelength characteristic of the compound, an intramolecular ring closure can occur via a new C-C bond. With the ligands 2-[4,5-bis(2,5-dimethyl-3-thienyl)-1-methyl-imidazol-2-yl]pyridine (dtpyim-m), 2-[4,5-bis(2,5-dimethyl-3-thienyl)-1-phenyl-imidazol-2-yl]pyridine (dtpyim-ph), and 2-[4,5-bis(2,5-dimethyl-3-thienyl)-1-methyl-imidazol-2-yl]quinoline (dtchim-m), complexes with such a chromophore in the ligand backbone were successfully synthesized. The resulting compounds [Fe(L-N4Me2)(dtpyim-m)](PF6)2, [Fe(L-N4tBu2)(dtpyim-m)](BPh4)2 · 2 MeCN, [Fe(L-N4tBu2)(dtpyim-ph)](BPh4)2 · Et2O, [Fe(L-N4Me2)(dtchim-m)](ClO4)2 · 1.5 DCM, [Co(L-N4Me2)(dtpyim-m)](ClO4)2, [Co(L-N4tBu2)(dtpyim-m)](BPh4)2 · MeCN · Et2O, and [Co(L-N4Me2)(dtchim-m)](ClO4)2 · 1.5 DCM were investigated with respect to both their magnetic and their photochemical properties.
Of great interest here was whether the photochemically initiated ring formation changes the σ-donor and π-acceptor properties of the ligand in such a way that the spin state of the metal center can be influenced.
Highly Automated Driving (HAD) vehicles represent complex and safety-critical systems. They are deployed in an open context, i.e., an intricate environment which undergoes continual change. The complexity of these systems and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behaviour. The safety-critical nature of HAD vehicles requires modelling the root causes of unsafe behaviour and their mitigation in order to argue sufficient reduction of residual risk.
Standardization activities such as ISO 21448 provide guidelines on the Safety Of The Intended Functionality (SOTIF) and focus on the analysis of performance limitations under the influence of triggering conditions that can lead to hazardous behaviour. SOTIF references traditional safety analysis methods, e.g., Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA), to perform safety analysis. These methods rest on certain assumptions, e.g., single-point failures in FMEA and independence of basic events in FTA. Moreover, these analyses are generally based on expert knowledge; data-based models or hybrid approaches (expert and data) are seldom practised. The resulting safety model is fixed, i.e., it is generally seen as a one-time artefact. The open context may, however, contain triggering conditions that are not evident to the expert; it also evolves over time, and new phenomena may emerge.
This thesis explores the applicability of traditional safety analysis techniques for building safety models of HAD vehicles operating in the open context, in light of the modelling assumptions those techniques make. Moreover, incorporating uncertainties into safety analysis models is also explored. An explicit distinction between the inherent uncertainty of a probabilistic event (aleatory) and uncertainty due to lack of knowledge (epistemic) is made to formalize models for performing SOTIF analysis. A further distinction is made for conditions of complete ignorance, termed ontological uncertainty. This distinction is important because, for HAD vehicles operating in the open context, ontological uncertainty can never be completely disregarded.
This thesis proposes a novel SOTIF framework to model, estimate, and discover triggering conditions relevant to performance limitations. The framework provides the ability to model uncertainties while also supporting a hybrid approach, i.e., the inclusion of expert knowledge as well as data-driven engineering processes. Two representative algorithms are provided to support the framework, utilising Bayesian Networks (BN) and p-value hypothesis testing. The framework is demonstrated on a real-world case study in which LiDAR-based perception systems are used for vehicle detection.
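The p-value side of such a framework can be illustrated with a minimal sketch; the one-sided Fisher exact test below (built from the hypergeometric distribution) is my own minimal version of the general idea of testing a candidate triggering condition against observed failures, not the thesis's algorithm, and the counts are assumptions:

```python
# Hedged sketch: does a candidate triggering condition (e.g. heavy rain)
# co-occur with perception failures more often than chance would allow?
# One-sided Fisher exact test computed from the hypergeometric tail.
from math import comb

def fisher_one_sided(fail_with, ok_with, fail_without, ok_without):
    """P(>= fail_with failures among condition frames | independence)."""
    n_total = fail_with + ok_with + fail_without + ok_without
    n_with = fail_with + ok_with            # frames with the condition
    n_fail = fail_with + fail_without       # failures overall
    denom = comb(n_total, n_with)
    p = 0.0
    for k in range(fail_with, min(n_with, n_fail) + 1):
        p += comb(n_fail, k) * comb(n_total - n_fail, n_with - k) / denom
    return p

# assumed counts: 8/20 failures with the condition vs. 2/80 without
p = fisher_one_sided(fail_with=8, ok_with=12, fail_without=2, ok_without=78)
assert p < 0.001   # strong evidence the condition is a triggering condition
```

A real framework would run such tests over many candidate conditions and correct for multiple testing; the sketch only shows the core computation.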
Inland waters, such as freshwater impoundments, are significant and variable sources of the greenhouse gas methane to the atmosphere. In water bodies, methane is mainly produced in the organic-matter-rich bottom sediment, where it can accumulate, form gas voids, and be transported to the atmosphere by gas bubbles escaping the sediment. This bubble-mediated transport of methane, known as methane ebullition, is commonly the dominant pathway of methane emissions in freshwater reservoirs. Ebullition results from a complex interplay of several simultaneous physical and biogeochemical processes acting at different timescales, leading to highly variable fluxes in both space and time. Although the sediment matrix is a hot spot for gas production and accumulation, there is a lack of in-situ data on free gas storage in reservoirs and on the interaction among sediment gas storage, methane budget, and methane ebullition. Several environmental variables are known to be ebullition drivers; however, simulating the temporal dynamics of ebullition and identifying the governing factors across different systems remains challenging. Therefore, the main goal of this thesis was to investigate the effect of different drivers on the spatial variability and temporal dynamics of methane ebullition in impoundments. Two contrasting reservoirs, one subtropical and one temperate, were investigated. High-frequency measurements of ebullition fluxes and environmental variables, together with acoustic-based mapping of gas content in the sediment, were performed in both reservoirs, constituting the dataset for this study. The main findings were presented in three scientific manuscripts. The spatial distribution of gas content in the sediment was primarily controlled by sediment deposition and water depth, with shallow regions of high sediment deposition being hot spots of free gas accumulation in the sediment.
Temporal changes in gas content in the sediment were linked to the methane budget components in the reservoir and further influenced by the temporal dynamics of ebullition. While the sediment could store days of accumulated potential methane production, which could sustain months of mean ebullition flux, periods of intensified ebullition led to a depletion of gas stored in the sediment. Large spatial scale ebullition drivers, such as pressure changes, resulted in the synchronization of ebullition events across different monitoring sites. Nevertheless, the degree of correlation between ebullition and environmental variables varied from one system to another and over time. Thermal stratification was an important modulator in the relationship between ebullition and other environmental variables, such as bottom currents and turbulence. The temporal dynamics of ebullition could be captured and reproduced by empirical models based on known environmental variables. However, these models failed to reproduce the sub-daily variabilities of ebullition and demonstrated poor performance when transferred from one system to another. Lastly, although some questions remain unanswered, the findings from this study contribute to advancing the understanding of the complex dynamics of methane ebullition and its controls in freshwater reservoirs.
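The empirical single-driver models mentioned above can be sketched in a few lines; the ordinary-least-squares fit and the synthetic pressure/flux numbers below are illustrative assumptions, not the thesis's models or data:

```python
# Hedged sketch: fit an empirical linear model
#   ebullition flux ~ drop in hydrostatic pressure
# by ordinary least squares, the kind of single-driver regression that can
# capture daily but not sub-daily ebullition dynamics.

def ols_fit(x, y):
    """Closed-form OLS for one predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# assumed daily data: pressure drop [hPa] vs. measured flux [mL m^-2 d^-1]
dp = [0.0, 2.0, 5.0, 1.0, 8.0, 3.0]
flux = [5.0, 9.0, 15.0, 7.0, 21.0, 11.0]
slope, intercept = ols_fit(dp, flux)
assert slope > 0   # falling pressure is associated with more gas release
```

The poor transferability reported in the thesis corresponds to the fitted slope and intercept being site-specific: coefficients estimated in one reservoir generally do not predict fluxes in another.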
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width at most 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
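The containment question the algorithm answers can be made concrete with a brute-force check; the code below is my own illustration of subgraph containment for small graphs (exhaustive injective mappings), not the path-width-based algorithm derived in the thesis:

```python
# Hedged sketch: does a host graph contain a given pattern as a subgraph?
# Brute force over all injective vertex mappings; only viable for small
# patterns, which is why structured algorithms like the thesis's are needed.
from itertools import permutations

def has_subgraph(host_edges, pattern_edges):
    hv = {v for e in host_edges for v in e}
    pv = sorted({v for e in pattern_edges for v in e})
    host = {frozenset(e) for e in host_edges}
    # try every injective mapping of pattern vertices into the host
    for image in permutations(hv, len(pv)):
        f = dict(zip(pv, image))
        if all(frozenset((f[a], f[b])) in host for a, b in pattern_edges):
            return True
    return False

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # the cubic graph K4
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
claw = [("x", "a"), ("x", "b"), ("x", "c")]              # K_{1,3}
assert has_subgraph(k4, triangle)
assert has_subgraph(k4, claw)   # every vertex of a cubic graph centers a claw
```

The undecidability result concerns the harder question of whether *some* graph in an infinite class \(\mathcal G\) of bounded path-width contains a pattern from \(\mathcal U\); the per-graph check above is the decidable building block.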
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that, in a local certification setting, verifying the existence of an \(st\)-path cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
Agricultural intensification has increased substantially in the last century to meet the globally growing demand for food, fodder, and bioenergy; as a result, agricultural cropland has become the largest terrestrial biome globally. Pesticides became a central tool of this intensification strategy, and their application rose drastically over the last sixty years to secure or increase crop yields. However, pesticides are by design biologically active and known to contaminate non-target ecosystems, thereby adversely affecting their function or structure. Even though ecotoxicological knowledge about their probable fate and effects has grown, little is known about the spatiotemporal occurrence, potential effects, and risk drivers of pesticides on larger, i.e., macro, scales.
Consequently, the thesis gathered pesticide exposure data, primarily via meta-analysis and from public monitoring databases, to describe (i) detailed risks in aquatic ecosystems, (ii) the underlying risk drivers, (iii) associated spatiotemporal trends, (iv) the effect of land use and land protection, and (v) the protectiveness of regulatory frameworks. First, a meta-analysis of insecticides occurring in US surface waters (n = 5,817, 259 studies) revealed large-scale risks for aquatic ecosystems based on the exceedance of regulatory threshold levels (RTLs) and identified high-risk substances, particularly pyrethroids, with increasing application trends (publication I). Following this, spatiotemporal factors driving insecticide risks were identified via model building, demonstrating that toxicity-weighted pesticide use was the primary driver in surface waters; subsequent model application generated a spatially comprehensive risk assessment for the United States (publication II). The toxicity-weighted pesticide use was subsequently expanded in an ongoing project covering additional species groups and all pesticides used in the US from 1992 to 2016, highlighting a drastic shift of toxic pressure from vertebrates to aquatic invertebrates. Large-scale monitoring data from European surface waters (n > 8.3 million) covering 352 organic chemicals identified pesticides as the main class of organic contaminants causing risks in aquatic ecosystems. Additional analyses established links between agricultural intensity and the resulting environmental risks for aquatic invertebrates and plants on this macro scale (publication III). Finally, high-resolution monitoring data from Saxony, Germany, provided, for the first time, detailed insights into the occurrence and resulting risks of organic contaminants (primarily pesticides) in protected surface waters of nature conservation areas (publication IV).
In summary, the thesis gathered and used large-scale datasets to analyze the impact of agricultural intensification, and of anthropogenic land use more broadly, on ecosystems, reducing knowledge deficits in ecotoxicology on macro scales. Insecticides were shown to be important and spatially extensive agents of impairment of surface water quality and to be directly linked to their use in the respective landscapes. Changes in pesticide use composition over time shifted environmental risks from vertebrates to other central species groups (e.g., aquatic invertebrates), highlighting a new challenge to the integrity of aquatic environments. The thesis provided novel insights into contaminants' individual risk characteristics, their interaction with various spatiotemporal drivers, and their relevance on various macro scales. Overall, a discrepancy remains evident between the environmental impacts of pesticides estimated during regulatory approval processes and a posteriori field measurements detailing larger-than-assumed adverse exposures and effects. This discrepancy makes pesticides the most impactful chemical stressor for aquatic ecosystems compared with other organic contaminants on a continental scale; a threat that has even increased for some species groups. The extensive use of pesticides has reached levels at which even strictly protected surface waters in Germany are regularly and adversely exposed, threatening the conservation areas' function as ecological refugia. Taken together, the thesis provides new macro-scale evidence on the contribution of pesticides (and associated drivers) to the large-scale changes in biological systems observed over the last decades, underlining their likely contribution to the ongoing global freshwater biodiversity crisis. Agricultural systems in particular will require substantial changes going forward to protect or re-establish the integrity of aquatic ecosystems and their provision of vital ecological services.
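The risk metric underlying these analyses, exceedance of a regulatory threshold level, can be sketched in a few lines; the function and all concentrations below are my own illustrative assumptions, not data or thresholds from the thesis:

```python
# Hedged sketch: compare measured concentrations against a regulatory
# threshold level (RTL) and aggregate them into an exceedance frequency
# and a worst-case RTL quotient, as in monitoring-based risk assessments.

def rtl_exceedance(samples_ug_l, rtl_ug_l):
    """Return (fraction of samples above the RTL, max RTL quotient)."""
    quotients = [c / rtl_ug_l for c in samples_ug_l]
    above = sum(q > 1 for q in quotients)
    return above / len(quotients), max(quotients)

# hypothetical insecticide measurements vs. an assumed RTL of 0.01 ug/L
conc = [0.002, 0.015, 0.0005, 0.08, 0.009]
freq, worst = rtl_exceedance(conc, rtl_ug_l=0.01)
assert freq == 0.4              # 2 of 5 samples exceed the threshold
assert abs(worst - 8.0) < 1e-6  # peak concentration is 8x the RTL
```

Toxicity weighting extends the same quotient idea by dividing applied amounts by species-group-specific effect concentrations before aggregating over space and time.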
In the present dissertation, the corrosion system of galvanized steel in alkaline, cement-based solid electrolytes was investigated. Three cement types, each with four chloride contents, were used. Besides the corrosion system itself, attention was also paid to an adequate description of the electrolyte properties. To this end, the mass change, the change of the electrical mortar resistance and of the IR drop over time, as well as the porosity and the pore water composition, to name only a few parameters, were investigated.
The main focus, however, is on the description of the corrosion system of galvanized steel in mortar. For this purpose, the covering layers formed and their influence on the corrosion progress were determined as a function of the cement type and the chloride content, together with the resulting electrochemical parameters.
The results of these investigations make it possible to differentiate the corrosion systems as a function of the chloride content or of geometric inhomogeneities in the mortar/galvanized steel interface region. Independently of the cements used, the corrosion systems could be classified via the phase angle at 0.1 Hz. The classes comprised charge-transfer-controlled and diffusion-controlled corrosion systems, mixed systems, transition systems, and corrosion systems with crevice geometry.
For these corrosion systems, it could be established which covering layers have a decisive influence on the formation of a charge-transfer-controlled corrosion system. These include the already known simonkolleite and a covering-layer variant that has not previously been described in the literature as a covering layer on galvanized reinforcing steel in mortar or concrete.
For the mixed systems, the fractional coverage with simonkolleite was presented in order to describe the transition to a charge-transfer-controlled corrosion system.
In addition to the classification of the corrosion systems, a specific B value can now also be assigned to each corrosion system by determining the phase angle at 0.1 Hz. In combination with LPR measurements for determining the polarization resistance, adapted to these corrosion systems, corrosion rates can be determined without significantly disturbing the corrosion system.
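The corrosion-rate determination described here follows the Stern-Geary relation, in which the corrosion current density equals the B value divided by the polarization resistance obtained from the LPR measurement. A minimal sketch with purely hypothetical numbers; the system-specific B values assigned via the phase angle at 0.1 Hz are a result of this work and are not reproduced here.

```python
def stern_geary_b(beta_a, beta_c):
    """B value (V) from anodic and cathodic Tafel slopes (V/decade):
    B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))."""
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

def corrosion_current_density(b_value, r_p):
    """Stern-Geary relation i_corr = B / R_p.
    b_value in V, r_p in ohm*cm^2 -> i_corr in A/cm^2."""
    return b_value / r_p

# Hypothetical example values for illustration only:
B = stern_geary_b(0.12, 0.12)                   # ~0.026 V
i_corr = corrosion_current_density(B, 1.0e4)    # R_p = 10 kOhm*cm^2
```

The point of the adapted LPR procedure is precisely to obtain R_p with a small perturbation, so that i_corr can be computed without disturbing the corrosion system.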
With the move away from fossil raw materials, the production of plastics from alternative raw material sources will gain importance in the coming decades. In this work, a process was therefore investigated in which primary sludge (PS) from a municipal wastewater treatment plant (WWTP) and brewery wastewater serve as raw materials for polymer production. A three-stage process was used in which polyhydroxyalkanoates (PHA), a group of biodegradable polymers, can be synthesized by mixed bacterial cultures such as waste activated sludge (WAS) after targeted selection and accumulation. Short-chain organic acids (VFA) from the acidification of the wastewater streams serve as substrate for this process. The aim of the work was to extend the existing fundamental knowledge in order to scale the PHA production process to larger production scales and thus prepare operation under real conditions. To achieve these goals, the work was divided into three parts.
In the first part, a preliminary study and subsequent acidification experiments with four food-industry wastewaters at laboratory scale showed how the suitability of potential wastewater streams for VFA and PHA production can be assessed and how the acidification conditions can be chosen specifically for the respective wastewater depending on the substrate composition. In a next step, it was demonstrated that constant selection conditions applied to WAS from different WWTPs led to a comparable PHA storage capacity. However, the long-term stability of relevant operating parameters, which has so far been lacking, requires further investigation. Furthermore, using brewery wastewater as an example, it was shown that applying a low volumetric loading rate in the selection stage enables operation with wastewaters of low COD.
In the second part of the work, a pilot plant at the Buchenhofen WWTP in Wuppertal was operated with PS at the lowest possible resource input. The VFA-based COD of the acidified PS showed a large range independent of the season, yet year-round suitability for PHA production was given. The VFA composition was stable throughout the year and thus enabled a constant PHA composition. The PHA storage capacity could not be reproduced in a selection operation with low volumetric loading rate. It is assumed that differences in the VFA fraction of the substrate were the primary cause.
For the third part of the work, building on the results of the pilot operation and dispensing with auxiliary streams, a process design was carried out for the connection size of the Buchenhofen WWTP with 550,000 population equivalents. The results allowed the dimensions of a PHA production plant to be assessed. Furthermore, the design revealed starting points that must be adapted or investigated in experimental operation in order to enable a realistic and economical mode of operation in further process development.
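The suitability screening in the first part rests on the fraction of the total COD that is present as VFA after acidification. A small sketch of that bookkeeping, using standard stoichiometric oxygen demands of the common acids and entirely made-up concentrations, not data from this work:

```python
# Theoretical oxygen demand in g COD per g acid (stoichiometric):
# acetic:    CH3COOH  + 2 O2   -> 2 CO2 + 2 H2O  -> 64/60
# propionic: C2H5COOH + 3.5 O2 -> 3 CO2 + 3 H2O  -> 112/74
# butyric:   C3H7COOH + 5 O2   -> 4 CO2 + 4 H2O  -> 160/88
THOD = {"acetic": 64 / 60, "propionic": 112 / 74, "butyric": 160 / 88}

def vfa_cod_fraction(vfa_g_l, total_cod_g_l):
    """VFA-based COD divided by total COD (degree of acidification)."""
    cod_vfa = sum(THOD[acid] * conc for acid, conc in vfa_g_l.items())
    return cod_vfa / total_cod_g_l

# Hypothetical acidified stream: VFA concentrations in g/L, 6 g/L total COD
frac = vfa_cod_fraction({"acetic": 2.0, "propionic": 0.5, "butyric": 0.3}, 6.0)
```

A high and stable fraction of this kind is what made the acidified primary sludge suitable as a PHA substrate year-round in the pilot operation.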
More than 2.4 % of the continental surface area is covered by shallow aquatic systems such as ponds. Despite occupying only a tiny fraction of the earth's surface area, ponds are globally significant sites of carbon cycling. They receive carbon, process it, and emit large amounts of greenhouse gases into the atmosphere, most notably carbon dioxide (CO2) and methane (CH4). Tube-dwelling macroinvertebrates, such as chironomid larvae (Diptera: Chironomidae), change biogeochemical functions, particularly in shallow aquatic systems. Through bioturbation, involving burrow ventilation and sediment particle reworking, tube-dwelling macroinvertebrates enhance solute exchange between sediment and water, stimulate the benthic microbial community, and regulate organic matter decomposition. This doctoral project integrates aquatic carbon biogeochemistry with ecological research to assess biogeochemical reaction dynamics upon application of the mosquito control biocide Bacillus thuringiensis israelensis (Bti), an entomopathogen that kills mosquito larvae but also reduces the abundance of chironomids. The interdisciplinary approach combines field measurements and laboratory experiments. First, an experiment was conducted in 12 outdoor floodplain pond mesocosms (FPMs), in which the effect of Bti application on carbon transformations, carbon pools, and carbon fluxes was monitored for one year. Half of the FPMs were Bti-treated and the remaining half were controls. The study revealed that seasonal variations governed changes in the carbon transformations, pools, and fluxes. Treated FPMs, for which companion studies reported a 26 % reduction in emerging merolimnic insects and a 41 % reduction in macroinvertebrate abundance, were higher CH4 emitters (137 % higher than the control mesocosms).
The higher CH4 emissions occurred specifically in the shallow zone, where the macroinvertebrate reduction was also significant. The same treated FPMs showed a tendency towards less dissolved organic carbon in porewater (33 % lower than in control mesocosms), potentially caused by the reduction in the bioturbation activities of chironomids, whereas the remaining measured components of the carbon budget were not affected by the Bti treatment. Second, laboratory microcosm (LM) experiments that excluded environmental constraints were developed to clarify the findings of the FPM experiment. The 15 microcosms were divided into five sets of three, treated with the standard Bti dose, five times the standard Bti dose, chironomid larvae at low areal density, chironomid larvae at high areal density, or serving as controls. The findings demonstrated that bioturbation increased CH4 and CO2 efflux and sediment oxygen (O2) consumption, while it did not affect the net production of CH4 and CO2. The negligible effect on net production rates in treatments with chironomids indicates that the increase in emission rates was predominantly caused by bioturbation, which reduced gas accumulation in the sediment. In the absence of chironomids, the application of either Bti dose led to a substantially higher net production rate of CH4 and CO2 (up to 2.7 times that of the controls), due to the large addition of bioavailable carbon through the Bti excipients. However, the carbon added through the Bti excipients alone could not account for the high net production rate, suggesting that the addition of Bti triggered a more vigorous carbon metabolism. Both the FPM and LM results suggest that the application of Bti may have functional implications for carbon biogeochemistry in affected aquatic systems beyond those mediated by changes in macroinvertebrate communities.
Composite girders made of steel sections connected to the concrete floor slab with a shear-resistant joint represent a particularly economical construction method. In today's typical building and bridge construction elements, mechanical shear connectors are used, which lead to high local stresses, particularly in the concrete.
By replacing these mechanical shear connectors with adhesive bonding, these high local stress peaks can be avoided. In addition, adhesive bonding enables novel designs, including designs using novel materials.
The durability of the connection under climatic and thermal influences as well as mechanical loading is of decisive importance for the applicability of adhesive bonding.
A lack of knowledge about the long-term behavior of adhesive joints, particularly under long-lasting climatic exposure, currently stands in the way of a widespread application of structural bonding in civil engineering beyond the execution of pilot projects.
Therefore, insights into the long-term load-bearing behavior of the composite joint between steel and concrete are to be gained, taking into account the residual stress states in the adhesive joint. The description of the adhesive behavior and the consideration of the residual stress states and of the influences of restrained transverse strain are of particular importance here. In this evaluation of the load-bearing and deformation behavior under long-lasting climatic exposure, the interaction of the adhesive with the joining partners steel and, in particular, concrete must also be addressed, since the mechanical properties of these substrates can also become decisive for the behavior of the adhesive joint. In particular, the joining partner concrete, with its high alkalinity, its water content, and its permeability to moisture, is the focus of the investigations. The use of adhesive bonding opens up the possibility of developing new geometries for composite steel members that no longer have to take into account the boundary conditions of mechanical shear connectors, such as minimum thicknesses, minimum widths, or minimum edge distances.
This doctoral dissertation comprises nine published articles covering different methods for 'Fast, Robust Rigid and Non-Rigid Registration for Globally Consistent 3D Scene and Shape Reconstruction'. The contributing articles are organized and discussed in three parts. The first part of the thesis, i.e., chapter 2, explains three novel method classes for rigid point set registration, namely the Gravitational Approach (GA), the Fast Gravitational Approach (FGA), and RPSRNet. GA was introduced as the first physics-based rigid point set registration method. It includes an elegant modeling of rigid-body dynamics using Newtonian mechanics, and it opened many new avenues for pattern matching tasks beyond point set registration. The FGA method, published four years after GA, was presented as an extension that reduces the algorithmic complexity of GA from O(MN) to O(M log N) using a Barnes-Hut tree representation of the point cloud. It also eliminates GA's requirement of heuristically chosen optimization parameters and achieves state-of-the-art alignment accuracy on LiDAR odometry. Finally, RPSRNet presents a deep-learning version of FGA with custom convolution layers for hierarchical point feature embedding. RPSRNet is robust and the fastest among state-of-the-art methods for LiDAR data registration. The second part of the thesis, i.e., chapter 3, introduces NRGA as the first physics-based non-rigid point set registration method, which is computationally slow but robust against noisy and partial inputs. NRGA preserves structural consistency as it coherently regularizes the motion of deformable vertices. For articulated hand shape reconstruction, a tailored version of NRGA – Articulated-NRGA – is effective in refining the final hand shape. Collision and penetration avoidance between source and target surfaces is tackled by constrained optimization in NRGA; this setting has improved the reconstruction of hand-object interactions. The next contribution, the FoldMatch method, remodels shape deformation by introducing a wrinkle vector field (WVF) for capturing complex clothing and garment details while fitting body models onto 3D scans. Quantitative evaluation of FoldMatch and NRGA shows their effectiveness in geometrically consistent surface modeling and reconstruction tasks. Finally, the third part of the thesis explains globally consistent outdoor scene reconstruction, odometry estimation, and uncertainty-guided pose-graph optimization in a novel LiDAR-based localization and map building method called Deep Evidential LiDAR Odometry (DELO). This is the first odometry method to use predictive uncertainty modeling for its sensor pose prediction network.
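For orientation, the rigid registration problem addressed by GA, FGA, and RPSRNet can be contrasted with the classical correspondence-based baseline: when point correspondences are known, the optimal rotation and translation have a closed-form SVD solution (the Kabsch algorithm). The sketch below shows only that textbook baseline; it is not the gravitational method of the thesis, which works without known correspondences.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with R @ P_i + t ~ Q_i,
    for point sets with known correspondences (rows of P match rows
    of Q). Standard SVD-based Kabsch solution."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Synthetic check: rotate and translate a random cloud, then recover it
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch(P, Q)
```

Methods such as GA replace the known-correspondence assumption with a physics-inspired attraction between the point sets, which is what makes the problem hard and the Barnes-Hut acceleration in FGA worthwhile.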
Molecular simulation is an important tool for investigating the behavior of fluids and solids. Nanoscopic processes and physical properties of materials can be studied predictively based on the description of the molecular interactions by force fields. This is used in the present work to tackle engineering questions that are hard to answer with other methods. First, mass transfer at fluid interfaces was investigated at the nanoscopic level. To this end, two distinct simulation methods were developed and used to systematically investigate mass transfer in mixtures of simple model fluids described by the 'Lennard-Jones truncated and shifted' (LJTS) potential. The research question was whether the adsorption of components at the interface, which is observed also in many simple fluid mixtures, has an influence on the mass transfer. Such an influence was indeed found in the studies with both scenarios. Furthermore, explosions of nanodroplets caused by spontaneous evaporation of the liquid phase were investigated with non-equilibrium molecular dynamics (NEMD) simulations. In these simulations, the interior of an LJTS droplet was superheated by a local thermostat so that a vapor bubble nucleated inside the droplet. Depending on the degree of superheating, different phenomena were observed, ranging from simple evaporation of the droplet through oscillatory behavior of the bubble to an immediate droplet explosion. For molecular simulations of real mixtures, suitable force fields are needed. In this work, a set of molecular models for the alkali nitrates was developed and systematically compared to experimental data on thermophysical and structural properties of aqueous alkali nitrate solutions from the literature. Lastly, the structure and clustering of 1:1 electrolytes in aqueous solution was investigated over a broad concentration range, from near infinite dilution up to high supersaturation.
Based on the simulation results, an empirical rule was proposed to provide estimates of the solubility of salts with standard molecular dynamics simulations, without the need for elaborate calculation schemes or significant additional computational effort.
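The LJTS model fluid used in these studies is the plain Lennard-Jones 12-6 potential truncated at a cutoff radius and shifted so that it is continuous there. A minimal sketch in reduced units; r_c = 2.5 σ is the cutoff commonly used for the LJTS fluid:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones 12-6 potential: 4*eps*((s/r)^12 - (s/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def ljts(r, eps=1.0, sigma=1.0, rc=2.5):
    """Lennard-Jones truncated and shifted (LJTS) potential:
    u(r) - u(rc) for r < rc, and 0 beyond the cutoff."""
    if r >= rc:
        return 0.0
    return lj(r, eps, sigma) - lj(rc, eps, sigma)
```

The shift removes the energy discontinuity at the cutoff, which is why the LJTS fluid is a popular, well-characterized model system for interfacial and nucleation studies like the ones described above.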
Drought is a significant environmental factor that can impair plant growth and development, leading to reduced crop productivity or even plant death. Maintaining sugar distribution from source to sink is crucial for increasing crop production under water-limited conditions. Numerous studies have suggested that nutrient fertilization, especially with potassium (K), can enhance plant growth and yield. To investigate the role of K in long-distance sugar transport under drought stress, we established a soil-based and a hydroponic growth system with varying amounts of potassium supplementation and analyzed the biochemical and molecular responses of Arabidopsis and potato plants under drought stress conditions. Our findings showed that excess potassium fertilization limited sucrose metabolism, leading to lower drought tolerance in Arabidopsis in both growth systems. However, higher potassium supplementation altered sugar relocation and potassium movement, resulting in an increase in starch yield in both potato plants with different sink strength capacities. We also propose that a low amount of sodium increases Arabidopsis drought tolerance under low-potassium conditions, since small amounts of sodium can improve the control of osmotic potential, leading to more water being retained in plant cells.
Silicon (Si) has received considerable attention recently for its potential to mitigate drought stress, although the effects vary among plant species. To investigate the mechanism of Si-mediated drought stress tolerance, we applied monosilicic acid in hydroponic media and then applied PEG8000 to simulate drought stress. Our findings revealed that Si-dependent drought mitigation occurred more in the shoot than in the root of Arabidopsis, and we observed silicon accumulation in the Arabidopsis shoot. In Si-treated plants, more glucose accumulated in the vacuole, leading to better control of osmotic potential under drought stress. RNA sequencing analysis showed that Si altered the activity of sugar transporters and the sugar metabolism process, and increased photosynthesis. However, the Si-dependent regulation of sugar transporters showed different responses in potato; understanding the mechanism of Si in potato requires further studies. Overall, this dissertation provides important information for clarifying the mechanism of Si in drought stress, which forms the basis for further investigation.
Zur Querkrafttragfähigkeit von Stahlbetondecken mit integrierten Hohlräumen unter Zugbeanspruchung
(2023)
To realize slender, long-span floor systems in building construction, one-way and two-way spanning voided slabs are increasingly being used. In this construction method, the slab cross-sections are deliberately weakened compared to solid reinforced concrete slabs. The same applies to reinforced concrete slabs used as an installation level for building services lines. In both cases, the integrated voids reduce the shear capacity of reinforced concrete slabs without shear reinforcement. In recent years, the shear capacity of reinforced concrete slabs with integrated service lines has been researched in depth at TU Kaiserslautern. The design concepts developed are available to construction practice with the national explanatory notes in DAfStb Heft 600 on DIN EN 1992-1-1. However, the influence of longitudinal tension on the shear capacity of reinforced concrete slabs with integrated voids is largely unknown. In this work, the shear failure of one-way spanning reinforced concrete slabs with integrated voids and void formers under longitudinal tension is investigated with experimental studies and numerical simulations. The work provides insight into the load-bearing behavior of weakened reinforced concrete members under longitudinal tension resulting from a direct action. As expected, the test results show an unfavorable influence of longitudinal tension on the shear capacity of solid reinforced concrete slabs. In contrast, reinforced concrete slabs with integrated voids and reinforced concrete slabs with integrated void formers show a considerably smaller unfavorable influence of longitudinal tension on the shear capacity, which decreases markedly with increasing cross-section weakening. Based on the results, a proposal is developed to extend the existing design concept according to DAfStb Heft 600 for the shear design of reinforced concrete members with integrated voids under longitudinal tension.
Likewise, a proposal is made to extend the design concept for the shear capacity of voided slabs according to the general national technical approvals for voided slabs of the types Cobiax Eco-Line, Cobiax Slim-Line, and Unidome XS.
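The role of longitudinal tension in such design concepts can be illustrated with the general shear resistance format of EN 1992-1-1 for members without shear reinforcement, where an axial stress term k1·σcp (negative for tension) directly reduces the resistance. The sketch below shows only this standard code format with hypothetical numbers; it is not the extended design concept proposed in the thesis, and the code's minimum-value term is omitted for brevity.

```python
import math

def v_rd_c(f_ck, rho_l, b_w, d, sigma_cp=0.0, gamma_c=1.5, k1=0.15):
    """Design shear resistance (N) of a member without shear
    reinforcement, in the EN 1992-1-1 Eq. (6.2a) format.
    sigma_cp = N_Ed / A_c in MPa (compression positive, tension
    negative); b_w and d in mm; f_ck in MPa."""
    c_rd_c = 0.18 / gamma_c
    k = min(1.0 + math.sqrt(200.0 / d), 2.0)   # size effect factor
    return (c_rd_c * k * (100.0 * rho_l * f_ck) ** (1 / 3)
            + k1 * sigma_cp) * b_w * d

# Hypothetical slab strip: C30/37, rho_l = 1 %, b_w = 1000 mm, d = 250 mm
v0 = v_rd_c(30.0, 0.01, 1000.0, 250.0)                   # no axial force
v_t = v_rd_c(30.0, 0.01, 1000.0, 250.0, sigma_cp=-1.0)   # 1 MPa tension
```

In this format, 1 MPa of longitudinal tension removes a fixed 0.15 MPa from the shear stress resistance; the experimental finding of the thesis is that this penalty is markedly smaller for slabs with integrated voids and void formers.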
In this work, chiral ligands with a 2,6-bis(pyrazol-1-yl)pyridine backbone were prepared from the natural products camphor, menthone, and carvone. These were reacted with ruthenium(II) complex precursors. This revealed the necessity of a pre-coordination enabled by the spatial proximity of a π system. Therefore, further asymmetric ruthenium complexes were prepared by reduction of chiral ruthenium(III) precursor compounds. These proved to be monocationic compounds and were verified by crystal structure analysis. Using the catalytically active complexes in the homogeneously catalyzed transfer hydrogenation of acetophenone, enantiomeric excesses of the (S)-configured alcohol were obtained. In addition, the preparation of further chiral 3d and 4d transition metal complexes was achieved. Furthermore, dicationic ruthenium complexes were prepared, on which the coordination of small donor molecules was carried out, whereby these complexes could be used for catalytic ammonia synthesis. Moreover, the significant properties of intramolecularly coordinating allyl functions of the ligand became evident.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinguishing between isotropic and anisotropic structures is defined based on our method. Finally, the usefulness of the method is shown on 3d CT images in three different tasks on a fiber-reinforced polymer, concrete with cracks, and partially closed foams. Additionally, our method is extended to construct line granulometries and to characterize fiber length and orientation distributions in fiber-reinforced polymers produced either by 3d printing or by injection moulding.
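As a point of reference for such pixel-wise orientation estimation, the classical structure-tensor approach computes local orientation from smoothed products of image gradients. The 2d sketch below is this standard baseline, not the adaptive directional-filtering method of the thesis; the test image is a synthetic stripe pattern.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_2d(img, sigma_grad=1.0, sigma_avg=4.0):
    """Pixel-wise 2d orientation via the classical structure tensor.
    Returns the angle (rad) of the local structure direction, i.e.
    the direction of minimal intensity change."""
    gy, gx = np.gradient(gaussian_filter(img, sigma_grad))
    jxx = gaussian_filter(gx * gx, sigma_avg)
    jyy = gaussian_filter(gy * gy, sigma_avg)
    jxy = gaussian_filter(gx * gy, sigma_avg)
    # dominant gradient direction, rotated by 90 deg to the structure
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2

# Vertical stripes: intensity varies along x, structures run along y,
# so the estimated orientation should be ~pi/2 everywhere.
x = np.arange(64)
img = np.sin(2 * np.pi * x / 8.0)[None, :] * np.ones((64, 1))
theta = orientation_2d(img)
```

In 3d the same eigen-analysis applies to a 3x3 tensor, and it is precisely the cost of the subsequent orientation-adaptive filtering on a finely discretized sphere that motivates the adaptive refinement scheme described above.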
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale-invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale-equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To demonstrate its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is utilized to construct a scale-equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. The mathematical foundations behind this representation are laid out and analyzed. We show that this representation, with four times fewer features than the original scattering networks of Mallat, performs comparably well on texture classification and gives superior performance when dealing with scales outside the training set distribution.
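The scale equivariance exploited here stems from the frequency-domain definition of the Riesz transform, which multiplies the spectrum by -i ξ_j/|ξ| and therefore commutes with dilations. A minimal 2d implementation via the FFT (a generic sketch of the operator, not the Riesz network itself):

```python
import numpy as np

def riesz_transform_2d(img):
    """First-order Riesz transforms (R1 f, R2 f) of a 2d image,
    computed via the frequency-domain multipliers -i*xi_j/|xi|."""
    f = np.fft.fft2(img)
    xi_y = np.fft.fftfreq(img.shape[0])[:, None]
    xi_x = np.fft.fftfreq(img.shape[1])[None, :]
    norm = np.sqrt(xi_x ** 2 + xi_y ** 2)
    norm[0, 0] = 1.0                    # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(-1j * xi_x / norm * f))
    r2 = np.real(np.fft.ifft2(-1j * xi_y / norm * f))
    return r1, r2
```

A quick sanity check: for a pure cosine wave along x, R1 returns the corresponding sine wave (a Hilbert-transform-like 90-degree phase shift) while R2 vanishes, and this behavior is independent of the wavelength, which is the scale equivariance the network builds on.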
The present work examines the properties of a tribological contact under electric current passage. Such undesired current passages arise, for example, from the combination of unavoidable parasitic capacitances in an electric motor with the use of fast-switching frequency converters with steep voltage edges. These current passages damage both the contact partners and the lubricant between them. Electro-mechanical simulations are used to predict the susceptibility of a drivetrain to parasitic current passage. They allow the tribological-electrical contact to be assessed with regard to its risk of current passage, and suitable countermeasures can then be taken on this basis.
To further improve such electro-mechanical simulations, the breakdown voltage as well as the resistance of the discharge channel are determined experimentally in the first part of the work. A pronounced nonlinear behavior of the discharge resistance is observed, which in this form cannot be fully explained with the knowledge from high-voltage engineering. Building on this, the long-term effects of parasitic current passage are examined in the defined tribological state of mixed friction. To identify interactions, extensive measurement data are recorded and analyzed. In the final part of the work, the effects of the electric current passage on the surface roughness are determined by simulation.
Hardware devices fabricated with recent process technology are intrinsically more susceptible to faults than before. Resilience against hardware faults is, therefore, a major concern for safety-critical embedded systems and has been addressed in several standards. These standards demand a systematic and thorough safety evaluation, especially for the highest safety levels. However, any attempt to cover all faults for all theoretically possible scenarios that a system might be used in can easily lead to excessive costs. Instead, an application-dependent approach should be taken: strategies for test and fault resilience must target only those faults that can actually have an effect in the situations in which the hardware is being used.
In order to provide the data for such safety evaluations, we propose scalable and formal methods to analyse the effects of hardware faults on hardware/software systems across three abstraction levels where we:
(1) perform a fault effect analysis at instruction set architecture level by employing fault injection into a hardware-dependent software model called program netlist,
(2) use the results from the program netlist analysis to perform a deductive analysis to determine “application-redundant” faults at the gate level by exploiting standard combinational test pattern generation,
(3) use the results from the program netlist analysis to perform an inductive analysis to identify all faults of a given fault list that can have an effect on selected objects of the high-level software, such as specified safety functions, by employing Abstract Interpretation.
These methods aid in the certification process for the higher safety levels by (a) providing formal guarantees that certain faults can be ignored and (b) pointing to those faults which need to be detected in order to ensure product safety.
We consider transient and permanent faults corrupting data in program-visible hardware registers and model them using the single-event upset and stuck-at fault models, respectively.
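The two fault models can be made concrete with a few lines of bit manipulation on a program-visible register value (an illustrative sketch, not the analysis tooling of the thesis):

```python
WIDTH = 32                    # illustrative 32-bit register
MASK = (1 << WIDTH) - 1

def single_event_upset(value, bit):
    """Transient fault: flip a single bit of the register value."""
    return (value ^ (1 << bit)) & MASK

def stuck_at(value, bit, stuck_value):
    """Permanent fault: force a single bit to 0 or 1, regardless of
    the value the program wrote."""
    if stuck_value:
        return (value | (1 << bit)) & MASK
    return value & ~(1 << bit) & MASK
```

Injecting such corruptions at every register and program location, and then tracking whether the corruption can ever reach a safety-relevant output, is the essence of the fault effect analysis on the program netlist described above.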
Scalability of our approaches results from combining an analysis at the machine and hardware level with separate analyses on gate-level and C-level source code, as well as from exploiting certain properties that are characteristic of embedded systems software. We demonstrate the effectiveness and scalability of each method on industry-oriented software, including a software system with about 138 k lines of C code.
Ambulatory assessment (AA) is becoming an increasingly popular research method in psychology and the life sciences. Nevertheless, knowledge about the effects that design choices, such as questionnaire length (i.e., the number of items per questionnaire), have on AA participants' perceived burden, data quantity (i.e., compliance with the AA protocol), and data quality is still surprisingly limited. The aims of this dissertation were to experimentally manipulate aspects of an AA study's sampling strategy - sampling frequency (Study 1) and questionnaire length (Study 2) - and to investigate their impact on perceived burden, data quantity, and aspects of data quality in three papers. In Study 1, students (n = 313) received either 3 or 9 questionnaires per day for the first 7 days of the study. In Study 2, students (n = 282) received either a 33- or an 82-item questionnaire 3 times a day for 14 days.
Paper 1 showed that a higher sampling frequency (Study 1) led to a higher perceived participant burden but did not affect other aspects of data quantity and quality. Furthermore, a longer questionnaire (Study 2) did not affect perceived participant burden or data quantity, but did lead to lower within-person variability and a weaker within-person relationship between time-varying variables. Paper 2 investigated the effects of sampling frequency (Study 1) on careless responding by identifying careless responding indices that could be applied to AA data and by extending the multilevel latent class analysis model to a multigroup multilevel latent class analysis model. Results indicated that a higher sampling frequency did not affect careless responding. Paper 3 investigated the effects of questionnaire length (Study 2) on (the relative impact of) response styles (RS) by extending the item response tree (IRTree) modeling approach to a multilevel data structure. Results indicated that a longer questionnaire led to a greater relative impact of RS.
Although further validation of the results is essential, I hope that future researchers will integrate the results of this dissertation when designing an AA study.
Thermodynamic Modeling of Poorly Specified Mixtures using NMR Fingerprinting and Machine Learning
(2023)
Poorly specified mixtures, i.e., mixtures of unknown or incompletely known composition,
are common in many fields of process engineering. Dealing with such mixtures in process
design is challenging as their properties cannot be described with classical thermodynamic
models, which require a full specification. As a workaround, pseudo-components
can be introduced, which are generally defined using ad-hoc assumptions. In the present
thesis, a new framework is developed for the thermodynamic modeling of such mixtures
using nuclear magnetic resonance (NMR) experiments in combination with machine-learning
(ML) methods. In the framework, a characterization of a mixture in terms of
structural groups (“NMR fingerprint”) is obtained by using the ML concept of support
vector classification. Based on the group-specific fingerprint, quantum-chemical descriptors
of the unknown part of the mixture as well as activity coefficients can already be
predicted. Furthermore, a meaningful definition of pseudo-components is achieved by
clustering the structural groups into pseudo-components with the K-medians algorithm
based on their self-diffusion coefficients measured by pulsed-field gradient (PFG) NMR.
It is demonstrated that the characterization of poorly specified mixtures in terms of
pseudo-components can be combined with several thermodynamic group-contribution
methods. The resulting thermodynamic models were applied to various poorly specified
mixtures and used for solving two typical tasks from conceptual fluid separation process
design: the solvent screening for liquid-liquid extraction processes and the simulation
of open evaporation processes. The predictions with the methods developed here show
very good agreement with the results obtained for the fully specified mixtures.
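The pseudo-component step can be illustrated as a one-dimensional K-medians clustering of group-wise self-diffusion coefficients. The coefficients and the implementation below are hypothetical sketches; the actual framework operates on measured PFG NMR data:

```python
import random

def k_medians_1d(values, k, iters=50, seed=0):
    """Cluster scalar self-diffusion coefficients into k pseudo-components
    using K-medians (L1 distance, median centroid update)."""
    rng = random.Random(seed)
    centers = rng.sample(sorted(values), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        new_centers = [sorted(c)[len(c) // 2] if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# Hypothetical self-diffusion coefficients (1e-9 m^2/s) of structural groups:
# groups belonging to the same molecule diffuse at the same rate, so they
# end up clustered into one pseudo-component.
d_coeffs = [0.51, 0.52, 0.50, 1.98, 2.01, 2.00]
centers, clusters = k_medians_1d(d_coeffs, k=2)
```

The design rationale is that groups sharing a self-diffusion coefficient likely belong to the same constituent, so the cluster medians define physically meaningful pseudo-components.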
Internal waves are oscillating disturbances within a stably density-stratified fluid. In stratified water basins, these waves have been identified as one of the most important processes of water movement and vertical mixing. A fraction of the wind momentum and energy that crosses the water surface is responsible for generating large standing internal waves, also called basin-scale internal seiches, in stratified basins. Despite the large number of publications describing different mechanisms that can influence dissipation rates and accelerate the damping of internal seiches in thermally stratified lakes and reservoirs, many details of their application to field observations are site-specific, and the effects are rarely evaluated in a combined way. This research paid particular attention to mechanisms that may contribute to inhibiting the generation of internal seiches, through field measurements and numerical simulations. Our results underline the importance of bathymetry for energy dissipation, indicating that a gently sloping bottom may act as a primary mechanism to inhibit the formation of internal seiches. The basin shape (reservoir bends) and self-induced mixing near the wave crest act as secondary mechanisms to extract energy from upwelling events, which are responsible for triggering internal seiches in thermally stratified lakes. Numerical simulations indicate that a higher amount of energy is transferred from the wind to the internal seiche for an increasing deviation of the stratification from a two-layer structure, suggesting that the stratification profile is not responsible for inhibiting the occurrence of basin-scale internal waves but only modifies their structure, favoring the formation of internal waves with higher vertical modes.
The outcome of this study may be of great relevance in describing the biogeochemical cycle in lakes and reservoirs, since each mechanism may have different trigger effects on the cycle of nutrients and other elements in thermally stratified lakes.
Numerical study on the kinematic response of piled foundations to a stationary or moving load
(2023)
The present numerical study focuses on the problem of dynamic interaction of piled foundations under harmonic excitation at high frequencies relevant for vibration protection practice. The finite-element programs Plaxis (2D & 3D) and Abaqus are employed for time- and frequency-domain analyses, respectively.
As a first step, dynamic impedances of pile groups, piled rafts and embedded footings are derived for all oscillation modes in order to gain insight into the problem of inertial loading.
Emphasis is placed on the kinematic response of single piles, pile groups and piled rafts to a wave field emanating from a distant stationary or moving harmonic vertical point load acting on the surface of the soil. Transfer functions, which are ratios relating the response of the foundation to that of the free-field, quantify the kinematic interaction. Only the vertical component of the response is assessed, as it is the most critical one for the selected excitation. It is shown that a stationary harmonic load is a good approximation for a moving harmonic load; this holds as long as the travelling speed of the load is relatively low in comparison with the Rayleigh wave velocity in the soil, which is quite common in engineering practice. Analogously, a static load is a good approximation of a moving load of constant magnitude. Moreover, analytical solutions are presented for single pile and pile group response under Rayleigh wave excitation, which can also be employed in the near-field, as shown herein.
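A kinematic transfer function of the kind described above is simply the complex ratio of the foundation's vertical response to the free-field vertical response at a given excitation frequency. A minimal sketch with hypothetical amplitudes (not results of the study):

```python
import cmath

def vertical_transfer_function(u_foundation, u_free_field):
    """Kinematic transfer function: complex ratio of the foundation's
    vertical steady-state response to the free-field vertical response."""
    return u_foundation / u_free_field

# Hypothetical complex amplitudes at one excitation frequency
# (magnitude in mm, phase in radians); the numbers are illustrative only.
u_ff = cmath.rect(1.00, 0.0)    # free-field vertical motion
u_fd = cmath.rect(0.35, -0.4)   # pile-group motion: reduced and lagging
H = vertical_transfer_function(u_fd, u_ff)
print(abs(H))  # ~0.35: the foundation filters out most of the free-field motion
```

A magnitude below one indicates that the piled foundation averages out the incoming wave field, which is the kinematic-interaction effect quantified in the study.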
The extension of piled foundations by additional rows against the wave propagation direction is examined from the standpoint of vibration protection. Indeed, for a considerable frequency range, the addition of further pile rows to a piled foundation has a favorable effect on the reduction of the vibration level calculated at the furthest-back pile row or at the free-field behind the foundation. This is no longer valid, however, as the excitation frequency increases further and the interplay between the piles becomes more complex. On the other hand, the extension of the piled foundation by additional pile columns parallel to the wave propagation direction has a positive effect at high frequencies.
The accuracy of the results is assessed by verification against rigorous solutions. The importance of key aspects in finite-element modelling is also highlighted.
This thesis presents a flexible dimeric ligand system based on 2,6-bis(1H-pyrazol-3-yl)pyridine (bpp) and 2,6-bis(1H-5-butylpyrazol-3-yl)pyridine (bpp(n-bu)). Two bpp/bpp(n-bu) units each were linked via variable building blocks, yielding various macrocyclic as well as open-chain ligands. These ligands were employed in the synthesis of multinuclear complexes. The use of the 3d metal ions iron(II), cobalt(II) and copper(II) gave access to various homo- as well as heterobimetallic complexes with the macrocyclic ligands, while the open-chain ligands self-assembled into homoleptic complexes with a helical structure. Palladium(II) and platinum(II) compounds provided insight into the behavior of the complexes in solution, showing that both the type of ligand linkage (macrocyclic vs. open-chain) and the nature of the linking unit influence the properties of the complexes in solution. Incorporating ruthenium(II) centers yielded mononuclear complexes of a series of substituted bpp(n-bu) derivatives, which were used to investigate the influence of the substituent on the catalytic activity of the compounds in transfer hydrogenation. Transferring this structural motif to a dimeric ligand gave a dinuclear complex, whose activity, however, is surpassed by that of the mononuclear compounds.
Various events, such as the Fukushima nuclear disaster or, more recently, the COVID-19 pandemic, have shown that global supply chains are becoming increasingly vulnerable to risks and uncertainties of various kinds. For this reason, the topic of supply chain resilience is becoming ever more important for researchers and managers alike.
Since existing research is often dominated by theoretical-conceptual approaches, this thesis adopts a structuration-theoretical perspective and asks what a practice-oriented approach to increasing the resilience of global supply chains can look like.
To this end, the thesis uses a comparative qualitative case study in which, on the one hand, several companies from the German toy industry are interviewed and, on the other, a large company from the electrical industry is analyzed. Several qualitative methods are employed, the data are triangulated, and they are then processed and interpreted by means of qualitative content analysis.
The result is a collection of 29 practices in total, which can be arranged along the three resilience phases of readiness, response and recovery. It further emerges that the identified practices can also be categorized by their implementation status.
From this insight follows a matrix in which resilience practices can be plotted along both category systems, thus providing an overview of the resilience status of a global supply chain. This matrix forms the basis of a supply chain resilience management approach that is developed and explained in this thesis. It offers managers a guide for action and thus supports the pursuit of greater resilience along the supply chain.
In this way, the thesis not only extends the existing literature on supply chain resilience by a structuration-theoretical approach, but also makes a decisive contribution to the management of global supply chains.
Ecotoxicology is the science that researches the effects of toxicants on biological entities. Following the famous toxicological principle formulated in 1538 by von Hohenheim, known as Paracelsus, essentially any chemical can act as a toxicant. Unlike human toxicology, which focuses on toxic effects on individuals and populations of one species, Homo sapiens, ecotoxicology is not constrained in its scope of biological entities. It is interested in toxic effects on individuals and populations of any species (excluding humans), and on communities and entire ecosystems (Walker et al., 2012; Köhler & Triebskorn, 2013; Newman 2014). One example of where the ecological foundation of ecotoxicology manifests itself is indirect effects, which are effects on biological entities that are not directly caused by chemicals but are instead mediated by ecological interactions and environmental conditions (Walker et al., 2012). With this large scope, ecotoxicology is an inter- and multidisciplinary science that links chemical, biological and environmental knowledge.
With millions of species and at least 100,000 chemicals that potentially interact with them in the environment (Wang et al., 2021), ecotoxicology has a lot of ground to cover. Among these sheer numbers, some groups are of special importance regarding their potential environmental impact. Pesticides are one group of chemicals with a large, if not the largest, ecotoxicological relevance: they are toxic to biological entities, sometimes at very low concentrations, and they are used in large amounts and globally (Bernhardt et al., 2017). The high toxicity of pesticides, much higher than that of most other groups of chemicals, results from their intended use: they are designed to reduce detrimental effects of, e.g., insects, plants or fungi on agriculture by controlling the respective populations, often, in the sense of their Latin name, through induced lethality (Walker et al., 2012). However, they are not specific enough to be toxic only to the intended pest species; they also show toxicity towards species living in habitats adjacent to pesticide-treated areas. The widespread agricultural use of pesticides, on the other hand, is a result of their work and cost efficiency in securing yields, but it also results in the exposure of ecosystems at a global scale (Sharma et al., 2019). In summary, pesticides can be abstractly seen as toxicity intentionally applied to agricultural areas, unintentionally also exposing organisms in non-agricultural areas.
The risks of pesticide use for ecosystems have led major jurisdictions, such as the United States of America (US) and the European Union (EU), to enact elaborate regulatory processes that require registration of pesticides prior to use (EFSA, 2013; EPA, 2011; Stehle & Schulz, 2015b). A by-product of these registration processes are regulatory threshold levels (RTL), which can be used for scientific risk analysis outside the regulatory process (Stehle & Schulz, 2015a). The RTL for an organism group is basically derived from the most sensitive effect concentration found in standardized toxicity tests with species representative of the group, multiplied by a safety factor, although specifics differ among regulatory processes. Conceptually, RTLs mark the threshold that separates environmental concentrations associated with acceptable risk (concentrations below the RTL) from concentrations associated with unacceptable risk (concentrations above the RTL).
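The RTL derivation described above can be sketched as follows. The effect concentrations and the factor of 100 are hypothetical, and actual derivation rules differ among jurisdictions; the factor is shown here as a divisor (an "assessment factor"), one common convention:

```python
def regulatory_threshold_level(effect_concentrations, assessment_factor):
    """RTL sketch: most sensitive (lowest) effect concentration from
    standardized toxicity tests, scaled by a safety factor (shown here
    as division by an assessment factor). Actual derivation rules
    differ among regulatory processes."""
    return min(effect_concentrations) / assessment_factor

# Hypothetical acute effect concentrations (ug/L) for three test species
ec_values = [12.0, 3.5, 48.0]
rtl = regulatory_threshold_level(ec_values, assessment_factor=100)
print(rtl)  # prints: 0.035
```

Concentrations measured in the environment would then be compared against this value: below 0.035 ug/L is acceptable risk, above is unacceptable.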
Due to the high degree of procedural standardization in the derivation of RTLs, they have proven a good measure for making the toxicities of different pesticides comparable, and they were employed in a series of studies to characterize environmental pesticide concentrations (e.g., Stehle & Schulz, 2015a; Stehle et al., 2018; Wolfram et al., 2018; Wolfram et al., 2021; Schulz et al., 2021, also in Appendix B; Bub et al., 2023, also in Appendix C). RTLs reflect, for instance, that insecticides show regulatory unacceptable concentrations towards fish between 3 ng/L (deltamethrin, a pyrethroid) and 110 mg/L (imidacloprid, a neonicotinoid), a range of nine orders of magnitude. At the same time, imidacloprid is very toxic to pollinators (RTL of 1.52 ng/organism), while more than 95% of all insecticides are less toxic to pollinators, with regulatory unacceptable concentrations ranging as high as 1.6 mg/organism, indicating a toxicity six orders of magnitude lower than that of imidacloprid.
At large scales, ecotoxicology deals with pesticide impacts on a national (e.g., Bub et al., 2023; Douglas & Tooker, 2015; Hallmann et al., 2014; Schulz et al., 2021; Stehle et al., 2019; Wolfram et al., 2018), continental (Wolfram et al., 2021) or global scale (Stehle & Schulz, 2015a; Stehle et al., 2018). This maximization of the considered scale is in line with the general tendency of ecotoxicology towards larger scales, but generally requires new methodological and conceptual approaches. Historically, individual chemicals and groups of chemicals have been identified that, owing to their immense release into the environment, act as main disruptors of processes in the Earth system: greenhouse gases for climate change, chlorofluorocarbons for the depletion of the atmosphere’s ozone layer, dichlorodiphenyltrichloroethane and other organochlorides for bioaccumulation in food webs and declines in bird populations, and so on. For other phenomena, however, such as declines in biodiversity or in the number of insect species (Outhwaite et al., 2020; Seibold et al., 2019; Vörösmarty et al., 2010), the active part of chemical pollution is understood to a much lesser extent. There are indications that pesticides may play a major role here.
This dissertation contributes to the research of large-scale risks of pesticide use, and of large-scale ecotoxicology in general, in several ways (Figure 1). In Chapter 2, it presents a labeled property graph, the MAGIC graph (Meta-Analysis of the Global Impact of Chemicals graph), as a solution to the methodological issues that arise when increasing amounts of data from more and more sources are combined for analysis (Bub et al., 2019; also in Appendix A). The MAGIC graph is able to link chemical information from different sources, even if these sources use different nomenclatures. This enables analyses that incorporate toxicological data, like thousands of RTLs (for different organism groups and jurisdictions) for hundreds of pesticides, and information on pesticide use and chemical classes. The MAGIC graph is implemented in a way that allows it to be organically extended by additional chemical, biological and environmental data, and eventually scaled to all chemicals of environmental interest.
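The idea of a labeled property graph linking identifiers from different nomenclatures can be sketched in plain Python. The schema and linking rule below are illustrative only, not the actual MAGIC graph implementation (the RTL value is the pollinator RTL of imidacloprid quoted above; the CAS number is imidacloprid's registry number):

```python
# Minimal labeled-property-graph sketch: nodes carry a label and
# properties, edges carry a relationship type. Illustrative schema only.
nodes = {
    "c1": {"label": "Chemical", "props": {"name": "imidacloprid"}},
    "id1": {"label": "Identifier", "props": {"scheme": "CAS", "value": "138261-41-3"}},
    "id2": {"label": "Identifier", "props": {"scheme": "trade_name", "value": "Confidor"}},
    "rtl1": {"label": "RTL", "props": {"group": "pollinators", "value_ng": 1.52}},
}
edges = [
    ("c1", "HAS_IDENTIFIER", "id1"),
    ("c1", "HAS_IDENTIFIER", "id2"),
    ("c1", "HAS_RTL", "rtl1"),
]

def rtls_for_identifier(scheme, value):
    """Resolve an identifier from any nomenclature to its chemical's RTLs."""
    chems = {src for src, rel, dst in edges
             if rel == "HAS_IDENTIFIER"
             and nodes[dst]["props"].get("scheme") == scheme
             and nodes[dst]["props"].get("value") == value}
    return [nodes[dst]["props"] for src, rel, dst in edges
            if rel == "HAS_RTL" and src in chems]

# The same RTL record is reachable from either nomenclature:
print(rtls_for_identifier("CAS", "138261-41-3"))
print(rtls_for_identifier("trade_name", "Confidor"))
```

The point of the graph model is exactly this: two data sources naming the same chemical differently still resolve to one chemical node and its attached toxicological data.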
Chapter 3 shows how the combination of the linked pesticide data with a systemic consideration of pesticide use supports the interpretation of pesticide risks in the US (Schulz et al., 2021; also in Appendix B). This systemic approach includes a new measure, the total applied toxicity (TAT), which integrates applied pesticide amounts and pesticide toxicities, and the consideration of pesticide use as a complex system whose state and evolution can be visualized in phase-space plots. The combination of the described methods and concepts led to a novel view of pesticide risks in the US and can provide a framework for future ecotoxicological research at large scales.
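One plausible reading of a measure that "integrates applied pesticide amounts and pesticide toxicities" is the applied amount weighted by the inverse RTL, summed over pesticides. This is a hedged illustration, not necessarily the published TAT definition:

```python
def total_applied_toxicity(applications):
    """TAT sketch: sum over pesticides of applied amount weighted by 1/RTL,
    so highly toxic (low-RTL) compounds dominate even at small amounts.
    The exact definition in Schulz et al. (2021) may differ in units
    and normalization."""
    return sum(amount / rtl for amount, rtl in applications)

# Hypothetical (amount applied, RTL) pairs in consistent units: a small
# amount of a very toxic pesticide outweighs a large amount of a mildly
# toxic one.
apps = [(10.0, 0.001),   # small amount, very low RTL (highly toxic)
        (1000.0, 10.0)]  # large amount, high RTL (mildly toxic)
print(total_applied_toxicity(apps))  # prints: 10100.0
```

Such a quantity makes amount-based and toxicity-based views of pesticide use commensurable, which is what enables the phase-space visualization of its state and evolution.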
Chapter 4 presents the results of applying the methods and concepts of the US pesticide risk analysis to Germany (Bub et al., 2023; also in Appendix C). A pesticide risk analysis of Germany is of special importance in the context of the EU’s goal to drastically reduce pesticide risks (European Commission, 2020) and of Germany being one of the important agricultural producers in the EU. A comparison of the results for Germany with those for the US also allowed an evaluation of the impact of scale and of differing RTLs, information that can support other large-scale ecotoxicological assessments. Chapter 5 adds a conclusion and an outlook.
This work aims to study textile structures within the framework of linear elasticity to understand how
the structure and material parameters influence the macroscopic homogenized model. More
precisely, we are interested in how the textile design parameters, such as the ratio between
fibers’ distance and cross-section width, the strength of the contact sliding between yarns,
and the partial clamp on the textile boundaries determine the phenomena that one can see in
shear experiments with textiles: among other phenomena, the warp and weft yarns first change their
in-plane angles and, after reaching some critical shear angle, the textile plate comes out
of the plane and starts to fold.
The textile structure under consideration is a woven square, partially clamped on the left
and bottom boundary, made of long thin fibers that cross each other in a periodic pattern.
The fibers cannot penetrate each other, and in-plane sliding is allowed. This last assumption,
together with the partial clamp, adds new levels of complexity to the problem due to
the anisotropy in the yarn’s behavior in the unclamped subdomains of the textile.
The limiting behavior and macroscopic strain fields are found by passing to the limit with
respect to the yarn’s thickness r and the distance between them e, parameters that are asymptotically
related. The homogenization and dimension reduction are done via the unfolding
method, which separates the macroscopic scale from the periodicity cell. In addition to the
homogenization, a dimension reduction from a 3D to a 2D problem is applied. Adapting
the classical unfolding results both to the anisotropic context and to lattice grids (which are
constructed starting from the center lines of the rods crossing each other) is the main
tool we developed to tackle this type of model. These results represent the first part of the
thesis and are published in Falconi, Griso, and Orlik, 2022b and Falconi, Griso, and Orlik, 2022a.
Given the parameters mentioned above, we then proceed to classify different textile problems,
incorporating the results from other works on the topic and thoroughly investigating
some others. After the study is conducted, we draw conclusions and give a mathematical
explanation concerning the expected approximation of the displacements, the expected solvability
of the limit problems, and the phenomena mentioned above. The results can be found
in “Asymptotic behavior for textiles with loose contact”, which has been recently submitted.
The increasing availability of smartphones and tablet PCs in schools opens up new methodological and media-based approaches to lesson design, which require scientific examination to assess their effectiveness for learning. While numerous publications on augmented reality (AR) in education exist, a differentiated analysis of the learning effectiveness of AR-typical features as well as broadly designed study setups are lacking. The aim of this work is to examine the learning effectiveness of AR in authentic biology classroom scenarios from multiple perspectives. To assess learning effectiveness, data on learning gain (LZ), cognitive load (CL), user experience (UX), perceived learning support (ELU) and immersion were collected. The research questions address the influence of the type of medium, the type of control, the trigger and the media representation on learning effectiveness.
The studies were conducted with 769 participants from grammar schools in Rhineland-Palatinate and two universities, using AR apps developed specifically for this purpose. Besides confirming theoretical relationships among the investigated parameters by means of structural equation modeling, mostly significant differences in LZ in favor of AR were found. Moreover, the studies of this work showed a positive influence of AR on CL, so that the use of AR does not adversely affect cognitive load. In addition to the above-average to excellent performance of the AR apps in the benchmark comparison (comparison group n = 20190; Schrepp, 2019), positive effects of the UX dimension stimulation on reducing the learning-impeding extraneous cognitive load and on increasing the learning-conducive germane cognitive load were demonstrated. Regarding ELU, participants showed different media preferences, so that using AR as part of a varied media mix can cover the needs of all learners. Furthermore, participants reached the highest of the three levels of immersion, so that the type of trigger complements the 21 immersion factors of Georgiou and Kyza (2017a).
Based on the identification of AR-typical features, this work contributes to a better understanding of the learning-effective potential of AR-based learning environments and also points out theory-building implications for measuring CL. Whether CL can be lowered with the help of AR requires studies in learning settings that exhibit a high CL. Furthermore, the ARI questionnaire offers a suitable starting point for researching AR-typical immersion factors. Nevertheless, further studies are needed to validate the ARI questionnaire and to systematically investigate the learning effectiveness of AR-typical features.
This thesis addresses the ruthenium-catalyzed oxidation of (fatty) alcohols to the corresponding carbonyl compounds. Fatty aldehydes are of great scientific and economic interest, as they are valuable starting materials and intermediates for many branches of the chemical industry. The use of ruthenium-containing compounds as catalysts is characterized by high flexibility and tunability of the desired reactivity as well as by a broad range of applications. The oxidation reactions in this work were carried out both homogeneously and heterogeneously, using various ruthenium-containing catalysts and oxidants. The homogeneous use of simple RuCl3 ∙ (H2O)x with tert-butyl hydroperoxide as the oxygen source led to the development of an environmentally friendly system operating under mild conditions for the oxidation of secondary alcohols, while fatty alcohols could be converted into the corresponding aldehydes using trimethylamine N-oxide as the oxidant. RuCl3 ∙ (H2O)x was heterogenized on urea-functionalized silica gels, which were also used as support materials for ruthenium nanoparticles. These silica gel derivatives were prepared by coupling (3-isocyanatopropyl)trialkoxysilyl compounds with primary and secondary amines, followed by post-grafting of the resulting silane precursors. For some of the materials, the silanol groups on the material surface were additionally deactivated by exchanging the hydroxy groups for trimethylsilyloxy units. The heterogeneously catalyzed oxidation of alcohols - with both RuCl3 ∙ (H2O)x and ruthenium nanoparticles - was likewise carried out using trimethylamine N-oxide as the oxygen source and shows potential for future studies on the reusability of the catalyst materials. In addition to RuCl3 ∙ (H2O)x and ruthenium nanoparticles, a ruthenium-containing polyoxometalate was also employed for the oxidation of secondary and primary alcohols using tert-butyl hydroperoxide or trimethylamine N-oxide, respectively, again applying the reaction conditions optimized for RuCl3 ∙ (H2O)x.