Doctoral Thesis
In today's world of work, organizations face the challenge of continuously adapting to change. Demographic change and rising numbers of work absences due to psychological strain are bringing employees' well-being and workplace satisfaction into focus. The employee survey, as an instrument of organizational development, is one way of shaping change processes so that business and humanistic goals can be achieved at the same time. In implementing employee surveys, the follow-up processes matter most, since this is where conclusions are drawn from the survey results and translated into action. In practice, however, expectations of follow-up processes, and thus of employee surveys, are often disappointed, on the side of companies as well as of employees.
Previous research demonstrates the positive effect of employee surveys and follow-up processes in general, but it remains unclear how the individual components of a follow-up process, and above all the quality of their execution, take effect. This is the first starting point of the present work. Beyond that, the role of leaders in follow-up processes is examined: among the many considerations and studies of which aspects influence change processes, the special role of leaders stands out. Leaders are expected to show behavior that goes beyond a classical rational-functional understanding of leadership and encourages employees to engage openly and actively in change processes. Positive Leadership is one approach to achieving this. It comprises leadership behaviors that emphasize the meaningfulness of work, foster positive relationships with employees, show recognition and appreciation, practice a strengths orientation, create a positive working climate, involve positive communication, support employees in their development, and, in particular, enable participation and empowerment. Although Positive Leadership is growing in popularity, there is as yet no clear conception of the construct and no established measurement instrument. Moreover, the concept has not yet been applied in the context of change processes in general, or of follow-up processes of employee surveys in particular.
The main goal of the present work is to examine Positive Leadership in the context of the follow-up processes of an employee survey. Four studies were conducted for this purpose. Study 1 used semi-structured expert interviews (N = 22) to explore which steps a follow-up process of an employee survey comprises and what characterizes high quality in the execution of these steps. Study 2 developed and validated a measurement instrument for Positive Leadership in three sub-studies (N1 = 194, N2 = 201, N3 = 124).
Study 3, a questionnaire study with a sample of employees (N = 1302) and leaders (N = 266), demonstrated the importance of the individual steps of the follow-up process and of the quality of their execution. It further established the influence of Positive Leadership on the quality of the follow-up process as well as on work engagement and job satisfaction, for employees and for leaders themselves. Both adherence to the follow-up process and its quality, as well as Positive Leadership, also affected the change in work engagement and job satisfaction between two employee surveys, partly mediated indirectly via satisfaction with the follow-up process. In addition, in a sample of 242 leader-employee dyads, the effects of discrepancy and congruence between ratings of Positive Leadership or of the follow-up process were demonstrated. Finally, it was examined to what extent the attribution of successes and failures in the follow-up process is influenced by Positive Leadership.
Study 4 confirmed, in an experimental design (N = 420) using video vignettes, the positive effects of high follow-up process quality and of Positive Leadership on work engagement and job satisfaction. It also extended the previous findings with results on interactions between the examined factors: positive leadership behavior can buffer the effects of poor quality in the follow-up process or of low adherence to its steps, and high adherence to the steps improved satisfaction with the follow-up process only when the quality of the executed steps was high. Study 4 further demonstrated the effect of assumed differences between employees and leaders in satisfaction with the follow-up process on the intention to participate in the next employee survey, as well as on job satisfaction and work engagement. Finally, the effects of Positive Leadership on the attribution of successes and failures in the follow-up process were analyzed again, together with downstream effects of attribution on the intention to participate in future employee surveys.
The studies of this dissertation are discussed theoretically and methodologically. Based on the results, practical recommendations are derived for better handling follow-up processes of employee surveys and Positive Leadership.
This dissertation evaluated and further developed a tool for creating fully digital, internally differentiated worksheets for regular chemistry lessons, a tool with the potential to foster motivation and interest. Relationships to the usability of the application and to cognitive load were established. The results thus support existing findings in the field of learning with digital media. Integrating digital tools into the learning process is justified: they show a motivating potential for students on the one hand and practical advantages for teachers on the other, since information can be presented in many different ways, for example for differentiation. With HyperDocSystems, internally differentiated digital worksheets can be created and edited. These so-called HyperDocs can be enriched by teachers with learning aids in various forms of representation and processed by learners fully digitally in the browser using a stylus or keyboard.
In a quasi-experimental field study, the use of these novel HyperDocs was analyzed for the first time with regard to intrinsic motivation and interest, usability, and the use of the multimedia differentiation offer. The study took place over four lessons in regular chemistry classes at the intermediate level (Gymnasium / comprehensive school) and the upper level (Gymnasium). Cognitive load and the learners' tablet-related competencies were also taken into account. The results suggest that HyperDocs have a motivating potential compared with analog worksheets. Differences between genders emerged that are partly attributable to cognitive load and depend on the learners' age (intermediate vs. upper level). The learning aids were frequently used out of interest and curiosity; students particularly used learning aids in the form of text and images. However, the frequency of use of the differentiation offer does not directly reveal the learners' motivation or cognitive load. Usability is an important criterion for the use of digital learning programs, since it relates, among other things, to the variables of intrinsic motivation and to cognitive load when learning with HyperDocs. Usability, however, depends on the time of measurement. HyperDocs show high usability and can therefore be used without restriction at the intermediate and upper levels.
Esse aut non esse - Affirmation and Subversion of Intersex Existences in Schools
(2024)
On October 10, 2017, the Federal Constitutional Court in Karlsruhe ruled that a so-called third gender be introduced for entries in the birth register. This was intended to enable intersex people to have their gender identity registered and thus to participate in social life. In its reasoning, the court referred to the right of personality protected by the Basic Law. The regulation in force at the time was incompatible with constitutional requirements insofar as it offered no third option for registering a gender besides "female" or "male". The legislature was therefore required to create a new regulation by the end of 2018 that includes a designation for a third gender: "divers".
Schools, as important social institutions, are now called upon if the guiding perspectives of diversity in education, and thus in society, are to be maintained. Schools constitute a workplace, a living environment, and a learning environment for many generations and therefore always have a societal role-model function, with diversity having become an ever-present imperative. As an avant-garde, schools must therefore lead the way, particularly on societal questions, and at the same time take responsibility for the development and resolution of important ethical questions, without giving up the transmission of traditional values and norms as one of their central functions. Performing this demanding balancing act remains a constant challenge of school development.
In the school context, dealing with diversity means, above all, not only mutual recognition and respect but also that people's coexistence is enriched by opening up alternative ways of perceiving, thinking, and acting. The decision of the Federal Constitutional Court is therefore addressed to schools in a special way.
But how can this path be taken successfully and sustainably?
A look at the numerous publications on gender and school, and at the few developments of recent years, makes it evident that the German school system is systemically and structurally unprepared to implement the Federal Constitutional Court's decision of October 10, 2017 (1 BvR 2019/16).
From this, the research questions of this doctoral thesis can be formulated:
- How do schools relate to the discourse of the third gender?
- From the perspective of school actors, what are the conditions for successfully making the third gender visible in schools?
Using empirical studies, the thesis aims to clarify in detail which factors contribute to the success or failure of implementing a third gender, and under which conditions schools as organizations are prepared at all for making intersex children and adolescents visible.
In this work, the CASOCI program [1], whose implementation was already the subject of the dissertation of Dr. Tilmann Bodenstein and which is under continuous further development in the groups of Fink (Karlsruhe Institute of Technology) and van Wüllen, was parallelized using a hybrid MPI/OpenMP scheme. It was then used to investigate the magnetic properties of the pentanuclear complex [Ni(tmphen)2]3[Os(CN)6]2 (tmphen = 3,4,7,8-tetramethyl-1,10-phenanthroline), which had been studied experimentally in the group of Kim R. Dunbar by χT measurements [2,3]. By diamagnetic substitution, variants of this complex with only one and with two active centers were generated. CASOCI calculations were performed on these, and g-tensors, exchange couplings, D-tensors, and tensors for the anisotropic exchange were determined. Using these tensors, a χT curve was computed that agrees well with the one from Dunbar's work. It was shown that the anisotropic exchange largely determines the shape of the curve, while the single-ion zero-field splitting plays practically no role.
[1] T. Bodenstein, A. Heimermann, K. Fink, C. van Wüllen, ChemPhysChem 2022, 23, e202100648.
[2] M. G. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, Chem. Commun. 2008, 5752–5754.
[3] A. V. Palii, O. S. Reu, S. M. Ostrovsky, S. I. Klokishner, B. S. Tsukerblat, M. Hilfiger, M. Shatruk, A. Prosvirin, K. R. Dunbar, J. Phys. Chem. A 2009, 113, 6886–6890.
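For context, a χT curve of this kind is typically computed from an effective spin Hamiltonian assembled from exactly the quantities named above. A generic form (a sketch of the standard approach, not necessarily the exact Hamiltonian used in this work) is

\[ \hat{H} \;=\; \sum_{i<j} \hat{\mathbf{S}}_i\, \mathbf{J}_{ij}\, \hat{\mathbf{S}}_j \;+\; \sum_i \hat{\mathbf{S}}_i\, \mathbf{D}_i\, \hat{\mathbf{S}}_i \;+\; \mu_B \sum_i \mathbf{B}\, \mathbf{g}_i\, \hat{\mathbf{S}}_i \,, \]

where the J_ij carry the isotropic and anisotropic exchange, the D_i the single-ion zero-field splitting, and the g_i the Zeeman interaction. Diagonalizing this Hamiltonian yields field-dependent energy levels E_k(B), from which the magnetization follows as a Boltzmann-weighted average and the susceptibility as χ = ∂M/∂B, giving the χT curve that can be compared with experiment.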
Weak memory consistency models capture the outcomes of concurrent
programs that appear in practice and yet cannot be explained by thread
interleavings. Such outcomes pose two major challenges to formal
methods. First, establishing that a memory model satisfies its
intended properties (e.g., supports a certain compilation scheme) is
extremely error-prone: most proposed language models were initially
broken and required multiple iterations to achieve soundness. Second,
weak memory models make verification of concurrent programs much
harder, as a result of which there are no scalable verification
techniques beyond a few that target very simple models.
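To make this concrete, the classic store-buffering litmus test is the smallest standard example of such an outcome. The sketch below (an illustrative C11 program, not code from the thesis) can terminate with r0 == 0 and r1 == 0 on x86 and weaker architectures, although no interleaving of its four memory accesses produces that result:

/* Store-buffering (SB) litmus test with C11 threads and relaxed atomics. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int x, y;          /* shared locations, zero-initialized */
int r0, r1;               /* per-thread observations */

int thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return 0;
}

int thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return 0;
}

int main(void) {
    thrd_t t0, t1;
    thrd_create(&t0, thread0, NULL);
    thrd_create(&t1, thread1, NULL);
    thrd_join(t0, NULL);
    thrd_join(t1, NULL);
    /* Under sequential consistency, r0 == 0 && r1 == 0 is impossible;
     * weak memory models (and real hardware) allow it. */
    printf("r0=%d r1=%d\n", r0, r1);
    return 0;
}

Enumerating all outcomes that a given memory model allows for programs like this is exactly the kind of question that the stateless model checking approach pursued in this thesis automates.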
This thesis presents solutions to both of these problems.
First, it shows that the relevant metatheory of weak memory
models can be effectively decided (sparing years of manual proof
efforts), and presents Kater, a tool that can answer metatheoretic
queries in a matter of seconds. Second, it presents GenMC, the first
(and only) scalable stateless model checker that is parametric in the
choice of the memory model, often improving the prior state of the art
by orders of magnitude.
Cyber-physical production systems (CPPS) enable the manufacture of customized products in small lot sizes by exploiting current developments in information and communication technologies. In the material flow of CPPS, however, the risk of physically induced disturbances is increased owing to the differing physical properties of the conveyed goods and to dynamic process assignments. This work investigates the use of physics simulation as the basis of a digital twin of conveyor equipment in order to meet these challenges. The goal is to reduce the negative influence of disturbances by simulating the physical phenomena of individual material flow processes and thereby to increase the performance of the production system. To this end, the digital twin is first developed conceptually, comprising an analysis of the systems involved, a definition of requirements, a specification of the structural and procedural organization, and a formalization of the individual functional components. Subsequently, the digital twin is implemented in software, connected to an exemplary conveyor, and put into operation as a prototype. The results demonstrate the suitability of physics simulation for the described purpose and the effectiveness of its use at the production system level: material flow processes can be executed at increased speed, monitored, and, in the event of disturbances, examined retrospectively by simulation.
This thesis outlines the development of thermoplastic-graphite plate heat exchangers from material screening to operation, including performance evaluation and fouling investigations. Polypropylene and polyphenylene sulfide as matrix and graphite as filler were chosen as feedstock materials, as they combine low density and excellent corrosion resistance at a comparatively low price.
For the material screening, custom-made polymer composite plates with a plate thickness of 1-2 mm and a filler content of up to 80 wt.% were investigated for their thermal and mechanical suitability for use in plate heat exchangers. Three-point flexural tests show that loading polypropylene with graphite leads to mechanical properties that allow the composites to be used as corrugated heat exchanger plates. The simulated maximum overpressure is greater than 7 bar, depending on the wall thickness. The thermal conductivity of the composites was increased by a factor of 12.5 compared with pure polypropylene, resulting in thermal conductivities of up to 2.74 W/mK.
The fabrication of the developed corrugated heat exchanger plates, with a thickness between 0.85 mm and 2.5 mm and a heat transfer surface area of 11.13·10⁻³ m², was carried out via processes that can be automated, namely extrusion and embossing. With the manufactured plate heat exchanger, overall heat transfer coefficients were determined over a wide range of operating conditions (Re = 200-1600) and used to validate a plate heat exchanger model and, consequently, to compare the composites with conventional materials. The embossing, which appears to shift the internal graphite structure, improves the thermal conductivity by a further 7-20 % on top of the effect of the filler. With low plate thicknesses, overall heat transfer coefficients of up to 1850 W/m²K could be obtained. Considering the low density of the manufactured thermal plates, this ensures performance comparable with metallic materials over a wide range of process conditions (Re = 200-4000).
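To see why plate thickness and conductivity dominate here, the overall heat transfer coefficient can be estimated from the usual series of thermal resistances (a textbook relation, with illustrative numbers taken from the values above):

\[ \frac{1}{U} \;=\; \frac{1}{h_\mathrm{hot}} \;+\; \frac{t}{\lambda} \;+\; \frac{1}{h_\mathrm{cold}} \,. \]

With t = 1 mm and λ = 2.74 W/mK, the wall alone contributes t/λ ≈ 3.6·10⁻⁴ m²K/W, so U cannot exceed λ/t ≈ 2740 W/m²K even for ideal film coefficients; thinner plates and higher conductivity relax this cap, consistent with the reported values of up to 1850 W/m²K.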
The fouling kinetics and amounts of calcium sulfate and calcium carbonate, respectively, on different polypropylene/graphite composites in a flat plate heat exchanger and in the developed chevron-type plate heat exchanger are determined and compared with the reference material stainless steel. For an unbiased evaluation of the fouling susceptibility of the materials, the formation of bubbles on the surfaces is either tracked by optical imaging or excluded by a degasser. The results are interpreted using the surface free energy and roughness of the surfaces. They show that, if bubble formation is avoided, the polymer composites have a very low fouling tendency compared with stainless steel, which is attributed to their low surface free energies of approximately 25 mN/m. This is particularly the case when turbulent flows are present, as in plate heat exchangers or when sandblasted specimens are used. Sandblasting also increases heat transfer compared with untreated samples by increasing thermal conductivity and creating local turbulence. Depending on the test conditions, the fouling resistance formed on the stainless steel surface is an order of magnitude greater than on the flat plate polymer composites. In addition, the fouling layers adhere only weakly to the composites, which indicates easy cleaning-in-place after the formation of deposits. The fouling investigations in the plate heat exchanger reveal a sensitivity to calcium sulfate fouling; however, CFD simulations indicate that this is due to flow maldistribution and not to the polymer composite materials themselves.
Zeolites have been used for decades as catalysts in the chemical industry and as ion exchangers in detergents. They can also serve as support materials for metals applied by ion exchange or impregnation. A novel field of application for zeolites is their use as an antimicrobial filler in plastics. For this purpose, the zeolites must first be loaded with an antimicrobially active metal such as silver. The filled plastic can then be processed into filaments for 3D printing. One possible field of application for the resulting composite materials is dentistry, in the form of crowns or three-unit bridges. The aim of this doctoral project was to modify the zeolites Beta and ZSM-5 with silver in order to use the resulting materials as antimicrobial components in a polymer composite. The two zeolites were to be loaded with silver ions by ion exchange. Besides the reaction temperature and the counterion in the zeolite lattice, the experimental procedure of the ion exchange (duration and number of exchange cycles) was varied in order to achieve the highest possible silver loading. By combining different characterization methods such as powder X-ray diffraction (PXRD) and solid-state NMR spectroscopy (MAS NMR), the preservation of the zeolite structure after ion exchange was confirmed. The amount of silver in the zeolite lattice was determined by atomic absorption spectroscopy (AAS). Since zeolite ZSM-5 is cheaper to purchase than zeolite Beta, the subsequent steps were carried out with the silver-exchanged zeolite AgZSM-5. In the next step, zeolite AgZSM-5 was modified by various methods to ensure a temporally controllable release of silver ions from the zeolite lattice. For surface passivation by silylation, a weakening of the acid sites was demonstrated by temperature-programmed desorption of ammonia (NH3-TPD). In addition, zeolite AgZSM-5 was modified by impregnation with calcium or magnesium and by reduction of the silver in a H2 stream at different temperatures. For the reduction in the H2 stream, the influence of the reduction temperature on the silver crystallite size was demonstrated.
In 2022, the buildings and transport sectors missed Germany's climate targets. In contrast to the transport sector, long service lives in the building sector stand in the way of rapid technology change, which is why strategies must be implemented particularly early. Moreover, the building stock is characterized by high investment costs with comparatively small greenhouse gas savings per euro invested. In combination, these obstacles considerably impede reaching the climate targets for the residential building stock.
The aim of this work is to develop a residential building stock model in order to simulate and analyze transformation pathways under varying economic conditions, such as the influence of different CO2 price trajectories and the reinvestment of the CO2 tax in building modernization.
In a first step, a residential building stock model is developed and applied under a continuation of the economic conditions of the start year. Important parameters of the building stock are identified and analyzed on the basis of their past trajectories, and scenarios and forecasts are considered. The result is a set of initial conditions and factors influencing the further trajectory, which are used for the modeling. In a second step, a systematic approach is developed to compute modernization rates endogenously under varying economic conditions.
This work presents a model that dynamically accounts for the economic conditions and the coupling principle when simulating full modernization rates. The results show that full modernization rates of 2 %/a over longer periods require extreme conditions and are unrealistic. The main obstacles are the need for renovation (coupling principle), decreasing energy-saving potentials of the younger building age classes, and windfall effects under improved subsidies. Since the climate targets cannot be reached in the model within realistic tax levels by adjusting the CO2 tax alone (even with reinvestment), a package of economic and legislative measures for reaching the targets is presented instead.
This dissertation describes the implementation of a RAMI 4.0-compliant marketplace for machining. The goal is to define a solution approach in which cross-company process chains for small lot sizes are identified automatically and the manufacture of an individualized product is realized. The extraction of product information, the manufacture of an individualized product, and the description of the information in the asset administration shells are validated. A key challenge for the future turns out to be the definition of a common semantics for describing capabilities; this would enable matching between proprietary product information and skills.
On the basis of normative regulations and of current research projects and their findings (Kuhlmann et al. 2008 and 2012), experimental and numerical investigations were carried out on large anchor plates with more headed studs than currently permitted by the standards. The aim of the investigations was to develop an approach for the load-bearing capacity of large flexible anchor plates with headed studs. By varying the governing parameters, such as the anchor plate thickness, the headed stud length, the degree of supplementary reinforcement, and the condition of the concrete, a component model was verified against the experimental investigations. Possible failure mechanisms, such as steel failure of the headed studs in tension, yielding of the anchor plate due to T-stub formation, concrete cone failure, and steel failure of the supplementary reinforcement, could be represented by means of these parameters. Furthermore, for the failure mode "concrete cone failure", the surface reinforcement turned out to be an additional parameter in the post-peak load range.
The model developed on the basis of DIN EN 1993-1-8, together with the consideration of the component stiffnesses, enables the design of rigid and flexible anchor plates. By including the stiffnesses of the individual components, the overall stiffness of a connection configuration can be calculated in order to obtain ductile load-bearing behavior. In addition to various possible yield zones on the anchor plate resulting from different geometries and arrangements of the fasteners, concrete cone failure is considered in the model as a function of possible supplementary reinforcement.
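For orientation, the component method of DIN EN 1993-1-8 assembles the initial rotational stiffness of a joint from the stiffness coefficients k_i of the individual components, acting as springs in series (the standard code relation, quoted here for context):

\[ S_{j,\mathrm{ini}} \;=\; \frac{E\, z^2}{\sum_i \frac{1}{k_i}} \,, \]

where E is the modulus of elasticity and z the lever arm. Extending the set of components, as done in this work for headed studs and supplementary reinforcement, amounts to supplying additional stiffness coefficients k_i (and corresponding resistances) for the new load paths.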
The model described in this work for the tension side of rigid and flexible anchor plates with more fasteners than currently permitted by the standard was verified by experimental and numerical tests. The plastic design approach shows good agreement with the experimental investigations and the numerical parameter studies across all test series.
In a second step, the effects of short-term relaxation of the concrete due to restraint on large anchor plates with headed studs were investigated. With the spring model developed following the component method of DIN EN 1993-1-8, time-dependent deformations of concrete due to creep and shrinkage can be taken into account. Using this model, verified by experimental and numerical tests, it is possible to investigate the effects of restraint on anchor plates.
Plant-specific factors affecting short-range attraction and oviposition of European grapevine moths
(2024)
The spread of pests and pathogens is increasingly intensified by climate change and globalization. Two of the most serious insect pests threatening European viticulture are the European grape berry moth, Eupoecilia ambiguella (Hübner), and the European grapevine moth, Lobesia botrana (Denis & Schiffermüller). Larvae feed on the fructiferous organs of the grapevine Vitis vinifera, resulting in high yield and quality losses. Under the aspects of integrated pest management, insecticide measures are only reasonable when other control strategies become ineffective. In order to support the development of a novel decision support system for the application of insecticides, the aim of this thesis was to decipher the plant-specific factors that affect the short-range attraction and oviposition of L. botrana and E. ambiguella.
The focus was set on the visual, volatile, tactile, and gustatory stimuli provided by the host plant after settlement. The use of artificial surfaces as model plants showed that oviposition of both species is affected by the color, shape, and texture of the oviposition site. To explain the susceptibility of certain grapevine cultivars and phenological stages of the berries to egg infestations, we analysed and compared the chemical composition of the epicuticular waxes of the berry surface as well as the volatile organic compounds emitted by the berries. It turned out that the attractiveness of wax extracts decreased during ripening of the berries, highlighting a preference for earlier phenological stages of the berries for oviposition. In addition, grapevine cultivars exhibited variations in their volatile composition. The principal components perceived by the females' antennae could not explain the differentiation between cultivars, suggesting that volatiles do not trigger orientation toward certain cultivars. Furthermore, a method was developed to measure the real-time behavioural response of female moths to volatiles. The setup allowed quantification of the orientation toward a volatile source as well as of the movements of the antennae and ovipositor, which could be linked to the olfactory and gustatory perception of volatiles during the evaluation of suitable host plants for oviposition. In addition, the risk posed by potential alternative host plants in the vicinity of the vineyard was investigated; this confirmed that L. botrana in particular prefers the stimuli provided by some plants over those of grapevine. Overall, the results suggest that during oviposition, the volatiles emitted by the plants and the composition of the plant surface are the most important factors for host plant differentiation.
VR is a steadily growing field of research that expands the perspectives and possibilities of human-computer interaction (Hassan & Hossain, 2022). Even before its active use in everyday school life, studies demonstrated a variety of positive effects of VR use on the learning process (Chavez & Bayona, 2018). So-called immersive learning thus represents a key area of the digital transformation in education. To use VR in school lessons, however, learning environments are needed that are adapted to the local circumstances and everyday needs of practical teaching. Such design principles do not yet exist in the education sector (Johnson-Glenberg, 2018). This work derives principles from theory, combines them with design components, and on this basis designs and investigates a VR learning environment. To ensure practical relevance during development and investigation, a design-based research approach was chosen. In successive micro-cycles, the design components were evaluated and design principles were derived from them. The learning materials were designed across subjects for chemistry and geography and were evaluated under practical conditions with participants from four tenth-grade classes of a Gymnasium in Rhineland-Palatinate. The carbon cycle was chosen as the learning content and located in the respective curricula of the subjects. The main focus was on chemistry, topic area eleven, "Stoffe im Fokus von Umwelt und Klima". As the virtual place, a replica of a section of the out-of-school learning site "Reallabor Queichland" was chosen. The components were divided into a total of seven micro-cycles, numbered from zero to six. Micro-cycle zero is used to familiarize the participants with the VR system and to mitigate the novelty effect. Micro-cycle one evaluates the base area of the VR learning environment with a focus on the realism of the environment. Micro-cycle two addresses the radius of movement to be chosen within VR. Micro-cycle three examines the effect of realistic background sounds. Micro-cycles four to six consist of three learning stations with different interaction possibilities: realistic interactions, non-realistic interactions, and a mixture of the two. The scales collected were spatial presence, current motivation, realism, perceived usability, perceived learning effectiveness, and the VR scale. The data were analyzed with ANOVAs and path analyses as well as an overarching analysis at the end of the survey. The design of the components produced a very high sense of spatial presence and a very high perceived realism. In the learning stations, the participants rated the perceived learning effectiveness and usability as very high, as well as the connection between 3D models, their manipulability in VR, and the associated effect on learning effectiveness. In total, twelve design principles could be generated from the data; they can be used to create new VR learning environments for practical use in school lessons. Theoretical assumptions for respecifying the process model of spatial presence were made and tested with the collected data.
The focus here was on adapting the model to modern VR headsets and to cognitively demanding VR learning environments, and the adaptation yielded very good model fit values. Follow-up studies should test these assumptions with larger samples.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source of bioactive compounds, several of which also show antibiotic activity. Given the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance, and unexploited sources should be explored. This thesis therefore initially dealt with the identification of productive strains, especially within the group of terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Among these, Chroococcidiopsis cubana, an extremely desiccation- and radiation-tolerant unicellular cyanobacterium, was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the identification of a productive cyanobacterium alone is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of the prerequisites for compound synthesis. Nitrogen limitation was shown to be the production trigger, a finding that was used to establish a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. In this way, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a) cells and a high temperature tolerance of up to 60 °C. To facilitate the whole project, two standard protocols specifically adapted to work with cyanobacteria were established: first, a method for quick and easy in vivo vitality estimation of phototrophic cells, and second, an approach for high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains, and the second by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when cultivated mixotrophically with fructose or glucose. Interestingly, this effect was enhanced even further when the light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed clearly decreased cell growth. This effect might be extremely useful for the quick and economic preparation of precultures.
The ability to sense and respond to different environmental conditions allows living organisms to adapt quickly to their surroundings. To use light as a source of information, plants, fungi, and bacteria employ phytochromes. With their ability to detect far-red and red light, phytochromes constitute a major photoreceptor family. Bacterial phytochromes (BphPs) are composed of an apo-phytochrome and an open-chain tetrapyrrole, the chromophore biliverdin IXα, which mediates the photosensory properties. Depending on the photoexcitation and the quality of the incident light, phytochromes interconvert between two photoconvertible parental states: the red light-absorbing Pr-form and the far-red light-absorbing Pfr-form. In contrast to prototypical phytochromes, which have a thermally stable Pr ground state, there is a group of bacterial phytochromes that exhibit dark reversion from the Pr- to the Pfr-form. These special proteins are classified as bathy phytochromes and are found across different classes of bacteria. Moreover, the majority of BphPs act as sensor histidine kinases in two-component regulatory systems. The light-triggered conformational change results in the autophosphorylation of the histidine kinase domain and the transphosphorylation of an associated response regulator, inducing a cellular response. Spectroscopic analysis utilizing homologously produced protein identified PaBphP, the histidine kinase of the human opportunistic pathogen Pseudomonas aeruginosa, as a bathy phytochrome. Intensive research on PaBphP revealed evidence that the interconversion between its physiologically active and inactive states is governed by light and darkness rather than by far-red and red light. To allow a comprehensive systematic analysis, further bacterial phytochromes were investigated regarding their biochemical and spectroscopic behavior as well as their autokinase activity. In addition to PaBphP, this work employs the bathy phytochromes AtBphP2, AvBphP2, and XccBphP from the non-photosynthetic plant pathogens Agrobacterium tumefaciens, Allorhizobium vitis, and Xanthomonas campestris, as well as RtBphP2 from the soil bacterium Ramlibacter tataouinensis. All investigated BphPs displayed bathy-typical behavior, developing a distinct Pr-form under far-red light conditions and undergoing dark reversion to their Pfr-form. Different Pr/Pfr-fractions can be identified among the BphP populations under varying natural light conditions, including red or blue light. The Pr-form is considered the active form, owing to the autophosphorylation activity of the heterologously produced phytochromes when exposed to light. In the absence of light, associated with the development of the Pfr-form, the phytochromes exhibited disabled or strongly reduced autokinase activity. Additionally, light-triggered phosphorylation was observed for the response regulator PaAlgB, which is linked to the phytochrome of P. aeruginosa. This study presents the first comparative investigation of numerous bathy phytochromes under identical conditions. The work addressed a gap in the literature by providing a quantitative correlation between kinase activity and the calculated Pr/Pfr-fractions obtained from spectroscopic measurements. The biological role of PaBphP was partially elucidated through phenotypic characterization employing P. aeruginosa mutant and overexpression strains. The generation of a functional model was possible by considering the postulated functions of the other phytochromes found in the literature.
In summary, bathy BphPs are hypothesized to modulate bacterial virulence according to the circadian day/night rhythm of their hosts. The pathogens are believed to reduce their virulence during daylight hours to evade immune and defense reactions, while increasing their virulence during the evening and night, enabling more effective infections.
Functional structures as well as materials provided by nature have always been a great source of inspiration for new technologies. Adapting and improving the discovered concepts, however, demands a detailed understanding of their working principles, while employing natural materials for fabrication tasks requires suitable functionalization and modification.
In this thesis, the white scales of the beetle Cyphochilus are examined in order to reveal unknown aspects of their light transport properties. In addition, the monomer of the material they are made of is utilized for 3D microfabrication.
White beetle scales have been fascinating scientists for more than a decade because they display brilliant whiteness despite their small thickness and the low refractive index contrast. Their optical properties arise from highly efficient light scattering within the disordered intra-scale network structure.
To gain a better understanding of the scattering properties, several previous studies have investigated the light transport and its connection to the structural anisotropy with the aid of diffusion theory. While this framework makes it possible to relate the light scattering to macroscopic transport properties, it requires an accurate determination of the effective refractive index of the structure. Due to its simplicity, the Maxwell-Garnett mixing rule is frequently used for this task, although its restriction to particle and feature sizes much smaller than the wavelength is clearly violated for the scales.
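For reference, the classical Maxwell-Garnett rule estimates the effective permittivity ε_eff of inclusions with permittivity ε_p and volume fraction f embedded in a matrix with permittivity ε_m as

\[ \frac{\varepsilon_\mathrm{eff}-\varepsilon_\mathrm{m}}{\varepsilon_\mathrm{eff}+2\varepsilon_\mathrm{m}} \;=\; f\, \frac{\varepsilon_\mathrm{p}-\varepsilon_\mathrm{m}}{\varepsilon_\mathrm{p}+2\varepsilon_\mathrm{m}} \,, \qquad n_\mathrm{eff} = \sqrt{\varepsilon_\mathrm{eff}} \,, \]

a quasi-static result whose derivation assumes inclusion sizes far below the wavelength; it is precisely this assumption that the network features of the scales violate.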
To provide a correct calculation of the effective refractive index, here, finite-difference time-domain simulations are used to systematically examine the impact of size effects on the effective refractive index. Deploying this simulation approach, the Maxwell-Garnett mixing rule is shown to break down for large particles. In contrast, it is found that a quadratic polynomial function describes the effective refractive index in close approximation, while its coefficients can be obtained from an empirical linear function. As a result, a simple mixing rule is reported that unambiguously surpasses classical mixing rules when composite media containing large feature sizes are considered. This is important not only for the accurate description of white beetle scales, but also for other turbid media, such as biological tissues in opto-biomedical diagnostics.
Describing light transport by means of diffusion theory moreover neglects any coherent effects, such as interference. Hence, their impact on the generation of brilliant whiteness has been unknown. To shed light on their role, space- and time-resolved light scattering spectromicroscopy is applied to investigate the scales and a model structure of them based on disordered Bragg stacks. For both structures, the occurrence of weakly localized photonic modes, i.e., closed scattering loops, is observed, which is further verified in accompanying simulations. As shown in this thesis, leakage from these random photonic modes contributes at least 20% to the overall reflected light. This reveals the importance of coherent effects for a complete description of the underlying light transport properties, an aspect that is entirely missing in the purely diffusive transport presumed so far. Identifying the importance of weak localization for the generation of brilliant whiteness paves the way to further enhancing the design of efficient optical scattering media, an issue that has recently drawn great attention.
Unlike their plant-based counterparts, rigid carbohydrates, such as chitin, are currently unavailable for 3D microfabrication via direct laser writing, despite their great significance in the animal kingdom for the construction of functional microstructures. To overcome this gap, the monomeric unit of chitin, N-acetyl-D-glucosamine, is here functionalized to serve as a photo-crosslinkable monomer in a non-hydrogel photoresist. Since all previous photoresists based on animal carbohydrates are in the form of hydrogel formulations, a new group of photoresists is established for direct laser writing.
Moreover, it is shown that the sensitization effect, previously used only in the context of UV curing, can be successfully transferred to direct laser writing to increase the maximum writing speed. This effect is based on the beneficial combination of two photoinitiators.
Here, one photoinitiator is an efficient crosslinking agent for the monomer used but a rather poor two-photon absorber. The other photoinitiator (called the sensitizer) conversely possesses a much higher two-photon absorption coefficient at the applied wavelength but is not well suited as a crosslinking agent. In combination, the energy absorbed by the sensitizer is passed to the photoinitiator, resulting in the formation of the radicals needed to start the polymerization. As this greatly increases the rate at which the photoinitiator is radicalized, resists containing both a photoinitiator and a sensitizer are shown to outperform resists containing only one of the components. Deploying the sensitization effect in direct laser writing therefore offers a simple way to tune the crosslinking ability and the two-photon absorption properties individually by combining existing compounds, compared with the costly chemical synthesis of novel, customized photoinitiators.
In contrast to motorbike tyres, whose friction during cornering has to be as high as possible, the desired effect in skiing is the opposite: low friction. The reduced friction between skis and ice or snow is made possible by a film of meltwater that forms as a function of friction power. To support this friction mechanism, skis are waxed with different waxes in both hobby and professional sports, depending on a variety of conditions. Waxes with fluorine additives show the best performance in most conditions, corresponding to the lowest friction coefficients. However, for health and environmental reasons, the International Ski Federation (FIS) and the International Biathlon Union (IBU) have imposed a complete ban on fluorine additives at all FIS races and IBU events with effect from the 2023/2024 season. As a result, wax manufacturers are required to develop and extensively test fluorine-free waxes in order to remain competitive.
Traditional tests take place either indoors or outdoors in the field. Athletes complete a particular distance, their time is measured, and they also note the impressions the prepared skis give them. The time and cost involved in numerous individual tests are a drawback, and the presence of only a single type of snow in the hall or field, air resistance, changing environmental conditions, and variations in the athlete's movement limit the depth of information. To reduce the time-consuming procedure of indoor and outdoor tests, a tribometer offers a solution in which friction measurements can be performed on a laboratory scale. Owing to consistently adjustable conditions such as temperature, speed, and the load applied to the friction partners, scientific studies can be carried out with reduced disturbance variables. At present, however, the tribometric results of laboratory instruments for predicting friction values do not translate into application in practice. The reasons for this are the compromises that have to be made in the design of the tribometers.
This work reviews the existing tribometers with respect to their operating conditions and confirms the need for a scientific method of characterising different waxes. In order to fill the gap between friction results obtained in laboratory tests, which cannot yet be used in the selection of waxes, and traditional field tests, this thesis is dedicated to the methodical design and manufacture of a linear tribometer capable of measuring friction between a ski base made of UHMWPE (ultra-high molecular weight polyethylene) and an ice sample. The tribometer provides, for the first time, results that allow differentiation between differently modified waxes with regard to their running performance. Friction-influencing factors such as speed, temperature, and the surface pressure below the ski base can be adjusted within the range relevant for ski sports. Furthermore, the laboratory-scale test stand, which is located in a cold chamber, can accommodate not only typical ski jumping base lengths and widths but also cross-country and alpine ski bases. To verify the tribometer, a ski base was treated with three waxes of different fluorine content and measured comparatively. With a minimum of 95% confidence, the friction differences between the tested waxes depending on their fluorine content are validated and proven at the end of this work.
Pervasive human impacts are rapidly changing freshwater biodiversity. Frequently recorded exceedances of regulatory acceptable thresholds by pesticide concentrations suggest that pesticide pollution is a relevant contributor to broad-scale trends in freshwater biodiversity. A more precise pre-release Ecological Risk Assessment (ERA) might increase its protectiveness, consequently reducing the likelihood of unacceptable effects on the environment. European ERA currently neglects possible differences in sensitivity between exposed ecosystems. If the taxonomic composition of assemblages differed systematically among certain types of ecosystems, so might their sensitivity toward pesticides. In that case, a single regulatory threshold would be over- or underprotective.
In this thesis, we evaluate (1) whether the assemblage composition of macroinvertebrates, diatoms, fishes, and aquatic macrophytes differs systematically between the types of a European river typology system, and (2) whether these taxonomic differences engender differences in sensitivity toward pesticides. While a selection of ecoregion systems is available for Europe, only a single typology system that classifies individual river segments is available at this spatial scale: the Broad River Types (BRT).
In the first two papers of this thesis, we compiled and prepared large databases of macroinvertebrate (paper one), diatom, fish, and aquatic macrophyte (paper two) occurrences throughout Europe to evaluate whether assemblages are more similar within than among BRT types. Additionally, we compared the BRT's performance to that of different ecoregion systems. We employed multiple tests to evaluate the performances, two of which were also designed in these studies. All typology systems failed to reach common quality thresholds for the evaluated metrics for most taxa. Nonetheless, performance differed markedly between typology systems and taxa, with the BRT often performing worst. We showed that currently available European freshwater typology systems are not well suited to capture differences in biotic communities, and we suggest several possible improvements.
In the third study, we evaluated whether ecologically meaningful differences in sensitivity exist between BRT types. To this end, we predicted the sensitivity of macroinvertebrate assemblages across Europe toward atrazine, copper, and imidacloprid using a hierarchical species sensitivity distribution model. The predicted assemblage sensitivities differed only marginally between BRT types. The largest difference between median river type sensitivities was a factor of 2.6, which is far below the assessment factor suggested for such models (6), as well as the factor of variation commonly observed between toxicity tests of the same species-compound pair (7.5 for copper). Our results do not support the notion that a type-specific ERA might improve the accuracy of thresholds. However, in addition to the taxonomic composition, the bioavailability of chemicals, the interaction with other stressors, and the sensitivity of a given species might differ between river types.
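For orientation, a species sensitivity distribution of the common log-normal form models the fraction of species affected at concentration c as

\[ F(c) \;=\; \Phi\!\left(\frac{\log_{10} c - \mu}{\sigma}\right), \qquad \mathrm{HC}_5 \;=\; 10^{\,\mu + \sigma\, \Phi^{-1}(0.05)} \,, \]

where Φ is the standard normal distribution function and HC5 is the concentration hazardous to 5% of species; in hierarchical variants such as the model used here, μ and σ are allowed to vary, e.g. with assemblage composition. (This is the generic textbook form, not necessarily the exact parametrization of the model applied in the study.)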
Mechanistic disease spread models for different vector-borne diseases have been studied since the 19th century, and the relevance of mathematical modeling and numerical simulation of disease spread is increasing today. This thesis focuses on compartmental models of vector-borne diseases that are also transmitted directly among humans; an example of an arboviral disease in this category is the Zika virus disease. The study begins with a compartmental SIRUV model and its mathematical analysis. The non-trivial relationship between the basic reproduction numbers obtained through two methods is discussed, and the analytical results proven for this model are verified numerically. A second SIRUV model is presented with a different formulation of the model parameters; the resulting model is shown to incorporate explicitly the dependence of disease spread on the ratio of mosquito population size to human population size. In order to capture the spatial as well as the temporal dynamics of disease spread, a meta-population model based on the SIRUV model is developed. The spatial domain under consideration is divided into patches, which may denote mutually exclusive spatial entities such as administrative areas, districts, provinces, cities, states, or even countries. The research focuses only on short-term movements, i.e., the commuting behavior of humans across patches. This is incorporated into the multi-patch meta-population model using a matrix of residence-time fractions of humans in each patch. Mathematically simplified analytical results are deduced, showing, for an exemplary scenario studied numerically, that the multi-patch model admits the same threshold properties as the single-patch SIRUV model. The relevance of human commuting behavior to disease spread is presented using numerical results from this model, and local and non-local commuting are incorporated into the meta-population model in a numerical example. Finally, a PDE model is developed from the multi-patch model.
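As a sketch of this model class (a generic formulation whose parametrization may differ from the one used in the thesis), a SIRUV system couples a human SIR block, with direct transmission at rate β_h and vector-borne transmission at rate β_v, to uninfected (U) and infected (V) vectors:

\[ \begin{aligned} \dot S &= \mu (N - S) - \tfrac{\beta_v}{N} S V - \tfrac{\beta_h}{N} S I, \\ \dot I &= \tfrac{\beta_v}{N} S V + \tfrac{\beta_h}{N} S I - (\gamma + \mu) I, \\ \dot R &= \gamma I - \mu R, \\ \dot U &= \psi (M - U) - \tfrac{\vartheta}{N} U I, \\ \dot V &= \tfrac{\vartheta}{N} U I - \psi V, \end{aligned} \]

with constant human and mosquito population sizes N and M. The ratio M/N enters through the vector equations, which is exactly the dependence made explicit in the reformulated model mentioned above.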
Cancer, a complex and multifaceted disease, continues to challenge the boundaries of biomedical research. In this dissertation, we explore the complexity of cancer genesis, employing multiscale modeling, abstract mathematical concepts such as stability analysis, and numerical simulations as powerful tools to decipher its underlying mechanisms. Through a series of comprehensive studies, we mainly investigate cell cycle dynamics, the delicate balance between quiescence and proliferation, the impact of mutations, and the co-evolution of healthy and cancer stem cell lineages. The introductory chapter provides a comprehensive overview of cancer and the critical importance of understanding its underlying mechanisms. Additionally, it establishes the foundation by elucidating key definitions and presenting various modeling perspectives on cancer genesis. Next, cell cycle dynamics are explored, revealing the temporal oscillations that govern the progression of cells through the cell cycle.
The first half of the thesis investigates the cell cycle dynamics and evolution of cancer stem cell lineages by incorporating feedback regulation mechanisms. The pivotal role of feedback loops in driving the expansion of cancer stem cells is thoroughly studied, offering new perspectives on cancer progression. Furthermore, the mathematical rigor of the model is addressed by deriving well-posedness conditions, thereby strengthening the reliability of our findings and conclusions. Expanding our modeling scope, we then explore the interplay between quiescent and proliferating cell populations, shedding light on the importance of their equilibrium in cancer biology. The models developed in this context offer potential avenues for targeted cancer therapies addressing the respective cell populations critical for cancer progression. The second half of the thesis focuses on multiscale modeling of proliferating and quiescent cell populations incorporating cell cycle dynamics, and on its extension with mutation acquisition. Following rigorous mathematical analysis, the well-posedness of the proposed modeling frameworks is studied along with steady-state solutions and stability criteria.
In a nutshell, this thesis represents a significant stride in our understanding of cancer genesis, providing a comprehensive view of the complex interplay between cell cycle dynamics, quiescence, proliferation, mutation acquisition, and cancer stem cells. The journey towards conquering cancer is far from over. However, this research provides valuable insights and directions for future investigation, bringing us closer to the ultimate goal of mitigating the impact of this formidable disease.
In this thesis, material removal mechanisms in grinding are investigated considering both the grit-workpiece and the grinding wheel-workpiece interaction. For the grit-workpiece interaction at the micrometer scale, single-grit scratch experiments were performed to investigate the material removal mechanisms in grinding, namely rubbing, plowing, and cutting. The experiments were analyzed with respect to material removal, process forces, and specific energy. A finite element model is developed to simulate a single-grit scratch process. As part of the development of the finite element scratch model, 2D and 3D models are built. The 2D model is used to calibrate material parameters and to test various mesh discretization approaches. The 3D model, adopting the material parameters calibrated with the 2D model, is tested against experimental results for various mesh discretizations. The simulation model is validated based on process forces and the ground topography from experiments, and is further scaled to simulate multiple grit-workpiece interactions, again validated against experimental results. As a final step, simulation models are developed to simulate the material removal due to the interaction of grinding wheel and workpiece. A virtual grinding wheel topography model is employed to demonstrate an approach for upscaling a grinding process from grit-workpiece to wheel-workpiece interaction. In conclusion, practical recommendations and the scope for future studies are derived from the developed simulation models.
To promote local mobility, and in particular walking as the most basic form of mobility, all people, and especially those with mobility impairments (needs groups), must be able to participate in the public traffic space. Participation for all can only be achieved by means of a barrier-free environment. In this context it is necessary to create a continuously barrier-free pedestrian network, which requires that pedestrian facilities (walkways, crossings, stairs, ramps, and elevators) be designed accordingly. A comprehensible and practice-oriented procedure for assessing the accessibility of pedestrian networks, however, does not currently exist. This is where the present research comes in: by developing a procedure for assessing the existing accessibility of pedestrian networks by means of quality levels, a practical application tool is created. It is aimed at the responsible actors, e.g. in planning, politics, and administration, to enable the prioritization and implementation of measures for removing barriers.
The assessment procedure is based on interviews and surveys of experts and needs groups, with a focus on people with motor and visual impairments. The surveys addressed the degree of difficulty experienced by each needs group when using pedestrian facilities in public space that do not comply with the specifications of the technical regulations. The assessment procedure translates accessibility into an understandable and comprehensible quantity by converting the difficulties into a perceived additional distance. In addition to the perceived additional distance, the actual additional distance caused by detours is also taken into account. Building on the assessment of individual pedestrian facilities, routes and connections as well as entire pedestrian networks can then be evaluated. The basic sequence of the assessment procedure is the same for all needs groups; it consists of four essential steps and yields one of six quality levels of accessibility (QSB, levels A to F) in each case. Within this research it is established that, for the majority of the considered needs groups, the transition from level D to level E marks the boundary between independence and the need for outside assistance when using pedestrian facilities.
The developed assessment procedure offers a sound basis for evaluating pedestrian networks with regard to accessibility. Owing to its modularity and flexibility, both further aspects and further needs groups can be integrated. Continuous application of the procedure and consideration of accessibility from the outset in every planning process are important, as is the legal anchoring of a step-by-step barrier-free redesign in accordance with recognized technical regulations. Only in this way can a continuously barrier-free network emerge, enabling all people, with or without mobility impairments, to participate in the public traffic space without outside assistance. In addition, the increased attractiveness can promote local mobility, helping to convince people to walk, or to use their wheelchair, for short distances. Ultimately, a reduction of CO2 emissions is also conceivable if no car, or the car less often, is used for short trips. Walking is the most sustainable and environmentally friendly means of transportation, and a barrier-free environment thus ultimately contributes to climate protection.
Climate change will have severe consequences for Eastern Boundary Upwelling Systems (EBUS). Owing to their tremendous primary production, they host the largest fisheries in the world, supporting the livelihoods of millions of people. It is therefore of utmost importance to better understand predicted impacts, such as changing upwelling intensities and reduced light availability, on the structure and the trophic role of protistan plankton communities, as they form the basis of the food web. Numerical models predict an intensification in the frequency of eddy formation. These ocean features are of particular importance due to their influence on the distribution and diversity of plankton communities and on the access to resources, which are still not well understood. My PhD thesis comprises two subjects conducted within the large-scale cooperation projects REEBUS (Role of Eddies in Eastern Boundary Upwelling Systems) and CUSCO (Coastal Upwelling System in a Changing Ocean).
Subject I of my study was conducted within the multidisciplinary framework REEBUS to investigate the influence of eddies on the biological carbon pump in the Canary Current System (CanCS). More specifically, the aim was to find out how mesoscale cyclonic eddies affect the regional diversity, structure, and trophic role of protistan plankton communities in a subtropical oligotrophic oceanic offshore region.
Samples were taken during the M156 and M160 cruises in the Atlantic Ocean around Cape Verde during July and December 2019, respectively. Three eddies with varying ages of emergence and three water layers (deep chlorophyll maximum DCM, right beneath the DCM and oxygen minimum zone OMZ) were sampled. Additional stations without eddy perturbation were analyzed as references. The effect of oceanic mesoscale cyclonic eddies on protistan plankton communities was analyzed by implementing three approaches. (i) V9 18S rRNA gene amplicons were examined to analyze the diversity and structure of the plankton communities and to infer their role in the biological carbon pump. (ii) By assigning functional traits to taxonomically assigned eDNA sequences, functional richness and ecological strategies (ES) were determined. (iii) Grazing experiments were conducted to assess abundance and carbon transfer from prokaryotes to phagotrophic protists.
All three eddies examined in this study differed in their ASV abundance, diversity, and taxonomic composition, with the most pronounced differences in the DCM. Dinoflagellates were the most abundant taxa in all three depth layers; other dominant taxa were radiolarians, Discoba, and haptophytes. The trait approach could assign only ~15% of all ASVs and revealed a generally high functional richness, but no unique ES within a specific eddy. This indicates pronounced functional redundancy, which is recognized to correlate with ecosystem resilience and robustness by providing a degree of buffering capacity in the face of biodiversity loss. Elevated microbial abundances as well as bacterivory were clearly associated with mesoscale eddy features, albeit with remarkable seasonal fluctuations. Since eddy activity is expected to increase on a global scale in future climate change scenarios, cyclonic eddies could counteract climate change by enhancing carbon sequestration to abyssal depths. The findings demonstrate that cyclonic eddies are unique, heterogeneous, and abundant ecosystems with trapped water masses in which characteristic protistan plankton communities develop as the eddies age and migrate westward into subtropical oligotrophic offshore waters. Eddies therefore influence regional protistan plankton diversity both qualitatively and quantitatively.
Subject II of my PhD project contributed to the CUSCO field campaign to identify the influence of varying upwelling intensities in combination with distinct light treatments on the whole food web structure and the carbon pump in the Humboldt Current System (HCS) off Peru. To this end, eight offshore mesocosms were deployed, and two light scenarios (low light, LL; high light, HL) were created by darkening half of the mesocosms. Upwelling was simulated by injecting distinct proportions (0%, 15%, 30%, and 45%) of collected deep water (DW) into each of the moored mesocosms. My aim was to examine the changes in the diversity, structure, and trophic role of protistan plankton communities in response to the induced manipulations by analyzing V9 18S rRNA gene amplicons and performing short-term grazing experiments.
The upwelling simulations induced a significant increase in alpha diversity under both light conditions. In the austral summer simulation, reflected by the HL conditions, a generally higher alpha diversity was recorded than in the austral winter simulation, represented by the LL treatment. Significant alterations of the protistan plankton community structure were likewise observed. Diatoms were associated with increased levels of DW addition in the mimicked austral winter situation, while under nutrient depletion chlorophytes exhibited high relative abundances in the same scenario. Dinoflagellates dominated the austral summer condition in all upwelling simulations. Tendencies toward reduced unicellular eukaryote and increased prokaryotic abundances were determined under light impediment, and protistan-mediated mortality of prokaryotes decreased by ~30% in the mimicked austral winter scenario.
The findings indicate that, in the HCS off Peru, the microbial loop is a more relevant factor in structuring the food web in austral summer, whereas the food web relies more on the utilization of diatoms in austral winter. It was evident that distinct light intensities coupled with multiple upwelling scenarios can lead to alterations in biochemical cycles, trophic interactions, and ecosystem services. Considering the threat of climate change, the predicted relocation of EBUS could limit primary production and lengthen the food web structure, with severe socio-economic consequences.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as the discretization paradigm, combining geometric representations based on Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations. Secondly, we concentrate primarily on mixed formulations that exhibit a saddle-point structure and are generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused. A second method addresses the question of how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, where we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
The aim of this thesis is to introduce an equilibrium insurance market model and study its properties and possible applications in risk class management.
First, an insurance market model based on an equilibrium approach is developed. Depending on the premium, the insured choose the amount of coverage they buy in order to maximize their expected utility. The behavior of the insurer in different market regimes is then compared. While premiums in markets with perfect competition are calculated so that no profit is made at all, insurers in a monopolistic market try to maximize their margins.
In markets modeled in this way, several phenomena become evident. Perhaps the most important one is the so-called push-out effect: when customers with different attributes are insured together, insurance might become so expensive for one type of customer that those agents are better off buying no insurance at all. The push-out effect has already been shown for theoretical examples in the literature. We present a comprehensive analysis of the equilibrium insurance market model and the push-out effect for different insurance products, such as life, health, and disability insurance contracts, using real-life data from different sources. In a concluding chapter we formulate indicators for when a push-out can be expected and when it cannot.
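As a hedged toy illustration of the push-out mechanism (the utility function, premium loading, and all numbers are assumptions, not the thesis's calibration): an expected-utility maximizer chooses coverage at a pooled premium rate; if the rate is loaded far above the agent's own risk, the optimal coverage drops to zero and the agent leaves the market.

```python
# Toy push-out check: optimal coverage q under exponential utility when one
# pooled premium rate is charged to customers with different loss risks.
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_coverage(p, loss, wealth, pi, risk_aversion=1.0):
    u = lambda x: -np.exp(-risk_aversion * x)          # exponential utility
    def neg_eu(q):                                     # negative expected utility
        return -(p * u(wealth - loss + q - pi * q)
                 + (1 - p) * u(wealth - pi * q))
    res = minimize_scalar(neg_eu, bounds=(0.0, loss), method="bounded")
    return res.x

low_risk, high_risk = 0.01, 0.10                       # loss probabilities
pooled_premium = 0.08                                  # one rate for both types
print(optimal_coverage(low_risk, 1.0, 2.0, pooled_premium))   # ~0: pushed out
print(optimal_coverage(high_risk, 1.0, 2.0, pooled_premium))  # ~full coverage
```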
Machine learning regression approaches such as neural networks have gained vast popularity in recent years. The exponential growth of computing power has enabled larger and more evolved networks that can perform increasingly complex tasks. Our feasibility study on the use of neural networks for the regression of equilibrium insurance premiums shows that this regression is quite robust and that the risk of overfitting can almost be excluded -- as long as the regression is performed on at least a few thousand data points.
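A minimal sketch of such a feasibility check on synthetic data (the data-generating process, feature names, and network size are illustrative assumptions), comparing train and test scores to gauge overfitting:

```python
# Fit a small feed-forward network on a few thousand synthetic premium
# observations and compare train vs. test R^2 as an overfitting indicator.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 4))                        # e.g. age, health score, ...
y = 0.5 * X[:, 0] + X[:, 1] ** 2 + 0.05 * rng.normal(size=5000)  # synthetic premium

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("train R^2:", model.score(X_tr, y_tr), "test R^2:", model.score(X_te, y_te))
```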
Grouping customers of different risk types into contracts is important for the stability and the robustness of an insurance market. This motivates the study of the optimal assignment of risk classes into contracts, also known as rating classes. We provide a theoretical framework that makes use of techniques from different mathematical fields such as non-linear optimization, convex analysis, herding theory, game theory and combinatorics. In addition, we are able to show that the market specifications have a large impact on the optimal allocation of risk classes to contracts by the insurer. However, there does not need to be an optimal risk class assignment for each of these specifications.
To address this issue, we present two different approaches, one more theoretical and another that can easily be implemented in practice. An extension of our model to markets with capacity constraints rounds off the topic and extends the applicability of our approach.
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
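As a rough illustration of such a social-force time step (parameter values are Helbing-style but otherwise illustrative; in the thesis the desired direction would come from the gradient of the Eikonal solution, whereas here it is simply passed in):

```python
# One explicit Euler step of a basic social-force model; x, v, desired_dir
# are (n, 2) arrays of positions, velocities, and unit desired directions.
import numpy as np

def social_force_step(x, v, desired_dir, dt, v0=1.3, tau=0.5, A=2.0, B=0.3):
    f = (v0 * desired_dir - v) / tau                  # relax toward desired velocity
    for i in range(len(x)):
        d = x[i] - np.delete(x, i, axis=0)            # offsets to other pedestrians
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        f[i] += (A * np.exp(-dist / B) * d / dist).sum(axis=0)  # repulsive forces
    return x + dt * v, v + dt * f
```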
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic, modeling the interactions of vehicles following lane traffic with the car-following approach. We observe how the deceleration and braking mechanism of vehicles is executed at pedestrian crossings depending on the right of way on the roads.
As a second objective, we study the disease contagion in moving crowds. We consider the influence of the crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from the kinetic equations based on a social force model. It is coupled along with an Eikonal equation to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times has also been modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density which affect the contact time and, consequently, the rate of spread of infection.
Finally, the social force model is compared to a variable-speed, rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios the collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a mesh-free particle method based on least squares approximations. The mesh-free numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
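As a hedged sketch of the least-squares building block of such a mesh-free method (the Gaussian weight function and all details are illustrative assumptions, not the thesis's scheme), the following estimates a gradient from scattered neighbor particles:

```python
# Weighted least-squares gradient estimate at a point x0 from neighbor
# particles, the core operation of mesh-free particle methods.
import numpy as np

def mls_gradient(x0, neighbors, values, value0, h=0.5):
    d = neighbors - x0                                 # offsets to neighbors
    w = np.exp(-(np.linalg.norm(d, axis=1) / h) ** 2)  # Gaussian weights
    sw = np.sqrt(w)
    A = d * sw[:, None]                                # weighted linear system for
    b = (values - value0) * sw                         # first-order Taylor terms
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad                                        # estimate of grad u(x0)

# toy check on u(x, y) = 2x + 3y: expected gradient [2, 3]
pts = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.1], [0.1, -0.1]])
print(mls_gradient(np.zeros(2), pts, 2 * pts[:, 0] + 3 * pts[:, 1], 0.0))
```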
Lubricated tribological contact processes are important both in nature and in many technical applications. Fluid lubricants play an important role in contact processes, e.g. they reduce friction and cool the contact zone. The fundamentals of lubricated contact processes on the atomistic scale are, however, not yet fully understood. A lubricated contact process is defined here as a process in which two solid bodies that are in close proximity, and eventually in parts in direct contact, carry out a relative motion, while the remaining volume is submerged in a fluid lubricant. Such lubricated contact processes are difficult to examine experimentally, and atomistic simulations are an attractive alternative for investigating their fundamentals. In this work, molecular dynamics simulations were used for studying different elementary processes of lubricated tribological contacts. A simplified, yet realistic simulation setup was developed for that purpose using classical force fields. In particular, the two solid bodies were fully submerged in the fluid lubricant such that the squeeze-out was realistically modeled. The velocity of the relative motion of the two solid bodies was imposed as a boundary condition. Two types of cases were considered: i) a model system based on synthetic model substances, which enables a direct, but generic, investigation of the influence of molecular interaction features on the contact process; and ii) real substance systems, where the force fields describe specific real substances. Using the model system i), the reproducibility of the findings obtained from the computer experiments was also critically assessed. In most cases, the dry reference case was studied as well. Both mechanical and thermodynamic properties were studied, focusing on the influence of lubrication. The following properties were considered: the contact forces, the coefficient of friction, the dislocation behavior in the solid, the chip formation and the formation of the groove, the squeeze-out behavior of the fluid in the contact zone, the local temperature and the energy balance of the system, the adsorption of fluid particles on the solid surfaces, as well as the formation of a tribofilm. Systematic studies were carried out to elucidate the influence of the wetting behavior, of the molecular architecture of the lubricant, and of the lubrication gap height on the contact process. As expected, the presence of a fluid lubricant reduces the temperature in the vicinity of the contact zone. The presence of the lubricant is, moreover, found to have a significant influence on the friction and on the energy balance of the process. A lubricant reduces the coefficient of friction compared to the dry case in the starting phase of a contact process, while lubricant molecules remain in the contact zone between the two solid bodies. This is a result of an increased normal and a slightly decreased tangential force in the starting phase. When the fluid molecules are squeezed out with ongoing contact time and the contact zone is essentially dry, the coefficient of friction is increased by the presence of a fluid compared to the dry case. This is attributed to an imprinting of individual fluid particles into the solid surface, which is energetically unfavorable. By studying the contact process over a wide range of gap heights, the entire range of the Stribeck curve is obtained from the molecular simulations.
Thereby, the three main lubrication regimes of the Stribeck curve and their transition regions are covered, namely boundary lubrication (significant elastic and plastic deformation of the substrate), mixed lubrication (adsorbed fluid layers dominate the process), and hydrodynamic lubrication (a shear flow is set up between the surface and the asperity). The atomistic effects in the different lubrication regimes are elucidated. Notably, the formation of a tribofilm is observed, in which lubricant molecules are embedded in the metal surface; this is found to have important consequences for the contact process. The work done by the relative motion is found to mainly dissipate and thereby heat up the system, while only a minor part of the work causes plastic deformation. Finally, the assumptions, simplifications, and approximations applied in the simulations are critically discussed, which highlights possible future work.
Reactive absorption with amines is the most important technique for the removal of CO2 from gas streams, e.g. from flue gas, natural gas, or off-gas from the cement industry. In this work, a rigorous simulation model for the absorption and desorption of CO2 with an amine-containing solvent is validated using data from pilot plants of various sizes. This model was then coupled with a detailed simulation of a coal-fired power plant. The drop in power generation efficiency with CO2 capture was determined, and process parameters in the power plant and the separation process were optimized. It was shown that the high energy demand of CO2 separation significantly reduces power generation efficiency, which underlines the need for improvements. These can be achieved by better solvents or by advanced process designs. In this work, such improved CO2 separation processes are described and evaluated by detailed simulation studies.
In order to develop detailed rigorous simulation models for reactive absorption with novel solvent systems, precise knowledge of the liquid phase reaction kinetics is necessary. There are well-established techniques for measuring species distributions in equilibrated aqueous amine solutions by NMR spectroscopy. However, the existing NMR techniques cannot be used for monitoring fast reactions in these solutions. Therefore, in this work a novel temperature-controlled micro-reactor NMR probe head was developed which enables studying reaction kinetics with time constants in the range of seconds. On this basis, modern solvent systems for CO2 absorption can be characterized, and the scale-up of separation processes for future plants can be accompanied by rigorous process simulation.
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints, the system-wide problem can be decomposed into individual subproblems, which can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
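As a rough, self-contained illustration of the quadratic-surrogate idea (the toy subproblems and the plain surrogate update below are assumptions for demonstration only, not the thesis's algorithms, which additionally employ cutting planes and quasi-Newton updates):

```python
# Fit a quadratic surrogate of the dual function from samples and jump to
# its maximizer -- the basic step behind a quadratically approximated dual ascent.
import numpy as np

def dual_function(lam):
    # Toy problem: two subsystems share one unit of a resource.
    # Subsystem i solves min_x (x - t_i)^2 + lam * x, giving x_i = t_i - lam / 2.
    t = np.array([0.8, 0.7])
    x = t - lam / 2.0
    return float(np.sum((x - t) ** 2 + lam * x) - lam * 1.0)

lam_samples = [0.0, 0.5, 1.0]                      # initial dual-variable samples
for _ in range(5):
    q = [dual_function(l) for l in lam_samples]
    a, b, c = np.polyfit(lam_samples, q, 2)        # quadratic surrogate of q
    lam_samples.append(max(-b / (2 * a), 0.0))     # jump to surrogate maximizer
print("approximate optimal dual variable:", lam_samples[-1])   # -> 0.5
```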
This work investigates the co-consolidation during thermoforming between continuously fiber-reinforced, partially consolidated CF/PEEK tape preforms and continuously fiber-reinforced, fully consolidated CF/PEEK tape laminates. Co-consolidation refers to the creation of a welded joint between two or more thermoplastics by heating them separately, bringing the joining surfaces together, and rapidly cooling them under pressure in an isothermal mold. The targeted application is the welding of stiffeners onto tape preforms during thermoforming, so that downstream joining processes for such stiffeners become obsolete and the cycle time of thermoforming remains unchanged.
The results show that the degree of partial consolidation of the tape preforms, independent of the chosen mold pressure settings, has no influence on the consolidation of the tape laminates after thermoforming. In the region of a stiffener, a comparatively higher mold pressure is required to consolidate the partially consolidated tape preform, so that the same properties are achieved there as away from the co-consolidation zone. The lap shear strengths measured between tape laminate and stiffener that are produced by co-consolidation during thermoforming are lower than those produced by co-consolidation in an autoclave.
The group 4 tri(tert-butyl)cyclopentadienyl trichlorides [Cp'''MCl3] (M = Ti, Zr, Hf), already obtained by Zhou in 1994, could be reproduced, crystallized, and structurally characterized. New di- and tri(tert-butyl)cyclopentadienyl zirconium bromides and iodides were also synthesized. Crystals of [Cp''ZrI3] suitable for X-ray diffraction were obtained, allowing the structure of the compound to be elucidated. In substitution experiments with further ligands, hydrido clusters were obtained. Structural investigations revealed a cluster complex with the formula (Cp''Zr)4(μ-H)8(μ-Cl)2, a tetranuclear zirconium cluster bridged by eight hydrido and two chlorido ligands, with each zirconium atom additionally bearing a di(tert-butyl)cyclopentadienyl ligand. While investigating the course of the reaction, a further Zr cluster was found: crystals of tris{di(tert-butyl)cyclopentadienyl-di(μ-hydrido)zirconium} {chloridotri(μ-hydrido)aluminate} suitable for X-ray diffraction were obtained. This cluster consists of three zirconium atoms arranged in a triangle, each pair bridged by two hydrido ligands; each zirconium is connected via a hydrido bridge to an aluminum chloride fragment, and each zirconium atom additionally carries one di(tert-butyl)cyclopentadienyl ligand. Furthermore, experiments were conducted toward alkyl derivatives of the hitherto unknown parent zirconocene Cp2Zr. For this purpose, zirconium tetrachloride was reduced with n-butyllithium to the dichloride ZrCl2(THF)2. The reduction product was treated with sodium tetra(isopropyl)cyclopentadienide, sodium tri(tert-butyl)cyclopentadienide, or lithium penta(isopropyl)cyclopentadienide. The results show no unambiguous formation of zirconocenes; however, a tri(tert-butyl)cyclopentadienyllithium salt was obtained, whose structure could be elucidated.
Velocity-based training is an approach to load management in resistance training that uses the volitionally maximal mean concentric velocity against a given load to control training intensity, and the extent of the intra-set concentric velocity loss to control intra-set muscular fatigue. However, the prerequisite inherent in this approach, namely moving at volitionally maximal concentric velocities, means that controlling muscular fatigue on the basis of the relative velocity loss is not feasible when resistance training is performed at volitionally submaximal velocities. This doctoral project therefore addressed the overarching research question of the extent to which an adapted approach to velocity-based load management in resistance training, based on the minimum velocity threshold (MVT) and using a "relative stopping velocity threshold" (RSVT, calculated as a multiple of the MVT in percent) for the objective autoregulation of set duration, is suitable for controlling the degree of intra-set muscular fatigue in training sets performed at volitionally submaximal concentric velocity.
To answer this overarching research question, an explanatory, prospective study with a quasi-experimental design was conducted. At a first appointment, the individual one-repetition maximum (1-RM) for the barbell exercises bench press and deadlift was determined for all participants, and the actual testing was carried out at a second appointment. At this second session, one test set at volitionally maximal and one test set at volitionally submaximal concentric velocity were performed per exercise at a standardized intensity of 75% 1-RM, while the concentric velocity of each repetition was recorded with an inertial measurement unit in order to examine the fatigue-induced velocity loss of the repetitions at the end of a set performed to exhaustion.
In answer to the overarching research question, it can be concluded that the RSVT is in principle suitable for controlling intra-set muscular fatigue in resistance training at volitionally submaximal concentric velocity. For fitness- and health-oriented individuals, a target corridor of RSVT = 171.4 - 186.6% MVT was derived. If a set of barbell bench press at an intensity of 75% 1-RM and volitionally submaximal concentric velocity is continued until the mean concentric velocity (MV) of a repetition drops into this target corridor due to fatigue, two to three further repetitions should still be feasible before the point of momentary concentric muscle failure is reached. For performance-oriented individuals in a trained state, a target corridor of RSVT = 183.8 - 211.3% MVT was derived. If the measured MV of a repetition drops into this corridor due to fatigue, it can be assumed with reasonable certainty that one to two further repetitions can still be performed before the point of momentary concentric muscle failure.
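To make the corridor arithmetic concrete, here is a brief worked example, assuming a hypothetical individual MVT of 0.17 m/s for the bench press (an illustrative value, not from the study):

```python
# Convert an RSVT corridor (given as % of MVT) into absolute stopping
# velocities; the MVT value below is hypothetical, for illustration only.
mvt = 0.17                       # assumed individual MVT in m/s
corridor = (1.714, 1.866)        # fitness-oriented corridor: 171.4 - 186.6% MVT
stop_band = tuple(round(mvt * f, 3) for f in corridor)
print(stop_band)                 # -> (0.291, 0.317): stop when MV falls here
```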
Through this further development of velocity-based training, the present dissertation provides an adapted control approach that, for the first time, makes it possible to meaningfully apply velocity-based load management in resistance training even at volitionally submaximal concentric velocities. Owing to the limitations of the study, however, further scientific research is required to investigate the validity, transferability, and effectiveness of the MVT-based control approach.
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increased degree of digitalization and the personalization of goods and products require an enhanced and flexible robot deployment by operating several multi-robot systems along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying the required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced, consisting of two decoupled optimization-based control policies: a binary problem for task planning and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Worm gear units are usually made of a steel worm and a bronze worm wheel. They are used for the single-stage transmission of rotary motion at high gear ratios. A disadvantage of worm gear units is the relatively high wear caused by the high sliding friction in the tooth contact. Suitable lubrication can reduce friction and wear, which lowers the temperature rise during operation and thus extends the service life of the gear unit. Owing to their pronounced cooling effect, lubricating oils are predominantly used for worm gear units in practice. Grease-type lubricants are also used but provide less cooling than liquid lubricants. In vacuum applications or under extreme operating conditions, such as high- or low-temperature applications and low hydrodynamic speeds, the conventional lubricants mentioned above lose their lubricating effect. Solid lubricants are used as an alternative.
Solid lubricants can generally be applied to the contact points of machine elements in various ways. In this work, the principle of transfer lubrication by means of a sacrificial component is employed. Compounds of radiation-modified polytetrafluoroethylene (PTFE) and polyamide (PA) are used as the sacrificial component in the worm gear unit, so that the steel worm meshes simultaneously with the bronze worm wheel and the sacrificial wheel made of the PA-PTFE compound. Loading the sacrificial wheel with a relatively small torque causes it to wear, releasing the PTFE solid lubricant and depositing it on the steel surface. This leads to the formation of a transfer film, which lubricates the contact between the steel worm and the bronze worm wheel. The mechanisms of the build-up and degradation of such transfer films in worm gear units are currently unknown and are investigated in this work by means of experimental studies. For this purpose, tribological tests were carried out on model test rigs to examine the friction and wear behavior of steel-bronze contacts. The block-on-ring, block-on-two-discs, and three-disc test rigs served as model test rigs. Subsequently, component tests were performed on a worm gear test rig to validate the findings from the model tests. Surface-analytical techniques were used to examine the test specimens on the microscale in order to determine the quality and quantity of the transfer film formed.
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions.
By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil with half-lives of 20 - 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals.
However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
Gliomas are one of the most common types of primary brain tumors. Among those, high-grade astrocytomas, so-called glioblastoma multiforme, are the most aggressive type of cancer originating in the brain, leaving patients a median survival time of 15 to 20 months after diagnosis. The invasive behavior of the tumor leads to considerable difficulties regarding the localization of all tumor cells and thus impedes successful therapy. Here, mathematical models can help to enhance the assessment of the tumor's extent.
In this thesis, we set up a multiscale model for the evolution of a glioblastoma. Starting on the microscopic level, we model subcellular binding processes and velocity dynamics of single cancer cells. From the resulting mesoscopic equation, we derive a macroscopic equation via scaling methods. Combining this equation with macroscopic descriptions of the tumor environment, a nonlinear PDE-ODE system is obtained. We consider several variations of the derived model, among others introducing a new model for therapy with Gliadel wafers, a treatment approach indicated, inter alia, for recurrent glioblastoma.
We prove global existence of a weak solution to a version of the developed PDE-ODE system containing degenerate diffusion and flux limitation in the taxis terms of the tumor equation. The nonnegativity and boundedness of all components of the solution by their biological carrying capacities are shown. Finally, 2D simulations are performed, illustrating the influence of different parts of the model on tumor evolution. The effects of treatment with Gliadel wafers are compared to the therapy outcomes of classical chemotherapy in different settings.
This dissertation deals with the synthesis and characterization of titanium, zirconium, and hafnium complexes coordinated by bulky alkylcyclopentadienyl ligands. Primarily tert-butyl-substituted Cp derivatives were used, but isopropylcyclopentadienyl ligands, which are less established in group 4 chemistry, were employed as well. UV-Vis spectroscopic investigations revealed correlations of the absorption maxima and intensities with the substitution pattern of the Cp ligand, the transition metal, and the other coordinating ligands.
By substitution reactions, 2,6-diisopropylphenolato, 2,6-di-tert-butylphenolato, and 3,5-dimethylpyrazolido complexes were prepared. Compounds with bidentate ligands were synthesized using sodium acetate, potassium pivalate, and lithium benzoate. Acid-base reactions starting from Cp''TiBr2N(TMS)2 enabled the introduction of monodentate ligands such as pyrrolidine, piperidine, and tert-butylamine. The introduction of bidentate ligands such as N,N'-diisopropyl-o-phenylenediamine and N,N'-dimethylethylenediamine was feasible via the "constrained geometry complex" ansa-Cp'(Me2SitBuN-κN)TiCl2.
In the course of the reduction of Cp''TiBr3 with manganese, small amounts of [(Cp''TiBr2)2(µ-O)] were formed alongside the predominant [Cp''TiBr(µ-Br)]2, owing to nearly unavoidable hydroxide contamination of the reducing agent. The reaction of [RCpTiBr(µ-Br)]2 with the TEMPO radical under narrowly defined reaction conditions allowed the preparation of Cp''TiBr2(TEMPO), Cp''TiBr(TEMPO)2, and Cp'''TiBr2(TEMPO), whose tendency toward homolytic Ti–O bond dissociation was determined by quantitative ESR experiments at room temperature. Using the TEMPO ligand, Cp''ZrCl2(TEMPO), Cp''ZrCl(TEMPO)2, and Cp''HfCl(TEMPO)2 were prepared at room temperature and investigated by X-ray crystallography. The compound class of carboxamides, in the form of N,N-dimethylisobutyramide, N,N-dimethylacetamide, 1,3-dimethylimidazolidinone, tetramethylurea, and acetamide, enabled subsequent reactions to new titanium(III) complexes, which form zwitterionic structures upon coordination. Where possible, all synthesized complexes were characterized by CHNS elemental analysis, X-ray structure analysis, melting point determination, NMR, IR, UV-Vis, and ESR spectroscopy, as well as SQUID magnetometry.
An Efficient Automated Machine Learning Framework for Genomics and Proteomics Sequence Analysis
(2023)
Genomics and Proteomics sequence analyses are the scientific studies of understanding the language of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein biomolecules, with the objective of controlling the production of proteins and understanding their core functionalities. They help to detect chronic diseases in early stages, the root causes of clinical changes, key genetic targets for pharmaceutical development, and the optimization of therapeutics for various age groups. Most Genomics and Proteomics sequence analysis work is performed using typical wet-lab experimental approaches that make use of different genetic diagnostic technologies. However, these approaches are costly, time consuming, and skill and labor intensive, and they thus slow down the development of an efficient and economical sequence analysis landscape essential to demystify a variety of cellular processes and the functioning of biomolecules in living organisms. To empower manual wet-lab-driven research, many machine learning based approaches have been developed in recent years, but their limited performance prevents their use in practical environments. Considering the sensitive and inherently demanding nature of Genomics and Proteomics sequence analysis, where misdiagnosis can have far-reaching and serious repercussions, the main objective of this research is to develop an efficient automated computational framework for Genomics and Proteomics sequence analysis using the predictive and prescriptive analytical powers of Artificial Intelligence (AI) to significantly improve healthcare operations.
The proposed framework comprises three main components, namely sequence encoding, feature engineering, and a discrete or continuous value predictor. The sequence encoding module is equipped with a variety of existing and newly developed sequence encoding algorithms that are capable of generating a rich statistical representation of raw DNA, RNA, and protein sequences. The feature engineering module offers diverse types of feature selection and dimensionality reduction approaches which can be used to generate the most effective feature space. Furthermore, the discrete and/or continuous value predictor module contains a wide range of existing machine learning and newly developed deep learning regressors and classifiers. To evaluate the integrity and generalizability of the proposed framework, we have performed large-scale experimentation over diverse types of Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA, and proteins).
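As a hedged sketch of this three-stage structure (encoding, feature engineering, predictor), the following toy pipeline uses a k-mer count encoder, PCA, and a random forest as illustrative stand-ins for the framework's actual modules:

```python
# Toy encoding -> feature engineering -> predictor pipeline for DNA sequences.
from itertools import product
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def kmer_counts(seqs, k=3, alphabet="ACGT"):
    # sequence encoding: map each raw sequence to a k-mer count vector
    vocab = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    X = np.zeros((len(seqs), len(vocab)))
    for r, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            j = vocab.get(s[i:i + k])
            if j is not None:
                X[r, j] += 1
    return X

seqs = ["ACGTACGTGG", "TTGACCATGA"]                  # toy DNA sequences
labels = [1, 0]                                      # e.g. modified vs. unmodified
model = Pipeline([("reduce", PCA(n_components=2)),   # feature engineering
                  ("clf", RandomForestClassifier(random_state=0))])  # predictor
model.fit(kmer_counts(seqs), labels)
```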
In Genomics analysis, epigenetic modification detection is one of the key components. It helps clinical researchers and practitioners to distinguish normal cellular activities from malfunctioning ones, which can lead to diverse genetic disorders such as metabolic disorders, cancers, etc. To support this analysis, the proposed framework is used to solve the problem of DNA and histone modification prediction, where it achieved state-of-the-art performance on 27 publicly available benchmark datasets of 17 different species, with a best accuracy of 97%. RNA sequence analysis is another vital component of Genomics sequence analysis, where the identification of different coding and non-coding RNAs as well as their subcellular localization patterns helps to demystify the functions of diverse RNAs and the root causes of clinical changes, and to develop precision medicine and optimize therapeutics. To support this analysis, the proposed framework is utilized for non-coding RNA classification and multi-compartment RNA subcellular localization prediction, where it achieved state-of-the-art performance on 10 publicly available benchmark datasets of Homo sapiens and Mus musculus species, with a best accuracy of 98%.
Proteomics sequence analysis is essential to demystify virus pathogenesis, host immunity responses, the way proteins affect or are affected by cell processes, and their structure and core functionalities. To support this analysis, the proposed framework is used for host protein-protein and virus-host protein-protein interaction prediction. It achieved state-of-the-art performance on 2 publicly available protein-protein interaction datasets of Homo sapiens and Mus musculus species with a best accuracy of 96%, and on 7 viral-host protein-protein interaction datasets of multiple hosts and viruses with a best accuracy of 94%. Considering the performance and practical significance of the proposed framework, we believe it will help researchers in developing cutting-edge practical applications for diverse Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA, and proteins).
Emission trading systems (ETS) represent a widely used instrument to control greenhouse
gas emissions, while minimizing reduction costs. In an ETS, the desired amount of emissions in
a predefined time period is fixed in advance; corresponding to this amount, tradeable allowances
are handed out or auctioned to companies which underlie the system. Emissions which are not
covered by an allowance are subject to a penalty at the end of the time period.
Emissions depend on non-deterministic parameters such as weather and the state of the
economy. Therefore, it is natural to view emissions as a stochastic quantity. This introduces a
challenge for the companies involved: In planning their abatement actions, they need to avoid
penalty payments without knowing their total amount of emissions. We consider a stochastic control approach to address this problem: In a continuous-time model, we use the rate of
emission abatement as a control in minimizing the costs that arise from penalty payments and
abatement costs. In a simplified variant of this model, the resulting Hamilton-Jacobi-Bellman
(HJB) equation can be solved analytically.
Taking the viewpoint of a regulator of an ETS, our main interest is to determine the resulting
emissions and to evaluate their compliance with the given emission target. Additionally, as an
incentive for investments in low-emission technologies, a high allowance price with low variability
is desirable. Both the resulting emissions and the allowance price are not directly given by the
solution to the stochastic control problem. Instead we need to solve a stochastic differential
equation (SDE), where the abatement rate enters as the drift term. Due to the nature of the
penalty function, the abatement rate is not continuous. This means that classical results on
existence and uniqueness of a solution as well as convergence of numerical methods, such as the
Euler-Maruyama scheme, do not apply. Therefore, we prove similar results under assumptions
suitable for our case. By applying a standard verification theorem, we show that the stochastic
control approach delivers an optimal abatement rate.
We extend the model by considering several consecutive time periods. This enables us to
model the transfer of unused allowances to the subsequent time period. In formulating the
multi-period model, we pursue two different approaches: In the first, we assume the value that
the company anticipates for an unused allowance to be constant throughout one time period.
We proceed similarly to the one-period model and again obtain an analytical solution. In the
second approach, we introduce an additional stochastic process to simulate the evolution of the
anticipated price for an unused allowance.
The model so far assumes that allowances are allocated for free. Therefore, we construct
another model extension to incorporate the auctioning of allowances. In this setting, the
problem of choosing the optimal demand at the auction additionally needs to be solved. We find that
the auction price equals the allowance price at the beginning of the respective time period.
Furthermore, we show that the resulting emissions as well as the allowance price are unaffected
by the introduction of auctioning in the setting of our model.
To perform numerical simulations, we first solve the characteristic partial differential equation
derived from the HJB equation by applying the method of lines. Then we apply the Euler-
Maruyama scheme to solve the SDE, delivering realizations of the resulting emissions and the
allowance price paths.
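The Euler-Maruyama step itself is elementary; a minimal sketch in Python, assuming a generic drift function in place of the abatement rate computed from the HJB solution (all names and parameter values here are illustrative):

    import numpy as np

    def euler_maruyama(drift, sigma, e0, T, n_steps, rng):
        # Simulate one path of dE_t = drift(t, E_t) dt + sigma dW_t.
        dt = T / n_steps
        path = np.empty(n_steps + 1)
        path[0] = e0
        for k in range(n_steps):
            t = k * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            path[k + 1] = path[k] + drift(t, path[k]) * dt + sigma * dW
        return path

    # Hypothetical discontinuous drift: abatement sets in abruptly once
    # cumulative emissions E exceed the allowance endowment A.
    A, base_rate, effort = 100.0, 2.0, 5.0
    drift = lambda t, E: base_rate - (effort if E > A else 0.0)
    rng = np.random.default_rng(0)
    emissions = euler_maruyama(drift, sigma=1.5, e0=0.0, T=10.0, n_steps=1000, rng=rng)

Averaging many such paths then yields empirical distributions of the resulting emissions, exactly the quantity whose convergence has to be justified separately because the drift is discontinuous.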
Simulation results indicate that, under realistic settings, the probability of non-compliance
with the emission target is quite large. It can be reduced, for instance, by an increase of the penalty. In the multi-period model, we observe that allowing the transfer of allowances to the subsequent time period decreases the probability of non-compliance considerably.
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in the development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, the development of materials that are favorable regarding, for example, weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in the automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly characterized. One such method is the so-called in situ test, where the mechanical test is combined with an image acquisition technique such as computed tomography.
The resulting time series of volume images capture the delicate and individual nature of each material. The objective of this thesis is to present and develop methods that unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made great efforts to solve the problems we are facing. Digital Volume Correlation (DVC), on the one hand, has been developed by material scientists and has been applied in many different contexts in mechanical testing, but almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR), on the other hand, does produce voxel-precise estimates, but is limited to very smooth motion.
The unification of both families, DVC and MIR, under one roof is therefore illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family by itself is sufficient to deal with all of the problems that come with motion estimation in in situ tests. We then proceed by presenting a third community in motion estimation, namely optical flow, which is normally only applied in two dimensions. Nevertheless, within this community algorithms have been developed that meet many of our requirements: strategies for large displacements exist, as do methods that resolve jumps, and on top of that the displacement is always calculated at pixel level. This thesis therefore proceeds by extending some of the most successful of these methods to 3D.
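To illustrate what such an extension to 3D looks like in its simplest form, consider a Horn-Schunck-type iteration on volume images; this is a minimal sketch of the classical variational scheme written for three dimensions, not the specific methods developed in this thesis:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def horn_schunck_3d(vol0, vol1, alpha=1.0, n_iter=100):
        # Estimate a voxel-wise flow field (u, v, w) between two volumes.
        Ix, Iy, Iz = np.gradient(vol0.astype(float))
        It = vol1.astype(float) - vol0.astype(float)
        u = np.zeros_like(It); v = np.zeros_like(It); w = np.zeros_like(It)
        for _ in range(n_iter):
            # Local means realize the smoothness term of the energy.
            u_bar = uniform_filter(u, size=3)
            v_bar = uniform_filter(v, size=3)
            w_bar = uniform_filter(w, size=3)
            num = Ix * u_bar + Iy * v_bar + Iz * w_bar + It
            den = alpha**2 + Ix**2 + Iy**2 + Iz**2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
            w = w_bar - Iz * num / den
        return u, v, w

    # Synthetic check: a rigid one-voxel shift of a random volume.
    rng = np.random.default_rng(1)
    vol0 = rng.random((32, 32, 32))
    vol1 = np.roll(vol0, shift=1, axis=0)
    u, v, w = horn_schunck_3d(vol0, vol1)

Note that the quadratic smoothness term of this baseline is exactly what jump-resolving methods replace, since it blurs the displacement field across material discontinuities.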
To ensure the competitiveness of our approach, the last part of this thesis deals with a detailed evaluation of the proposed extensions. We focus on three types of materials, foam, fibre systems, and concrete, and use simulated and real in situ tests to compare the optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields, a novelty in this area. We conclude this thesis with two specialized applications of our algorithm, which show how voxel-precise displacement fields serve as useful information for engineers investigating their materials.
In tribology laboratories, the management of material samples and test specimens, the planning and execution of experiments, the evaluation of test data, and the long-term storage of results are critical processes. However, despite their criticality, they are carried out manually and typically at a low level of computerization and standardization. Therefore, formats for primary data and aggregated results differ wildly between laboratories, and the interoperability of research data is low. Even within laboratories, low levels of standardization, in combination with ambiguous or non-unique identifiers for data files, test specimens, and analysis results, greatly reduce data integrity and quality. As a consequence, productivity is low, error rates are high, and the lack or low quality of metadata causes the value of the produced data to deteriorate very quickly, which makes the re-use of data, e.g. for data mining and meta studies, practically impossible.
In other fields of science, these problems are mitigated by the use of Laboratory Information Management Systems (LIMS). However, at the moment, such systems do not exist
in tribological research. The main challenge for the implementation of such a system is that it requires extensive interdisciplinary knowledge from otherwise very
disparate fields: tribology, data and process modelling, quality management, databases and programming. So far, existing solutions are either proprietary, very limited
in their scope or focused on merely storing aggregated results without any support for laboratory operations.
Therefore, this thesis describes fundamentals of information technology, data modelling and programming that are required to build a LIMS for tribology laboratories.
Based on an analysis of a typical workflow of a tribology laboratory, a data model for all relevant entities and processes is designed using object-relational data modelling and object-oriented programming, and a relational database is used to provide a reference implementation of such a LIMS. It provides critical functionalities such as a materials database, test specimen management, the planning, execution, and evaluation of friction and wear tests, automated procedures for tribometer parameterization as well as for data transmission, storage, and evaluation, and the aggregation of individual tests into test sets and projects. It improves the quality and long-term usability of data by replacing error-prone human processes with automated variants, e.g. the automated collection of metadata and the automated transmission, homogenization, and storage of data files. The usefulness of the developed LIMS is demonstrated by applying it to Transfer Film Luminance Analysis (TLA), a newly developed advanced method for analyzing the formation and stability of transfer films and their impact on friction and wear, which produces so much data and requires such a large amount of metadata during evaluation that it can only be performed safely, quickly, and reliably by integration into the presented LIMS.
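To give an impression of the object-relational modelling involved, a heavily simplified sketch of three core entities and their relations follows; the table and column names are hypothetical and do not reproduce the reference implementation:

    import sqlite3

    schema = """
    CREATE TABLE material (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE,
        supplier TEXT
    );
    CREATE TABLE specimen (
        id INTEGER PRIMARY KEY,
        material_id INTEGER NOT NULL REFERENCES material(id),
        label TEXT NOT NULL UNIQUE  -- unique identifiers avoid ambiguity
    );
    CREATE TABLE test_run (
        id INTEGER PRIMARY KEY,
        specimen_id INTEGER NOT NULL REFERENCES specimen(id),
        started_at TEXT NOT NULL,   -- ISO 8601 timestamp, collected automatically
        raw_data_path TEXT NOT NULL -- target of the automated file transmission
    );
    """

    con = sqlite3.connect(":memory:")
    con.executescript(schema)

Foreign keys and uniqueness constraints of this kind are what replaces the ambiguous, manually assigned identifiers criticized above.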
Regulation of sucrose transport between source and sink tissues is critical for plant development and properties. In cells, dynamic vacuolar sugar homeostasis is maintained by the controlled regulation of the activities of sugar importers and exporters residing in the tonoplast. We show here that the EARLY RESPONSE TO DEHYDRATION6-LIKE4 protein, the closest homolog of the proton/glucose symporter ERDL6, resides in the vacuolar membrane. We present both molecular expression data and data derived from non-aqueous fractionation studies indicating that ERDL4 is involved in glucose and fructose allocation across the tonoplast. Surprisingly, overexpression of ERDL4 increased total sugar levels in leaves, which is due to a concomitantly induced stimulation of TST2 expression, coding for the major vacuolar sugar loader. This conclusion is supported by the observation that tst1-2 knockout lines overexpressing ERDL4 lack increased cellular sugar levels. That ERDL4 activity contributes to the coordination of cellular sugar homeostasis is further indicated by two observations: firstly, ERDL4 and TST genes exhibit an opposite regulation during the diurnal rhythm; secondly, the ERDL4 gene is markedly expressed during cold acclimation, a situation in which TST activity needs to be upregulated. Moreover, ERDL4-overexpressing plants show larger rosettes and roots, delayed flowering, and increased total seed yield. In summary, we identified a novel factor influencing the source-to-sink transfer of sucrose and thereby governing plant organ development.
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
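For concreteness, \(\chi^{(d)}\) can be evaluated directly from this definition; the following small Python sketch is our own illustration of the formula, not code from the thesis:

    import numpy as np

    def chi(x):
        # chi^(d)(x): positive part of the minimum over all listed terms.
        x = np.asarray(x, dtype=float)
        terms = [1.0 + xi - xj for xi in x for xj in x]  # 1 + x_i - x_j
        terms += [1.0 + xi for xi in x]                  # 1 + x_i
        terms += [1.0 - xi for xi in x]                  # 1 - x_i
        return max(min(terms), 0.0)

    chi([0.0])         # 1.0, the peak of the tent
    chi([0.5, -0.25])  # 0.25, inside the support
    chi([2.0, 0.0])    # 0.0, outside the support

For \(d=1\) the pairwise terms reduce to the constant \(1\), and the definition collapses to the archetype tent function above.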
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves, as a preliminary step, restricting those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\frac{\,\cdot\,}{r}-\alpha)\mid\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence of \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\) can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{dx}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let \(E\) be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space \(E\) and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
When repair principle 8.3 according to the German Technical Rule on Maintenance (TR-IH) is applied, no direct repassivation of the reinforcing steel is initially aimed at. Rather, the success of the repair is tied to the change over time of the corrosion-relevant parameters. These include the increase of the specific electrolyte resistance of the concrete as a result of drying as well as the decrease of the corrosion currents and driving voltages at the reinforcing steel.
In this work, fundamental investigations into the mode of action and the limits of application of repair principle 8.3 were carried out. To describe the moisture uptake and moisture storage of chloride-contaminated concrete in comparison to chloride-free concrete, sorption isotherms were measured on concretes with different chloride contents. Furthermore, the drying behavior of chloride-contaminated concrete specimens under rather vapor-permeable and strongly diffusion-retarding coatings was investigated. Along with the drying behavior, the corrosion activity of the reinforcement was examined by means of electrochemical measurement methods.
At low water contents, the corrosion-inhibiting effect of principle 8.3 is primarily governed by anodic control of the corrosion process. In the investigated setup, the specific electrolyte resistance is an important component of the system, but not sufficient as proof of a successful repair, even though it correlates well with the drying of the concrete. Considering the specific electrolyte resistance alone is therefore unsuitable for assessing the corrosion kinetics.
Regarding the limits of application of principle 8.3, the chloride content present in the concrete at the depth of the reinforcing steel proves to be a decisive criterion. While the corrosion activity at chloride contents of 1 wt.% Cl- relative to cement remains in the range of the passive current density even for moderately drying specimens under strongly diffusion-retarding coatings, a comparable reduction of the current densities at chloride contents of 2 wt.% Cl- relative to cement depends on further boundary conditions. The type of coating decisively influences the drying of the concrete. A less dense concrete under a vapor-permeable coating (OS 4) can thus dry out to the extent that the passive current density is reached. Under rather diffusion-tight coatings and at high chloride contents of 2 wt.% Cl- relative to cement, repair principle 8.3 demonstrably did not lead to success (harmless corrosion rates) within typical time frames (here 1.5 years) for the chosen test setup.
This thesis deals with the simulation of large insurance portfolios. On the one hand, we need to model the contracts' development and the insured collective's structure and dynamics. On the other hand, an important task is the forward projection of the given balance sheet. Questions that are interesting in this context, such as the default probability up to a certain time or whether interest rate promises can be kept in the long term, cannot be answered analytically without strong simplifications. Reasons for this are strong dependencies between the insurer's assets and liabilities, interactions between existing and new contracts due to claims on a collective reserve, potential policy features such as a guaranteed interest rate, and individual surrender options of the insured. As a consequence, we need numerical calculations, and the volatile financial markets in particular require stochastic simulations. Even though advances in technology with increasing computing capacities allow for faster computations, a contract-specific simulation of all policies is often an impossible task. This is due to the size and heterogeneity of insurance portfolios, long time horizons, and the number of necessary Monte Carlo simulations. Instead, suitable approximation techniques are required.
In this thesis, we therefore develop compression methods in which the insured collective is grouped into cohorts based on selected contract-related criteria, so that only an enormously reduced number of representative contracts needs to be simulated. We also show how to efficiently integrate new contracts into the existing insurance portfolio. Our grouping schemes are flexible, can be applied to any insurance portfolio, and maintain the existing structure of the insured collective. Furthermore, we investigate the efficiency of the compression methods and their quality in approximating the real-life insurance portfolio.
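A minimal sketch of the grouping step, assuming a pandas DataFrame of contracts and hypothetical grouping criteria (age band, remaining term, tariff); the actual criteria and the construction of representatives in this thesis may differ:

    import pandas as pd

    def compress(portfolio: pd.DataFrame) -> pd.DataFrame:
        # Group contracts into cohorts and replace each cohort by one
        # representative model point carrying the aggregated volumes.
        portfolio = portfolio.assign(age_band=(portfolio["age"] // 5) * 5)
        cohorts = portfolio.groupby(["age_band", "term", "tariff"])
        return cohorts.agg(
            sum_insured=("sum_insured", "sum"),  # volumes are added up
            reserve=("reserve", "sum"),
            age=("age", "mean"),                 # representative attribute
            n_contracts=("age", "size"),
        ).reset_index()

    contracts = pd.DataFrame({
        "age": [32, 33, 47], "term": [20, 20, 10], "tariff": ["A", "A", "B"],
        "sum_insured": [1e5, 2e5, 5e4], "reserve": [1e4, 3e4, 2e4],
    })
    model_points = compress(contracts)  # two representative contracts remain

Only the representative contracts are then fed into the Monte Carlo simulation, which is where the enormous reduction in computing time comes from.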
For the simulation of the insurance business, we introduce a stochastic asset-liability management (ALM) model. Starting with an initial insurance portfolio, our aim is the forward projection of a given balance sheet structure. We investigate conditions for long-term stability or stationarity, corresponding to the idea of a solid and healthy insurance company. Furthermore, a main result is the proof that our model satisfies the fundamental balance sheet equation at the end of every period, which is in line with the principle of double-entry bookkeeping. We analyze several strategies for investing in the capital market and for financing the due obligations. Motivated by observed weaknesses, we develop new, more sophisticated strategies. In extensive simulation studies, we illustrate the short- and long-term behavior of our ALM model and show the impacts of different business forms, the predicted new business, and possible capital market crashes on the profitability and stability of a life insurer.
The fifth generation (5G) of wireless networks promises new advances, such as a huge increase in mobile data rates, a plunge in communication latency, and an increase in the quality of experience perceived by users, which can cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy-neighbor problem in a vRAN deployment.
Scientific research plays a crucial role in the development of a society. Ever-increasing volumes of scientific publications are now making it extremely challenging to analyze and maintain insights into scientific communities, such as collaboration or citation trends and the evolution of research interests. This thesis is an effort towards using scientific publications to provide detailed insights into a scientific community from a range of aspects. The contribution of this thesis is five-fold.
Firstly, this thesis proposes approaches for automatic information extraction from scientific publications. The proposed layout-based approach is inspired by how human beings perceive individual references relying only on visual cues. It significantly outperforms existing text-based techniques and is independent of any domain or language.
Secondly, this thesis tackles the problem of identifying meaningful topics for a given publication, as the keywords provided in a publication are not always accurate representatives of its topic. To rectify this problem, this thesis proposes a state-of-the-art keyword extraction approach that employs a domain ontology along with the detected keywords to perform topic modeling for a given set of publications.
Thirdly, this thesis analyses the disposition of each citation to understand its true essence. For this purpose, it proposes a transformer-based approach for analyzing the impact of each citation appearing in a scientific publication (a minimal illustration of this classification task follows this overview). The impact of a citation can be determined by its inherent sentiment and intent, that is, an author's assessment of and motive for citing a scientific publication.
Furthermore, this thesis quantifies the influence of a research contributor in a scientific community by introducing a new semantic index for researchers that takes both quantitative and qualitative aspects of a citation into account to better represent the prestige of a researcher. The semantic index is also evaluated for conformity with the guidelines and recommendations of various research funding organizations for assessing the impact of a researcher.
Finally, all of the aforementioned aspects are packaged together in a single framework called Academic Community Explorer (ACE) 2.0, which automatically extracts and analyzes information from scientific publications and visualizes the insights using several interactive visualizations. These provide an instant glimpse into scientific communities from a wide range of aspects at different granularity levels.
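As an illustration of the citation analysis task addressed above, a zero-shot stand-in for a fine-tuned transformer classifier can be sketched as follows; the model and the label set are placeholders, not the classifier trained in this thesis:

    from transformers import pipeline

    # Generic zero-shot model, used here only to illustrate the task of
    # assigning an intent to a citation context.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    context = ("Smith et al. [12] pioneered this technique, "
               "which our method builds upon directly.")
    intents = ["background", "method use", "comparison", "criticism"]
    print(classifier(context, candidate_labels=intents))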
Human interferences within the Earth system are accelerating, leading to major impacts and feedbacks that we are just beginning to understand. Summarized under the term 'global change', these impacts put human and natural systems under ever-increasing stress and pose a threat to human well-being, particularly in the Global South. Global governance bodies have acknowledged that decisive measures have to be taken to mitigate the causes and to adapt to these new conditions. Nevertheless, neither current international nor national pledges and measures reach the effectiveness needed to sustain global human well-being under accelerating global change. On the contrary, competing interests are not only paralyzing the international debate but are also playing an increasingly important role in debates over social fragmentation and societal polarization at national and local scales. This interconnectedness of the natural and the social system and its impact on social phenomena such as cooperation and conflict needs to be understood better in order to strengthen social resilience to future disturbances and drive societal transformation towards socially desirable futures, while at the same time avoiding path dependencies along continuing colonial continuities. As a case example, this thesis provides insights into southwestern Amazonia, where the intertwined challenges of the human contribution to global change in all its dimensions, as well as human attempts to adapt to and mitigate the imposed changes, become exaggeratedly visible. As such, southwestern Amazonia, with its high social, economic, and biological diversity, is a good example to study the deep interrelations of humans with nature and the consequences these relations have on social cohesion amid an ecological crisis.
Therefore, this thesis takes a social-ecological perspective on conflicts and social cohesion. Social cohesion is, in a wider sense, understood as "how members of a society, group, or organization relate to each other and work together" (Dany and Dijkzeul 2022, p. 12). In contexts of violence, conflicts, and fragility in particular, the role of social cohesion in governing public goods and building resilience for (future) environmental crises has been little investigated. At the same time, governments and international decision-makers increasingly acknowledge the role of social cohesion, comprising both the relations between social groups and those between groups and the state, in building resilience against crises. Facing uncertainty in how natural and social systems react to certain disturbances and shocks, the governance of potential tipping points is an additional challenge for the governance of social-ecological systems (SES). Therefore, this thesis asks: "How does governance shape pathways towards cooperative or conflictive social-ecological tipping points?" The results of this thesis can be distinguished into theoretical/conceptual results and empirical results. Initial systematic literature research on the nexus of climate change, land use, and conflict revealed an extensive body of literature on direct effects, for example drought-related land use conflicts, with diverging opinions on whether or not global warming increases the risk of conflict. Adding the perspective of indirect implications, we further identified research gaps, as well as a lack of policy recognition, concerning the negative externalities on land use and conflict of climate mitigation and adaptation measures. On a conceptual note, taking a social cohesion perspective into the analysis is beneficial to shift the focus from a problem-oriented perspective of vulnerabilities to global change and potential resulting conflicts to a solution-oriented perspective of enhancing agency and resilience to strengthen collaboration. The developed Social Cohesion Conceptual Model and the related analytical framework facilitate the incorporation of societal dynamics into the analysis of SES dynamics. In addition, the elaborated Tipping Multiverse Framework took up this idea and enhanced it with a more detailed perspective on the soil ecosystem and the household livelihood system to identify entry points to potential social-ecological tipping cascades. As such, the Tipping Multiverse Framework offers two matrices that can advance the understanding of regional SES by identifying core processes, functioning, and links in each tipping element and thus provide entry points to identify potential tipping cascades across SES sub-systems. The exemplified application of these two frameworks to southwestern Amazonia shows the analytical potential of both proposed frameworks in advancing the understanding of social-ecological tipping points and potential tipping cascades in a regional SES.
On an empirical note, zooming in on questions of governance by applying a political ecology lens to human security, we find that 'glocal' resource governance often reproduces, amplifies, or creates power imbalances and divisions on and between different scales. Our results show that the winners of resource extraction are mostly found at the national and international scale, while local communities receive little benefit and are left vulnerable to externalities. Hence, our study contributes to the existing research by stressing the importance of one underlying question: "governance by whom and for whom?" This question raised the demand to understand the underlying dynamics of resource governance and the resulting conflicts. Therefore, we analyzed how (environmental) institutions influence the major drivers of social-ecological conflicts over land in and around three protected areas: Tambopata (Peru), the Extractive Reserve Chico Mendes (Brazil), and Manuripi (Bolivia). We found that state institutions in particular affect key conflict drivers as follows: overlapping responsibilities of governance institutions and limited enforcement of regulations protecting and empowering rural and disadvantaged populations enable external actors to (illegally) access and control resources in the protected areas. Consequently, the already fragile social contract between the residents of the protected areas and their surroundings and the central state is further weakened by the expanding influence of criminal organizations that oppose the state's authority. For state institutions to avoid aggravating these conflict drivers and instead manage them better or even contribute to conflict prevention and mitigation, a transformation from reactive to reflexive institutions and the development of new reflexive governance competencies is needed.
This need for reflexive governance becomes particularly visible when sudden disturbances or shocks impact the SES. Our analysis of the impacts of the COVID-19 pandemic on the interconnections of land use change, ecosystem services, human agency, conflict, and cooperation shows that the pandemic has had a severe influence on the human security of marginalized social groups in southwestern Amazonia. Civil society actions have been an essential strategy in the fight against COVID-19, not just in the health sector but also in the economic, political, social, and cultural realms. However, our research also showed that the pandemic has consolidated and partly renewed criminal structures, while the already weak state has fallen further behind due to the additional tasks of managing the pandemic and other disasters such as floods.
In conclusion, it can be said that the reflexivity of governance is crucial for fostering cooperation and preventing conflicts in the realm of social-ecological systems. By not only reacting to changes that have already occurred but also reflecting upon potential future changes, governance can shape transformation pathways away from detrimental and towards life-sustaining trajectories. It can do so by exercising agency across scales, avoiding the crossing of detrimental social-ecological tipping points and instead triggering life-sustaining tipping points that contribute to global social-ecological well-being.
Climate change is one of the greatest challenges of our time. Limiting global warming requires a substantial reduction of CO2 emissions, also in the building sector, which in Germany accounts for 34% of total final energy consumption and 28% of CO2 emissions.
To cover the heating and cooling energy demand of buildings in a way that is as environmentally friendly as possible, buildings and their surroundings must be considered as one unit. In addition to a high level of insulation and an airtight building envelope, this requires efficient building services. Their aim is to harvest and store as large a share as possible of the required energy from the environment whenever it is available in sufficient quantity and at the required temperature level, and to release it when it is needed for heating or cooling the building.
When developing such integrated building systems, it makes sense to model the individual components and their interplay in simulation programs. In this way, the function and efficiency of the systems can be examined and evaluated. For this reason, a novel integrated building system based on latent heat storage and Peltier heat exchangers was investigated experimentally and by simulation.
This thesis describes the experiments performed on individual components in the laboratory and in a test building. Based on the measured data, the system is then modelled, split into a cooling subsystem and a heating subsystem, with the building simulation program TRNSYS. To represent the behavior of the investigated latent heat storage units and the control of the overall system, Type62 was used in TRNSYS.
It turned out that this Type is very well suited for feeding measurement data into the simulation, for representing physical processes, and for programming control algorithms for the system. In this way, the interplay of the individual technologies, the covered share of the annual energy demand, and the energy efficiency of the novel integrated building system could be analyzed. For the cooling subsystem, the thermal comfort as well as the effects of a timber-frame construction and of an extremely warm test reference year were additionally investigated.
The developed simulation models make it possible to vary boundary conditions and the dimensioning of individual components or to extend the control logic. Further technologies can also be integrated in order to study their effect on the performance and efficiency of the overall system. For the cooling subsystem, optimization potential lies in adding further predictive control algorithms in order to reduce the operating times of the system and thus its electricity consumption.
This work is concerned with two often separated disciplines. First, experimental studies in which the effect of the cooling rate on the martensite transformation and the resulting microstructure in a low-alloy steel is investigated; from these, a possible transformation mechanism is derived. Second, the development of a simulation model that describes the martensitic morphology and its evolution. In this context, a phase field model is presented that introduces order parameters to represent the material state, namely austenite and martensite. The evolution of the order parameters is assumed to follow the time-dependent Ginzburg-Landau equation. A major extension over previous models is the consideration of twelve crystallographic martensite variants corresponding to the Nishiyama-Wassermann orientation relationship. To describe the ordered displacement of atoms during the transformation and to account for the martensitic substructure, the well-known phenomenological theory of martensite crystallography is employed. The presented experiments as well as thermodynamic calculations serve as the basis for the identification of the model parameters. The presented model reproduces basic features of the martensitic transformation, including the martensite start temperature and the hierarchical microstructure consisting of blocks and packets. The computed block sizes are in good agreement with the block sizes observed in the experimental database.
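In its generic form, the evolution law referred to above reads, for order parameters \(\eta_i\), \(i=1,\dots,12\), one per martensite variant, with a free energy functional \(F\) and a kinetic coefficient \(L\) (standard notation, not necessarily that of the thesis):
\[\frac{\partial\eta_i}{\partial t}=-L\,\frac{\delta F}{\delta\eta_i},\qquad i=1,\dots,12,\]
so that each variant relaxes the total free energy along its steepest descent direction.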