Function of two redox sensing kinases from the methanogenic archaeon Methanosarcina acetivorans
(2019)
MsmS is a heme-based redox sensor kinase in Methanosarcina acetivorans consisting of alternating PAS and GAF domains connected to a C-terminal kinase domain. In addition to MsmS, M. acetivorans possesses a second kinase, MA0863, with high sequence similarity. Interestingly, MA0863 contains an amber codon in its second GAF domain, encoding the amino acid pyrrolysine; no function has so far been assigned to this residue. In order to examine the heme iron coordination in both proteins, an improved method for the production of heme proteins was established using the Escherichia coli strain Nissle 1917. This method enables the complete reconstitution of a recombinant hemoprotein during protein production, resulting in native heme coordination. Analysis of full-length MsmS and MA0863 confirmed a covalently bound heme cofactor, attached to one conserved cysteine residue in each protein. To identify the amino acid residues coordinating the heme iron, UV/vis spectra of different protein variants were recorded. These studies revealed His702 in MsmS and the corresponding His666 in MA0863 as the proximal heme ligands. MsmS has previously been described as a heme-based redox sensor. To examine whether the same holds for MA0863, redox-dependent kinase assays were performed. MA0863 indeed displays redox-dependent autophosphorylation activity, which is observed only under oxidizing conditions. Interestingly, autophosphorylation proved to be independent of the heme cofactor and instead relies on thiol oxidation. MA0863 was therefore renamed RdmS (redox-dependent methyltransferase-associated sensor). To identify the phosphorylation site of RdmS, thin-layer chromatography was performed, identifying a tyrosine residue as the putative phosphorylation site. This observation is in agreement with the lack of a so-called H-box typical of histidine kinases.
Due to their genomic localization, MsmS and RdmS were postulated to form two-component systems (TCS) with the adjacently encoded regulator proteins MsrG and MsrF. Protein-protein interaction studies using the bacterial adenylate cyclase two-hybrid system suggested an interaction of RdmS and MsmS with the three regulators MsrG/F/C. Given these multiple interactions, these signal transduction pathways should rather be considered multicomponent systems than two-component systems.
Ranking lists are an essential methodology to succinctly summarize outstanding items, computed over database tables or crowdsourced on dedicated websites. In this thesis, we propose the use of automatically generated, entity-centric rankings to discover insights in data. We present PALEO, a framework for data exploration through reverse engineering of top-k database queries: given a database and a sample top-k input list, our approach aims at determining an SQL query that, when executed over the database, returns results similar to the provided input. The core problem consists of finding selection predicates that return the given items, determining the correct ranking criteria, and evaluating the most promising candidate queries first. PALEO operates on subsets of the base data, uses data samples, histograms, and descriptive statistics, and further proposes models that assess the suitability of candidate queries, thereby limiting false positives. Furthermore, this thesis presents COMPETE, a novel approach that models and computes dominance over user-provided input entities, given a database of top-k rankings. The resulting entities are found superior or inferior with a tunable degree of dominance over the input set---a very intuitive, yet insightful way to explore the pros and cons of entities of interest. Several notions of dominance are defined which differ in computational complexity and in the strictness of the dominance concept---yet are interdependent through containment relations. COMPETE is able to pick the most promising approach to satisfy a user request at minimal runtime latency, using a probabilistic model that estimates the result sizes. The individual flavors of dominance are cast into a stack of algorithms over inverted indices and auxiliary structures, enabling pruning techniques that avoid significant data access over large datasets of rankings.
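The reverse-engineering idea can be illustrated with a minimal sketch (the table contents and column names below are invented for illustration; PALEO's actual candidate generation relies on samples, histograms, and suitability models rather than this brute-force enumeration):

```python
# Toy sketch: given a table and an observed top-k list, find a
# (filter column, filter value, order-by column) combination whose
# top-k result reproduces the observed list.
rows = [
    {"name": "A", "year": 2018, "score": 9.1, "votes": 120},
    {"name": "B", "year": 2019, "score": 8.7, "votes": 300},
    {"name": "C", "year": 2019, "score": 9.5, "votes": 80},
    {"name": "D", "year": 2019, "score": 8.9, "votes": 250},
]

def candidate_queries(rows, observed, k):
    numeric = [c for c, v in rows[0].items() if isinstance(v, (int, float))]
    matches = []
    for pred_col in numeric:                       # try equality predicates
        for val in {r[pred_col] for r in rows}:
            subset = [r for r in rows if r[pred_col] == val]
            for order_col in numeric:
                if order_col == pred_col:
                    continue
                top = sorted(subset, key=lambda r: r[order_col], reverse=True)[:k]
                if [r["name"] for r in top] == observed:
                    matches.append((pred_col, val, order_col))
    return matches

# Which query could have produced the observed ranking [C, D]?
print(candidate_queries(rows, ["C", "D"], 2))
# → [('year', 2019, 'score')]  i.e. WHERE year = 2019 ORDER BY score DESC LIMIT 2
```

Real instances require pruning this candidate space aggressively, which is exactly where the statistics-based models come in.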
Wine and alcoholic fermentations are complex and fascinating ecosystems. Wine aroma is shaped by the wine's chemical composition, in which both microbes and grape constituents play crucial roles. The activities of the microbial community impact the sensory properties of the final product; therefore, characterising microbial diversity is essential for understanding and predicting the sensory properties of wine. Characterisation has been challenging with traditional approaches, in which microbes are isolated and thus analyzed outside their natural environment. This biases the observed microbial community structure, and true community interactions cannot be studied using isolates. Furthermore, the complex ties between wine chemical and sensory compositions remain elusive due to their multivariate and nonlinear nature. Therefore, the sensory outcome arising from different microbial communities has remained inconclusive.
In this thesis, microbial diversity during Riesling wine fermentations is investigated with the aim of understanding the roles of microbial communities during fermentation and their links to sensory properties. With the advancement of high-throughput 'omics methods, such as next-generation sequencing (NGS) technologies, it is now possible to study microbial communities and their functions without isolation by culturing. This developing field and its potential for the wine community is reviewed in Chapter 1. The standardisation of methods remains challenging in the field. DNA extraction is a key step in capturing the microbial diversity of samples for generating NGS data; therefore, DNA extraction methods are evaluated in Chapter 2. In Chapter 3, machine learning is utilized to guide the mining of raw data generated by untargeted GC-MS analysis. This step is crucial in order to take full advantage of the large scope of data generated by 'omics methods. These chapters lay a solid foundation for Chapters 4 and 5, where microbial community structures and their outputs---chemical and sensory compositions---are studied using approaches and tools based on multiple 'omics methods.
The results of this thesis show, first, that by using novel statistical approaches it is possible to extract meaningful information from heterogeneous biological, chemical and sensory data. Secondly, the results suggest that the variation in wine aroma might be related to microbial interactions taking place not only inside a single community, but also between communities, such as vineyard and winery communities. Therefore, the true sensory expression of terroir might be masked by the interaction between two microbial communities, although more work is needed to uncover this potential relationship. Such potential interaction mechanisms were uncovered between non-Saccharomyces yeasts and bacteria in this work, and unexpected novel bacterial growth was observed during alcoholic fermentation. This suggests new layers in the understanding of wine fermentations. In the future, multi-omic approaches could be applied to identify biological pathways leading to specific wine aromas as well as to investigate the effects of specific winemaking conditions. These results are relevant not just for the wine industry, but also for other industries where complex microbial networks are important. As such, the approaches presented in this thesis might find wide use in the food industry.
The energy transition requires a restructuring of the energy sector and a fundamental transformation of the electrical power supply. As the share of volatile wind and solar generation in total energy production grows steadily, so does the need for flexibility to stabilize the power grids. From an energy perspective, wastewater treatment plants are characterized by a variety of processes in which energy is converted, stored, consumed, and produced.
This thesis aims to contribute to a better understanding of, and deeper insights into, the interface between the wastewater and energy sectors by comprehensively investigating the compatibility of wastewater treatment requirements with flexible plant operation, as well as the electricity generation and flexibility potentials of municipal wastewater treatment plants in Germany.
Against this background, insights were gained that can be used for the selection, evaluation, and safe implementation of typical units at wastewater treatment plants in order to provide flexibility through adaptive plant operation. The underlying methodology was developed mainly using the Radevormwald wastewater treatment plant as an example. Relevant key figures were derived that reconcile the treatment requirements with both the technical-physical and the energy-market requirements placed on the units. A major focus lies on the derived restrictions and control parameters that ensure safe treatment operation.
Within this research, a multitude of insights at the interface of the energy and wastewater sectors could be gained, confirmed, and used to derive recommendations for application. The flexibility available at wastewater treatment plants is suitable for a variety of uses, allowing plants to participate, today and increasingly in the future, in energy supply products and new business models. The investigated units differ in their suitability depending on the intended use, and not every unit can be employed for every option. The results demonstrate that wastewater treatment plants, with their power ratings and shiftable energy quantities, can make relevant contributions to the energy transition and are capable of more than merely using electrical energy for their own needs, largely detached from the energy market and the changes emerging there.
Large-scale distributed systems consist of a number of components, take a number of parameter values as input, and behave differently based on a number of non-deterministic events. All these features—components, parameter values, and events—interact in complicated ways, and unanticipated interactions may lead to bugs. Empirically, many bugs in these systems are caused by interactions of only a small number of features. In certain cases, it may be possible to test all interactions of \(k\) features for a small constant \(k\) by executing a family of tests that is exponentially or even doubly-exponentially smaller than the family of all tests. Thus, in such cases we can effectively uncover all bugs that require up to \(k\)-wise interactions of features.
In this thesis we study two occurrences of this phenomenon. First, many bugs in distributed systems are caused by network partition faults. In most cases these bugs occur because two or three key nodes, such as leaders or replicas, are unable to communicate, or because the leading node finds itself in a block of the partition without quorum. Second, bugs may occur due to unexpected schedules (interleavings) of concurrent events---concurrent exchange of messages and concurrent access to shared resources. Again, many bugs depend only on the relative ordering of a small number of events. We call the smallest number of events whose ordering causes a bug the depth of the bug. We show that in both testing scenarios we can effectively uncover bugs involving a small number of nodes or bugs of small depth by executing small families of tests.
We phrase both testing scenarios in terms of an abstract framework of tests, testing goals, and goal coverage. Sets of tests that cover all testing goals are called covering families. We give a general construction showing that whenever a random test covers a fixed goal with sufficiently high probability, a small randomly chosen set of tests is a covering family with high probability. We then introduce concrete coverage notions relating to network partition faults and to bugs of small depth. In the case of network partition faults, we show that for the introduced coverage notions we can find a lower bound on the probability that a random test covers a given goal. Our general construction then yields a randomized testing procedure that achieves full coverage---and hence finds bugs---quickly.
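The general construction can be demonstrated with a small, self-contained sketch (the partition model and parameters below are simplified assumptions, not the thesis' exact setup): if a single random test covers any fixed goal with probability at least p, then by a union bound, ⌈ln(g/δ)/p⌉ random tests cover all g goals with probability at least 1 − δ.

```python
import math
import random

# Toy instance: tests are random bipartitions of the nodes, and a goal (a, b)
# is covered when a and b land in different blocks. A random bipartition
# splits any fixed pair with probability p = 1/2.
random.seed(0)
nodes = range(8)
goals = [(a, b) for a in nodes for b in nodes if a < b]   # pairs to separate

def random_test():
    return {v: random.randint(0, 1) for v in nodes}        # random bipartition

def covers(test, goal):
    a, b = goal
    return test[a] != test[b]                              # the pair is split

p = 0.5
delta = 0.01
# Union bound: P(some goal uncovered) <= g * (1 - p)^m <= g * e^(-p*m) <= delta.
m = math.ceil(math.log(len(goals) / delta) / p)
family = [random_test() for _ in range(m)]
uncovered = [g for g in goals if not any(covers(t, g) for t in family)]
print(m, len(uncovered))
```

With 28 goals and δ = 0.01 this yields a family of only 16 random tests, which with high probability covers every pair.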
In case of coverage notions related to bugs of small depth, if the events in the program form a non-trivial partial order, our general construction may give a suboptimal bound. Thus, we study other ways of constructing covering families. We show that if the events in a concurrent program are partially ordered as a tree, we can explicitly construct a covering family of small size: for balanced trees, our construction is polylogarithmic in the number of events. For the case when the partial order of events does not have a "nice" structure, and the events and their relation to previous events are revealed while the program is running, we give an online construction of covering families. Based on the construction, we develop a randomized scheduler called PCTCP that uniformly samples schedules from a covering family and has a rigorous guarantee of finding bugs of small depth. We experiment with an implementation of PCTCP on two real-world distributed systems—Zookeeper and Cassandra—and show that it can effectively find bugs.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks of an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared-memory-bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU scheduling of the other cores, but also between the WCET and the memory bandwidth assigned to the cores over time. Thus, solutions are needed that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth according to the bandwidth allocated over time. First, we present a workload schedulability test for a known even memory-bandwidth assignment to active cores over time, where the active cores are those with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge sort. Second, we demonstrate, using a real certified safety-critical avionics application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification through composability and requires no source-code modification.
Next, we provide a general framework for WCET analysis under dynamic memory bandwidth partitioning when the changes in the memory bandwidth assigned to the cores are time-triggered and known. It provides a stall-maximization algorithm with a complexity similar to that of a concave optimization problem and efficiently implements the WCET analysis. Last, we demonstrate, using an Integrated Modular Avionics scenario, that dynamic memory assignments and WCET analysis with our method significantly improve schedulability compared to the state of the art.
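As a rough, hypothetical illustration of time-triggered bandwidth partitioning (not the dissertation's actual stall analysis, which bounds contention far more carefully), one can bound a task's completion time by accumulating the memory supply granted to its core per time slot:

```python
# Hedged sketch: a core's task needs `mem_accesses` memory operations plus
# `cpu_work` time units of pure computation. In slot i the core is granted
# shares[i % len(shares)] of the memory bandwidth; the memory demand is done
# at the first slot boundary where the accumulated supply covers it.
def completion_bound(cpu_work, mem_accesses, slot_len, shares, bandwidth):
    """shares: per-slot bandwidth fractions, repeating cyclically.
    bandwidth: memory accesses served per time unit at full share."""
    remaining = mem_accesses
    t = 0.0
    i = 0
    while remaining > 0:
        served = shares[i % len(shares)] * bandwidth * slot_len
        remaining -= served
        t += slot_len
        i += 1
    return t + cpu_work      # pessimistic: CPU work counted after all stalls

# 1000 accesses, 1 ms slots, alternating 50%/25% share, 200 accesses/ms at full share
print(completion_bound(2.0, 1000, 1.0, [0.5, 0.25], 200.0))
# → 15.0 (13 ms until the memory demand is served, plus 2 ms of CPU work)
```

The point of the sketch is only the structure of the problem: the bound depends on the time-varying assignment, which is why tailoring assignments to workloads over time pays off.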
Within this dissertation, a learning circle was developed that links the school subjects biology and geography. It addresses the adaptation of plants to climatic factors and thus draws on geobotany as an integrative science. The learning circle, with its 16 stations, uses the out-of-school learning venue "Botanical Garden" and is located in the greenhouses of the biology department garden of the Technische Universität Kaiserslautern. The target group are upper secondary school students in biology and geography courses. Alignment with the curricula of both subjects underpins the interdisciplinary approach.
At the hands-on stations, specific questions are solved in pairs using original plant material and experimental learning formats that apply scientific working methods. The students work through the stations by means of an application on a tablet PC, which offers information (text, image, and video material) and secures and documents the students' interactive work at the stations. Besides this digital implementation, further methodological teaching principles, such as real-world reference and primary experience, as well as an arrangement of the stations from simple to complex, ensure effective knowledge transfer.
Owing to the digital implementation, instructional design and media-didactic decisions were of crucial importance in developing the web application for the learning circle. After the web app was designed on the basis of learning theory, a student-oriented optimization approach was pursued during the trial of the learning circle. Supported by videography, the learning circle was further developed and showed that the students work through the contents independently, in a motivated and goal-oriented manner.
Socio-economic trends with high spatial relevance form the basis for the future questions facing regions and municipalities. At the same time, an ongoing urbanization process and increasingly differentiated, in part strongly divergent, development dynamics are causing growing regional imbalances. Precisely this development raises questions about safeguarding the guiding principle of equivalent living conditions. In this context, medium-sized towns are regarded as anchors in space, especially for structurally weak and peripheral regions. At the same time, medium-sized towns in rural-peripheral regions face a growing discrepancy between the functions assigned to them and the challenges they confront. On the one hand, from a spatial planning perspective they are ascribed, alongside their role as regional supply, labor-market, and economic centers, a stabilizing function for their surrounding area and a supporting role for rural development dynamics. On the other hand, they themselves are particularly affected by the pressure to adapt their infrastructure to socio-economic change, which they must cope with.
Accordingly, maintaining and expanding the performance of medium-sized towns outside agglomeration areas is considered an essential contribution to the future, area-wide safeguarding of basic services of general interest in rural-peripheral regions.
This thesis is therefore devoted, first, to an investigation of the regional stabilization function of medium-sized towns for rural-peripheral areas, including an analysis of the possibilities and limits of maintaining it under the influence of socio-economic transformation processes and the associated adaptation needs. Building on this, it secondly comprises an analysis to identify success factors that will secure the stabilization functions of medium-sized towns in rural-peripheral areas in the future.
To this end, the thesis first addresses a definitional classification of the concept of stabilization in the regional sciences. Closely linked to this is the analysis of state and regional planning as well as regional economic approaches from the perspective of their stabilization rationale, and the examination of existing strategies for dealing with regional structural change.
Following on from this, an indicator-based and functional typification of the town type "medium-sized town" is carried out in the context of the rural-peripheral spatial type. In addition, five selected case studies are subjected to an in-depth evaluation. This yields complementary insights, in particular regarding the interdependencies between the district region and the medium-sized town, the town's significance in terms of residential, employment, and supply centrality, and especially the existing needs for action as well as development strategies and approaches for strengthening the function and role of the medium-sized town in and for its rural-peripheral surroundings.
From this, it is derived which requirements for action arise for regional development, and which sustainable approaches and strategies at the municipal, regional, and state planning levels are particularly suitable for strengthening the anchor and stabilization function of medium-sized towns in rural-peripheral areas and thus, ultimately, for keeping services of general interest in rural-peripheral regions secured in the future.
This thesis addresses several challenges for sustainable logistics operations and investigates (1) the integration of intermediate stops in the route planning of transportation vehicles, which especially becomes relevant when alternative-fuel vehicles with limited driving range or a sparse refueling infrastructure are considered, (2) the combined planning of the battery replacement infrastructure and of the routing for battery electric vehicles, (3) the use of mobile load replenishment or refueling possibilities in environments where the respective infrastructure is not available, and (4) the additional consideration of the flow of goods from the end user in backward direction to the point of origin for the purpose of, e.g., recapturing value or proper disposal. We utilize models and solution methods from the domain of operations research to gain insights into the investigated problems and thus to support managerial decisions with respect to these issues.
Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e. the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly pronounced in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) several thousand times greater than those of common magnetostrictive materials. These materials have wide application areas: they are used as variable-stiffness devices, as sensors and actuators in mechanical systems, or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often not fully understood nor taken into account when using black box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
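Schematically, a coupled saddle point problem with a penalty term has the following generic variational form (shown here in abstract notation; the concrete bilinear forms in the thesis couple the elastic and magnetic variables):

```latex
% Generic saddle point problem with penalty (schematic):
% find (u, \psi) \in V \times Q such that
\begin{aligned}
  a(u, v) + b(v, \psi)             &= f(v)       && \forall\, v \in V, \\
  b(u, \varphi) - c(\psi, \varphi) &= g(\varphi) && \forall\, \varphi \in Q,
\end{aligned}
% where c(\cdot,\cdot) is the penalty term; unique solvability rests on an
% inf-sup condition for b in both the continuous and the discrete setting:
\inf_{\varphi \in Q} \sup_{v \in V}
  \frac{b(v, \varphi)}{\|v\|_V \, \|\varphi\|_Q} \;\ge\; \beta > 0 .
```

In this schematic reading, the symmetric vector potential formulation corresponds to a consistent choice of energy terms in the Lagrangian, while the mixed (saddle point) structure arises when dual variables enter through the coenergy function.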
The presented models are supplemented with numerical simulations carried out in MATLAB for different examples, including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in a state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.
Novel image processing techniques have been in development for decades, but most of these techniques are barely used in real-world applications. This results in a gap between image processing research and real-world applications; this thesis aims to close this gap. In an initial study, the quantification, propagation, and communication of uncertainty were determined to be key features in gaining acceptance for new image processing techniques in applications.
This thesis presents a holistic approach based on a novel image processing pipeline capable of quantifying, propagating, and communicating image uncertainty. This work provides an improved image data transformation paradigm, extending image data using a flexible, high-dimensional uncertainty model. Based on this, a completely redesigned image processing pipeline is presented, in which each step respects and preserves the underlying image uncertainty, allowing uncertainty quantification, image pre-processing, image segmentation, and geometry extraction. The uncertainty is communicated by utilizing meaningful visualization methodologies throughout each computational step.
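As a toy illustration of an uncertainty-preserving processing step (a simplified stand-in, not the pipeline developed in this thesis), a linear filter can propagate a per-pixel variance alongside the mean, assuming independent pixel noise:

```python
import numpy as np

# Each pixel carries (mean, variance). For a linear filter with weights w and
# independent pixels: mean' = sum(w_i * mean_i), var' = sum(w_i^2 * var_i).
rng = np.random.default_rng(1)
mean = rng.uniform(0, 255, size=(5, 5))   # synthetic image
var = np.full((5, 5), 4.0)                # e.g. uniform sensor-noise variance

w = np.ones((3, 3)) / 9.0                 # 3x3 box filter

def filter_with_uncertainty(mean, var, w):
    out_m = np.zeros_like(mean)
    out_v = np.zeros_like(var)
    k = w.shape[0] // 2
    pm = np.pad(mean, k, mode="edge")
    pv = np.pad(var, k, mode="edge")
    for i in range(mean.shape[0]):
        for j in range(mean.shape[1]):
            patch_m = pm[i:i + w.shape[0], j:j + w.shape[1]]
            patch_v = pv[i:i + w.shape[0], j:j + w.shape[1]]
            out_m[i, j] = np.sum(w * patch_m)
            out_v[i, j] = np.sum(w**2 * patch_v)   # independence assumption
    return out_m, out_v

m2, v2 = filter_with_uncertainty(mean, var, w)
print(v2[2, 2])   # interior variance: 9 * (1/9)^2 * 4 = 4/9
```

Smoothing thus not only changes the image but provably reduces the per-pixel variance, and a pipeline that carries `var` through every step can visualize exactly this effect.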
The presented methods are examined qualitatively by comparison to the state of the art, in addition to user evaluations in different domains. To show the applicability of the presented approach to real-world scenarios, this thesis demonstrates domain-specific problems and the successful implementation of the presented techniques in these domains.
The focus of this work is to provide and evaluate a novel method for multifield topology-based analysis and visualization. Through this concept, called Pareto sets, one can identify critical regions in a multifield with arbitrarily many individual fields. It uses ideas from graph optimization to find common behavior and areas of divergence between multiple optimization objectives. The connections between the latter areas can be reduced to a graph structure, allowing for an abstract visualization of the multifield to support data exploration and understanding.
The research question answered in this dissertation concerns the general capability and extensibility of the Pareto set concept in the context of visualization and application, as well as the study of its relations, drawbacks and advantages with respect to other topology-based approaches. This question is answered in several steps, including consideration of and comparison with related work, a thorough introduction of the Pareto set itself, a framework for efficient implementation, and an attached discussion regarding limitations of the concept and their implications for run time, suitable data, and possible improvements.
Furthermore, this work considers possible simplification approaches, such as integrating single-field simplification methods, but also using common structures identified through the Pareto set concept to smooth all individual fields at once. These considerations are especially important for real-world scenarios in order to visualize highly complex data by removing small local structures without destroying information about larger, global trends.
To further emphasize possible improvements and the extensibility of the Pareto set concept, the thesis studies a variety of real-world applications. For each scenario, this work shows how the definition and visualization of the Pareto set are used and improved for data exploration and analysis.
In summary, this dissertation provides a complete and sound summary of the Pareto set concept as groundwork for future applications of multifield data analysis. The possible scenarios include those presented in the application section, but are also found in a wide range of research and industrial areas relying on uncertainty analysis, time-varying data, and ensembles of data sets in general.
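The core idea behind the concept can be sketched on a toy example (this simple neighbor test is an illustrative stand-in, not the thesis' algorithm, which works on general multifields via graph optimization): on a 1D grid with two fields whose minima disagree, the points where no neighbor simultaneously improves every field form a region of conflicting objectives between the individual minima.

```python
# Toy sketch of Pareto-minimal points in a multifield on a 1D grid:
# a point is Pareto minimal if no neighbor decreases all fields at once.
fields = [
    [3.0, 2.0, 1.0, 2.0, 3.0, 4.0],   # field A: minimum at index 2
    [4.0, 3.0, 2.0, 1.0, 2.0, 3.0],   # field B: minimum at index 3
]

def pareto_minimal(fields):
    n = len(fields[0])
    minimal = []
    for i in range(n):
        dominated = False
        for j in (i - 1, i + 1):
            if 0 <= j < n and all(f[j] < f[i] for f in fields):
                dominated = True          # neighbor improves every field
        if not dominated:
            minimal.append(i)
    return minimal

print(pareto_minimal(fields))
# → [2, 3]: the region spanning both fields' minima, where the objectives conflict
```

In the full multifield setting, such regions and their connections are what get abstracted into the graph structure used for visualization.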
This work comprises the synthesis of two cyclic amidate ligands and the investigation of the coordination behavior of these macrocycles. The structural, electrochemical, and spectroscopic properties of the resulting complexes were examined. In order to stabilize higher oxidation states at the metal ion better than neutral ligands do, the ligands H\(_2\)L-Me\(_2\)TAOC and HL-TAAP-\(^t\)Bu\(_2\) were prepared. They are twelve-membered macrocyclic rings with many sp\(^2\)-hybridized atoms, which impose steric rigidity. At the same time, they possess two trans-positioned sp\(^3\)-hybridized amine donor atoms, which allow folding along the N\(_{Amin}\)-N\(_{Amin}\) axis. The equatorial nitrogen donor atoms are provided by deprotonated amide groups or by the nitrogen atom of a pyridine ring. For both ligands, a satisfactory synthetic route with passable yields was established. In the crystal structure of the macrocycle HL-TAAP-\(^t\)Bu\(_2\), a boat-boat conformation is observed. The conformation required for cis-octahedral coordination to metal ions is already realized in the metal-free state of the ligand owing to intramolecular hydrogen bonds. Free rotation about the C-C bonds is only slightly hindered in this ligand, since the diastereotopic H atoms of the methylene groups appear as broad singlets in the \(^1\)H NMR spectrum. The macrocycles were successfully complexed with nickel(II), copper(II), and cobalt(II) ions and crystallized, with satisfactory yields. Without an additional bidentate coligand, the ligand H\(_2\)L-Me\(_2\)TAOC always forms five-coordinate mono-chloro complexes, whereas the ligand HL-TAAP-\(^t\)Bu\(_2\) forms six-coordinate compounds.
By using bidentate coligands, six-fold coordination was enforced for the macrocycle H\(_2\)L-Me\(_2\)TAOC. Like all six-coordinate compounds in this work, these complexes adopt a cis-octahedral coordination environment. To obtain reference complexes, the corresponding copper and nickel complexes with the respective coligands were also synthesized with the diazapyridinophane ligands L-N\(_4\)Me\(_2\) and L-N\(_4\)\(^t\)Bu\(_2\). In the crystal structures, the compounds of the diazapyridinophane ligands are generally more strongly folded than those of the amidate ligands. The strong \( \sigma\)-donor properties of the amidate groups generally lead to shorter equatorial bonds to the metal ions. Comparison of the bond lengths with those of similar known high- and low-spin cobalt(II) complexes showed that the Co-N\(_{amide}\) bond length lies between 1.95 and 1.97 Å in the high-spin state, while values between 1.92 and 1.95 Å are found for the low-spin state. The electrochemical investigations showed that for the majority of the compounds the oxidation potential increases clearly in the order of the macrocycles H\(_2\)L-Me\(_2\)TAOC < HL-TAAP-\(^t\)Bu\(_2\) < L-N\(_4\)Me\(_2\) < L-N\(_4\)\(^t\)Bu\(_2\). This clearly demonstrates the easier oxidizability of the complexes with the negatively charged ligands, which thus stabilize higher oxidation states better. From the energy of the first excitation in the UV/Vis spectra of the nickel(II) complexes, the ligand field strength of the macrocyclic ligands follows approximately the order H\(_2\)L-Me\(_2\)TAOC ≈ L-N\(_4\)Me\(_2\) > L-N\(_4\)\(^t\)Bu\(_2\) ≈ HL-TAAP-\(^t\)Bu\(_2\).
In the recent past, concepts for determining the reinforcement required to limit crack widths in thick structural members have been developed and incorporated into the current design codes for reinforced and prestressed concrete construction. With the aid of these concepts, thick members can be reinforced more sensibly than with the design procedures applied previously.
Reinforced concrete floor slabs in buildings, however, are generally slender members and thus do not belong to the category of members for which the new design approaches were developed. Nevertheless, it is these slabs that dominate the material consumption in building structures. The design of reinforced concrete slabs therefore plays a decisive role with regard to the economy and environmental impact of buildings.
Reinforced concrete floor slabs are usually subjected to a combined action of load and restraint. Because the restraint force required for design is directly linked to the stiffness, it can only be estimated with sufficient accuracy by means of physically nonlinear finite element analyses. For structural engineers, however, such a procedure would involve a disproportionately high effort in real construction projects. In practice it is therefore currently common to provide, at every point of a member, the larger of the reinforcement cross-sections resulting from load or from restraint. The reinforcement required to resist the restraint actions is chosen on the basis of the cracking force of the respective member. In many cases, however, this procedure is uneconomical, and it can also lie on the unsafe side.
In the present work, the question of suitable reinforcement for floor slabs under a combined action of load and centric restraint is therefore investigated experimentally and numerically. Based on the insights gained from the experimental and numerical investigations, an approximate method is developed that allows a realistic estimate of the axial restraint force and thus an economical and safe choice of the reinforcement for crack width limitation in one-way reinforced concrete floor slabs.
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases such as cancer metastasis. For migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
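The coupling described above, a deterministic contractile flow interrupted by random adhesion-state switches, can be sketched as a toy piecewise deterministic process. All rates, the drift law, and parameter values below are illustrative placeholders, not the thesis' calibrated model:

```python
import random

def simulate_pdmp(t_end=10.0, dt=1e-3, k_on=5.0, k_off=2.0, seed=1):
    """Toy piecewise deterministic process for cell migration: position x
    follows a deterministic contractile drift whose strength depends on a
    discrete adhesion state (bound/unbound) that switches at random,
    exponentially distributed times. Rates, drift law, and parameters are
    illustrative placeholders, not the thesis' calibrated model."""
    rng = random.Random(seed)
    t, x, bound = 0.0, 0.0, True
    traj = [(t, x)]
    # draw the next random switching time of the adhesion cluster
    t_switch = t + rng.expovariate(k_off if bound else k_on)
    while t < t_end:
        if t >= t_switch:            # stochastic jump: adhesion binds/unbinds
            bound = not bound
            t_switch = t + rng.expovariate(k_off if bound else k_on)
        # deterministic flow between jumps: contractility advances the cell
        # only while the adhesion cluster transmits force to the substrate
        v = 1.0 if bound else 0.0
        x += v * dt
        t += dt
        traj.append((t, x))
    return traj
```

A single trajectory alternates deterministic runs and pauses; the thesis' numerical studies analyze ensemble statistics of such processes at far greater fidelity.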
Population decline in rural towns and villages confronts municipalities with considerable challenges in the provision of basic public services. In the course of a strong, dispersed demographic thinning of settlements, the functionality of networked wastewater infrastructure can only be guaranteed with additional operational effort, up to and including structural adaptations of the systems. Current challenges related to climate change, such as longer dry periods and increasingly frequent and intense precipitation events, a heightened awareness of the finiteness of precious resources such as water and phosphorus, and current energy policy questions additionally call into question the centralized system concept of municipal wastewater disposal in dispersed settlement structures.
The provision, design, financing, and operation of municipal infrastructure depend on those who use it. In the course of far-reaching, dynamic, motive-driven migration movements, rural areas in particular are affected by strong out-migration, especially of younger cohorts, and additionally by ageing populations and high mortality rates. Depending on the spatial-structural characteristics of individual towns and villages and their location in the supra-local context, the mix of uses, the size of the settlements, the age structure of the inhabitants, and the population density can vary considerably. The composition of population, spatial functions, and infrastructure likewise gives rise to different demographic development perspectives, which are analyzed in this work on a scenario basis.
The demographic scenarios, built on the de facto population of the rural model towns and villages, form the basis for the subsequent investigation of the small-scale effects on their wastewater disposal systems. The investigative approach rests on a comprehensive data and analysis base from the BMBF joint project SinOptiKom (2016). The synthesis of the de facto population, its development through cohort-specific migration behavior and natural population change, and the SinOptiKom analysis results on the transformation of the wastewater disposal systems in the model municipalities forms the foundation for deriving and discussing possible transformation and consolidation strategies for the municipalities.
The methodological focus of the investigation is the scenario-based analysis, with which possible future developments in the field under consideration can be represented both quantitatively and graphically and discussed by relevant actors in local, supra-local, and sectoral planning in order to derive strategies for action.
On the Effect of Nanofillers on the Environmental Stress Cracking Resistance of Glassy Polymers
(2019)
It is well known that reinforcing polymers with small amounts of nano-sized fillers is one of the most effective methods for simultaneously improving their mechanical and thermal properties. However, only a small number of studies have focused on environmental stress cracking (ESC), which is a major cause of premature failure of plastic products in service. This work therefore focused on the influence of nano-SiO2 particles on the morphological, optical, mechanical, thermal, and environmental stress cracking properties of amorphous-polymer-based nanocomposites.
Polycarbonate (PC), polystyrene (PS), and poly(methyl methacrylate) (PMMA) nanocomposites containing different amounts and sizes of nano-SiO2 particles were prepared using a twin-screw extruder followed by injection molding. Adding a small amount of nano-SiO2 reduced the optical properties but improved the tensile, toughness, and thermal properties of the polymer nanocomposites. The significant enhancement in mechanical and thermal properties was attributed to the adequate level of dispersion and interfacial interaction of the SiO2 nanoparticles in the polymer matrix, which likely increased the efficiency of stress transfer across the nanocomposite components. Moreover, the data revealed a clear dependency on filler size: polymer nanocomposites filled with smaller nanofillers exhibited an outstanding enhancement in both mechanical properties and transparency compared with nanocomposites filled with larger particles. The best compromise of strength, toughness, and thermal properties was achieved in the PC-based nanocomposites; therefore, special attention was given to the influence of the nanofiller on the ESC resistance of PC.
The ESC resistance of the materials was investigated under static loading with and without the presence of stress-cracking agents. Interestingly, the incorporation of nano-SiO2 greatly enhanced the ESC resistance of PC in all investigated fluids. This result was particularly evident with the smaller quantities and sizes of nano-SiO2. The enhancement in ESC resistance was more effective in mild agents and air, where the quality of the deformation process was vastly altered by the presence of nano-SiO2. This finding confirmed that the new structural arrangements on the molecular scale induced by the nanoparticles dominate over the absorption effect of the ESC agent and greatly improve the ESC resistance of the materials. This effect was more pronounced with increasing molecular weight of PC due to an increase in craze stability and fibril density. The most important new finding is that the ESC behavior of polymer-nanocomposite/stress-cracking-agent combinations can be scaled using the Hansen solubility parameter. This allowed us to predict the risk of ESC as a function of filler content for different stress-cracking agents without performing extensive tests. Comparing different amorphous-polymer-based nanocomposites at a given nano-SiO2 particle content, the ESC resistance of the materials improved in the following order: PMMA/SiO2 < PS/SiO2 < low molecular weight PC/SiO2 < high molecular weight PC/SiO2. In most cases, nanocomposites with 1 vol.% of nano-SiO2 particles exhibited the largest improvement in ESC resistance.
However, the remarkable improvement in ESC resistance, particularly in the PC-based nanocomposites, created some challenges for material characterization because testing times (failure times) increased significantly. Accordingly, a superposition approach was applied to construct a master curve of crack propagation from the available short-term tests at different temperatures. The good agreement of the master curves with the experimental data showed that the superposition approach is a suitable comparative method for predicting slow crack growth behavior, particularly for long-duration cracking tests such as those in mild agents. This methodology made it possible to minimize testing time.
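The superposition idea can be illustrated with a short sketch: short-term crack-growth curves measured at several temperatures are shifted along the logarithmic time axis onto a reference temperature. The Arrhenius shift form log a_T = (E_a / 2.303 R) (1/T - 1/T_ref) and the activation energy used below are illustrative assumptions, not values fitted in this work:

```python
import math

def arrhenius_shift(T, T_ref, E_a=120e3, R=8.314):
    """log10 of the Arrhenius shift factor a_T for data measured at
    temperature T (K) relative to reference T_ref (K). The activation
    energy E_a (J/mol) is an illustrative value, not a fitted one."""
    return (E_a / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

def build_master_curve(datasets, T_ref):
    """Each dataset: (T_kelvin, [(time_s, crack_length_mm), ...]).
    Equivalent time at the reference: log10 t_ref = log10 t - log10 a_T,
    so data measured at T > T_ref map to longer equivalent times.
    Returns the pooled, sorted points forming one master curve."""
    master = []
    for T, points in datasets:
        shift = arrhenius_shift(T, T_ref)
        for t, a in points:
            master.append((math.log10(t) - shift, a))
    return sorted(master)
```

In practice the shift factors would be fitted so that the segments from all temperatures overlap smoothly; the Arrhenius form merely constrains that fit.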
Additionally, modeling and simulations using the finite element method revealed that multi-field modeling can provide reasonable predictions for diffusion processes and their impact on fracture behavior in different stress-cracking agents. This finding suggests that the implemented model may be a useful tool for quickly screening and mitigating the risk of ESC failures in plastic products.
Spatial planning frequently faces challenges that cannot be met with existing knowledge. To generate new knowledge, pilot projects (Modellvorhaben), i.e. small-scale, time-limited real-world field experiments, are used as an instrument. They aim to produce reusable knowledge in a reproducible way. Within a pilot program, a variety of innovative projects are initiated in different model regions, implemented over a fixed period, and evaluated. Academic or private institutions accompany pilot projects scientifically in order to identify generalizable and transferable findings. The results of this comprehensive evaluation are documented in a final report. Experience shows, however, that this form of distributing results is not sufficient to ensure the use and reuse of the knowledge generated in pilot projects. This is mainly because the final reports are too extensive and insufficiently application-oriented. Comparing existing reports with an ongoing pilot project thus involves too much effort, resulting in an unbalanced cost-benefit ratio. Learning from pilot projects is thereby impeded.
To achieve effective and efficient dissemination, consolidation, and reusability of the knowledge generated in pilot projects, a model was developed in the present research. In a first step, a generally applicable structure for analyzing pilot projects was created, comparable to the generic course of a project in project management. This structure reduces the effort of using findings and knowledge in each of the following phases defined here: identification of a new challenge; call for projects; applications by potential participants; assessment of the applications by the initiator; implementation; evaluation; dissemination, transfer, and consolidation.
In the next step, a knowledge management process (the building blocks of knowledge goals, identification, acquisition, development, assessment, preservation, distribution, and use) was integrated into the individual phases of a pilot project in order to reduce the shared unit of use from the comprehensive final report to smaller, self-contained units of information. In this way, the effort for identifying, acquiring, and using knowledge is reduced. At the end of each phase, an assessment is carried out and the knowledge obtained is shared efficiently. This requires systematic interaction between the actors of pilot projects and a central collection of the knowledge. A key result of this work is the development of a novel exchange infrastructure that both preserves the generated knowledge and distributes it systematically. Knowledge gained during the course of a pilot project can thus be exchanged and reused early, so that the phase of dissemination, transfer, and consolidation is shifted into the process itself. The infrastructure is intended to be freely accessible and user-friendly.
The developed model enables effective and efficient reuse of the knowledge generated in pilot projects and creates a resilient basis for new projects in spatial planning.
Model uncertainty is a challenge inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures generally take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise because a strategy that is optimal under the approximating model might perform rather badly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.
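To illustrate how expert opinions improve drift estimates, a scalar conjugate-Gaussian (Kalman-type) filter can be sketched. The model below, a constant drift observed through noisy returns plus occasional direct expert observations of the drift, is a simplified stand-in for the Black-Scholes-type setting with a Gaussian drift process; all parameters are illustrative:

```python
import math
import random

def kalman_drift_filter(mu_true, n_steps=1000, dt=0.01,
                        sigma_ret=0.2, sigma_expert=0.1,
                        expert_every=100, seed=0):
    """Scalar sketch of drift filtering: returns dR = mu*dt + sigma*dW are
    noisy observations of an unknown constant drift mu; expert opinions are
    occasional direct noisy observations of mu. Conjugate Gaussian updates
    track the posterior mean m and variance p. All parameters illustrative."""
    rng = random.Random(seed)
    m, p = 0.0, 1.0            # Gaussian prior on the drift
    for k in range(n_steps):
        # return observation: dR ~ N(mu*dt, sigma_ret^2 * dt),
        # so dR/dt observes mu with variance sigma_ret^2 / dt
        dR = mu_true * dt + sigma_ret * math.sqrt(dt) * rng.gauss(0, 1)
        obs_var = sigma_ret ** 2 / dt
        gain = p / (p + obs_var)
        m, p = m + gain * (dR / dt - m), (1 - gain) * p
        if (k + 1) % expert_every == 0:
            # expert opinion: direct noisy view of the drift itself
            z = mu_true + sigma_expert * rng.gauss(0, 1)
            gain = p / (p + sigma_expert ** 2)
            m, p = m + gain * (z - m), (1 - gain) * p
    return m, p
```

The posterior variance p shrinks much faster with the expert observations than with return observations alone, which mirrors the effect that more frequent expert opinions have on the uncertainty sets used for the robust strategies.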
Private data analytics systems should ideally provide the required analytic accuracy to analysts and the specified privacy to the individuals whose data is analyzed. Devising a general system that works for a broad range of datasets and analytic scenarios has proven to be difficult.
Despite the advent of differentially private systems with proven formal privacy guarantees, industry still uses inferior ad-hoc mechanisms that provide better analytic accuracy. Differentially private mechanisms often need to add large amounts of noise to statistical results, which impairs their usability.
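The accuracy cost mentioned above can be made concrete with the textbook Laplace mechanism (a generic sketch, not one of the mechanisms developed in this thesis): a statistic with sensitivity s released under privacy parameter epsilon receives additive noise of scale s/epsilon, so stronger privacy (smaller epsilon) means proportionally more noise:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Textbook Laplace mechanism: release true_value + Lap(sensitivity/epsilon).
    Smaller epsilon (stronger privacy) means a larger noise scale and hence
    lower analytic accuracy. Illustrative only, not QBB or UniTraX."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Laplace(0, b) sampled as the difference of two Exp(1/b) variables
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise
```

For a counting query (sensitivity 1), epsilon = 0.1 yields noise of expected magnitude 10 on every released count, which is exactly the usability problem the thesis targets.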
In my thesis I follow two approaches to improve the usability of private data analytics systems in general and differentially private systems in particular. First, I revisit ad-hoc mechanisms and explore the possibilities of systems that do not provide Differential Privacy or only a weak version thereof. Based on an attack analysis I devise a set of new protection mechanisms including Query Based Bookkeeping (QBB). In contrast to previous systems QBB only requires the history of analysts’ queries in order to provide privacy protection. In particular, QBB does not require knowledge about the protected individuals’ data.
In my second approach I use the insights gained with QBB to propose UniTraX, the first differentially private analytics system that allows analysts to analyze part of a protected dataset without affecting the other parts and without giving up on accuracy. I show UniTraX's usability by way of multiple case studies on real-world datasets across different domains. UniTraX allows more queries than previous differentially private data analytics systems at moderate runtime overheads.
In this dissertation, 2,6-bis(pyrazol-3-yl)pyridine ligands were functionalized in the ligand backbone and in the N-position in order to generate chiral C2-symmetric tridentate ligands with nitrogen donor atoms as well as bifunctional pentadentate ligands with nitrogen and phosphorus donor atoms. The C2-symmetric tridentate ligands were reacted with iron(II) and ruthenium(II) precursors to give monometallic catalysts, which were successfully applied in the hydrosilylation and transfer hydrogenation of carbonyl compounds. Initial investigations of the reaction mechanisms were also carried out. With the aid of the readily accessible bifunctional bis(pyrazolyl)pyridine ligands, numerous mono- and multimetallic transition metal complexes were synthesized and in part tested for cooperativity in the catalytic reduction of ketones. In the hydrogenation and transfer hydrogenation reactions performed, a clear increase in activity was observed for some multimetallic catalysts compared with their monometallic derivatives. Through a judicious choice of transition metal combinations, first insights into the cooperativity within the multimetallic catalysts were additionally obtained.
The use of sensors in modern technical systems and consumer products is increasing rapidly. This development is characterized by two major factors: the mass introduction of consumer-oriented sensing devices to the market and the sheer amount of sensor data being generated. These characteristics raise challenges regarding both the reliability of consumer sensing devices and the management and utilization of the generated sensor data. This thesis addresses these challenges through two main contributions. It presents a novel framework that leverages sentiment analysis techniques to assess the quality of consumer sensing devices. It also couples semantic technologies with big data technologies to present a new, optimized approach for the realization and management of semantic sensor data, thereby providing a robust means of integrating, analyzing, and reusing the generated data. The thesis also presents several applications that show the potential of the contributions in real-life scenarios.
Due to the broad range, growing feature set, and fast release pace of new sensor-based products, evaluating these products is very challenging, as standard product testing is not practical. As an alternative, an end-to-end aspect-based sentiment summarizer pipeline for the evaluation of consumer sensing devices is presented. The pipeline uses product reviews to extract sentiment at the aspect level and includes several components: a product name extractor, an aspect extractor, and a lexicon-based sentiment extractor that handles multiple sentiment analysis challenges such as sentiment shifters, negations, and comparative sentences, among others. The proposed summarizer's components generally outperform the state-of-the-art approaches. As a use case, features of the market-leading fitness trackers are evaluated, and a dynamic visual summarizer is presented to display the evaluation results and to provide personalized product recommendations for potential customers.
The increased usage of sensing devices in the consumer market is accompanied with increased deployment of sensors in various other fields such as industry, agriculture, and energy production systems. This necessitates using efficient and scalable methods for storing and processing of sensor data. Coupling big data technologies with semantic techniques not only helps to achieve the desired storage and processing goals, but also facilitates data integration, data analysis, and the utilization of data in unforeseen future applications through preserving the data generation context. This thesis proposes an efficient and scalable solution for semantification, storage and processing of raw sensor data through ontological modelling of sensor data and a novel encoding scheme that harnesses the split between the statements of the conceptual model of an ontology (TBox) and the individual facts (ABox) along with in-memory processing capabilities of modern big data systems. A sample use case is further introduced where a smartphone is deployed in a transportation bus to collect various sensor data which is then utilized in detecting street anomalies.
In addition to the aforementioned contributions, and to highlight potential use cases of publicly available sensor data, a recommender system is developed using running route data and proximity-based retrieval to provide personalized suggestions for new routes that take into account the runner's performance as well as visual and nature-related route preferences.
This thesis aims to enhance the integration of sensing devices into daily-life applications by facilitating the public adoption of consumer sensing devices. It also aims to achieve better integration and processing of sensor data in order to enable new potential usage scenarios for the raw generated data.
Alkyl methoxypyrazines (MPs) have been identified as potent flavor compounds in foodstuffs of plant origin, contributing significantly to the characteristic vegetative, so-called green sensory impression of products made thereof. Of particular interest are 2-isopropyl-3-methoxypyrazine (IPMP), 2-methoxy-3-sec-butylpyrazine (SBMP), and 2-isobutyl-3-methoxypyrazine (IBMP), as they often dominate the flavor due to their low odor thresholds. Biogenesis of MPs takes place in various species, resulting in varying concentration levels and distributions (ppt levels in Vitis vinifera wine berries, ppb levels in vegetables). The aforementioned and further MPs are also found in the body fluids of insects such as ladybugs, e.g., in the species Coccinella septempunctata and Harmonia axyridis. Ladybugs may play a role in the generation of off-flavors in wine if they are incorporated during harvest and wine making. Furthermore, MPs are products of the metabolism of certain microorganisms and can lead to off-flavors indirectly via contact materials such as cork stoppers for wine.
The last step in the proposed biosynthetic pathway(s) has been clarified as the O-methylation of alkyl hydroxypyrazines, whereas the initial steps, which are thought to start from naturally occurring amino acids, are not yet fully explored. In the case of SBMP, the alkyl side chain may thus derive from L-isoleucine, resulting in the same, namely (S)-, configuration of the stereocenter in SBMP.
With regard to analytical approaches, MPs at high concentration levels, as in vegetables, are accessible using classical extraction techniques and simple separation and detection techniques. For the lower concentration ranges near the odor thresholds of MPs in wine, on the other hand, and for the highly complex matrix of wine, most analytical methods do not reach the necessary limits of detection.
In this work, a trace-level method was developed for the routine analysis and quantitation of MPs in the concentration range of their odor thresholds in white wine of 1 to 2 ng/L. Extraction of the analytes was based on automated headspace solid phase microextraction (HS-SPME). Separation was performed either by heart-cut multidimensional GC (H/C MDGC) or by comprehensive two-dimensional GC (GCxGC). MPs were detected using mass spectrometry (MS) in the selected ion monitoring (SIM) mode, especially for higher concentrations. To better resolve co-elution situations and for lower MP concentrations, tandem MS (MS/MS) was used in the selected reaction monitoring (SRM) mode. For more reliable quantitation of trace levels, the stable isotope dilution assay (SIDA) was applied. The optimized method using HS-SPME H/C MDGC-MS/MS achieved limits of detection below the odor thresholds of MPs in wine and allowed for the analysis of further MPs (isomers of dimethyl methoxypyrazine). As the core of the enantiodifferentiation, one of the analytical columns in MDGC or GCxGC was replaced by a column with chiral selectors on the stationary phase, enabling evaluation of the enantiomeric composition of SBMP in various samples.
The quantitation of MPs in vegetable samples in this work revealed levels in the range of a few ng/kg up to µg/kg, which is in accordance with results from the literature. Quantitative studies of wine (Sauvignon blanc and Cabernet blanc) indicated influences of oenological and viticultural processes on MP levels. Evaluation of the enantiomeric composition of SBMP resulted in the exclusive detection of (S)-SBMP in all analyzed samples: various vegetables, wine, and ladybug species. The congruency of the configuration of the stereocenter in the side chain of (S)-SBMP with that of L-isoleucine supports the hypothesis that natural amino acids serve as starting materials in the biosynthesis of MPs. By extending the optimized H/C MDGC method, a successful separation of the isomers 3-methoxy-2,5-dimethylpyrazine (DMMP) and 2-methoxy-3,5-dimethylpyrazine (MDMP) was achieved. Of these isomers, only MDMP was detected in cork and in wine with an atypical cork off-flavor, and it was identified for the first time in two ladybug species, Harmonia axyridis and Coccinella septempunctata.
The methods developed in this work allowed for the quantitation of MPs in the range of a few ng/L. This can be used for further studies on influences on endogenous MP levels in Vitis vinifera, or on influences on MP levels in wine exerted by oenological processes or by contamination from different sources. To further clarify the biogenesis of MPs, studies using labeled precursors or intermediates have to be developed. For the analysis of the resulting compounds from such fundamental studies, the quantitative (and enantioselective) analytical methods described here are essential.
Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, it still occupies a niche within the discipline, relying on ad hoc systems and data analysis methods, and these studies were mostly not followed by further elaborative work. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates the use of planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software, and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition, including the ambient, object, and wearable subspaces.
The hardware part establishes a general architecture with separate implementations in the large-scale and mobile directions. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, and visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate the TPM sensing modality in their application scenarios.
The significant findings from the empirical studies show that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of the body-print; a pressure-sensitive tablecloth can detect various dining actions via the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swirl office chairs with TPM sensors under the cover can detect the sitter's posture in real time; TPM can detect emotion-related touch interactions for smart objects, toys, or robots. In the wearable subspace, TPM sensors can perform pressure-based mechanomyography to detect muscle and body movement; TPM can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the `swirl chair' study achieved a classification accuracy of 79.5% across 10 posture classes, and the `soccer shoe' study reached 98.8% across 17 combinations of angle and intensity.
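The "well above chance" claim can be made concrete: for a balanced k-class problem, random guessing yields an expected accuracy of 1/k. A minimal sketch (not code from the dissertation) comparing the reported accuracies against their chance levels:

```python
# Chance level for a balanced k-class classification problem is 1/k.
def chance_level(num_classes: int) -> float:
    return 1.0 / num_classes

# Reported results from the empirical studies summarized above.
studies = {
    "swirl chair": {"accuracy": 0.795, "classes": 10},
    "soccer shoe": {"accuracy": 0.988, "classes": 17},
}

for name, r in studies.items():
    chance = chance_level(r["classes"])
    ratio = r["accuracy"] / chance
    print(f"{name}: accuracy {r['accuracy']:.1%} vs. chance {chance:.1%} "
          f"({ratio:.1f}x above chance)")
```

For the swirl chair, chance is 10%, so 79.5% is roughly eight times better than guessing; for the soccer shoe, chance is about 5.9%.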
Participation in continuing education is becoming ever more popular and has been in demand at a consistently high level for some time. More and more higher-education institutions are acting as providers on the continuing-education market, a role whose relevance was underpinned by the Bologna reform and the education-policy emphasis on a strategy of recurrent education. The learners, who as lifelong learners continue to invest in their future in adulthood, are regarded as participants with an affinity for continuing education who bring a wealth of accumulated biographical learning and professional experience and who frequently take part in academic continuing education alongside work and everyday life. Beyond a cost-benefit calculation of participation, since the Bologna reform the focus has been not only on the learning results but equally on the effects achieved through participation, as outcome. The education-policy view of this knowledge-based outcome also means taking the participants' individual competence balance more strongly into account when considering their educational efforts. If the assessment of effective participation in continuing education centers above all on the learners and their competence development, the question arises which sustainable, long-term effects the learners themselves attribute to the continuing education and tangibly perceive as an expansion of their own scope of action. The present interview study, situated in the field of participant research, reconstructs typical patterns of action as the appropriation performance of distance-learning students in guided self-study and interprets them with regard to the competence development perceived through the continuing education. The associated aim is to derive guidance for the design of outcome-oriented teaching-learning arrangements.
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second-order analysis, which we call the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D cases of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data from glaciology. Directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and in 3D. Based on the results, we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to 3D data from glaciology, consisting of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice-dating models and consequently to give a correct interpretation of the climate information that can be found by analyzing ice cores. In this thesis we consider data from three different ice cores: the Talos Dome core, the EDML core, and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we consider two classification algorithms, which aim to classify points in a superposition of a regular, isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D, and apply them to the ice data. Finally, we consider the problem of testing for anisotropy and the limiting behavior of the geometric anisotropy transform.
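Under geometric anisotropy, an anisotropic pattern is obtained from an isotropic one by a linear transformation: a compression along one axis, possibly rotated. A minimal 2D sketch of this mechanism, assuming an area-preserving compression applied to a homogeneous Poisson pattern (the thesis works with regular processes and estimation methods beyond this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson(intensity: float, window: float = 1.0) -> np.ndarray:
    """Homogeneous Poisson process on the square [0, window]^2."""
    n = rng.poisson(intensity * window ** 2)
    return rng.uniform(0.0, window, size=(n, 2))

def geometric_anisotropy(points: np.ndarray, c: float, theta: float) -> np.ndarray:
    """Compress by factor c (0 < c <= 1) along the axis rotated by theta,
    stretching the orthogonal axis by 1/c so the intensity is preserved."""
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    comp = np.diag([c, 1.0 / c])          # area-preserving compression
    transform = rot @ comp @ rot.T
    return points @ transform.T

iso = simulate_poisson(intensity=500)
aniso = geometric_anisotropy(iso, c=0.5, theta=0.0)
# With theta = 0, x-coordinates are halved and y-coordinates doubled.
```

A directional analysis then tries to recover c and theta from the transformed pattern alone.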
While the design step should be free from computation-related constraints and operations due to its artistic aspect, the modeling phase has to prepare the model for the later stages of the pipeline.
This dissertation is concerned with the design and implementation of a framework for local remeshing and optimization. Based on the experience gathered, a full study about mesh quality criteria is also part of this work.
The contributions can be highlighted as: (1) a local meshing technique based on a completely novel approach, constrained to preserve the mesh of non-interesting areas; with this concept, designers can work on the design details of specific regions of the model without introducing more polygons elsewhere; (2) a tool capable of recovering the shape of a refined area into its decimated version, enabling details on optimized meshes of detailed models; (3) the integration of novel techniques into a single framework for meshing and smoothing that is constrained to the surface structure; (4) the development of a mesh quality criteria priority structure, able to classify and prioritize criteria according to the application of the mesh.
Although efficient meshing techniques have been proposed over the years, most of them lack the ability to mesh smaller regions of the base mesh while preserving the mesh quality and density of outer areas.
Considering this limitation, this dissertation seeks answers to the following research questions:
1. Given that mesh quality is relative to the application it is intended for, is it possible to design a general mesh evaluation plan?
2. How to prioritize specific mesh criteria over others?
3. Given an optimized mesh and its original design, how to improve the representation of single regions of the first, without degrading the mesh quality elsewhere?
Four main achievements came from the respective answers:
1. The Application-Driven Mesh Quality Criteria Structure: Because mesh standards vary widely with the computer-aided operations performed for different applications, e.g. animation or stress simulation, a structure for better visualization of mesh quality criteria is proposed. The criteria can be used to guide the mesh optimization, making the task consistent and reliable. This dissertation also proposes a methodology to optimize the criteria values, which is adaptable to the needs of a specific application.
2. Curvature-Driven Meshing Algorithm: A novel local meshing technique that works on a desired area of the mesh while preserving its boundaries as well as the rest of the topology. It causes only a slow growth in the overall number of polygons by making only small regions denser. The method can also be used to recover the details of a reference mesh in its decimated version while refining it. Moreover, it employs a fast and easy-to-implement geometric approach that represents surface features as simple circles, which are used to guide the meshing. It also generates quad-dominant meshes, with the triangle count directly dependent on the size of the boundary.
3. Curvature-Based Method for Anisotropic Mesh Smoothing: A geometry-based method is extended to 3D space to produce anisotropic elements where needed. This is made possible by mapping the original space to another that embeds the surface curvature. The methodology is used to enhance the smoothing algorithm by making the nearly regularized elements follow the surface features, preserving the original design. The mesh optimization method also preserves mesh topology while resizing elements according to the local mesh resolution, effectively enhancing the intended design aspects.
4. Framework for Local Restructuring of Meshed Surfaces: The combination of both methods creates a complete tool for recovering surface details through mesh refinement and curvature-aware mesh smoothing.
The findings and experience concerning climate change have led to enormous changes in energy and climate policy worldwide in recent years. This is driving an ever stronger transformation of the generation, consumption and supply structures of our energy systems. The focus of energy generation on fluctuating renewable energy sources requires a far more extensive use of flexibility than has been the case so far.
This thesis discusses the use of heat pumps and storage systems as flexibilities in the context of the cellular approach to energy supply. The flexibility potentials of heat-pump/storage systems are examined and validated on three levels of consideration. The first considers the heat pump, the thermal storage and thermal loads in a general potential analysis. Building on this, the heat-pump/storage systems are considered within a household cell as an energetic unit, followed by investigations in the context of a low-voltage cell. To capture the flexibility behavior, detailed models of the converters and storage units and of their controls are developed, then analyzed and evaluated by means of time-series simulations.
The central question of whether heat pumps with storage systems can contribute as a flexibility to the success of the energy transition can be answered with a clear yes. Nevertheless, the boundary conditions to be observed when using heat-pump/storage systems as flexibility are manifold and, depending on the intended use of the flexibility, require careful consideration. The decisive factors are the outside temperature, the temporal context, the grid, and the economic viability.
In computer graphics, realistic rendering of virtual scenes is a computationally complex problem. State-of-the-art rendering technology must become more scalable to meet the performance requirements for demanding real-time applications.
This dissertation is concerned with core algorithms for rendering, focusing on the ray tracing method in particular, to support and saturate recent massively parallel computer systems, i.e., to distribute the complex computations very efficiently among a large number of processing elements. More specifically, the three targeted main contributions are:
1. Collaboration framework for large-scale distributed memory computers. The purpose of the collaboration framework is to enable scalable rendering in real time on a distributed memory computer. As an infrastructure layer it manages the explicit communication within a network of distributed memory nodes transparently for the rendering application. The research is focused on designing a communication protocol resilient against delays and negligible in overhead, relying exclusively on one-sided and asynchronous data transfers. The hypothesis is that a loosely coupled system like this is able to scale linearly with the number of nodes, which is tested by directly measuring all possible communication-induced delays as well as the overall rendering throughput.
2. Ray tracing algorithms designed for vector processing. Vector processors are to be efficiently utilized for improved ray tracing performance. This requires the basic, scalar traversal algorithm to be reformulated in order to expose a high degree of fine-grained data parallelism. Two approaches are investigated: traversing multiple rays simultaneously, and performing multiple traversal steps at once. Efficiently establishing coherence in a group of rays, as well as avoiding sorting of the nodes in a multi-traversal step, are the defining research goals.
3. Multi-threaded scheduling and memory management for the ray tracing acceleration structure. Construction times of high-quality acceleration structures are to be reduced by improvements to multi-threaded scalability and utilization of vector processors. Research is directed at eliminating the following scalability bottlenecks: dynamic memory growth caused by the primitive splits required for high-quality structures, and top-level hierarchy construction where simple task parallelism is not readily available. Additional research addresses how to expose scatter/gather-free data parallelism for efficient vector processing.
Together, these contributions form a scalable, high-performance basis for real-time, ray tracing-based rendering, and a prototype path tracing application implemented on top of this basis serves as a demonstration.
The key insight driving this dissertation is that the computational power necessary for realistic light transport in real-time rendering applications demands massively parallel computers, which in turn require highly scalable algorithms. This dissertation therefore provides important research along the path towards virtual reality.
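The idea behind traversing multiple rays simultaneously (the second contribution) can be illustrated without a full acceleration structure: one slab test is evaluated for a whole packet of rays at once, so a single vector instruction stream serves many rays. A minimal numpy sketch of this idea, not the vectorized traversal developed in the thesis:

```python
import numpy as np

def packet_intersect_aabb(origins, inv_dirs, box_min, box_max):
    """Vectorized slab test: intersect a packet of rays with one AABB.

    origins, inv_dirs: (n, 3) arrays, with inv_dirs = 1 / direction.
    Returns a boolean mask of the rays that hit the box.
    """
    t1 = (box_min - origins) * inv_dirs          # (n, 3) slab distances
    t2 = (box_max - origins) * inv_dirs
    t_near = np.minimum(t1, t2).max(axis=1)      # entry distance per ray
    t_far = np.maximum(t1, t2).min(axis=1)       # exit distance per ray
    return (t_near <= t_far) & (t_far >= 0.0)

# Four rays along +x against the unit box [0,1]^3; only the rays whose
# origins lie inside the y/z bounds can hit.
origins = np.array([[-1.0, 0.5, 0.5],
                    [-1.0, 2.0, 0.5],
                    [-1.0, 0.5, 2.0],
                    [ 0.5, 0.5, 0.5]])
dirs = np.array([[1.0, 0.0, 0.0]] * 4)
with np.errstate(divide="ignore"):
    hits = packet_intersect_aabb(origins, 1.0 / dirs,
                                 np.zeros(3), np.ones(3))
# hits -> [True, False, False, True]
```

Real packet tracers additionally have to keep the rays coherent as they descend the hierarchy, which is exactly the research goal named above.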
Many loads acting on a vehicle depend on the condition and quality of the roads traveled as well as on the driving style of the motorist. During vehicle development, good knowledge of these operating conditions is therefore advantageous. For that purpose, usage models for different kinds of vehicles are considered. Based on these mathematical descriptions, representative routes for multiple user types can be simulated in a predefined geographical region. The obtained individual driving schedules consist of coordinates of starting and target points and can thus be routed on the true road network. Additionally, different factors, like the topography, can be evaluated along the track.
Available statistics resulting from travel surveys are integrated to guarantee reasonable trip lengths. Population figures are used to estimate the number of vehicles in the administrative units contained in the region. The creation of thousands of such geo-referenced trips then allows the determination of realistic measures of the durability loads.
Both private and commercial use of vehicles is modeled. For the former, commuters are modeled as the main user group, conducting daily drives to work plus additional leisure and shopping trips during the workweek. For the latter, taxis are considered as an example of passenger-car use. The model of light-duty commercial vehicles is split into two types of driving patterns, stars and tours, and into the common traffic classes of long-distance, local and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical data are presented in detail. Examples are included for the evaluation of routes based on topographical factors, and for speed profiles comparing the influence of the driving style.
In the first of the four parts of this work, five Brønsted-acidic, mesoporous organosilicas were synthesized, four based on a BTEB-PMO and one on an SBA-15. They exhibited high specific surface areas between 581 and 710 m2/g and ordered, 2D-hexagonal structures. Three of these materials were examined more closely as catalysts. While they showed low to moderate activity in condensation reactions, the SO3H-BTEB-PMO achieved complete conversion in the THP protection of isoamyl alcohol and phenol after only 10 min with just 0.1 mol% of catalyst. In the corresponding deprotections, its activity was comparable to that of pTsOH. The catalyst could be regenerated several times without losing significant activity.
In the second part of this work, the SO3H-BTEB-PMO was used to immobilize four cationically functionalized phenothiazines. All phenothiazines were synthesized in cooperation within DFG project TH 550/20-1 by M.Sc. Hilla Khelwati in the group of Prof. Dr. T. J. J. Müller at HHU Düsseldorf. This yielded novel, redox-active hybrid materials with specific surface areas between 500 and 688 m2/g, 2D-hexagonal structure, and phenothiazine loadings between 167 and 243 µmol/g. Conversion into their stable radical cations was achieved by targeted irradiation with light; even after ten months of storage in the dark, radical species could still be detected at only slightly reduced intensity. Two further cationically functionalized phenothiazines were immobilized on BTEB-NP following the same principle. These two organosilicas had specific surface areas of 335 and 565 m2/g and phenothiazine loadings of 394 and 137 µmol/g, respectively. The last two phenothiazines, this time bearing triethoxysilyl groups, were attached to the BTEB-NP framework by grafting and had to be oxidized chemically to obtain stable radical cations. It emerged that, upon irradiation with light, the environment within the pore is essential for the observed stability of the radical cations of the cationically functionalized phenothiazines; this environment was not present for the grafted phenothiazines. The hybrid materials with the grafted phenothiazines had surface areas of 172 and 920 m2/g and phenothiazine loadings of 809 and 88.9 µmol/g, respectively.
The third part dealt with the synthesis of an epi-quinine BTEB-PMO for catalyzing the Mannich reaction of ketimines with the nucleophile 2,4-pentanedione. High conversions between 73 and 98%, with stereoselectivities between 69 and 98% ee, were obtained. The mesoporous catalyst system had a specific surface area of 812 m2/g, a 2D-hexagonal structure, and a loading of active sites of 151 µmol/g. The catalyst could be regenerated several times, with slight losses in conversion and unchanged selectivity.
In the fourth and final part, ways of incorporating vanadium and aluminum species into the BTEB-PMO framework were investigated in order to obtain catalysts for the epoxidation of olefins. A large number of materials was obtained, but they exhibited no ordered structure and showed high activity only in the epoxidation of (Z)-cyclooctene.
Under the notion of cyber-physical systems, an increasingly important research area has evolved with the aim of improving the connectivity and interoperability of previously separate system functions. Today, the advanced networking and processing capabilities of embedded systems make it possible to establish strongly distributed, heterogeneous systems of systems. In such configurations, the system boundary does not necessarily end with the hardware, but can also take into account the wider context, such as people and environmental factors. In addition to being open and adaptive to other networked systems at integration time, such systems need to be able to adapt themselves in accordance with dynamic changes in their application environments. Considering that many of the potential application domains are inherently safety-critical, it has to be ensured that the necessary modifications of the individual system behavior are safe. However, currently available state-of-the-practice and state-of-the-art approaches for safety assurance and certification are not applicable to this context.
To provide a feasible solution approach, this thesis introduces a framework that allows “just-in-time” safety certification of the dynamic adaptation behavior of networked systems. Dynamic safety contracts (DSCs) are presented as the core solution concept for the monitoring and synthesis of decentralized safety knowledge. Ultimately, this opens up a path towards standardized service provision concepts as a set of safety-related runtime evidences. DSCs enable the modular specification of relevant safety features in networked applications as a series of formalized demand-guarantee dependencies. The specified safety features can be hierarchically integrated and linked to an interpretation level for assessing the scope of possible safe behavioral adaptations. In this way, the networked adaptation behavior can be conditionally certified with respect to the fulfilled DSC safety features during operation. As long as the continuous evaluation process provides safe adaptation behavior for a networked application context, safety can be guaranteed for a networked system mode at runtime. Significant safety-related changes in the application context, however, can lead to situations in which no safe adaptation behavior is available for the current system state. In such cases, the remaining DSC guarantees can be utilized to determine optimal degradation concepts for the dynamic applications.
For the operationalization of the DSC approach, suitable specification elements and mechanisms have been defined. Based on a dedicated GUI-engineering framework, it is shown how DSCs can be systematically developed and transformed into appropriate runtime representations. Furthermore, a safety-engineering backbone is outlined to support the DSC modeling process in concrete application scenarios. The conducted validation activities show the feasibility and adequacy of the proposed DSC approach; in parallel, limitations and areas of future improvement are pointed out.
The present work can be divided into three topic areas, each of which can be represented by a group 6 transition metal. A representative of the bulky alkylcyclopentadienyl ligands was used throughout as ancillary ligand. More than 40 complexes of chromium, molybdenum and tungsten with bulky cyclopentadienyl ligands were prepared and investigated in the course of this work, and most of the structural motifs could be elucidated by X-ray crystallography.
With the synthetically and sterically particularly demanding pentaisopropylcyclopentadienyl ligand alone, seven chromium complexes were prepared and characterized. The starting compounds of the form [RCpCr(µ-Br)]2 with RCp = Cp’’’ and 5Cp proved highly reactive towards nucleophiles, as shown in reactions with sodium cyclopentadienides, sodium bis(trimethylsilyl)amide, an N-heterocyclic carbene, various sodium phenolates, sodium phenylacetylide, various Grignard reagents, lithium aluminum hydride, sodium azide and potassium cyanide. Mixed-substituted pentaalkyl chromocenes [Cp’’’CrCp’’] and [5CpCrCp] with differently substituted ancillary ligands were synthesized successfully. With bulky nucleophiles or with a bulky N-heterocyclic carbene ligand, the mononuclear complexes [RCpCrYL] combining an anionic ligand Y and a neutral donor ligand L (Y/L = Br/NHC, CH3/NHC, N(SiMe3)2/THF and O(tBu)2C6H3/THF) were formed. Reactions of the chromium(II) starting compounds with phenolato ligands gave products with different structural motifs depending on the bulkiness of the substituents on the aromatic six-membered ring of the ligand employed. Besides the mononuclear complex [Cp’’’Cr(O(tBu)2C6H3)(THF)], a dinuclear compound [Cp’’’Cr(µ-OPh)]2 was isolated and characterized crystallographically. The µ,η1:η2 coordination of the phenylacetylido bridges of the dinuclear complex [Cp’’’Cr(µ,η1:η2-C≡CPh)]2 was confirmed by X-ray structure analysis. Reactions of the bromido-bridged starting compounds with Me- or Ph-MgX gave the corresponding dinuclear complexes with alkyl or aryl bridging ligands. Using Et- or iPr-MgX, β-H elimination provided access to a chromium hydrido species of the form [RCpCr(µ-H)]2 with RCp = Cp’’’ and 5Cp.
The tetranuclear complexes [Cp’’’Cr(µ-X)]4 formed with the linearly bridging cyanide or with tetrahydridoaluminate as nucleophile. [(Cp’’’Cr)3(µ3-N)4(CrBr)] was obtained by reaction with azide under release of dinitrogen.
A representative of the molybdenum complexes with acetato bridges, [Cp’’Mo(µ-OAc)]2, was obtained and subsequently reacted further with trimethylbromosilane. An oxidation of the metal centers to molybdenum(III) was observed instead of an exchange of the bridging acetato ligands.
Tungsten complexes of the form [RCpW(CO)3CH3] with RCp = Cp’’, Cp’’’, 23Cp and 4Cp were first synthesized starting from W(CO)6 and then halogenated with PCl5 and PBr5. The isolated complexes [RCpWCl4] and [RCpWBr4] with RCp = Cp’’, Cp’’’, 23Cp and 4Cp were obtained in good yields. Subsequent reductions with manganese gave dinuclear tungsten complexes with bridging halogenido ligands, [RCpW(µ-Cl)2]2 and [RCpW(µ-Br)2]2 with RCp = Cp’’, Cp’’’ and 4Cp. Besides [Cp’’W(CO)2(µ-I)]2, reduction of [Cp’’W(CO)2I3] also gave a representative of the “semibridging” complexes, [Cp’’W(CO)2]2.
The development of the first 3D-printable smartphone photometer for acquiring kinetic and static measurement data of enzymatic and chemical reactions on a smartphone, without additional electronic components or elaborate optics, is presented. Development took place both in environmental analysis, for detecting heavy-metal contamination in water samples within citizen-science applications, and as a point-of-need analysis system for industrial use in wineries, each with adapted objectives and example systems. A cuvette-based system was developed to meet the requirements of the citizen-science setting, while a capillary-based smartphone photometer and nephelometer was developed to best meet the requirements of wineries.
In environmental analysis, enzymatic assays carried out in conventional cuvettes are used. The assays are based on detecting the inhibitory effects of heavy-metal contamination in water samples on the activity of enzymes and enzyme cascades. In wine analysis, chemical parameters are determined on the smartphone photometer. The photometric analysis is carried out in pre-packaged assay capillaries, which minimizes the demands on the operator while maximizing the reproducibility of the results.
Using the smartphone's own flash LED as light source, the smartphone photometers are able to quantify changes in extinction in the wavelength ranges of 410-545 nm, 425-650 nm and 555-675 nm. The restriction to these specific wavelength ranges is caused by the inhomogeneous emission spectrum of the smartphone's flash LED. Nephelometric turbidity measurements can be performed quantitatively up to a turbidity of 250 FNU.
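The photometric principle behind such a device is the Lambert-Beer law. A minimal sketch of how extinction (absorbance) and concentration follow from measured intensities; this is generic photometry, not the app's actual code, and the numbers are illustrative:

```python
import math

def extinction(i_reference: float, i_sample: float) -> float:
    """Extinction E = log10(I0 / I): reference vs. sample intensity."""
    return math.log10(i_reference / i_sample)

def concentration(e: float, epsilon: float, path_cm: float) -> float:
    """Lambert-Beer: c = E / (epsilon * d), epsilon in L/(mol*cm), d in cm."""
    return e / (epsilon * path_cm)

# A sample transmitting 10% of the reference intensity has extinction 1.0.
e = extinction(1000.0, 100.0)
c = concentration(e, epsilon=2.0, path_cm=0.5)
```

On the smartphone, `i_reference` and `i_sample` would be derived from camera pixel intensities in the usable wavelength ranges named above.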
Topological insulators (TIs) are a fascinating new state of matter. Like ordinary insulators, their band structure possesses a band gap, so they cannot conduct current in their bulk. However, they are able to conduct current along their edges and surfaces, due to edge states that cross the band gap. What makes TIs so interesting and potentially useful are these robust unidirectional edge currents: they are immune to significant defects and disorder, which means they provide scattering-free transport.
In photonics, using topological protection has a huge potential for applications, e.g. for robust optical data transfer [1-3] – even on the quantum level [4, 5] – or to make devices more stable and robust [6, 7]. Therefore, the field of topological insulators has spread to optics to create the new and active research field of topological photonics [8-10].
Well-defined and controllable model systems can help provide deeper insight into the mechanisms of topologically protected transport. These model systems offer vast control over parameters: for example, arbitrary lattice types without defects can be examined, and single lattice sites can be manipulated. Furthermore, they allow for the observation of effects that usually happen at extremely short time scales in solids. Model systems based on photonic waveguides are ideal candidates for this.
They consist of optical waveguides arranged on a lattice. Due to evanescent coupling, light that is inserted into one waveguide spreads along the lattice. This coupling of light between waveguides can be seen as an analogue to electrons hopping/tunneling between atomic lattice sites in a solid.
The theoretical basis for this analogy is the mathematical equivalence between the Schrödinger equation and the paraxial Helmholtz equation. In these waveguide systems, the role of time is taken by a spatial axis: the field evolution along the waveguides' propagation axis z models the temporal evolution of an electron's wave function in a solid. Electric and magnetic fields acting on electrons in solids need to be incorporated into the photonic platform by introducing artificial gauge fields that act on photons in the same way their electromagnetic counterparts act on electrons. For example, to create a photonic analogue of a topological insulator, the waveguides are bent helically along their propagation axis to model the effect of a magnetic field [3]. This means that these waveguide arrays need to be fabricated in 3D.
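The equivalence underlying this analogy can be stated explicitly. In a standard form (symbols are the conventional ones, not taken from the thesis), the paraxial envelope ψ and an electron wave function Ψ obey structurally identical equations:

```latex
% Paraxial Helmholtz equation for the field envelope \psi(x, y, z):
i\,\partial_z \psi \;=\; -\frac{1}{2k_0}\,\nabla_{\!\perp}^{2}\psi \;-\; \frac{k_0\,\Delta n(x,y)}{n_0}\,\psi
% Schr\"odinger equation for a wave function \Psi(x, y, t):
i\hbar\,\partial_t \Psi \;=\; -\frac{\hbar^2}{2m}\,\nabla^{2}\Psi \;+\; V(x,y)\,\Psi
```

Term by term, the propagation distance z plays the role of time t, the wavenumber k0 that of m/ħ, and a raised refractive index Δn > 0 acts as an attractive potential (V < 0), which is why a waveguide confines light the way a potential well confines an electron.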
In this thesis, a new method to 3D micro-print waveguides is introduced. The inverse structure is fabricated via direct laser writing, and subsequently infiltrated with a material with higher refractive index contrast. We will use these model systems of evanescently coupled waveguides to look at different effects in topological systems, in particular at Floquet topological systems.
We will start with a topologically trivial system, consisting of two waveguide arrays with different artificial gauge fields. There, we observe that an interface between these trivial gauge fields has a profound impact on the wave vector of the light traveling across it. We deduce an analog to Snell's law and verify it experimentally.
Then we will move on to Floquet topological systems, consisting of helical waveguides. At the interface between two Floquet topological insulators with opposite helicity of the waveguides, we find additional trivial interface modes that trap the light. This allows to investigate the interaction between trivial and topological modes in the lattice.
Furthermore, we address the question if topological edge states are robust under the influence of time-dependent defects. In a one-dimensional topological model (the Su-Schrieffer-Heeger model [11]) we apply periodic temporal modulations to an edge wave-guide. We find Floquet copies of the edge state, that couple to the bulk in a certain frequency window and thus depopulate the edge state.
In the two-dimensional Floquet topological insulator, we introduce single defects at the edge. When these defects share the temporal periodicity of the helical bulk waveguides, they have no influence on a topological edge mode: the light moves around or through the defect without being scattered into the bulk. Defects with a different periodicity, however, can – like the defects in the SSH model – induce scattering of the edge state into the bulk.
Finally, we briefly highlight a newly emerging method for the fabrication of waveguides with low refractive index contrast, and introduce new ways to create artificial gauge fields by using orbital angular momentum states in waveguides.
Entwicklung thermoplastischer Faserkunststoffverbunde aus carbonfaserverstärkten PPS/PES-Blends
(2019)
Fiber-reinforced polymer composites (FRPs) account for around 50% of the total mass of today's commercial aircraft. Thermoplastic composites are preferably chosen for their manufacturing advantages (formability, weldability) and their high toughness. Their share of composite components is, however, still relatively small, mainly because of the comparatively high material costs, in particular for carbon-fiber-reinforced polyetheretherketone (PEEK).
The present work focuses on the production of thermoplastic blends of polyphenylene sulfide (PPS) and polyethersulfone (PES) and their further processing into high-performance FRPs in the autoclave process. It was shown that the mechanical and thermomechanical properties of both polymers can be transferred locally and globally into the fiber composite structures. The transfer into FRP structures depends on the local arrangement of the PES phase dispersed in the PPS. Estimates of the fiber wetting by the polymer phases as well as fractography made clear that the property transfer into the composite can be controlled via interfacial interactions. Furthermore, the interactions of the polymer phases led to a shear-thinning (structurally viscous) behavior of the blends, which manifested itself in a markedly more elastic flow behavior of the matrix mixture. By means of surface-energy analyses of fibers and polymer melts, the competing affinities of the phases were determined and discussed with the aid of models. It was found that the estimated capillary forces of the polymers can be significant enough to influence the impregnation. The use of a compatibilizer can promote blend stability, but can also set counterforces to the impregnation in motion. At the same time, phase compatibility was identified as necessary for the property transfer. The important interactions between polymers and fibers in such systems must be controlled by adapting the wetting and phase-formation mechanisms in order to enable fast processing and thus high-quality fiber composite structures.
Most modern multiprocessors offer weak memory behavior to improve their performance in terms of throughput. They allow the order of memory operations to be observed differently by each processor. This contrasts with the concept of sequential consistency (SC), which enforces a unique sequential view of all operations for all processors. Because most software has been and still is developed with SC in mind, we face a gap between the expected behavior and the actual behavior on modern architectures. The issues described only affect multithreaded software, and therefore most programmers might never face them. However, multithreaded bare-metal software like operating systems, embedded software, and real-time software has to consider memory consistency and ensure that the order of memory operations does not yield unexpected results. This software is more critical than general consumer software in terms of consequences, and therefore new methods are needed to ensure its correct behavior.
In general, a memory system is considered weak if it allows behavior that is not possible in a sequential system. For example, in SPARC processors with total store ordering (TSO) consistency, all writes might be delayed by store buffers before they are eventually processed by the main memory. This allows the issuing process to work with its own written values before other processes have observed them (i.e., to read its own value before it leaves the store buffer). Because this behavior is not possible under sequential consistency, TSO is considered weaker than SC. Programming in the context of weak memory architectures requires a proper comprehension of how the model deviates from the expected sequential behavior. For the verification of such programs, formal representations that cover the weak behavior are required in order to utilize formal verification tools.
This thesis explores different verification approaches and correspondingly fitting representations of a multitude of memory models. In a joint effort, we started with the concept of testing memory operation traces with regard to their consistency with different memory consistency models. A memory operation trace is directly derived from a program trace and consists of a sequence of read and write operations for each process. Analyzing the testing problem, we are able to prove that it is NP-complete for most memory models. In the process, a satisfiability (SAT) encoding for given problem instances was developed that can be used in reachability and robustness analysis.
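The testing problem can be made concrete with a small brute-force sketch (illustrative only; the thesis uses a SAT encoding, and the exponential search below is exactly what NP-completeness suggests cannot be avoided in general): given one operation sequence per process, search for an interleaving in which every read returns the latest written value.

```python
def sc_consistent(traces):
    """Check whether per-process traces of ('w'|'r', address, value) operations
    have a sequentially consistent witness: an interleaving respecting each
    program order in which every read returns the latest write (initially 0)."""
    n = len(traces)

    def search(pos, memory):
        if all(pos[p] == len(traces[p]) for p in range(n)):
            return True  # all operations scheduled
        for p in range(n):
            if pos[p] == len(traces[p]):
                continue
            op, addr, val = traces[p][pos[p]]
            if op == 'r' and memory.get(addr, 0) != val:
                continue  # this read cannot be scheduled next
            saved = memory.get(addr, 0)
            if op == 'w':
                memory[addr] = val
            pos[p] += 1
            if search(pos, memory):
                return True
            pos[p] -= 1  # backtrack
            memory[addr] = saved
        return False

    return search([0] * n, {})

# Store-buffering litmus test: the outcome r1 = r2 = 0 is observable under
# TSO (both reads overtake the buffered writes) but has no SC interleaving.
sb = [[('w', 'x', 1), ('r', 'y', 0)],
      [('w', 'y', 1), ('r', 'x', 0)]]
print(sc_consistent(sb))  # -> False
```

A trace such as `[[('w','x',1)], [('r','x',1)]]` does have a witness, so the checker returns `True`; on larger traces the NP-hardness shows up as exponential backtracking.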
In order to cover all program executions instead of just a single program trace, additional representations are introduced and explored throughout this thesis. One of them is a novel approach to specifying a weak memory system using temporal logics. A set of linear temporal logic (LTL) formulas is developed that describes all properties required to restrict possible traces to those consistent with the given memory model. The resulting LTL specifications can directly be used in model checking, e.g., to check safety conditions. Unfortunately, the derived LTL specifications suffer from the state explosion problem: even small examples, like the Peterson mutual exclusion algorithm, tend to generate huge formulas and require vast amounts of memory for verification. It is therefore concluded that, with the proposed verification approach, these specifications are not well suited for the verification of real-world software. Nonetheless, they provide comprehensive and formally correct descriptions that might be used elsewhere, e.g., in programming or teaching.
Another way to represent these models is via operational semantics. In this thesis, operational semantics of weak memory models are provided in the form of reference machines that are both correct and complete with regard to the memory model specification. Operational semantics make it possible to simulate systems with weak memory models step by step. This provides an elegant way to study the effects that lead to weakly consistent behavior, while still providing a basis for formal verification. The operational models are then incorporated into verification tools for multithreaded software. These state space exploration tools proved suitable for the verification of multithreaded software in a weakly consistent memory environment. However, because not only the memory system but also the processor is expressed as operational semantics, some verification approaches will not be feasible due to the large size of the state space.
Finally, to tackle the aforementioned issue, a state transition system for parallel programs is proposed. The transition system is defined by a set of structural operational semantics (SOS) rules and a suitable memory structure that can cover multiple memory models. This makes it possible to influence the state space through smart representations and approximation approaches in future work.
Linking protistan community shifts along salinity gradients with cellular haloadaptation strategies
(2019)
Salinity is one of the most structuring environmental factors for microeukaryotic communities. Using eDNA barcoding, I detected significant shifts in microeukaryotic community composition occurring at distinct salinities between brackish and marine conditions in the Baltic Sea. Furthermore, I conducted a metadata analysis including my own and other marine and hypersaline community sequence data to confirm the existence of salinity-related transition boundaries and significant changes in alpha diversity patterns along a brackish to hypersaline gradient. One hypothesis for the formation of salinity-dependent transition boundaries between brackish and hypersaline conditions is the use of different cellular haloadaptation strategies. To test this hypothesis, I conducted metatranscriptome analyses of microeukaryotic communities along a pronounced salinity gradient (40 – 380 ‰). Clustering of functional transcripts revealed differences in metabolic properties and metabolic capacities between microeukaryotic communities at specific salinities, corresponding to the transition boundaries already observed in the taxonomic eDNA barcoding approach. In particular, microeukaryotic communities thriving at mid-hypersaline conditions (≤ 150 ‰) seem to predominantly apply the ‘low-salt – organic-solutes-in’ strategy, accumulating compatible solutes to counteract osmotic stress. Indications were found both for the intracellular synthesis of compatible solutes and for cellular transport systems. In contrast, communities of extreme-hypersaline habitats (≥ 200 ‰) may preferentially use the ‘high-salt-in’ strategy, i.e., the intracellular accumulation of inorganic ions in high concentrations, which is implied by the increased expression of Mg2+, K+ and Cl- transporters and channels.
In order to characterize the ‘low-salt – organic-solutes-in’ strategy applied by protists in more detail, I conducted a time-resolved transcriptome analysis of the heterotrophic ciliate Schmidingerothrix salinarum serving as a model organism. S. salinarum was subjected to a salt-up shock to investigate the intracellular response to osmotic stress through shifts in gene expression. After the increase of the external salinity, an increased expression of two-component signal transduction systems and MAPK cascades was observed. In an early reaction, the expression of transport mechanisms for K+, Cl- and Ca2+ increased, which may enhance the capacity for K+, Cl- and Ca2+ in the cytoplasm to compensate for a possibly harmful Na+ influx. The expression of enzymes for the synthesis of possible compatible solutes, starting with glycine betaine, followed by ectoine and later proline, could imply that the inorganic ions K+, Cl- and Ca2+ are gradually replaced by the synthesized compatible solutes. Additionally, the expressed transporters for choline (the precursor of glycine betaine) and proline could indicate an intracellular accumulation of compatible solutes to balance the external salinity. During this accumulation, the up-regulated ion export mechanisms may increase the capacity for Na+ expulsion from the cytoplasm, and ion compartmentalization between cell organelles seems to occur.
The results of my PhD project revealed first evidence at the molecular level for the salinity-dependent use of different haloadaptation strategies in microeukaryotes and significantly extend the existing knowledge about haloadaptation processes in ciliates. They provide the basis for future research, such as (comparative) transcriptome analyses of ciliates thriving in extreme-hypersaline habitats, or experiments like qRT-PCR to validate the transcriptome results.
The growing deployment of distributed generation units and the increasing number of electric vehicles confront low-voltage grids with new challenges. In addition to compliance with the permissible voltage band, generation units and new loads lead to an increasing thermal loading of the lines. Simple conventional measures such as topology changes toward meshed low-voltage grids are a first helpful and cost-effective step, but offer no fundamental protection against thermal overloading of the equipment. This thesis deals with the design of a voltage and active-power regulator for meshed low-voltage grids. The regulator measures the voltages and currents at individual measurement points of the low-voltage grid. With the help of a special characteristic-curve method, a power shift can be induced in individual network meshes, and prescribed setpoints or limit values can be maintained. This thesis presents the analytical foundations of the regulator, its hardware, and the characteristic-curve method together with the realizable control concepts. Results from simulation studies, laboratory tests, and field tests clearly demonstrate the effectiveness of the regulator and are discussed.
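The characteristic-curve ("Kennlinienverfahren") control described above can be sketched generically; the deadband, gain, and limit values below are illustrative assumptions, not the regulator's actual characteristics.

```python
def droop_setpoint(u_pu, deadband=0.03, gain=10.0, p_max=1.0):
    """Piecewise-linear voltage droop characteristic (illustrative values):
    no action inside the deadband around 1.0 p.u., a proportional
    active-power shift outside it, saturated at +/- p_max."""
    dev = u_pu - 1.0
    if abs(dev) <= deadband:
        return 0.0
    # Overvoltage -> negative (absorbing) shift, undervoltage -> positive shift.
    p = -gain * (dev - deadband if dev > 0 else dev + deadband)
    return max(-p_max, min(p_max, p))

for u in (0.95, 1.0, 1.05):
    print(u, droop_setpoint(u))
```

Such a curve maps a locally measured voltage to a power setpoint without central coordination, which is the general idea behind characteristic-based control in distribution grids.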
Anwenderunterstützung bei der Nutzung und Überprüfung von optischen 3D-Oberflächenmessgeräten
(2019)
Technical surfaces are manufactured with ever more complex three-dimensional structures in order to obtain desired functional properties. These are, however, difficult to characterize with tactile roughness measurement devices. Optical roughness measurement devices, which can capture the surface areally, are better suited for this purpose, but they differ in their properties and settings from the tactile systems known and proven in industry. This thesis therefore presents an assistance system that supports users in operating their optical roughness measurement device reliably and in conformance with the DIN EN ISO 25178 standard. The assistance system guides the user step by step through the planning of a measurement task, through the verification of the correct function of the device and of its suitability for the measurement task, and, in the last step, through the standard-compliant evaluation of the measurement to obtain the desired 3D surface texture parameters.
Die Funktion der c-di-GMP modulierenden Membranproteine NbdA und MucR in Pseudomonas aeruginosa
(2019)
NbdA and MucR are multi-domain proteins of Pseudomonas aeruginosa. Both proteins share a similar domain organization, with an N-terminal membrane-integral MHYT domain followed by a GGDEF and an EAL domain in the cytoplasm. The cytosolic domains of MucR are both active, whereas NbdA possesses, in addition to an intact EAL domain, a degenerate GGDEF domain with the motif AGDEF. According to bioinformatic predictions, the MHYT domain is thought to confer a sensory function for diatomic gases such as nitric oxide or oxygen. Phenotypic characterization of the markerless PAO1 deletion mutants \(\Delta\)nbdA, \(\Delta\)mucR and \(\Delta\)nbdA \(\Delta\)mucR showed that NbdA and MucR are not involved in NO-induced dispersion. Likewise, no NO-sensing function of the proteins could be detected in a newly established heterologous in vivo system in E. coli. Furthermore, the MHYT domain was found to have no discernible influence on the enzymatic activity of NbdA and MucR under aerobic conditions. The membrane domain therefore presumably functions as a sensor neither for oxygen nor for NO. Heterologous complementation assays demonstrated PDE activity of the full-length NbdA protein. In addition, the degenerate AGDEF domain was shown to have a regulatory effect on the EAL domain that is essential for the in vivo activity of NbdA. In vivo studies confirmed the postulated DGC activity of MucR. Furthermore, MucR was shown to be a bifunctional enzyme. Contrary to expectations, however, it appears to act as a DGC in the planktonic state and as a PDE in the biofilm.
A further aspect of this work was the characterization of the homologous overexpression of nbdA in P. aeruginosa, which yielded partly unexpected phenotypes. Homologous overproduction of an inactive NbdA variant revealed that the inhibition of motility occurs independently of NbdA activity. Mass-spectrometric analyses indicated that NbdA hydrolyzes c-di-GMP locally. These results imply that NbdA is a trigger PDE whose primary function is the regulation of other macromolecular targets. In Pseudomonas fluorescens Pf0-1, the NbdA homolog Pfl01_1252 is known to interact with the homologs of MucR (Pfl01_2525) and SadC (Pfl01_4451). Results of an earlier study suggest an interaction of NbdA and SadC in P. aeruginosa as well. It is therefore conceivable that NbdA resides in the same network as MucR and SadC and regulates their activity.
The systems in industrial automation management (IAM) are information systems. The management parts of such systems are software components that support the manufacturing processes. The operational parts control highly plug-compatible devices such as controllers, sensors, and motors. Process variability and topology variability are the two main characteristics of software families in this domain. Furthermore, three stakeholder roles -- requirements engineers, hardware-oriented engineers, and software developers -- participate in different derivation stages and have different variability concerns. In current practice, the development and reuse of such systems is costly and time-consuming due to the complexity of topology and process variability. To overcome these challenges, the goal of this thesis is to develop an approach that improves the software product derivation process for systems in industrial automation management, where different variability types are concerned in different derivation stages. Current state-of-the-art approaches commonly use general-purpose variability modeling languages to represent variability, which is not sufficient for IAM systems: process and topology variability require more user-centered modeling and representation. This insufficiency of variability modeling leads to low efficiency during the staged derivation process involving different stakeholders. Up to now, product line approaches for systematic variability modeling and realization have not been well established for such complex domains. The model-based derivation approach presented in this thesis integrates feature modeling with domain-specific models for expressing processes and topology. The multi-variability modeling framework includes the meta-models of the three variability types and their associations. The realization and implementation of the multi-variability involve the mapping and tracing of variants to their corresponding software product line assets.
Based on this foundation of multi-variability modeling and realization, a derivation infrastructure is developed that enables a semi-automated software derivation approach. It supports the configuration of the different variability types to be integrated into the staged derivation process of the involved stakeholders. The derivation approach is evaluated in an industry-grade case study of a complex software system; its feasibility is demonstrated by applying the approach in the case study. Using the approach, both the size of the reusable core assets and the automation level of derivation are significantly improved. Furthermore, semi-structured interviews with engineers in practice evaluated the usefulness and ease of use of the proposed approach. The results show a positive attitude towards applying the approach in practice, and a high potential to generalize it to other related domains.
Shared memory concurrency is the pervasive programming model for multicore architectures such as x86, Power, and ARM. Depending on the memory organization, each architecture follows a somewhat different shared memory model. All these models, however, have one common feature: they allow certain outcomes for concurrent programs that cannot be explained by interleaving execution. In addition to the complexity due to architectures, compilers like GCC and LLVM perform various program transformations, which also affect the outcomes of concurrent programs.
To be able to program these systems correctly and effectively, it is important to define a formal language-level concurrency model. For efficiency, it is important that the model is weak enough to allow various compiler optimizations on shared memory accesses as well as efficient mappings to the architectures. For programmability, the model should be strong enough to disallow bogus “out-of-thin-air” executions and provide strong guarantees for well-synchronized programs. Because of these conflicting requirements, defining such a formal model is very difficult. This is why, despite years of research, major programming languages such as C/C++ and Java do not yet have completely adequate formal models defining their concurrency semantics.
In this thesis, we address this challenge and develop a formal concurrency model that is very good both in terms of compilation efficiency and programmability. Unlike most previous approaches, which were defined either operationally or axiomatically on single executions, our formal model is based on event structures, which represent multiple program executions and thus give us more structure to define the semantics of concurrency.
In more detail, our formalization has two variants: the weaker version, WEAKEST, and the stronger version, WEAKESTMO. The WEAKEST model simulates the promising semantics proposed by Kang et al., while WEAKESTMO is incomparable to the promising semantics. Moreover, WEAKESTMO discards certain questionable behaviors allowed by the promising semantics. We show that the proposed WEAKESTMO model resolves the out-of-thin-air problem, provides standard data-race-freedom (DRF) guarantees, allows the desirable optimizations, and can be mapped to architectures like x86, PowerPC, and ARMv7. Additionally, our models are flexible enough to leverage existing results from the literature to establish DRF guarantees and the correctness of compilation.
In addition, in order to ensure the correctness of compilation by a major compiler, we developed a translation validator targeting LLVM’s “opt” transformations of concurrent C/C++ programs. Using the validator, we identified a few subtle compilation bugs, which were reported and fixed. Additionally, we observed that LLVM’s concurrency semantics differs from that of C11: there are transformations which are justified in C11 but not in LLVM, and vice versa. Considering these subtle aspects of LLVM concurrency, we formalized a fragment of LLVM’s concurrency semantics and integrated it into our WEAKESTMO model.
Various physical phenomena with sudden transients resulting in structural changes can be modeled via switched nonlinear differential algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x), \tag{DAE}
\]
where \(E_p,A_p \in \mathbb{R}^{n\times n}\), \(x\mapsto g_p(x)\) is a mapping, and \(f_p : \mathbb{R} \rightarrow \mathbb{R}^n\) for each \(p \in \{1,\cdots,P\}\), \(P\in \mathbb{N}\), and \(\sigma: \mathbb{R} \rightarrow \{1,\cdots, P\}\) is the switching signal.
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between solutions of the above (DAE) and solutions of related partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a distributional solution framework. A main goal of this dissertation is to contribute to Task 1 for the nonlinear case \(g(x) \not \equiv 0\); contributions to Task 2 are also given for switched nonlinear DAEs arising in the modeling of sudden transients in water distribution networks. In addition, this thesis contains the following further contributions:
The notion of structured switched nonlinear DAEs is introduced, allowing also non-regular distributions as solutions. This extends a previous framework that allowed only piecewise-smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of the solution within the space of piecewise-smooth distributions. The main condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
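For illustration (a textbook example, not taken from the thesis): regularity of a matrix pair \((E,A)\) means that \(\det(sE-A)\) does not vanish identically as a polynomial in \(s\); for instance,
\[
E=\begin{pmatrix}1&0\\0&0\end{pmatrix},\quad A=\begin{pmatrix}0&1\\1&0\end{pmatrix}
\quad\Rightarrow\quad
\det(sE-A)=\det\begin{pmatrix}s&-1\\-1&0\end{pmatrix}=-1\not\equiv 0,
\]
so this pair is regular even though \(E\) itself is singular.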
Another contribution is the introduction of these switched nonlinear DAEs as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
In the course of the investigations for this dissertation, heterocyclic aromatic ligands known from the literature as well as new ones were to be synthesized. These ligands serve to complex two metals in close proximity to one another, paving the way for the study of cooperative effects. The present dissertation is divided into three topics, in which the synthesis and characterization of three different ligand systems (A: 4,6-dimethyl-2-(4H-1,2,4-triazol-4-yl)pyrimidine, B: derivatized dipyrimidinyl ligands, C: 2-(pyrimidin-2-yl)quinoline) and their complexation with transition metals of groups VI and VIII-XI are each discussed separately.
4,6-Dimethyl-2-(4H-1,2,4-triazol-4-yl)pyrimidine cannot be methylated twice at the triazole fragment, presumably for electronic reasons. This double methylation would, however, be required to generate N-heterocyclic dicarbenes; access to dinuclear complexes via this route was therefore not possible. First indications of a renewed methylation of a triazolylidene group of a mono-NHC complex bearing this ligand were found, however. In addition, an iridium- and a rhodium-based NHC complex were synthesized that catalyze the transfer hydrogenation of ketones with isopropanol as the hydrogen source. With derivatized dipyrimidinyl ligands, roll-over cyclometalated complexes were successfully generated that allow access to homo- and heterodinuclear complexes. Cooperative effects of various dinuclear complexes were detected by means of UV/Vis spectroscopy; the interpretation of these effects is the subject of ongoing and future work. A general problem was the low solubility of many of the metal complexes investigated in this work, which rendered a number of further dinuclear complexes inaccessible that would have allowed a deeper understanding of cooperative effects. The ligand 2-(pyrimidin-2-yl)quinoline was to be reacted with a literature-known derivative of fac-Ir(ppy)3 in a roll-over cyclometalation, in order to bind a second metal at its free N,N-coordination pocket in the next step. This, too, would have permitted the study of cooperative effects. The roll-over cyclometalated compound was not obtained, however; instead, an N,N-coordinated complex formed that did not rearrange thermally to the C,N coordination. In the future, further sterically demanding groups must be introduced on this ligand to drive the roll-over process.
Topology-Based Characterization and Visual Analysis of Feature Evolution in Large-Scale Simulations
(2019)
This manuscript presents a topology-based analysis and visualization framework that enables the effective exploration of feature evolution in large-scale simulations. Such simulations pose additional challenges to the already complex task of feature tracking and visualization, since the vast number of features and the size of the simulation data make it infeasible to naively identify, track, analyze, render, store, and interact with the data. The presented methodology addresses these issues via three core contributions. First, the manuscript defines a novel topological abstraction, called the Nested Tracking Graph (NTG), that records the temporal evolution of features that exhibit a nesting hierarchy, such as superlevel set components for multiple levels, or filtered features across multiple thresholds. In contrast to common tracking graphs, which are only capable of describing feature evolution at one hierarchy level, NTGs effectively summarize the evolution across all hierarchy levels in one compact visualization. The second core contribution is a view-approximation oriented image database generation approach (VOIDGA) that stores, at simulation runtime, a reduced set of feature images. Instead of storing the features themselves---which is often infeasible due to bandwidth constraints---the images of these databases can be used to approximate the depicted features from any view angle within an acceptable visual error, which requires far less disk space and only introduces a negligible overhead. The final core contribution combines these approaches into a methodology that stores in situ the least amount of information necessary to support flexible post hoc analysis utilizing NTGs and view approximation techniques.
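The nesting idea behind NTGs can be illustrated on a 1D scalar field: components of superlevel sets at a higher threshold always lie inside components at a lower threshold, and an NTG tracks how both evolve over time. The following toy sketch (purely illustrative; it is not the framework's implementation) extracts the per-threshold components for one time step:

```python
def superlevel_components(values, threshold):
    """Maximal index runs with values[i] >= threshold, i.e. the connected
    components of the superlevel set of a 1D scalar field."""
    components, current = [], []
    for i, v in enumerate(values):
        if v >= threshold:
            current.append(i)
        elif current:
            components.append(tuple(current))
            current = []
    if current:
        components.append(tuple(current))
    return components

field = [0, 3, 1, 4, 4, 1, 0, 2, 0]
for t in (1, 2, 4):  # components at higher thresholds nest inside lower ones
    print(t, superlevel_components(field, t))
```

Each component at threshold 2 is contained in exactly one component at threshold 1; linking these containments across thresholds and across time steps yields the nesting hierarchy that an NTG summarizes.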
Synapses are the fundamental structures that regulate the functionality of the neural circuit. The ability of the synapse to modulate its structure and function rapidly in response to various sensory inputs enables the nervous system to incorporate new adaptations and behaviors in the animal. Synapses are very dynamic throughout the life of the animal, starting from early development. Continuous events of synapse formation and elimination, and of activation and inhibition of synaptic function, are observed at almost all synapses. These processes occur at high speed and require controlled cellular mechanisms. An imbalance in these processes results in a defective nervous system and has been reported in many neurological disorders. Thus, it is important to understand the mechanisms that regulate synapse development, maintenance, and function.
Kinases and phosphatases are the key regulators of cellular mechanisms. Understanding the function of these molecules in the neuron will shed light on the molecular mechanisms of synaptic plasticity. Using the Drosophila melanogaster larval neuromuscular junction as a model, Bulat et al. (2014) performed a large RNAi-based screen targeting the kinome and phosphatome of Drosophila to identify the essential kinases and phosphatases, and found Myeloid leukemia factor-1 adaptor molecule (Madm) and Protein phosphatase 4 (PP4) to be novel regulators of synapse development and maintenance. The function of these molecules in the nervous system had not been reported, and hence I investigated the roles of Madm and PP4 in the regulation of synapse development, maintenance, and function.
Myeloid leukemia factor-1 adaptor molecule (Madm), a ubiquitously expressed pseudokinase, essentially functions to regulate synaptic growth, stability, and function. Using a combination of genetics and high-throughput imaging, I could demonstrate that Madm regulates synaptic growth and stability from the presynapse and synaptic organization from the postsynapse. I could also demonstrate that Madm functions in association with the mTOR pathway to regulate synapse growth, acting downstream of 4E-BP. In addition, using electrophysiology, we could demonstrate that Madm is essential for basic synaptic transmission, with an additive function in retrograde synaptic potentiation. In summary, I could demonstrate that Madm is a novel regulator of synaptic development, maintenance, and function.
Protein phosphatase 4 (PP4), a ubiquitously expressed protein phosphatase, is involved in the regulation of multiple aspects of the nervous system. I could demonstrate that PP4 is essential for the development of the nervous system and for metamorphosis. Using genetic and imaging analyses, I could demonstrate that loss of PP4 results in abnormal morphology of cell organelles. In addition, I could show that loss of PP4 results in defective brain development with poorly developed structures.
Altogether, in this study, I could demonstrate the importance of two novel molecules, the pseudokinase Madm and the protein phosphatase PP4, in regulating distinct aspects of the neuron.
Destructive diseases of the lung, such as lung cancer or fibrosis, are still often lethal. Likewise, in the case of liver fibrosis, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro-computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different stages of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resection of part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options for diseases like lung cancer.
In the case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, both the process of fibrosis and compensatory lung growth can be assessed by examining the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
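To illustrate the two summary statistics named above, the following sketch computes a discrete granulometry curve via successive morphological openings and an empirical spherical contact distribution via a Euclidean distance transform. For brevity it works in 2D rather than 3D, and the function names, radii, and toy image are illustrative assumptions, not taken from the thesis.

```python
# Hedged 2D sketch of granulometry and the spherical contact distribution.
# Radii, names, and the toy image are illustrative, not thesis data.
import numpy as np
from scipy import ndimage

def granulometry(binary_img, max_radius):
    """Fraction of foreground surviving an opening with a disk of radius r."""
    total = binary_img.sum()
    curve = []
    for r in range(1, max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disk = x * x + y * y <= r * r
        opened = ndimage.binary_opening(binary_img, structure=disk)
        curve.append(opened.sum() / total)
    return np.array(curve)

def spherical_contact(binary_img, radii):
    """Empirical P(distance from a random background point to the structure <= r)."""
    dist = ndimage.distance_transform_edt(~binary_img)  # distance to foreground
    outside = dist[~binary_img]
    return np.array([(outside <= r).mean() for r in radii])

# toy "vessel cross-section": a solid 20x20 block inside a 40x40 image
img = np.zeros((40, 40), dtype=bool)
img[10:30, 10:30] = True
g = granulometry(img, max_radius=12)     # drops to 0 once no disk fits inside
h = spherical_contact(img, [1, 5, 15])   # reaches 1 once every point is covered
```

The granulometry curve characterizes the vessel diameter distribution (structures vanish once the opening disk exceeds their width), while the spherical contact distribution characterizes inter-vessel distances.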
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth, and hence is of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
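The thesis algorithm itself is not reproduced here, but the core geometric idea can be conveyed with a much-simplified 2D sketch: in a single cross-section, an intussusceptive pillar appears as a small hole fully enclosed by the segmented vessel phase, so hole filling and connected-component labeling yield candidate pillar cross-sections. All names and the toy mask are illustrative assumptions.

```python
# Simplified 2D illustration (not the thesis algorithm): enclosed holes in a
# segmented vessel mask are candidate intussusceptive-pillar cross-sections.
import numpy as np
from scipy import ndimage

def candidate_pillars_2d(vessel_mask):
    """Label background regions fully enclosed by the vessel phase."""
    filled = ndimage.binary_fill_holes(vessel_mask)
    holes = filled & ~vessel_mask          # background enclosed by vessel tissue
    labels, n = ndimage.label(holes)
    return labels, n

# toy mask: a solid block of "vessel" tissue with one enclosed hole
mask = np.zeros((21, 21), dtype=bool)
mask[5:16, 5:16] = True
mask[9:12, 9:12] = False                   # the enclosed hole
labels, n = candidate_pillars_2d(mask)     # n == 1
```

In 3D, a torus-shaped pillar corresponds to a tunnel rather than a cavity, so the real detection problem is topologically harder than this 2D analogy suggests.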
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state of the art in medical studies. Although certain 3D features cannot be replaced by 2D features without losing information, our results could be used to identify 2D features that approximate the 3D findings reasonably well.
Visualization is vital to the scientific discovery process.
An interactive high-fidelity rendering provides accelerated insight into complex structures, models and relationships.
However, the efficient mapping of visualization tasks to high performance architectures is often difficult, being subject to a challenging mixture of hardware and software architectural complexities in combination with domain-specific hurdles.
These difficulties are often exacerbated on heterogeneous architectures.
In this thesis, a variety of ray casting-based techniques are developed and investigated with respect to a more efficient usage of heterogeneous HPC systems for distributed visualization, addressing challenges in mesh-free rendering, in-situ compression, task-based workload formulation, and remote visualization at large scale.
A novel direct ray tracing scheme for on-the-fly free-surface reconstruction of particle-based simulations, using an extended anisotropic kernel model, is investigated on different state-of-the-art cluster setups.
The versatile system renders up to 170 million particles on 32 distributed compute nodes at close to interactive frame rates at 4K resolution with ambient occlusion.
To address the widening gap between high computational throughput and prohibitively slow I/O subsystems, in situ topological contour tree analysis is combined with a compact image-based data representation to provide an effective and easy-to-control trade-off between storage overhead and visualization fidelity.
Experiments show significant reductions in storage requirements, while preserving flexibility for exploration and analysis.
Driven by an increasingly heterogeneous system landscape, a flexible distributed direct volume rendering and hybrid compositing framework is presented.
Based on a task-based dynamic runtime environment, it enables adaptable performance-oriented deployment on various platform configurations.
Comprehensive benchmarks with respect to task granularity and scaling are conducted to verify the characteristics and potential of the novel task-based system design.
A core challenge of HPC visualization is the physical separation of visualization resources and end-users.
Using more tiles than previously thought reasonable, a distributed, low-latency multi-tile streaming system is demonstrated, being able to sustain a stable 80 Hz when streaming up to 256 synchronized 3840x2160 tiles and achieve 365 Hz at 3840x2160 for sort-first compositing over the internet, thereby enabling lightweight visualization clients and leaving all the heavy lifting to the remote supercomputer.
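To put the quoted streaming figures in perspective, a back-of-the-envelope calculation of the raw pixel rate is instructive. The assumption of uncompressed 24-bit RGB is mine for illustration only; the actual system necessarily relies on compression, so this is an upper bound on raw pixel data, not actual network traffic.

```python
# Raw pixel throughput for 256 synchronized 3840x2160 tiles at 80 Hz,
# assuming uncompressed 24-bit RGB (illustrative upper bound only).
tiles = 256
w, h = 3840, 2160
fps = 80

pixels_per_second = tiles * w * h * fps
raw_gbit_per_second = pixels_per_second * 24 / 1e9  # ~4 Tbit/s uncompressed
```

The roughly 4 Tbit/s uncompressed figure underlines why compression and careful synchronization are essential at this tile count.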
The aging resistance and safety of adhesively bonded joints are of great importance in industrial applications. The probability of failure of a bonded joint after a given time can be influenced by various aging effects, such as temperature and humidity. Correlating the results of accelerated laboratory aging tests with the long-term behavior of the joints under service conditions often remains an unsolved challenge. In the present work, computer-based methods for nonlinear regression analysis, reliability estimation, and safety prediction were applied to experimental data generated by accelerated aging of lap-shear specimens and bulk shoulder specimens. The aging behavior was modeled with combined functions based on the EYRING and PECK models. Both modeling approaches proved suitable for describing the experimental data. The safety prediction, based on the probability of failure and the safety index β, was, however, carried out using the EYRING model, since it describes the experimental data of the reference condition more conservatively.