Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like \( \textit{Computer Vision} \) (CV), \( \textit{Natural Language Processing} \) (NLP), and \( \textit{Reinforcement Learning} \) (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are studying the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points for the first time.
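To make the projection step concrete, the following minimal sketch computes a trajectory on a toy quadratic loss and projects it onto the plane spanned by the two leading Hessian eigenvectors (the fixed Hessian, step size, and all names are assumptions of this illustration; the thesis obtains eigenvectors of real networks via the stochastic Lanczos method):

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T H w with a fixed Hessian H; the thesis
# works with real networks, this only illustrates the projection idea.
H = np.diag([10.0, 1.0, 0.1])
eigvals, eigvecs = np.linalg.eigh(H)
v1, v2 = eigvecs[:, -1], eigvecs[:, -2]   # eigenvectors of the two largest eigenvalues

w = np.array([1.0, 1.0, 1.0])             # current parameters
trajectory = [w.copy()]
for _ in range(50):                        # plain gradient descent
    w -= 0.05 * H @ w
    trajectory.append(w.copy())

# Express each iterate in the coordinates of the (v1, v2) plane for plotting.
coords = np.array([[t @ v1, t @ v2] for t in trajectory])
```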
Secondly, this thesis presents a regularization method for \( \textit{Generative Adversarial Networks} \) (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
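The gradient modification itself can be sketched as follows (the explicit Hessian is an assumption for this toy illustration; at GAN scale the top eigenvector would be computed matrix-free, e.g. by power iteration on Hessian-vector products):

```python
import numpy as np

# Sketch of the second-order gradient modification: remove the component of the
# gradient along the top Hessian eigenvector, so the optimizer is steered away
# from the steepest minima. Names are illustrative assumptions of this sketch.
def modified_gradient(grad, hessian):
    eigvals, eigvecs = np.linalg.eigh(hessian)
    v_top = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
    return grad - (grad @ v_top) * v_top     # subtract its direction from the gradient
```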
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
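To illustrate the proximal-gradient idea, the sketch below performs one step for an \( \ell_1 \) regularizer, whose proximal operator has the well-known closed-form soft-thresholding solution (a minimal illustration, not the exact ProxSGD update):

```python
import numpy as np

# One proximal-gradient step: gradient step on the smooth loss, followed by the
# closed-form proximal operator of an l1 regularizer (soft-thresholding).
def prox_l1(w, tau):
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_sgd_step(w, grad, lr, lam):
    return prox_l1(w - lr * grad, lr * lam)

w = np.array([0.5, -0.02, 1.3])
w = prox_sgd_step(w, grad=np.array([0.1, 0.0, -0.2]), lr=0.1, lam=0.5)
# Small weights are driven exactly to zero, which yields sparse networks.
```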
Lastly, this thesis unifies sparsity and \( \textit{neural architecture search} \) (NAS) through the framework of group sparsity. Group sparsity is achieved through \( \ell_{2,1} \)-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
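A minimal sketch of the group-sparsity machinery follows (group shapes and the threshold are illustrative assumptions; in the thesis a group would collect, e.g., all weights of a filter or of a candidate operation):

```python
import numpy as np

# The l_{2,1} penalty sums the l2 norms of the groups; its group-wise proximal
# operator either shrinks a group or zeroes it out entirely, so pruning
# decisions fall out of training directly.
def l21_penalty(groups):
    return sum(np.linalg.norm(g) for g in groups)

def prox_l21(group, tau):
    norm = np.linalg.norm(group)
    if norm <= tau:
        return np.zeros_like(group)          # whole filter/operation is pruned
    return (1.0 - tau / norm) * group

filt = np.array([0.01, -0.02, 0.015])
print(prox_l21(filt, tau=0.1))               # -> zeros: the whole group is pruned
```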
Formaldehyde is an important intermediate in the chemical industry. In technical processes, formaldehyde is used in aqueous or methanolic solutions, in which it is bound in oligomers that are formed in reversible reactions. These reactions, and also the vapor-liquid equilibria of mixtures containing formaldehyde, water, and methanol, have been thoroughly studied in the literature. This is, however, not the case for the solid-liquid equilibria of these mixtures, even though the precipitation of solids poses important problems in many technical processes. Therefore, in the present thesis, a fundamental study on the formation of solid phases in the system (formaldehyde + water + methanol) was carried out. Based on the experiments, a physico-chemical model of the solid-liquid equilibrium was developed. Furthermore, kinetic effects, which are important in practice, were also described. The results make it possible, for the first time, to understand solid formation in these mixtures, which was previously considered hard to predict.
The studies on the solid formation in formaldehyde-containing systems were carried out as part of a project dealing with the production of poly(oxymethylene) dimethyl ethers (OME). OME are formaldehyde-based synthetic fuels that show cleaner combustion than fossil diesel. Different aspects of OME production were studied. First, a conceptual design for an OME production process based on dimethyl ether (DME) was developed based on process simulation. This study revealed that the DME route is attractive in principle. However, basic data on the formation of OME from DME were missing and had to be estimated for the conceptual design study. Therefore, in a second step, an experimental study on the formation of OME from DME was carried out. In this reaction, trioxane, a cyclic trimer of formaldehyde, is used as a water-free formaldehyde source. Trioxane is currently produced from aqueous formaldehyde solution in energy-intensive processes. Therefore, a new trioxane production process was developed in which trioxane is obtained from a crystallization step. In process simulations, the new process was compared to the best previously available process and was found to be promising.
While OME are excellent synthetic fuels, it is also attractive to use them in blends with hydrogenated vegetable oil (HVO), which is available on a large scale. However, blends of OME and HVO that are initially homogeneous tend to demix after a while in technical applications. This phenomenon was previously poorly understood. Therefore, in this work, liquid-liquid equilibria in mixtures of individual components of the two fuels in combination with water were systematically studied and a corresponding model was developed.
In recent years, the formal methods community has made significant progress towards the development of industrial-strength static analysis tools that can check properties of real-world production code. Such tools can help developers detect potential bugs and security vulnerabilities in critical software before deployment. While the potential benefits of static analysis tools are clear, their usability and effectiveness in mainstream software development workflows often come into question, which can prevent software developers from using these tools to their full potential. In this dissertation, we focus on two major challenges that can limit their incorporation into software development workflows.
The first challenge is unintentional unsoundness. Static program analyzers are complicated tools, implementing sophisticated algorithms and performance heuristics. This makes them highly susceptible to undetected unintentional soundness issues. Such issues in program analyzers can cause false negatives and have disastrous consequences, e.g., when analyzing safety-critical software. In this dissertation, we present novel techniques to detect unintentional unsoundness bugs in two foundational classes of program analysis tools, namely SMT solvers and Datalog engines. These tools are used extensively by the formal methods community, for instance in software verification, systematic testing, and program synthesis. We implemented these techniques as easy-to-use open source tools that are publicly available on GitHub. With the proposed techniques, we were able to detect more than 55 unique and confirmed critical soundness bugs in popular and widely used SMT solvers and Datalog engines in only a few months of testing.
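The core of such testing can be sketched as differential testing: run the same input through several solvers and flag disagreements. The snippet below is a minimal illustration that assumes z3 and cvc5 binaries on the PATH; the thesis tools generate and mutate test cases far more systematically:

```python
import subprocess, tempfile, os

# Differential testing sketch: feed one SMT-LIB formula to two solvers and
# flag disagreement on sat/unsat as a potential soundness bug.
FORMULA = """
(set-logic QF_LIA)
(declare-const x Int)
(assert (> x 0))
(assert (< x 2))
(check-sat)
"""

def run_solver(cmd, path):
    out = subprocess.run([cmd, path], capture_output=True, text=True, timeout=30)
    return out.stdout.strip().splitlines()[-1]   # 'sat' or 'unsat'

with tempfile.NamedTemporaryFile("w", suffix=".smt2", delete=False) as f:
    f.write(FORMULA)
    path = f.name

results = {s: run_solver(s, path) for s in ("z3", "cvc5")}
os.unlink(path)
if len(set(results.values())) > 1:
    print("potential soundness bug:", results)
```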
The second challenge is finding the right balance between soundness, precision, and performance. In an ideal world, a static analyzer should be as precise as possible while maintaining soundness and being sufficiently fast. However, to overcome undecidability issues, these tools have to employ a variety of techniques to be practical, for example compromising on the soundness of the analysis or approximating code behavior. Static analyzers are therefore not trivial to integrate into arbitrary usage scenarios with different program sizes, resource constraints, and SLAs. Most of the time, these tools also do not scale to large industrial code bases containing millions of lines of code. This makes it extremely challenging to get the most out of these analyzers and integrate them into everyday development activities, especially for average software development teams with little to no knowledge or understanding of advanced static analysis techniques. In this dissertation, we present an approach to automatically tailor an abstract interpreter to the code under analysis and any given resource constraints. We implemented our technique as an open source framework, which is publicly available on GitHub. The second contribution of this dissertation in this challenge area is a technique to horizontally scale analysis tools in cloud-based static analysis platforms by splitting the input to the analyzer into partitions and analyzing the partitions independently. The technique was developed in collaboration with Amazon Web Services and is now being used in production in their CodeGuru service.
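The partitioning idea can be sketched as follows (the round-robin split and the analyze placeholder are assumptions of this sketch, not an actual CodeGuru API):

```python
from concurrent.futures import ProcessPoolExecutor

# Horizontal-scaling sketch: split the input files into k partitions and
# analyze each partition independently on its own worker.
def partition(files, k):
    return [files[i::k] for i in range(k)]      # simple round-robin split

def analyze(files):
    return [f"finding in {f}" for f in files]   # placeholder for the real analyzer

def analyze_partitioned(files, k):
    with ProcessPoolExecutor(max_workers=k) as pool:
        parts = pool.map(analyze, partition(files, k))
    return [finding for part in parts for finding in part]

if __name__ == "__main__":
    print(analyze_partitioned([f"file{i}.java" for i in range(10)], k=4))
```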
The rising reliance on machine learning (ML) models has become a growing concern for stakeholders who depend on automated decisions. In today's world, black-box solutions (in particular deep neural networks) are continuously being deployed in more and more high-stakes scenarios like medical diagnosis or autonomous vehicles. Unfortunately, when these opaque models make predictions that do not align with our expectations, finding a valid justification is simply not possible.
Explainable Artificial Intelligence (XAI) has emerged in response to our need for finding reasons that justify what a machine sees, but we don't. However, contributions in this field are mostly centered around local structures such as individual neurons or single input samples. Global characteristics that govern the behavior of a model are still poorly understood or have not been explored yet. An aggravating factor is the lack of a standard terminology to contextualize and compare contributions in this field. Such lack of consensus is preventing the ML community from ultimately moving away from black boxes and starting to create systematic methods for designing models that are interpretable by design.
So, what are the global patterns that govern the behavior of modern neural networks, and what can we do to make these models more interpretable from the start?
This thesis delves into both issues, unveiling patterns about existing models and establishing strategies that lead to more interpretable architectures. The patterns studied include biases arising from imbalanced datasets, quantification of model capacity, and robustness against adversarial attacks. When looking for new models that are interpretable by design, this work proposes a strategy to add more structure to neural networks, based on auxiliary tasks that are semantically related to the main objective. This strategy is the result of applying a novel theoretical framework proposed as part of this work. The XAI framework is meant to contextualize and compare contributions in XAI by providing actionable definitions for terms like "explanation" and "interpretation."
Altogether, these contributions address the pressing demand to understand more about the global behavior of modern deep neural networks. More importantly, they can be used as a blueprint for designing novel, more interpretable architectures. By tackling issues from the present and the future of XAI, the results of this work are a firm step towards more interpretable models for computer vision.
This thesis is primarily motivated by a project with Deutsche Bahn about offer preparation in rail freight transport. At its core, a customer should be offered three train paths to choose from in response to a freight train request. As part of this cooperation with DB Netz AG, we investigated how to compute these train paths efficiently. They should all be "good" but also "as different as possible". We solved this practical problem using combinatorial optimization techniques.
At the beginning of this thesis, we describe the practical aspects of our research collaboration. The more theoretical problems, which we consider afterwards, are divided into two parts.
In Part I, we deal with a dual pair of problems on directed graphs with two designated end-vertices. The Almost Disjoint Paths (ADP) problem asks for a maximum number of paths between the end-vertices any two of which have at most one arc in common. In comparison, for the Separating by Forbidden Pairs (SFP) problem, we have to select as few arc pairs as possible such that every path between the end-vertices contains both arcs of a chosen pair. The main results of this more theoretical part are the classifications of ADP as NP-complete and of SFP as \(\Sigma_2^p\)-complete.
In Part II, we address a simplified version of the practical project: the Fastest Path with Time Profiles and Waiting (FPTPW) problem. In a directed acyclic graph with durations on the arcs and time windows at the vertices, we search for a fastest path from a source to a target vertex. We are only allowed to be at a vertex within its time windows, and we are only allowed to wait at specified vertices. After introducing departure-duration functions, we develop solution algorithms based on them. We consider special cases that significantly reduce the complexity or are of practical relevance. Furthermore, we show that even this simplified problem is NP-hard in general and investigate its complexity status more closely.
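To illustrate the flavor of the problem, the following is a simplified earliest-arrival sketch on a DAG with a single time window per vertex (all names are illustrative; the thesis allows multiple windows per vertex and works with departure-duration functions instead):

```python
import math

# Earliest feasible arrival times on a DAG with one time window per vertex;
# waiting is only allowed at designated vertices (here: upon arriving early).
def earliest_arrival(topo_order, arcs, window, can_wait, source):
    arrive = {v: math.inf for v in topo_order}
    arrive[source] = window[source][0]          # start when the source opens
    for u in topo_order:
        t = arrive[u]
        if t > window[u][1]:                    # window missed or unreachable
            continue
        for v, duration in arcs.get(u, []):
            arr = t + duration
            lo, hi = window[v]
            if arr < lo:                        # arriving too early at v
                if not can_wait.get(v, False):
                    continue                    # waiting not allowed at v
                arr = lo                        # wait until v's window opens
            if arr <= hi:
                arrive[v] = min(arrive[v], arr)
    return arrive

order = ["s", "a", "t"]
arcs = {"s": [("a", 2)], "a": [("t", 1)]}
window = {"s": (0, 10), "a": (5, 8), "t": (0, 10)}
print(earliest_arrival(order, arcs, window, {"a": True}, "s"))  # t reached at 6
```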
Processing data streams is a classical and ubiquitous problem.
A query is registered against a potentially endless data stream and continuously delivers results as tuples stream in.
Modern stream processing systems allow users to express queries in different ways.
However, when a query involves joins between multiple input streams, the order of these joins is not transparently optimized.
In this thesis, we explore ways to optimize multi-way theta joins, where the join predicates are not limited to equality and multiple inputs are referenced.
We put forward a novel operator, MultiStream, which joins multiple input streams using iterative probing while keeping materialization effort minimal.
The order in which tuples are sent inside a MultiStream operator is optimized using a cost-based model.
Further, a query can be answered using a multi-way tree comprising multiple MultiStream operators, where each inner operator represents a materialized intermediate result.
We integrate equi-joins in MultiStream to reduce communication, such that mixed queries of theta and equality predicates are supported.
Streaming queries are long-running, and thus multiple queries might be registered with the system at the same time.
Hence, we investigate the joint answering of multiple multi-way join queries and optimize the global ordering using integer linear programming.
All these approaches are implemented in CLASH, a system for generating Apache Storm topologies, including runtime components, that enables users to pose queries in a declarative way and lets the system craft a suitable topology.
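As a minimal illustration of the iterative-probing idea behind MultiStream (stream names, the in-memory state, and the example predicate are assumptions of this sketch; the actual operator additionally optimizes the probing order with its cost model and distributes state across a topology):

```python
from itertools import product

# Each arriving tuple is added to its stream's state and probed against the
# materialized state of all other inputs; matching combinations are emitted.
state = {"R": [], "S": [], "T": []}

def on_tuple(stream, tup, predicate):
    state[stream].append(tup)
    others = [name for name in state if name != stream]
    for combo in product(*(state[n] for n in others)):
        candidate = dict(zip(others, combo))
        candidate[stream] = tup
        if predicate(candidate):
            print("join result:", candidate)

# Example theta predicate over three streams: R.a < S.b and S.b != T.c.
pred = lambda c: c["R"]["a"] < c["S"]["b"] and c["S"]["b"] != c["T"]["c"]
on_tuple("R", {"a": 1}, pred)
on_tuple("S", {"b": 2}, pred)
on_tuple("T", {"c": 3}, pred)   # emits the first join result
```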
This thesis focuses on the development and analysis of Stochastic Model Predictive Control (SMPC) strategies for both distributed stochastic systems and centralized stochastic systems with partially known distributional information. The first part deals with the development of distributed SMPC schemes that can be synthesized and operated in a fully distributed manner, establishing rigorous theoretical guarantees such as recursive feasibility, stability, and closed-loop chance constraint satisfaction. We study several control problems of practical interest, such as the output-feedback regulation problem and the state-feedback tracking problem under additive stochastic noise, as well as the regulation problem under multiplicative noise. In the second part of this thesis, a novel research topic known as distributionally robust MPC (DR-MPC) is explored, which enhances the applicability of SMPC to real-world problems. DR-MPC is advantageous as it necessitates only partial knowledge in the form of samples of the uncertainty, which is usually available in practical scenarios, whereas SMPC mandates exact knowledge of the (unknown) distributional information. We investigate different so-called ambiguity sets to immunize the DR-MPC optimization problem against sampling inaccuracies, leading to tractable optimization problems with strong theoretical guarantees. Altogether, both parts provide rigorous theoretical guarantees together with practical design procedures demonstrated by numerical examples, which are the main contributions of this thesis.
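For context, the closed-loop chance constraints mentioned above are typically of the standard form (a textbook formulation, not a formula quoted from this thesis)
\[\Pr\left[x_t\in\mathcal X\right]\ \ge\ 1-\varepsilon\quad\text{for all }t\in\mathbb N,\]
i.e., the closed-loop state must remain in the constraint set \(\mathcal X\) with probability at least \(1-\varepsilon\) despite the stochastic noise.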
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
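For illustration, the definition above can be transcribed numerically as follows (the \(i=j\) terms only contribute the constant 1, so the transcription is direct):

```python
import numpy as np

# Direct numerical transcription of the tent function chi^(d) defined above;
# shifted and rescaled basis functions are obtained as chi((x / r) - alpha).
def chi(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.shape[0]
    candidates = [1 + x[i] - x[j] for i in range(d) for j in range(d)]
    candidates += [1 + x[i] for i in range(d)] + [1 - x[i] for i in range(d)]
    return max(min(candidates), 0.0)

assert chi([0.0]) == 1.0   # chi^(1) peaks at the origin
assert chi([1.0]) == 0.0   # and vanishes outside (-1, 1)
```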
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves, as a preliminary step, restricting those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\tfrac{\,\cdot\,}{r}-\alpha)\,|\,\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence, \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\), can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{d x}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let $E$ be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space $E$ and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
Regulation of sucrose transport between source and sink tissues is critical for plant development and properties. In cells, dynamic vacuolar sugar homeostasis is maintained by the controlled regulation of the activities of sugar importers and exporters residing in the tonoplast. We show here that the EARLY RESPONSE TO DEHYDRATION6-LIKE4 protein, the closest homolog of the proton/glucose symporter ERDL6, resides in the vacuolar membrane. We present both molecular expression data and data derived from non-aqueous fractionation studies indicating that ERDL4 is involved in glucose and fructose allocation across the tonoplast. Surprisingly, overexpression of ERDL4 increased total sugar levels in leaves, which is due to a concomitantly induced stimulation of TST2 expression, coding for the major vacuolar sugar loader. This conclusion is supported by the observation that tst1-2 knockout lines overexpressing ERDL4 lack increased cellular sugar levels. That ERDL4 activity contributes to the coordination of cellular sugar homeostasis is further indicated by two observations: first, ERDL4 and TST genes exhibit opposite regulation during the diurnal rhythm; second, the ERDL4 gene is markedly expressed during cold acclimation, a situation in which TST activity needs to be upregulated. Moreover, ERDL4-overexpressing plants show larger rosettes and roots, delayed flowering, and increased total seed yield. In summary, we identified a novel factor influencing source-to-sink transfer of sucrose and thereby governing plant organ development.
In tribology laboratories, the management of material samples and test specimens, the planning and execution of experiments, the evaluation of test data and the long-term storage of results are critical processes. However, despite their criticality, they are carried out manually and typically at a low level of computerization and standardization. Therefore, formats for primary data and aggregated results are wildly different between laboratories, and the interoperability of research data is low. Even within laboratories, low levels of standardization, combined with ambiguous or non-unique identifiers for data files, test specimens and analysis results, greatly reduce data integrity and quality. As a consequence, productivity is low, error rates are high, and the lack or low quality of metadata causes the value of produced data to deteriorate very quickly, which makes the re-use of data, e.g. for data mining and meta studies, practically impossible.
In other fields of science, these problems are mitigated by the use of Laboratory Information Management Systems (LIMS). However, at the moment, such systems do not exist in tribological research. The main challenge for the implementation of such a system is that it requires extensive interdisciplinary knowledge from otherwise very disparate fields: tribology, data and process modelling, quality management, databases and programming. So far, existing solutions are either proprietary, very limited in their scope, or focused on merely storing aggregated results without any support for laboratory operations.
Therefore, this thesis describes the fundamentals of information technology, data modelling and programming that are required to build a LIMS for tribology laboratories. Based on an analysis of a typical workflow of a tribology laboratory, a data model for all relevant entities and processes is designed using object-relational data modelling and object-oriented programming, and a relational database is used to provide a reference implementation of such a LIMS. It provides critical functionalities like a materials database, test specimen management, the planning, execution and evaluation of friction and wear tests, automated procedures for tribometer parameterization, data transmission, storage and evaluation, and the aggregation of individual tests into test sets and projects. It improves the quality and long-term usability of data by replacing error-prone human processes with automated variants, e.g. automated collection of metadata and automated data file transmission, homogenization and storage. The usefulness of the developed LIMS is demonstrated by applying it to Transfer Film Luminance Analysis (TLA), a newly developed advanced method for the analysis of the formation and stability of transfer films and their impact on friction and wear, which produces so much data and requires so much metadata during evaluation that it can only be performed safely, quickly and reliably by integration into the presented LIMS.
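A minimal sketch of what the relational core of such a data model could look like (table and column names are illustrative assumptions, not the schema of the reference implementation):

```python
import sqlite3

# Relational core for a tribology LIMS: materials, specimens, test runs and
# measurements, linked by foreign keys so every result stays traceable.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE material (
    id       INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    supplier TEXT
);
CREATE TABLE specimen (
    id          INTEGER PRIMARY KEY,
    material_id INTEGER NOT NULL REFERENCES material(id),
    label       TEXT UNIQUE NOT NULL  -- unique identifiers avoid ambiguous names
);
CREATE TABLE test_run (
    id          INTEGER PRIMARY KEY,
    specimen_id INTEGER NOT NULL REFERENCES specimen(id),
    tribometer  TEXT,
    parameters  TEXT,                 -- serialized tribometer parameterization
    started_at  TEXT
);
CREATE TABLE measurement (
    id          INTEGER PRIMARY KEY,
    test_run_id INTEGER NOT NULL REFERENCES test_run(id),
    quantity    TEXT NOT NULL,        -- e.g. 'friction_coefficient'
    t_seconds   REAL,
    value       REAL
);
""")
```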
Human interferences within the Earth System are accelerating, leading to major impacts and feedbacks that we are just beginning to understand. Summarized under the term 'global change', these impacts put human and natural systems under ever-increasing stress and pose a threat to human well-being, particularly in the Global South. Global governance bodies have acknowledged that decisive measures have to be taken to mitigate the causes and to adapt to these new conditions. Nevertheless, neither current international nor national pledges and measures reach the effectiveness needed to sustain global human well-being under accelerating global change. On the contrary, competing interests are not only paralyzing the international debate but also playing an increasingly important role in debates over social fragmentation and societal polarization on national and local scales. This interconnectedness of the natural and the social system, and its impact on social phenomena such as cooperation and conflict, needs to be understood better in order to strengthen social resilience to future disturbances and drive societal transformation towards socially desirable futures, while at the same time avoiding path dependencies along continuing colonial continuities. As a case example, this thesis provides insights into southwestern Amazonia, where the intertwined challenges of human contribution to global change in all its dimensions, as well as human adaptation and mitigation attempts in response to the imposed changes, become especially visible. As such, southwestern Amazonia, with its high social, economic, and biological diversity, is a good example for studying the deep interrelations of humans with nature and the consequences these relations have on social cohesion amid an ecological crisis.
Therefore, this thesis takes a social-ecological perspective on conflicts and social cohesion. Social cohesion is, in a wider sense, understood as the way "members of a society, group, or organization relate to each other and work together" (Dany and Dijkzeul 2022, p. 12). Particularly in contexts of violence, conflict, and fragility, little has been investigated about the role of social cohesion in governing public goods and building resilience for (future) environmental crises. At the same time, governments and international decision-makers increasingly acknowledge the role of social cohesion, comprising both relations between social groups and relations between groups and the state, in building resilience against crises. Given the uncertainty in how natural and social systems react to certain disturbances and shocks, the governance of potential tipping points is an additional challenge for the governance of social-ecological systems (SES). Therefore, this thesis asks: "How does governance shape pathways towards cooperative or conflictive social-ecological tipping points?" The results of this thesis can be distinguished into theoretical/conceptual results and empirical results. Initial systematic literature research on the nexus of climate change, land use, and conflict revealed an extensive body of literature on direct effects, for example drought-related land use conflicts, with diverging opinions on whether or not global warming increases the risk of conflicts. Adding the perspective of indirect implications, we further identified research gaps, as well as a lack of policy recognition, concerning the negative externalities of climate mitigation and adaptation measures on land use and conflict. On a conceptual note, bringing a social cohesion perspective into the analysis is beneficial for shifting the focus from a problem-oriented perspective of vulnerabilities to global change and potentially resulting conflicts to a solution-oriented perspective of enhancing agency and resilience to strengthen collaboration. The developed Social Cohesion Conceptual Model and the related analytical framework facilitate the incorporation of societal dynamics into the analysis of SES dynamics. In addition, the elaborated Tipping Multiverse Framework took up this idea and enhanced it with a more detailed perspective on the soil ecosystem and the household livelihood system to identify entry points to potential social-ecological tipping cascades. As such, the Tipping Multiverse Framework offers two matrices that can advance the understanding of regional SES by identifying core processes, functioning, and links in each tipping element (TE), thus providing entry points to identify potential tipping cascades across SES sub-systems. The exemplified application of these two frameworks to southwestern Amazonia shows the analytical potential of both proposed frameworks in advancing the understanding of social-ecological tipping points and potential tipping cascades in a regional SES.
On an empirical note, zooming in on questions of governance by applying a political ecology lens to human security, we find that 'glocal' resource governance often reproduces, amplifies, or creates power imbalances and divisions on and between different scales. Our results show that the winners of resource extraction are mostly found at the national and international scale, while local communities receive little benefit and are left vulnerable to externalities. Hence, our study contributes to the existing research by stressing the importance of one underlying question: "governance by whom and for whom?" This question raises the need to understand the underlying dynamics of resource governance and the resulting conflicts. Therefore, we analyzed how (environmental) institutions influence the major drivers of social-ecological conflicts over land in and around three protected areas: Tambopata (Peru), the Extractive Reserve Chico Mendes (Brazil), and Manuripi (Bolivia). We found that state institutions, in particular, affect key conflict drivers in the following way: overlapping responsibilities of governance institutions and limited enforcement of regulations protecting and empowering rural and disadvantaged populations enable external actors to (illegally) access and control resources in the protected areas. Consequently, the already fragile social contract between the residents of the protected areas and their surroundings and the central state is further weakened by the expanding influence of criminal organizations that oppose the state's authority. For state institutions to avoid aggravating these conflict drivers and instead manage them better or even contribute to conflict prevention and mitigation, a transformation from reactive to reflexive institutions and the development of new reflexive governance competencies is needed.
This need for reflexive governance becomes particularly visible when sudden disturbances or shocks impact the SES. Our analysis of the impacts of the COVID-19 pandemic on the interconnections of land use change, ecosystem services, human agency, conflict, and cooperation shows that the pandemic has had a severe influence on the human security of marginalized social groups in southwestern Amazonia. Civil society actions have been an essential strategy in the fight against COVID-19, not just in the health sector but also in the economic, political, social, and cultural realms. However, our research also showed that the pandemic has consolidated and partly renewed criminal structures, while the already weak state has fallen further behind due to the additional tasks of managing the pandemic and other disasters such as floods.
In conclusion, the reflexivity of governance is crucial for fostering cooperation and preventing conflicts in the realm of social-ecological systems. By not only reacting to changes that are already occurring but also reflecting on potential future changes, governance can shape transformation pathways away from detrimental and towards life-sustaining trajectories. It can do so by exercising agency across scales to avoid the crossing of detrimental social-ecological tipping points and instead trigger life-sustaining tipping points that contribute to global social-ecological well-being.
Emission trading systems (ETS) are a widely used instrument to control greenhouse gas emissions while minimizing reduction costs. In an ETS, the desired amount of emissions in a predefined time period is fixed in advance; corresponding to this amount, tradeable allowances are handed out or auctioned to the companies covered by the system. Emissions which are not covered by an allowance are subject to a penalty at the end of the time period.

Emissions depend on non-deterministic parameters such as weather and the state of the economy. Therefore, it is natural to view emissions as a stochastic quantity. This introduces a challenge for the companies involved: in planning their abatement actions, they need to avoid penalty payments without knowing their total amount of emissions. We consider a stochastic control approach to address this problem: in a continuous-time model, we use the rate of emission abatement as a control in minimizing the costs that arise from penalty payments and abatement costs. In a simplified variant of this model, the resulting Hamilton-Jacobi-Bellman (HJB) equation can be solved analytically.

Taking the viewpoint of a regulator of an ETS, our main interest is to determine the resulting emissions and to evaluate their compliance with the given emission target. Additionally, as an incentive for investments in low-emission technologies, a high allowance price with low variability is desirable. Both the resulting emissions and the allowance price are not directly given by the solution to the stochastic control problem. Instead, we need to solve a stochastic differential equation (SDE), where the abatement rate enters as the drift term. Due to the nature of the penalty function, the abatement rate is not continuous. This means that classical results on existence and uniqueness of a solution, as well as convergence of numerical methods such as the Euler-Maruyama scheme, do not apply. Therefore, we prove similar results under assumptions suitable for our case. By applying a standard verification theorem, we show that the stochastic control approach delivers an optimal abatement rate.

We extend the model by considering several consecutive time periods. This enables us to model the transfer of unused allowances to the subsequent time period. In formulating the multi-period model, we pursue two different approaches: in the first, we assume the value that the company anticipates for an unused allowance to be constant throughout one time period. We proceed similarly to the one-period model and again obtain an analytical solution. In the second approach, we introduce an additional stochastic process to simulate the evolution of the anticipated price for an unused allowance.

The model so far assumes that allowances are allocated for free. Therefore, we construct another model extension to incorporate the auctioning of allowances. Then, additionally, the problem of choosing the optimal demand at the auction needs to be solved. We find that the auction price equals the allowance price at the beginning of the respective time period. Furthermore, we show that the resulting emissions as well as the allowance price are unaffected by the introduction of auctioning in the setting of our model.

To perform numerical simulations, we first solve the characteristic partial differential equation derived from the HJB equation by applying the method of lines. Then we apply the Euler-Maruyama scheme to solve the SDE, delivering realizations of the resulting emissions and the allowance price paths.
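A generic Euler-Maruyama iteration for an SDE \(dX_t = a(t,X_t)\,dt + b(t,X_t)\,dW_t\) can be sketched as follows (the constant drift and diffusion in the example are placeholders; in the thesis the drift is the discontinuous optimal abatement rate obtained from the HJB equation, which is precisely why the classical convergence results had to be re-proven):

```python
import numpy as np

# Euler-Maruyama scheme: discretize [0, T] and replace the Brownian increment
# by a normal sample with variance dt at each step.
def euler_maruyama(a, b, x0, T, n_steps, rng):
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + a(t, x[k]) * dt + b(t, x[k]) * dw
        t += dt
    return x

rng = np.random.default_rng(0)
# Illustrative constant drift/diffusion, standing in for the abated emission rate.
path = euler_maruyama(lambda t, x: 0.5, lambda t, x: 0.2, 0.0, 1.0, 250, rng)
```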
Simulation results indicate that, under realistic settings, the probability of non-compliance with the emission target is quite large. It can be reduced, for instance, by an increase of the penalty. In the multi-period model, we observe that allowing the transfer of allowances to the subsequent time period decreases the probability of non-compliance considerably.
Estimation of Motion Vector Fields of Complex Microstructures by Time Series of Volume Images
(2023)
Mechanical tests form one of the pillars in the development and assessment of modern materials. In a world that will be forced to handle its resources more carefully in the near future, the development of materials that are favorable regarding, for example, weight or material consumption is inevitable. To guarantee that such materials can also be used in critical infrastructure, such as foamed materials in the automotive industry or new types of concrete in civil engineering, mechanical properties like tensile or compressive strength have to be thoroughly characterized. One method to do so is by so-called in situ tests, where the mechanical test is combined with an image acquisition technique such as computed tomography.
The resulting time series of volume images capture the delicate and individual nature of each material. The objective of this thesis is to present and develop methods that unveil this behavior and make the motion accessible to algorithms. The estimation of motion has been tackled by many communities, and two of them have already made considerable efforts to solve the problems we are facing. Digital Volume Correlation (DVC), on the one hand, has been developed by material scientists and applied in many different contexts in mechanical testing, but it almost never produces displacement fields that allocate one vector per voxel. Medical Image Registration (MIR), on the other hand, does produce voxel-precise estimates, but is limited to very smooth motion estimates.
The unification of both families, DVC and MIR, under one roof is therefore illustrated in the first half of this thesis. Using the theory of inverse problems, we lay the mathematical foundations to explain why, in our view, neither family is sufficient to deal with all of the problems that come with motion estimation in in situ tests. We then proceed by presenting a third community in motion estimation, namely optical flow, which is normally only applied in two dimensions. Nevertheless, within this community, algorithms have been developed that meet many of our requirements: strategies for large displacements exist, as do methods that resolve jumps, and on top of that the displacement is always calculated at pixel level. This thesis therefore proceeds by extending some of the most successful methods to 3D.
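As an example of such an extension, a Horn-Schunck-style scheme, one of the classical variational optical flow methods, carries over to volume images with only the obvious changes (an illustrative sketch under simplifying assumptions, not the thesis' algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Horn-Schunck-style optical flow in 3D: alternate between the data term
# (brightness constancy) and a smoothness term approximated by local means.
def horn_schunck_3d(vol0, vol1, alpha=1.0, n_iter=100):
    ix, iy, iz = np.gradient(vol0.astype(float))     # spatial derivatives
    it = vol1.astype(float) - vol0.astype(float)     # temporal derivative
    u = np.zeros_like(ix); v = np.zeros_like(ix); w = np.zeros_like(ix)
    for _ in range(n_iter):
        ub, vb, wb = (uniform_filter(f, size=3) for f in (u, v, w))
        num = ix * ub + iy * vb + iz * wb + it
        den = alpha**2 + ix**2 + iy**2 + iz**2
        u = ub - ix * num / den
        v = vb - iy * num / den
        w = wb - iz * num / den
    return u, v, w                                    # one vector per voxel
```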
To ensure the competitiveness of our approach, the last part of this thesis deals with a detailed evaluation of the proposed extensions. We focus on three types of materials, foam, fibre systems and concrete, and use simulated and real in situ tests to compare the optical flow based methods to their competitors from DVC and MIR. By using synthetically generated and simulated displacement fields, we also assess the quality of the calculated displacement fields, a novelty in this area. We conclude this thesis with two specialized applications of our algorithm, which show how the voxel-precise displacement fields serve as useful information to engineers investigating their materials.
An Efficient Automated Machine Learning Framework for Genomics and Proteomics Sequence Analysis
(2023)
Genomics and Proteomics sequence analysis is the scientific study of the language of deoxyribonucleic acid (DNA), ribonucleic acid (RNA) and protein biomolecules, with the objective of controlling the production of proteins and understanding their core functionalities. It helps to detect chronic diseases in early stages and to identify root causes of clinical changes, key genetic targets for pharmaceutical development, and optimizations of therapeutics for various age groups. Most Genomics and Proteomics sequence analysis work is performed using typical wet-lab experimental approaches that make use of different genetic diagnostic technologies. However, these approaches are costly, time-consuming, and skill- and labor-intensive. Hence, they slow down the development of an efficient and economical sequence analysis landscape essential to demystify a variety of cellular processes and the functioning of biomolecules in living organisms. To empower manual wet-lab experiment driven research, many machine learning based approaches have been developed in recent years. However, these approaches cannot be used in practical environments due to their limited performance. Considering the sensitive and inherently demanding nature of Genomics and Proteomics sequence analysis, where misdiagnosis can have far-reaching and serious repercussions, the main objective of this research is to develop an efficient automated computational framework for Genomics and Proteomics sequence analysis using the predictive and prescriptive analytical powers of Artificial Intelligence (AI) to significantly improve healthcare operations.
The proposed framework comprises three main components, namely sequence encoding, feature engineering, and a discrete or continuous value predictor. The sequence encoding module is equipped with a variety of existing and newly developed sequence encoding algorithms that are capable of generating a rich statistical representation of DNA, RNA and protein raw sequences. The feature engineering module offers diverse types of feature selection and dimensionality reduction approaches which can be used to generate the most effective feature space. Furthermore, the discrete and/or continuous value predictor module of the proposed framework contains a wide range of existing machine learning and newly developed deep learning regressors and classifiers. To evaluate the integrity and generalizability of the proposed framework, we have performed large-scale experimentation over diverse types of Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
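As a minimal example of what the sequence encoding module does conceptually, consider one-hot encoding of a raw DNA sequence (the framework's encoders produce far richer statistical representations; this is only illustrative):

```python
import numpy as np

# One-hot encoding: each position of the DNA sequence becomes a 4-dimensional
# indicator vector over the nucleotide alphabet A, C, G, T.
NUCLEOTIDES = "ACGT"

def one_hot_encode(seq):
    idx = {c: i for i, c in enumerate(NUCLEOTIDES)}
    mat = np.zeros((len(seq), len(NUCLEOTIDES)))
    for pos, base in enumerate(seq.upper()):
        if base in idx:                 # ambiguous bases (e.g. 'N') stay all-zero
            mat[pos, idx[base]] = 1.0
    return mat

print(one_hot_encode("ACGTN"))
```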
In Genomics analysis, epigenetic modification detection is one of the key components. It helps clinical researchers and practitioners to distinguish normal cellular activities from malfunctioning ones, which can lead to diverse genetic disorders such as metabolic disorders, cancers, etc. To support this analysis, the proposed framework is used to solve the problem of DNA and histone modification prediction, where it has achieved state-of-the-art performance on 27 publicly available benchmark datasets of 17 different species with a best accuracy of 97%. RNA sequence analysis is another vital component of Genomics sequence analysis, where the identification of different coding and non-coding RNAs as well as their subcellular localization patterns helps to demystify the functions of diverse RNAs, identify root causes of clinical changes, develop precision medicine, and optimize therapeutics. To support this analysis, the proposed framework is utilized for non-coding RNA classification and multi-compartment RNA subcellular localization prediction, where it achieved state-of-the-art performance on 10 publicly available benchmark datasets of Homo sapiens and Mus musculus species with a best accuracy of 98%.
Proteomics sequence analysis is essential to demystify virus pathogenesis, host immunity responses, the ways proteins affect or are affected by cell processes, and their structure and core functionalities. To support this analysis, the proposed framework is used for host protein-protein and virus-host protein-protein interaction prediction. It has achieved state-of-the-art performance on 2 publicly available protein-protein interaction datasets of Homo sapiens and Mus musculus species with a best accuracy of 96%, and on 7 viral-host protein-protein interaction datasets of multiple hosts and viruses with a best accuracy of 94%. Considering the performance and practical significance of the proposed framework, we believe it will help researchers in developing cutting-edge practical applications for diverse Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
Malaria remains a major global health problem, causing more than 400,000 deaths each year. Although very effective antimalarial drugs are on the market, their mode of action is still not fully understood. In recent years, some patients have shown an increased clearance time of Plasmodium falciparum malaria parasites after therapy with the most effective antimalarial compound, artemisinin. It was shown that mutations in the propeller domain of a protein called PfKelch13 are directly linked to this decreased susceptibility towards the drug. To gain insights into the protein's function, I produced different mutants of a truncated version of this protein, containing the BTB and propeller domains, in high yield and purity in insect cells. I showed a positive correlation between the solubility of the recombinant protein and artemisinin susceptibility. Prominent PfKelch13 mutants from the field with decreased artemisinin susceptibility (I543T, R539T, C580Y) were insoluble when recombinantly expressed, suggesting that improper folding of PfKelch13 leads to this decrease in sensitivity. The mutation C580Y is the most frequent mutation in Southeast Asia. According to an existing crystal structure, substitution of this cysteine residue does not allow the formation of an intramolecular disulfide bond with cysteine C532. Interestingly, substitution of these cysteines with serines did not result in improper folding in insect cells, arguing that specific substitutions rather than the residue position itself are responsible for the alteration of drug sensitivity. To test the impact of the disulfide bond on artemisinin susceptibility, I generated stable transgenic parasites expressing the corresponding serine mutations. Neither C580S nor C532S showed decreased artemisinin susceptibility. Therefore, it could be excluded that the formation of the intramolecular disulfide bond influences artemisinin susceptibility. We further asked whether protein abundance influences sensitivity towards the drug. I successfully generated stable transgenic parasites expressing His8-PFKELCH13 fused to the glmS riboswitch. I showed that protein levels could be efficiently down-regulated by more than 90%, resulting in very low parasite susceptibility towards artemisinin. This strain offers a basis for future experiments aimed at understanding the impact of PfKelch13 protein levels on biochemical pathways in malaria parasites.
Peroxiredoxins (Prxs) play an important role in protecting the cell from high amounts of hydroperoxides. Among the five known Prxs in P. falciparum, our group took PfAOP as a model enzyme to study the catalytic cycle. It has been shown that PfAOP reduces hydroperoxides like H2O2 or tBuOOH with fast kinetics, and that reduction of the protein is linked to the GSH/Grx system (Djuika et al. 2013). However, no direct kinetic data were available for the reductive half-reaction of PfAOP and GSH. In this thesis, I qualitatively showed that oxidized PfAOP can be glutathionylated and that, in a next step, glutathione can be transferred to PfGrx. I further determined the rate constants of the glutathionylation of PfAOP by stopped-flow measurements. Rate constants of around \(10^5\ \mathrm{M^{-1}s^{-1}}\) indicate fast kinetics that are able to protect the protein from hyperoxidation and inactivation. Furthermore, I determined the activation energy, entropy and enthalpy for this reaction as 41.1 kJ/mol, -0.79 J/mol and 39.8 kJ/mol, respectively. Hence, the activation energy of the glutathionylation of oxidized PfAOP suggests the breaking of two to three hydrogen bonds, and the reaction is rather temperature-independent.
Synergism of Lipoates and Established Anticancer Drugs in Cell and Mouse Models of Colorectal Cancer
(2021)
Colorectal carcinoma (CRC) is one of the most common cancer entities and currently shows increased incidence and mortality in adults under 50 years of age in Europe and the USA. As it is mostly diagnosed at an advanced stage, the 5-year survival rate of CRC is still low. Therefore, there is a need for new therapeutic approaches and targets for drug candidates, even though standard therapies with the cytostatics 5-fluorouracil (5-FU) or irinotecan (IT) and biologics exist. Energy metabolism, which exhibits alterations characteristic of cancer cells, could represent such a target. The lipoate CPI-613 is a derivative of the naturally occurring α-lipoic acid (LA) and, owing to its unique inhibition of the altered energy metabolism in cancer cells, is a pioneer of a new class of drug substances. CPI-613 has already proven to be an inhibitor of mitochondrial multi-enzyme complexes such as pyruvate dehydrogenase and α-ketoglutarate dehydrogenase. This effect has been described primarily in cancer cells.
The focus of this work was initially on the investigation of cellular responses to the treatment of CRC cells with the lipoate CPI-613, covering, in addition to effects on mitochondrial integrity and the function of oxidative phosphorylation, the endpoints cell death, DNA damage and autophagy. For this purpose, a panel of CRC cell lines as well as non-malignantly transformed human colon epithelial cells (HCEC) were examined. In addition, the mode of action of CPI-613 was examined in isolated murine mitochondria. Furthermore, a possible synergism of a combination treatment of lipoates such as CPI-613 or its parent compound LA with standard chemotherapeutics used in CRC treatment, such as 5-FU and IT, was to be characterized. After identification of the most promising combination in vitro, studies on the increased efficacy of the combination treatment compared to single-agent treatment in vivo followed, as well as an assessment of possible hematotoxic side effects. For this purpose, both the xenograft model in immunodeficient mice (BALB/c nu/nu) and the azoxymethane (AOM)/dextran sodium sulfate (DSS) model for the chemical induction of CRC tumors in C57BL/6 mice were used.
It was shown that CPI-613 led to a reduction of the mitochondrial membrane potential in both isolated mitochondria and CRC cell lines, along with increased formation of reactive oxygen species. This was accompanied by a marked reduction of cellular respiration and was also reflected in a reduced number of mitochondria in CRC cell lines. While treatment with CPI-613 did not trigger cell-cycle arrest, cell death was demonstrated in a panel of diverse CRC cell lines. This was observed with equal potency in the various cell lines, independent of p53 and MSI/MSS status. Various and partly redundant cell-death mechanisms such as apoptosis, necroptosis and caspase-independent cell death were demonstrated after CPI-613 treatment. Treatment with CPI-613 furthermore led to an increased autophagy rate in CRC cells. Analyses of the genotoxic potential of CPI-613 gave no indication of DNA damage. Various combinations of lipoates and standard chemotherapeutics were characterized in vitro with regard to their synergism. Besides a synergistic effect of CPI-613 in combination with IT in CRC cell-culture models, a positive effect was also recorded in mouse models of CRC. While CPI-613 alone already led to a reduction of tumor growth in xenograft models and thus to prolonged survival and therapeutic success, these effects were markedly enhanced by the combination treatment with IT. In chemically induced tumors, by contrast, mainly IT showed a therapeutic effect, which was also observed in combination with CPI-613. Monotherapy with CPI-613 did not lead to significant therapeutic success in this model. The synergism in vitro and in vivo is primarily based on an increased cell-death rate, the depletion of p53, and a reduction of the autophagy rate, and not on increased DNA damage. A hematotoxic side-effect profile of CPI-613 was generally not observed here.
In summary, this work demonstrated that, in the context of CRC, CPI-613 targets the altered energy metabolism and leads to cell death. Furthermore, genotoxicity of CPI-613 could be excluded. A combination of CPI-613 and IT led to increased efficacy in xenograft and chemically induced CRC tumors with respect to the inhibition of tumor growth and survival time. The findings in cell-culture and mouse models of CRC identify CPI-613 as a promising therapeutic component for the treatment of CRC.
Compared to canonical model organisms, the genetic toolbox of kinetoplastid parasites has a considerable gap in the transgenic techniques available. The implementation of CRISPR/Cas9 technology is poised to transform the way we perform genetic manipulations and offers a new and exciting horizon for molecular parasitology. In this study, we use the kinetoplastid parasite Leishmania tarentolae as a model organism. This unicellular eukaryote is an attractive model for both basic and applied research. Understanding Leishmania's basic biology is valuable to underpin differences to the host that might help to treat infectious diseases. Furthermore, it also provides new examples of non-conserved mechanisms that will help to understand the fundamental principles of the biology of eukaryotes and their evolution. In this work, the CRISPR/Cas9 system was used to study mitochondrial protein import.
Here, I show the efficacy of CRISPR/Cas9 in generating knockout and knockin mutants. The proof-of-concept gene PF16 was used to generate immotile knockout parasites and fluorescent knockin mutants fused with mCherry. The APRT gene was also knocked out, conferring resistance to APP.
In addition, I generated endogenous mutants of a constituent of the mitochondrial import machinery, the sulfhydryl oxidoreductase Erv. I showed that the KISS domain and cysteine 17 are dispensable for survival, ruling out that their functions correlate with the essential operation(s) of Erv. I report that the ERV gene and the intervening sequences of its shuttle-pair cysteines are refractory to ablation and modification, respectively, indicating that they are essential for survival. I also generated Erv interactomes using full-length and mutant (ErvΔKISS) baits, revealing candidates with hitherto unknown functions that might be related to Erv function.
I also tested the glmS riboswitch and generated endogenous mutants with CRISPR/Cas9. We asked whether it was possible to obtain knockdown mutants in Leishmania with this technique. The evidence of this study indicates that the system is inefficient in provoking a knockdown phenotype for the genes characterized.
An alternative negative marker was also developed in this work. I propose the APRT gene as a novel and efficient counter-selectable marker compared to the current yFCU and TK genes. The implementation of this system could enable the first shuffling experiments, which are currently not feasible in Leishmania, further highlighting the value of this model organism.
Synapses are the fundamental structures that regulate the functionality of the neural circuit. The ability of the synapse to rapidly modulate its structure and function in response to various sensory inputs gives the nervous system the capacity to incorporate new adaptations and behaviors in the animal. Synapses are highly dynamic throughout the life of the animal, starting from early development. Continuous events of synapse formation and elimination, and of activation and inhibition of synaptic function, are observed at almost all synapses. These processes occur at high speed and require controlled cellular mechanisms. An imbalance in these processes results in a defective nervous system and has been reported in many neurological disorders. Thus, it is important to understand the mechanisms that regulate synapse development, maintenance and function.
Kinases and phosphatases are key regulators of cellular mechanisms. Understanding the function of these molecules in the neuron will shed light on the molecular mechanisms of synaptic plasticity. Using the Drosophila melanogaster larval neuromuscular junction as a model, Bulat et al. (2014) performed a large RNAi-based screen targeting the kinome and phosphatome of Drosophila to identify the essential kinases and phosphatases, and found Myeloid leukemia factor-1 adaptor molecule (Madm) and Protein phosphatase 4 (PP4) as novel regulators of synapse development and maintenance. The function of these molecules in the nervous system had not been reported, and hence I investigated the role of Madm and PP4 in the regulation of synapse development, maintenance and function.
Myeloid leukemia factor-1 adaptor molecule (Madm), a ubiquitously expressed pseudokinase, functions to regulate synaptic growth, stability and function. Using a combination of genetics and high-throughput imaging, I could demonstrate that Madm regulates synaptic growth and stability from the presynapse and synaptic organization from the postsynapse. I could also demonstrate that Madm acts in association with the mTOR pathway to regulate synapse growth, acting downstream of 4E-BP. In addition, using electrophysiology, we could demonstrate that Madm is essential for basic synaptic transmission, with an additive function in retrograde synaptic potentiation. In summary, I could demonstrate that Madm is a novel regulator of synaptic development, maintenance and function.
Protein phosphatase 4 (PP4), a ubiquitously expressed protein phosphatase, is involved in the regulation of multiple aspects of the nervous system. I could demonstrate that PP4 is essential for the development of the nervous system and for metamorphosis. Using genetic and imaging analyses, I could demonstrate that loss of PP4 results in abnormal morphology of cell organelles. In addition, I could show that loss of PP4 results in defective brain development with poorly developed structures.
Altogether, in this study, I could demonstrate the importance of two novel molecules, the pseudokinase Madm and the protein phosphatase PP4, in regulating distinct aspects of the neuron.
The fifth-generation (5G) of wireless networks promises to bring new advances, such as a huge increase in mobile data rates, a plunge in communications latency, and an increase in the quality of experience perceived by users, which can cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy neighbor problem in a vRAN deployment.
Gliomas are one of the most common types of primary brain tumors. Among those, high-grade astrocytomas - so-called glioblastoma multiforme - are the most aggressive type of cancer originating in the brain, leaving patients a median survival time of 15 to 20 months after diagnosis. The invasive behavior of the tumor leads to considerable difficulties regarding the localization of all tumor cells and thus impedes successful therapy. Here, mathematical models can help to enhance the assessment of the tumor's extent.
In this thesis, we set up a multiscale model for the evolution of a glioblastoma. Starting on the microscopic level, we model subcellular binding processes and velocity dynamics of single cancer cells. From the resulting mesoscopic equation, we derive a macroscopic equation via scaling methods. Combining this equation with macroscopic descriptions of the tumor environment, a nonlinear PDE-ODE system is obtained. We consider several variations of the derived model, among others introducing a new model for therapy by Gliadel wafers, a treatment approach indicated, inter alia, for recurrent glioblastoma.
We prove global existence of a weak solution to a version of the developed PDE-ODE system, containing degenerate diffusion and flux limitation in the taxis terms of the tumor equation. The nonnegativity of all solution components and their boundedness by the respective biological carrying capacities are shown.
Finally, 2D simulations are performed, illustrating the influence of different parts of the model on tumor evolution. The effects of treatment by Gliadel wafers are compared to the therapy outcomes of classical chemotherapy in different settings.
Scientific research plays a crucial role in the development of a society. Ever-increasing volumes of scientific publications make it extremely challenging to analyze and maintain insights into scientific communities, such as collaboration and citation trends or the evolution of research interests. This thesis is an effort towards using scientific publications to provide detailed insights into a scientific community from a range of aspects. The contribution of this thesis is five-fold.
Firstly, this thesis proposes approaches for automatic information extraction from scientific publications. The proposed layout-based approach is inspired by how human beings perceive individual references relying only on visual cues. It significantly outperforms existing text-based techniques and is independent of any domain or language.
Secondly, this thesis tackles the problem of identifying meaningful topics for a given publication, as the keywords provided in the publication are not always accurate representatives of the publication's topic. To rectify this problem, this thesis proposes a state-of-the-art keyword extraction approach that employs a domain ontology along with the detected keywords to perform topic modeling for a given set of publications.
Thirdly, this thesis analyses the disposition of each citation to understand its true essence. For this purpose, we propose a transformer-based approach for analyzing the impact of each citation appearing in a scientific publication. The impact of a citation can be determined by its inherent sentiment and intent, which refer to the assessment and motive of an author towards citing a scientific publication.
Furthermore, this thesis quantifies the influence of a research contributor in a scientific community by introducing a new semantic index for researchers that takes both quantitative and qualitative aspects of citations into account to better represent the prestige of a researcher. The semantic index is also evaluated for conformity with the guidelines and recommendations of various research funding organizations for assessing the impact of a researcher.
In this thesis, all of the aforementioned aspects are packaged together in a single framework called Academic Community Explorer (ACE) 2.0, which automatically extracts and analyzes information from scientific publications and visualizes the insights using several interactive visualizations. These visualizations provide an instant glimpse into the scientific communities from a wide range of aspects with different granularity levels.
The association between social origin and educational attainment has been repeatedly confirmed and studied in social science research. Much of the international comparative research to date has shown that countries differ in the extent of educational inequality. This research suggests that the institutional design of the education system can affect multiple dimensions of educational inequality, such as school performance and educational decisions. In addition to international comparative research, other research also suggests that institutional characteristics moderate the link between social origin and educational inequality. Thus, the institutional features of the education system provide opportunities for policy interventions to influence the relationship between social origin and educational inequality and to reduce educational inequalities. The literature examines and discusses various institutional characteristics of the education system for their respective effects on, or associations with, educational inequalities. In this respect, tracking is an institutional characteristic that has been studied repeatedly and could be an important link between social origin and education. Tracking is the practice of separating students by performance. This separation can occur between schools or within schools: students are placed in a particular school type (between-school tracking) or class (within-school tracking) based on their performance. National and international research demonstrates the importance of tracking in relation to the emergence of educational inequalities. In this context, previous research has often shown that early and strict tracking leads to greater educational inequality. However, there is also research that finds no effects of tracking, or even inequality-reducing effects of early and strict tracking. Against this background, further research on the associations with - and effects of - tracking, including under different settings and contexts, is important for a better understanding of tracking and may be particularly interesting for the German education system. This is because, apart from some deviations, the German education system is characterized by an early and strict separation of students into different school types in secondary education. Over the years, there have been many different educational reforms in Germany with different scopes and goals, and at different phases of the education system. The fact that the federal states in Germany can decide independently on education policy (Kulturhoheit der Länder - cultural sovereignty of the states) means that their education systems have partly developed in different directions. The following contribution is therefore limited to three selected aspects of tracking in the education systems of the German federal states and their influence on features of educational inequality: integrated comprehensive schools, timing of tracking, and strictness of tracking.
This thesis deals with the simulation of large insurance portfolios. On the one hand, we need to model the contracts' development and the insured collective's structure and dynamics. On the other hand, an important task is the forward projection of the given balance sheet. Questions that are interesting in this context, such as the question of the default probability up to a certain time or the question of whether interest rate promises can be kept in the long term, cannot be answered analytically without strong simplifications. Reasons for this are high dependencies between the insurer's assets and liabilities, interactions between existing and new contracts due to claims on a collective reserve, potential policy features such as a guaranteed interest rate, and individual surrender options of the insured. As a consequence, we need numerical calculations, and especially the volatile financial markets require stochastic simulations. Despite the fact that advances in technology with increasing computing capacities allow for faster computations, a contract-specific simulation of all policies is often an impossible task. This is due to the size and heterogeneity of insurance portfolios, long time horizons, and the number of necessary Monte Carlo simulations. Instead, suitable approximation techniques are required.
In this thesis, we therefore develop compression methods, where the insured collective is grouped into cohorts based on selected contract-related criteria, and then only an enormously reduced number of representative contracts needs to be simulated. We also show how to efficiently integrate new contracts into the existing insurance portfolio. Our grouping schemes are flexible, can be applied to any insurance portfolio, and maintain the existing structure of the insured collective. Furthermore, we investigate the efficiency of the compression methods and their quality in approximating the real life-insurance portfolio.
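As an illustration of such a compression, a minimal sketch of cohort building with pandas is given below; the grouping criteria, the aggregation rules, and all column names are hypothetical choices for demonstration, not the scheme developed in the thesis.

    import pandas as pd

    # Hypothetical portfolio: one row per contract (all names illustrative).
    portfolio = pd.DataFrame({
        "age":             [34, 36, 35, 52, 51, 50],
        "sex":             ["f", "f", "m", "m", "m", "f"],
        "guaranteed_rate": [0.0125, 0.0125, 0.0125, 0.025, 0.025, 0.025],
        "sum_insured":     [50_000, 60_000, 55_000, 80_000, 90_000, 85_000],
        "reserve":         [4_000, 5_000, 4_500, 30_000, 32_000, 31_000],
    })

    # Group contracts into cohorts by selected contract-related criteria
    # (here: 5-year age band, sex, guaranteed interest rate) and replace
    # each cohort by one representative contract: volume-type quantities
    # are summed, the representative age is reserve-weighted.
    portfolio["age_band"] = (portfolio["age"] // 5) * 5
    grouped = portfolio.groupby(["age_band", "sex", "guaranteed_rate"])
    model_points = pd.DataFrame({
        "n_contracts": grouped.size(),
        "sum_insured": grouped["sum_insured"].sum(),
        "reserve":     grouped["reserve"].sum(),
        "age": grouped.apply(
            lambda g: (g["age"] * g["reserve"]).sum() / g["reserve"].sum()
        ),
    }).reset_index()
    print(model_points)  # only these representatives are simulated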
For the simulation of the insurance business, we introduce a stochastic asset-liability management (ALM) model. Starting with an initial insurance portfolio, our aim is the forward projection of a given balance sheet structure. We investigate conditions for a long-term stability or stationarity corresponding to the idea of a solid and healthy insurance company. Furthermore, a main result is the proof that our model satisfies the fundamental balance sheet equation at the end of every period, which is in line with the principle of double-entry bookkeeping. We analyze several strategies for investing in the capital market and for financing the due obligations. Motivated by observed weaknesses, we develop new, more sophisticated strategies. In extensive simulation studies, we illustrate the short- and long-term behavior of our ALM model and show impacts of different business forms, the predicted new business, and possible capital market crashes on the profitability and stability of a life insurer.
This dissertation presents a generalization of the generalized grey Brownian motion with componentwise independence, called a vector-valued generalized grey Brownian motion (vggBm), and builds a framework of mathematical analysis around this process with the aim of solving stochastic differential equations with respect to it. As in the one-dimensional case, the construction of vggBm starts with selecting the appropriate nuclear triple and constructing the corresponding probability measure on the co-nuclear space. Since independence of the components is essential in constructing vggBm, a natural way to achieve this is to use the nuclear triple of product spaces: \[ \mathcal{S}_d(\mathbb{R}) \subset L^2_d(\mathbb{R}) \subset \mathcal{S}_d'(\mathbb{R}), \]
where \( L^2_d(\mathbb{R}) \) is the real separable Hilbert space of \( \mathbb{R}^d \)-valued square integrable functions on \( \mathbb{R} \) with respect to the Lebesgue measure, \( \mathcal{S}_d(\mathbb{R}) \) is the external direct sum of \(d\) copies of the nuclear space \(\mathcal{S}(\mathbb{R})\) of Schwartz test functions, and \(\mathcal{S}_d'(\mathbb{R})\) is the dual space of \(\mathcal{S}_d(\mathbb{R})\).
The probability measure used is the \(d\)-fold product of the Mittag-Leffler measure, denoted by \(\mu_{\beta}^{\otimes d}\), whose characteristic function is given by \[ \int_{\mathcal{S}_d'(\mathbb{R})} e^{i\langle\omega,\varphi\rangle}\,\text{d}\mu_{\beta}^{\otimes d}(\omega) = \prod_{k=1}^{d}E_\beta\left(-\frac{1}{2}\langle\varphi_k,\varphi_k\rangle\right),\qquad \varphi\in \mathcal{S}_d(\mathbb{R}), \]
where \( \beta\in(0,1] \), and \( E_\beta \) is the Mittag-Leffler function. Vector-valued generalized grey Brownian motion, denoted by \( B^{\beta,\alpha}_{d}:=(B^{\beta,\alpha}_{d,t})_{t\geq 0}\), is then defined as a process taking values in \( L^2(\mu_{\beta}^{\otimes d};\mathbb{R}^d) \) given by
\[ B^{\beta,\alpha}_{d,t}(\omega) := (\langle\omega_1,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle,\dots,\langle\omega_d,M^{\alpha/2}_{-}1\!\!1_{[0,t)}\rangle),\quad \omega\in\mathcal{S}_d'(\mathbb{R}), \]
where \( M^{\alpha/2} \) is an appropriate fractional operator indexed by \( \alpha\in(0,2) \) and \( 1\!\!1_{[0,t)} \) is the indicator function of the interval \( [0,t) \). For \(d \geq 2\), this process does not, in general, coincide with the \(d\)-dimensional analogue of ggBm, since componentwise independence of the latter process holds only in the Gaussian case.
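Here, \(E_\beta\) denotes the classical one-parameter Mittag-Leffler function,
\[ E_\beta(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta k + 1)}, \qquad z \in \mathbb{C}; \]
in particular \(E_1(z) = e^z\), so for \(\beta = 1\) the characteristic function above becomes Gaussian and the construction recovers \(d\)-dimensional fractional Brownian motion.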
The study of analysis around vggBm starts with establishing access to Appell systems, from which characterizations of and tools for the corresponding distribution spaces are obtained. Then, explicit examples of the use of these characterizations and tools are given: the construction of Donsker's delta function, the existence of local times and self-intersection local times of vggBm, the existence of the derivative of vggBm in the sense of distributions, and the existence of solutions to linear stochastic differential equations with respect to vggBm.
Chromosomal aberrations are manifold changes in the configuration of the DNA. Each cell in a tumor may accumulate different karyotype changes, making it challenging to determine the causes and consequences of this instability. Therefore, model systems have been developed in the past to generate and study specific genome alterations. In this thesis, I present the results of my studies on three types of chromosomal aberrations, all of which may contribute to tumor development or progression.
Chromothripsis is a phenomenon that describes a one-off massive chromosomal disruption and reassembly, perhaps arising via DNA damage in micronuclei (MN). MN are small, DNA-packed nuclear envelopes. I tested potential causes of DNA damage in MN and found that the rupture of the MN envelope and the entry of cytosolic fractions increase DNA damage in MN. Furthermore, I addressed the question of what physiological consequences cell lines with an additional rearranged chromosome have compared to those with an intact extra chromosome. Strikingly, the cells with more rearrangements showed a functional advantage resulting in an improved fitness potential.
However, the engineering of polysomic cell lines with fully intact additional chromosomes increases various cellular stress responses and reduces the proliferation capacity. To investigate how cancer cells overcome the detrimental consequences of aneuploidy, I explored physiological adaptations of model cells with a defined additional chromosome that underwent in vivo and in vitro evolution. Interestingly, unfavorable phenotypes of aneuploid cells, such as replication stress, were mitigated upon evolution. Furthermore, I examined replication at single-molecule resolution, revealing alterations after evolution that might underlie the bypass or tolerance of replication stress.
In contrast to these unbalanced forms of genomic aberrations, whole genome doubling (WGD) leads to a fully doubled chromosome set, which was shown to evolve into aneuploid karyotypes through chromosomal instability (CIN), frequently by losing chromosomes. Cells that underwent WGD accumulate DNA damage in the S phase. I performed a single-molecule analysis of the DNA during the first cell cycle after WGD to elucidate how the DNA damage arises and found that the number of active origins is not sufficient to faithfully replicate the doubled amount of DNA in the first S phase after WGD. This starts a genome-destabilizing cascade that eventually promotes tumorigenesis, metastasis, and poor patient outcome.
Taken together, these studies provide insights into the causes and consequences of three types of genomic aberrations: chromothripsis, polysomy, and WGD. However different these phenomena may be, they share one common feature – they contribute to tumor development and progression. Therefore, elucidating the aberrant cell functions caused by genomic aberrations contributes to a better understanding of a cancer cell's nature and will perhaps help to find new cancer therapy targets.
The generally unsupervised nature of autoencoder models implies that the main training metric is formulated as the error between input images and their corresponding reconstructions. Different reconstruction loss variations and latent space regularizations have been shown to improve model performance, depending on the tasks to solve, and to induce new desirable properties like disentanglement. Nevertheless, measuring success in, or enforcing properties through, the input pixel space is a challenging endeavor. In this work, we want to make more efficient use of the available data and provide design choices to be considered in the recording or generation of future datasets, to implicitly induce desirable properties during training. To this end, we propose a new sampling technique which matches semantically important parts of the image while randomizing the other parts, leading to salient feature extraction and a neglect of unimportant details. Further, we propose to recursively apply a previously trained autoencoder model, which can then be interpreted as a dynamical system with desirable properties for generalization and uncertainty estimation.
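To make the dynamical-system view concrete, here is a minimal, self-contained sketch in which an untrained, randomly initialized stand-in for an autoencoder is applied recursively and the iterates settle towards a fixed point; the architecture and all names are placeholders, not the models used in this work.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a trained autoencoder: a small nonlinear encoder/decoder
    # pair whose weights are scaled down so the composed map is contractive.
    d, k = 64, 8
    W_enc = 0.1 * rng.standard_normal((k, d))
    W_dec = 0.1 * rng.standard_normal((d, k))

    def autoencoder(x):
        z = np.tanh(W_enc @ x)   # encode to the latent space
        return W_dec @ z         # decode back to input space

    # Recursive application: x_{t+1} = f(x_t). For a contractive map the
    # distance between successive reconstructions shrinks towards zero.
    x = rng.standard_normal(d)
    for t in range(10):
        x_next = autoencoder(x)
        print(t, float(np.linalg.norm(x_next - x)))
        x = x_next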
The proposed methods can be combined with any existing reconstruction loss. We give a detailed analysis of the resulting properties on various datasets and show improvements on several computer vision tasks: image and illumination normalization, invariances, synthetic to real generalization, uncertainty estimation and improved classification accuracy by means of simple classifiers in the latent space.
These investigations are adopted in the automotive application of vehicle interior rear seat occupant classification. For the latter, we release a synthetic dataset with several fine-grained extensions such that all the aforementioned topics can be investigated in isolation, or together, in a single application environment. We provide quantitative evidence that machine learning, and in particular deep learning methods cannot readily be used in industrial applications when only a limited amount of variation is available for training. The latter can, however, often be the case because of constraints enforced by the application to be considered and financial limitations.
This thesis concerns itself with the long-term behavior of generalized Langevin dynamics with multiplicative noise, i.e. the solutions to a class of two-component stochastic differential equations in \( \mathbb{R}^{d_1}\times\mathbb{R}^{d_2} \) subject to external influence induced by potentials \( \Phi \) and \( \Psi \), where the stochastic term is only present in the second component, on which it also depends. In particular, convergence to an equilibrium defined by an invariant initial distribution \( \mu \) is shown for weak solutions to the generalized Langevin equation obtained via generalized Dirichlet forms, and the convergence rate is estimated by applying hypocoercivity methods relying on weak or classical Poincaré inequalities. As a prerequisite, the space of compactly supported smooth functions is proven to be a domain of essential m-dissipativity for the associated Kolmogorov backward operator on \(L^2(\mu)\).
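For orientation, the classical Poincaré inequality for \(\mu\) takes the standard form
\[ \operatorname{Var}_\mu(f) \leq C_P \int |\nabla f|^2 \, \mathrm{d}\mu, \]
and the hypocoercivity method then yields an exponential decay estimate for the associated semigroup \((T_t)_{t \geq 0}\),
\[ \| T_t f - \mu(f) \|_{L^2(\mu)} \leq C e^{-\kappa t} \| f - \mu(f) \|_{L^2(\mu)}, \qquad t \geq 0, \]
with explicit constants \(C \geq 1\) and \(\kappa > 0\); a weak Poincaré inequality gives correspondingly weaker, sub-exponential rates. These are the generic forms of the estimates used in this framework; the thesis's precise constants and conditions are not reproduced here.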
In the second part of the thesis, similar Langevin dynamics are considered, however defined on a product of infinite-dimensional separable Hilbert spaces. The set of finitely based smooth bounded functions is shown to be a domain of essential m-dissipativity for the corresponding Kolmogorov operator \( L \) on \( L^2(\mu) \) for a Gaussian measure \( \mu \), by applying the previous finite-dimensional result to appropriate restrictions of \( L \). Under further boundedness conditions on the diffusion coefficient relative to the covariance operators of \( \mu \), hypocoercivity of the generated semigroup is proved, as well as the existence of an associated weakly continuous Markov process which provides an analytically weak solution to the considered Langevin equation.
In recent years, deep learning has made substantial improvements in various fields like image understanding, Natural Language Processing (NLP), etc. These advancements have led to the release of many commercial applications which aim to help users carry out their daily tasks. Personal digital assistants are one such successful application of NLP, with a diverse user base from all age groups. NLP tasks like Natural Language Understanding (NLU) and Natural Language Generation (NLG) are core components for building these assistants. However, like any other deep learning model, the growth of NLU and NLG models is directly coupled to tremendous amounts of training examples, which are expensive to collect due to annotation costs. Therefore, this work investigates methodologies to build NLU and NLG systems in a data-constrained setting.
We evaluate the problem of limited training data in multiple scenarios like limited or no data available when building a new system, availability of a few labeled examples when adding a new feature to an existing system, and changes in the distribution of test data during the lifetime of a deployed system.
Motivated by the standard methods for handling data-constrained settings, we propose novel approaches to generate data and exploit latent representations to overcome the performance drops emerging from limited training data. We propose a framework to generate high-quality synthetic data when few training examples are available for a newly added feature of a dialogue agent. Our interpretation-to-text model uses existing training data for bootstrapping new features and improves the accuracy of the downstream tasks of intent classification and slot labeling. Subsequently, we study a few-shot setting and observe that generation systems face a low semantic coverage problem. Hence, we present an unsupervised NLG algorithm that ensures that all relevant semantic information is present in the generated text.
We also study whether we really need all training examples to learn a generalized model. We propose a data selection method that selects the most informative training examples to train Visual Question Answering (VQA) models without erosion of accuracy. We leverage the already available inter-annotator agreement and design a diagnostic tool, called EaSe, that exploits the entropy and semantic similarity of answer patterns.
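As an illustration of the entropy component of such a score (EaSe additionally uses the semantic similarity of the answers, which is omitted here; the function name and examples are illustrative, not the tool's actual implementation):

    import numpy as np
    from collections import Counter

    def answer_entropy(answers):
        """Normalized Shannon entropy of an answer pattern: 0 means all
        annotators agree (an 'easy' question), values near 1 mean
        maximal disagreement."""
        counts = np.array(list(Counter(answers).values()), dtype=float)
        p = counts / counts.sum()
        h = float(-(p * np.log2(p)).sum())
        return h / np.log2(len(answers)) if len(answers) > 1 else 0.0

    # Ten VQA-style annotator answers per question (made-up examples).
    print(answer_entropy(["cat"] * 10))               # 0.0 -> full agreement
    print(answer_entropy(["cat"] * 6 + ["dog"] * 4))  # > 0  -> disagreement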
Finally, we discuss two empirical studies to understand the feature space of VQA models and show how language model pre-training and exploiting the multimodal embedding space allow for building data-constrained models with minimal or no accuracy loss.
This work is concerned with two often separated disciplines. First, experimental studies in which the effect of the cooling rate on the martensite transformation and the resulting microstructure in a low-alloy steel is investigated. From this, a possible transformation mechanism is derived. Second, the development of a simulation model which describes the martensitic morphology and its evolution. In this context, a phase field model is presented, introducing order parameters to describe the material state, namely austenite and martensite. The evolution of the order parameters is assumed to follow the time-dependent Ginzburg-Landau equation. A major extension to previous models is the consideration of twelve crystallographic martensite variants corresponding to the Nishiyama-Wassermann orientation relationship. To describe the ordered displacement of atoms during transformation and to account for the martensitic substructure, the well-known phenomenological theory of martensite crystallography is employed. The presented experiments as well as thermodynamic calculations are used as a basis for the identification of model parameters. With the presented model, basic features of the martensitic transformation can be reproduced. These include the martensite start temperature and the hierarchical microstructure consisting of blocks and packets. The block sizes are in good agreement with the experimentally observed sizes.
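The evolution law named in this abstract has, in its generic form, the structure of a gradient flow (shown here schematically; the specific free energy functional \(F\), with chemical, gradient, and elastic contributions, is model-dependent and not reproduced from the thesis):
\[ \frac{\partial \eta_i}{\partial t} = -M \, \frac{\delta F}{\delta \eta_i}, \qquad i = 1, \dots, 12, \]
where \(\eta_i\) are the order parameters of the twelve martensite variants and \(M\) is a mobility constant.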
Despite their “weak nature”, London dispersion interactions are omnipresent and of fundamental importance for many aspects of chemistry and biology, and have often been underestimated in the description of intra- and intermolecular interactions. In this thesis, London dispersion is investigated in the gas phase with molecular beam experiments and quantum chemistry. The focus of this work lies on the investigation of London dispersion in the electronic ground state and in electronically excited states. For the electronic ground state, dispersion-bound dimers of triphenylmethane derivatives were analyzed. Depending on the dispersion energy donor, a tail-to-tail (TPM), head-to-tail (iPrTPM) or head-to-head (tBuTPM) arrangement can be assumed for the minimum structure. The tBuTPM dimer exhibits an exceptionally short C-H···H-C contact which is stabilized by strong London dispersion interactions, as quantified by energy decomposition analysis. For the characterization of the dimer, the calculation of anharmonic frequencies was of high importance and was also validated against literature data. The second system, the chromone-MeOH balance, represents an ideal molecular balance with two competing docking sites at the carbonyl oxygen. The experimental results are compared to theoretical predictions obtained from (TD)DFT, DLPNO-CCSD(T) and SAPT calculations to study the balance between electrostatics, induction and dispersion interactions in the S0 and T1 states. The chromone-solvent system was identified as an ideal system for studying London dispersion in multiple electronic states. Furthermore, candidate derivatives of chromone were analyzed with quantum chemical methods in the electronic ground and electronically excited states in an attempt to identify suitable candidates for further experiments. 6-Methylchromone shows promising behavior in stabilizing the inside pocket regardless of the electronic state and was analyzed in more detail with a variety of methods. Similar analyses of 2-CF3-chromone and 2-CF3,6-methylchromone showed no special effect of a substitution in the 2-position, nor any cooperative effects.
In group theory, a large and important family of infinite groups is given by the algebraic groups. These groups and their structure are already well understood. In representation theory, the study of the unipotent variety in algebraic groups - and, by extension, the study of the nilpotent variety in the associated Lie algebra - is of particular interest.
Let \( G \) be a connected reductive algebraic group over an algebraically closed field \(\mathbf{k}\), and let \(\operatorname{Lie}(G)\) be its associated Lie algebra. By now, the orbits in the nilpotent and unipotent variety under the action of \(G\) are completely known and can be found, for example, in the book by Liebeck and Seitz. There exists, however, no uniform description of these orbits that holds in both good and bad characteristic. With this in mind, Lusztig defined a partition of the unipotent variety of \(G\) in 2011. Equivalently, one can consider certain subsets of the nilpotent variety of \(\operatorname{Lie}(G)\) called the nilpotent pieces. This approach appears in the same paper by Lusztig, in which he explicitly determines the nilpotent pieces for simple algebraic groups of classical type.
The nilpotent pieces for the exceptional groups of type \(G_2, F_4, E_6, E_7,\) and \(E_8\) in bad characteristic have not yet been determined.
This thesis gives an introduction to the definition of the nilpotent pieces and presents a solution to this problem for groups of type \(G_2, F_4, E_6\), and partly for \(E_7\). The solution relies heavily on computational work which we elaborate on in later chapters.
Wreath product groups \(C_\ell \wr \mathfrak{S}_n\) have a rich combinatorial representation theory coming from the symmetric group case and involving partitions, Young tableaux, and Specht modules. To such a wreath product group \(W\), one can associate various algebras and geometric objects: Hecke algebras, quantum groups, Hilbert schemes, Calogero--Moser spaces, and (restricted) rational Cherednik algebras. Over the years, surprising connections have been made between many of these objects, and many of these connections have been traced back to combinatorial constructions and properties of the group \(W\) itself.
In this thesis, we have studied one of the algebras, namely the restricted rational Cherednik algebra \(\overline{\mathsf{H}}_\mathbf{c}(W)\), in order to find combinatorial models which describe certain representation theoretical phenomena around \(\overline{\mathsf{H}}_\mathbf{c}(W)\). In particular, we generalize a result by Gordon and describe the graded \(W\)-characters of the simple modules of \(\overline{\mathsf{H}}_\mathbf{c}(W)\) for generic parameter \(\mathbf{c}\) using Haiman's wreath Macdonald polynomials. These graded \(W\)-characters turn out to be specializations of Haiman's wreath Macdonald polynomials. In the non-generic parameter case, we use recent results by Maksimau to combinatorially express an inductive rule of \(\overline{\mathsf{H}}_\mathbf{c}(W)\)-modules first described by Bellamy. We use our results in type \(B\) to describe the (ungraded) \(B_n\)-character of simple \(\overline{\mathsf{H}}_\mathbf{c}(B_n)\)-modules associated to bipartitions with one empty part. Afterwards, we relate this combinatorial induction to various other algebras and families of \(W\)-characters found in the literature such as Lusztig's constructible characters, as well as detail some connections between generic and non-generic parameter using wreath Macdonald polynomials.
Climate change and its effects are accelerating, with climate-related disasters surging. To tackle climate change, the reduction of emissions by means of climate policy is vital. As such, the purpose of the present dissertation is to provide deeper insights about market-based and non-market-based environmental state interventions. Using regression analyses, the empirical part of this doctoral thesis investigates the adverse effect of financial subsidy payments on the energy market. Findings indicate that subsidized renewables may depress the profitability of energy storages and lower their own market values. Research projects demonstrate that carbon pricing is a promising solution to counteract the adverse effect. The theoretical part of this doctoral thesis examines the implementation of a unilateral price floor in emissions trading schemes and emissions cap negotiations. Results suggest that, under certain conditions, i) a unilateral price floor can be welfare-enhancing and ii) negotiations can achieve the socially optimal emissions cap. The dissertation helps provide a better understanding of climate policy design and emphasizes the advantage of carbon pricing as a market-based approach.
Pyrrolizidine alkaloids (PA) are secondary plant metabolites occurring in a great many plant species worldwide, known to exhibit hepatotoxic, genotoxic and carcinogenic properties after metabolic activation. In recent years, contamination of food, feed and herbal medicines with PA has become an increasing problem. The concept of interim relative potency factors (iREP) proposed by Merz and Schrenk in 2016 was a new approach for risk assessment of PA. While existing approaches of risk assessment assumed equivalent toxic potency for all PA congeners, the approach of Merz and Schrenk considered the structural features of individual PA congeners based on existing data from the literature. In order to generate further data on the structure-specific toxicity of PA, congeners of different structural classes were investigated in different in vitro test systems.
In vitro cytotoxicity was investigated in primary rat hepatocytes, HepG2 C9 cells (overexpressing human CYP3A4) and naïve HepG2 cells. Overall, lasiocarpine and the cyclic di-esters (except monocrotaline) showed much stronger cytotoxic effects than the tested mono-esters in both primary rat hepatocytes and HepG2 C9 cells. Primary hepatocytes were the most sensitive cell type for investigating the cytotoxicity of the different PA congeners, followed by the HepG2 C9 cells. This is confirmed by the markedly higher metabolism rates for all investigated PA in primary rat hepatocytes determined in the metabolism experiments. In naïve HepG2 cells, no cytotoxic effects could be observed. The influence of cytochrome P450 (CYP) enzymes on the formation of toxic metabolites seems to play a crucial role. This assumption was supported by using ketoconazole as a CYP inhibitor and by testing various pre-incubation times in primary rat hepatocytes. CYP activity was measured using the 7-benzyloxyresorufin O-dealkylase (BROD) assay in primary rat hepatocytes and in HepG2 C9 cells. Glutathione (GSH) depletion using buthionine sulfoximine (BSO) led to slightly stronger cytotoxic effects for several, but not all, of the tested PA.
In contrast to the negative mutagenicity results in the Ames fluctuation assay using Salmonella strains TA98 and TA100, with and without metabolic activation by S9 mix, all tested PA congeners induced micronuclei in the HepG2 C9 cell line. Again, lasiocarpine and the cyclic di-esters (except monocrotaline) were the most potent ones. In conclusion, the data from the cytotoxicity and genotoxicity experiments on the tested PA congeners confirm the published iREP factors with a few exceptions, in particular for monocrotaline and echimidine.
Additionally, the metabolism of six selected PA was studied in primary rat hepatocytes and HepG2 C9 cells. Generally, almost all tested cyclic and open-chained di-esters (except retrorsine) showed much higher metabolism rates in both cell types than the mono-esters, for which only low metabolism rates could be measured. The same was observed for the quantified amounts of reactive metabolites in the supernatants of both cell types. In general, these data also support the results from the cytotoxicity and genotoxicity experiments and help to better understand the complex metabolism and the structure-specific toxicity of the different PA congeners.
In the representation theory of finite groups, the so-called local-global conjectures assert a relation between the representation theory of a finite group and one of its local subgroups. The McKay-Navarro conjecture claims that the action of a set of Galois automorphisms on certain ordinary characters of the local and global group is equivariant. Navarro, Späth, and Vallejo reduced the conjecture to a problem about simple groups in 2019 and stated an inductive condition that has to be verified for all finite simple groups.
In this work, we give an introduction to the character theory of finite groups and state the McKay-Navarro conjecture and its inductive condition. Furthermore, we recall the definition of finite groups of Lie type and present results regarding their structure and their representation theory.
In the second part of this work, we verify the inductive McKay-Navarro condition for various families of finite groups of Lie type.
In defining characteristic, most groups have already been considered by Ruhstorfer.
We show that the inductive condition also holds for the groups with exceptional graph automorphisms, the Suzuki and Ree groups, the groups \(B_n(2)\) for \(n \geq 2\), as well as for the simple groups of Lie type with non-generic Schur multiplier in their defining characteristic.
This completes the verification of the inductive McKay-Navarro condition in defining characteristic. We further consider the Suzuki and Ree groups and verify the inductive condition for all primes. On the way, we show that there exists a Galois-equivariant Jordan decomposition for their irreducible characters.
Moreover, we consider some families of groups of Lie type that do not admit a generic choice of a local subgroup.
We show that the inductive condition is satisfied for the prime \(\ell=3\) and the groups \(\text{PSL}_3(q)\) with \(q \equiv 4, 7 \mod 9\), \(\text{PSU}_3(q)\) with \(q \equiv 2, 5 \mod 9\), and \(G_2 (q)\) with \(q \equiv 2, 4, 5, 7 \mod 9\).
Further, we verify the inductive condition for the prime \(\ell=2\) and \(G_2(3^f)\) for \(f \geq 1\), \(^3 D_4(q)\), and \(^2E_6(q)\) where \(q\) is an odd prime power.
The four essays deal with social motivators for human behavior in economics, namely social norms and social preferences. The first three essays present and analyze a particular social preference model, socially attentive preferences. The fourth essay gives a review of the theoretical economic literature on social norms.
This thesis is divided into seven distinct research projects on mono- and multinuclear transition metal complexes as trapped ions in the gas phase, as well as one chapter focusing on the development of a new ion source to enable access to catalytic processes via coadsorption.
ElectroSpray Ionization (ESI) transfers ions from solution to the gas phase for mass spectrometric investigations, allowing a broad variety of experimental methods to obtain fundamental insights into the molecular properties of isolated complexes, devoid of solvent, crystal lattice, bulk, or supporting surface effects.
Collision Induced Dissociation (CID) probes molecular fragmentation mechanisms and relative gas-phase stabilities at room temperature. Laser experiments such as InfraRed (Multiple) Photon Dissociation and UltraViolet Photon Dissociation offer information on bonding motifs, yielding molecular structures and their electronic ground states. When quantum chemical calculations utilizing Density Functional Theory (DFT) and Time-Dependent Density Functional Theory (TD-DFT) are combined with the measured spectra, a better and deeper understanding of the structural and electronic properties of transition metal complexes is possible.
X-ray Magnetic Circular Dichroism (XMCD) is a technique that analyzes the magnetic properties of isolated, trapped ions at cryogenic temperatures inside an externally applied magnetic field, using highly brilliant polarized X-ray photons in conjunction with a mass spectrometer. This element-selective technique, combined with sum rule analysis, allows for the decomposition of the total magnetic moments of various metal centers into their spin and orbital contributions. A determination of the magnetic couplings between distinct metal centers in multinuclear complexes is possible via the broken-symmetry approach in combination with XMCD.
Fused Filament Fabrication (FFF), an extrusion-based additive manufacturing technique, is becoming increasingly popular for polymer processing in academia and industry since it provides several benefits. Owing to the inherently layered nature of additive structures, the quality of the inter-filament bonding in 3D printed components poses the main challenge for the application to mechanically critical components. Still, the precise placement of the material allows for generating load-path-specific orientations within a volume. In this work, to improve inter-filament bonding, the effects of the processing conditions on the mechanical properties of macroscopically defect-free 3D printed polypropylene (PP) were comprehensively investigated, based on an analysis of supermolecular morphology formation in combination with local thermal simulations. Additionally, to exploit the anisotropic properties of the FFF process, specifically the unique fiber orientation, a composite based on PP and poly(ethylene terephthalate) (PET) microfibrils was prepared, and the morphology and the effects of the PET-fiber reinforcement on the mechanical performance were studied. The importance of the fiber orientation for the tribological properties was highlighted by the characterization of two printed fiber-reinforced polyetheretherketone (PEEK)-based compounds sliding against a steel ring. By understanding in depth the effect of the processing conditions and the anisotropic properties of fiber-reinforced composites, practical insights were gained into how the material potential can be exploited via the FFF process.
The thesis investigates the phenomenon of hypocoercivity for Langevin-type equations on manifolds via a powerful abstract Hilbert space method. In applications, hypocoercivity experienced by the semigroup can be used to find optimal parameters for the production of nonwoven fleeces. Furthermore, the last chapter introduces a new scaling limit technique: Employing the concept of so-called stratifolds we can show Kuwae-Shioya-Mosco convergence of anisotropic 3D fibre lay-down models to an isotropic 2D model.
Aerodynamic design optimization, considered in this thesis, is a large and complex area spanning different disciplines from mathematics to engineering. To perform optimizations on industrially relevant test cases, various algorithms and techniques have been proposed throughout the literature, including the Sobolev smoothing of gradients. This thesis combines the Sobolev methodology for PDE constrained flow problems with the parameterization of the computational grid and interprets the resulting matrix as an approximation of the reduced shape Hessian.
Traditionally, Sobolev gradient methods help prevent a loss of regularity and reduce high-frequency noise in the derivative calculation. Such a reinterpretation of the gradient in a different Hilbert space can be seen as a shape Hessian approximation. In the past, such approaches have been formulated in a non-parametric setting, while industrially relevant applications usually have a parameterized setting. In this thesis, the presence of a design parameterization for the shape description is explicitly considered. This research aims to demonstrate how a combination of Sobolev methods and parameterization can be done successfully, using a novel mathematical result based on the generalized Faà di Bruno formula. Such a formulation can yield benefits even if a smooth parameterization is already used.
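In the classical non-parametric setting, the Sobolev smoothing referred to above is typically realized by solving an elliptic equation on the design surface (a schematic form; \(\epsilon > 0\) is a smoothing parameter and \(\Delta\) the Laplace-Beltrami operator):
\[ (\mathrm{id} - \epsilon \Delta)\, \tilde{g} = g, \]
which maps the \(L^2\) shape gradient \(g\) to its smoothed \(H^1\) representative \(\tilde{g}\) and damps high-frequency components. Interpreting \(\mathrm{id} - \epsilon \Delta\) as an approximation of the reduced shape Hessian is what the parameterized formulation developed in this thesis carries over to the CAD setting.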
The results obtained allow for the formulation of an efficient and flexible optimization strategy, which can incorporate the Sobolev smoothing procedure for test cases where a parameterization describes the shape, e.g., a CAD model, and where additional constraints on the geometry and the flow are to be considered. Furthermore, the algorithm is also extended to One Shot optimization methods. One Shot algorithms are a tool for simultaneous analysis and design when dealing with inexact flow and adjoint solutions in a PDE constrained optimization. The proposed parameterized Sobolev smoothing approach is especially beneficial in such a setting to ensure a fast and robust convergence towards an optimal design.
Key features of the implementation of the algorithms developed herein are pointed out, including the construction of the Laplace-Beltrami operator via finite elements and an efficient evaluation of the parameterization Jacobian using algorithmic differentiation. The newly derived algorithms are applied to relevant test cases featuring drag minimization problems, particularly for three-dimensional flows with turbulent RANS equations. These problems include additional constraints on the flow, e.g., constant lift, and the geometry, e.g., minimal thickness. The Sobolev smoothing combined with the parameterization is applied in classical and One Shot optimization settings and is compared to other traditional optimization algorithms. The numerical results show a performance improvement in runtime for the new combined algorithm over a classical Quasi-Newton scheme.
The choice of the optimal rolling bearing depends on the boundary conditions and the requirements of the application. Rolling bearings are thus designed with respect to requirements such as load-carrying capacity, frictional losses, or speed limits. The optimization of the internal geometry of rolling bearings for specific applications is still a focus of research. Moreover, new rolling bearings based on existing geometries have been developed in recent years and are under continuous development up to now.
One of the most commonly used rolling bearings for combined loads, when a high load-carrying capacity is needed, is the tapered roller bearing (TRB). Although this type of rolling bearing is used in widespread applications, its relatively high friction losses occurring at the rib contact remain a focal point for engineers in this field. A solution for reducing the frictional losses of TRBs in applications where a high load-carrying capacity is needed is still being sought. Many recent studies focus on the optimization of the contact between the roller end and the raceway rib surface. In contrast, this work focuses on the development of a new type of rolling bearing, based on the existing TRB, in which a rib contact is no longer needed.
First of all, the geometrical parameters defining the internal geometry of rolling bearings, more specifically the contact between the roller and the raceways, have been studied. Moreover, several patents defining new rolling bearing geometries have been analyzed. Based on the correlations observed between the different geometrical parameters, types of geometries and outcomes, the geometry of a new type of rolling bearing has been developed. In order to study its behavior, a multi-body simulation (MBS) model of the new type of rolling bearing has been generated. Moreover, in order to validate the model, a prototype of the geometry under study has been manufactured and experimentally tested.
The experimental results have been compared with the simulated results as well as with a TRB of the same main dimensions. After the validation of the model, several simulations have been conducted in order to better understand the behavior of the new rolling bearing design. To this end, a sensitivity analysis has been carried out, in which the main geometrical parameters defining the roller-raceway contact have been varied and their influence on the main outcomes examined. Finally, an application example of an axle gearbox for heavy-duty trucks is presented and its results compared with those of a tapered roller bearing.
In recent years, Augmented Reality (AR) has made its way into everyday devices. Most smartphones are AR-enabled, providing applications like pedestrian navigation, Point of Interest highlighting, gaming, and retail. The high-tech industry has been focused on developing smartglasses that present virtual elements directly in front of the viewer's eyes, allowing more immersive AR experiences. Smartglasses can also be deployed while driving for an enhanced and safer experience. A 3D-registered augmentation of the real world with navigation arrows, lane highlighting, or warnings can decrease the duration of inattentiveness to driving caused by glancing at other screens. Enabling the usage of HMDs inside cars requires knowing their exact position and orientation (6-DoF pose) in the car. This necessitates sensors built either into the AR glasses or into the car. In a car, the latter option, called outside-in tracking, is more attractive for two reasons. First, AR glasses containing different sensor sets exist, hampering the search for a single solution that works across HMDs. Second, the view from the driver's perspective combines static interior and dynamic exterior features, complicating the search for a reliable set of features. Nowadays, tracking methods utilize Deep Learning for a more generalizable and accurate derivation of the 6-DoF pose, achieving outstanding results for head and object pose estimation. In this thesis, we present Deep Learning-based in-car 6-DoF AR glasses pose estimation approaches. The goal of this work is the exploration of accurate HMD pose estimation with the help of neural networks. The thesis achieves this by investigating numerous pose estimation techniques. Evaluations on the recorded HMDPose dataset, consisting of infrared images of drivers wearing different HMD models, constitute the foundation for this. First, image-based algorithms are derived and evaluated on the dataset. For comparison, we evaluated image-based methods that incorporate temporal information. Further, pose estimation based on point clouds generated from the infrared images is analyzed. An investigation of various head pose estimation methods is conducted to assess their potential use. In conclusion, we introduce several highly accurate AR glasses pose estimators. The HMD pose alone achieves better results than the head pose and than the combination of head and HMD pose. Especially our image-based methods, with optional use of temporal information, can efficiently and accurately regress the AR glasses pose. Our algorithms show excellent estimation results on live data when deployed inside a car, making seamless in-car HMD usage possible in the future.
The wireless spectrum is already a scarce good, shared by multiple competing technologies such as Bluetooth, ZigBee and Wi-Fi, and the hunger for traffic is only increasing. Due to the heterogeneity of the existing wireless technologies and the real threat that interference poses to network performance, sophisticated techniques must be developed to ensure acceptable levels of quality of service.
In this thesis, we present a passive channel sensing scheme based on both energy and signal detection, which primarily considers the spectrum occupation of foreign traffic while allowing for additional complementary information such as the signal-to-noise ratio. The resulting channel quality metric is first corrected for the spectrum occupation of internal transmissions and then aggregated with the help of a moving average followed by an exponentially weighted moving average. This aggregation keeps the metric both sufficiently stable and adaptive to significant changes in channel usage. Moreover, the channel quality metric is made volatility-aware by penalizing qualities proportionally to their downward volatility. This yields a conservative metric and allows channels with similar aggregated qualities but different volatility behavior to be differentiated.
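A minimal sketch of this aggregation pipeline follows; the window length, smoothing factor, and the exact form of the downward-volatility penalty are illustrative choices, not the values used in the thesis.

    import numpy as np

    def channel_quality(raw, sma_window=8, alpha=0.2, penalty=1.0):
        """Aggregate raw per-measurement channel qualities: a moving
        average for stability, an exponentially weighted moving average
        for adaptivity, and a penalty proportional to the downward
        volatility of the smoothed series (conservative metric)."""
        raw = np.asarray(raw, dtype=float)
        sma = np.convolve(raw, np.ones(sma_window) / sma_window, mode="valid")
        ewma = np.empty_like(sma)
        ewma[0] = sma[0]
        for t in range(1, len(sma)):
            ewma[t] = alpha * sma[t] + (1 - alpha) * ewma[t - 1]
        downward = np.clip(np.diff(ewma), None, 0.0)   # only downward moves
        return float(ewma[-1] - penalty * np.sqrt(np.mean(downward ** 2)))

    rng = np.random.default_rng(1)
    stable = 0.8 + 0.01 * rng.standard_normal(200)
    bursty = 0.8 + 0.15 * rng.standard_normal(200)
    print(channel_quality(stable), channel_quality(bursty))  # bursty is lower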
Our second main contribution is a schedule-based channel sensing protocol, in which nodes possess two network interfaces, one for communication and one for channel sensing. Channel sensing schedules are derived from communication schedules, i.e. the channel hopping sequences used for communication, with the help of a stochastic local search heuristic that attempts to minimize channel sensing bias and the channel overlap between both schedules, and to maximize overlap fairness. This minimizes the effect of internal transmissions on the resulting channel quality metric, allowing nodes to derive channel quality primarily from foreign traffic in an unbiased manner.
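The following toy version shows the flavor of such a stochastic local search; the real heuristic also accounts for channel sensing bias and overlap fairness, while this sketch minimizes only the slotwise overlap between the sensing and communication schedules (all names and the schedule format are assumptions).

    import random

    def derive_sensing_schedule(comm, channels, iters=2000, seed=0):
        """Stochastic local search for a sensing schedule of the same
        length as the communication schedule `comm`, minimizing slotwise
        overlap (sensing the channel used for our own transmissions would
        bias the quality estimate towards internal traffic)."""
        rng = random.Random(seed)
        sched = [rng.choice(channels) for _ in comm]
        overlap = lambda s: sum(a == b for a, b in zip(s, comm))
        best = overlap(sched)
        for _ in range(iters):
            i = rng.randrange(len(sched))
            old, sched[i] = sched[i], rng.choice(channels)
            cost = overlap(sched)
            if cost <= best:        # accept improving and sideways moves
                best = cost
            else:                   # revert worsening moves
                sched[i] = old
        return sched, best

    comm = [1, 3, 2, 4, 1, 2, 3, 4, 1, 2]
    print(derive_sensing_schedule(comm, channels=[1, 2, 3, 4]))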
Finally, we propose and implement a stabilization protocol for keeping nodes in an ad-hoc network tick-synchronized and schedule-consistent w.r.t. a communication schedule. This stabilization protocol makes use of special messages, namely tick frames for synchronization, channel quality reports for sharing local views of channel conditions and schedule reports for disseminating the global communication hopping sequence. The communication schedules are computed by a master node based on an aggregation of local channel quality views and the re-computation of these schedules is triggered by significant changes in channel conditions. The resulting protocol is robust against changes in topology and channel conditions.
Wetted contacts play an important role in many fields. A prominent example in engineering is the lubricated contact between tool and workpiece in machining processes with cutting fluids. In such contacts, highly dynamic processes occur in the fluid at small length scales under extreme conditions regarding temperature, pressure, and shear. Experimental studies of these phenomena are generally not feasible, so only little information on the actual processes in the contact zone is available. A tractable route for obtaining such information is molecular dynamics (MD) simulation. As input for these simulations, only a potential model that describes the interactions on the atomistic scale is needed. On that basis, even complex processes can be predicted. In the present work, a simple model potential was used, namely the Lennard-Jones truncated and shifted (LJTS) potential, which was parameterized to describe the solids, the fluid, and their interactions. A novel method for determining fluid properties with non-equilibrium MD simulations was developed, which yields thermal, caloric and transport properties in a single simulation run. It can also be used for studying the influence of shear on these properties. With the new method, a comprehensive study of the properties of the LJTS fluid was carried out. Furthermore, it was investigated how these fluid properties change near the solid-fluid interface and how these changes affect the conductive heat transfer between the solid and the fluid. Finally, a nanotribological process was studied, in which all these phenomena occur simultaneously.
We encounter directional data in numerous application areas such as astronomy, biology or engineering. Examples include the direction of arrival of cosmic rays, the direction of flight of migratory birds or the orientation of steel fibres in fibre-reinforced concrete.
In Part I, we define and apply morphological operators, quantiles and depths for directional data. The morphological operators are defined for \(\mathcal{S}^{d-1}\)-valued images with \(\mathcal{S}^{d-1} = \{x \in \mathbb{R}^d : \sqrt{x^T x} = 1\}\), \(d \geq 2\). Since an ordered structure is necessary for a definition of these operators, which is not naturally given between vectors, an order is determined with the help of the theory of statistical depth functionals. This allows for defining the basic operators erosion and dilation as well as morphological (multi-scale) operators for \(\mathcal{S}^{d-1}\)-valued images based on them. The operators introduced are related to their grey value counterparts. Furthermore, quantiles and the "angular Mahalanobis" depth for directional data introduced by Ley et al. (2014) are extended. The concept of Ley et al. (2014) provides useful geometric properties of the depth contours (such as convexity and rotational equivariance) and a Bahadur-type representation of the quantiles. Their concept is canonical for rotationally symmetric depth contours. However, it also produces rotationally symmetric depth contours when the underlying distribution is not rotationally symmetric. We solve this lack of flexibility for distributions with elliptical depth contours. The basic idea is to deform the elliptic contours by a diffeomorphic mapping to rotationally symmetric contours, thus reverting to the canonical case in Ley et al. (2014). Our results are confirmed by a Monte Carlo simulation study and applied to the analysis of fibre directions in fibre-reinforced concrete.
In Part II, we elaborate interdisciplinary results of statistical analysis and stochastic modelling in civil engineering. Our statistical analysis of the correlation between production parameters (fibre length, fibre diameter, fibre volume fraction as well as casting method, superplasticiser content and specimen size) of ultra-high performance fibre-reinforced concrete and the fibre system (spatial arrangement and orientation of the fibres) provides users with a better understanding of this relatively new composite material. The fibre system is modelled by a Boolean model and the fibre orientation by a one-parameter distribution. In addition, the behaviour under tensile loading is modelled.
The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders.
Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., "slice", their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed.
End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects.
To close this gap, machine learning-aided solutions are quickly arising as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, are key problems underlying the network slicing paradigm, and they will be addressed in this thesis.
Within toxicology, reproductive toxicology is a highly relevant and socially particularly sensitive field. It encompasses all toxicological processes within the reproductive cycle and therefore includes many effects and modes of action. This makes the assessment of reproductive toxicity very challenging despite the established in vivo studies. In addition, the in vivo studies are very demanding both in terms of their conduct and their interpretation, and there is scope for decision-making on both aspects. As a result, the interpretation of study results may vary from laboratory to laboratory. For the final classification, the assessment of relevance for humans is decisive. The problem here is that relatively little is known about the species differences between humans and the usual test animals (rat and rabbit). The rabbit in particular has hardly been researched at the molecular biological level. The aim of the dissertation was to develop approaches for a better assessment of reproductive toxicity, with two different foci.
The first aim was to investigate species differences, focusing on the expression of xenobiotic transporters during ontogeny. Xenobiotic transporters of the superfamily of ATP-binding cassette transporters (ABC) or solute carriers (SLC) are known to transport exogenous substances in addition to their endogenous substrates and therefore play an important role in the absorption, distribution and excretion of xenobiotics. Species differences in kinetics can in turn have a major impact on toxic effects. In the study, the expression of 20 xenobiotic transporters during ontogeny was investigated at the mRNA level in the liver, kidney and placenta of rats and rabbits and compared with that of humans. This revealed major differences in the expression of the transporters between the species. However, further studies on the functionality and activity of the xenobiotic transporters are needed to fully assess the kinetic impact of the observed species differences. Overall, the study provides a valid starting point for further systematic investigations of species differences at the protein level. Furthermore, it provides previously unavailable data on the expression of xenobiotic transporters during ontogeny in rabbits, which is an important step in the molecular biological study of this species.
The second part focused on investigating the predictive power of in silico models for reproductive toxicology in relation to pesticides. Neither the commercial nor the freely available models performed adequately in the evaluation. Three reasons could be identified for this: 1. many pesticides are outside the chemical space of the models, 2. differing definitions/assessments of reproductive toxicity, and 3. problems in detecting similarity between molecules. To solve these problems, an extension of the databases on reproductive toxicity in relation to pesticides, respecting a uniform nomenclature, is needed. Furthermore, endpoint-specific models should be developed which, in addition to the usual structure-based fingerprints, use descriptors for, for example, biological activity.
Overall, the dissertation shows how essential it is to further research the modes of action of reproductive toxicity. This knowledge is necessary to correctly assess in vivo studies and their relevance to humans, as well as to improve the predictive power of in silico models by incorporating this information.
Several applications have emerged and benefited from the recent advancements in wireless communication technologies. In the case of industrial automation, wireless networks have substituted wired networks to control and monitor production systems and the factory environment. In such use cases, a common requirement is communication reliability. Technologies based on IEEE 802.15.4, such as WirelessHART and ZigBee, developed for industrial applications, offer deterministic guarantees using reservation-based medium access. However, it is becoming more challenging for these technologies to guarantee sufficiently predictable behavior, as the number of consumer electronics devices equipped with wireless communication technologies operating in the 2.4 GHz ISM band shared by IEEE 802.15.4 is increasing day by day.
Meanwhile, developments in WiFi technology opened the opportunity to use WiFi for industrial applications. Compared to the technologies based on IEEE 802.15.4, WiFi offers significantly higher transmission rates, and off-the-shelf commodity WiFi hardware is available at low cost. However, when using a contention-based technology such as WiFi for industrial applications, additional measures are required to guarantee the specified statistical reliability.
This thesis lays the foundations for developing a multi-hop wireless control network using off-the-shelf IEEE 802.11 (WiFi) hardware operating in contention mode that can satisfy the specified reliability requirements of the applications. In a multi-hop wireless network, the communication reliability between the nodes depends on the routes determined by the routing protocol and on the management of these routes. We introduce a novel Quality-of-Service (QoS) routing protocol for contention-based wireless technologies such as WiFi that prioritizes reliability as the QoS requirement for route selection. The proposed routing protocol relies on different aspects of the network to determine and manage the routes. For instance, it requires algorithms and protocols to monitor and measure link quality, available bandwidth, or medium overload. Further, the determined routes require certain statistical link properties for their successful operation. We develop and evaluate different protocols, algorithms, and metrics to monitor and measure these different aspects of the network in this thesis.
This dissertation describes the implementation, validation, and troubleshooting of "Digital Twins" in assembly processes of thin structures such as parts from the automotive and aerospace industry. As requirements in terms of cost, weight, and human (pedestrian) safety are increasing for modern vehicles, thinner materials are used for exterior components. As a result, components become softer and less stable, which is challenging for the assembly processes and impacts the resulting quality. The most critical quality measures are gap and flushness, as these affect aesthetics, wind noise, and fuel consumption of the final vehicle. To compensate for geometrical deviations, parts have adjustable mechanical interfaces which are used to tune in gaps and flushness for each individual assembly. For the components being assembled, individual process parameters depending on the geometry of the actual physical part must be defined. This is a challenging task that cannot be solved in a straightforward manner. However, assembly quality can be predicted by setting up individual Finite Element Method (FEM) simulation models for each part being assembled. These simulation models are called Digital Twins (DTs) as they are enriched with measured properties from the actual physical part. In this way, precise predictions can be made and optimal assembly parameters for automated processes are derived. The demonstration use case in this dissertation is the assembly process of exterior car components made from sheet metal. For this kind of process, the geometrical deviations of individual components are crucial and have to be considered by the DT. To capture geometrical deviations, 3D scanning is employed, which provides a high-resolution point cloud representation of the actual physical part. This point cloud is processed further to obtain the DT that preserves the measured geometry. This dissertation tackles the following challenges: (a) setting up DTs at different levels of detail, (b) correctly post-processing 3D-scanned data to remove systematic measurement errors, (c) automatically morphing meshes to derive simulation models from measured point clouds, and (d) troubleshooting DTs with human-in-the-loop approaches. For all approaches, validations are provided that underline applicability and benefits. All methods and results are discussed from a high-level perspective, and connections as well as the interplay between methods are elaborated. Each method either improves or extends existing approaches or provides benefits, i.e. higher precision, compared to existing solutions.
Transferfilm-Luminance-Analysis: A comprehensive transfer film detection and evaluation method
(2022)
In the field of polymer tribology, the deposition of wear debris on the counter-facing wear track during dry sliding, also known as transfer film, is a fundamental process. These typically discontinuous and heterogeneous films are often assumed to be directly responsible for or at least involved in the progression of friction and wear. Nevertheless, the state of the art shows that the determination and evaluation of such transfer film formations are most frequently reported either only qualitatively, ex post, or derived from local, possibly unrepresentative inspection areas. This holds both for determining the extent of transfer film formation and for other transfer film-related attributes, e.g. its formation speed, coverage, adhesion, and thickness.
Because of this data situation, transferring results from one study to another is challenging and risks misinterpretation. Furthermore, without temporally resolved and quantitative results, correlation analyses between friction or wear and the extent of transfer film formation, or other attributes, are difficult if feasible at all. As a consequence, the long-standing research question on the role of transfer film, whether it is the cause or the consequence of friction or wear, is still unanswered.
In order to come closer to the answer, new transfer film detection and evaluation methods are needed whose results overcome these shortcomings.
The following elaboration develops such a concept based on photo-optical techniques combined with automation programs. The developed transfer film luminance analysis (TLA) was demonstrated on a material composition study based on polyphenylene sulfide (PPS) and PPS compositions incorporating various fillers, including carbon fibers, graphite, and polytetrafluoroethylene. The results demonstrate the spatio-temporal dynamics of the transfer film formation process in a quantitative manner.
Based on this data, transfer film metrics were developed in the form of transfer film kinetic properties, spatial uniformity, temporal stability, and transfer film thickness. Finally, based on this information, an integrated data model was developed to identify correlations of various parameters and their contribution to friction and wear. Furthermore, new hypotheses on transfer film formation mechanisms were formulated.
Overall, with the help of TLA and the data-driven model, the understanding of transfer films was improved, and a step towards answering the question of the role of transfer films was achieved.
Organizational Coordination of Digital Structures: The Effects of ICT and Values on Grand Challenges
(2022)
This doctoral thesis sheds light on organizing contributions toward grand challenges by highlighting various effects on organizing values, coordination mechanisms, and digital technologies. Grand challenges are defined as vast and complex problems affecting organizations, governments, and entire societies. The objective of this thesis is to address such global societal problems. Towards this end, first a systematic literature review depicts the overall process of addressing grand challenges. Second, building upon the holistic process from this literature review, an empirical inquiry is conducted, scrutinizing the development of organizing mechanisms and structures along organizing values. Third, digital technologies and their role in the solution process are explored. Taken as a whole, the systematic literature review offers a holistic overview of the solution process of grand challenges addressed by organizations, while the empirically substantiated theoretical frameworks analyze and highlight coordination mechanisms, organizing structures and values, as well as digital infrastructures in great detail.
Wine is a complex chemical mixture that is bound to change over time. Most wines are produced for consumption within months. Some premium wines are meant to mature for several years or even decades after bottling. The post-bottling evolution and the longevity of a wine depend on its initial chemical composition and the storage conditions. Temperature, exposure to light and the closure type are often mentioned as the most important storage influences. Especially elevated temperature is known to cause accelerated aging reactions in wine. Refrigerated wine storage cabinets promise to be the best storage option without the need for a wine cellar. They are available in different sizes and fit in every household. However, the influence of vibrations and low-interval temperature fluctuations caused by compressors are parameters that have been neglected in the literature. The aim of this thesis was to investigate whether vibrations and low-interval temperature fluctuations, which occur in refrigerated wine storage cabinets, have an influence on the post-bottling evolution of a wine. The influence of the two parameters was studied separately.
The impact of vibration on oxidation and gas uptake from the headspace of a wine bottle into the wine was investigated using a model wine with saturated O2 and different headspace volumes. The study revealed that vibration promotes the dissolution of O2 from the headspace of the bottle into the wine, resulting in a faster SO2 consumption. Furthermore, it was shown that a horizontal bottle position accelerated the O2 uptake significantly. It was concluded that the increased surface area between headspace and wine accelerates the O2 dissolution in wine. Also, larger headspace volumes caused an accelerated O2 uptake into the wine. An experiment without any headspace volume revealed that the factors vibration and bottle position did not accelerate the O2 consumption in wine. This proves that vibration and bottle position accelerate only the dissolution of O2 in wine, but not the chemical reaction of O2 with wine constituents.
The influence of vibration on the volatile profile of wine was investigated using Riesling sparkling and still wines sealed with different closures that were subjected to vibration for six months. Vibration caused no CO2 losses, SO2 losses or color changes in any of the wines, indicating that vibration caused by compressors has no impact on the gas permeability of the used closures. However, vibration affected the volatile profile of the sparkling wine and of the Riesling still wine sealed with a screw cap. Similar to the model wine study described earlier, it was shown that the equilibrium of volatile substances between the wine and the headspace in a bottle was influenced by vibration. The gas-liquid equilibrium of some volatile compounds was shifted towards the wine, while that of others was shifted towards the headspace. As a result, the concentration of volatile compounds in the wine changes. Besides this indirect influence of vibration, the results of this study also suggested that specific degradation and formation reactions of volatile compounds are directly influenced by vibration. These multiple effects of vibration most likely explain why increasing vibration intensities could not be proportionally related to the observed volatile changes. The investigation of different wine styles revealed that the impact of vibration depends strongly on the initial composition of the wine, its age, and the packaging conditions. In particular, headspace volume, closure type and CO2 pressure are likely to influence the equilibrium of volatile substances between the wine and the headspace in a bottle.
Another study investigated the impact of low-interval temperature fluctuations on the volatile profile of wine. For this purpose, a Riesling wine was stored for two years under different temperature fluctuation patterns caused by compressors. Additionally, a model wine with nine volatile substances of known concentrations was stored for eight months under the same fluctuation patterns. The low-interval temperature fluctuations were compared to the mean value of the temperature fluctuations. Chemical and sensory analysis revealed that low-interval temperature fluctuations accelerate wine aging reactions such as ester hydrolysis and monoterpene degradation. Even small temperature amplitudes showed a significant impact on wine aging. The observed effect was explained by the Arrhenius equation, which states that reaction rates increase exponentially with rising temperature. A pump effect of air through the closure was initially assumed but not observed in this study. Small deviations in wine temperature, such as those caused by door openings of a refrigerator, were found to be negligible. It was concluded that low-interval temperature fluctuations can accelerate wine aging reactions. The amplitude of the temperature fluctuations should therefore be kept as small as possible during bottle storage of wine.
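For reference, the Arrhenius equation relates the rate constant \(k\) of a reaction to the absolute temperature \(T\), with pre-exponential factor \(A\), activation energy \(E_a\) and gas constant \(R\):
\[ k = A \, \exp\!\left(-\frac{E_a}{R\,T}\right). \]
For typical activation energies at storage temperatures, \(k\) is convex in \(T\), so a temperature fluctuating around a mean value yields a higher average reaction rate than constant storage at that mean (Jensen's inequality); this is consistent with the observation that fluctuations accelerate aging relative to their mean.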
This thesis showed that both parameters, vibration and low-interval temperature fluctuations, influence the evolution of wine during bottle storage. Regarding storage conditions in a refrigerated wine storage cabinet, these parameters should be monitored. Wine connoisseurs should therefore consider good wine cabinets, since some manufacturers emphasize the importance of minimizing vibrations and temperature fluctuations in their devices. Technology development should be advanced to reduce both vibration and temperature fluctuations in refrigerated wine storage cabinets. Future research should focus on specific wine compounds in model systems and realistic vibration conditions to reveal the relationship between vibration intensities and reaction rates. The impact of low-interval temperature fluctuations on wine compositional changes should be investigated considering horizontal and vertical bottle positions. The calculated acceleration factors due to temperature fluctuations have to be verified against isothermal storage conditions at higher temperatures.
This PhD thesis is concerned with the visual analysis of time-dependent scalar field ensembles as occur in climate simulations.
Modern climate projections consist of multiple simulation runs (ensemble members) that vary in parameter settings and/or initial values, which leads to variations in the resulting simulation data.
The goal of ensemble simulations is to sample the space of possible futures under the given climate model and provide quantitative information about uncertainty in the results.
The analysis of such data is challenging because apart from the spatiotemporal data, also variability has to be analyzed and communicated.
This thesis presents novel techniques to analyze climate simulation ensembles visually.
A central question is how the data can be aggregated under minimized information loss.
To address this question, a key technique applied in several places in this work is clustering.
The first part of the thesis addresses the challenge of finding clusters in the ensemble simulation data.
Various distance metrics lend themselves to the comparison of scalar fields; these are explored theoretically and practically.
A visual analytics interface allows the user to interactively explore and compare multiple parameter settings for the clustering and investigate the resulting clusters, i.e. prototypical climate phenomena.
A central contribution here is the development of design principles for analyzing variability in decadal climate simulations, which has led to a visualization system centered around the new Clustering Timeline.
This is a variant of a Sankey diagram that utilizes clustering results to communicate climatic states over time coupled with ensemble member agreement.
It can reveal several interesting properties of the dataset, such as: into how many inherently similar groups the ensemble can be divided at any given time, whether the ensemble diverges in general, and whether there are different phases over time, perhaps periodicity, or outliers.
The Clustering Timeline is also used to compare multiple climate simulation models and assess their performance.
The Hierarchical Clustering Timeline is an advanced version of the above.
It introduces the concept of a cluster hierarchy that may group the whole dataset, down to the individual static scalar fields, into clusters of various sizes and densities, recording the nesting relationships between them.
A further contribution of this work to visualization research is an investigation of ways to practically utilize a hierarchical clustering of time-dependent scalar fields for analyzing the data.
To this end, a system of different views is proposed which are linked through various interaction possibilities.
The main advantage of the system is that a dataset can now be inspected at an arbitrary level of detail without having to recompute a clustering with different parameters.
Interesting branches of the simulation can be expanded to reveal smaller differences in critical clusters or folded to show only a coarse representation of the less interesting parts of the dataset.
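The practical advantage can be illustrated with an off-the-shelf hierarchical clustering (a SciPy sketch on placeholder data, not the system developed in this thesis): the hierarchy is computed once and can afterwards be cut at any granularity without reclustering.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: one row per ensemble member/time step (flattened fields).
X = np.random.rand(40, 1000)
Z = linkage(X, method="ward")  # the hierarchy is computed once

# The level of detail can now be varied without recomputing the clustering:
coarse = fcluster(Z, t=3, criterion="maxclust")   # few large clusters
fine = fcluster(Z, t=12, criterion="maxclust")    # many small clusters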
The last building block of the suite of visual analysis methods developed for this thesis aims at a robust, (largely) automatic detection and tracking of certain features in a scalar field ensemble.
Techniques are presented that can identify and track super- and sub-level sets.
From these sets, I derive "centers of action", which mark the location of extremal climate phenomena that govern the weather (e.g. the Icelandic Low and the Azores High).
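For a scalar field \(f\) and a threshold \(c\), the tracked sets are the standard super- and sub-level sets
\[ L_c^{+}(f) = \{\, x \mid f(x) \ge c \,\}, \qquad L_c^{-}(f) = \{\, x \mid f(x) \le c \,\}, \]
so that, for instance, a sub-level set of the pressure field can capture a low-pressure system; a center of action can then be summarized by a representative point, such as the extremum, of a tracked connected component.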
The thesis also presents visual and quantitative techniques to evaluate the temporal change of the positions of these centers; such a displacement would be likely to manifest in changes in weather.
In a preliminary analysis with my collaborators, we indeed observed changes in the loci of the centers of action in a simulation with increased greenhouse gas concentration as compared to pre-industrial concentration levels.
One of the biggest social issues in mature societies such as those of Europe and Japan is the aging population and declining birth rate. These societies face a serious problem with the retirement of expert workers such as doctors and engineers. Especially in sectors in which it takes a long time to train experts, such as medicine and industry, the retirement and injury of experts is a serious problem. Technology to support the training and assessment of skilled workers (like doctors and manufacturing workers) is strongly required by society. Although there are some solutions for this problem, most of them are video-based, which violates the privacy of the subjects. Furthermore, they are not easy to deploy due to the need for large amounts of training data.
This thesis provides a novel framework to recognize, analyze, and assess human skills with minimum customization cost. The presented framework tackles this problem in two different domains: industrial setups and the medical operations of catheter-based cardiovascular interventions (CBCVI).
In particular, the contributions of this thesis are four-fold. First, it proposes an easy-to-deploy framework for human activity recognition based on a zero-shot learning approach, which builds on learning basic actions and objects. The model recognizes unseen activities as combinations of basic actions, learned in a preliminary step, and the involved objects. Therefore, it is completely configurable by the user and can be used to detect completely new activities.
Second, a novel gaze-estimation model for the attention-driven object detection task is presented. The key features of the model are: (i) the use of deformable convolutional layers to better incorporate spatial dependencies of different shapes of objects and backgrounds, and (ii) the formulation of the gaze-estimation problem in two different ways, as a classification as well as a regression problem. We combine both formulations using a joint loss that incorporates both the cross-entropy and the mean-squared error in order to train our model. This improved the gaze-estimation error of the model from 6.8 with only the cross-entropy loss to 6.4 with the joint loss.
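A minimal Python/PyTorch-style sketch of such a joint loss, assuming the gaze direction is both discretized into bins (classification head) and regressed directly; the tensor names and the weighting factor are illustrative assumptions, not the thesis's exact formulation:

import torch.nn.functional as F

def joint_gaze_loss(cls_logits, reg_output, target_bins, target_angles, w=0.5):
    # Classification view: cross-entropy over discretized gaze bins.
    ce = F.cross_entropy(cls_logits, target_bins)
    # Regression view: mean-squared error on the continuous gaze angles.
    mse = F.mse_loss(reg_output, target_angles)
    # Joint objective combining both formulations (weight w is assumed).
    return ce + w * mse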
The third contribution of this thesis targets the area of quantifying the quality of actions using wearable sensors. To address the variety of scenarios, we have targeted two possibilities: a) both expert and novice data are available, and b) only expert data is available, a quite common case in safety-critical scenarios.
Both of the methods developed for these scenarios are deep learning based. In the first, we use autoencoders with a One-Class SVM, and in the second we use Siamese networks. These methods allow us to encode the expert's expertise and to learn the differences between novice and expert workers. This enables quantification of the performance of a novice in comparison to an expert worker.
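A schematic Python sketch of the expert-only scenario, assuming an autoencoder has already been trained on expert recordings and its encoder maps sensor windows to latent codes; all names and hyperparameters are assumptions:

import numpy as np
from sklearn.svm import OneClassSVM

def skill_scores(expert_codes: np.ndarray, novice_codes: np.ndarray) -> np.ndarray:
    # Fit the one-class model on expert data only (the "normal" class).
    ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(expert_codes)
    # Larger decision values indicate more expert-like executions,
    # quantifying the novice's performance relative to the expert.
    return ocsvm.decision_function(novice_codes)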
The fourth contribution explicitly targets medical practitioners and provides a methodology for a novel gaze-based temporal and spatial analysis of CBCVI data. The developed methodology allows continuous registration and analysis of gaze data for analyzing the visual X-ray image processing (XRIP) strategies of expert operators in live-case scenarios and may assist in transferring experts' reading skills to novices.
This thesis addresses the need for a new approach to hardware sign-off verification which guarantees the security of processors at the Register Transfer Level (RTL). To this end, we introduce a formal definition of security with respect to microarchitectural vulnerabilities, formulated as a hardware property.
We present a formal proof methodology based on Unique Program Execution Checking (UPEC) which can be used to systematically detect all vulnerabilities to transient execution attacks in RTL designs. UPEC does not exploit any a priori knowledge of known attacks and can therefore also detect vulnerabilities based on new, so far unknown, types of channels. This is demonstrated by the new attack scenarios discovered in our experiments with UPEC. UPEC operates on a verification model consisting of two identical instances of the SoC design under verification. The SoC instances in the model execute the same program.
The only difference between the two instances is the content of the protected part of the memory, i.e., the secret.
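Schematically, the checked property can be paraphrased as follows (an illustrative reading, not the thesis's exact formulation): writing \(s_1(t), s_2(t)\) for the states of the two SoC instances and \(=_{\overline{\text{secret}}}\) for equality on all state-holding elements outside the protected memory region,
\[ s_1(0) =_{\overline{\text{secret}}} s_2(0) \;\Longrightarrow\; \forall t \ge 0:\; s_1(t) =_{\overline{\text{secret}}} s_2(t). \]
Any counterexample exhibits a channel through which the secret influences microarchitectural state, i.e., a potential vulnerability.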
The development of machine learning algorithms and novel sensing modalities has boosted the exploration of human activity recognition (HAR) in recent years. In this work, we explored field-based sensing solutions and different machine learning models for HAR tasks to address the shortcomings of existing HAR sensing solutions, such as the weak robustness of RF-based solutions and the environment-dependency of optic-based solutions, aiming to supply a competitive alternative sensing approach for HAR tasks.
A field, in physics, describes a region in which each point is affected by a force. Field sensing is potentially a low-cost, low-power, non-intrusive, privacy-respecting HAR solution that is ideal for long-term, wearable activity recording. By directly or indirectly monitoring the field strength or other variables affected by field variation, some unsolved HAR problems can be addressed where other sensing solutions fail. An example is the social distance monitoring problem, where the most widely adopted approach is based on Bluetooth signal strength measurement. However, that signal is so subtle that any object surrounding the signal emitter will cause signal attenuation. To guarantee the accuracy of social distance monitoring, we developed an induced magnetic field-based social distance monitoring system with sub-ten-centimetre accuracy. Moreover, the system is robust and resistant to environmental variations. Like Bluetooth, other RF-wave-based sensing modalities also suffer from the multi-path effect caused by refraction; thus their signal is unreliable for positioning applications where higher accuracy and robustness are needed. Besides the magnetic field, we also explored a natural static passive electric field, the field between the human body and its surroundings, namely the human body capacitance (HBC). HBC is a physiological parameter describing the charge distribution difference between the body and the surroundings that has seldom been explored before. We developed several wearable, low-cost, low-power-consumption hardware platforms, based either on an oscillating unit or on a sensing front end composed of discrete components followed by a high-resolution analog-to-digital module, to monitor the variation of this parameter with respect to body movement and environmental variations. Compared with inertial sensors, HBC can deliver full-body movement perception, meaning that the movement of the legs can be perceived by a wrist-worn HBC sensing unit, which is far beyond the sensing ability of an inertial sensing unit.
To summarize, we introduced two competitive field sensing modalities for HAR tasks: magnetic field sensing for position-related services, and passive electric field sensing for full-body action and environmental variation sensing. Both are still at an infant stage and not fully explored in the community. The advantages of the two field sensing modalities were demonstrated with a series of position-related and motion-related experiments.
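One physical reason for the robustness of magnetic near-field ranging, given here as a common first-order model rather than the thesis's calibration: the on-axis flux density of a magnetic dipole with moment \(m\) decays with the cube of the distance \(r\),
\[ B(r) \approx \frac{\mu_0\, m}{2\pi r^3}, \]
so a measured field strength maps steeply and monotonically onto distance, and low-frequency magnetic fields are hardly attenuated by non-magnetic objects such as the human body, in contrast to RF signal strength.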
This thesis comprises investigations on the interaction of \(N_2\) adsorbate molecules with size-selected iron clusters under cryo conditions. All investigations were performed at the customized Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometer FRITZ. This setup uses a laser vaporization (LVAP) source to generate the investigated iron cluster ions. Cryo kinetic studies investigate the stepwise \(N_2\) adsorption on size-selected \(Fe_n^+\) clusters under well-defined isothermal conditions. The adsorption behavior and the adsorption limits yield information about the cluster structure and its reactivity. By coupling a tunable IR laser into the ICR cell, it is possible to perform cryo infrared photodissociation (IR-PD) spectroscopy experiments. This provides information on binding motifs of the \(N_2\) adsorbates and the cluster structure. Combining both methods with quantum chemical calculations via density functional theory (DFT) substantiates the experimental results and deepens the fundamental insights into the cluster structures, their reactivity, and the metal-adsorbate bonding.
As the use of concurrency in software has gained importance in recent years and is still rising, new types of defects have increasingly appeared in software. One of the most prominent and critical of these new defect types are data races. Although research has increased the effectiveness of dynamic quality assurance regarding data races, efficiency in the quality assurance process is still a factor preventing widespread practical application. First, the dynamic quality assurance techniques used for the detection of data races are inefficient: too much effort is needed for conducting dynamic quality assurance. Second, the dynamic quality assurance techniques used for the analysis of reported data races are inefficient: too much effort is needed for analyzing reported data races and identifying issues in the source code.
The goal of this thesis is to enable efficiency improvements in the process of quality assurance for data races by: (1) analyzing the representation of the dynamic behavior of a system under test. The results are used to focus instrumentation of this system, resulting in a lower runtime overhead during test execution compared to a full instrumentation of this system. (2) Analyzing characteristics and preprocessing of reported data races. The results of the preprocessing are then provided to developers and quality assurance personnel, enabling an analysis and debugging process, which is more efficient than traditional analysis of data race reports. Besides dynamic data race detection, which is complemented by the solution, all steps in the process of dynamic quality assurance for data races are discussed in this thesis.
The solution for analyzing UML Activities for nodes possibly executing in parallel to other nodes or themselves is based on a formal foundation using graph theory. A major problem that has been solved in this thesis was the handling of cycles within UML Activities. This thesis provides a dynamic limit for the number of cycle traversals, based on the elements of each UML Activity to be analyzed and their semantics. Formal proofs are provided with regard to the creation of directed acyclic graphs and with regard to their analysis concerning the identification of elements that may be executed in parallel to other elements. Based on an examination of the characteristics of data races and data race reports, the results of dynamic data race detection are preprocessed and the outcome of this preprocessing is presented to users for further analysis.
This thesis further provides an exemplary application of the solution idea and of the results of analyzing UML Activities, as well as an exemplary examination of the efficiency improvement of dynamic data race detection, which showed a reduction in the runtime overhead of 44% when using focused instrumentation compared to full instrumentation. Finally, a controlled experiment has been set up and conducted to examine the effects of the preprocessing of reported data races on the efficiency of analyzing data race reports. The results show that the solution presented in this thesis enables efficiency improvements in the analysis of data race reports between 190% and 660% compared to using traditional approaches.
Finally, opportunities for future work are shown, which may enable a broader usage of the results of this thesis and further improvements in the efficiency of quality assurance for data races.
The present work deals with the investigation of (in particular neutral) cold, isolated molecules, aggregates and metal complexes in the gas phase by means of UV and combined IR/UV laser spectroscopy in a molecular beam. The dissertation essentially consists of three subprojects. In the first part, initial spectroscopic investigations were carried out in combination with a newly established laser desorption source. Here, the preparation of the desorption targets and the expansion conditions of the molecular beam source were first decisively optimized. Despite these adjustments, the ion signal fluctuations were still too pronounced to allow meaningful combined IR/UV experiments. A so-called "reference signal correction" was therefore introduced. With the help of this procedure, the first IR/R2PI spectra could be measured with the new laser desorption setup. After successful IR/UV experiments on purely organic molecules, the focus was placed on the spectroscopic investigation of isolated neutral contact ion pairs (CIPs). The emphasis here was on the alkali ion pairs (from \( Li^+ \) to \( Cs^+ \)) of para-aminobenzoate (\( M^+ PABA^- \)); in all experiments, clear resonance shifts were observed as a function of the size of the coordinating alkali ion. These spectral shifts can be attributed to electronic effects caused by the Coulomb potential of the metal ion. Furthermore, the neutral OLED-relevant metal complex tris(8-hydroxyquinolinato)aluminium (\( Alq_3 \)) was also successfully desorbed and detected in intact form in the time-of-flight mass spectrometer.
In the second part of the work, isolated chromone-methanol clusters were analyzed with respect to non-covalent interactions. In this system, two nearly isoenergetic isomers exist, which differ structurally in their CH···O contacts. Chromone has the property of crossing over into the triplet manifold after electronic excitation, so that, using this example, a neutral cluster in an electronically excited triplet state could be investigated spectroscopically for the first time. Interestingly, the 4-pyrone ring loses its planarity in the T\(_1\) state, which increases the energetic gap between the two minimum structures. Ultimately, this energetic effect is attributable to different electrostatic and inductive interactions, but hardly to dispersion effects. In addition, investigations of the aggregation of methanol onto the protected amino acid AcTyr(Me)OMe were carried out, whereby potential cluster geometries could likewise be assigned.
The last part of the work focused on the metal-peptide interactions that are ubiquitous in nature. In this context, an in-depth structural analysis of the aggregation of a monovalent aluminium ion onto the protected amino acid AcTrpOMe was carried out using density functional theory. For the clearly most stable isomer, a special, energetically exceptionally stable structural motif was found in which the aluminium ion is inserted into the NH bond of the indole substituent. Owing to a high (calculated) isomerization barrier, such a binding motif cannot be formed in the cold molecular beam, but it can be formed in the plasma of a thermal ablation source, as was used in the corresponding molecular beam experiment. Further quantum chemical investigations revealed that this type of structure is preferred only for certain monovalent metals (e.g. \( Ti^+ \) or \( Al^+ \)).
Oxidative folding of proteins in the mitochondrial intermembrane space of Leishmania tarentolae
(2022)
Mitochondrial genes encode only a few proteins. Thus, the majority of proteins have to be imported into the organelles, which is only possible in the unfolded state. The subsequent folding guarantees functionality. One of the proteins responsible for folding in the intermembrane space is Mia40, which is known in opisthokonts. No ortholog of Mia40 is known in kinetoplastida such as Leishmania tarentolae. First, already known candidates for Mia40 orthologs were investigated. In previous work, Mic20 had been identified in trypanosomes [1-4]. Gene editing cassettes to knock out or modify the gene LtaPh_3313851, which encodes the Mic20 ortholog, could not be inserted homozygously. Thus, the gene is assumed to be essential. Another protein that plays an important role in mitochondrial protein import is Erv, a known interactor of Mia40. Erv is also found in Leishmania. Two proteins of so far unknown function had been identified as potential interactors of Erv and could be candidates for Mia40 orthologs [5]. Potential knock-out strains of one protein-encoding gene each were investigated. The knock-out of LtaP32.0380 was assumed to be complete and the gene dispensable. The knock-out cassette for LtaP07.0980 could be shown to be inserted heterozygously, which could indicate the essentiality of the gene. To identify new candidates for Mia40 orthologs in Leishmania tarentolae, potential substrates [6] of the Mia40/Erv pathway were used as baits in the present work. Gene editing via CRISPR/Cas9 included attempts to insert knock-out or tagging cassettes into five different genes. Homozygous insertion succeeded for the C-terminal His8-tagging cassette for LtaP19.1110, and for the N-terminal His8-tagging cassettes for LtaP25.1620 and LtaP09.1390. No homozygous gene editing could be observed for LtaP35.0210. The knock-out of LtaP04.0060 was assumed to be complete. The presence of the N-terminally His8-tagged substrate 4 (LtaP09.1390) could be shown in cell lysates. The correct localization of tagged substrate 4 in the cell was confirmed. Further cell lysates were purified in pull-downs on Ni-NTA to obtain tagged substrate 4 together with its interaction partners. The presence of tagged proteins in the eluates could be confirmed. To identify interacting proteins, mass spectrometry analysis was performed. In further experiments, DTT and TMAD were used to alter the redox conditions in the cells before lysis and purification. The evaluation of the data included the comparison of the proteins identified in different experiments and the comparison with potential interactors of Erv [5,7]. Also, properties of Mia40 that might be conserved were considered. Two characteristic motifs of known Mia40 orthologs are a CPC and a twin CX9C motif. Thus, proteins with these or similar motifs were specifically searched for. Different candidates for Mia40 orthologs were identified and discussed.
Towards PACE-CAD Systems
(2022)
Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD system. The diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored.
Pragmatic CAD systems refer to a solution that can perform reliably in the day-to-day clinical setup. In this research, two of possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that the existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, realising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases in one of the largest publicly available datasets of skin lesions is successfully performed.
Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is industriously utilised in an attempt to improve the accuracy. For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which marked new state-of-the-art for glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained classifiers with fine-grained classifiers and grades the disease in four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are also examined and their effect on a model's classification performance is studied.
Confidence in diagnosing a disease is just as important as the diagnosis itself. One of the biggest reasons hampering the successful deployment of CAD in the real world is that medical diagnosis cannot readily be decided based on an algorithm's output. Therefore, a hybrid CNN architecture is proposed with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy; it also provides an estimate of uncertainty for every prediction.
Explainability of AI-driven algorithms has become a legal requirement after Europe's General Data Protection Regulation came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis. The framework is called ExAID (Explainable AI for Dermatology) and relies upon two fundamental modules. The first module uses any deep skin lesion classifier and performs detailed analysis on its latent space to map human-understandable disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating significant regions corresponding to a learned concept in the latent space of a trained image classifier.
This thesis probes many viable solutions to equip a CAD system with PACE. However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied on a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
Dealing with Dependence in the End-to-End Performance Analysis in Stochastic Network Calculus
(2022)
Communication networks, in particular the Internet, have become a pivotal part of our lives. Since their beginnings, a key aspect of their applicability has been performance. Safety-critical applications, for example, can sometimes only be implemented in a responsible manner if guarantees about their end-to-end delay can be made. A mathematical modeling and performance evaluation of communication networks requires a powerful set of tools that is able to incorporate their increasing complexity.
The stochastic network calculus (SNC) is a versatile mathematical framework that allows for the calculation of probabilistic end-to-end performance bounds of distributed systems. Its flexibility makes it possible to incorporate a large class of different schedulers as well as models of traffic processes beyond the assumption of Poisson arrivals that is predominant in queueing theory-based analyses. It originates in the deterministic network calculus (DNC), introduced in the 1990s to provide deterministic, "hard" guarantees that are of relevance, e.g., in the context of real-time systems. While the DNC of today can be used to calculate fast and accurate delay bounds of arbitrary feedforward networks, the SNC is still at a significantly earlier stage. In particular, method-pertinent dependencies, i.e., a phenomenon that occurs when independent flows become stochastically dependent after sharing resources in the network, can be considered a major challenge in the SNC with moment-generating functions (MGFs).
This thesis contributes to the SNC in several ways. First, we show that the "pay multiplexing only once" (PMOO) analysis known from the DNC is also possible in the SNC. Not only does it significantly improve end-to-end delay bounds, it also needs to consider fewer method-pertinent dependencies. Therefore, the complexity and runtime of the calculation are greatly reduced. Second, we introduce the concept of negative dependence to the SNC with MGFs and give numerical evidence that this can lead to further improved performance bounds. Third, for the larger problem of end-to-end performance bounds of tree networks, we introduce so-called "h-mitigators", a modification in the calculation of MGF output bounds. It is minimally invasive, all existing results and procedures remain applicable, and it improves performance bounds. As a fourth contribution, we conduct extensive numerical evaluations to substantiate our claims. Moreover, we have made the respective code, the "SNC MGF toolbox", publicly available to ensure that the results are reproducible. Finally, we conduct different stochastic analyses of a popular fair scheduler, generalized processor sharing (GPS). We give an overview of the state-of-the-art analyses in the SNC and substantiate the comparison through numerical evaluations.
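For orientation, the MGF-based SNC rests on Chernoff's bound: for a random variable \(X\) with moment-generating function \(M_X(\theta) = \mathbb{E}\bigl[e^{\theta X}\bigr]\),
\[ \mathbb{P}(X \ge a) \;\le\; e^{-\theta a}\, M_X(\theta) \qquad \text{for all } \theta > 0. \]
Method-pertinent dependence matters because \(\mathbb{E}\bigl[e^{\theta(X+Y)}\bigr]\) no longer factorizes into \(M_X(\theta)\,M_Y(\theta)\) for dependent \(X, Y\) and is typically bounded via Hölder's inequality, which loosens the resulting performance bounds; this is precisely the effect the PMOO analysis and the negative-dependence results above help to mitigate.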
To support scientific work with large and complex data, the field of scientific visualization emerged in computer science; it produces images through computational analysis of the data. Frameworks for the combination of different analysis and visualization modules allow the user to create flexible pipelines for this purpose and set the standard for interactive scientific visualization used by domain scientists.
Existing frameworks employ a thread-parallel message-passing approach to parallel and distributed scalability, leaving the field of scientific visualization in high performance computing to specialized ad-hoc implementations. The task-parallel programming paradigm proves promising to improve scalability and portability in high performance computing implementations and thus, this thesis aims towards the creation of a framework for distributed, task-based visualization modules and pipelines.
The major contribution of the thesis is the establishment of modules for Merge Tree construction and (based on the former) topological simplification. Such modules already form a necessary first step for most visualization pipelines and can be expected to increase in importance for larger and more complex data produced and/or analysed by high performance computing.
To create a task-parallel, distributed Merge Tree construction module, the construction process has to be completely revised. We derive a novel property of Merge Tree saddles and introduce a task-parallel, distributed Merge Tree construction method that offers both good performance and scalability. This forms the basis for a module for topological simplification, which we extend by introducing alternative simplification parameters that aim to reduce the importance of prior domain knowledge and thus increase flexibility in typical high performance computing scenarios.
Both modules lay the groundwork for continuative analysis and visualization steps and form a fundamental step towards an extensive task-parallel visualization pipeline framework for high performance computing.
Investigations on the high cycle fatigue strength of short glass fiber reinforced polyamide 66
(2022)
The growth of composite materials is closely linked to the demands of the transportation industry for efficient and resource-saving solutions. The beginnings of modern lightweight construction lie in the aviation industry, which constantly strives to reduce weight in order to achieve greater travel distances and reduce fuel consumption. For this purpose, continuous fiber-reinforced plastics are used, that are characterized in particular by their outstanding specific mechanical properties and therefore meet the high requirements for load-bearing capacity in aviation. In the automotive sector, the focus is similar and increasingly trends towards weight-optimized solutions in order to meet the growing requirements to reduce CO2 emissions. Substituting metallic components with composite materials can be a solution for these challenges. However, the requirements for the materials used in the automotive sector are different from those in the aerospace sector. In contrast to the rather low production volumes in the aerospace industry, vehicle components are manufactured in large quantities and therefore require short cycle times and cost-efficient production.
One group of materials that meet these requirements are short-fiber-reinforced thermoplastics (SFRTs). They are characterized in particular by their low-cost manufacturing, which can be achieved through the injection molding process and therefore enables the production of components in high volumes. Glass fibers are mainly used for reinforcement, with the fiber length limited to 1 mm due to the injection molding process. Despite this short fiber length, the strength and stiffness of the pure polymer can be significantly increased with short fiber reinforcement. However, the reinforcement effect is strongly influenced by the fiber orientation resulting from the injection molding process. Layers with different fiber orientation are formed, which are accompanied by a pronounced anisotropic material behavior. Due to the thermoplastic matrix, the mechanical properties of SFRT additionally depend on environmental conditions such as humidity and temperature. Consequently, characterizing the material and thereby covering all influencing factors requires effortful and lengthy testing. Especially when determining fatigue properties for a detailed service life analysis, long testing times combined with high costs must be taken into account. In this case, a quasi-static strength analysis is usually performed for the dimensioning of components, which is often also used with empirical reduction factors for the fatigue analysis.
Considering the above-mentioned context, the present work investigates the behavior of short glass fiber-reinforced polyamide 66 in the range of very high cycle fatigue (> 10^6 load cycles) and addresses the question as to whether a fatigue limit exists for the aforementioned material. For this particular purpose, various experimental methods are used, which all have in common that they generate large amounts of data requiring automated processing. The experiments are carried out with test specimens oriented longitudinally and transversely to the injection molding direction in order to observe a possible influence of the fiber orientation on the material’s fatigue behavior.
In the first part of this experimental work, the stiffness and the hysteresis data are studied during fatigue tests at different maximum stress levels. Subsequently, the stiffness degradation and the dissipative energy are parameterized to correlate these data with the applied maximum stresses. The data analysis method identifies characteristic stress levels at which the fatigue behavior changes. The changes occur in three fatigue life ranges: low cycle, high cycle, and very high cycle fatigue. The research shows that cyclic loading with 10^5 cycles is sufficient to estimate the three mentioned fatigue ranges.
In the second part of this experimental work, residual strength tests with acoustic emission (AE) analysis are performed on the cyclically preloaded specimens. AE uses sensors to detect acoustic signals generated by crack initiation and crack growth under mechanical load. Accordingly, in the residual strength tests, only the damage that has not already occurred under cyclic preloading can be recorded. Based on the "acoustic fingerprint", characteristic stress levels were identified at which a change in damage behavior occurred. As long as this "acoustic fingerprint" differs from that of a non-preloaded specimen, it can be concluded that damage was initiated under the applied cyclic load.
Finally, a digital twin was generated to investigate the underlying micromechanical mechanisms at the experimentally identified characteristic stress levels. X-ray microscope scans of the specimens were imported into the commercial software GeoDict®. This voxel-based software uses numerically efficient Fast Fourier Transforms (FFT) to analyze the microstructure and simulate the experiments on the real 3D microstructure. The simulations show that the fiber-matrix interface significantly influences the damage behavior in the very high cycle fatigue range. By analyzing the matrix plasticization rate, the stress levels associated with high cycle fatigue can be estimated. Fiber fractures, on the other hand, are only relevant in the low cycle fatigue range.
Thus, both the experimental methods and the simulation of a digital twin have proven suitable to estimate the different fatigue ranges in a time-efficient manner. In addition, the analyses of stiffness degradation in fatigue tests and of acoustic emissions in residual strength tests indicate that damage occurs at loads above the stresses corresponding to a fatigue life of 10^11 cycles. This in turn implies that no fatigue limit exists before 10^11 load cycles for the short-fiber-reinforced polyamide 66 under investigation.
The publication explores policy and planning approaches to urban shrinkage and population decline in Bilbao, Spain (2000-2015), Leipzig, Germany (2000-2015), and Zeeland, the Netherlands (2018-2020). Through the main method of interpretive policy analysis, causal and normative dimensions of the policy and planning approaches are determined. The conclusions from the different cases are reviewed in a cross-national comparative framework based on policy benchmarking. As a result of the policy benchmarking, variants of the new planning concept of "Shrinking Smart" are defined, providing recommendations for an overall policy approach under conditions of population decline, encompassing: the planning process for shrinking cities; a decision-making mechanism with normative orientation; economic development and quality of life as a broader policy framework; and spatial planning under conditions of urban shrinkage. The conclusions are complemented by a critical reflection on growth orientation and its pervasiveness under conditions of shrinkage and population decline, as well as by recommendations on particular instruments under each of the concept variants.
This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods for specification and cooperation in the automotive value chain from OEM to Tier 1 to Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the projects target the use of agreed vocabularies and ontologies for modeling requirements, overall context, variants, and many other items. This thesis proposes a web database in which data from the collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities.
For this purpose, state-of-the-art ontologies covering domains like hardware/software, roadmapping, IoT, context, and innovation have been investigated and integrated. New ontologies, such as a HW/SW allocation ontology and a domain-specific "eFuse ontology", have been designed, along with some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology that allows us to model automotive and microelectronic functions, components, properties, and dependencies among these elements based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, and the environment, is included. These knowledge bases are integrated into a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task, requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. Examples are basic foundations like design guidelines for ontologies, ontology categories, and a continuum of expressiveness of languages, as well as advanced content like multi-level theory, foundational ontologies, and reasoning.
Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, as an example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver.
Organizational routines constitute how work is accomplished in organizations. This dissertation thesis draws on recent routine research and is anchored in the field of organization theory. The thesis consists of four separate manuscripts that contribute to related research fields such as agility or coordination research from a routine perspective while also extending routine dynamics research. Recent routine dynamics research offers a wide perspective on how situated actions within and across routines unfold as emergent accomplishments. This allows us to analyze other organization research phenomena, such as agility and coordination. Accordingly, the first and second manuscripts argue for the adoption of a very dynamic perspective on routines and the incorporation of these insights into agility and coordination research. This is followed by two empirical manuscripts that expand the routine literature based on qualitative research within agile software development. The third manuscript of this dissertation analyzes how situated actions address different temporal orientations (i.e., past, present, and future). Last, the fourth manuscript addresses the performing of roles within and through routines. In general, this dissertation contributes to overall organization research in two ways: (1) by outlining and examining how agility is enacted; (2) by highlighting that actions are performed flexibly to consider the situation at hand.
Towards standardized operating procedures for eDNA-based monitoring of marine coastal ecosystems
(2022)
Marine coastal ecosystems are exposed to a variety of anthropogenic impacts, which often manifest themselves in the pollution of the surrounding ecosystem. Especially on densely populated coasts or in regions heavily used for aquaculture, changes in the natural marine habitat can be observed. In order to protect nature, and thus its ecosystem services for humans, more and more environmental protection laws are coming into force. For example, operators of facilities known to contribute to pollution are obliged to regularly monitor the condition of the surrounding environment. The purpose of such so-called compliance monitoring is to determine whether the prescribed regulations are being followed. The traditional routine involves sampling by ship, during which sediment samples are taken from the seabed below the aquaculture cages and all macrofauna organisms found, such as mussels or worms, are taxonomically determined and quantified by experts. Based on the community of organisms, the ecological status of the sample can then be inferred. Since this method is very labor- and time-consuming, a reorientation of the scientific community towards alternative monitoring methods is currently taking place.
A bacteria-based eDNA (environmental DNA) metabarcoding system in particular has proven to be a suitable monitoring tool. With this molecular method, the composition of the benthic bacterial community is determined using high-throughput sequencing. The great advantage of this method is that bacteria, due to their short generation times, react rapidly to various environmental influences. The composition of this community can then be used to infer the ecological status of the sample under investigation via sequencing, without the need for laborious enumeration and identification of organisms. Additionally, sequencing costs are steadily decreasing, making eDNA metabarcoding-based monitoring a faster and cheaper alternative to traditional monitoring. In order to implement the method in legislation in the long term, standard protocols need to be developed. Once these are sufficiently validated, the novel methodology can be incorporated into regulations to support or even replace traditional monitoring. However, some steps of the eDNA metabarcoding method, from sampling to ecosystem assessment, are not yet sufficiently standardized, which motivated this work.
Since there is no consensus in the scientific community on (i) the preservation of environmental samples during transport, (ii) the reproducibility of ecosystem assessment among different laboratories, (iii) the most appropriate bioinformatic method for ecosystem assessment, and (iv) the minimum sequencing depth required to determine ecosystem status, these sub-steps were investigated. It was found that the most common methods currently used to preserve samples during transport had no discernible effect on the final ecosystem assessment. Furthermore, sample processing in independent laboratories allowed the same ecological interpretations based on the bacterial community, which resulted in concordant ecosystem assessments among laboratories. This indicates the overall reproducibility of the eDNA metabarcoding-based method, thus enabling its implementation in standard protocols. Furthermore, it was shown that concordant ecosystem assessments can be obtained with the currently used methods for determining ecological status based on eDNA data. Critical to predictive accuracy is not the method itself, but a sufficient number of samples that accounts for the natural spatial and temporal variability of bacterial communities. It was demonstrated that a very shallow sequencing depth per sample can be sufficient to predict the ecological status of an environmental sample using machine learning. The quality of these classifications did not depend on the sequencing depth, as assumed, but was determined by the separability of the individual categories. The results and recommendations of this work contribute directly to the standardization of the ecological assessment of nearshore marine ecosystems. By establishing these standard protocols, it will be possible to integrate the eDNA metabarcoding-based method for compliance monitoring of coastal marine ecosystems into legislative regulations in the future.
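As a purely illustrative sketch of the machine-learning step described above (not the actual pipeline of this work), one could train a standard classifier on a table of per-sample bacterial taxon abundances with known ecological status labels; scikit-learn, the random example data, and all variable names are assumptions here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # X: per-sample relative abundances of bacterial taxa (n_samples x n_taxa),
    # y: ecological status category per sample (hypothetical example data)
    rng = np.random.default_rng(0)
    X = rng.random((60, 200))
    y = rng.integers(0, 3, size=60)   # e.g., good / moderate / poor status

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
    print(scores.mean())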
The testing effect describes the finding that retrieval practice compared to study practice enhances memory performance. Prior evidence consistently demonstrates that this effect can be further boosted by providing feedback after retrieval attempts (test-potentiated encoding, TPE). The present PhD thesis was aimed at investigating the neural processes during memory retrieval underlying the beneficial effect of additional performance feedback beyond the benefits of only adding correct answer feedback. Three studies were conducted and behavioral as well as neural correlates (collected with electroencephalography and functional magnetic resonance imaging) of feedback learning were examined.
Adjoint-Based Shape Optimization and Optimal Control with Applications to Microchannel Systems
(2021)
Optimization problems constrained by partial differential equations (PDEs) play an important role in many areas of science and engineering. They often arise in the optimization of technological applications, where the underlying physical effects are modeled by PDEs. This thesis investigates such problems in the context of shape optimization and optimal control with microchannel systems as novel applications. Such systems are used, e.g., as cooling systems, heat exchangers, or chemical reactors as their high surface-to-volume ratio, which results in beneficial heat and mass transfer characteristics, allows them to excel in these settings. Additionally, this thesis considers general PDE constrained optimization problems with particular regard to their efficient solution.
As our first application, we study a shape optimization problem for a microchannel cooling system: We rigorously analyze this problem, prove its shape differentiability, and calculate the corresponding shape derivative. Afterwards, we consider the numerical optimization of the cooling system for which we employ a hierarchy of reduced models derived via porous medium modeling and a dimension reduction technique. A comparison of the models in this context shows that the reduced models approximate the original one very accurately while requiring substantially less computational resources.
Our second application is the optimization of a chemical microchannel reactor for the Sabatier process using techniques from PDE constrained optimal control. To treat this problem, we introduce two models for the reactor and solve a parameter identification problem to determine the necessary kinetic reaction parameters for our models. Thereafter, we consider the optimization of the reactor's operating conditions with the objective of improving its product yield, which shows considerable potential for enhancing the design of the reactor.
To provide efficient solution techniques for general shape optimization problems, we introduce novel nonlinear conjugate gradient methods for PDE constrained shape optimization and analyze their performance on several well-established benchmark problems. Our results show that the proposed methods perform very well, making them efficient and appealing gradient-based shape optimization algorithms.
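For orientation, the generic nonlinear conjugate gradient iteration behind such methods reads as follows (shown with the classical Fletcher-Reeves update as one example; the shape-optimization variants of the thesis replace the Euclidean inner products with inner products suited to the deformation space):
\[ d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad \beta_k^{\mathrm{FR}} = \frac{\langle g_{k+1}, g_{k+1} \rangle}{\langle g_k, g_k \rangle}, \]
where \( g_k \) denotes the (shape) gradient at iterate \( k \) and \( d_k \) the current search direction.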
Finally, we continue recent software-based developments for PDE constrained optimization and present our novel open-source software package cashocs. Our software implements and automates the adjoint approach and, thus, facilitates the solution of general PDE constrained shape optimization and optimal control problems. Particularly, we highlight our software's user-friendly interface, straightforward applicability, and mesh independent behavior.
With the growing support for features such as hardware virtualization, tied to the boost in hardware capacity, embedded systems are now able to regroup many software components on the same hardware platform to save costs. This evolution has raised system complexity, motivating the introduction of Mixed-Criticality Systems (MCS) to consolidate applications of different criticality levels on one hardware target: in critical environments such as an aircraft or a factory floor, highly critical functions are now regrouped with other non-critical functions. A key requirement of such a system is to guarantee that the execution of a critical function cannot be compromised by other functions, especially by ones with a lower criticality level. In this context, runtime intrusion detection contributes to securing system execution and to avoiding intentional misbehavior in critical applications.
Host Intrusion Detection Systems (HIDS) have been an active field of research in computer security for more than two decades. The goal of HIDS is to detect traces of malicious activity in the execution of a monitored software at runtime. While this topic has been extensively investigated for general-purpose computers, its application in the specific context of embedded MCS is comparatively more recent.
We extend the domain of HIDS research towards deployment in industrial embedded MCS. To this end, we provide a review of state-of-the-art HIDS solutions and evaluate the main obstacles to their deployment in an industrial embedded MCS.
We present several HIDS approaches based on solutions for general-purpose computers, which we apply to protect the execution of an application running in an embedded MCS. We introduce two main HIDS methods to protect the execution of a given user-level application. Because of possible criticality constraints of the monitored application, such as industrial certification aspects, our solutions support transparent monitoring; i.e., they do not require application instrumentation. On the one hand, we propose a machine-learning (ML) based framework to monitor low-level system events transparently. On the other hand, we introduce a hardware-assisted control-flow monitoring framework to deploy control-flow integrity monitoring without instrumentation of the monitored application.
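To illustrate one common idea behind ML-based monitoring of low-level system events (a classic n-gram approach, shown here only as a conceptual sketch; the framework of this thesis and its features are not specified here, and all names are hypothetical): model normal behavior as the set of system-call n-grams seen during training, then flag traces containing too many unseen n-grams.

    from collections import Counter

    def ngrams(trace, n=3):
        # sliding windows of n consecutive system-call IDs
        return zip(*(trace[i:] for i in range(n)))

    def train(normal_traces, n=3):
        model = Counter()
        for t in normal_traces:
            model.update(ngrams(t, n))
        return model

    def anomaly_score(trace, model, n=3):
        grams = list(ngrams(trace, n))
        unseen = sum(1 for g in grams if g not in model)
        return unseen / max(len(grams), 1)   # fraction of never-seen n-grams

    # Example with syscall IDs as integers: a high score suggests an intrusion.
    model = train([[1, 2, 3, 4, 2, 3, 4, 5]])
    print(anomaly_score([1, 2, 3, 9, 9, 9], model))  # -> 0.75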
We provide a methodology to integrate HIDS mechanisms into an embedded MCS and to evaluate them. We implement and evaluate our monitoring solutions on a practical industrial platform, using a generic hardware system and SYSGO’s industrial real-time hypervisor.
Automation, Industry 4.0, and artificial intelligence are playing an increasingly central role for companies. Artificial intelligence in particular is currently enabling new methods to achieve a higher level of automation. However, machine learning methods are usually most attractive when a lot of data can easily be collected and patterns can be learned from these data. In the field of metrology, this can prove difficult depending on the area of work. Particularly for micrometer-scale measurements, acquiring measurement data often involves a lot of time, effort, patience, and money, so measurement data is not readily available. This raises the question of how meaningfully machine learning approaches can be applied to different domains of measurement tasks, especially in comparison to current solution approaches that use model-based methods. This thesis addresses this question by taking a closer look at two research areas in metrology: micro lead determination and reconstruction. Methods for micro lead determination are presented that determine texture and tool axis with high accuracy. The methods are based on signal processing, classical optimization, and machine learning. In the second research area, reconstructions of cutting edges are considered in detail. The reconstruction methods here are based on the robust Gaussian filter and deep neural networks, more specifically autoencoders. All results on micro lead and reconstruction are compared and contrasted in this thesis, and the applicability of the different approaches is evaluated.
Controller design for continuous dynamical systems is a core algorithmic problem in the design of cyber-physical systems (CPS). When the CPS application is safety critical, we additionally require the controller to have strong correctness guarantees. One approach to this design problem is to use a simpler discrete abstraction of the original continuous system, on which known reactive synthesis methods can be used to design the controller. This approach is known as the abstraction-based controller design (ABCD) paradigm.
In this thesis, we build ABCD procedures which are faster and more modular compared to the state-of-the-art, and can handle problems which were beyond the scope of the existing techniques.
Usually, existing ABCD approaches use state space discretization for computing the abstractions, a procedure that does not scale well to larger systems. Our first contribution is a multi-layered ABCD algorithm, where we combine coarse abstractions and lazily computed fine abstractions to improve scalability. So far, we only address reach-avoid and safety specifications, for which our prototype tool (called Mascot) showed up to an order of magnitude speedup on standard benchmark examples.
Second, we consider the problem of modular design of sound local controllers for a network of local discrete abstractions communicating via discrete/boolean variables and having local specifications. We propose a sound algorithm, where the systems negotiate a pair of local assume-guarantee contracts in order to synchronize on a set of non-conflicting and correct behaviors. As a by-product, we also obtain a set of local controllers for the systems which ensure simultaneous satisfaction of the local specifications. We show the effectiveness of our algorithm using a prototype tool (called Agnes) on a set of discrete benchmark examples.
Our third contribution is a novel ABCD algorithm for a more expressive model of nonlinear dynamical systems with stochastic disturbances and ω-regular specifications. This part has two subparts, which are of significant merit in their own right. First, we present an abstraction algorithm for nonlinear stochastic systems using 2.5-player games (turn-based stochastic graph games). We show that an almost sure winning strategy in this abstract 2.5-player game gives us a sound controller for the original system, satisfying the specification with probability one. Second, we present symbolic algorithms for a seemingly different class of 2-player games with certain environmental fairness assumptions, which can also be used to efficiently compute winning strategies in the aforementioned abstract 2.5-player game. Using our prototype tool (Mascot-SDS), we show that our algorithm significantly outperforms the state-of-the-art implementation on standard benchmark examples from the literature.
Comparative Uncertainty Visualization for High-Level Analysis of Scalar- and Vector-Valued Ensembles
(2022)
With this thesis, I contribute to the research field of uncertainty visualization, considering parameter dependencies in multi-valued fields and the uncertainty of automated data analysis. Like uncertainty visualization in general, both of these fields are becoming more and more important due to increasing computational power, the growing importance and availability of complex models and collected data, and progress in artificial intelligence. I contribute in the following application areas:
Uncertain Topology of Scalar Field Ensembles.
The generalization of topology-based visualizations to multi-valued data involves many challenges. An example is the comparative visualization of multiple contour trees, complicated by the random nature of prevalent contour tree layout algorithms. I present a novel approach for the comparative visualization of contour trees: the Fuzzy Contour Tree.
Uncertain Topological Features in Time-Dependent Scalar Fields.
Tracking features in time-dependent scalar fields is an active field of research, where most approaches rely on the comparison of consecutive time steps. I created a more holistic visualization for time-varying scalar field topology by adapting Fuzzy Contour Trees to the time-dependent setting.
Uncertain Trajectories in Vector Field Ensembles.
Visitation maps are an intuitive and well-known visualization of uncertain trajectories in vector field ensembles. For large ensembles, visitation maps are not applicable, or only with extensive computation time. I developed Visitation Graphs, a new representation and data reduction method for vector field ensembles that can be calculated in situ and is an optimal basis for the efficient generation of visitation maps. This is accomplished by shifting calculation effort to the pre-processing stage.
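As a point of reference for the data involved, the following is a minimal sketch of the classical visitation-map baseline mentioned above: count how often ensemble trajectories visit each cell of a regular grid. It is an illustration only (numpy assumed, all names hypothetical); the Visitation Graphs of this thesis are a more compact, in-situ-computable representation.

    import numpy as np

    def visitation_map(trajectories, bounds, resolution=64):
        (xmin, xmax), (ymin, ymax) = bounds
        counts = np.zeros((resolution, resolution))
        for traj in trajectories:                      # traj: (n_steps, 2) array
            ix = ((traj[:, 0] - xmin) / (xmax - xmin) * resolution).astype(int)
            iy = ((traj[:, 1] - ymin) / (ymax - ymin) * resolution).astype(int)
            valid = (ix >= 0) & (ix < resolution) & (iy >= 0) & (iy < resolution)
            np.add.at(counts, (ix[valid], iy[valid]), 1)  # unbuffered accumulation
        return counts / len(trajectories)              # normalized visitation frequency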
Visually Supported Anomaly Detection in Cyber Security.
Numerous cyber attacks and the increasing complexity of networks and their protection necessitate the application of automated data analysis in cyber security. Due to uncertainty in automated anomaly detection, the results need to be communicated to analysts to ensure appropriate reactions. I introduce a visualization system combining device readings and anomaly detection results: the Security in Process System. To further support analysts, I developed an application-agnostic framework that supports the integration of knowledge assistance and applied it to the Security in Process System. I present this Knowledge Rocks Framework, its application, and the results of evaluations for both the original and the knowledge-assisted Security in Process System. For all presented systems, I provide implementation details, illustrations, and applications.
Robotic systems are entering the stage. Enabled by advances in both hardware components and software techniques, robots are increasingly able to operate outside of factories, assist humans, and work alongside them. The limiting factor of robots’ expansion remains the programming of robotic systems. Due to the many diverse skills necessary to build a multi-robot system, only the biggest organizations are able to innovate in the space of services provided by robots.
To make developing new robotic services easier, in this dissertation I propose a programming model in which users (programmers) give a declarative specification of what needs to be accomplished, and a backend system then makes sure that the specification is safely and reliably executed. I present Antlab, one such backend system. Antlab accepts Linear Temporal Logic (LTL) specifications from multiple users and executes them using a set of robots with different capabilities.
Building on the experience acquired implementing Antlab, I identify problems arising from the proposed programming model. These problems fall into two broad categories, specification and planning.
In the category of specification problems, I solve the problem of inferring an LTL formula from sets of positive and negative example traces, as well as from a set of positive examples only. Building on top of these solutions, I develop a method to help users transfer their intent into a formal specification. The approach taken in this dissertation is combining the intent signals from a single demonstration and a natural language description given by a user. A set of candidate specifications is inferred by encoding the problem as a satisfiability problem for propositional logic. This set is narrowed down to a single specification through interaction with the user; the user approves or declines generated simulations of the robot’s behavior in different situations.
In the category of planning problems, I first solve the problem of planning for robots that are currently executing their tasks. In such a situation, it is unclear what to take as the initial state for planning. I solve the problem by considering multiple, speculative initial states. The paths from those states are explored based on a quality function that repeatedly estimates the planning time. The second problem is reinforcement learning when the reward function is non-Markovian. The proposed solution consists of iteratively learning an automaton representing the reward function and using it to guide the exploration.
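To illustrate what a non-Markovian reward encoded as an automaton can look like, here is a toy sketch (hypothetical events and states, not the learning algorithm of the dissertation): the agent is rewarded only once it has observed event 'a' and afterwards event 'b'.

    # Toy "reward machine": states q0 -> q1 -> q2; reward is paid on reaching q2.
    TRANSITIONS = {("q0", "a"): "q1", ("q1", "b"): "q2"}

    def run(events):
        state, total = "q0", 0
        for e in events:
            nxt = TRANSITIONS.get((state, e), state)  # stay put on other events
            if nxt == "q2" and state != "q2":
                total += 1          # reward depends on history, not the current event alone
            state = nxt
        return total

    print(run(["b", "a", "c", "b"]))  # -> 1: 'a' was seen before 'b'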
Building on knowledge and innovation: the role of Green Economy in revitalising Shrinking Cities
(2022)
This research introduces the topic of the Green Economy in the context of shrinking cities. The analysis is supported by two case studies, one located in Mexico and the other in France, to identify adaptable strategies of sustainable development in different contexts of urban decline that consider significant differences in the availability of resources.
Shrinking cities suffer from problems such as depopulation, economic decline and underuse of urban infrastructure, mainly due to a regional process of economic peripheralisation. Shrinking cities can adopt two logics to address these issues: de-peripheralisation and endogenous development.
It is argued that shrinking cities can exploit emerging green markets to stimulate economic growth and enhance liveability and sustainability; however, the solutions vary depending on the available resources, local comparative advantages and national political and financial support systems. The Green Economy driven solutions in shrinking cities can follow two main strategies: one is aimed at regrowth, betting on the creation of regional innovation systems by investing in research and development and the local capture of the produced spill-overs; the other, inspired by the concept of greening, aims to improve the quality of urban life of the inhabitants by enhancing the quality of consumption sectors through ecological practices and respect for the environment. The analysis of the two case studies serves as a method to observe different strategies for the sustainable development of shrinking cities by introducing activities in the sectors of the Green Economy.
This study supports the global comparative perspective approach in urban studies focusing on urban shrinkage. The context of shrinking cities is explored in Latin America by identifying the eighteen shrinking cities in Mexico.
In the pre-seed phase before entering a market, new ventures face the complex, multi-faceted, and uncertain task of designing a business model. Founders accomplish this task within the framework of an innovation process, the so-called business model innovation process. However, because a set of feasible opportunities to design a viable business model is often not predictable in this early phase (Alvarez & Barney, 2007), business model ideas have to be revised multiple times, which corresponds to experimenting with alternative business models (Chesbrough, 2010). This also brings scholars to the relevant but seldom-noticed field of research on experimentation as a cognitive schema (Felin et al., 2015; Gavetti & Levinthal, 2000). The few scholars that discussed the importance of such thought experimentation did not elaborate on the manifestations of this phenomenon. The current state of research thus has a gap, which offers this dissertation the opportunity, building on qualitative interviews with entrepreneurs, to clearly conceptualise the manifestation of experimentation as a cognitive schema in business model innovation. The results extend previous conceptualisations of experimentation by illustrating the interplay of three different forms of thought experimentation, namely purposeful interactions, incidental interactions, and theorising. In addition, the role of individuals in business model innovation has recently been recognised by scholars (Amit & Zott, 2015; Snihur & Zott, 2020). It is noticed that not only the founders themselves but also many other actors, such as accelerators or public institutions, play a central role in this process to support a new venture on its way to designing a viable business model. It thus stands to reason that, in addition to understanding how new ventures design their business model, it is also important to study how different actors are involved in this process. Building on qualitative interviews with entrepreneurs, this gap offers this dissertation the opportunity to study how different actors are involved in business model innovation and to conceptualise actor engagement behaviours in this context. The results reveal six different actor engagement behaviours, including teaching, supporting, mobilising, co-developing, sharing, and signalling behaviour. Furthermore, it stands to reason that entrepreneurs and external actors each play a certain role in business model innovation. Certain behavioural patterns and types of resource contributions may be characteristic of a group of actors, leading to the emergence of distinct actor roles. Thus, in this dissertation a role concept is established to illustrate how actors are involved in designing a new business model, comprising 13 actor roles. These actor roles are divided into task-oriented and network-oriented roles. Building on this, a variety of role dynamics are unveiled. Moreover, special attention is given to role temporality. Building on two case studies and a quantitative survey, the results reveal how actor roles are played at certain points in time, thereby concretising them in relation to certain stages of the pre-seed phase.
Data-driven and Sparse-to-Dense Concepts in Scene Flow Estimation for Automotive Applications
(2022)
Highly assisted driving and autonomous vehicles require a detailed and accurate perception of the environment. This includes the perception of the 3D geometry of the scene and the 3D motion of other road users. The estimation of both based on images is known as the scene flow problem in computer vision. This thesis deals with a solution to the scene flow problem that is suitable for application in autonomous vehicles. This application imposes strict requirements on accuracy, robustness, and speed, and previous work was lagging behind in at least one of these metrics. To work towards the fulfillment of those requirements, the sparse-to-dense concept for scene flow estimation is introduced in this thesis. The idea can be summarized as follows: First, scene flow is estimated for some points of the scene for which this can be done comparatively easily and reliably. Then, an interpolation is performed to obtain a dense estimate for the entire scene. Because of the separation into two steps, each part can be optimized individually. In a series of experiments, it is shown that the proposed methods achieve competitive results and are preferable to previous techniques in some aspects. As a second contribution, individual components in the sparse-to-dense pipeline are replaced by deep learning modules. These are a highly localized and highly accurate feature descriptor to represent pixels for dense matching, and a network for robust and generic sparse-to-dense interpolation. Compared to end-to-end architectures, the advantage of deep modules is that they can be trained more efficiently with data from different domains. The recombination approach applies a similar concept to the sparse-to-dense approach by solving and combining less difficult, auxiliary sub-problems: 3D geometry and 2D motion are estimated separately, the individual results are combined, and then also interpolated into a dense scene flow. As a final contribution, the thesis proposes a set of monolithic end-to-end networks for scene flow estimation.
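A minimal sketch of the interpolation step in the sparse-to-dense idea follows; nearest-neighbor filling is used here purely for illustration (scipy assumed, names hypothetical), whereas the interpolation of the thesis, including its learned variant, is considerably more sophisticated.

    import numpy as np
    from scipy.interpolate import NearestNDInterpolator

    def densify(sparse_xy, sparse_flow, height, width):
        # sparse_xy: (n, 2) pixel coordinates (x, y); sparse_flow: (n, d) flow vectors
        interp = NearestNDInterpolator(sparse_xy, sparse_flow)
        ys, xs = np.mgrid[0:height, 0:width]
        dense = interp(np.stack([xs.ravel(), ys.ravel()], axis=1))
        return dense.reshape(height, width, -1)   # per-pixel scene flow estimate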
Every organism contains a characteristic number of chromosomes that have to be segregated equally into two daughter cells during mitosis. Any error during chromosome segregation results in daughter cells that lost or gained a chromosome, a condition known as aneuploidy. Several studies from our laboratory and across the world have previously shown that aneuploidy per se strongly affects cellular physiology. However, these studies were limited mainly to chromosomal gains due to the availability of several model systems. Strikingly, no systematic study to evaluate the impact of chromosome loss in human cells has been performed so far. This is mainly due to the lack of model systems, as chromosome loss is incompatible with survival and drastically reduces cellular fitness. During my PhD thesis, for the first time, I used diverse omics and biochemical approaches to investigate the consequences of chromosome losses in human somatic cells.
Using isogenic monosomic cells derived from the human cell line RPE1 lacking functional p53, we showed that, similar to cells with chromosome gains, monosomic cells proliferate more slowly than the parental cells and exhibit genomic instability. Transcriptome and proteome analysis revealed that the expression of genes encoded on the monosomic chromosomes was reduced, as expected, but the abundance was partially compensated towards diploid levels by both transcriptional and post-transcriptional mechanisms. Furthermore, we showed that monosomy induces global gene expression changes that are distinct from the changes in response to chromosome gains. The most consistently deregulated pathways among the monosomies were ribosomes and translation, which we validated using polysome profiling and analysis of translation with puromycin incorporation experiments. We showed that these defects could be attributed to the haploinsufficiency of ribosomal protein genes (RPGs) encoded on the monosomic chromosomes. Reintroduction of p53 into the monosomic cells uncovered that monosomy is incompatible with p53 expression and that monosomic cells expressing p53 are either eliminated or outgrown by the p53-negative population. Given the RPG haploinsufficiency and ribosome biogenesis defects caused by monosomy, we provide evidence that the p53 activation in monosomies could be caused by the defects in ribosomes. These findings were further supported by computational analysis of cancer genomes, revealing that cancers with monosomic karyotypes frequently accumulate p53 pathway mutations and show reduced ribosomal functions.
Together, our findings provide a rationale as to why monosomy is embryonically lethal but frequently occurs in p53-deficient cancers.
An increasing number of today's tasks, such as speech recognition, image generation, translation, classification, or prediction, are performed with the help of machine learning. Especially artificial neural networks (ANNs) provide convincing results for these tasks. The reasons for this success story are the drastic increase of available data sources in our more and more digitalized world as well as the development of remarkable ANN architectures. This development has led to an increasing number of model parameters together with more and more complex models. Unfortunately, this yields a loss in the interpretability of deployed models. However, there exists a natural desire to explain the deployed models, not just by empirical observations but also by analytical calculations.
In this thesis, we focus on variational autoencoders (VAEs) and foster the understanding of these models. As the name suggests, VAEs are based on standard autoencoders (AEs) and are therefore used to perform dimensionality reduction of data. This is achieved by a bottleneck structure within the hidden layers of the ANN. From a data input, the encoder, that is, the part up to the bottleneck, produces a low-dimensional representation. The decoder, the part from the bottleneck to the output, uses this representation to reconstruct the input. The model is learned by minimizing the reconstruction error.
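As a purely illustrative sketch of this architecture (PyTorch assumed; all dimensions and names are hypothetical, not those of the thesis): a VAE encodes an input into the mean and variance of a latent distribution, samples from it, and decodes the sample, while the loss adds a Kullback-Leibler term to the reconstruction error.

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, z_dim)        # bottleneck: latent mean
            self.logvar = nn.Linear(128, z_dim)    # bottleneck: latent log-variance
            self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(z), mu, logvar

    def vae_loss(x, x_hat, mu, logvar):
        rec = ((x - x_hat) ** 2).sum(dim=1)                          # reconstruction error
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=1)  # KL to N(0, I)
        return (rec + kl).mean()

The KL term is what drives the auto-pruning discussed next: latent dimensions whose approximate posterior collapses onto the prior carry no information about the input and are effectively switched off.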
From our point of view, the most remarkable property and, hence, also a central topic in this thesis is the auto-pruning property of VAEs. Simply speaking, the auto-pruning prevents the VAE, with its thousands of parameters, from overfitting. However, such a desirable property comes with the risk of the model learning nothing at all. In this thesis, we look at VAEs and the auto-pruning from two different angles, and our main contributions to research are the following:
(i) We find an analytic explanation of the auto-pruning. We do so by leveraging the framework of generalized linear models (GLMs). As a result, we are able to explain training results of VAEs before conducting the actual training.
(ii) We construct a time-dependent VAE and show the effects of the auto-pruning in this model. As a result, we are able to model financial data sequences and estimate the value-at-risk (VaR) of associated portfolios. Our results show that we surpass the standard benchmarks for VaR estimation.
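For reference, the empirical value-at-risk of a portfolio can be read off as a quantile of simulated or historical profit-and-loss samples; a minimal sketch, with numpy assumed and the 99% level chosen arbitrarily:

    import numpy as np

    def value_at_risk(pnl_samples, alpha=0.99):
        # VaR at level alpha: the loss threshold exceeded with probability 1 - alpha
        return -np.quantile(pnl_samples, 1 - alpha)

    pnl = np.random.default_rng(0).normal(0.0, 1.0, 10_000)  # toy P&L samples
    print(value_at_risk(pnl))   # roughly 2.33 for standard normal P&L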
Today’s digital world would be unthinkable without complex data sets. Whether in private, business, or industrial environments, complex data provide the basis for important and critical decisions and determine many processes, some of which are automated. This is often associated with Big Data. However, often just one aspect of the usual Big Data definitions is enough for a human observer to no longer be able to capture the data completely and correctly. In this thesis, different approaches are presented to master selected challenges in a more effective, efficient, and user-friendly way. The approaches range from easier pre-processing of data sets for later analysis and the identification of design guidelines for such assistants, new visualization techniques for presenting uncertainty, extensions of existing visualizations for categorical data, concepts for time-saving selection methods for subsets of data points, and faster navigation and zoom interaction, especially in the web-based area with enormous amounts of data, to new and innovative orientation-based interaction metaphors for mobile devices as well as stationary working environments. Evaluations and appropriate use cases of the individual approaches demonstrate their usability, also in comparison with state-of-the-art techniques.
Pyrrolizidine alkaloids (PAs) are naturally occurring secondary plant metabolites mainly found in the plant families Asteraceae, Boraginaceae, and Fabaceae. Chemically, PAs consist of a pyrrolizidine core bearing hydroxyl groups, the so-called necine base, and mono- or dicarboxylic necine acids bound to the pyrrolizidine core via ester linkages. 1,2-unsaturated PAs are hepatotoxic, genotoxic, and carcinogenic due to the highly reactive pyrrolic metabolites formed by cytochrome P450 monooxygenases (CYPs), primarily in the liver. The presence of PAs as frequent contaminants in a wide variety of food and feed products is a concern for public health.
Due to inadequate data, the risk assessment of PAs was mainly approached using the two most potent congeners, i.e., lasiocarpine and riddelliine. However, the toxic potencies of individual PA congeners differ widely, probably related to their structural features. The risk of PA-containing products is indeed overestimated, and a comprehensive risk assessment should take these differences into account.
After analyzing the data of many PAs, Merz and Schrenk derived interim Relative Potency (iREP) factors to represent the differences in toxicity between the sub-groups with respect to their structural features. However, since this concept was derived from an inadequate database, the relative toxicity of individual congeners could not be evaluated entirely reliably. My work aimed to obtain more comprehensive congener-specific in vitro toxicological data and to estimate the structure-related characteristics for refining this concept. For this purpose, ten congeners, lasiocarpine, monocrotaline, retrorsine, senecionine, seneciphylline, echimidine, europine, heliotrine, indicine, and lycopsamine, were tested in a series of in vitro test systems with different endpoints to quantify their cytotoxicity, genotoxicity, and mutagenicity.
Cytotoxicity was assessed using the Alamar blue assay. A clear structure dependence could be demonstrated in primary rat hepatocytes and HepG2 (CYP3A4) cells. In contrast, in HepG2 cells, none of the selected PAs exhibited cytotoxic effects, probably due to the lack of CYPs. The role of CYP450 enzymes in metabolic activation was further confirmed using an inhibition assay, and the activity of CYP450 enzymes was measured by a kinetic assay analyzing 7-benzyloxyresorufin-O-dealkylation (BROD). Furthermore, a glutathione-reductase-DTNB recycling assay indicated that glutathione might not play a critical role in PA-induced cytotoxicity. A micronucleus test was used for determining the PA-induced clastogenic genotoxicity. All selected PA congeners acted in a concentration-dependent manner in the HepG2 (CYP3A4) cells. The relative potencies of the PA congeners estimated from the Alamar blue assay and the micronucleus assay are generally consistent with the following ranking: lasiocarpine > senecionine > seneciphylline ≥ retrorsine > heliotrine (?) echimidine ≥ europine ≈ indicine ≈ lycopsamine ≈ monocrotaline. Compared to the iREP factors reported by Merz and Schrenk, monocrotaline exhibited considerably lower toxic potency. However, echimidine was more toxic than expected. On the other hand, mutagenicity was measured in the Ames fluctuation assay with Salmonella typhimurium strains TA98 and TA100. None of the selected PA congeners up to 300 µM showed mutagenic effects despite metabolic activation with S9-mix.
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach. They extend a Black-Scholes type security market to include the possibility of a crash. For the modeling of the possible stock price crash they use a Knightian uncertainty approach and thus make no probabilistic assumption on the crash size or the crash time distribution. Based on an indifference argument they determine the optimal portfolio process for an investor who wants to maximize the expected utility from final wealth. In this thesis, the worst-case scenario approach is extended in various directions to enable the consideration of stress scenarios, to include the possibility of asset defaults, and to allow for parameter uncertainty.
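Schematically, and in our own notation rather than necessarily that of the thesis, the worst-case portfolio problem of Korn and Wilmott has the max-min form
\[ \sup_{\pi} \inf_{0 \le k \le k^{*},\, \tau} \mathbb{E}\left[ U\left( X_{T}^{\pi}(k, \tau) \right) \right], \]
where \( \pi \) denotes the portfolio process, \( \tau \) the uncertain crash time, \( k \) the crash size bounded by \( k^{*} \), \( X_{T}^{\pi}(k, \tau) \) the resulting final wealth, and \( U \) the investor's utility function.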
Insurance companies and banks regularly have to face stress tests performed by regulatory instances. In the first part we model their investment decision problem including stress scenarios. This leads to optimal portfolios that are, by construction, already prepared for stress tests. The solution to this portfolio problem uses the newly introduced concept of minimum constant portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor does not know which asset is affected by the worst-case scenario. We solve this problem by introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous-time portfolio optimization problem that includes the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine the worst-case scenario approach with a model ambiguity approach that is also based on Knightian uncertainty. We solve this portfolio problem and consider two concrete examples with box uncertainty and ellipsoidal drift ambiguity.
Risk management is an indispensable component of the financial system. In this context, capital requirements are built by financial institutions to avoid future bankruptcy. Their calculation is based on a specific kind of map, so-called risk measures, of which several forms and definitions exist. Multi-asset risk measures are the starting point of this dissertation. They determine the capital requirements as the minimal amount of money invested into multiple eligible assets to secure future payoffs. The dissertation consists of three main contributions: First, multi-asset risk measures are used to calculate pricing bounds for European type options. Second, multi-asset risk measures are combined with recently proposed intrinsic risk measures to obtain a new kind of risk measure, which we call a multi-asset intrinsic (MAI) risk measure. Third, the preferences of an agent are included in the calculation of the capital requirements. This leads to another new risk measure, which we call a scalarized utility-based multi-asset (SUBMA) risk measure.
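In one standard notation, used here only for orientation, a multi-asset risk measure of a financial position \( X \) takes the form
\[ \rho(X) = \inf \left\{ \pi(Z) \,:\, Z \in \mathcal{M},\; X + Z \in \mathcal{A} \right\}, \]
where \( \mathcal{M} \) is the set of payoffs attainable with the eligible assets, \( \pi \) the corresponding pricing functional, and \( \mathcal{A} \) the acceptance set of the capital adequacy test.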
In the introductory chapter, we recall the definition and properties of multi-asset risk measures. Then, each of the aforementioned contributions covers a separate chapter. In the following, the content of these three chapters is explained in more detail:
Risk measures can be used to calculate pricing bounds for financial derivatives. In Chapter 2, we deal with the pricing of European options in an incomplete financial market model. We use the common risk measures Value-at-Risk and Expected Shortfall to define good deals on a financial market with log-normally distributed rates of return. We show that the pricing bounds obtained from Value-at-Risk may have a non-smooth behavior under parameter changes. Additionally, we find situations in which the seller's bound for a call option is smaller than the buyer's bound. We identify the missing convexity of the Value-at-Risk as the main reason for this behavior. Due to the strong connection between the obtained pricing bounds and the theory of risk measures, we further obtain new insights into the finiteness and the continuity of multi-asset risk measures.
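For reference, in one common sign convention (conventions vary across the literature, and the precise definitions used in the thesis may differ), the two risk measures are given by
\[ \mathrm{VaR}_{\alpha}(X) = \inf \left\{ m \in \mathbb{R} \,:\, P(X + m < 0) \le \alpha \right\}, \qquad \mathrm{ES}_{\alpha}(X) = \frac{1}{\alpha} \int_{0}^{\alpha} \mathrm{VaR}_{u}(X) \, du. \]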
In Chapter 3, we construct the MAI risk measure. Recall that a multi-asset risk measure describes the minimal external capital that has to be raised and invested into multiple eligible assets to make a future financial position acceptable, i.e., such that it passes a capital adequacy test. Recently, the alternative methodology of intrinsic risk measures was introduced in the literature. These ask for the minimal proportion of the financial position that has to be reallocated to pass the capital adequacy test, i.e., only internal capital is used. We combine these two concepts and call this new type of risk measure an MAI risk measure. It allows one to secure the financial position by external capital as well as by reallocating parts of the portfolio as an internal rebooking. We investigate several properties to demonstrate similarities and differences to the two aforementioned classical types of risk measures. We find that diversification reduces the capital requirement only in special situations depending on the financial positions. With the help of Sion's minimax theorem we also prove a dual representation for MAI risk measures. Finally, we determine capital requirements in a model motivated by the Solvency II methodology.
In the final Chapter 4, we construct the SUBMA risk measure. In doing so, we consider the situation in which a financial institution has to satisfy a capital adequacy test, e.g., by the Basel Accords for banks or by Solvency II for insurers. If the financial situation of this institution is tight, then it can happen that no reallocation of the initial endowment would pass the capital adequacy test. The classical portfolio optimization approach breaks down and a capital increase is needed. We introduce the SUBMA risk measure, which optimizes the hedging costs and the expected utility of the institution simultaneously, subject to the capital adequacy test. We find that the SUBMA risk measure is coherent if the utility function has constant relative risk aversion and the capital adequacy test leads to a coherent acceptance set. In a one-period financial market model we present a sufficient condition for the SUBMA risk measure to be finite-valued and continuous. Finally, we calculate the SUBMA risk measure in a continuous-time financial market model for two benchmark capital adequacy tests.
This thesis reports on the investigation of di- and trinuclear coinage metal (Cu, Ag, Au) phosphine complexes with different anion adducts. Several mass spectrometric methods were utilized to investigate the complexes in the gas phase without disturbing influences, e.g., from the solvent. Electrospray Ionization (ESI) enabled the transfer of ions into the gas phase. In order to determine the fragmentation pathways and relative gas phase stabilities of these ions, Collision Induced Dissociation (CID) was used. The binding motifs and structures of the complexes were assigned with the help of Infrared (Multiple) Photon Dissociation (IR-(M)PD) at cryogenic (40 K, N2-tagged) and room temperature (300 K). Electron Transfer Dissociation/Reduction (ETD/R) was used to reduce the dicationic complexes to monocationic complexes. A tunable OPO/OPA laser system and the FELIX free-electron laser were used as IR laser sources. All experimental findings were supported by Density Functional Theory (DFT) calculations. In the first part of this thesis, the binding motifs and fragmentation behavior of the dinuclear coinage metal phosphine complexes with a formate adduct were determined. Two different binding motifs were found, and the Cu-formate binding proved stronger than the Ag-formate binding. The dynamic bonding of hydrogen oxalate to phosphine-ligand-stabilized complexes was investigated in the second part. Several different binding motifs were determined. IR-induced isomeric interconversions were found for the Ag complex, whereas in the case of the Cu complex a stiff hydrogen oxalate coordination seems to suppress such conversions. In the last part of this thesis, the ETD/R method was utilized to unravel the influence of oxidation states on the hydride and deuteride vibration modes of the trinuclear coinage metal complexes as well as of the O2 adduct complexes and fragments of lower complexity via IR-MPD and the FELIX free-electron laser. Unfortunately, an unambiguous assignment of the hydride and deuteride vibration modes is only possible for the fragments of lower complexity.
The main objects of study in this thesis are abelian varieties and their endomorphism rings. Abelian varieties are not just interesting in their own right, they also have numerous applications in various areas such as in algebraic geometry, number theory and information security. In fact, they make up one of the best choices in public key cryptography and more recently in post-quantum cryptography. Endomorphism rings are objects attached to abelian varieties. Their computation plays an important role in explicit class field theory and in the security of some post-quantum cryptosystems.
There are subexponential algorithms to compute the endomorphism rings of abelian varieties of dimension one and two. Prior to this work, all these subexponential algorithms came with a probability of failure and additional steps were required to unconditionally prove the output. In addition, these methods do not cover all abelian varieties of dimension two. The objective of this thesis is to analyse the subexponential methods and develop ways to deal with the exceptional cases.
We improve the existing methods by developing algorithms that always output the correct endomorphism ring. In addition to that, we develop a novel approach to compute endomorphism rings of some abelian varieties that could not be handled before. We also prove that the subexponential approaches are simply not good enough to cover all the cases. We use some of our results to construct a family of abelian surfaces with which we build post-quantum cryptosystems that are believed to resist subexponential quantum attacks, a desirable property for cryptosystems. This has the potential of providing an efficient non-interactive isogeny-based key exchange protocol that is also capable of resisting subexponential quantum attacks and would be the first of its kind.
Membrane proteins are of high pharmacological interest as they are involved in a variety of vital functions. However, to make them accessible to in vitro studies, they often need to be extracted from their natural lipid environment and stabilized with the aid of membrane-mimetic systems. Such membrane mimics can consist of diverse amphiphilic molecules. Small-molecule amphiphiles that can solubilize lipid bilayers, so-called detergents, have been invaluable tools for membrane-protein research in recent decades. Herein, novel small-molecule glyco-amphiphiles embodying three distinct design principles are introduced, and their biophysical and physicochemical properties are investigated. In doing so, the major aims are to establish new promising amphiphiles and to determine structure–efficacy relationships for their synthesis and application.
First, the software package D/STAIN was introduced to facilitate the analysis of demicellization curves obtained by isothermal titration calorimetry. The robustness of the underlying algorithm was demonstrated by analyzing demicellization curves representing large variations in amphiphile concentrations and thermodynamic parameters.
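As a rough illustration of what such an analysis involves (a generic sketch under my own assumptions, not D/STAIN's actual fitting model), the critical micellar concentration can be estimated as the inflection point of a sigmoid fitted to the heats of demicellization:

```python
# Hedged sketch: estimating the CMC from a demicellization isotherm by
# fitting a generic Boltzmann sigmoid (illustrative only; the model and
# algorithm actually used by D/STAIN are described in the thesis).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(c, q_pre, q_post, cmc, width):
    """Sigmoidal heat per injection versus surfactant concentration."""
    return q_post + (q_pre - q_post) / (1.0 + np.exp((c - cmc) / width))

c = np.linspace(0.1, 10.0, 40)                 # injection series (mM)
q = boltzmann(c, -8.0, -1.0, 4.0, 0.4)         # synthetic "data"
q += np.random.default_rng(0).normal(0.0, 0.05, c.size)

popt, _ = curve_fit(boltzmann, c, q, p0=[q[0], q[-1], c.mean(), 1.0])
print(f"estimated CMC ~ {popt[2]:.2f} mM")     # inflection point of the fit
```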
Second, the interactions of diastereomeric cyclopentane maltoside amphiphiles (CPMs) with lipid bilayers and membrane proteins were investigated. To this end, lipid model membranes, cellular membranes, and model membrane proteins were treated with different stereoisomeric CPMs. These investigations pointed out the importance of the stereochemical configuration in the solubilization of lipid bilayers, in the extraction of membrane proteins, and, ultimately, in the stabilization of the latter. CPM C12 emerged as a particularly stabilizing agent.
Third, the influence of a polymerizable group attached to detergent-like amphiphiles was characterized with regard to their micellization, micellar properties, and ability to solubilize lipid membranes. This revealed that such chemical modifications can affect the investigated properties to different degrees. In particular, micellization was influenced substantially, whereas the sizes of the resulting micelles varied only slightly. The polymerizable amphiphiles were shown to solubilize artificial and natural lipid membranes and, consequently, to extract membrane proteins.
Last, the self-assembly of diglucoside amphiphiles bearing either a hydrocarbon or a lipophobic fluorocarbon chain to form native nanodiscs was investigated. It was shown that the presence of a fluorocarbon hydrophobic chain conveys superior stabilization properties onto the amphiphile and the resulting nanodiscs. Moreover, the kinetics of lipid exchange were fundamentally altered by the presence of the fluorocarbon amphiphiles in the nanodisc rim.
In the field of measurement technology, the use of unmanned aerial vehicles is becoming more and more popular. For many measurement tasks, such devices offer considerable advantages in terms of cost and measurement effort. However, the vibrations and disturbances that occur in flight are a significant drawback for several measurement tasks. Within the scope of this work, a platform for measurement devices is developed, designed specifically for use on drones. The task of the platform is to isolate the measurement equipment mounted on it from the disturbances of the drone. For this purpose, we go through the product development process according to VDI 2221 to design a mechanical model of the platform. Then, control strategies are applied to isolate the platform. Since the disturbances acting on a drone are not always stationary, two control strategies known for their ability to handle uncertain systems are used, one of which originates from the field of acoustics.
In this work, photoactive transition-metal complexes of abundant metals such as chromium, vanadium, and copper were investigated. Selected examples with particularly interesting photophysical and photochemical properties with respect to practical applications were characterized spectroscopically. Static and, in particular, time-resolved FTIR and luminescence spectroscopy provided a deeper understanding of the dynamics following photoexcitation. The main goal of this research is to replace rare and expensive elements such as ruthenium and iridium with more abundant metals.
In this context, mononuclear octahedral chromium(III) and vanadium(III) complexes with polypyridyl ligands, synthesized in the group of Prof. Dr. Katja Heinze, were characterized spectroscopically. These systems show promising luminescence properties with red and near-infrared phosphorescence, respectively; particularly high quantum yields and longer lifetimes were observed at low temperatures.
Furthermore, mononuclear chromium(0), molybdenum(0), and tungsten(0) complexes, all synthesized in the group of Prof. Dr. Biprajit Sarkar, were characterized spectroscopically. These are mononuclear complexes with pyridyl-carbene ligands and carbonyl co-ligands exhibiting dual phosphorescence (emission bands in the red and near-infrared region), with the low-energy band interestingly extending to 1300 nm. In addition, all three complexes show photochemical reactivity in organic solution upon intense irradiation with visible or UV light.
Copper(I) complexes relevant to organic light-emitting diodes were analyzed as further promising luminophores (visible emission). On the one hand, dinuclear systems with a central Cu2I2 unit were investigated, which differ from the derivatives of earlier work by fluorination of the phosphine ancillary ligands. These systems were developed in the group of Prof. Dr. Stefan Bräse to improve solubility compared with the non-fluorinated derivatives. The spectroscopic findings of this work show that the introduction of trifluoromethyl groups in particular improves not only the solubility but also the stability. On the other hand, tetranuclear complexes with approximately octahedral Cu4X4 clusters (X = I, Br, Cl) were characterized, some of which exhibit strongly thermochromic luminescence with two clearly separated red and blue phosphorescence bands. For the first time, the origin of this thermochromism could be attributed experimentally to the strong structural changes within the Cu4X4 cluster.
Copper(I) complexes are also promising candidates for use as photosensitizers. A mononuclear copper(I) complex with an extended π-system ligand, provided by the group of Dr. Michael Karnahl, exhibited a long-lived, non-radiative triplet state. In a related project, mono- and dinuclear copper(I) complexes synthesized in the group of Dr. Claudia Bizzarri were investigated, focusing on the influence of dimerization (covalent linkage of two mononuclear complexes) or protonation of a ligand on the photophysical properties.
Demonstrating perception without visual awareness: Double dissociations between priming and masking
(2022)
A double dissociation impressively demonstrates that visual perception and visual awareness can be independent of each other and do not have to rely on the same source of information (T. Schmidt & Vorberg, 2006). Traditionally, an indirect measure of stimulus processing and a direct measure of visual awareness are compared (dissociation paradigm or classic dissociation paradigm, Erdelyi, 1986; formally described by Reingold & Merikle, 1988; Merikle & Reingold, 1990; Reingold, 2004). If both measures exhibit opposite time courses, a double dissociation is demonstrated. One tool that is well suited to measuring stimulus processing as fast visuomotor response activation is the response priming method (Klotz & Neumann, 1999; Klotz & Wolff, 1995; see also F. Schmidt et al., 2011; Vorberg et al., 2003). Typically, observers perform speeded responses to a target stimulus preceded by a prime stimulus, which can trigger the same motor response by sharing consistent features (e.g., shape) or a different response due to inconsistent features. While consistent features speed up motor responses, inconsistent trials can induce response conflicts and result in slowed responses. These response time differences constitute the response priming effect (Klotz & Neumann, 1999; Klotz & Wolff, 1995; see also F. Schmidt et al., 2011; Vorberg et al., 2003). The theoretical background of this method is provided by Rapid-Chase Theory (T. Schmidt et al., 2006, 2011; see also T. Schmidt, 2014), which assumes that priming is based on neuronal feedforward processing within the visuomotor system. Lamme and Roelfsema (2000; see also Lamme, 2010) claim that this feedforward processing does not generate visual awareness because neuronal feedback and recurrent processes are needed. Fascinatingly, while prime visibility can be manipulated by visual masking techniques (Breitmeyer & Öğmen, 2006), priming effects can still increase over time. Masking effects are used as a direct measure of prime awareness. Based on their time course, type-A and type-B masking functions are distinguished (Breitmeyer & Öğmen, 2006; see also Albrecht & Mattler, 2010, 2012, 2016). Type-A masking is most common, with a typically increasing function over time. In contrast, type-B masking functions, which show a decreasing or u-shaped time course, are rarely observed; this masking type is usually found only under metacontrast backward masking (Breitmeyer & Öğmen, 2006; see also Albrecht & Mattler, 2010, 2012, 2016). While priming effects are expected by Rapid-Chase Theory to increase over time (T. Schmidt et al., 2006, 2011; see also T. Schmidt, 2014), the masking effect can show an opposite trend with a decreasing or u-shaped type-B masking curve, forming a double dissociation.
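In compact form (notation mine, not taken from the thesis), the response priming effect at prime-target interval \( t \) is the response time difference

\[ \mathrm{PE}(t) = \overline{\mathrm{RT}}_{\mathrm{inconsistent}}(t) - \overline{\mathrm{RT}}_{\mathrm{consistent}}(t), \]

and a double dissociation is present when \( \mathrm{PE}(t) \) increases with \( t \) while a direct measure of prime visibility, such as discrimination sensitivity \( d'(t) \), decreases or is u-shaped (type-B masking).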
In empirical practice, double dissociations are a rarity; historically, simple dissociations have been the favored data pattern to demonstrate perception without awareness, despite suffering from statistical measurement problems (T. Schmidt & Vorberg, 2006). Motivated by this shortcoming, I aim to demonstrate that a double dissociation is the most powerful and convincing data pattern providing evidence that visual perception does not necessarily generate visual awareness, since both processes are based on different neuronal mechanisms. I investigated which experimental conditions allow for a double dissociation between priming and prime awareness. The first set of experiments demonstrated that a double-dissociated pattern between priming and masking can be induced artificially and that the technique of induced dissociations is of general utility. The second set of experiments used two awareness measures (objective vs. subjective) and a response priming task in various combinations, resulting in different task settings (single, dual, and triple tasks). The experiments revealed that some task types constitute an unfavorable experimental environment that can prevent a double dissociation from occurring naturally, especially when pure feedforward processing of the stimuli appears to be disturbed. The present work provides further important findings. First, stimulus perception and stimulus awareness show a general dissociability in most participants, supporting the idea that different neuronal processes are responsible for this kind of data pattern. Second, any direct awareness measure (whether objective or subjective) is highly observer-dependent, requiring analysis at the level of single participants. Third, a deep analysis of priming effects at the micro level (e.g., checking for fast errors) can provide further insights into the processing of different visual stimuli (e.g., shape vs. color) and under changing experimental conditions (e.g., single vs. triple tasks).
The knowledge of structural properties in microscopic materials contributes to a deeper understanding of macroscopic properties. For the study of such materials, several imaging techniques reaching scales in the order of nanometers have been developed. One of the most powerful and sophisticated imaging methods is focused-ion-beam scanning electron microscopy (FIB-SEM), which combines serial sectioning by an ion beam with imaging by a scanning electron microscope. FIB-SEM imaging reaches extraordinary scales below 5 nm with large representative volumes. However, the complexity of the imaging process introduces artificial distortions and artifacts that degrade image quality. We introduce a method for the quality evaluation of images that analyzes general image characteristics as well as artifacts exclusive to FIB-SEM, namely curtaining and charging. For the evaluation, we propose quality indexes, which are tested on several data sets of porous and non-porous materials with different characteristics and distortions. The quality indexes provide objective evaluations in accordance with visual judgment.
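To make the idea concrete, here is a minimal sketch of how a single artifact-specific index might be computed (a hypothetical FFT-based curtaining indicator, assumed for illustration; the quality indexes proposed in the thesis differ):

```python
# Hedged sketch of an FFT-based curtaining indicator. Curtaining appears
# as vertical stripes, whose energy concentrates in a narrow horizontal
# band (ky ~ 0) of the 2D Fourier spectrum.
import numpy as np

def curtaining_index(image: np.ndarray, band: int = 3) -> float:
    """Fraction of (non-DC) spectral energy near ky = 0; higher values
    suggest stronger vertical-stripe artifacts."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy, cx] = 0.0                        # drop the DC component
    stripes = spectrum[cy - band : cy + band + 1, :].sum()
    return float(stripes / spectrum.sum())

# A striped test image scores higher than pure noise:
rng = np.random.default_rng(0)
noise = rng.normal(size=(256, 256))
striped = noise + 2.0 * np.sin(np.arange(256) * 0.5)[None, :]
print(curtaining_index(noise), curtaining_index(striped))
```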
Moreover, the acquisition of large volumes at high resolution can be time-consuming. One approach to speeding up the imaging is to decrease the resolution and to consider cuboidal voxel configurations. However, non-isotropic resolutions may lead to errors in the reconstructions, and even if the reconstruction is correct, effects are visible in the analysis. We study the effects of different voxel settings on the prediction of material and flow properties of reconstructed structures. The results show good agreement between highly resolved cases and ground truths, as expected. Structural anisotropy appears as the resolution decreases, especially in anisotropic grids. Nevertheless, gray-value image interpolation remedies the induced anisotropy, and these benefits are visible in the flow properties as well.
For highly porous structures, the structural reconstruction is even more difficult because deeper parts of the material are visible through the pores. As an application example, we show the reconstruction of two highly porous optical-layer structures, following a typical workflow from image acquisition and preprocessing through reconstruction to spatial analysis. The case study shows the advantages of 3D imaging for porous optical layers, and the analysis reveals geometric structural properties related to the manufacturing processes.
Sequence learning describes the process of understanding the spatio-temporal relations in a sequence in order to classify it, label its elements, or generate new sequences. Due to the prevalence of structured sequences in nature and everyday life, it has many practical applications, including any language-related processing task. One such task that has recently seen success using sequence learning techniques is optical character recognition (OCR).
State-of-the-art sequence learning solutions for OCR achieve high performance through supervised training, which requires large amounts of transcribed training data. On the other hand, few solutions have been proposed for applying sequence learning in the absence of such data, which is especially common for hard-to-transcribe historical documents. Rather than solving the unsupervised training problem, research has focused on creating efficient methods for collecting training data through smart annotation tools or on generating synthetic training data. These solutions come with various limitations and do not solve all of the related problems.
In this work, the use of erroneous transcriptions for supervised sequence learning is first introduced, and it is described how this concept can be applied in unsupervised training scenarios by collecting or generating such transcriptions. The proposed OCR pipeline reduces the domain-specific expertise needed to apply OCR, with the goal of making it more accessible. Furthermore, an approach for evaluating sequence learning OCR models in the absence of reference transcriptions is presented, and its properties are compared with those of the standard method. In a second approach, unsupervised OCR is treated as an alignment problem between the latent features of the different language modalities. The outlined solution is to extract language properties from both the text and image domains through adversarial training and to learn to align them by adding a cycle consistency constraint (a sketch follows below). The proposed approach has some strict limitations on the input data, but the results encourage future research into more widespread applications.
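A minimal sketch of such a cycle consistency constraint, under assumed latent dimensions and linear mappers (the thesis's actual architecture differs):

```python
# Hedged sketch of a cycle consistency constraint between text and image
# latent spaces; dimensions and linear mappers are assumptions made for
# illustration, not the architecture used in the thesis.
import torch
import torch.nn as nn

d_img, d_txt = 256, 256                  # assumed latent dimensions
G_it = nn.Linear(d_img, d_txt)           # image latent -> text latent
G_ti = nn.Linear(d_txt, d_img)           # text latent  -> image latent

def cycle_loss(z_img: torch.Tensor, z_txt: torch.Tensor) -> torch.Tensor:
    """L1 penalty for failing to return to the start after a round trip."""
    img_cycle = G_ti(G_it(z_img))        # image -> text -> image
    txt_cycle = G_it(G_ti(z_txt))        # text -> image -> text
    return (img_cycle - z_img).abs().mean() + (txt_cycle - z_txt).abs().mean()

# This term would be added to the adversarial losses during training:
loss = cycle_loss(torch.randn(8, d_img), torch.randn(8, d_txt))
```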
Reinforcing sandy soils with tyre rubber chips is a novel technology under investigation to optimize its engineering application. Previous studies concentrated on the static behaviour, and very few addressed the cyclic and dynamic behaviour of sand-rubber mixtures, leaving gaps that need to be closed.
This research focuses on evaluating the static, cyclic, and dynamic behaviour of sand-rubber mixtures. The basic properties of sands S2, S3, and S4, of rubber chips, and of sand-rubber chip mixtures with 10/20/30% rubber chip content by dry mass were first evaluated in order to obtain the parameters essential for subsequent testing. Oedometer tests, direct shear tests with a larger 300 × 300 mm box, and static triaxial compression tests were performed to assess the static behaviour of the composite material. Furthermore, dynamic cyclic triaxial tests were performed to evaluate the cyclic behaviour of saturated, dry, and wet mixtures. All specimens were first isotropically consolidated at 100 kPa. For saturated material, a static deviatoric stress of 45 kPa was imposed prior to cycling to simulate the anisotropic consolidation condition in the field. Cycling was applied stress-controlled with an amplitude of 50 kPa. Both undrained and drained tests were performed. Cyclic tests in dry or wet conditions were also performed under anisotropic consolidation with different stress amplitudes. For all cyclic tests, the loading frequency was 1 Hz. The dynamic behaviour of the mixtures was investigated with resonant column tests. Calibration was first performed, yielding a frequency-dependent drive head inertia. Wet mixture specimens were prepared at a relative density of 50% and tested at various confining stresses. All specimens tested in both the triaxial and the resonant column apparatus were 100 mm in diameter. The results from the entire investigation are promising.
In summary, rubber chips in the range of 4 to 14 mm mixed with sands were found to increase the shear resistance of the mixtures. They lead to an increase of the cyclic resistance under saturated conditions, to a decrease of stiffness, and to an increase of the damping ratio. Increased confining stress increased the shear modulus reduction and decreased the damping ratio of the mixtures, while increased rubber content increased both. Several new design equations were proposed that can be used to compute the compression deformation, pore pressure ratio, maximum shear modulus, and minimum damping ratio, as well as the modulus reduction with shear strain. Finally, a chip content of around 20% to 30% by dry mass can be used to reinforce sandy soils. The use of this novel composite material in civil engineering applications could consume a large volume of scrap tyres and, at the same time, contribute to a cleaner environment and the preservation of natural resources.
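For orientation, modulus reduction with shear strain is commonly described in the literature by a hyperbolic law of the Hardin-Drnevich type (shown here as a standard textbook form, not as one of the specific design equations proposed in this thesis):

\[ \frac{G}{G_{\max}} = \frac{1}{1 + \gamma/\gamma_{\mathrm{ref}}}, \]

where \( G_{\max} \) is the small-strain shear modulus and \( \gamma_{\mathrm{ref}} \) a reference shear strain calibrated from the test data.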
Automation of the Hazard and Operability Method Using Ontology-based Scenario Causation Models
(2022)
The hazard and operability (HAZOP) method is widely used in the chemical and process industries to identify and evaluate hazards. Due to its human-centered nature, it is time-consuming, and the results depend on the team composition. In addition, time pressure, the type of implementation, the experience of the participants, and participant involvement affect the results. This research aims to digitize the HAZOP method. The investigation shows that knowledge-based systems with ontologies for knowledge representation are suitable to achieve this objective. Complex interdisciplinary knowledge regarding facility, process, substance, and site information must be represented to perform the task. Results of this work include a plant part taxonomy and an object-oriented equipment entity library. During ontology development, typical HAZOP scenarios, as well as their structure, components, and underlying causal model, were investigated. Based on these observations, semantic relationships between the scenario components were identified. The likelihood of causes and the severity of consequences were determined as part of an automatic risk assessment that uses a risk matrix to determine safeguards reliably. An inference algorithm based on semantic reasoners and case-based reasoning was developed to exploit the ontology and evaluate the input data object containing the plant representation. Topology was taken into account, including the propagation of sub-scenarios through plant parts. The results of the developed knowledge-based system are automatically generated HAZOP worksheets. The evaluation of the achieved results was based on representative case studies in which the relevance, comprehensibility, and completeness of the automatically identified scenarios were considered. The achieved results were compared with conventionally prepared HAZOP tables for benchmarking. Owing to the particular attention paid to the causal relationships between scenario components, the risk assessment, and the consideration of safeguards, the quality of the automatically generated results was comparable to conventional HAZOP worksheets. This research shows that formal ontologies are suitable for representing complex interdisciplinary knowledge in the field of process and plant safety. The results contribute to the use of knowledge-based systems for digitizing the HAZOP method. When used correctly, knowledge-based systems can help decrease the preparation time and repetitious nature of HAZOP studies and standardize results.
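As a toy illustration of the risk-matrix step (categories, scores, and thresholds are assumptions for illustration, not those encoded in the thesis's ontology):

```python
# Hedged sketch of an automatic risk assessment via a risk matrix.
LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3}
SEVERITY = {"minor": 1, "serious": 2, "catastrophic": 3}

def assess(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair to a risk class; high-risk
    scenarios would then trigger the selection of safeguards."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high: safeguard required"
    if score >= 3:
        return "medium: review"
    return "low: acceptable"

print(assess("frequent", "serious"))  # -> "high: safeguard required"
```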
Recommender systems recommend items (e.g., movies, products, books) to users. In this thesis, we proposed two comprehensive and cluster-induced recommendation-based methods: Orthogonal Inductive Matrix Completion (OMIC) and Burst-induced Multi-armed Bandit (BMAB). Given the presence of side information, the first method is categorized as context-aware. OMIC is the first matrix completion method to approach the problem of incorporating biases, side information terms, and a pure low-rank term into a single flexible framework with a well-principled optimization procedure. The second method, BMAB, is context-free; that is, it does not require any side data about users or items. Unlike previous context-free multi-armed bandit approaches, our method considers the temporal dynamics of human communication on the web and treats the problem in a continuous-time setting. We grounded our models' assumptions in solid theoretical foundations. For OMIC, we provided theoretical guarantees in the form of generalization bounds by considering the distribution-free case: no assumptions about the sampling distribution are made. Additionally, we conducted a theoretical analysis of community side information for the case in which the sampling distribution is known and an adjusted nuclear norm regularization is applied. We showed that our method requires just a few entries to accurately recover the ratings matrix if the structure of the ground truth closely matches the cluster side information. For BMAB, we provided regret guarantees under mild conditions that demonstrate how the system's stability affects the expected reward. Furthermore, we conducted extensive experiments to validate our proposed methodologies. In a controlled environment, we implemented synthetic data generation techniques capable of replicating the domains for which OMIC and BMAB were designed. As a result, we were able to analyze our algorithms' performance across a broad spectrum of ground truth regimes. Finally, we replicated a real-world scenario by utilizing well-established recommender datasets. After comparing our approaches to several baselines, we observed that they achieve state-of-the-art accuracy. Apart from being highly accurate, these methods improve interpretability by describing and quantifying features of the datasets they characterize.
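As a hedged sketch of the kind of objective OMIC addresses (notation mine; the exact formulation, including the orthogonality conditions, is given in the thesis), a ratings matrix \( M \) observed on the entry set \( \Omega \) can be decomposed into bias terms, a side-information term, and a pure low-rank residual:

\[ \min_{b,\,Z,\,R} \sum_{(u,i) \in \Omega} \big( M_{ui} - b_u - b_i - (X Z Y^{\top})_{ui} - R_{ui} \big)^2 + \lambda \lVert R \rVert_{*}, \]

where \( X \) and \( Y \) carry user and item side information (e.g., cluster memberships) and \( \lVert \cdot \rVert_{*} \) is the (possibly adjusted) nuclear norm promoting a low-rank \( R \).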
On a route from whole genome duplication to aneuploidy and cancer: consequences and adaptations
(2022)
Whole genome duplication (WGD) is commonly accepted as an intermediate state between healthy cells and aneuploid cancer cells. Usually, cells are removed from the replicating pool after WGD by p53-dependent cell cycle arrest or apoptosis. Cells that are able to bypass these mechanisms exhibit chromosomal instability (CIN) and DNA damage, promoting the formation of highly aneuploid karyotypes. In general, WGD favors several detrimental consequences such as increased drug resistance, transformation, and metastasis formation. Therefore, it is of special interest to investigate the limiting factors and consequences of tetraploid proliferation as well as the adaptations to WGD. In the past, it has been difficult to study the consequences of such large-scale genomic changes and how cells adapt to tetraploidy in order to survive. Our lab established protocols to generate tetraploids as well as isolated post-tetraploid/aneuploid single-cell clones derived from euploid parental cell lines after induction of cytokinesis failure. This system makes it possible to study the consequences of and adaptations to WGD in newly generated tetraploid cells and evolved post-tetraploid clones in comparison to their isogenic parental cell line.
Using newly generated tetraploids from HCT116 cells, we identified USP28 and SPINT2 as novel factors limiting proliferation after WGD. Using mass spectrometry and immunoprecipitation, we revealed an interaction between USP28 and NuMA1 upon WGD, which affects the coalescence of supernumerary centrosomes, an important process that enhances the survival of tetraploids. Furthermore, we validated the occurrence of DNA damage in tetraploid cells and found that USP28 depletion diminished the DNA damage-dependent checkpoint activation. SPINT2 influences proliferation after WGD by regulating the transcription of CDKN1A via histone acetylation. By following proliferating tetraploid cells, we confirmed the activation of the DNA damage response (DDR) by immunoblotting and microscopic approaches, and we show that the DDR is reduced in the arising post-tetraploid clones. Further experiments verified the appearance of severe mitotic aberrations, replication stress, and the accumulation of reactive oxygen species in newly generated tetraploids as well as in aneuploid cancer cells, contributing to the occurrence of DNA damage. Using various drug treatments, we observed an increased dependency on the spindle assembly checkpoint in aneuploid cancer cells compared to their diploid parental cell line. Additionally, siRNA knockdown experiments revealed the kinesin motor protein KIF18A as an essential protein in aneuploid cells.
Taken together, the results point out cellular consequences of proliferation after tetraploidization as well as the cellular adaptations needed to cope with the increased amount of DNA.
The ability to categorize is a fundamental cognitive skill for animals, including human beings. Our lives would be utterly confusing without categories. We would feel overwhelmed or miss out on important aspects of our environment if we perceived every single entity as one-of-a-kind. Therefore, categorization is of great importance for perception, learning, remembering, decision making, performing an action, certain aspects of social interaction, and reasoning. The seemingly effortless and instantaneous ability to transform sensory information into meaningful categories determines the success of interacting with our environment. However, the apparent ease with which we use categorization and categories conceals the complexity of the underlying brain processing that makes categorization and categorical representations possible. Therefore, the question arises: how is categorical information encoded and represented in the brain?
Multi-omics analysis as a tool to investigate causes and consequences of impaired genome integrity
(2022)
Impaired genome integrity has severe consequences for the viability of any cell. Unrepaired DNA lesions can lead to genomically unstable cells, which often become predisposed to malignant growth and tumorigenesis, where genomic instability turns into a driving factor through the selection of more aggressive clones. Aneuploidy and polyploidy are both poorly tolerated in somatic cells but are frequently observed hallmarks of cancer. Keeping the genome intact requires the concerted action of cellular metabolism, the cell cycle, and the DNA damage response.
This study presents multi-omics analysis as a versatile tool to understand the various causes and consequences of impaired genome integrity. The possible computational approaches are demonstrated on three different datasets. First, an analysis of a collection of DNA repair experiments is shown, which features the creation of a high-fidelity dataset for the identification and characterization of DNA damage factors. Additionally, a web application is presented that allows scientists without a computational background to interrogate this dataset. Further, the consequences of chromosome loss in human cells are analyzed by an integrated analysis of TMT-labeled mass spectrometry and sequencing data. This analysis revealed heterogeneous cellular responses to chromosome losses that differ from those to chromosome gains. It further revealed that cells possess both transcriptional and post-transcriptional mechanisms that compensate for the loss of genes encoded on a monosomic chromosome to alleviate the detrimental consequences of reduced gene expression. In my final project, I present a multi-omics analysis of data obtained from SILAC-labeled mass spectrometry and dynamic transcriptome analysis of yeast cells of different ploidy, from haploid to tetraploid. This analysis revealed that, unlike cell volume, the proteome of a cell does not scale linearly with increasing ploidy. While the expression of most proteins followed this scaling, several proteins showed ploidy-dependent regulation that could not be explained by transcriptome expression; hence, this ploidy-dependent regulation occurs mostly on a post-transcriptional level. The analysis uncovered that ribosomal and translation-related proteins are downregulated with increasing ploidy, emphasizing a remodeling of the cellular proteome in response to increasing ploidy to ensure the survival of cells after whole genome doubling. Altogether, this study intends to show how state-of-the-art multi-omics analysis can uncover cellular responses to impaired genome integrity in a highly diverse field of research.
Like many other bacteria, the opportunistic pathogen P. aeruginosa encodes a broad network of enzymes that regulate the intracellular concentration of the second messenger c-di-GMP. One of these enzymes is the phosphodiesterase NbdA, which consists of three domains: a membrane-anchored, putative sensory MHYT domain, a non-functional diguanylate cyclase domain with a degenerate GGDEF motif, and an active PDE domain with an EAL motif. Analysis of the nbdA open reading frame by 5'-RACE PCR revealed an erroneous annotation of nbdA in the Pseudomonas database, with the ORF being 170 bp shorter than previously predicted. The newly defined promoter region of nbdA contains recognition sites for the alternative sigma factor RpoS as well as the transcription factor AmrZ. Promoter analysis in the PAO1 wild type as well as in rpoS and amrZ mutant strains, utilizing transcriptional fusions of the nbdA promoter to the reporter gene lacZ, revealed transcriptional activation of nbdA by RpoS in the stationary growth phase and transcriptional repression by AmrZ. Additionally, no influence of nitrite or of exogenous or endogenous NO on nbdA transcription could be shown in this study. However, deletion of the nitrite reductase gene nirS led to a strong increase of nbdA promoter activity, which needs to be characterized further. Predicted secondary structures of the 5'-UTR of the nbdA mRNA indicated either an RNA thermometer function of the mRNA or post-transcriptional regulation of nbdA by the RNA-binding proteins RsmA and RsmF. Nevertheless, translational studies using fusions of the 5'-UTR of nbdA to the reporter gene bgaB did not verify either of these hypotheses. In general, nbdA translational levels were very low, and neither the production of the reporter BgaB nor genomically encoded NbdA could be detected on a western blot. Overproduction of NbdA variants induced many phenotypic changes in motility and biofilm formation. However, strains overproducing variants containing the MHYT domain showed greatly elongated cells and were impaired in surface growth, indicating an imbalance in membrane protein homeostasis; these phenotypes therefore have to be interpreted very critically. Microscopic studies with fluorescently tagged NbdA revealed either a diffuse fluorescent signal or the formation of fluorescent foci located mainly at the cell poles. Co-localization studies with the polar flagellum and the chemotaxis protein CheA showed that NbdA does not generally localize to the flagellated cell pole. The localization of NbdA indicates the control of a specific local c-di-GMP pool in the cell, which is most likely involved in MapZ-mediated chemotactic flagellar motor switching.
Multicore processors and Multiprocessor System-on-Chip (MPSoC) have become essential in Real-Time Systems (RTS) and Mixed-Criticality Systems (MCS) because their additional computing capabilities help reduce Size, Weight, and Power (SWaP), required wiring, and associated costs. In distributed systems, a single shared multicore or MPSoC node executes several applications, possibly of different criticality levels. However, applications interfere with each other due to contention for shared resources such as CPU cores, caches, memory, and the network.
Existing allocation and scheduling methods for RTS and MCS often rely on implicit assumptions of the constant availability of individual resources, especially the CPU, to guarantee the progress of tasks. Most existing approaches aim to resolve contention in only one specific shared resource or a set of specific shared resources. Moreover, they handle only a limited number of events, such as task arrivals and task completions.
In distributed RTS and MCS with several nodes, each having multiple resources, if the applications, resource availability, or system configurations change, obtaining assumptions about resources becomes complicated. Thus, it is challenging to meet end-to-end constraints by considering each node, resource, or application individually.
Such RTS and MCS need global resource management to coordinate and dynamically adapt the system-wide allocation of resources. In addition, the resource management can dynamically adapt applications to the changing availability of resources and maintain a system-wide (global) view of resources and applications.
The overall aim of global resource management is twofold.
Firstly, it must ensure that real-time applications meet their end-to-end deadlines even in the presence of faults and changing environmental conditions. Secondly, it must provide efficient resource utilization to improve the Quality of Service (QoS) of co-executing Best-Effort (BE), or non-critical, applications.
A single fault in global resource management can render it useless. In the worst case, the resource management can make faulty decisions, leading to a deadline miss in real-time applications. With the advent of Industry 4.0, cloud computing, and the Internet of Things (IoT), it has become essential to combine stringent real-time constraints and reliability requirements with the need for an open-world assumption and to ensure that the global resource management does not become an inviting target for attackers.
In this dissertation, we propose a domain-independent global resource management framework for distributed RTS and MCS consisting of heterogeneous nodes based on multicore processors or MPSoCs. We initially developed the framework with the French Aerospace Lab -- ONERA and Thales Research & Technology during the DREAMS project and later extended it during SECREDAS and other internal projects. Unlike previous resource management frameworks for RTS and MCS, we consider both safety and security for the framework itself.
To enable real-time industries to use cloud computing and enter a new market segment -- real-time operation as a cloud-based service -- we propose a Real-Time-Cloud (RT-Cloud) based on global resource management for hosting RTS and MCS.
Finally, we present a mixed-criticality avionics use case for evaluating the capabilities of the global resource management framework in handling permanent core failures and temporary overload conditions, and a railway use case to motivate the use of RT-Cloud with global resource management.