Synapses are the fundamental structures that regulate the functionality of neural circuits. The ability of a synapse to rapidly modulate its structure and function in response to sensory inputs gives the nervous system the capacity to incorporate new adaptations and behaviors. Synapses remain highly dynamic throughout the life of the animal, beginning in early development: nearly all synapses undergo continuous events of synapse formation and elimination, and of activation and inhibition of synaptic function. These processes occur at high speed and require tightly controlled cellular mechanisms. Imbalance in these processes results in a defective nervous system and has been reported in many neurological disorders. It is therefore important to understand the mechanisms that regulate synapse development, maintenance, and function.
Kinases and phosphatases are key regulators of cellular mechanisms, and understanding their function in neurons will shed light on the molecular mechanisms of synaptic plasticity. Using the Drosophila melanogaster larval neuromuscular junction as a model, Bulat et al. (2014) performed a large RNAi-based screen targeting the kinome and phosphatome of Drosophila to identify essential kinases and phosphatases, and found Myeloid leukemia factor-1 adaptor molecule (Madm) and Protein phosphatase 4 (PP4) to be novel regulators of synapse development and maintenance. Since the function of these molecules in the nervous system had not been reported, I investigated the role of Madm and PP4 in the regulation of synapse development, maintenance, and function.
Myeloid leukemia factor-1 adaptor molecule (Madm) is a ubiquitously expressed pseudokinase that is essential for regulating synaptic growth, stability, and function. Using a combination of genetics and high-throughput imaging, I demonstrated that Madm regulates synaptic growth and stability from the presynapse and synaptic organization from the postsynapse. I further demonstrated that Madm functions in association with the mTOR pathway, acting downstream of 4E-BP, to regulate synaptic growth. In addition, using electrophysiology, we demonstrated that Madm is essential for basal synaptic transmission, with an additional function in retrograde synaptic potentiation. In summary, Madm is a novel regulator of synaptic development, maintenance, and function.
Protein phosphatase 4 (PP4) is a ubiquitously expressed protein phosphatase involved in the regulation of multiple aspects of the nervous system. I demonstrated that PP4 is essential for the development of the nervous system and for metamorphosis. Using genetic and imaging analyses, I demonstrated that loss of PP4 results in abnormal morphology of cell organelles and in defective brain development with poorly developed structures.
Altogether, this study demonstrates the importance of two novel molecules, the pseudokinase Madm and the protein phosphatase PP4, in regulating distinct aspects of the neuron.
With the burgeoning computing power available, multiscale modelling and simulation have become increasingly capable of capturing the details of physical processes on different scales. The mechanical behavior of solids is often the result of interactions across multiple spatial and temporal scales and is thus a typical phenomenon of multiscale character. At the most basic level, the properties of solids can be attributed to atomic interactions and crystal structure, which are described on the nanoscale; mechanical properties at the macroscale are modeled using continuum mechanics, in terms of stresses and strains. Continuum models offer an efficient way of studying material properties, but they are not accurate enough and lack the microstructural information behind the microscopic mechanics that cause a material to behave the way it does. Atomistic models, in contrast, are concerned with phenomena at the level of the lattice, allowing investigation of detailed crystalline and defect structures, yet the length scales of interest are inevitably far beyond the reach of full atomistic computation, which is prohibitively expensive. This makes multiscale models necessary. A possible avenue to this end is to couple the different length scales, the continuum and the atomistic, in accordance with standard procedures. This is done by recourse to the Cauchy-Born rule, and in so doing we aim at a model that is efficient and reasonably accurate in mimicking physical behaviors observed in nature or in the laboratory. In this work, we focus on concurrent coupling based on an energetic formulation that links the continuum to the atomistics. At the atomic scale, the deformation of the solid is described by the displaced positions of the atoms that make it up; at the continuum level, it is described by the displacement field that minimizes the total energy.
In the coupled continuum-atomistic model, a continuum formulation is retained as the overall framework of the problem, and the atomistic features enter by way of the constitutive description, with the Cauchy-Born rule establishing the point of contact. The entire formulation is set in the framework of nonlinear elasticity, and all simulations are carried out in a quasistatic setting. The model gives a direct account of measurable features of the microstructures developed by crystals through sequential lamination.
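The Cauchy-Born constitutive link can be illustrated with a toy computation: the continuum strain-energy density at a material point is obtained by deforming a representative atomic neighborhood homogeneously with the local deformation gradient and summing the interatomic energies per unit reference area. The sketch below uses a 2D square lattice and a Lennard-Jones pair potential; the lattice, the potential, and all parameter values are illustrative stand-ins, not the crystal model of the thesis.

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential at separation r (illustrative choice)."""
    s = (sigma / r) ** 6
    return 4.0 * eps * (s * s - s)

def cauchy_born_energy_density(F, a=1.0, cutoff=2.5):
    """Strain-energy density W(F) via the Cauchy-Born rule: every atom of a
    reference 2D square lattice is displaced homogeneously by the local
    deformation gradient F, and pair energies within a cutoff are summed
    per unit reference area."""
    n = int(np.ceil(cutoff / a)) + 1
    W = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if i == 0 and j == 0:
                continue
            X = np.array([i * a, j * a])          # undeformed lattice vector
            if np.linalg.norm(X) > cutoff:
                continue
            x = F @ X                              # homogeneously deformed vector
            W += 0.5 * lennard_jones(np.linalg.norm(x))  # half: bond shared by two atoms
    return W / a ** 2                              # energy per reference cell area

F0 = np.eye(2)                   # undeformed configuration
F1 = np.diag([1.02, 1.0])        # 2% uniaxial stretch
W0 = cauchy_born_energy_density(F0)
W1 = cauchy_born_energy_density(F1)
```

Frame indifference holds by construction: a rotation applied to F leaves the energy density unchanged, since only the deformed bond lengths enter the sum.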
Understanding human crowd behaviour has been an intriguing topic of interdisciplinary research in recent decades. Modelling crowd dynamics using differential equations is an indispensable approach to unraveling the various complex dynamics involved in such interacting particle systems. Numerical simulation of pedestrian crowds via these mathematical models allows us to study realistic scenarios beyond the limitations of controlled experiments.
In this thesis, the main objective is to understand and analyse the dynamics in a domain shared by both pedestrians and moving obstacles. We model pedestrian motion by combining the social force concept with the idea of optimal path computation. This leads to a system of ordinary differential equations governing the dynamics of individual pedestrians via the interaction forces (social forces) between them. Additionally, a non-local force term involving the optimal path and desired velocity governs the pedestrian trajectory. The optimal path computation involves solving a time-independent Eikonal equation, which is coupled to the system of ODEs. A hydrodynamic model is developed from this microscopic model via the mean-field limit.
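A minimal sketch may help fix the structure of this ODE system: each pedestrian relaxes toward a desired velocity along their optimal path, while repulsive social forces act between pairs. In the thesis the desired direction comes from the Eikonal solution; in this illustrative sketch it is replaced by the straight direction to a common goal, and all parameter values are assumptions, not calibrated quantities.

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit Euler step of a minimal social force model.
    pos, vel: (N, 2) arrays of positions and velocities; goal: common target.
    v0: desired speed, tau: relaxation time, A/B: repulsion strength/range."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        e = goal - pos[i]                      # desired direction (straight path here)
        e = e / np.linalg.norm(e)
        acc[i] = (v0 * e - vel[i]) / tau       # driving (relaxation) force
        for j in range(len(pos)):              # pairwise repulsive social forces
            if i == j:
                continue
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            acc[i] += A * np.exp(-r / B) * d / r
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

# two pedestrians walking toward a common exit
pos = np.array([[0.0, 0.0], [0.0, 1.0]])
vel = np.zeros_like(pos)
goal = np.array([10.0, 0.5])
for _ in range(100):                           # 5 seconds of simulated time
    pos, vel = social_force_step(pos, vel, goal)
```

The driving term makes each pedestrian approach the desired speed on the time scale tau, while the exponential repulsion keeps neighbors apart without preventing overall progress.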
To consider the interaction with moving obstacles in the domain, we model a set of kinematic equations for the obstacle motion. Two kinds of obstacles are considered: "passive" obstacles, which move along predefined trajectories and have only a one-way interaction with pedestrians, and "dynamic" obstacles, which have a feedback interaction with pedestrians and whose trajectories change dynamically. The coupled model of pedestrians and obstacles is used to discern pedestrian collision avoidance behaviour in different computational scenarios in a long rectangular domain. We observe that pedestrians avoid collisions through route-choice strategies that involve changes in speed and path. We extend this model to consider the interaction between pedestrians and vehicular traffic, modelling the interactions of vehicles following lane traffic with a car-following approach. We observe how the deceleration and braking of vehicles is executed at pedestrian crossings depending on the right of way on the road.
As a second objective, we study disease contagion in moving crowds, considering the influence of crowd motion in a complex dynamical environment on the course of infection of pedestrians. A hydrodynamic model for multi-group pedestrian flow is derived from kinetic equations based on a social force model. It is coupled, along with an Eikonal equation, to a non-local SEIS contagion model for disease spread. Here, apart from the description of local contacts, the influence of contact times is also modelled. We observe that the nature of the flow and the geometry of the domain lead to changes in density, which affect the contact time and, consequently, the rate of spread of infection.
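For orientation, the compartment structure of an SEIS model can be sketched in isolation, with a density-dependent contact rate standing in for the crowd coupling. This is a drastically simplified, spatially homogeneous caricature of the non-local model in the thesis; all rates and the linear density dependence are illustrative assumptions.

```python
def seis_step(S, E, I, beta, sigma, gamma, dt):
    """One Euler step of an SEIS model: susceptible -> exposed -> infectious
    -> susceptible again (no lasting immunity)."""
    exposed_flow = beta * S * I        # new exposures at contact rate beta
    onset_flow = sigma * E             # exposed become infectious
    recovery_flow = gamma * I          # infectious return to susceptible
    S += dt * (recovery_flow - exposed_flow)
    E += dt * (exposed_flow - onset_flow)
    I += dt * (onset_flow - recovery_flow)
    return S, E, I

def epidemic_peak(density, dt=0.01, steps=20000):
    """Peak infectious fraction when the contact rate grows with crowd
    density (an illustrative stand-in for the density/contact-time coupling)."""
    beta = 0.3 * density
    S, E, I = 0.99, 0.0, 0.01          # fractions of the crowd
    peak = I
    for _ in range(steps):
        S, E, I = seis_step(S, E, I, beta, sigma=0.5, gamma=0.1, dt=dt)
        peak = max(peak, I)
    return peak

# denser crowds -> more contacts -> larger outbreaks
peak_dense, peak_sparse = epidemic_peak(2.0), epidemic_peak(0.5)
```

Below the threshold contact rate (beta less than gamma) the initial infection simply dies out, mirroring the observation that density changes govern the rate of spread.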
Finally, the social force model is compared to a variable-speed, rational-behaviour pedestrian model. We derive a hierarchy of the heuristics-based model from microscopic to macroscopic scales and numerically investigate these models in different density scenarios. Various numerical test cases are considered, including uni- and bi-directional flows and scenarios with and without obstacles. We observe that in low-density scenarios the collision avoidance forces arising from the behavioural heuristics give valid results, whereas in high-density scenarios repulsive force terms are essential.
The numerical simulations of all the models are carried out using a meshfree particle method based on least-squares approximations. The meshfree numerical framework provides an efficient and elegant way to handle complex geometric situations involving boundaries and stationary or moving obstacles.
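The least-squares building block of such meshfree particle methods can be sketched as follows: derivatives of a field at a point are recovered from scattered neighbor values by a weighted least-squares fit of a local polynomial. This minimal moving-least-squares version, with a Gaussian weight and a hypothetical smoothing length `h`, is illustrative and not the solver used in the thesis.

```python
import numpy as np

def mls_gradient(points, values, center, h=0.5):
    """Estimate the gradient of a scalar field at `center` from scattered
    neighbor values by a weighted least-squares fit of the linear polynomial
    p(x) = a0 + a1*dx + a2*dy; Gaussian weights localize the fit."""
    d = points - center
    w = np.exp(-(np.linalg.norm(d, axis=1) / h) ** 2)     # Gaussian weights
    A = np.column_stack([np.ones(len(points)), d[:, 0], d[:, 1]])
    sw = np.sqrt(w)                                       # weighted LS via sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], values * sw, rcond=None)
    return coef[1:]                                       # (df/dx, df/dy)

# scattered point cloud; exact gradient of f(x, y) = 2x + 3y is (2, 3)
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(30, 2))
vals = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
grad = mls_gradient(pts, vals, center=np.array([0.0, 0.0]))
```

Because the fit reproduces linear fields exactly, the recovered gradient matches the true one on this test; the same local fit, applied at every particle, supplies the spatial derivatives needed by the flow models without any mesh.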
This dissertation provides insights into the influences of individual and contextual factors on Technical and Vocational Education and Training (TVET) teachers' learning and professional development in Ethiopia. Specifically, this research focuses on identifying and determining the influences of teachers' self-perception as learners and professionals, and investigates the impact of the context, process and content of their learning and experiences on their professional development. Knowledge of these factors and their impacts helps in improving the learning and professional development of TVET teachers and their professionalization. This research seeks to answer the following five research questions: (1) How do TVET teachers perceive themselves as active learners and as professionals, and what are the implications of their perceptions for their learning and development? (2) How do TVET teachers engage themselves in learning and professional development activities? (3) What contextual factors facilitate or hinder TVET teachers' learning and professional development? (4) Which competencies are critical for TVET teachers' learning and professional development? (5) What actions need to be considered to enhance and sustain TVET teachers' learning and professional development in their context? The research results are significant not only for TVET teachers, but also for school leaders, TVET teacher training institutions, education experts and policy makers, researchers and other stakeholders in the TVET sector. The theoretical perspective adopted in this research is based on the systemic constructivist approach to professional development. An integrated approach to professional development requires that teachers' learning and development activities be treated as adult education based on the principles of constructivism.
Professional development is considered a context-specific and long-term process in which teachers are trusted, respected and empowered as professionals. Teachers' development activities are seen largely as collaborative activities, reflecting the social nature of learning. Schools that facilitate the learning and development of teachers exhibit the characteristics of a learning-organisation culture in which professional collaboration, collegiality and shared leadership are practiced. This research has also drawn relevant points of view from studies and reports on vocational education and TVET teacher education programs and practices at international, continental and national levels. The research objectives and the types of research questions in this study implied the use of a qualitative, inductive research approach as the research strategy. Primary data were collected from TVET teachers in four schools using one-on-one qualitative in-depth interviews. These data were analyzed using qualitative content analysis based on the inductive category development procedure, with ATLAS.ti software supporting the coding and categorization process. The research findings showed that most of the TVET teachers perceive themselves neither as professionals nor as active learners. These perceptions were found to be one of the major barriers to their learning and development. Professional collaboration in the schools is minimal, and teaching is seen as an isolated individual activity, a secluded task for the teacher. Self-directed learning initiatives and individual learning projects are not strongly evident. The predominantly teacher-centered approach used in TVET teacher education and professional development programs places emphasis mainly on the development of technical competences and has limited the development of a range of competences essential to teachers' professional development.
Moreover, factors such as the TVET school culture, society's perception of the teaching profession, economic conditions, and weak links with industries and business sectors are among the major contextual factors that hinder TVET teachers' learning and professional development. A number of recommendations are put forward to improve the professional development of TVET teachers. These include a change in the TVET school culture, a paradigm shift in the approach to and practice of TVET teacher education, and the development of educational policies that support the professionalization of TVET teachers. Areas for further theoretical research and empirical enquiry are also suggested to support the learning and professional development of TVET teachers in Ethiopia.
The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for deformation analysis corresponding to geoscientifically relevant (regular) boundaries such as the sphere, the ellipsoid, or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated with Green's integral representation of the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated with spherical boundaries. Again, a tree algorithm using a numerical integration rule on bandlimited functions is established to reduce the computational effort. As a numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.
In this work, the (Ti, Al, Si)N system was investigated. The main objective was to study the possibility of obtaining nanocomposite coating structures by depositing multilayer films of TiN and AlSiN, in order to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Particular attention was given to the effect of annealing at 600 °C on microstructural changes in the coatings. Surface hardness, elastic modulus, and the diffusion and composition of the multilayers served as the criteria for comparing the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin-film coatings from different materials was constructed. The chamber consists of two parts: a pre-vacuum chamber for loading the workpiece, and the main vacuum chamber where the sputter deposition of the thin-film coatings takes place. The workpiece moves on a carriage travelling on a rail between the two chambers to the positions of the magnetrons, driven by stepper motors. The two chambers are separated by a self-constructed rectangular gate operated manually from outside the chamber. The chamber was sealed for vacuum use with glue and screws. Accordingly, different types of glue were tested, not only for the ability to form a uniform thin layer in the gap between the aluminum plates to seal the chamber, but also for low outgassing rates suitable for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface.
To this end, the throughput outgassing-rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant reduced its outgassing rate by a factor of 6 compared with a non-machined surface of the same material. Removing the porous oxide layer on top of the aluminum surface by pickling with HNO3 acid, and protecting it with another passive, non-porous oxide layer produced by anodizing, protects the surface for a longer time and minimizes the outgassing rate even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O), the rest being N2, H2 and CO, so a liquid-nitrogen water-vapour trap can enhance the pump-down process. As a result, it was possible to construct a chamber that can be pumped down with a turbomolecular pump (450 L/s) to the range of 1x10-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m2. This is a good base pressure for the sputter deposition of hard thin-film coatings. Multilayer thin-film coatings were deposited to demonstrate that nanostructured thin films within the (Ti, Al, Si)N system can be prepared by reactive magnetron sputtering of multiple thin layers of TiN and AlSiN. SNMS spectrometry of the test samples shows that complete diffusion between the different deposited thin-film layers takes place in each sample, even at low substrate deposition temperature.
The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gives the deposited atoms high mobility. The interplay between the high mobility of the deposited atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the deposited thin layers. The XRD patterns for this system show that the structure of the formed mixture consists of two phases: one identified as bulk TiN and another, unidentified amorphous phase, which may be SiNx, AlN, or a Ti-Al-Si-N combination. As a result, we were able to deposit nanocomposite coatings by depositing multilayers of TiN and AlSiN thin films using the constructed vacuum chamber.
Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics
(2020)
Abstract
Nowadays, people and systems created by people are generating an unprecedented amount of data. This data has brought us data-driven services with a variety of applications that affect people's behavior. One of these applications is the emergent online social networks as a method for communicating with each other, getting and sharing information, looking for jobs, and many other things. However, the tremendous growth of these online social networks has also led to many new challenges that need to be addressed. In this context, the goal of this thesis is to better understand the dynamics between the members of online social networks from two perspectives. The first perspective is to better understand the process and the motives underlying link formation in online social networks. We utilize external information to predict whether two members of an online social network are friends or not. Also, we contribute a framework for assessing the strength of friendship ties. The second perspective is to better understand the decay dynamics of online social networks resulting from the inactivity of their members. Hence, we contribute a model, methods, and frameworks for understanding the decay mechanics among the members, for predicting members' inactivity, and for understanding and analyzing inactivity cascades occurring during the decay.
The results of this thesis are: (1) the link formation process is at least partly driven by interactions among members that take place outside the social network itself; (2) external interactions might help reduce the noise in social networks and rank the strength of the ties in these networks; (3) inactivity dynamics can be modeled, predicted, and controlled using the models contributed in this thesis, which are based on network measures. The contributions and the results of this thesis can be beneficial in many respects. For example, improving the quality of a social network by introducing new meaningful links and removing noisy ones helps to improve the quality of the services provided by the social network, which, e.g., enables better friend recommendations and helps to eliminate fake accounts. Moreover, understanding the decay processes involved in the interaction among the members of a social network can help to prolong the engagement of these members. This is useful in designing more resilient social networks and can assist in finding influential members whose inactivity may trigger an inactivity cascade resulting in a potential decay of a network.
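One simple way to picture an inactivity cascade is a threshold (k-core style) decay rule: a member becomes inactive once too few of their neighbors remain active, which can in turn push further members below the threshold. The rule, the threshold, and the toy network below are illustrative assumptions, not the model contributed in the thesis.

```python
def inactivity_cascade(adjacency, seed_inactive, k=2):
    """Propagate inactivity: a member becomes inactive once fewer than k
    neighbors remain active. adjacency: dict node -> set of neighbors.
    Returns the final set of inactive members."""
    inactive = set(seed_inactive)
    changed = True
    while changed:                      # iterate until no member flips
        changed = False
        for node in adjacency:
            if node in inactive:
                continue
            active_neighbors = sum(1 for nb in adjacency[node]
                                   if nb not in inactive)
            if active_neighbors < k:
                inactive.add(node)
                changed = True
    return inactive

# a small network: the departure of well-connected member 3 collapses it,
# while the departure of peripheral member 5 only takes member 4 along
net = {
    1: {2, 3},
    2: {1, 3},
    3: {1, 2, 4},
    4: {3, 5},
    5: {4},
}
collapsed = inactivity_cascade(net, seed_inactive={3})
contained = inactivity_cascade(net, seed_inactive={5})
```

The contrast between the two runs illustrates the point about influential members: the same rule produces a full collapse or a contained cascade depending only on who leaves first.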
Typically, software engineers implement their software according to the design of the software structure. Relations between classes and interfaces, such as method-call relations and inheritance relations, are essential parts of a software structure. Accordingly, analyzing several types of relations benefits the static analysis of the software structure. The tasks of this analysis include, but are not limited to: understanding (legacy) software, checking guidelines, improving product lines, finding structure, and re-engineering existing software. Graphs with multi-type edges are a possible representation for these relations, with the relations as edges and the classes and interfaces of the software as nodes. This multi-type edge graph can then be mapped to visualizations. The visualizations, however, must cope with the multiplicity of relation types and with scalability, while still enabling software engineers to recognize visual patterns.
To advance the use of visualizations for analyzing the static structure of software systems, I tracked different development phases of the interactive multi-matrix visualization (IMMV) and conclude with an extended user study. In this study, the visual structures found with IMMV, compared to PNLV, were determined and classified systematically into four categories: high degree, within-package edges, cross-package edges, and no edges. Beyond the structures found with these handy tools, other structures that are interesting to software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended approach for graph layout was presented that improves the quality of the decomposition and drawing of directed graphs according to their topology, based on rigorous definitions. The extension involves describing and analyzing the decomposition and drawing algorithms in detail, giving polynomial time and space complexity. Finally, I handled visualizing graphs with multi-type edges using small multiples, where each tile is dedicated to one edge type and utilizes the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software. I applied this approach to four software systems to show its usefulness.
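Decomposing a directed graph by its topology, as such a layout approach requires, typically starts from its strongly connected components: components of size greater than one are exactly the non-trivial cycles, and contracting them leaves a DAG. The following sketch uses Kosaraju's standard algorithm on a toy call graph; the thesis's own decomposition and drawing algorithms may differ.

```python
def sccs(graph):
    """Kosaraju's algorithm: return the strongly connected components of a
    directed graph given as dict node -> list of successors (all nodes keys)."""
    # first pass: record DFS finishing order on the original graph
    order, visited = [], set()
    for root in graph:
        if root in visited:
            continue
        visited.add(root)
        stack = [(root, iter(graph[root]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()
    # second pass: collect components on the reversed graph,
    # processing nodes in reverse finishing order
    rev = {v: [] for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)
    comps, assigned = [], set()
    for root in reversed(order):
        if root in assigned:
            continue
        comp, stack = set(), [root]
        assigned.add(root)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in rev[v]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

# a call graph with one non-trivial cycle {A, B, C} and one acyclic node D
g = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
components = sccs(g)
```

A small-multiples tile can then render the contracted component DAG in layers while highlighting the multi-node components as the non-trivial cycles.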
In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm value) approach and the reduced-form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this process to a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. On the one hand, it is then possible to obtain closed-form solutions for the pricing problems of credit derivatives; on the other hand, the optimal capital structure of a firm can be analysed by obtaining closed-form solutions for the firm's corporate securities, such as the equity value, debt value and total firm value, see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed-form solutions for corporate securities. We therefore chose a jump-diffusion process with double-exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced-form models, we have assumed that default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than that of Schönbucher (2005).
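The firm-value dynamics with double-exponentially distributed jump heights (a Kou-type jump-diffusion) and the resulting first-passage default time can be sketched by Monte Carlo simulation. The closed-form results of the thesis are not reproduced here; this is a simulation illustration only, and all parameter values are illustrative assumptions.

```python
import numpy as np

def kou_path(V0, mu, sigma, lam, p, eta1, eta2, T, n, rng):
    """One path of a jump-diffusion firm-value process with double-exponential
    jump sizes: p is the probability of an upward jump, eta1/eta2 the rates of
    the upward/downward exponential jump-size distributions, lam the jump
    intensity."""
    dt = T / n
    logV = np.empty(n + 1)
    logV[0] = np.log(V0)
    for i in range(n):
        jump = 0.0
        for _ in range(rng.poisson(lam * dt)):       # jumps arriving in this step
            if rng.random() < p:
                jump += rng.exponential(1.0 / eta1)  # upward jump in log-value
            else:
                jump -= rng.exponential(1.0 / eta2)  # downward jump in log-value
        diffusion = (mu - 0.5 * sigma ** 2) * dt + sigma * rng.normal(0.0, np.sqrt(dt))
        logV[i + 1] = logV[i] + diffusion + jump
    return np.exp(logV)

def default_probability(barrier, n_paths=500, **kw):
    """Monte Carlo estimate of the first-passage (default) probability:
    fraction of paths whose minimum falls to the barrier or below."""
    rng = np.random.default_rng(42)
    hits = sum(kou_path(rng=rng, **kw).min() <= barrier for _ in range(n_paths))
    return hits / n_paths

# illustrative parameters, not calibrated values from the thesis
params = dict(V0=100.0, mu=0.05, sigma=0.2, lam=1.0,
              p=0.4, eta1=10.0, eta2=5.0, T=1.0, n=250)
pd60 = default_probability(barrier=60.0, **params)
```

Raising the default barrier (or fattening the downward jump tail via a smaller eta2) increases the estimated default probability, which is the mechanism through which jumps affect the optimal capital structure.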
This thesis analyzes the load-bearing behavior of continuous steel-fiber-reinforced composite slabs. On the basis of experimental and computational investigations, two design models are developed. The experimental investigations on single-span and continuous steel-fiber-reinforced composite slabs provide insight into the load-bearing and deformation behavior of the slabs. Both open trapezoidal and re-entrant profiled steel sheets are used. Conventional bar reinforcement is dispensed with entirely; the hogging moment is carried by the steel-fiber concrete alone. In four test series comprising 18 tests in total, individual parameters such as different slab thicknesses, different profiled-sheet geometries and different steel-fiber-concrete mixes are investigated. For the calculation and design, the verification procedures customary in composite construction are adopted and modified. The load-bearing contributions of the steel-fiber concrete are implemented via a stress-block approach. Recalculation of the individual tests demonstrates the suitability of the procedures. For the individual verifications, design diagrams and design tables are derived in parameter studies, enabling the practicing engineer to carry out simple and safe designs. Based on the experimental results and the computational investigations, two possible design models are developed with which the load-bearing capacity of continuous steel-fiber-reinforced composite slabs can be verified; the verification can follow either the elastic-plastic or the plastic-plastic procedure.
In 2006, Jeffrey Achter proved that the distribution of degree-0 divisor class groups of function fields of fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields, and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently from what is predicted by the well-known conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize Malle's conjecture to a more abstract level and establish a theoretical backup for these statements.
Acidic zeolites such as H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene influences the adsorption capacity for thiophene and is more pronounced when high-alumina zeolites are used as adsorbents. This behaviour is also reflected in the results of thiophene adsorption on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites; the oligomerization in acidic zeolites appears to depend on the geometry of the pore system. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is expected, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+, Ni2+, Ce3+, La3+ and Y3+ ions into zeolite Na+-Y by aqueous ion exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+ ions, the presence of Cu+ ions at sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for Cu+-Y and Ce3+-Y that the supercages of the FAU zeolite allow access of bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite.
IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of pi-complexation between the C=C double bond of thiophene and the metal ions in the zeolite framework. A different mode of interaction of thiophene with Ce3+, La3+ and Y3+ metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via the lone electron pair of its sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show a large difference in the thiophene adsorption capacities of the pi-complexation adsorbents (such as Cu+-Y and Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very small for the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly through a direct M3+-S bond (thiophene interacting with the metal ion via the lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents. Acidic zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion and cannot be desorbed by heating under an inert gas flow alone. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx.
The exothermic peaks appearing at different temperatures in the heat flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability of the existing catalytic hydrodesulphurization technique to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene. In the present study, Cu+-Y and Ce3+-Y were found to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption on zeolites could be a valuable post-purification step after the catalytic hydrodesulphurization unit.
Highly Automated Driving (HAD) vehicles are complex, safety-critical systems. They are deployed in an open context, i.e., an intricate environment which undergoes continual change. The complexity of these systems and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behaviour. The safety-critical nature of HAD vehicles requires modelling the root causes of unsafe behaviour and their mitigation in order to argue a sufficient reduction of residual risk.
Standardization activities such as ISO 21448 provide guidelines on the Safety Of The Intended Functionality (SOTIF) and focus on the analysis of performance limitations under the influence of triggering conditions that can lead to hazardous behaviour. SOTIF references traditional safety analysis methods, e.g., Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA). These methods rest on certain assumptions, e.g., single-point failures in FMEA and independence of basic events in FTA. Moreover, these analyses are generally based on expert knowledge; data-based models or hybrid approaches (expert and data) are seldom practised. The resulting safety model is fixed, i.e., it is generally treated as a one-time artefact. The open-context environment, however, may contain triggering conditions that are not evident to the expert; it also evolves over time, and new phenomena may emerge.
This thesis explores the applicability of traditional safety analysis techniques to provide safety models for HAD vehicles operating in the open context, in light of the modelling assumptions these techniques make. Moreover, it explores how uncertainties can be incorporated into safety analysis models. An explicit distinction is made between the inherent uncertainty of a probabilistic event (aleatory) and uncertainty due to lack of knowledge (epistemic) in order to formalize models for SOTIF analysis. A further distinction covers conditions of complete ignorance, termed ontological uncertainty. This distinction is important because, for HAD vehicles operating in the open context, ontological uncertainty can never be completely disregarded.
This thesis proposes a novel SOTIF framework to model, estimate and discover triggering conditions relevant to performance limitations. The framework can model uncertainties while also providing a hybrid approach, i.e., supporting the inclusion of expert knowledge as well as data-driven engineering processes. Two representative algorithms are provided to support the framework, utilising Bayesian Networks (BN) and p-value hypothesis testing. The framework is applied to a real-world case study in which a LiDAR-based perception system serves as the vehicle detection system.
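As an illustration of the p-value component, the following minimal sketch (all names and numbers are hypothetical, not taken from the case study) flags a candidate triggering condition by comparing detection-failure rates with and without the condition present:

```python
import math

def two_proportion_z_test(fail_a, n_a, fail_b, n_b):
    """Two-sided two-proportion z-test.

    Returns the p-value for the null hypothesis that the failure
    rates in group A (condition present) and group B (condition
    absent) are equal. A small p-value flags the condition as a
    candidate triggering condition for the performance limitation.
    """
    p_a, p_b = fail_a / n_a, fail_b / n_b
    p_pool = (fail_a + fail_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal survival function.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical detection logs: misses with and without heavy rain.
p = two_proportion_z_test(fail_a=40, n_a=200, fail_b=10, n_b=200)
print(p < 0.05)  # condition flagged as relevant at the 5% level
```

This is only the discovery step; in the framework sketched above, a flagged condition would then be incorporated into the Bayesian Network model.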
Dealing with information in modern times requires users to cope with hundreds of thousands of documents, such as articles, emails, Web pages, or news feeds.
Among all information sources, the World Wide Web presents information seekers with the greatest challenges.
It offers more text in natural language than anyone is capable of reading.
The key idea of this research is to provide users with adaptable filtering techniques that support them in filtering out the specific information items they need.
Its realization focuses on developing an Information Extraction system,
which adapts to a domain of concern by interpreting the formalized knowledge it contains.
Utilizing the Resource Description Framework (RDF), which is the Semantic Web's formal language for exchanging information,
allows extending information extractors to incorporate the given domain knowledge.
As a result, formal information items from the RDF source can be recognized in the text.
The application of RDF also allows further operations on recognized information items, such as disambiguating them and rating their relevance.
Switching between different RDF sources allows changing the application scope of the Information Extraction system from one domain of concern to another.
An RDF-based Information Extraction system can be triggered to extract specific kinds of information entities by providing it with formal RDF queries in terms of the SPARQL query language.
Representing extracted information in RDF extends the coverage of the Semantic Web's information degree and provides a formal view on a text from the perspective of the RDF source.
In detail, this work presents the extension of existing Information Extraction approaches by incorporating the graph-based nature of RDF.
Hereby, the pre-processing of RDF sources allows extracting statistical information models dedicated to support specific information extractors.
These information extractors refine standard extraction tasks, such as the Named Entity Recognition, by using the information provided by the pre-processed models.
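As a toy illustration of this idea (not the thesis's actual implementation, which operates on full RDF graphs and SPARQL queries), a handful of RDF-like triples can drive a simple label-based entity recognizer:

```python
# Minimal sketch of RDF-driven entity recognition (illustrative only):
# a tiny set of (subject, predicate, object) triples acts as the domain
# knowledge, and the labels of typed resources form a gazetteer. The
# naive substring matching stands in for a real Named Entity Recognizer.

TRIPLES = [  # hypothetical domain knowledge
    ("ex:Kaiserslautern", "rdf:type", "ex:City"),
    ("ex:Kaiserslautern", "rdfs:label", "Kaiserslautern"),
    ("ex:RDF", "rdf:type", "ex:Technology"),
    ("ex:RDF", "rdfs:label", "RDF"),
]

def build_gazetteer(triples):
    """Map each rdfs:label to the type of its resource."""
    types = {s: o for s, p, o in triples if p == "rdf:type"}
    return {o: types[s] for s, p, o in triples
            if p == "rdfs:label" and s in types}

def recognize(text, gazetteer):
    """Return (surface form, type) pairs found in the text."""
    return [(label, t) for label, t in gazetteer.items() if label in text]

gaz = build_gazetteer(TRIPLES)
print(recognize("The RDF store runs in Kaiserslautern.", gaz))
```

Swapping in a different triple set changes the domain of concern without changing the extractor, which is the adaptability the approach described above aims for.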
The post-processing of extracted information items enables representing the results in RDF format or as lists, which can then be ranked or filtered by relevance.
Post-processing also comprises the enrichment of originating natural language text sources with extracted information items by using annotations in RDFa format.
The results of this research extend the state-of-the-art of the Semantic Web.
This work contributes approaches for computing customizable and adaptable RDF views on the natural language content of Web pages.
Finally, due to the formal nature of RDF, machines can interpret these views allowing developers to process the contained information in a variety of applications.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gains, safety-critical systems (SCSs) still require solutions that enable this transition while guaranteeing safety, requiring no source-code modifications, and substantially reducing re-development and re-certification costs, especially for legacy applications, which are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contention when deadline-constrained tasks of an independent, partitioned task set execute on a homogeneous multi-core processor with dynamic, time-triggered shared-memory-bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contention in the memory sub-system. Further, there is a circular dependency not only between the WCET and the CPU schedules of the other cores, but also between the WCET and the memory bandwidth assigned to the cores over time. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the bandwidth allocated over time. First, we present a workload schedulability test for a known even memory-bandwidth assignment to the active cores over time, where the active cores are those with a non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real, certified, safety-critical avionics application, how our method can preserve an existing application's single-core CPU schedule under contention on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework for WCET analysis under dynamic memory bandwidth partitioning when the changes in the memory-bandwidth-to-core assignment are time-triggered and known. It includes a stall-maximization algorithm whose complexity is similar to that of a concave optimization problem and which efficiently implements the WCET analysis. Last, we demonstrate, using an Integrated Modular Avionics scenario, that dynamic memory assignments and WCET analysis with our method significantly improve schedulability compared to the state-of-the-art.
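A deliberately simplified sketch of the underlying idea follows; the parameters are hypothetical and the bound is much cruder than the dissertation's actual analysis, which handles time-triggered bandwidth changes and maximizes stalls exactly:

```python
def wcet_bound(compute_time, mem_requests, request_service_time, budget_fraction):
    """Coarse upper bound on execution time under a periodic memory server.

    compute_time         -- worst-case pure CPU time (no memory stalls)
    mem_requests         -- worst-case number of memory requests
    request_service_time -- worst-case time per request at full bandwidth
    budget_fraction      -- share of memory bandwidth granted to this core (0..1]

    With only a fraction of the bandwidth, each request may take up to
    1/budget_fraction times longer in the worst case.
    """
    stall = mem_requests * request_service_time / budget_fraction
    return compute_time + stall

def schedulable(tasks, budget_fraction):
    """Check each implicit-deadline task's WCET bound against its period
    (a simple necessary-style check, not the thesis's full test)."""
    return all(
        wcet_bound(c, m, s, budget_fraction) <= period
        for c, m, s, period in tasks
    )

# (compute_time, mem_requests, request_service_time, period), hypothetical values
tasks = [(2.0, 1000, 0.001, 5.0), (1.0, 500, 0.001, 4.0)]
print(schedulable(tasks, budget_fraction=0.5))  # True: bounds 4.0 and 2.0
```

The sketch shows the core trade-off discussed above: shrinking a core's bandwidth budget inflates its stall term and can break schedulability, which is why bandwidth assignments must be tailored to the workload over time.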
Abstract
The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and no general conclusion is available. Hence we confine ourselves here to some special types of graphs, such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is incorporated. Chapter 2 introduces the basic definitions and the notation used in this work.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with existing research results, and an algorithm is introduced that makes it possible to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for determining cliques from their defining sets is presented. Several examples are included.
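To illustrate the underlying notion, the following sketch (illustrative only, not one of the thesis's algorithms) completes a proper coloring from a given partial coloring, which is the role a defining set plays; a true defining set additionally forces the extension to be unique:

```python
def complete_coloring(adj, k, defining_set):
    """Greedily extend a partial k-coloring (the defining set) to all vertices.

    adj          -- adjacency lists {vertex: set(neighbours)}
    k            -- number of colours, 0..k-1
    defining_set -- {vertex: colour} fixed in advance

    Returns a full proper colouring, or None if the greedy order gets stuck.
    (A defining set forces a *unique* proper extension; this greedy sketch
    only finds *a* proper extension.)
    """
    colouring = dict(defining_set)
    for v in adj:
        if v in colouring:
            continue
        used = {colouring[u] for u in adj[v] if u in colouring}
        free = [c for c in range(k) if c not in used]
        if not free:
            return None
        colouring[v] = free[0]
    return colouring

# 4-cycle a-b-c-d: fixing a=0 and b=1 forces c=0 and d=1 with two colours.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(complete_coloring(adj, 2, {"a": 0, "b": 1}))  # {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

On the 4-cycle with two colours, the partial coloring {a: 0, b: 1} leaves only one proper completion, so it acts as a defining set in the sense introduced above.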
Hydrogels are covalently or ionically cross-linked, hydrophilic three-dimensional polymer networks; they occur in our bodies as biological gels such as the vitreous humour that fills the interior of the eye. Poly(N-isopropylacrylamide) (poly(NIPAAm)) hydrogels are attracting growing interest in biomedical applications because, among other reasons, they exhibit a well-defined lower critical solution temperature (LCST) in water of around 31–34 °C, which is close to body temperature. This makes them of great interest for drug delivery, cell encapsulation, and tissue engineering applications. In this work, poly(NIPAAm) hydrogel was synthesized by free radical polymerization. The hydrogel properties and the dimensional changes accompanying the volume phase transition of the thermosensitive poly(NIPAAm) hydrogel were investigated in terms of Raman spectra, swelling ratio, and hydration. The thermal swelling/deswelling changes occurring at different equilibrium temperatures and in different solutions (phenol, ethanol, propanol, and sodium chloride) were investigated on the basis of the Raman spectra. In addition, Raman spectroscopy was employed to evaluate the diffusion of bovine serum albumin (BSA) and phenol through the poly(NIPAAm) network. The mutual diffusion coefficient \(D_{mut}\) for the hydrogel/solvent system was successfully determined using Raman spectroscopy at different solute concentrations. Moreover, the mechanical properties of the hydrogel, investigated by uniaxial compression tests, were used to characterize the hydrogel and to determine the collective diffusion coefficient through it. The solute release coupled with the shrinking of the hydrogel particles was modelled with a two-dimensional diffusion model with moving boundary conditions. The influence of a variable diffusion coefficient is observed and leads to a better description of the kinetic curve in the case of significant deformation around the LCST. Good agreement between experimental and calculated data was obtained.
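The determination of a diffusion coefficient from release measurements can be sketched as follows, assuming the classical early-time Fickian solution for a thin slab and purely synthetic data (not the measured hydrogel values):

```python
import math

def fit_diffusion_coefficient(times, fractions, thickness):
    """Estimate D from early-time Fickian release of a thin slab.

    For a slab of thickness l, the early-time solution is
        M_t / M_inf = (4 / l) * sqrt(D * t / pi),
    so a least-squares fit of the fractional release against sqrt(t)
    yields the slope k, and D = (k * l / 4)**2 * pi.
    """
    xs = [math.sqrt(t) for t in times]
    # Slope of a line through the origin: k = sum(x*y) / sum(x*x).
    k = sum(x * y for x, y in zip(xs, fractions)) / sum(x * x for x in xs)
    return (k * thickness / 4.0) ** 2 * math.pi

# Synthetic release data generated with D = 1e-10 m^2/s and l = 1 mm.
D_true, l = 1e-10, 1e-3
times = [60.0, 120.0, 300.0, 600.0]
fractions = [(4 / l) * math.sqrt(D_true * t / math.pi) for t in times]
print(abs(fit_diffusion_coefficient(times, fractions, l) - D_true) < 1e-14)
```

The same square-root-of-time fitting idea underlies extracting diffusion coefficients from concentration profiles measured spectroscopically, although the work summarized above additionally accounts for shrinking geometry and a concentration-dependent coefficient.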
An Optical Character Recognition (OCR) system plays an important role in the digitization of data acquired as images from a variety of sources. Although the area is well explored for Latin-script languages, some languages based on the cursive Arabic script have not yet been explored, mainly because of the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language that needs considerable exploration towards OCR. To develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world and is an active medium for both oral and written communication. It is associated with a rich literary heritage and a huge written collection. These written materials present content of simple to complex nature and layouts from hand-scribed to printed text. The Pashto language presents two main types of complexity: (i) generic complexities of cursive script and (ii) complexities specific to Pashto. The generic complexities are cursiveness, context dependency, breaker-character anomalies, and space anomalies. The Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR has not led to an end-to-end solution for these complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified the characters responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed content. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, connecting the outcomes of the conceptual- and data-level contributions. It comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of recurrent neural networks (RNNs). The proposed Pashto text recognition is based on the Long Short-Term Memory (LSTM) network and achieves a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
The impact of this thesis is not specific to the Pashto language alone; it is also beneficial for other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight into, and proposes solutions for, almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among some 70 languages that mainly use the Arabic script, is deeply analyzed. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how society may benefit from it. The main advantages are bringing historical and almost vanished documents to life and ensuring opportunities to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that it can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) suffice for high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of document digitization method (scanned, hyper-spectral imaging, and camera captured) used.
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while fewer than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal for documents authenticity.
However, the location of a stamp on the document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set based on intensity gradients, which is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings, text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels ranging from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is essential when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled datasets. First, this thesis presents a novel, generic method for the automatic ground-truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground-truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets in English and Russian shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. Evaluation results show that existing OCRs, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), which outperforms all existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. These include the automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Development of fermentation strategies for the material utilization of renewable raw materials
(2022)
Brewer's spent grain is an important representative of a renewable raw material, as it is a low-priced by-product of the brewing process that accrues in large quantities every year. In the present work, brewer's spent grain from seven different brewing recipes, both from in-house production and of industrial origin, was analysed and classified with respect to the underlying brewing processes. In addition, the spent grain was separated by pressing into two material streams: a liquid and a solid fraction. Bioprocesses were established for both fractions in order to convert, on the one hand, the liquid substrate (spent-grain press juice) to lactic acid with a lactic acid bacterium (Lactobacillus delbrueckii subsp. lactis) and, on the other hand, the solid substrate (spent-grain residue) to ethanol and acetic acid with a lignocellulolytic, mixed-acid fermenting strain (Cellulomonas uda). Furthermore, a kinetic model was set up which could predict, among other things, the lactic acid formation and the cell growth of L. delbrueckii subsp. lactis for three press juices from different brewing recipes, i.e. with different nutrient compositions, in a simultaneous saccharification and fermentation. Moreover, the developed fermentation strategies for utilizing the press juice and the spent-grain residue, as well as the underlying process monitoring and control strategies, could be transferred to fermentations with the same organisms but with meadow cuttings (Wiesenschnitt), another renewable raw material, as the substrate.
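The kind of kinetic model described above can be sketched with Monod growth and Luedeking-Piret product formation; all parameter values below are illustrative, not the fitted values from the thesis:

```python
def simulate_fermentation(s0, x0, hours, dt=0.01,
                          mu_max=0.4, ks=2.0, yxs=0.2,
                          alpha=2.0, beta=0.05):
    """Euler integration of a simple Monod / Luedeking-Piret model.

    s0, x0  -- initial substrate and biomass (g/L)
    mu_max  -- maximum specific growth rate (1/h)
    ks      -- Monod half-saturation constant (g/L)
    yxs     -- biomass yield on substrate (g/g)
    alpha   -- growth-associated product coefficient (g/g)
    beta    -- non-growth-associated coefficient (g/(g*h))

    Returns (substrate, biomass, product) after `hours`.
    """
    s, x, p = s0, x0, 0.0
    for _ in range(int(hours / dt)):
        mu = mu_max * s / (ks + s)      # Monod growth kinetics
        dx = mu * x
        ds = -dx / yxs                  # substrate consumed for growth
        dp = alpha * dx + beta * x      # Luedeking-Piret product formation
        s = max(s + ds * dt, 0.0)
        x += dx * dt
        p += dp * dt
    return s, x, p

s, x, p = simulate_fermentation(s0=50.0, x0=0.1, hours=24)
print(round(s, 2), round(x, 2), round(p, 2))
```

A fitted version of such a model would add terms for the different nutrient compositions of the press juices and for the enzymatic sugar release during simultaneous saccharification and fermentation.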
The advent of heterogeneous many-core systems has increased the spectrum of achievable performance from multi-threaded programming. As processor components become more distributed, the cost of the synchronization and communication needed to access shared resources increases. Concurrent linearizable access to shared objects can be prohibitively expensive under high-contention workloads. Though there are various mechanisms (e.g., lock-free data structures) to circumvent the synchronization overhead of linearizable objects, they still incur performance overhead for many concurrent data types. Moreover, many applications do not require linearizable objects and apply ad-hoc techniques to eliminate synchronous atomic updates.
In this thesis, we propose the Global-Local View Model. This programming model exploits the heterogeneous access latencies in many-core systems. In this model, each thread maintains two views on a shared object: a thread-local view and a global view. As the thread-local view is not shared, it can be updated without incurring synchronization costs. The local updates become visible to other threads only after the thread-local view is merged with the global view. This scheme improves performance at the expense of linearizability.
Besides the weak operations on the local view, the model also allows strong operations on the global view. Combining operations on the global and the local views, we can build data types with customizable consistency semantics on the spectrum between sequential and purely mergeable data types. The model thus provides a framework that captures the semantics of Multi-View Data Types. We discuss a formal operational semantics of the model and introduce a method to verify the correctness of the implementations of several multi-view data types.
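The local/global split can be illustrated with a minimal counter in Python (the thesis's implementation is in Haskell; this single-object sketch is illustrative only, not the thesis's API):

```python
import threading

class MergeableCounter:
    """Sketch of the Global-Local View idea for a grow-only counter.

    Each thread increments a thread-local view without synchronization
    (a weak operation); `merge` folds the local delta into the shared
    global view under a lock (a strong operation). Reads of the global
    view may lag behind local updates, trading linearizability for
    cheaper updates.
    """

    def __init__(self):
        self._global = 0
        self._lock = threading.Lock()
        self._local = threading.local()

    def increment(self, n=1):          # weak: touches only the local view
        self._local.delta = getattr(self._local, "delta", 0) + n

    def merge(self):                   # strong: publishes local updates
        delta = getattr(self._local, "delta", 0)
        self._local.delta = 0
        with self._lock:
            self._global += delta

    def read_global(self):             # strong: may miss unmerged updates
        with self._lock:
            return self._global

c = MergeableCounter()
c.increment(3)
print(c.read_global())  # 0: local updates are not yet visible
c.merge()
print(c.read_global())  # 3: visible after the merge
```

Because increments commute, merging never loses updates here; for richer data types the merge must be given data-type-specific semantics, which is exactly what the Multi-View Data Types framework described above captures.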
Frequently, applications require updating shared objects in an "all-or-nothing" manner; mechanisms that synchronize access to individual objects are then not sufficient. Software Transactional Memory (STM) is a mechanism that helps the programmer correctly synchronize access to multiple mutable shared data items by serializing transactional reads and writes. But under high contention, serializable transactions incur frequent aborts and limit parallelism, which can lead to severe performance degradation.
Mergeable Transactional Memory (MTM), proposed in this thesis, allows accessing multi-view data types within a transaction. Instead of aborting and re-executing the transaction, MTM merges its changes using the data-type-specific merge semantics. It thus provides consistency semantics that allow for more scalability even under contention. The evaluation of our prototype implementation in Haskell shows that mergeable transactions outperform serializable transactions even under low contention, while providing a structured and type-safe interface.
Towards A Non-tracking Web
(2016)
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and prompting countermeasures against tracking via blocking, legislation, and standards. While improving user privacy, these efforts do not consider the functionality that third-party tracking enables publishers to use: obtaining aggregate statistics about their users and increasing their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it necessitates the publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
This research work focuses on the generation of a high resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns, causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in practice in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. The reason is mainly the lack of a high resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite to describing and simulating flood water movement more accurately and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density, large-volume unstructured LiDAR data into fine resolution spatial databases is, however, a difficult and time-consuming task when it relies on human intervention alone. The application of robust algorithms for processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques pertaining to the processing, filtering and classification of LiDAR point data in order to achieve a higher degree of automation and accuracy in generating a high resolution urban surface model. It also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
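The basic gridding step behind DSM generation can be illustrated with a minimal sketch (an assumption for illustration, not the thesis's actual processing pipeline): unstructured (x, y, z) LiDAR returns are binned into a raster, keeping the highest return per cell.

```python
import numpy as np

def rasterize_dsm(points, cell=1.0):
    """Grid unstructured LiDAR points (x, y, z) into a DSM by keeping
    the highest return per cell; empty cells stay NaN."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    dsm = np.full(shape, np.nan)
    for (i, j), z in zip(idx, points[:, 2]):
        # keep the maximum elevation observed in each grid cell
        if np.isnan(dsm[i, j]) or z > dsm[i, j]:
            dsm[i, j] = z
    return dsm

# three hypothetical returns; two fall into the same 1 m cell
pts = np.array([[0.2, 0.3, 10.0], [0.8, 0.4, 12.5], [1.5, 0.2, 9.0]])
dsm = rasterize_dsm(pts, cell=1.0)
```

A production pipeline would additionally filter outliers and interpolate empty cells, which is where the robust algorithms discussed above come in.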
While the accurate characterization of various surface types contributes to better modelling of rainfall runoff processes, the LiDAR-derived fine resolution DSM serves as input to 2D hydraulic models capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aimed at improving the accuracy of the hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the hydraulic capacity of urban drainage systems; and 3) predict and visualize surface flooding scenarios in order to take necessary flood protection measures.
The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust and easier to train than traditional methods such as word-list and rule-based models. They improve the output of recognition systems and make it more accessible to users for browsing and reading. These techniques are required especially for historical books, which might otherwise take years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods offering high performance and accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions, used for alignment purposes, is suggested. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is the use of LSTM networks for normalizing historical orthography; a size normalization alignment is implemented to equalize the lengths of strings before the training phase. Using LSTM networks as a language model to improve recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below.
Context-dependent confusion rules are a new technique to build an error model for Optical Character Recognition (OCR) corrections. The rules are extracted from the OCR confusions which appear in the recognition outputs and are translated into edit operations, e.g., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted in the form of rules with respect to the context of the incorrect string to build an error model using WFSTs. The context-dependent rules help the language model to find the best candidate corrections; they avoid the calculations that occur in searching the language model and also enable the language model to correct incorrect words. The context-dependent error model is applied to the University of Washington (UWIII) dataset and the Nastaleeq script in the Urdu dataset. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%, performing better than the state-of-the-art single rule-based approach, which returns an error rate of 1.0%.
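The extraction of confusions from OCR/ground-truth pairs can be sketched as follows; here Python's difflib stands in for the Levenshtein alignment used in the thesis, and the rule format is illustrative only:

```python
import difflib

def confusion_rules(ocr, truth):
    """Extract edit operations (confusions) between an OCR line and its
    ground truth, as (operation, ocr_fragment, truth_fragment) rules."""
    rules = []
    sm = difflib.SequenceMatcher(None, ocr, truth)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        # 'replace', 'insert', and 'delete' opcodes are the confusions
        if tag != "equal":
            rules.append((tag, ocr[i1:i2], truth[j1:j2]))
    return rules

rules = confusion_rules("tbe quick brovvn fox", "the quick brown fox")
```

In the thesis the extracted operations additionally carry the surrounding context and are compiled into a WFST; this sketch only shows the raw confusion extraction.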
This thesis also describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment has many challenges: first, the transcriptions might have modifications and layout variations compared to the original book; second, the recognition of historical books contains misrecognition and segmentation errors, which make the alignment more difficult, in particular because line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows alignment with respect to the variants of the hyphenated words at the line breaks of the OCR documents. In this work, several approaches are implemented for the alignment, such as text-segment, page-wise, and book-wise approaches. The approaches are evaluated on a dataset of German calligraphic (Fraktur) script historical documents from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach returns an error rate of 2.33% without using a hyphenation model and an error rate of 2.0%
using a hyphenation model. Dehyphenation methods are presented to remove the hyphens from the transcription; they provide the transcription in a readable and reflowable format for alignment purposes. We treat the task as a classification problem and classify the hyphens in the given patterns as hyphens for line breaks, combined words, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Tree classifier performs best, returning an accuracy of 98% on the UWIII dataset and 97% on the Fraktur script.
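As a rough illustration of the three hyphen classes (line break, combined word, noise), the following dictionary-lookup heuristic stands in for the thesis's trained decision-tree classifier; the vocabulary, labels, and examples are hypothetical:

```python
def classify_hyphen(prefix, suffix, vocabulary):
    """Classify a line-break hyphen: if the joined form is a known word,
    treat the hyphen as a soft line break to be removed; if the explicitly
    hyphenated form is known, keep it as a compound; otherwise flag it."""
    joined = prefix + suffix
    hyphenated = prefix + "-" + suffix
    if joined.lower() in vocabulary:
        return "line-break"   # remove the hyphen and rejoin the word
    if hyphenated.lower() in vocabulary:
        return "compound"     # keep the hyphen
    return "noise"            # possibly an OCR artifact

vocab = {"transcription", "self-taught"}
label = classify_hyphen("transcrip", "tion", vocab)
```

The actual classifier learns this decision from character patterns rather than relying on a dictionary, which matters for noisy transcriptions where the joined form may itself be misrecognized.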
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German of the 14th - 16th centuries to modern forms in New High German, applied to the Luther bible. It performs better than the rule-based word-list approaches and provides a transcription for various purposes such as part-of-speech tagging and n-grams. Two new techniques are also presented for aligning the OCR results and normalizing the string lengths by adding Character-Epsilons or Appending-Epsilons; they allow deletion and insertion at the appropriate position in the string. In normalizing historical wordforms to modern wordforms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, the LSTM returns 88% and the combined rule-based method returns 76%. In normalizing modern wordforms to historical wordforms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unseen data.
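The Appending-Epsilons variant of size normalization can be sketched as follows; the epsilon symbol and the example word pair are assumptions for illustration, and the Character-Epsilons variant would instead insert epsilons at edit-aligned positions inside the string:

```python
EPS = "ε"

def append_epsilons(src, tgt):
    """Equalize string lengths by appending epsilon symbols to the
    shorter string, so a sequence model can be trained on a
    character-by-character mapping between the two."""
    n = max(len(src), len(tgt))
    return src.ljust(n, EPS), tgt.ljust(n, EPS)

# hypothetical historical/modern wordform pair
pair = append_epsilons("vnnd", "und")
```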
In this thesis, a deep investigation was also carried out into constructing high-performance language models for improving recognition systems. A new method to construct a language model using LSTM is designed to correct OCR results and applied to the UWIII and Urdu scripts. The LSTM approach outperforms the state-of-the-art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%; on the Nastaleeq script in the Urdu dataset, it reduces the error rate from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a single recognition system. Therefore, a new method for combining the results of OCR systems using WFSTs and LSTM is explored. It takes multiple OCR outputs and votes for the best output to improve the OCR results, performing better than the ISRI tool and Pairwise of Multiple Sequence alignment. The purpose is to provide correct transcriptions that can be used for digitizing books, linguistic purposes, n-grams, and part-of-speech tagging. The method consists of two alignment steps. First, two recognition systems are aligned using WFSTs; the transducers are designed to be flexible and compatible with the different symbols at line and page breaks in order to avoid segmentation and misrecognition errors. The LSTM model is then used to vote for the best candidate correction of the two systems and to improve the incorrect tokens produced during the first alignment. The approaches are evaluated on OCR outputs of the English UWIII and historical German Fraktur datasets, obtained from state-of-the-art OCR systems. The experiments show that the error rate of ISRI-Voting is 1.45%, the error rate of Pairwise of Multiple Sequence alignment is 1.32%, the error rate of the Line-to-Page alignment is 1.26%, and the LSTM approach performs best with an error rate of 0.40%.
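The two-step combination idea (align, then vote) can be sketched with difflib standing in for the WFST alignment and a toy scoring function standing in for the LSTM voter; all names and examples here are illustrative:

```python
import difflib

def align_and_vote(ocr_a, ocr_b, prefer=None):
    """Align two OCR hypotheses character-wise and vote: agreements are
    kept; for disagreements a scoring function (a stand-in for the LSTM
    voter) picks one side."""
    prefer = prefer or (lambda x, y: x)  # trivial voter: prefer system A
    out = []
    sm = difflib.SequenceMatcher(None, ocr_a, ocr_b)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            out.append(ocr_a[i1:i2])
        else:
            out.append(prefer(ocr_a[i1:i2], ocr_b[j1:j2]))
    return "".join(out)

# toy voter: avoid fragments containing digits (a common OCR confusion)
merged = align_and_vote("the quick brovvn fox", "the qu1ck brown fox",
                        prefer=lambda x, y: y if "1" not in y else x)
```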
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered to be the first step towards an accurate and
more effective use of the documents in digital libraries.
One of the biggest social issues in mature societies such as those of Europe and Japan is the aging population and declining birth rate. These societies face a serious problem with the retirement of expert workers such as doctors and engineers. Especially in sectors like medicine and industry, where it takes a long time to train experts, the retirement and injury of experts is a serious problem. Technology to support the training and assessment of skilled workers (such as doctors and manufacturing workers) is therefore strongly required by society. Although some solutions for this problem exist, most of them are video-based, which violates the privacy of the subjects. Furthermore, they are not easy to deploy due to the need for large training datasets.
This thesis provides a novel framework to recognize, analyze, and assess human
skills with minimum customization cost. The presented framework tackles this problem
in two different domains, industrial setup and medical operations of catheter-based
cardiovascular interventions (CBCVI).
In particular, the contributions of this thesis are four-fold. First, it proposes an easy-to-deploy framework for human activity recognition based on a zero-shot learning approach that learns basic actions and objects. The model recognizes unseen activities as combinations of basic actions, learned in a preliminary stage, and the objects involved. It is therefore completely configurable by the user and can be used to detect completely new activities.
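The combination idea can be sketched as follows; the scores, action/object names, and the multiplicative scoring rule are assumptions for illustration, not the thesis's actual model:

```python
def recognize_activity(action_scores, object_scores, activity_defs):
    """Score each (possibly unseen) activity as the product of its basic
    action score and its object score, and return the best match."""
    best, best_score = None, 0.0
    for activity, (action, obj) in activity_defs.items():
        score = action_scores.get(action, 0.0) * object_scores.get(obj, 0.0)
        if score > best_score:
            best, best_score = activity, score
    return best, best_score

# hypothetical detector outputs for the current video frame
action_scores = {"pick": 0.7, "screw": 0.2}
object_scores = {"bolt": 0.8, "panel": 0.1}
# "tighten bolt" was never seen as a whole activity, only its parts
activities = {"tighten bolt": ("screw", "bolt"),
              "pick panel": ("pick", "panel")}
result = recognize_activity(action_scores, object_scores, activities)
```

Because activities are defined only as (action, object) pairs, a user can register a completely new activity without retraining the underlying detectors, which is the zero-shot property described above.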
Second, a novel gaze-estimation model for an attention-driven object detection task is presented. The key features of the model are: (i) the use of deformable convolutional layers to better incorporate spatial dependencies of different shapes of objects and backgrounds, and (ii) the formulation of the gaze-estimation problem in two different ways, as a classification as well as a regression problem. We combine both formulations using a joint loss that incorporates both the cross-entropy and the mean-squared error in order to train our model. This improved the model's result from 6.8 using only the cross-entropy loss to 6.4 for the joint loss.
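One plausible form of such a joint loss is sketched below in NumPy; the soft-argmax decoding of a continuous gaze value from the class probabilities and the weighting scheme are assumptions, not necessarily the thesis's exact formulation:

```python
import numpy as np

def joint_loss(logits, target_bin, target_angle, bin_centers, alpha=1.0):
    """Joint gaze loss: cross-entropy over angle bins (classification)
    plus mean-squared error of the expected angle (regression),
    combined with weight alpha."""
    z = logits - logits.max()                 # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    ce = -np.log(probs[target_bin] + 1e-12)   # cross-entropy term
    predicted_angle = float(np.dot(probs, bin_centers))  # soft-argmax
    mse = (predicted_angle - target_angle) ** 2          # regression term
    return ce + alpha * mse

# three hypothetical angle bins centred at -30, 0, +30 degrees
bins = np.array([-30.0, 0.0, 30.0])
loss = joint_loss(np.array([0.1, 2.0, 0.1]), target_bin=1,
                  target_angle=0.0, bin_centers=bins)
```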
The third contribution of this thesis targets the quantification of the quality of actions using wearable sensors. To address the variety of scenarios, we have targeted two possibilities: a) both expert and novice data are available, and b) only expert data is available, a quite common case in safety-critical scenarios.
Both of the methods developed for these scenarios are deep learning based. In the first, we use autoencoders with a one-class SVM, and in the second we use Siamese networks. These methods allow us to encode the experts' expertise and to learn the differences between novice and expert workers, enabling quantification of the performance of a novice in comparison to an expert worker.
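The expert-only scenario can be illustrated with a deliberately simplified stand-in for the autoencoder/one-class-SVM pipeline: score a sample by its normalized distance to the expert data distribution, so that expert-like executions score low; the feature vectors below are hypothetical:

```python
import numpy as np

def expertise_score(expert_feats, sample):
    """Score a movement sample by its distance to the mean expert
    feature vector, normalized by the expert spread. Low scores mean
    expert-like execution. Like the one-class setup in the text, this
    needs only expert data for 'training'."""
    mu = expert_feats.mean(axis=0)
    sigma = expert_feats.std(axis=0) + 1e-8   # avoid division by zero
    return float(np.linalg.norm((sample - mu) / sigma))

# hypothetical 2-D motion features from three expert executions
experts = np.array([[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]])
score_expertlike = expertise_score(experts, np.array([1.0, 1.0]))
score_novice = expertise_score(experts, np.array([2.5, 0.2]))
```

In the thesis the feature space is learned (autoencoder reconstruction error or Siamese embedding distance) rather than hand-crafted, but the scoring principle is the same.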
The fourth contribution explicitly targets medical practitioners and provides a methodology for novel gaze-based temporal-spatial analysis of CBCVI data. The developed methodology allows continuous registration and analysis of gaze data for analyzing the visual X-ray image processing (XRIP) strategies of expert operators in live-case scenarios, and may assist in transferring experts' reading skills to novices.
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics, computer vision, truss design, geometric modeling, and many others. Many polynomial systems have solution sets, called algebraic varieties, consisting of several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose such an algebraic variety into its irreducible components. Witness point sets are the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\) called numerical irreducible decomposition. The \(W_i\) correspond to the pure i-dimensional components, and the \(W_{ij}\) represent the i-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
We modify this concept, partially using Gröbner bases, triangular sets, the local dimension, and the so-called zero sum relation. In the second chapter we present the corresponding algorithms and their implementations in SINGULAR. We give some examples and timings, which show that the modified algorithms are more efficient if the number of variables is not too large; for a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety based on the concept of the deflation of an algebraic variety. Building on the modified algorithm mentioned above, we present in the third chapter an algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains procedures implementing the algorithms of the last chapter: testing inclusion and equality of two algebraic varieties, computing the degree of a pure i-dimensional component, and computing the local dimension.
The Context and Its Importance: In safety and reliability analysis, the information generated by Minimal Cut Set (MCS) analysis is large.
The Top Level event (TLE) that is the root of the fault tree (FT) represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree (FT) qualitatively, and quantitatively when accompanied by quantitative measures.
The information reveals the bottlenecks in the fault tree design, leading to the identification of weaknesses in the system being examined.
Safety analysis (including MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans, causing injuries or even death, during system usage.
Minimal Cut Set (MCS) analysis is performed using computers and generates a great deal of information.
This phase is called MCS analysis I in this thesis.
The information is then analyzed by the analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
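The MCS analysis I phase (computing minimal cut sets from a fault tree) can be sketched as a small top-down expansion; the tree encoding and event names below are hypothetical:

```python
from itertools import product

def cut_sets(node, tree):
    """Expand a fault tree into its cut sets. A node is either a basic
    event (a name not in `tree`) or a gate ('AND'/'OR' with children)."""
    if node not in tree:
        return [frozenset([node])]            # basic event
    op, children = tree[node]
    child_sets = [cut_sets(c, tree) for c in children]
    if op == "OR":
        # any child's cut set causes the gate to fire
        return [cs for group in child_sets for cs in group]
    # AND: one cut set per child must hold simultaneously
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(top, tree):
    """Keep only cut sets with no proper subset among the others."""
    sets_ = cut_sets(top, tree)
    return {s for s in sets_ if not any(o < s for o in sets_)}

# TLE fires if both B1 and B2 fail, or if B3 fails alone
tree = {"TLE": ("OR", ["G1", "B3"]),
        "G1": ("AND", ["B1", "B2"])}
mcs = minimal_cut_sets("TLE", tree)
```

Real tools use far more scalable algorithms (e.g., BDD-based), but the output, a set of minimal combinations of basic events, is exactly the information that MCS analysis II then has to make sense of.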
The goal of my thesis was to develop interactive visualizations to support MCS analysis II of one fault tree (FT).
The Methodology: As safety visualization - in this thesis, Minimal Cut Set analysis II visualization - is an emerging field and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities,
I conducted multiple studies using different methods with different data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I, with respect to their degree of supporting MCS analysis II, and from the point of view of non-domain people using empirical strategies.
The Spiral tool supports analysts with different visions, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) requirements obtained from the triangulation and fills 37 out of 39 (95%) gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools (RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool) and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of supporting MCS analysis II was higher than that of the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all equal order) was less than a minute, at 99.5% accuracy, and the scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issues. The tool further provides a 3D model view of the system being analyzed; the ability to change the coloring of MCSs according to the color vision of the user; selection of a BE (i.e., multi-selection of MCSs), so that the BEs' NoO and quality can be observed; two interaction speeds for panning and zooming in the MCS, BE, and model views; and an MCS tab, a BE tab, and a physical tab for starting the analysis from the MCSs, the BEs, or the physical parts. It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the resulting Spiral visualization tool, likewise both objectively and subjectively, regarding its perceived quality and its degree of supporting MCS analysis II.
This research explores the development of web-based reference software for the characterisation of surface roughness for two-dimensional surface data. The reference software used for verification of surface characteristics makes the evaluation methods easier for clients. The algorithms used in this software are based on international ISO standards. Most software used in industrial measuring instruments may give variations in the calculated parameters due to numerical differences in the calculation; such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis process is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile to characterise surface roughness.
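Two of the ISO 4287 amplitude parameters computed in this final step, Ra (arithmetic mean deviation) and Rq (root mean square deviation), can be sketched as follows for an already filtered profile (a minimal illustration, not the reference software itself):

```python
import math

def roughness_params(profile):
    """Compute Ra and Rq of a filtered roughness profile, given as a
    list of height samples; deviations are taken from the mean line."""
    n = len(profile)
    mean = sum(profile) / n
    z = [v - mean for v in profile]          # deviations from mean line
    ra = sum(abs(v) for v in z) / n          # arithmetic mean deviation
    rq = math.sqrt(sum(v * v for v in z) / n)  # RMS deviation
    return ra, rq

# symmetric square-wave test profile: Ra and Rq both equal 1.0
ra, rq = roughness_params([1.0, -1.0, 1.0, -1.0])
```

Reference software computes these over the standard-defined sampling lengths after profile fitting and filtering; the closed-form result for simple test profiles is what makes such implementations verifiable.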
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg-1. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil with half-lives of 20 - 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality. 
This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
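Assuming first-order dissipation kinetics (a common model consistent with the reported half-lives, though the fitted model in the study may differ), the fraction of aflatoxin remaining after a given time follows directly from the half-life:

```python
import math

def remaining_fraction(t_days, half_life_days):
    """Fraction remaining after t days under first-order dissipation,
    with rate constant k = ln(2) / t_half."""
    k = math.log(2) / half_life_days
    return math.exp(-k * t_days)

# with the reported half-life range of 20-65 days:
fast = remaining_fraction(40, 20)   # two half-lives elapsed
slow = remaining_fraction(40, 65)
```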
Untersuchung der spektroskopischen und kinetischen Eigenschaften von Dihydroxysäure-Dehydratasen
(2019)
Iron-sulfur clusters are important cofactors involved in redox and non-redox catalysis. Enzymes of the lyase family such as aconitase, fumarase, and dihydroxy-acid dehydratase contain clusters coordinated by three cysteine ligands. The iron ion coordinated to a non-cysteinyl ligand acts as a Lewis acid and interacts with the substrate via the hydroxy group of the third carbon atom and the carboxy group. The enzyme investigated in this work is dihydroxy-acid dehydratase (DHAD), which is involved in the biosynthesis of the amino acids isoleucine, leucine, and valine. In this work, the kinetic and spectroscopic properties of DHAD from Streptococcus mutans, Streptococcus thermophilus, Saccharomyces cerevisiae, and Escherichia coli were investigated. For this purpose, their genes were cloned and the proteins expressed in E. coli cells. After protein purification, UV-Vis, ESR, and Mössbauer spectroscopy showed the presence of a [2Fe-2S]2+ cluster in S. mutans DHAD, a [4Fe-4S]2+ cluster in E. coli DHAD, and a mixture of [2Fe-2S]2+ and [4Fe-4S]2+ clusters in S. cerevisiae DHAD. In addition, the results of the oxygen stability tests and of the iron and acid-labile sulfide content determinations supported the spectroscopic analyses. Mössbauer spectroscopy additionally provided evidence for the presence of a non-cysteinyl-coordinated iron ion in the clusters of S. mutans and E. coli DHAD. Enzyme activity measurements with the dinitrophenylhydrazine assay and the coupled assays established here, using NADH-dependent ketoisovalerate-reducing dehydrogenases, were performed to determine the specific activities and the kinetic parameters of the DHADs. Interaction studies of S. mutans DHAD with the substrate, the product, and substrate- and product-like compounds by UV-Vis, ESR, Mössbauer, and FT-IR spectroscopy showed that only the (2R)-isomer of the substrate and 2-keto acids (KIV, Kbut) interacted with the cluster of S. mutans DHAD. The DHAD product (2-ketoisovalerate) presumably interacts with the cluster via its enol form. Interestingly, a strong interaction of the cluster with the β-mercapto group of 3-mercaptopropionate was observed; this interaction was independently verified by inhibition studies. Subsequently, gel filtration analysis showed the reversibility of the interactions. Overall, the present work has extended our knowledge of the biotechnologically important DHAD enzymes.
By using void formers in reinforced concrete slabs, concrete, steel, and consequently weight can be saved. The material savings reduce the primary energy demand as well as the greenhouse gas emissions during production. Voided slabs therefore represent a more resource-efficient construction method compared to conventional solid slabs. Owing to the significantly reduced self-weight and a comparatively small loss of stiffness, slabs with large spans can also be realized.
The individual load-bearing mechanisms of the slabs are in principle adversely affected by the void formers. In this dissertation, the load-bearing capacity of voided slabs with flattened, rotationally symmetric void formers was analyzed in detail. Based on experimental and theoretical investigations, design concepts were developed for the flexural capacity, the shear capacity, the shear transfer in the interface, and the local punching of the slab layer above the void formers. Taking these design concepts into account, voided slabs can be produced at the safety level required by the building authorities.
For the shear capacity of reinforced concrete slabs without shear reinforcement, no generally accepted mechanically based design concept is currently available. The influence of the individual load-bearing mechanisms on failure was analyzed experimentally. For this purpose, tests were carried out with a relocated compression zone as well as with deactivated crack-surface interlock and deactivated dowel action. The computational contribution of the individual mechanisms to the total capacity was visualized and verified by recalculating tests on voided and service-installation slabs using an existing mechanically based model. This contributes to a better understanding of shear capacity.
Investigations were carried out on the expression and switching of serotype proteins in Paramecium primaurelia, strain 156. To detect the different serotype expressions, immunofluorescence staining and a specific RT-PCR were established. With this method, the course of a temperature-induced serotype switch was documented. The influence of further environmental parameters on the expression of the serotype was investigated. Field experiments were intended to show the expression of the serotypes under multifactorial stimuli. In addition, the co-expression of two serotype proteins on one cell could be demonstrated.
In 2022, the building and transport sectors missed Germany's climate protection targets. In contrast to the transport sector, the long service lives in the building sector stand in the way of rapid technology change, which is why strategies must be implemented particularly early. In addition, the building stock is characterized by high investment costs with comparatively low greenhouse gas savings per euro invested. In combination, these obstacles considerably impede reaching the climate protection targets for the residential building stock.
The aim of this work is to develop a residential building stock model in order to simulate and analyze transformation paths under the influence of varying economic conditions, such as different CO2 price trajectories and a reinvestment of the CO2 tax in the modernization of the buildings.
In a first step, a residential building stock model is developed and applied under extrapolation of the economic conditions of the starting year. For this purpose, important parameters of the building stock are identified and analyzed on the basis of their past development, and scenarios and forecasts are considered. The result is a set of initial conditions and influencing factors on the further development, which are used for the modelling. In a second step, a systematic approach is developed to compute modernization rates endogenously under varying economic conditions.
This work presents a model that dynamically accounts for the economic conditions and the coupling principle when simulating full-modernization rates. The results show that full-modernization rates of 2%/a over longer periods require extreme conditions and are unrealistic. The main obstacles are the need for refurbishment (coupling principle), the decreasing energy-saving potentials of the younger construction-age classes, and windfall effects under improved subsidies. Since reaching the climate protection targets by adjusting the CO2 tax alone (even with reinvestment) is not possible within realistic tax levels in the model, a package of economic and legislative measures for reaching the targets is presented instead.
This doctoral dissertation comprises nine published articles covering different methods for 'Fast, Robust Rigid and Non-Rigid Registration for Globally Consistent 3D Scene and Shape Reconstruction'. The contributing articles are organized and discussed in three parts. The first part of the thesis, Chapter 2, explains three novel method classes for rigid point set registration: the Gravitational Approach (GA), the Fast Gravitational Approach (FGA), and RPSRNet. GA was introduced as the first physics-based rigid point set registration method. It models the alignment elegantly as rigid-body dynamics using Newtonian mechanics, and it opened new avenues for pattern-matching tasks beyond point set registration. The FGA method, published four years after GA, was presented as an extension that reduces the algorithmic complexity of GA from O(MN) to O(M log N) using a Barnes-Hut tree representation of the point clouds. It also eliminates GA's need for heuristic optimization parameter settings and achieves state-of-the-art alignment accuracy on LiDAR odometry. Finally, RPSRNet presents a deep learning version of FGA with custom convolution layers for hierarchical point feature embedding; it is robust and the fastest among state-of-the-art methods for LiDAR data registration. The second part of the thesis, Chapter 3, introduces NRGA, the first physics-based non-rigid point set registration method, which is computationally slow but robust against noisy and partial inputs. NRGA preserves structural consistency because it coherently regularizes the motion of deformable vertices. For articulated hand shape reconstruction, a tailored version, Articulated-NRGA, effectively refines the final hand shape. Collision and penetration avoidance between source and target surfaces is handled by constrained optimization in NRGA, which improves the reconstruction of hand-object interactions. The next contribution, the FoldMatch method, remodels shape deformation by introducing a wrinkle vector field (WVF) that captures complex clothing and garment details while fitting body models onto 3D scans. Quantitative evaluation of FoldMatch and NRGA shows their effectiveness in geometrically consistent surface modeling and reconstruction tasks. Finally, the third part of the thesis explains globally consistent outdoor scene reconstruction, odometry estimation, and uncertainty-guided pose-graph optimization in a novel LiDAR-based localization and map-building method called Deep Evidential LiDAR Odometry (DELO). This is the first odometry method to use predictive uncertainty modeling for the sensor pose prediction network.
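The gravitational intuition behind GA and FGA can be illustrated with a deliberately simplified, translation-only toy (softened unit-vector pulls instead of the full inverse-square law, and no rotation estimation; all names and parameter values are illustrative, not the thesis implementation):

```python
import numpy as np

def gravitational_pull(src, tgt, eps=1e-6):
    """Mean 'gravitational' pull of the target cloud on each source point.

    Softened unit directions only, not the full inverse-square law used by
    GA; this keeps the toy example numerically tame.
    """
    diff = tgt[None, :, :] - src[:, None, :]              # (M, N, d) displacements
    dist = np.linalg.norm(diff, axis=2)[:, :, None] + eps
    return (diff / dist).mean(axis=1)                      # (M, d) averaged directions

def register_translation(src, tgt, eta=0.2, iters=300):
    """Rigidly translate src towards tgt by following the net pull."""
    for _ in range(iters):
        src = src + eta * gravitational_pull(src, tgt).mean(axis=0)
    return src

rng = np.random.default_rng(0)
target = rng.normal(size=(50, 2))
source = target + np.array([2.0, -1.0])   # same cloud, rigidly shifted
aligned = register_translation(source, target)
offset = np.linalg.norm(aligned.mean(axis=0) - target.mean(axis=0))
```

FGA's Barnes-Hut tree would replace the exhaustive (M, N, d) pairwise tensor above with an O(M log N) approximation; a full GA update would additionally estimate rotation, e.g. from the net torque.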
Tropical intersection theory
(2010)
This thesis consists of five chapters. Chapter 1 contains the basics of the theory and is essential for the rest of the thesis. Chapters 2-5 are to a large extent independent of each other and can be read separately.
- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry, such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphisms.
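The tropical Bézout theorem proved in Chapter 1 takes, in the commonly quoted special case of plane curves, the following form (stated here in its standard textbook formulation rather than the thesis' general cycle version): for tropical curves $C$ and $D$ in $\mathbb{R}^2$ of degrees $c$ and $d$, the stable intersection satisfies

\[
\deg(C \cdot D) \;=\; \deg(C)\cdot\deg(D) \;=\; c\,d,
\]

i.e. $C$ and $D$ meet in $c\,d$ points counted with intersection multiplicities.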
The land-use plan (Flächennutzungsplan) is the central instrument of comprehensive planning at the city-wide level and can at the same time serve as a prime example of the loss of significance of formal plans, a loss that is not justified given the problems to be solved in practice. Confronted with current challenges of urban development, the plan is criticized above all for the overly lengthy procedures required for its preparation and for its overly rigid contents, which insufficiently account for uncertainties in actual development. Consequently, ways of further developing the formal instruments must be sought. Over the past decades, several selective adjustments have been made to the model of the land-use plan. Developments in neighboring European countries are also noteworthy: the Local Development Framework newly introduced in the English planning system is intended to be flexible and modular while simultaneously strengthening the strategic steering effect of its contents. A systematic investigation of the requirements, potentials, and limits of a further development of the land-use plan model has so far been lacking. For a future model to achieve its intended effects, a fundamental examination of the prevailing understanding of city-wide planning and its outcomes is also required. Against this background, the aim of the present work is to systematically derive and examine the model of the land-use plan in order to further develop it with the goal of increasing the steering power of the contents of the city-wide plan. Insights from an examination of the Local Development Framework are incorporated.
The work concludes that, despite numerous adjustments to the model of the land-use plan, some characteristics from its early days have survived that must be regarded as no longer appropriate. The main weaknesses of the current model include its static character and the insufficient consideration of the processual nature of urban development, including the examination of potential development alternatives. The study of the Local Development Framework shows that a transfer of some of its elements to the German system can be assumed to be feasible. The proposed adjustments to the model of the land-use plan open up the possibility of developing the land-use plan into a modular, dynamic, and strategic instrument of city-wide planning. The adjustments focus on a new overall structure as a portfolio of graphical and textual, formal and informal components, and on the integration of the factor of time as well as other strategic aspects of urban development, accompanied by a new understanding of the outcome of city-wide planning, according to which the land-use plan no longer constitutes the one canonical end product but is continuously reviewed and further developed together with its various components.
Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. On the other hand, the HCI domain is concerned with the usability of the designed visualization from the human perspective. Thus, designing a visualization system requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the inner behavior of complex data sets in less time.
Graphs are widely used data structures to represent the relations between data elements in complex applications. Due to the versatility of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help visualize this rich data type. Some of these algorithms target large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users' perspective, and at the same time it should fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles produces graph visualizations that help users understand and explore the complex behavior of critical systems.
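A minimal force-directed layout in the Fruchterman-Reingold style illustrates the kind of aesthetic-criteria-driven algorithms referred to above (a generic textbook scheme, not a layout method contributed by this thesis; all parameter values are illustrative):

```python
import numpy as np

def force_directed_layout(edges, n, iters=200, k=1.0, seed=0):
    """Toy Fruchterman-Reingold layout: repulsion k^2/d between all node
    pairs, attraction d^2/k along edges, with a cooling step-size cap."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n, 2))
    for step in range(iters):
        disp = np.zeros_like(pos)
        # pairwise repulsion pushes every node away from every other node
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=2) + 1e-9
        disp += (delta / dist[:, :, None] * (k**2 / dist)[:, :, None]).sum(axis=1)
        # attraction pulls the endpoints of each edge together
        for u, v in edges:
            d = pos[u] - pos[v]
            dn = np.linalg.norm(d) + 1e-9
            pull = d / dn * (dn**2 / k)
            disp[u] -= pull
            disp[v] += pull
        # cooling: cap the displacement, shrinking the cap over time
        temp = 0.1 * (1 - step / iters)
        lengths = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / lengths * np.minimum(lengths, temp)
    return pos

# lay out a 4-cycle; the result should spread the nodes apart
pos = force_directed_layout([(0, 1), (1, 2), (2, 3), (3, 0)], n=4)
```

The repulsion term serves the "avoid node overlap" aesthetic criterion, while edge attraction keeps related nodes close; real layout libraries add many refinements (Barnes-Hut approximation, edge-crossing reduction, constraints).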
In this thesis, we utilize graph visualization techniques to model the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users' perspective.
The core contribution of this thesis is a framework called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes some of the safety aspects (e.g., CFT models) of embedded systems but also helps engineers and experts analyze safety-critical situations of the system. For this, the framework provides a 2D-plus-3D environment: the 2D part shows the graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part shows the system's 3D model. Both views are integrated smoothly within the 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted several studies with real end users as well as with general users. The results of the main study, which targeted the overall ESSAVis framework, show a high acceptance ratio as well as higher accuracy and better performance when the visual support of the framework is used.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the 3D stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users' perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising direction of research in which visualization designers can benefit from the richness of 3D technologies when visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating the many challenges in the design of visualization for educational purposes.
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of "virtual material design". New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, the probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions. These core parts are then reconnected over critical regions if they fulfill certain conditions ensuring that they belong to the same fiber.
In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness: they either fail to achieve high volume fractions, to produce non-overlapping fibers, or to control the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. First, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Second, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide the estimation of all parameters needed for fitting this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber-reinforced materials.
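The random-walk step of the two-step model described above can be caricatured in isolation; the sketch below replaces the multivariate von Mises-Fisher distribution with simple Gaussian angular noise (a stand-in, not the thesis' model) and omits the force-biased packing step entirely:

```python
import numpy as np

def bent_fiber(n_steps, step=1.0, kappa=20.0, seed=0):
    """Random-walk fiber: each segment direction is the previous direction
    perturbed by Gaussian noise of scale ~ 1/sqrt(kappa), then renormalized.

    A simplified stand-in for a von Mises-Fisher directional step; larger
    kappa concentrates the distribution, producing straighter fibers.
    """
    rng = np.random.default_rng(seed)
    pts = [np.zeros(3)]
    direction = np.array([1.0, 0.0, 0.0])
    for _ in range(n_steps):
        direction = direction + rng.normal(scale=1 / np.sqrt(kappa), size=3)
        direction /= np.linalg.norm(direction)   # project back to the unit sphere
        pts.append(pts[-1] + step * direction)
    return np.array(pts)

straight = bent_fiber(100, kappa=1e6)   # nearly straight fiber
bent = bent_fiber(100, kappa=5.0)       # visibly curved fiber
# the end-to-end distance shrinks as the bending level increases
e2e_straight = np.linalg.norm(straight[-1] - straight[0])
e2e_bent = np.linalg.norm(bent[-1] - bent[0])
```

In the full model of the thesis, many such fibers are generated and then rearranged by force-biased packing until they no longer overlap, which the sketch does not attempt.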
More than ten years ago, ER-ANT1 was shown to act as an ATP/ADP antiporter and to exist in the endoplasmic reticulum (ER) of higher plants. Because structurally different transporters generally mediate energy provision to the ER, the physiological function of ER-ANT1 was not directly evident.
Interestingly, mutant plants lacking ER-ANT1 exhibit a photorespiratory phenotype. Despite considerable research efforts, the possible connection between the transporter and photorespiration also remained elusive. Here, a forward genetic approach was used to decipher the role of ER-ANT1 in the plant context and its association with photorespiration.
This strategy revealed that the additional absence of a putative HAD-type phosphatase partially restored the photorespiratory phenotype. Localisation studies showed that the corresponding protein is targeted to the chloroplast. Moreover, biochemical analyses demonstrated that the HAD-type phosphatase is specific for pyridoxal phosphate. These observations, together with transcriptional and metabolic data of the corresponding single (ER-ANT1) and double (ER-ANT1, phosphatase) loss-of-function mutant plants, revealed an unexpected connection of ER-ANT1 to vitamin B6 metabolism.
Finally, a scenario is proposed that explains how ER-ANT1 may influence B6 vitamer phosphorylation, thereby affecting photorespiration and causing the several other physiological alterations observed in the corresponding loss-of-function mutant plants.
The growing computational power enables the use of the Population Balance Equation (PBE) to model the steady-state and dynamic behavior of multiphase flow unit operations. The two-phase flow behavior inside liquid-liquid extraction equipment is characterized by several factors: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer process. As a result, the general PBE has no well-known analytical solution, and robust numerical solution methods with low computational cost are therefore highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C., Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to continuous flow systems in the spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady-state and dynamic simulation algorithms for the hydrodynamics and mass transfer behavior of liquid-liquid extraction equipment are developed and efficiently implemented. At the steady-state modeling level, a Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM, the simplest case of the SQMOM). On the other hand, the hydrodynamic integral source terms are closed using the analytical Two-Equal-Weight Quadrature (TEqWQ). To avoid the numerical solution of the droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the particular case of a unit velocity exponent appearing in the droplet swarm model. In addition, the source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential equations with respect to space is solved using the MATLAB adaptive Runge-Kutta method (ODE45). At the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and resolved using the finite volume method. To close the mathematical model, the required quadrature nodes and weights are calculated using the analytical solution based on the Two-Unequal-Weights Quadrature (TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary differential equation system is obtained and solved.
Both steady-state and dynamic algorithms are extensively validated at the analytical, numerical, and experimental levels. At the numerical level, the predictions of both algorithms are validated using the extended fixed pivot technique as implemented in the PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new multivariate population balance environment for particulate system modeling and simulation. Procedia Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used to model the steady-state hydrodynamics and the physical and reactive mass transfer behavior of agitated liquid-liquid extraction columns under different operating conditions. Both models are found to be efficient and able to follow the liquid extraction column behavior during column scale-up, where three column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local interactions among the contacted phases, a reduced coupled PBE-CFD framework is used to model the hydrodynamic behavior of pulsed sieve-plate columns. In this regard, OPOSPM is utilized and implemented in the FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition, the proposed coupled OPOSPM-CFD framework is extended to include mass transfer. The proposed framework is numerically tested and the results are compared with published experimental data. The breakage and coalescence parameters required to perform the 2D-CFD simulation are estimated using the PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. Very good agreement is obtained at both the experimental and numerical validation levels.
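The OPOSPM closure used repeatedly above can be illustrated in a deliberately reduced, spatially homogeneous, breakage-only setting (the constant breakage frequency g and daughter count below are illustrative placeholders, not the thesis' correlations): OPOSPM tracks only the total droplet number N and volume V, from which the mean mass diameter d30 = (6V/(pi N))^(1/3) follows.

```python
import numpy as np

def opospm_breakage(N0, V0, g=0.5, n_daughters=2.0, t_end=2.0, dt=1e-3):
    """One Primary One Secondary Particle Method, breakage only:
        dN/dt = (n_daughters - 1) * g * N   (breakage multiplies droplets)
        dV/dt = 0                            (total dispersed volume conserved)
    Integrated with explicit Euler; g is an assumed constant breakage
    frequency, n_daughters the mean number of daughter droplets."""
    N, V = N0, V0
    for _ in range(int(t_end / dt)):
        N += dt * (n_daughters - 1.0) * g * N
    d30 = (6.0 * V / (np.pi * N)) ** (1.0 / 3.0)
    return N, V, d30

N, V, d30 = opospm_breakage(N0=1.0, V0=1.0)
# breakage increases the droplet count, conserves volume, and shrinks d30
```

In the full framework, N and V are transported through the spatial domain and the source terms additionally include coalescence and the mass transfer closures discussed above.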
The trends and developments confronting the wine industry on the consumer side are manifold. For several years, a decline in the per-capita consumption of alcoholic beverages has been observed in Germany. A further trend is the increasing health consciousness in society. These and other developments drive the wine industry to position alcohol-free and alcohol-reduced wines as product innovations on the market. This dissertation is devoted to a theoretical and empirical analysis of the target groups as well as the factors influencing innovation adoption in the context of alcohol-free wines. It is thematically embedded in the structural support project "Weinnova: Innovative Produkte mit verringertem Alkoholgehalt im Segment Wein", which was funded by the European Innovation Partnership for Agricultural Productivity and Sustainability (EIP-AGRI) as part of the measures and development plan for rural areas of Baden-Württemberg (MEPL III) from 2019 to 2022.
In DS-CDMA, spreading sequences are allocated to users to separate the different links, namely base station to user in the downlink and user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence reduce the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the allocation of spreading sequences based on the users' channel conditions is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
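The periodic correlation properties described above can be computed directly; the sketch below evaluates the periodic auto-correlation of a length-7 m-sequence (a generic textbook example, not one of the thesis' allocation schemes):

```python
import numpy as np

def periodic_correlation(a, b):
    """Periodic (cyclic) cross-correlation of two ±1 sequences of length N:
    theta(tau) = sum_n a[n] * b[(n + tau) mod N], for tau = 0..N-1."""
    n = len(a)
    return np.array([int(np.dot(a, np.roll(b, -tau))) for tau in range(n)])

# one period of an m-sequence of length 7 (LFSR for x^3 + x + 1,
# bits 1110100 mapped 1 -> +1, 0 -> -1)
m_seq = np.array([1, 1, 1, -1, 1, -1, -1])
acf = periodic_correlation(m_seq, m_seq)
# m-sequences have the ideal two-valued autocorrelation: N at zero lag, -1 elsewhere
```

Good auto-correlation (a sharp peak at zero lag) aids frame synchronisation; the same function applied to two different sequences measures the cross-talk a user pair would experience.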
In cryptography, secret keys are used to ensure the confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract secret keys. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted by the pair of nodes. The process of key extraction consists of four steps: channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key as a vector of bits (0, 1). Due to errors in measurement and quantisation and due to additive Gaussian noise, the bits of the preliminary keys disagree. These errors are corrected using error detection and correction methods to obtain a synchronised key at both nodes. Further, by means of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. Instead of quantising the channel measurements directly, the key generation process can be made efficient and fast if their reciprocity is first enhanced and they are then quantised appropriately. In this thesis, four methods of enhancing reciprocity are presented: l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The enhanced measurements are quantised by binary and adaptive quantisation. The entire process of key generation, from measuring the channel profile to obtaining a secure key, is then validated using real-world channel measurements. The performance evaluation is done by comparing the methods in terms of bit disagreement rate, key generation rate, tests of randomness, a robustness test, and an eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
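The four-step extraction pipeline can be sketched end to end in a toy form; the Gaussian channel model, the mean-threshold quantiser, and the use of SHA-256 for privacy amplification are illustrative assumptions, and the information reconciliation step is omitted:

```python
import hashlib
import numpy as np

def quantise(measurements):
    """Mean-threshold binary quantisation: sample above the mean -> 1, else 0."""
    return (measurements > measurements.mean()).astype(int)

def privacy_amplify(bits):
    """Compress the (reconciled) bit string with a secure hash (SHA-256)."""
    return hashlib.sha256(''.join(map(str, bits)).encode()).hexdigest()

rng = np.random.default_rng(1)
channel = rng.normal(size=200)                  # 'true' reciprocal channel profile
alice = channel + 0.05 * rng.normal(size=200)   # noisy observation at node A
bob = channel + 0.05 * rng.normal(size=200)     # noisy observation at node B

key_a, key_b = quantise(alice), quantise(bob)
bdr = np.mean(key_a != key_b)                   # bit disagreement rate
final_key = privacy_amplify(key_a)              # 256-bit key after amplification
```

Because both nodes observe nearly the same channel, the bit disagreement rate stays low; the reciprocity-enhancement methods of the thesis (l1-norm minimisation, clustering, Kalman filtering, polynomial regression) would be applied to the measurements before quantisation to push it lower still.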
Far from growth centers, spatial-planning development axes, and economic competitiveness, peripheralised areas are found in northern Thuringia and southern Saxony-Anhalt. The persistent transformation process there is characterised by out-migration, a lack of investment, and above-average unemployment figures. The dilemma is that these municipalities, marked by decoupling, stigmatisation, and dependencies not of their own making, are unable to reinvent themselves through endogenous forces, which would enable a regeneration and ultimately revive the local real estate market, currently unattractive to investors, within the value chain. These development paths, followed for more than 20 years, affect the settlement structures, which in many places threaten to perforate. It must be noted that the process of decline is far from complete.
Social infrastructure buildings, such as former schools, kindergartens, and hospitals, are affected by these developments to a particular degree. Especially because of the self-reinforcing effect of demographic change, they serve as an object of urban planning research, against the background of a possible revalorisation as an urban inner-development strategy (adaptation) after these properties have lost their original use. The necessity for urban planning action arises, among other things, from their frequently prominent urban location, their role as rare architectural witnesses of their time, in part within ensembles of cultural-historical value, and their function as landmarks within the order of a town or village.
The work identifies the new challenges that owners must overcome in dealing with vacant social infrastructure buildings in peripheralised small and medium-sized towns, and it critically reflects on the effectiveness of informal and formal planning instruments. Concrete proposals are made as to how real estate management and owner involvement should be carried out in very subdued residential real estate markets. Furthermore, strategic approaches for administrative action are recommended that are tailored to the special market conditions.
In addition to these conclusions drawn by analogy from theory, the field experiment in the study region described above yielded operationalisable data from extensive surveys. From this density of information, valid statements emerged whose reliability fed into the development of a location analysis database. Thus, not only could the problem situation be objectively demonstrated, but the exploration also succeeded in developing a planning instrument that is manageable for the municipalities and transferable elsewhere.
DeepKAF: A Knowledge Intensive Framework for Heterogeneous Case-Based Reasoning in Textual Domains
(2021)
Business-relevant domain knowledge can be found in plain text across message exchanges in customer support tickets, employee communications, and other business transactions. Decoding text-based domain knowledge can be a very demanding task, since traditional methods focus on a comprehensive representation of the business and its relevant paths. Such a process can be highly complex, time-costly, and of high maintenance effort, especially in environments that change dynamically.
In this thesis, a novel approach is presented for developing hybrid case-based reasoning (CBR) systems that bring together the benefits of deep learning approaches with the advantages of CBR. The Deep Knowledge Acquisition Framework (DeepKAF) is a domain-independent framework that uses deep neural networks and big data technologies to decode domain knowledge with minimal involvement of domain experts. While this thesis focuses on textual data because of the availability of the datasets, CBR systems based on DeepKAF are able to deal with heterogeneous data, where a case can be represented by different attribute types, and to automatically extract the necessary domain knowledge while keeping the ability to provide an adequate level of explainability. The main focus within this thesis is on automatic knowledge acquisition, the building of similarity measures, and case retrieval.
Throughout this research, several sets of experiments have been conducted and validated by domain experts. Past textual data produced over around 15 years were used for the experiments; the text is a mixture of English and German describing specific domain problems, with many abbreviations. Based on these data, the necessary knowledge repositories were built and then used to evaluate the suggested approach towards effective monitoring and diagnosis of business workflows. A further, public dataset, the CaseLaw dataset, which represents around 22 million cases from different US states, was used to validate DeepKAF on longer texts and cases with more attributes.
Further work motivated by this thesis could investigate how different deep learning models can be used within the CBR paradigm to solve some of the chronic CBR challenges and benefit large-scale multi-dimensional enterprises.
The main aim of this work was to obtain an approximate solution of seismic traveltime tomography problems with the help of splines based on reproducing kernel Sobolev spaces. In order to be able to apply the spline approximation concept to surface wave as well as to body wave tomography problems, the spherical spline approximation concept was extended to the case where the domain of the function to be approximated is an arbitrary compact set in R^n and a finite number of discontinuity points is allowed. We present applications of this spline method to seismic surface wave as well as body wave tomography, and discuss the theoretical and numerical aspects of such applications. Moreover, we ran numerous numerical tests that justify the theoretical considerations.
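The minimum-norm spline concept underlying this approach can be sketched in generic notation (the symbols below are placeholders, not the thesis's exact definitions): for a reproducing kernel $K$ and bounded linear functionals $\mathcal{F}_i$ (e.g., traveltime integrals along rays) with data $y_i$, the interpolating spline has the form

```latex
s = \sum_{j=1}^{N} a_j\, \mathcal{F}_j K(\cdot,\cdot),
\qquad
\sum_{j=1}^{N} a_j\, \mathcal{F}_i \mathcal{F}_j K = y_i,
\quad i = 1,\dots,N,
```

where $\mathcal{F}_j$ acts on the second argument of $K$ and the coefficients $a_j$ solve the resulting Gram system.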
Biological clocks exist across all life forms and serve to coordinate organismal physiology with periodic environmental changes. The underlying mechanism of these clocks is predominantly based on cellular transcription-translation feedback loops in which clock proteins mediate the periodic expression of numerous genes. However, recent studies point to the existence of a conserved timekeeping mechanism independent of cellular transcription and translation, but based on cellular metabolism. The existence of these metabolic clocks was inferred from the observation of circadian and ultradian oscillations in the level of hyperoxidized peroxiredoxin proteins. Peroxiredoxins are enzymes found almost ubiquitously throughout life. Originally identified as H2O2 scavengers, recent studies show that peroxiredoxins can transfer oxidation to, and thereby regulate, a wide range of cellular proteins. Thus, it is conceivable that peroxiredoxins, using H2O2 as the primary signaling molecule, have the potential to integrate and coordinate much of cellular physiology and behavior with metabolic changes. Nonetheless, it remained unclear whether peroxiredoxins are passive reporters of metabolic clock activity or active determinants of cellular timekeeping. Budding yeast possess an ultradian metabolic clock termed the Yeast Metabolic Cycle (YMC). The most obvious feature of the YMC is a high-amplitude oscillation in oxygen consumption. Like circadian clocks, the YMC temporally compartmentalizes cellular processes (e.g. metabolism) and coordinates cellular programs such as gene expression and cell division. The YMC also exhibits oscillations in the level of hyperoxidized peroxiredoxin proteins.
In this study, I used the YMC clock model to investigate the role of peroxiredoxins in cellular timekeeping, as well as the coordination of cell division with the metabolic clock. I observed that cytosolic 2-Cys peroxiredoxins are essential for robust metabolic clock function. I provide direct evidence for oscillations in cytosolic H2O2 levels, as well as cyclical changes in oxidation state of a peroxiredoxin and a model peroxiredoxin target protein during the YMC. I noted two distinct metabolic states during the YMC: low oxygen consumption (LOC) and high oxygen consumption (HOC). I demonstrate that thiol-disulfide oxidation and reduction are necessary for switching between LOC and HOC. Specifically, a thiol reductant promotes switching to HOC, whilst a thiol oxidant prevents switching to HOC, forcing cells to remain in LOC. Transient peroxiredoxin inactivation triggered rapid and premature switching from LOC to HOC. Furthermore, I show that cell division is normally synchronized with the YMC and that deletion of typical 2-Cys peroxiredoxins leads to complete uncoupling of cell division from metabolic cycling. Moreover, metabolic oscillations are crucial for regulating cell cycle entry and exit. Intriguingly, switching to HOC is crucial for initiating cell cycle entry whilst switching to LOC is crucial for cell cycle completion and exit. Consequently, forcing cells to remain in HOC by application of a thiol reductant leads to multiple rounds of cell cycle entry despite failure to complete the preceding cell cycle. On the other hand, forcing cells to remain in LOC by treating with a thiol oxidant prevents initiation of cell cycle entry.
In conclusion, I propose that peroxiredoxins – by controlling metabolic cycles, which are in turn crucial for regulating the progression through cell cycle – play a central role in the coordination of cellular metabolism with cell division. This proposition, thus, positions peroxiredoxins as active players in the cellular timekeeping mechanism.
We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and study the limiting behaviour as t tends to infinity.
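In generic notation (ζ here denotes the killing time; the symbols are illustrative, not the thesis's exact conventions), the object of study is the limit

```latex
\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \mathrm{d}W_t,
\qquad
\nu(A) = \lim_{t\to\infty} \mathbb{P}_x\bigl(X_t \in A \,\big|\, \zeta > t\bigr),
```

which, when it exists, is typically a quasi-stationary distribution of the killed diffusion.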
We treat an autoregressive ARCH model with possible exogenous variables. We estimate the conditional volatility of the model by applying feedforward networks to the residuals, and prove consistency and asymptotic normality of the estimates under a growth rate on the feedforward network complexity. Recurrent neural network estimates of GARCH and Value-at-Risk are studied; we prove consistency and asymptotic normality for the recurrent neural network ARMA estimator under a growth rate on the recurrent network complexity. We also address the estimation problem in discrete-time stochastic variance models by means of feedforward networks and by introducing new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and Value-at-Risk. We tested these distributions, together with other new distributions, in GARCH-family models against distributions commonly used in financial markets, such as the Normal Inverse Gaussian, the normal and Student's t distributions. As an application of the models, some German stocks are studied and the different approaches are compared with the most common method, a GARCH(1,1) fit.
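The GARCH(1,1) baseline mentioned above follows the standard conditional-variance recursion; a minimal sketch (the parameter values are illustrative assumptions, not estimates from the thesis):

```python
# Standard GARCH(1,1) recursion: sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}
# Parameters below are illustrative, not fitted values.

def garch11_volatility(residuals, omega=0.01, alpha=0.08, beta=0.90, sigma2_0=0.02):
    """Return the conditional-variance path for a series of residuals."""
    sigma2 = [sigma2_0]
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

rets = [0.5, -1.2, 0.3, 0.8, -0.4]
vols = garch11_volatility(rets)
# the large |-1.2| shock raises the next-period conditional variance
```

The recursion makes the volatility-clustering mechanism explicit: a large shock feeds into the next period's variance through the alpha term, and persistence is controlled by beta.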
A prime motivation for using XML to directly represent pieces of information is its ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information-quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets because the structure, besides the textual information, may exhibit variations across document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and is thus a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees (considering XML structure either in isolation or combined with textual information), mechanisms to select relevant information from XML trees and organize it into a suitable format for similarity calculation, and efficient algorithms for large-scale identification of similar, set-represented objects.
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
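The notion of similar, set-represented objects can be illustrated with a minimal threshold-based set-similarity join; the bigram tokenization, Jaccard function and threshold below are generic illustrative choices, not the thesis's exact operators or optimized algorithms:

```python
# Sketch of a naive set-similarity join: tokenize strings into q-gram
# sets and keep pairs whose Jaccard similarity clears a threshold.
# All choices (q=2, threshold=0.5) are illustrative assumptions.

def qgrams(s, q=2):
    s = "#" * (q - 1) + s + "#" * (q - 1)  # pad so boundaries form q-grams
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_join(left, right, threshold=0.5, q=2):
    """Return all pairs whose q-gram sets are Jaccard-similar above threshold."""
    return [(l, r) for l in left for r in right
            if jaccard(qgrams(l, q), qgrams(r, q)) >= threshold]

pairs = similarity_join(["john smith"], ["jon smith", "mary jones"])
# the near-duplicate pair survives, the unrelated one is filtered out
```

Production-grade similarity joins avoid this quadratic pair enumeration with filtering techniques (e.g. prefix or length filters), which is precisely the kind of large-scale algorithmics the thesis addresses.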
Continuous fibre reinforced thermoplastics are a highly competitive material class for
diversified applications because of inherent properties such as light-weight construction
potential, integral design, corrosion resistance and a high energy-absorption level.
With these materials, one approach towards large-volume part production
is an automated process line, consisting of a pressing process for
semi-finished sheet material production, a thermoforming step and additional
joining technologies. To allow short cycle times in the thermoforming step, the
semi-finished sheet materials, often referred to as “organic sheets”, have
to be fully impregnated and consolidated.
Nowadays, even this combination of outstanding physical and chemical material
properties with an economical processing technology is no guarantee of a
breakthrough for continuous fibre reinforced thermoplastics, mainly because of
the high material costs of the semi-finished sheet materials. These costs can be attributed
to an unadapted selection of materials or process parameters, as well
as to an unfavourable choice of the pressing process itself.
Therefore, the aim of the present investigations was to generate alternatives
regarding the choice of raw materials and the set-up or selection of the pressing
process line, and to provide theoretical tools for the determination of process
parameters and equipment dimensions.
Concerning raw materials, the use of blending technology is one promising
approach towards cost reduction for the matrix component. Novel options
regarding the fibre structure are CF yarns with high filament counts (e.g. 6K or 12K instead of 3K) and multiaxial fibre orientations. Both approaches were pursued
for sheet materials with carbon fibre reinforcement and high-temperature
thermoplastics.
Two newly developed ternary blend matrices with PEEK and PEI as the main
ingredients were tested against neat PEEK. PES and PSU were used as
the third blend component, which offers a cost reduction potential of approximately
30 % compared to the base PEEK polymer. The static pressing experiments
showed that the processing behaviour of the new blends is similar to
that of the neat PEEK matrix. A maximum process temperature of 410 °C should not be exceeded, otherwise thermal degradation occurs and negatively influences
the mechanical laminate properties. To accelerate impregnation, a
process pressure of 25 bar combined with a laterally open tooling concept is
helpful. No differences were identified when the film-stacking technique was substituted by
powder-prepreg technology or vice versa. Increasing the yarn filament count
from 3K over 6K to 12K, which corresponds to an increase in bundle diameter and therefore
in transverse flow distance, requires an extended impregnation time. If unspread
yarns are used, the risk of void entrapment rises tremendously, especially with 12K
yarns and UD structures. To reach full impregnation with a woven 6K fabric, an increase in
process time of 20 to 30 % compared to a 3K textile structure is required. Furthermore,
it was shown that if only transverse flow is used for the impregnation of a UD structure,
a maximum areal weight of 300-400 g/m² should not be exceeded. Additionally,
the transport of air is strongly affected by the fibre orientation, because most
of the displaced air escapes in the longitudinal fibre direction. These facts play an
important role in the design of a multiaxial laminate and of an impregnation process for
such a structure, and have to be taken into account.
Apart from these static pressing experiments, the semi-continuous (stepwise compression
moulding) and continuous (double-belt press) processing technologies
were investigated and compared with each other. The first basic processing
trials on the stepwise compression moulding equipment were carried out with the material
system GF/PA66. Whereas the processing behaviour of this material combination
in a double-belt press is quite well known, there is only little information about
semi-continuous processing. The trials showed that the resulting laminate
quality of the two technologies differs only in the achievable local surface quality.
Mechanical laminate properties such as three-point bending stiffness and strength are
directly comparable. Since there is only limited experience with the stepwise
compression moulding process, improvements in surface quality are feasible by adapting the step procedure and the temperature distribution within
the tooling concept. If laminates produced by semi-continuous processing are deployed
in a thermoforming process or in a non-visible structural application, the surface
appearance plays only a minor role.
The present results with high-temperature thermoplastic matrices and CF confirm
the positive assessment of the stepwise compression moulding technology, even though the mechanical laminate values reached only 90 % of the data obtained
by static press processing. Compared with data from the literature, 90 % is already
a high mechanical performance level. The results are quite promising for the use of
the semi-continuous technology, despite the process set-up and processing parameters
not yet being optimised. Furthermore, there are tremendous advantages in processing
equipment costs.
Finally, a process model was developed based on the experimental data pool. This
model can be characterised as a tool that provides useful boundary conditions and
dimension values for the selection of a certain pressing process depending on the
desired material combination, laminate thickness and production output. The applicability
and accuracy of the model were proven by a direct comparison between experimental
and calculated data.
First of all, the temperature profile of the pressing process was generalised into a
common structure. This profile reflects the main characteristics of processing
a thermoplastic composite material. Depending on the material combination, the
laminate thickness and the occurring heat transfer, several process and processing
portfolios were calculated. For a defined combination of the aforementioned parameters,
these portfolios directly provide the periods of time for heating and cooling
of the laminate structure. The last step is to convert this information into equipment
dimensions and to decide which machinery configuration fulfils the requirements.
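As an illustration of how such heating periods scale with laminate thickness, a generic one-term transient-conduction estimate for a plane wall can be sketched (this is a textbook approximation, not the thesis's process model; the thickness and diffusivity values are illustrative):

```python
# One-term series solution for the centreline temperature of a plane wall
# whose surfaces are suddenly brought to the process temperature:
#   theta(t) = (4/pi) * exp(-(pi/2)^2 * a*t / L^2),  L = half thickness.
# Material data below are illustrative, not measured values.

import math

def heating_time(thickness_m, diffusivity_m2s, theta_target=0.05):
    """Time for the centreline excess-temperature ratio to fall to theta_target."""
    L = thickness_m / 2.0                      # half thickness (heated from both sides)
    lam2 = (math.pi / 2.0) ** 2                # first eigenvalue squared
    return -(L ** 2 / (diffusivity_m2s * lam2)) * math.log(theta_target * math.pi / 4.0)

# 2 mm laminate, diffusivity ~2e-7 m^2/s (a typical order of magnitude)
t = heating_time(2e-3, 2e-7)
```

Because the time scales with the square of the half thickness, doubling the laminate thickness quadruples the required heating period, which is why thickness enters the process portfolios so prominently.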
Laser-induced interstitial thermotherapy (LITT) is a minimally invasive procedure to destroy liver
tumors through thermal ablation. Mathematical models are the basis for computer simulations
of LITT, which support the practitioner in planning and monitoring the therapy.
In this thesis, we propose three potential extensions of an established mathematical model of
LITT, which is based on two nonlinearly coupled partial differential equations (PDEs) modeling
the distribution of the temperature and the laser radiation in the liver.
First, we introduce the Cattaneo–LITT model for delayed heat transfer in this context, prove its
well-posedness and study the effect of an inherent delay parameter numerically.
Second, we model the influence of large blood vessels in the heat-transfer model by means
of a spatially varying blood-perfusion rate. This parameter is unknown at the beginning of
each therapy because it depends on the individual patient and the placement of the LITT
applicator relative to the liver. We propose a PDE-constrained optimal-control problem for the
identification of the blood-perfusion rate, prove the existence of an optimal control and prove
necessary first-order optimality conditions. Furthermore, we introduce a numerical example
based on which we demonstrate the algorithmic solution of this problem.
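The perfusion-identification problem can be sketched in generic tracking-type notation (the symbols, norms and admissible set here are illustrative assumptions, not the thesis's exact formulation):

```latex
\min_{\xi \in \Xi_{\mathrm{ad}}}\; J(T, \xi)
= \frac{1}{2}\,\bigl\| T(\xi) - T_{\mathrm{meas}} \bigr\|_{L^2}^2
+ \frac{\alpha}{2}\,\| \xi \|^2
\quad \text{subject to the coupled temperature--radiation state system,}
```

where $\xi$ denotes the spatially varying blood-perfusion rate and $\alpha > 0$ is a regularization weight.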
Third, we propose a reformulation of the well-known PN model hierarchy with Marshak
boundary conditions as a coupled system of second-order PDEs to approximate the radiative-transfer
equation. The new model hierarchy is derived in a general context and is applicable
to a wide range of applications other than LITT. It can be generated in an automated way by
means of algebraic transformations and allows the solution with standard finite-element tools.
We validate our formulation in a general context by means of various numerical experiments.
Finally, we investigate the coupling of this new model hierarchy with the LITT model numerically.
In construction within existing buildings, new reinforced-concrete members are frequently connected force-fit to existing load-bearing structures. For cast-in-place concrete members, this is conveniently realised with a lap splice.
Until the end of the 1950s, plain (smooth) reinforcing bars were predominantly used in reinforced concrete construction, before being superseded, with a transition period lasting until the end of the 1970s, by the ribbed reinforcing bars used today. In contrast to lap splices of bars of the same type and grade, which have been standardised since 1925, combined lap splices of plain and ribbed bars remain unregulated to this day.
To remedy this deficit, differentiated detailing rules were derived in this work that enable scientifically validated and at the same time economical solutions for combined lap splices: once the demolition of existing old concrete is taken into account, an economical construction method calls for full-strength splices of exposed historical plain bars with currently used ribbed bars at the smallest possible lap lengths. In doing so, the requirements of today's codes regarding reliability against failure in the ultimate limit state (ULS) and regarding the intended serviceability, ensured by limiting crack widths in the serviceability limit state (SLS), must be observed.
For various combined lap splices of hooked plain bars (BStI) and ribbed bars (B500) with straight ends or end hooks, the required lap lengths were determined empirically in systematically structured test series. In the process, a fundamental understanding of the load-bearing behaviour of combined lap splices was gained and a generally applicable load-transfer model was developed.
For the design of combined lap splices, an engineering model was further derived that reliably describes the load-bearing behaviour of such splices and confirms the experimentally determined lap lengths. Taking into account the concrete tensile strength governing bond, the steel stresses and the bar diameters, a design diagram for the required lap length of certain splice combinations was developed on the basis of statistical methods, and a complementary FE modelling was carried out.
Building on this, generally applicable equations for determining the design values of the lap lengths of combined splices of plain bar BStI and ribbed bar B500 are given, and detailing rules are developed for combinations of bar diameters, concrete grades and bond conditions that regularly occur in practice; for combined splices, these can be applied as equivalents to the EC2 rules for new construction.
In recent years, there has been a growing need for accurate 3D scene reconstruction. Recent developments in the automotive industry have led to the increased use of ADAS where 3D reconstruction techniques are used, for example, as part of a collision detection system. For such applications, scene geometry reconstruction is usually performed in the form of depth estimation, where distances to scene objects are obtained.
In general, depth estimation systems can be divided into active and passive. Both systems have their advantages and disadvantages, but passive systems are usually cheaper to produce and easier to assemble and integrate than active systems. Passive systems can be stereo- or multiple-view based. Up to a certain limit, increasing the number of views in multi-view systems usually results in improved depth estimation accuracy.
One potential problem for ensuring the reliability of multi-view systems is the need to accurately estimate the orientation of their optical sensors. One way to guarantee sensor placement in multi-view systems is to rigidly fix the sensors at the manufacturing stage. In contrast to arbitrary sensor placement, a simplified and known sensor placement geometry further simplifies the depth estimation.
This leads to the concept of the light field, which parameterizes all visible light passing through all viewpoints by its intersections with an angular and a spatial plane. Applied to computer vision, this yields a 2D set of 2D images, where the physical distances between the images are fixed and proportional to each other.
Existing light field depth estimation methods provide good accuracy, suitable for industrial applications. However, the main problems of these methods are their running time and resource requirements. Most of the algorithms presented in the literature are tuned for accuracy, can only be run on high-performance machines and often require a significant amount of time to process the data and obtain results.
Real-world applications often have running time requirements, and frequently there is also a power-consumption limitation. In this dissertation, we investigate the problem of building a depth estimation system with a light field camera that satisfies the running time and power consumption constraints without significant loss of estimation accuracy.
First, an algorithm for calibrating light field cameras is proposed, together with an algorithm for automatic calibration refinement that works on arbitrary captured scenes. An algorithm for classical geometric depth estimation using light field cameras is then proposed, and ways to optimize the algorithm for real-time use without significant loss of accuracy are presented. Finally, we show how the presented depth estimation methods can be extended using modern deep learning paradigms under the two previously mentioned constraints.
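A minimal illustration of the geometric principle behind such fixed-baseline systems (the focal length and baseline below are made-up values, and real light-field pipelines first estimate disparity per pixel): depth follows from disparity by triangulation.

```python
# Triangulation step of geometric depth estimation: with focal length f
# (pixels) and baseline B (meters) between adjacent views, a disparity d
# (pixels) corresponds to depth Z = f * B / d. Values are illustrative.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.01):
    """Convert disparity between adjacent views into metric depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(4.0)  # a 4-pixel shift between neighbouring views
```

The inverse relationship explains why nearby objects (large disparity) are estimated more precisely than distant ones, and why the known, fixed inter-view spacing of a light-field camera simplifies the computation.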
Interactions between flow hydrodynamics and biofilm attributes and functioning in stream ecosystems
(2023)
Biofilms constitute an integral part of freshwater ecosystems and are central to regulating essential stream biogeochemical functions, such as nutrient uptake and metabolism. Understanding the environmental factors that dictate the composition of biofilm communities and their role in whole-system nutrient cycling remains challenging, given the large spatial and temporal variability of biofilm communities. Pristine mountain streams exhibit a heterogeneous streambed ranging from boulders to sand, provoking high spatiotemporal flow variability. Our current knowledge of the interactions between flow hydrodynamics and biofilm attributes stems from mesocosm studies, which are inherently limited in environmental realism. Moreover, the mechanism linking flow hydrodynamics to microbial biodiversity and ecosystem functioning has not yet been studied. My thesis aims to link streambed heterogeneity and the associated development of the flow field to biofilm attributes and nitrogen uptake based on a multidisciplinary field approach. It integrates several spatial and temporal scales ranging from millimeter-sized spots to stream reaches and from milliseconds to minutes (i.e., the hydraulic scale of velocity fluctuations), up to days, months and years (i.e., the hydrological scale of flow fluctuations). I demonstrate that the spatial niche variability of flow hydrodynamics was an essential driver of biofilm community composition, diversity and morphology, in line with the habitat heterogeneity hypothesis initially formulated for terrestrial ecosystems. Furthermore, hydraulic mass transfer associated with flow diversity and biofilm biomass determined biofilm areal nitrogen uptake at scales ranging from spots to the stream reach. At the whole-ecosystem level, flow diversity determined the quantitative role of biofilms compared to other nitrogen uptake compartments by sorting them according to prevailing flow conditions.
The magnitude of effects depended on ambient nutrient background and season, suggesting a hierarchy of the environmental controls on biofilms. In summary, my interdisciplinary research provided a mechanistic understanding of how hydromorphological diversity determines the diversity, morphology, and the functional role of biofilms in streams. By improving the understanding of these relationships, my research improves our ability to predict and scale measurements of important stream biogeochemical functions. Moreover, it helps to face the challenges imposed by environmental changes and biodiversity loss.
The wireless spectrum is already a scarce good, shared by multiple competing technologies such as Bluetooth, ZigBee and Wi-Fi, and the hunger for traffic is only increasing. Due to the heterogeneity of the existing wireless technologies and the real threat that interference poses to network performance, sophisticated techniques must be developed to ensure acceptable levels of quality of service.
In this thesis, we present a passive channel sensing scheme based on both energy and signal detection, which primarily considers the spectrum occupation of foreign traffic while allowing for additional complementary information such as the signal-to-noise ratio. The resulting channel quality metric is first corrected for the spectrum occupation of internal transmissions and then aggregated with the help of a moving average followed by an exponentially weighted moving average. This aggregation keeps the metric both sufficiently stable and adaptive to significant changes in channel usage. Moreover, the channel quality metric is made volatility-aware by penalizing qualities proportionally to their downward volatility. This yields a conservative metric and allows channels with similar aggregated qualities but different volatility behavior to be differentiated.
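A minimal sketch of such an aggregation pipeline (the window size, smoothing factor and penalty weight below are illustrative assumptions, not the thesis's tuned values):

```python
# Moving average -> EWMA aggregation of channel quality samples, with a
# penalty proportional to downward volatility so that drops in quality
# are reflected conservatively. All parameters are illustrative.

from collections import deque

class ChannelQuality:
    def __init__(self, window=4, alpha=0.3, penalty=0.5):
        self.samples = deque(maxlen=window)
        self.alpha = alpha       # EWMA smoothing factor
        self.penalty = penalty   # weight of the downward-volatility penalty
        self.ewma = None

    def update(self, sample):
        self.samples.append(sample)
        ma = sum(self.samples) / len(self.samples)  # short-term moving average
        if self.ewma is None:
            self.ewma = ma
        else:
            downward = max(0.0, self.ewma - ma)     # only drops are penalized
            self.ewma = (1 - self.alpha) * self.ewma + self.alpha * ma
            self.ewma -= self.penalty * downward
        return self.ewma

q = ChannelQuality()
for s in (0.9, 0.9, 0.3, 0.9):
    metric = q.update(s)
# the dip to 0.3 pulls the metric below the recent average of the samples
```

The asymmetry is the key design point: two channels with the same mean quality but different downward volatility end up with different metrics, so the less reliable channel ranks lower.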
Our second main contribution is a schedule-based channel sensing protocol in which nodes possess two network interfaces, one for communication and one for channel sensing. Channel sensing schedules are derived from communication schedules, i.e. the channel hopping sequences used for communication, with the help of a stochastic local-search heuristic that attempts to minimize channel-sensing bias and the channel overlap between the two schedules, and to maximize overlap fairness. This minimizes the effect of internal transmissions on the resulting channel quality metric, allowing nodes to derive channel quality primarily from foreign traffic in an unbiased manner.
Finally, we propose and implement a stabilization protocol for keeping nodes in an ad-hoc network tick-synchronized and schedule-consistent w.r.t. a communication schedule. This stabilization protocol makes use of special messages, namely tick frames for synchronization, channel quality reports for sharing local views of channel conditions and schedule reports for disseminating the global communication hopping sequence. The communication schedules are computed by a master node based on an aggregation of local channel quality views and the re-computation of these schedules is triggered by significant changes in channel conditions. The resulting protocol is robust against changes in topology and channel conditions.
The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and to avoid metabolic interference. This process, called metabolic channeling, describes the (in-)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is the affinity of both enzymes for their substrate/product DHBV. The already high affinity of the two enzymes for each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be channeled rapidly in order to be correctly reduced further to PEB. Fluorescence titration experiments and transfer assays confirmed that DHBV enhances its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with those of standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to the activity of PebS, which as a single enzyme also converts BV IXα via DHBV to PEB. The product and intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work reveal that PebA and PebB interact via a proximity channeling process in which the intermediate DHBV plays an important role. They also highlight the importance of substrate channeling in the synthesis of PEB for optimizing the flux of intermediates through this metabolic pathway.
Mixed Isogeometric Methods for Hodge–Laplace Problems induced by Second-Order Hilbert Complexes
(2024)
Partial differential equations (PDEs) play a crucial role in mathematics and physics to describe numerous physical processes. In numerical computations within the scope of PDE problems, the transition from classical to weak solutions is often meaningful. The latter may not precisely satisfy the original PDE, but they fulfill a weak variational formulation, which, in turn, is suitable for the discretization concept of Finite Elements (FE). A central concept in this context is the
well-posed problem. A class of PDE problems for which not only well-posedness statements but also suitable weak formulations are known are the so-called abstract Hodge–Laplace problems. These can be derived from Hilbert complexes and constitute a central aspect of the Finite Element Exterior Calculus (FEEC).
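For orientation, the standard mixed weak formulation of the abstract Hodge–Laplace problem from FEEC can be stated in de Rham notation (ignoring harmonic forms; the thesis treats more general second-order complexes): find $(\sigma, u)$ such that

```latex
\begin{aligned}
\langle \sigma, \tau \rangle - \langle u, \mathrm{d}\tau \rangle &= 0
&&\text{for all } \tau \in H\Lambda^{k-1},\\
\langle \mathrm{d}\sigma, v \rangle + \langle \mathrm{d}u, \mathrm{d}v \rangle &= \langle f, v \rangle
&&\text{for all } v \in H\Lambda^{k}.
\end{aligned}
```

This saddle-point structure is the prototype of the mixed formulations whose isogeometric discretization is studied below.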
This thesis addresses the discretization of mixed formulations of Hodge–Laplace problems, focusing on two key aspects. Firstly, we utilize Isogeometric Analysis (IGA) as the discretization paradigm, which combines geometry representations based on Non-Uniform Rational B-Splines (NURBS) with Finite Element discretizations.
Secondly, we primarily concentrate on mixed formulations that exhibit a saddle-point structure and are generated from Hilbert complexes with second-order derivative operators. We go beyond the well-known case of the classical de Rham complex, considering complexes such as the Hessian or elasticity complex. The BGG (Bernstein–Gelfand–Gelfand) method is employed to define and examine these second-order complexes. The main results include proofs of discrete well-posedness and a priori error estimates for two different discretization approaches. One approach demonstrates, through the introduction of a Lagrange multiplier, how the so-called isogeometric discrete differential forms can be reused.
A second method addresses how standard NURBS basis functions, through a modification of the mixed formulation, can also lead to convergent procedures. Numerical tests and examples, conducted using MATLAB and the open-source software GeoPDEs, illustrate the theoretical findings. Our primary application is linear elasticity theory, for which we extensively discuss mixed methods with and without strong symmetry of the stress tensor.
The work demonstrates the potential of IGA in numerical computations, particularly in the challenging scenario of second-order Hilbert complexes. It also provides insights into how IGA and FEEC can be meaningfully combined, even for non-de Rham complexes.
In this thesis, we present the basic concepts of isogeometric analysis (IGA) and consider Poisson's equation as a model problem. Since in IGA the physical domain is parametrized via a geometry function that maps a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, which we call scaled boundary isogeometric analysis (SB-IGA). SB-IGA derives the weak form of partial differential equations in a different way from standard IGA. For the discretization, i.e. the projection onto a finite-dimensional space, we choose Galerkin's method in both cases. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between standard IGA, when it makes use of an SB-parametrization,
and the SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
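As a minimal illustration of the Galerkin projection step shared by standard IGA and SB-IGA, the following sketch solves Poisson's equation in 1D with piecewise-linear hat functions (an assumption made purely to keep the example short; the thesis itself works with NURBS bases in higher dimensions):

```python
def solve_poisson_1d(n):
    """Galerkin projection onto piecewise-linear hat functions for
    -u'' = 1 on (0,1) with u(0) = u(1) = 0 and n interior nodes.
    The stiffness matrix is tridiagonal; solved with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    sub = [-1.0 / h] * n            # sub-diagonal of the stiffness matrix
    diag = [2.0 / h] * n            # main diagonal
    sup = [-1.0 / h] * n            # super-diagonal
    rhs = [h] * n                   # load vector: integral of f * hat_i with f = 1
    for k in range(1, n):           # forward elimination
        m = sub[k] / diag[k - 1]
        diag[k] -= m * sup[k - 1]
        rhs[k] -= m * rhs[k - 1]
    u = [0.0] * n                   # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for k in range(n - 2, -1, -1):
        u[k] = (rhs[k] - sup[k] * u[k + 1]) / diag[k]
    return u
```

For this 1D problem the Galerkin solution coincides with the exact solution u(x) = x(1-x)/2 at the nodes, which makes the sketch easy to check.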
Since their invention in the 1980s, behaviour-based systems have become very popular among roboticists. Their component-based nature facilitates the distributed implementation of systems, fosters reuse, and allows for early testing and integration. However, the distributed approach necessitates the interconnection of many components into a network in order to realise complex functionalities. This network is crucial to the correct operation of the robotic system. There are few sound design techniques for behaviour networks, especially if the systems shall realise task sequences. Therefore, the quality of the resulting behaviour-based systems often depends highly on the experience of their developers.
This dissertation presents a novel integrated concept for the design and verification of behaviour-based systems that realise task sequences. Part of this concept is a technique for encoding task sequences in behaviour networks. Furthermore, the concept provides guidance to developers of such networks. Based on a thorough analysis of methods for defining sequences, Moore machines have been selected for representing complex tasks. With the help of the structured workflow proposed in this work and the developed accompanying tool support, Moore machines defining task sequences can be transferred automatically into corresponding behaviour networks, resulting in less work for the developer and a lower risk of failure.
Due to the common integration of automatically and manually created behaviour-based components, a formal analysis of the final behaviour network is reasonable. For this purpose, the dissertation at hand presents two verification techniques and justifies the selection of model checking. A novel concept for applying model checking to behaviour-based systems is proposed according to which behaviour networks are modelled as synchronised automata. Based on such automata, properties of behaviour networks that realise task sequences can be verified or falsified. Extensive graphical tool support has been developed in order to assist the developer during the verification process.
Several examples are provided in order to illustrate the soundness of the presented design and verification techniques. The applicability of the integrated overall concept to real-world tasks is demonstrated using the control system of an autonomous bucket excavator. It is shown that the proposed design concept is suitable for developing complex, sophisticated behaviour networks and that the presented verification technique allows for verifying real-world behaviour-based systems.
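The verification idea, modelling behaviour networks as synchronised automata and checking properties on their product, can be sketched as a minimal explicit-state reachability check; the automata and the property below are invented toy examples, not the dissertation's actual models:

```python
from collections import deque

def safe(automata, init, bad):
    """Explicit-state reachability check on the synchronous product of automata.
    Each automaton is a dict state -> {action: successor}; all automata
    synchronise on common action names. Returns True iff no joint state in
    `bad` is reachable from the joint initial state `init`."""
    seen, queue = {init}, deque([init])
    while queue:
        joint = queue.popleft()
        if joint in bad:
            return False
        # actions enabled in every automaton at its current local state
        enabled = set(automata[0][joint[0]])
        for auto, state in zip(automata[1:], joint[1:]):
            enabled &= set(auto[state])
        for act in enabled:
            succ = tuple(auto[state][act] for auto, state in zip(automata, joint))
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True

# toy task sequence "approach, then grasp", plus a monitoring automaton
task = {"s0": {"approach": "s1"}, "s1": {"grasp": "s2"}, "s2": {}}
monitor = {"m0": {"approach": "m1"}, "m1": {"grasp": "m0"}}
```

A model checker for behaviour networks additionally handles temporal-logic properties; this sketch only shows the underlying product construction and state-space search.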
The objective of this work was to develop efficient and sustainable methods for the addition of N-H nucleophiles to terminal alkynes and for the insertion of CO2 into the C-H bond of terminal alkynes.
In the first part of this dissertation, the mechanism of the ruthenium-catalysed addition of amides to terminal alkynes was investigated in depth through a combination of control experiments, kinetic studies, spectroscopic investigations and theoretical calculations. First, four catalytic cycles known from the literature were identified that represent plausible mechanisms for the hydroamidation of terminal alkynes. Building on proven elementary steps of chemically related reactions, an additional mechanism for the hydroamidation was derived. Subsequently, a series of control experiments was carried out with whose help individual elementary steps of the catalytic cycles could be falsified and mechanisms thereby excluded. To determine whether the hydroamidation is accurately described by the single remaining mechanism, spectroscopic studies were performed before, during and after hydroamidation test reactions; in this way, numerous postulated intermediates were detected and the remaining catalytic cycle was corroborated. The insights gained in these mechanistic studies were used to develop a new generation of catalysts with exceptionally high selectivity for the formation of valuable Z-enamides and Z-enimides. The synthetic potential was further demonstrated by the preparation of the biologically active natural products lansiumamide A and B, lansamide I, and botryllamide C and E.
In the second part of this work, highly efficient silver(I)/DMSO-catalysed methods for the carboxylation of terminal alkynes with CO2 at atmospheric pressure were developed.
When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low-cost and performance.
If one of these design goals is pursued further, it usually leads to a setback in one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing number of ambient, aware technology installations in today's environments, no matter whether in private homes, offices or factories.
Part of this technology are sensors that are suitable to assess the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the above-mentioned goal of establishing safe, performant and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.
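The probabilistic estimation of the environment's state from unreliable aware-technology sensors can be illustrated by a minimal recursive Bayes filter for a single room's occupancy; all probabilities here are assumed illustrative values, not parameters from the presented system:

```python
def occupancy_filter(observations, p_stay=0.9, p_hit=0.7, p_false=0.1, prior=0.5):
    """Recursive Bayes filter estimating P(room occupied) from an unreliable
    binary motion detector. p_stay: probability the occupancy state persists
    per step; p_hit / p_false: detection rates given occupied / empty."""
    belief = prior
    history = []
    for z in observations:
        # predict step: occupancy persists with p_stay, flips otherwise
        pred = p_stay * belief + (1 - p_stay) * (1 - belief)
        # update step: weigh the prediction by the sensor likelihoods
        like_occ = p_hit if z else 1 - p_hit
        like_emp = p_false if z else 1 - p_false
        belief = like_occ * pred / (like_occ * pred + like_emp * (1 - pred))
        history.append(belief)
    return history
```

Repeated detections push the belief towards "occupied" even though a single reading is unreliable, which is the graceful-degradation property exploited above: with no sensor data the belief simply relaxes towards its stationary prior.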
The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. Potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is defined as the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy relative to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and parametrised for the application. By combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
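The principle behind the Radon-transform-based evaluation, projecting the topography over a family of angles and selecting the direction with the most structured projection profile, can be sketched on a synthetic striped surface. The variance criterion and the synthetic image are simplifications, not the dissertation's parametrised scheme:

```python
import math

def stripe_image(n=48, angle_deg=30.0, period=8.0):
    """Synthetic topography with a grinding-like stripe texture: the height
    varies along the direction angle_deg (i.e. normal to the stripes)."""
    a = math.radians(angle_deg)
    return [[1.0 if (x * math.cos(a) + y * math.sin(a)) % period < period / 2 else 0.0
             for x in range(n)] for y in range(n)]

def texture_direction(img, n_angles=180):
    """Radon-style estimate: project pixel heights onto an axis at each angle
    and pick the angle whose projection profile has maximal variance (the
    projection axis then points along the height-variation direction)."""
    n = len(img)
    best_angle, best_var = 0.0, -1.0
    for k in range(n_angles):
        theta = math.pi * k / n_angles
        c, s = math.cos(theta), math.sin(theta)
        sums, counts = {}, {}
        for y in range(n):
            for x in range(n):
                b = round(x * c + y * s)
                sums[b] = sums.get(b, 0.0) + img[y][x]
                counts[b] = counts.get(b, 0) + 1
        # mean height per projection bin, ignoring sparsely covered edge bins
        means = [sums[b] / counts[b] for b in sums if counts[b] >= n // 2]
        mu = sum(means) / len(means)
        var = sum((m - mu) ** 2 for m in means) / len(means)
        if var > best_var:
            best_var, best_angle = var, 180.0 * k / n_angles
    return best_angle
```

On the synthetic stripes the variance peaks when the projection axis is aligned with the texture's height-variation direction; the real evaluation scheme refines this idea to reach arc-minute resolution.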
Influence of Different Gating Scenarios on the Resin Injection Process and Its Simulation
(2014)
In the automotive industry, high-performance polymer composites for structural components are manufactured by resin transfer moulding (RTM), but the costs for these components are very high. These costs must be reduced significantly through process optimisation in order to enable the widespread use of fibre-reinforced polymer composites. Process simulations play a decisive role here, as they can replace time-consuming and expensive practical trials.
For this reason, this work investigated the potential of simulating the RTM process. The simulations were based on an extensive material parameter study in which the permeability of textile preforms relevant to the automotive industry was examined in the unsheared and sheared states. This made it possible to evaluate the influence of draping on the flow simulation. In addition, a new method for determining the time-, cure- and temperature-dependent viscosity profiles of highly reactive resin systems was developed and applied. The flow simulation method was first validated successfully on a flat plate mould in order to show that the determined material parameters were correct.
To validate the simulation further, a complex technology demonstrator mould (TTW) was developed. The design of its temperature control was supported by thermal simulations. Investigations of pronounced edge regions, as frequently found in automotive components, showed that for edge radii < 5 mm the resin system runs ahead along the edge, which must be taken into account. In addition, the influence of different gating scenarios could be investigated by means of various gate runners.
Sensors in the TTW recorded the process data, which were then compared with the simulations. The results show that simulating the filling process of a complex RTM mould is possible despite the large number of process influences. The deviations between simulation and experiment were in some cases below 15 %, which once again confirmed the reliability of the determined permeability and viscosity values. It also became apparent that the length of the gate runners has a significant influence on the process time, whereas their cross-section plays a subordinate role.
The supply tasks of low-voltage grids will change considerably over the coming decades compared with those of 2018 as photovoltaic systems, heat-pump heating and electric vehicles become more widespread. The planning principles commonly applied in practice to the construction of new low-voltage grids are outdated: in essence, many of them date from times when these new loads and in-feeds were not expected and therefore not taken into account. The need for new planning principles coincides with the availability of voltage-regulated distribution transformers (rONT), which can be used to improve the voltage conditions in the grid. The new planning principles developed here require the use of rONTs for rural and suburban supply tasks (but not for urban supply tasks) in order to handle the high loads expected for 2040 at low cost. A suitable rONT standard control characteristic is specified. In all cases, cables with a cross-section of 240 mm², laid in parallel in sections, are recommended.
Modern microtechnology has the central task of ensuring technological progress through the miniaturization and reduction of component dimensions. Micro grinding with micro pencil grinding tools (MPGTs) has established itself as a manufacturing process in microtechnology, especially for the machining of hard and brittle materials. The process has been investigated by numerous researchers. Yet tools with diameters below 100 μm could not satisfy the needs of the industry: the tool life of MPGTs was insufficient, their feed rates were too slow for a meaningful application, and neither the MPGTs nor their microstructures were reproducible. This dissertation is therefore dedicated to investigating and revising the complete manufacturing process and application methodology of these tools. New substrate geometries and materials are investigated. Surface treatment methods are examined to increase the adhesion between the abrasive layer and the substrate. In addition, conventional coating processes like electroplating are replaced by an autocatalytic electroless plating process that offers a much higher reproducibility rate for MPGTs with diameters of about 50 μm and less. The micro grinding methodology is optimized through parameter studies and new coolant supply methods with new metalworking fluids, introduced to achieve the best possible result when machining 16MnCr5.
The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this geological formation plays the role of a geometric object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimating the topological invariants of the data space is a good basis for asserting global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data. Statistics of Betti numbers can provide further connectivity information. In this work, a method for topological feature analysis of exploration data based on so-called persistent homology is presented. Loosely speaking, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distill it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information that can point to areas of interest. As part of this work, the algorithm for computing persistent Betti numbers via barcodes was implemented in the computer algebra system "Singular".
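The zero-dimensional part of this pipeline, persistent connected components whose count at a given scale is exactly the clustering structure mentioned above, can be sketched without "Singular" using a union-find over a growing distance graph; the point cloud is an invented toy example:

```python
from itertools import combinations

def h0_barcode(points):
    """Zero-dimensional persistent homology (connected components) of a point
    cloud under a growing distance threshold, via Kruskal-style union-find.
    Returns intervals (birth, death); one component lives forever."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # two components merge: one bar dies at d
            parent[ri] = rj
            deaths.append(d)
    return [(0.0, d) for d in deaths] + [(0.0, float("inf"))]

def betti0(points, r):
    """Number of connected components at scale r (the clustering structure)."""
    return sum(1 for _, death in h0_barcode(points) if death > r)
```

Long bars in the barcode correspond to persistent clusters; the same filtration idea, applied to higher-dimensional simplices, yields the higher Betti numbers used in the thesis.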
Within the last decades, a remarkable development in materials science has taken place: nowadays, materials are not only constructed as inert structures but rather designed for certain predefined functions. This innovation was accompanied by the appearance of smart materials with reliable recognition, discrimination and the capability of action as well as reaction. Even though ferroelectric materials serve smartly in real applications, they also possess several restrictions at high-performance usage. The behavior of these materials is almost linear under the action of low electric fields or low mechanical stresses, but exhibits a strongly non-linear response under high electric fields or mechanical stresses. High electromechanical loading conditions result in a change of the spontaneous polarization direction within individual domains, which is commonly referred to as domain switching. The aim of the present work is to develop a three-dimensional coupled finite element model to study the rate-independent and rate-dependent behavior of piezoelectric materials, including domain switching, based on a micromechanical approach. The proposed model is first elaborated within a two-dimensional finite element setting for piezoelectric materials. Subsequently, the developed two-dimensional model is extended to the three-dimensional case. This work starts with developing a micromechanical model for ferroelectric materials. Ferroelectric materials exhibit ferroelectric domain switching, which refers to the reorientation of domains and occurs under purely electrical loading. For the simulation, a bulk piezoceramic material is considered and each grain is represented by one finite element. In reality, the grains in the bulk ceramic material are randomly oriented. This property is taken into account by applying random orientations with a uniform distribution to the individual elements.
Polycrystalline ferroelectric materials in the un-poled virgin state can consequently be characterized by randomly oriented polarization vectors. The energy reduction of individual domains is adopted as a criterion for the initiation of domain switching processes. The macroscopic response of the bulk material is predicted by classical volume-averaging techniques. In general, domain switching depends not only on external loads but also on neighboring grains, which is commonly denoted as the grain boundary effect. These effects are incorporated into the developed framework via a phenomenologically motivated probabilistic approach by relating the actual energy level to a critical energy level. Subsequently, the order of the chosen polynomial function is optimized so that simulations closely match measured data. A rate-dependent polarization framework is proposed and applied to cyclic electrical loading at various frequencies. The reduction in free energy of a grain is used as a criterion for the onset of domain switching processes. Nucleation in new grains and the propagation of domain walls during domain switching are modeled by a linear kinetics theory. The simulated results show that for increasing loading frequency the macroscopic coercive field also increases, and the remanent polarization increases at lower loading amplitudes. The second part of this work focuses on ferroelastic domain switching, which refers to the reorientation of domains under purely mechanical loading. Under sufficiently high mechanical loading, the strain directions within single domains reorient with respect to the applied loading direction. The reduction in free energy of a grain is used as a criterion for the domain switching process. The macroscopic response of the bulk material is computed for the hysteresis curve (stress vs. strain), whereby uni-axial and quasi-static loading conditions are applied to the bulk material specimen.
Grain boundary effects are addressed by incorporating the developed probabilistic approach into this framework, and the order of the polynomial function is optimized so that simulations match measured data. Rate-dependent domain switching effects are captured for various frequencies and mechanical loading amplitudes by means of the developed volume fraction concept, which relates the particular time interval to the switching portion. The final part of this work deals with combined ferroelectric and ferroelastic domain switching, i.e. the reorientation of domains under coupled electromechanical loading. If the free energy for combined electromechanical loading exceeds the critical energy barrier, elements are allowed to switch. Firstly, hysteresis and butterfly curves under purely electrical loading are discussed. Secondly, additional mechanical loads in axial and lateral directions are applied to the specimen. The simulated results show that increasing compressive stress results in enlarged domain switching ranges and that the hysteresis and butterfly curves flatten at higher mechanical loading levels.
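The switching logic described above, grains reorienting once a critical energy barrier is exceeded and the bulk response obtained by volume averaging, can be caricatured in a few lines. The barrier distribution, the field cycle and the restriction to 180° switching are simplifying assumptions, not the dissertation's finite element model:

```python
import random

def hysteresis_loop(n_grains=400, e_max=2.0, steps=100, seed=0):
    """Caricature of ferroelectric switching: each grain carries a polarization
    of +/-1 along the field axis and a random critical field (energy barrier).
    A grain switches by 180 degrees once the applied field opposes its
    polarization and exceeds its barrier; the bulk response is the average."""
    rng = random.Random(seed)
    e_crit = [rng.uniform(0.5, 1.5) for _ in range(n_grains)]
    pol = [rng.choice((-1, 1)) for _ in range(n_grains)]   # un-poled virgin state
    # triangular field cycle: 0 -> +e_max -> -e_max -> +e_max
    up = [e_max * (2 * k / steps - 1) for k in range(steps + 1)]
    fields = [e_max * k / steps for k in range(steps + 1)] + up[::-1][1:] + up[1:]
    loop = []
    for e in fields:
        for g in range(n_grains):
            if e * pol[g] < 0 and abs(e) > e_crit[g]:      # toy energy criterion
                pol[g] = -pol[g]
        loop.append((e, sum(pol) / n_grains))
    return loop
```

Even this crude average over grains reproduces the qualitative hysteresis features discussed above: saturation at high fields and a remanent polarization at zero field.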
An Efficient Automated Machine Learning Framework for Genomics and Proteomics Sequence Analysis
(2023)
Genomics and Proteomics sequence analyses are the scientific studies of understanding the language of Deoxyribonucleic Acid (DNA), Ribonucleic Acid (RNA) and protein biomolecules, with the objective of controlling the production of proteins and understanding their core functionalities. They help to detect chronic diseases in early stages, root causes of clinical changes, and key genetic targets for pharmaceutical development, and to optimize therapeutics for various age groups. Most Genomics and Proteomics sequence analysis work is performed using typical wet-lab experimental approaches that make use of different genetic diagnostic technologies. However, these approaches are costly, time-consuming, and skill- and labor-intensive. Hence, they slow down the development of the efficient and economical sequence analysis landscape essential to demystify a variety of cellular processes and the functioning of biomolecules in living organisms. To empower manual wet-lab experiment driven research, many machine learning based approaches have been developed in recent years. However, these approaches cannot be used in practical environments due to their limited performance. Considering the sensitive and inherently demanding nature of Genomics and Proteomics sequence analysis, which can have far-reaching and serious repercussions in the case of misdiagnosis, the main objective of this research is to develop an efficient automated computational framework for Genomics and Proteomics sequence analysis using the predictive and prescriptive analytical powers of Artificial Intelligence (AI) to significantly improve healthcare operations.
The proposed framework comprises three main components, namely sequence encoding, feature engineering, and discrete or continuous value prediction. The sequence encoding module is equipped with a variety of existing and newly developed sequence encoding algorithms that are capable of generating a rich statistical representation of DNA, RNA and protein raw sequences. The feature engineering module offers diverse types of feature selection and dimensionality reduction approaches which can be used to generate the most effective feature space. Furthermore, the discrete and/or continuous value predictor module of the proposed framework contains a wide range of existing machine learning and newly developed deep learning regressors and classifiers. To evaluate the integrity and generalizability of the proposed framework, we have performed large-scale experimentation over diverse types of Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
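To make the three components concrete, here is a deliberately small stand-in for such a pipeline: a k-mer frequency encoder feeding a nearest-centroid classifier. The framework itself uses far richer encoders and deep models; the sequences and labels below are invented:

```python
from collections import Counter

def kmer_encode(seq, k=3):
    """Encode a raw DNA/RNA/protein sequence as normalised k-mer frequencies."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def train_centroids(labelled, k=3):
    """Average the k-mer vectors of each class (a toy 'predictor' module)."""
    sums, n = {}, Counter()
    for seq, label in labelled:
        n[label] += 1
        acc = sums.setdefault(label, {})
        for kmer, f in kmer_encode(seq, k).items():
            acc[kmer] = acc.get(kmer, 0.0) + f
    return {lab: {km: v / n[lab] for km, v in acc.items()} for lab, acc in sums.items()}

def classify(seq, centroids, k=3):
    """Assign the class whose centroid is closest in squared distance."""
    vec = kmer_encode(seq, k)
    def dist(c):
        keys = set(vec) | set(c)
        return sum((vec.get(km, 0.0) - c.get(km, 0.0)) ** 2 for km in keys)
    return min(centroids, key=lambda lab: dist(centroids[lab]))
```

The same interface (encode, engineer features, predict) scales up to the deep models of the framework; only the modules behind it change.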
In Genomics analysis, epigenetic modification detection is one of the key components. It helps clinical researchers and practitioners to distinguish normal cellular activities from malfunctioning ones, which can lead to diverse genetic disorders such as metabolic disorders, cancers, etc. To support this analysis, the proposed framework is used to solve the problem of DNA and histone modification prediction, where it has achieved state-of-the-art performance on 27 publicly available benchmark datasets of 17 different species with a best accuracy of 97%. RNA sequence analysis is another vital component of Genomics sequence analysis, where the identification of different coding and non-coding RNAs as well as their subcellular localization patterns helps to demystify the functions of diverse RNAs, find root causes of clinical changes, develop precision medicine and optimize therapeutics. To support this analysis, the proposed framework is utilized for non-coding RNA classification and multi-compartment RNA subcellular localization prediction, where it achieved state-of-the-art performance on 10 publicly available benchmark datasets of Homo sapiens and Mus musculus species with a best accuracy of 98%.
Proteomics sequence analysis is essential to demystify virus pathogenesis, host immunity responses, the ways proteins affect or are affected by cell processes, their structure and their core functionalities. To support this analysis, the proposed framework is used for host protein-protein and virus-host protein-protein interaction prediction. It has achieved state-of-the-art performance on 2 publicly available protein-protein interaction datasets of Homo sapiens and Mus musculus species with a best accuracy of 96%, and on 7 virus-host protein-protein interaction datasets of multiple hosts and viruses with a best accuracy of 94%. Considering the performance and practical significance of the proposed framework, we believe it will help researchers in developing cutting-edge practical applications for diverse Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
Wetting of a solid surface with liquids is an important parameter in chemical engineering processes such as distillation, absorption and desorption. The degree of wetting in packed columns contributes mainly to generating the effective interfacial area and thereby enhancing the heat and mass transfer process. In this work, the wetting of solid surfaces was studied both in real experiments and virtually through three-dimensional CFD simulations using the multiphase flow VOF model implemented in the commercial software FLUENT, which can be used to simulate stratified flows [1]. Rivulet flow, a special case of film flow that is mostly found in packed columns, is discussed. Wetting of flat and wavy solid metal plates with rivulet liquid flow was simulated and experimentally validated. The local rivulet thickness was measured using an optically assisted mechanical sensor: a needle is moved perpendicular to the plate surface with a step motor and in the other two directions using two micrometers. The measured and simulated rivulet profiles were compared to selected theoretical models found in the literature, such as Duffy & Moffatt [2], Towell & Rothfeld [3] and Al-Khalil et al. [4]. The velocity field in a cross-section of a rivulet flow and the non-dimensional maximum and mean velocity values for the vertical flat plate were also compared with models from Al-Khalil et al. [4] and Allen & Biggin [5]. A few CFD simulations for the wavy plate case were compared to the experimental findings and to the Towell model for a flat plate [3]. In the second stage of this work, 3-D CFD simulations and an experimental study were performed for the wetting of a structured packing element and a packing sheet consisting of three elements of the type Rombopak 4M, a product of the company Kuhni, Switzerland. The hydrodynamic parameters of a packed column, i.e. the degree of wetting, the interfacial area and the liquid hold-up, were obtained from the CFD simulations for different liquid systems and liquid loads. Flow patterns and the degree of wetting were compared to the experiments, where the experimental values for the degree of wetting were estimated from snapshots of the flow on the packing sheet in a test rig. A new model describing the hydrodynamics of packed columns equipped with Rombopak 4M was derived with the help of the CFD simulation results. The model predicts the degree of wetting, the specific or interfacial area and the liquid hold-up at different flow conditions; it was compared to Billet & Schultes [6], the SRP model of Rocha et al. [7-9], to Shi & Mersmann [10] and others. Since the pressure drop is one of the most important parameters in packed columns, especially for vacuum-operated columns, a few CFD simulations were performed to estimate the dry pressure drop in a structured and a flat packing element and were compared to the experimental results. Good agreement was found between the experimental and the CFD simulation results on the one hand, and between the simulations and theoretical models for rivulet flow on an inclined plate on the other. The flow patterns and liquid spreading behaviour on the packing element agree well with the experimental results. The VOF (Volume of Fluid) model was found to be very sensitive to different liquid properties and can be used to optimize packing geometries and reveal critical details of wetting and film flow. As a further perspective, an extension of this work to CFD simulations of the flow inside a block of packing, to obtain a detailed picture of the interaction between the liquid and the packing surfaces, is recommended.
The polydispersive nature of the turbulent droplet swarm in agitated liquid-liquid contacting equipment makes its mathematical modelling and the solution methodologies a rather sophisticated process. This polydispersion can be modelled as a population of droplets randomly distributed with respect to some internal properties at a specific location in space, using the population balance equation as a mathematical tool. However, the analytical solution of such a mathematical model is hardly obtainable except for particular idealized cases, and hence numerical solutions are generally resorted to. This is due to the inherent nonlinearities in the convective and diffusive terms as well as the appearance of many integrals in the source term. In this work, two conservative discretization methodologies for both internal (droplet state) and external (spatial) coordinates are extended and efficiently implemented to solve the population balance equation (PBE) describing the hydrodynamics of liquid-liquid contacting equipment. The internal coordinate conservative discretization techniques of Kumar and Ramkrishna (1996a, b), originally developed for the solution of the PBE in simple batch systems, are extended to continuous flow systems and validated against analytical solutions as well as published experimental droplet interaction functions and hydrodynamic data. In addition to these methodologies, a conservative discretization approach for droplet breakage in batch and continuous flow systems is presented, which is found to have identical convergence characteristics when compared to the method of Kumar and Ramkrishna (1996a). Apart from the specific discretization schemes, the numerical solution of droplet population balance equations by discretization is known to suffer from inherent finite domain errors (FDE).
Two approaches that minimize the total FDE during the solution of the discrete PBEs, using approximately optimal moving (for batch) and fixed (for continuous systems) grids, are introduced (Attarakih, Bart & Faqir, 2003a). As a result, significant improvements are achieved in predicting the number densities and the zeroth and first moments of the population. For spatially distributed populations (such as extraction columns), the resulting system of partial differential equations is spatially discretized in conservative form using a simplified first-order upwind scheme as well as first- and second-order non-oscillatory central differencing schemes (Kurganov & Tadmor, 2000). This spatial discretization avoids the characteristic decomposition of the convective flux based on approximate Riemann solvers and the operator splitting technique required by classical upwind schemes (Karlsen et al., 2001). The time variable is discretized using an implicit, strongly stable approach formulated by careful lagging of the nonlinear parts of the convective and source terms. The present algorithms are tested against analytical solutions of the simplified PBE in many case studies. In all of these case studies the discrete models converge successfully to the available analytical solutions, or to solutions on relatively fine grids when an analytical solution is not available. This is accomplished by deriving five analytical solutions of the PBE in a continuous stirred tank and a liquid-liquid extraction column for special cases of the breakage and coalescence functions. As a particular application, these algorithms are implemented in a Windows computer code called LLECMOD (Liquid-Liquid Extraction Column Module) to simulate the hydrodynamics of general liquid-liquid extraction columns (LLEC).
The user input dialog makes LLECMOD a user-friendly program that lets the user select grids, column dimensions, flow rates, velocity models, simulation parameters, chemical components of the dispersed and continuous phases, and droplet phase space-time solvers. The graphical output within the Windows environment is a distinctive feature that makes it easy to examine and interpret the results quickly. Moreover, the dynamic model of the dispersed phase is carefully treated to correctly predict the oscillatory behavior of the LLEC hold-up. In this context, a continuous velocity model, corresponding to manipulating the inlet continuous flow rate through control of the dispersed phase level, is derived to eliminate this behavior.
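The fixed-pivot technique of Kumar and Ramkrishna conserves the zeroth moment (droplet number) and first moment (mass) by splitting each newly formed droplet between the two adjacent grid pivots. A minimal sketch of this redistribution step, assuming a volume-based grid (function name and values are our own illustration, not part of LLECMOD):

```python
def fixed_pivot_fractions(v, x_lo, x_hi):
    """Split a daughter droplet of volume v, with x_lo < v < x_hi,
    between the adjacent pivots so that both droplet number (zeroth
    moment) and mass (first moment) are conserved exactly."""
    a = (x_hi - v) / (x_hi - x_lo)  # fraction assigned to the lower pivot
    return a, 1.0 - a               # the fractions sum to one droplet

# A daughter of volume 1.5 between pivots 1.0 and 2.0 is split 50/50:
# number: 0.5 + 0.5 = 1 droplet; mass: 0.5*1.0 + 0.5*2.0 = 1.5.
a, b = fixed_pivot_fractions(1.5, 1.0, 2.0)
```

The same two-moment-preserving split underlies the breakage and coalescence source terms on arbitrary (also geometric) grids.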
Cultural policy is experiencing an upswing in Germany: rising budgets, an attributed role as a cure-all, growing relevance in academia, and a characterization as an economic and labor-market factor all express its increasing importance.
At the same time, cultural policy faces growing challenges: fully financing the cultural sector, coping with digital and demographic change, a persistent inequity of participation, and a legitimation crisis expressed in a consensus of justification.
This controversy favors the use of concept-based cultural policy as a quality feature of cultural policy, among whose most renowned components is the cultural development plan (Kulturentwicklungsplan, KEP).
The subject of this dissertation is the measurement and interpretation of municipal differences in the intensity of concept-based cultural policy, the empirical investigation of the causes responsible for these differences in intensity, and the examination of the effects of concept-based cultural policy.
We discuss some first steps towards experimental design for neural network regression which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on one hand, and the common misspecification of the models on the other hand.
Regarding the first problem, we restrict our consideration to neural networks with only one and two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs, and present a numerical study using the concept of maximin optimal designs.
Regarding the second problem, we examine the effects of misspecification on optimal experimental designs.
The primary aim of this dissertation was to gain deeper insight into the air permeability of ultra-high performance concrete (UHPC). Based on experimental investigations, production-related and storage-related parameters that can influence air permeability were examined. Of particular interest was the observation of the change in permeability over time for three UHPC mixtures of different composition at different concrete ages (28, 90, 180, and 365 days). In addition, potential correlations between permeability and other characteristic values of UHPC were investigated. The experimental work used a measuring method newly developed and validated at the Technische Universität Kaiserslautern for determining the permeability coefficient of ultra-high performance concretes.
Overall, the results showed that both heat treatment and water storage are efficient measures for reducing permeability. The investigations of long-term behavior (up to 365 days) indicated a substantial relationship between permeability and the curing applied at an early concrete age (28 days). Furthermore, permeability decreased under freeze-thaw exposure, which explains the high resistance of UHPC to such conditions.
The outstanding properties of UHPC open up a wide range of new fields of application. The very low air permeability of UHPC enables its use in vacuum insulation panels (VIP). This type of vacuum insulation exhibits roughly 1/5 to 1/10 of the thermal conductivity of conventional insulation while being very thin (2–3 cm). As a result of the vacuum generated in the panel, heat transport by radiation, convection, and conduction is substantially impeded. Based on the permeability values obtained from the experimental investigations, a critical assessment of the applicability of UHPC as a vacuum-insulated element was carried out.
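As general background (not a reproduction of the Kaiserslautern measuring method), apparent gas permeability coefficients of this kind are commonly evaluated from steady laminar flow of a compressible gas through a specimen, as in RILEM-Cembureau-type setups; a sketch, with all symbols and example values chosen for illustration:

```python
def gas_permeability(Q, p_out, mu, L, A, p_in):
    """Apparent gas permeability coefficient K (m^2) for steady laminar
    flow of a compressible gas through a specimen of length L (m) and
    cross-section A (m^2): Q (m^3/s) is the volume flow measured at the
    outlet pressure p_out (Pa), mu (Pa*s) the gas viscosity, and p_in (Pa)
    the absolute inlet pressure."""
    return (2.0 * Q * p_out * mu * L) / (A * (p_in**2 - p_out**2))

# Illustrative values: small nitrogen flow through a disc specimen.
K = gas_permeability(Q=1e-6, p_out=1.0e5, mu=1.8e-5, L=0.05,
                     A=7.85e-3, p_in=3.0e5)
```

The quadratic pressure terms account for gas compressibility along the flow path; for very dense concretes such as UHPC the measured K values become correspondingly small.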
This dissertation focuses on the evaluation of technical and environmental sustainability of water distribution systems based on scenario analysis. The decision support system is created to assist in the decision-making process and to visualize the results of the sustainability assessment for current and future populations and scenarios. First, a methodology is developed to assess the technical and environmental sustainability for the current and future water distribution system scenarios. Then, scenarios are produced to evaluate alternative solutions for the current water distribution system as well as future populations and water demand variations. Finally, a decision support system is proposed using a combination of several visualization approaches to increase the data readability and robustness for the sustainability evaluations of the water distribution system.
The technical sustainability of a water distribution system is measured using the sustainability index methodology, which is based on the reliability, resiliency and vulnerability performance criteria. Hydraulic efficiency and water quality requirements are represented using the nodal pressure and water age parameters, respectively. The U.S. Environmental Protection Agency EPANET software is used to perform hydraulic (i.e. nodal pressure) and water quality (i.e. water age) analyses in a case study. In addition, the environmental sustainability of a water network is evaluated using the "total fresh water use" and "total energy intensity" indicators. For each scenario, multi-criteria decision analysis is used to combine technical and environmental sustainability criteria for the study area.
The technical and environmental sustainability assessment methodology is first applied to the baseline scenario (i.e. the current water distribution system). Critical locations where hydraulic efficiency and water quality problems occur in the current system are identified. Two major scenario options are considered to increase the sustainability at these critical locations. These scenarios focus on creating alternative systems in order to test and verify the technical and environmental sustainability methodology rather than obtaining the best solution for the current and future water distribution systems. The first scenario is a traditional approach to increasing the hydraulic efficiency and water quality; it includes using additional network components such as booster pumps, valves, etc. The second scenario is based on using a reclaimed water supply to meet the non-potable water demand and fire flow. The fire flow simulation is specifically included in the sustainability assessment since regulations have a significant impact on urban water infrastructure design. Eliminating the fire flow need from potable water distribution systems would assist in saving fresh water resources as well as in reducing detention times.
The decision support system is created to visualize the results of each scenario and to effectively compare these results with each other. The EPANET software is a powerful tool for hydraulic and water quality analyses, but its visualization capabilities are limited for decision support purposes. Therefore, in this dissertation, the hydraulic and water quality simulations are completed using EPANET software, and the results for each scenario are visualized by combining several visualization techniques in order to provide better data readability. The first technique introduced here uses small multiple maps instead of the animation technique to visualize the nodal pressure and water age parameters; this eliminates change blindness and allows easy comparison of time steps. In addition, a procedure is proposed to aggregate the nodes along the edges in order to simplify the water network. A circle view technique is used to visualize two values of a single parameter (i.e. the nodal pressure or water age). The third approach is based on fitting the water network into a grid representation, which eliminates the irregular geographic distribution of the nodes and improves the visibility of each circle view. Finally, a prototype for an interactive decision support tool is proposed for the current population and water demand scenarios. Interactive tools enable analysis of the aggregated nodes and provide information about the results of each of the current water distribution scenarios.
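The sustainability index mentioned above combines reliability, resiliency and vulnerability; one common formulation (after Loucks) takes their product. A minimal sketch, where the satisfactory band, the recovery definition, and the normalization of the shortfall are our own illustrative choices (shortfalls above the upper bound are ignored here):

```python
def sustainability_index(values, lo, hi):
    """Loucks-style sustainability index for a time series of a
    performance variable (e.g. nodal pressure): the product of
    reliability, resiliency and (1 - dimensionless vulnerability).
    Assumes lo > 0 for the normalization of shortfalls."""
    ok = [lo <= v <= hi for v in values]
    reliability = sum(ok) / len(ok)          # fraction of satisfactory steps
    failures = [i for i, good in enumerate(ok) if not good]
    if not failures:
        return 1.0
    # resiliency: chance that a failing step is followed by a satisfactory one
    recoveries = sum(1 for i in failures if i + 1 < len(ok) and ok[i + 1])
    resiliency = recoveries / len(failures)
    # vulnerability: mean relative shortfall below the lower bound
    deficits = [max(lo - values[i], 0.0) / lo for i in failures]
    vulnerability = sum(deficits) / len(deficits)
    return reliability * resiliency * (1.0 - vulnerability)

# Pressure trace with two short dips below a 20 m threshold:
si = sustainability_index([30, 32, 10, 31, 33, 15, 31], lo=20, hi=50)
```

Computed per node and per scenario, such an index condenses the EPANET time series into a single comparable score.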
For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to be successful in improving the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing beam orientations, and then a single-objective inverse treatment planning algorithm is used for the optimization of beam intensity profiles. The overall time needed to solve the inverse planning for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multiple-objective inverse treatment planning. Such an approach results in a variety of beam intensity profiles for every selection of beam orientations, making the dependence between beam orientations and their intensity profiles less important. This thesis takes advantage of this property to accelerate the optimization process through an approximation of the intensity profiles that are used for multiple selections of beam orientations, saving a considerable amount of calculation time. A dynamic algorithm (DA) and an evolutionary algorithm (EA) for beam orientation in IMRT planning are presented. The DA automatically mimics the beam's-eye-view and observer's-view methods known from conventional conformal radiation therapy. The EA is based on a dose-volume histogram evaluation function introduced as an attempt to minimize the deviation between the mathematical and clinical optima. To illustrate their efficiency, the algorithms were applied to different clinical examples. In comparison to the standard equally spaced beam plans, improvements are reported for both algorithms in all the clinical examples, even when, in some cases, fewer beams are used. A smaller number of beams is always desirable without compromising the quality of the treatment plan.
It results in a shorter treatment delivery time, which reduces potential errors due to patient movement and decreases discomfort.
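A dose-volume histogram such as the one underlying the EA's evaluation function can be computed directly from a voxel dose array; a minimal sketch (our own illustration, not the thesis's evaluation function):

```python
import numpy as np

def cumulative_dvh(dose, bins=None):
    """Cumulative dose-volume histogram: for each dose level d in
    `bins`, the fraction of the structure's volume (voxels) receiving
    at least d."""
    dose = np.asarray(dose, dtype=float)
    if bins is None:
        bins = np.linspace(0.0, dose.max(), 101)
    volume = np.array([(dose >= d).mean() for d in bins])
    return np.asarray(bins), volume

# Example: fractions of a 4-voxel target receiving at least 0, 50, 60 Gy.
doses = np.array([45.0, 52.0, 60.0, 58.0])
d, v = cumulative_dvh(doses, bins=np.array([0.0, 50.0, 60.0]))
```

Evaluation functions of the kind described above then score a plan by how far such curves deviate from clinically prescribed dose-volume constraints.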
Sustainable chemistry encompasses the use of material resources and their transformation without harm to future generations. Catalysis in particular has established itself as a useful technology for shortening synthetic routes to high-value products while improving the CO2 balance of the overall process. Within this dissertation, sustainable, homogeneously catalyzed processes were developed for integrating renewable raw materials into the chemical value chain and for the waste-minimized synthesis of amides and peptides.
In the first part of this work, isomerizing metathesis was established as a method for the valorization of renewable raw materials. With a bimetallic catalyst system consisting of the isomerization catalyst [Pd(µ-Br)(tBu3P)]2 and NHC-based ruthenium metathesis catalysts, double bonds of unsaturated compounds are continuously shifted along the hydrocarbon chain and can simultaneously, regardless of their position, undergo metathesis. This allows two different olefins to be converted into a mixture with a homogeneous product distribution and an adjustable average chain length. The synthetic potential of this transformation was demonstrated by the preparation of diesel substitute fuels that are based entirely on renewable resources and, owing to their boiling behavior, can be used undiluted in modern engines. The newly developed tandem process further enables the targeted shortening of olefinic side chains in the presence of ethylene. The isomerizing ethenolysis of the naturally occurring allylbenzenes eugenol, allylanisole, safrole, and methyleugenol was used for the synthesis of valuable styrenes with complex substitution patterns. Isomerizing ethenolysis is also the key technology for the valorization of cashew nut shell oil. Starting from this previously unused waste material, the synthesis of the tsetse fly attractants 3-ethyl- and 3-propylphenol as well as the polymer precursor 3,3'-hydroxystilbene was demonstrated.
The second part of this doctoral thesis comprised the rational development of a waste-minimized and environmentally benign method for the synthesis of amides from carboxylic acids and amines. For this purpose, a highly effective, air- and water-stable Ru(IV) catalyst system was identified that mediates the addition of carboxylic acids to alkynes to form enol esters as well as the further conversion of these activated esters with amines to amides. A one-step, one-pot procedure for the synthesis of amides, in which all reagents are added at the start of the reaction, was developed using ethoxyacetylene as the activating reagent. Here, the carboxylic acids are converted in the presence of an amine into highly reactive ketene acetal intermediates, which after aminolysis afford the corresponding amides in very good yields. The scope of this mild reaction protocol covers aliphatic and aromatic carboxylic acids as well as N- and C-terminally protected amino acids.
In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms.
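For the unconstrained case, one classical closed form gives the minimal decomposition time as the maximum over rows of the sum of positive increments along the row. A sketch of this formula (our own illustration of the quantity being minimized, not the thesis's O(NM) algorithm):

```python
def min_decomposition_time(A):
    """Minimal total decomposition time sum(alpha_k) when the integer
    matrix A is written as a nonnegative combination of 0-1 matrices
    whose rows satisfy the strict consecutive-ones property, with no
    further shape constraints. Row minimum: sum of positive increments
    (with a leading zero); matrix minimum: the maximum over rows."""
    def row_time(row):
        prev, total = 0, 0
        for a in row:
            total += max(0, a - prev)
            prev = a
        return total
    return max(row_time(r) for r in A)

# Example: the single row [2, 3, 1] needs 2 + 1 = 3 time units.
```

Each positive increment forces a new interval of leaves to open, which is why this quantity is a lower bound; it is also achievable when the rows are unconstrained.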
Liquid Composite Molding (LCM) processes, like Resin Transfer Molding (RTM) and Vacuum Assisted Resin Infusion (VARI), are gaining increasing interest for the cost-efficient production of fiber reinforced polymer matrix composites, e.g. the Airbus A380 rear bulkhead. Meanwhile, purpose-built thermoset resin systems with adequately low processing viscosities are available. Although composites from epoxy resins (EP) show better fatigue resistance, they tend to be expensive, while vinylester resin (VE) composites are more brittle and hence less fatigue resistant, but attractive due to their lower material costs. Following research on the toughness improvement of vinylester-based resin systems, one subject of this thesis was the broad experimental characterization of the static and cyclic behavior of carbon fiber reinforced composites from resin systems which were toughened either by the generation of interpenetrating networks with aliphatic (Al-EP) and cyclo-aliphatic epoxy resins (Cal-EP) or by addition of a liquid, epoxy-terminated butadiene-nitrile rubber (ETBN). While quasi-static in-plane tension, compression and shear testing of [0°]8 and [±45°]3S laminates gave an unclear picture of the mechanical performance of the investigated resin systems, R = -1 cyclic step loading provided a definite indication of the considerably higher cyclic fatigue strength of the modified carbon fiber reinforced vinylester-urethane (CF/VEUH:ETBN) composite, which was consequently selected for detailed mechanical testing. To provide experimental input for subsequent fatigue life simulations applying the Critical Element Concept of Reifsnider et al. [76], the study included the determination of ultimate in-plane tension, compression and shear properties as well as the characterization of the cyclic fatigue behavior under constant amplitude loading.
Different descriptions of the S-N curves of the [0°]8-, [0°/90°]2S- and [+45°/0°/-45°/90°]S-laminates for R = +0.1, -1 and +10 were determined to derive constant fatigue life diagrams applying the methods of Goodman or Harris et al. Furthermore, the residual strength degradation model for the critical element (0° ply) and the residual stiffness degradation models for the sub-critical elements were derived experimentally on [0°]8-, [0°/90°]2S- and [+45°/0°/-45°/90°]S-(CF/VEUH:ETBN)-laminates. Deficiencies in current fatigue life prediction modeling for carbon fiber reinforced materials result in the adoption of large safety factors. As a consequence, composite structures are often overdesigned and expensive prototype testing is required for lifetime prediction. Therefore, standardized random-ordered miniTWIST (minimized transport wing standard) spectrum loading was used in this thesis to investigate improvements in fatigue life modeling, so that fatigue life prediction enables a more efficient use of these materials. In particular, the influence of constant amplitude cyclic fatigue modeling as well as of constant fatigue life modeling itself on the results of the fatigue life analysis of random loading sequences was investigated. Finally, the bearing of residual strength or residual stiffness degradation modeling and the effect of filtering and counting methods on fatigue life prediction were determined in a sensitivity analysis. The fatigue life models were validated by experimental results using random miniTWIST loading on [0°]8-, [0°/90°]2S- and [+45°/0°/-45°/90°]S-(CF/VEUH:ETBN)-laminates.
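As background on the constant fatigue life diagrams mentioned above, the classical Goodman line reduces the allowable stress amplitude linearly with mean stress; a sketch of this relation only (illustrative, not the thesis's Harris-type model, and with made-up example values):

```python
def goodman_allowable_amplitude(sigma_m, sigma_a0, sigma_u):
    """Goodman constant-life line: allowable stress amplitude at mean
    stress sigma_m, given the fully reversed (R = -1) fatigue strength
    sigma_a0 and the ultimate tensile strength sigma_u (same units)."""
    return sigma_a0 * (1.0 - sigma_m / sigma_u)

# At zero mean stress the full R = -1 amplitude is available;
# at sigma_m = sigma_u no alternating stress can be carried.
```

Constant-life diagrams for laminates, as derived in the thesis, replace this straight line with experimentally fitted curves over several R-ratios.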
The comprehensive expansion of wastewater treatment plants in Germany over recent decades has led to a marked improvement in water quality. Nevertheless, the ecological status of many water bodies is still unsatisfactory. Shock loads from combined sewer overflows have a negative impact on receiving waters and can cause lasting damage to sensitive aquatic ecosystems through hydraulic stress and pollutant loads.
This work contributes to the question of how high-resolution online monitoring data can be used to optimize sewer network operation. For this purpose, two real stormwater overflow tanks in a combined sewer system in southern Germany were equipped for two years with online spectrometer probes recording equivalent concentrations of total suspended solids (TSS), chemical oxygen demand (COD, total and dissolved), and nitrate. In addition, hydrometric measurement data at the tanks were provided by the operator of the drainage system.
The first part of the work consists of load and volume analyses of the impoundment events at the two overflow tanks. These investigations are intended to improve the understanding of the substance-specific and hydraulic processes in the combined sewer system. In the second part, a new approach to improving sewer network operation through the direct use of measurement data is tested. For this measurement-based simulation, measured hydrographs of flow rate and solids concentration are used directly as the system input of a transport model. Various sewer network management strategies are examined with this model. The following findings can be derived from the analyses performed:
In the study area, the intensity of first-flush loads cannot be predicted from the characteristics of the dry periods preceding the events or from the properties of the rainfall events themselves. Nor is there a constant accumulation of pollutants on the catchment surface, as assumed in common quality models. Consequently, runoff quality in the study area cannot be simulated reliably, and operational decisions based on pollutant load models are highly uncertain.
The measurement-based simulation newly introduced in this work circumvents these uncertainties and replaces them with the measurement uncertainties themselves. It can reliably assess the efficiency of different management strategies, such as statically optimized throttle discharges or dynamic real-time control of storage volumes. In the fictitious system examined, a measurement time series of about four months with average rainfall characteristics and about 10 rainfall events is sufficient for reliable results of the measurement-based simulation. In more complex catchments, the data requirement may be higher. Taking the usual measurement uncertainties into account, the methodology yields robust results.
Many open problems in graph theory aim to verify that a specific class of graphs has a certain property.
One example, which we study extensively in this thesis, is the 3-decomposition conjecture.
It states that every cubic graph can be decomposed into a spanning tree, cycles, and a matching.
Our most noteworthy contributions to this conjecture are a proof that graphs which are star-like satisfy the conjecture and that several small graphs, which we call forbidden subgraphs, cannot be part of minimal counterexamples.
These star-like graphs are a natural generalisation of Hamiltonian graphs in this context and encompass an infinite family of graphs for which the conjecture was not known previously.
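As a concrete illustration, the conjecture can be checked by hand on K4, the smallest cubic graph: a star spanning tree from one vertex leaves a triangle, and the matching may be empty. A sketch of such a verification (the checker is our own helper, not from the thesis; it verifies the shapes of the three parts, and for a cubic graph on n vertices the parts together must additionally use all 3n/2 edges):

```python
from itertools import chain

def is_3_decomposition(n, tree, cycles, matching):
    """Check that the given edge sets are pairwise disjoint and form a
    spanning tree on vertices 0..n-1, 2-regular parts (disjoint unions
    of cycles), and a matching."""
    edges = list(chain(tree, *cycles, matching))
    if len(set(map(frozenset, edges))) != len(edges):
        return False                      # parts must be edge-disjoint
    # spanning tree: n-1 edges and acyclic (checked via union-find)
    if len(tree) != n - 1:
        return False
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    # each cycle part: every touched vertex has degree exactly 2
    for cyc in cycles:
        deg = {}
        for u, v in cyc:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        if any(d != 2 for d in deg.values()):
            return False
    # matching: no vertex is covered twice
    m_verts = [v for e in matching for v in e]
    return len(m_verts) == len(set(m_verts))

# K4: star spanning tree from vertex 0; the remaining triangle is the cycle.
k4_ok = is_3_decomposition(4, tree=[(0, 1), (0, 2), (0, 3)],
                           cycles=[[(1, 2), (2, 3), (1, 3)]], matching=[])
```

This is exactly the Hamiltonian-path intuition the star-like generalisation builds on: any structure guaranteeing a suitable spanning tree leaves only cycles and a matching behind.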
Moreover, we use the forbidden subgraphs we determined to deduce that 3-connected cubic graphs of path-width at most 4 satisfy the 3-decomposition conjecture:
we do this by showing that the path-width restriction causes one of these forbidden subgraphs to appear.
In the second part of this thesis, we delve deeper into two steps of the proof that 3-connected cubic graphs of path-width 4 satisfy the conjecture.
These steps involve a significant amount of case distinctions and, as such, are impractical to extend to larger path-width values.
We show how to formalise the techniques used in such a way that they can be implemented and solved algorithmically.
As a result, only the work that is "interesting" to do remains and the many "straightforward" parts can now be done by a computer.
While one step is specific to the 3-decomposition conjecture, we derive a general algorithm for the other.
This algorithm takes a class of graphs \(\mathcal G\) as an input, together with a set of graphs \(\mathcal U\), and a path-width bound \(k\).
It then attempts to answer the following question:
does any graph in \(\mathcal G\) that has path-width at most \(k\) contain a subgraph in \(\mathcal U\)?
We show that this problem is undecidable in general, so our algorithm does not always terminate, but we also provide a general criterion that guarantees termination.
In the final part of this thesis we investigate two connectivity problems on directed graphs.
We prove that, in a local certification setting, verifying the existence of an \(st\)-path cannot be achieved with a constant number of bits.
More precisely, we show that a proof labelling scheme needs \(\Theta(\log \Delta)\) many bits, where \(\Delta\) denotes the maximum degree.
Furthermore, we investigate the complexity of the separating by forbidden pairs problem, which asks for the smallest number of arc pairs that are needed such that any \(st\)-path completely contains at least one such pair.
We show that the corresponding decision problem is \(\mathsf{\Sigma_2P}\)-complete.
The use of solid-liquid extraction to obtain active compounds from plant material is as old as humankind. Enriched extracts and purified active compounds are therefore commonplace in food technology, biotechnology, and pharmacology. The diffusion and mass transfer of the active compounds are influenced by the biological, chemical, and physical characteristics of the plant material and by the operating conditions of the extraction process. This dissertation analyzes the diffusion behavior of various active compounds from different plant materials when the extraction process is assisted by microwaves, ultrasound, or high-voltage pulses. Mass transfer at a fixed temperature is described with single-stage and multi-stage process engineering models.
In contrast to previous studies, a wide range of plant structures is selected in combination with alternative process concepts based on the different mechanisms of action. The modeling is inspired by the diverse plant materials, such as leaves, flowers, needles, seeds, bark, roots, and herbs, subjected to microwaves, ultrasound, and high-voltage pulses, which help to increase the yield of active compounds in the extract. First, properties such as bulk density, volatile content, particle size distribution, and the influence of solvent and phase ratio of the selected plant materials were investigated. As expected, each plant and its active compounds exhibit different characteristics that affect the extraction process with alternative process concepts.
Contrary to expectations, microwave-assisted extraction by dielectric heating achieves, after optimization of the power, the highest yield for all selected plant materials when they are dried. With ultrasound-assisted extraction at a fixed extraction temperature, a larger quantity of active compounds is measured in the extract compared with a stirred batch. High-voltage pulse-assisted extraction with a simple pulse protocol and a moderate electric field strength yields a high amount of active compounds and only mild heating of the solvent for freshly harvested (i.e., undried) plant materials in an aqueous extraction medium with at most 20 vol% ethanol.
The calculated rate constants, the resulting activation energies, and the effective diffusion coefficients, based on the analytical solution of Fick's second law, correlate with the observed macro- and micro-properties of the plant materials. Finally, three-stage cross-current extractions carried out with an automated high-throughput system are modeled on the basis of mass balances, and the solvent quantities actually used are compared with the calculated ones for different plant materials. Owing to their swelling behavior, active compounds in woody structures and in herbs show attenuated diffusion compared with the release of active compounds from leafy raw material or spice seeds.
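Effective diffusion coefficients of this kind are typically fitted to the analytical series solution of Fick's second law; for a spherical particle the fractional yield takes a classical form. A sketch, with geometry and parameter values chosen purely for illustration:

```python
import math

def fractional_yield_sphere(D, R, t, terms=50):
    """Fractional extraction yield M_t/M_inf for diffusion out of a
    sphere of radius R (m) with effective diffusivity D (m^2/s) at
    time t (s), truncating the series solution of Fick's second law:
    M_t/M_inf = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / R^2) / n^2."""
    s = sum(math.exp(-n**2 * math.pi**2 * D * t / R**2) / n**2
            for n in range(1, terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s
```

Fitting such curves to measured yield-versus-time data at several temperatures yields the effective diffusion coefficients and, via an Arrhenius plot, the activation energies discussed above.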
In the first part of this work, called Simple node singularity, we compute matrix factorizations of all isomorphism classes, up to shifts, of rank-one and rank-two, graded, indecomposable maximal Cohen--Macaulay (MCM) modules over the affine cone of the simple node singularity. Subsection 2.2 contains a description, via their matrix factorizations, of all rank-two graded MCM R-modules whose sheafification on the projective cone of R is stable. A general description of such modules of any rank over a projective curve of arithmetic genus 1 is also given, using their matrix factorizations. The non-locally free rank-two MCM modules are computed using an algorithm presented in the Introduction of this work, which gives a matrix factorization of any extension of two MCM modules over a hypersurface. In the second part, called Fermat surface, all graded rank-two MCM modules over the affine cone of the Fermat surface are classified. For the classification of the orientable rank-two graded MCM R-modules, we use a description of orientable modules (over normal rings) by means of codimension-two Gorenstein ideals, due to Herzog and Kühl. It is proven (in Section 4) that they have skew-symmetric matrix factorizations (over any normal hypersurface ring). For the classification of the non-orientable rank-two MCM R-modules, we use a similar idea as in the orientable case, except that the ideal is no longer Gorenstein.
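For orientation, the central notion can be stated compactly; the following is standard background (Eisenbud's correspondence for hypersurfaces), not a result of the thesis:

```latex
A matrix factorization of $f \in S$ is a pair of square matrices
$(\varphi,\psi)$ over $S$ with
\[
  \varphi\,\psi \;=\; \psi\,\varphi \;=\; f\cdot \mathrm{Id}.
\]
Then $\operatorname{coker}\varphi$ is a maximal Cohen--Macaulay module over
the hypersurface ring $R = S/(f)$, and every MCM $R$-module without free
summands arises from a (reduced) matrix factorization in this way.
```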
Speed and explosiveness are defining elements of football, and the importance of these abilities has increased markedly in recent years. Accordingly, accounting for speed-strength appears to be of prognostic relevance for the complex field of talent identification and the associated selection processes in performance-oriented youth football. However, few published data have been collected under methodological standards. For this reason, 822 active male club footballers aged between 10 and 19 years completed a diagnostic speed-strength test battery within this work. The results show that the players' performance increases across the entire age range of 10-19 years, with performance development closely linked to the adolescents' maturation. Furthermore, players from youth academies show better values than players who do not play football at a youth academy. In addition, it becomes clear that the test performances of players in different playing positions differ, in some cases considerably. Follow-up studies are intended to expand the database further in order to enable more detailed comparisons within the various subgroups.
Due to their N-glycosidase activity, ribosome-inactivating proteins (RIPs) are attractive candidates as antitumor and antiviral agents in medical and biological research. In the present study, we successfully cloned two different truncated gelonins into pET-28a(+) vectors and expressed intact recombinant gelonin (rGel), a recombinant C-terminally truncated gelonin (rC3-gelonin), and a recombinant N- and C-terminally truncated gelonin (rN34C3-gelonin). Biological experiments showed that none of these recombinant gelonins has an inhibitory effect on the MCF-7 cell line. These data suggest that the truncated gelonins retain a specific structure that does not allow internalization into cells. Further, truncation of gelonin leads to a partial or complete loss of N-glycosidase as well as DNase activity compared to intact rGel. Our data suggest that the C- and N-terminal amino acid residues are involved in the catalytic and cytotoxic activities of rGel. In addition, intact gelonin, rather than truncated gelonin, should be selected as the toxin in an immunoconjugate.
In the second part, an immunotoxin composed of gelonin, a basic 30 kDa protein isolated from the Indian plant Gelonium multiflorum, and the cytotoxic drug MTX was studied as a potential tool for delivering gelonin into the cytoplasm of cells. The experiments showed that, on average, about 5 molecules of MTX were coupled to one molecule of gelonin. The MTX-gelonin conjugate reduces the viability of MCF-7 cells in a dose-dependent manner (ID50: 10 nM), as shown by the MTT assay, and significantly induces direct and oxidative DNA damage, as shown by the alkaline comet assay. In in-vitro translation assays, however, the MTX-gelonin conjugate has an IC50 of 50.5 ng/ml and is thus less toxic than gelonin alone (IC50: 4.6 ng/ml). It can be concluded that the positive charge plays an important role in the N-glycosidase activity of gelonin. Furthermore, conjugation of MTX with gelonin through its α- and γ-carboxyl groups leads to a partial loss of its anti-folate activity compared to free MTX. Taken together, these results indicate that conjugation of MTX to gelonin permits delivery of gelonin into the cytoplasm of cancer cells and exerts a measurable toxic effect.
In the third part, we isolated and characterized two type I ribosome-inactivating proteins (RIPs), gelonin and GAP31, from seeds of Gelonium multiflorum. Both proteins exhibit RNA N-glycosidase activity. The amino acid sequences of gelonin and GAP31 were identified by MALDI and ESI mass spectrometry. The gelonin and GAP31 peptides obtained by proteolytic digestion (trypsin and Arg-C) are consistent with the amino acid sequences published by Rosenblum and by Huang, respectively. Further structural characterization of gelonin and GAP31 (tryptic and Arg-C peptide mapping) showed that the two RIPs have 96% sequence similarity. Thus, these two proteins are most probably isoforms arising from the same gene by alternative splicing. ESI-MS analysis of gelonin and GAP31 revealed at least three different post-translationally modified forms. A standard plant paucimannosidic N-glycosylation pattern (GlcNAc2Man2-5Xyl0-1 and GlcNAc2Man6-12Fuc1-2Xyl0-2) was identified by electrospray ionization MS for gelonin on N196 and for GAP31 on N189, respectively. Based on these results, both proteins are located in the vacuoles of Gelonium multiflorum seeds.
In this thesis we developed a desynchronization design flow with the goal of easing the development effort of distributed embedded systems. The starting point of this design flow is a network of synchronous components. By transforming this synchronous network into a dataflow process network (DPN), we ensure that important properties that are difficult or theoretically impossible to analyze directly on DPNs are preserved by construction. In particular, both deadlock-freeness and buffer boundedness can be preserved after desynchronization. For the correctness of desynchronization, we developed a criterion consisting of two properties: a global property that demands the correctness of the synchronous network, as well as a local property that requires the latency-insensitivity of each local synchronous component. As the global property is also a correctness requirement of synchronous systems in general, we take this property as an assumption of our desynchronization. However, the local property is in general not satisfied by all synchronous components and therefore needs to be verified before desynchronization. In this thesis we developed a novel technique for the verification of the local property that can be carried out very efficiently. Finally, we developed a model transformation method that translates a set of synchronous guarded actions – an intermediate format for synchronous systems – to an asynchronous actor description language (CAL). Our theorem ensures that, once the correctness verification has passed, the generated DPN of asynchronous processes (or actors) preserves the functional behavior of the original synchronous network. Moreover, by the correctness of the synchronous network, our theorem guarantees that the derived DPN is deadlock-free and can be implemented with only finitely bounded buffers.
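To make the target execution model concrete, here is a minimal, purely illustrative Python sketch (hypothetical names, not the thesis's generated CAL code): two dataflow actors connected by a finitely bounded FIFO, where each actor fires only when it has input tokens and output buffer space, so the network runs to completion with a bounded buffer and without deadlock.

```python
# Illustrative sketch only: two dataflow actors connected by a finitely
# bounded FIFO channel. Each actor fires only when it has input tokens
# and the channel has room, so execution needs no unbounded buffer.
from collections import deque

BOUND = 2  # finite channel capacity

def run_dpn(inputs):
    buf = deque()            # bounded channel between the two actors
    out = []
    pending = list(inputs)
    while pending or buf:
        if pending and len(buf) < BOUND:
            buf.append(pending.pop(0) * 2)   # actor 1 fires: double
        elif buf:
            out.append(buf.popleft() + 1)    # actor 2 fires: increment
    return out

result = run_dpn([1, 2, 3])  # -> [3, 5, 7]
```

The scheduling rule (fire only with tokens and buffer space) is what a correct desynchronization must guarantee statically for the whole network.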
Epoxy belongs to a category of high-performance thermosetting polymers which have been used extensively in industrial and consumer applications. Highly cross-linked epoxy polymers offer excellent mechanical properties, adhesion, and chemical resistance. However, unmodified epoxies are prone to brittle fracture and crack propagation due to their highly crosslinked structure. As a result, epoxies are normally toughened to ensure the usability of these materials in practical applications.
This research work focuses on the development of novel modified epoxy matrices with enhanced mechanical, fracture-mechanical, and thermal properties, suitable for processing by filament winding technology, in order to manufacture composite-based calender roller covers with improved performance compared to commercially available products.
In the first stage, a neat epoxy resin (EP) was modified using three different high-functionality epoxy resins with two types of hardeners, i.e. amine-based (H1) and anhydride-based (H2). A series of hybrid epoxy resins was obtained by systematic variation of the high-functionality epoxy resin content relative to the reference epoxy system. The resulting matrices were characterized by their tensile properties, and the best system was chosen for each hardener, i.e. amine and anhydride. For the tailored amine-based system (MEP_H1), a 14 % improvement was measured for bulk samples; similarly, for the tailored anhydride-based system (MEP_H2), an 11 % improvement was measured when tested at 23 °C.
Further, the tailored epoxy systems (MEP_H1 and MEP_H2) were modified using a specially designed block copolymer (BCP) and core-shell rubber nanoparticles (CSR). A series of nanocomposites was obtained by systematic variation of the filler content. The resulting matrices were extensively characterized, qualitatively and quantitatively, to reveal the effect of each filler on the polymer properties. It was shown that the BCP confers better fracture properties on the epoxy resin at low filler loading without losing the other mechanical properties. These characteristics were accompanied by ductility and temperature stability. All composites were tested at 23 °C and at 80 °C to understand the effect of temperature on the mechanical and fracture properties.
Examinations of fractured specimen surfaces provided information about the mechanisms responsible for reinforcement. Nanoparticles generate several energy-dissipating mechanisms in the epoxy, e.g. plastic deformation of the matrix, cavitation, void growth, debonding, and crack pinning. These were closely related to the microstructure of the materials, whose characteristics were verified by microscopy methods (SEM and AFM). The microstructure of the neat epoxy-hardener system was strongly influenced by the nanoparticles and the resulting interfacial interactions. The interaction of the nanoparticles with a different hardener system results in a different morphology, which ultimately influences the mechanical and fracture-mechanical properties of the nanocomposites. Hybrid toughening using combinations of block copolymer / core-shell rubber nanoparticles and block copolymer / TiO2 nanoparticles was investigated in the epoxy systems. It was found that adding a rigid phase together with a soft phase recovers the loss of strength in the nanocomposites caused by the softer phase.
In order to clarify the relevant relationships, the microstructural and mechanical properties were correlated. The Counto, Halpin-Tsai, and Lewis-Nielsen equations were used to calculate the modulus of the composites, and the predicted moduli fit the measured values well. Modeling was done to predict the toughening contributions of the block copolymers and core-shell rubber nanoparticles; there was good agreement between the predicted and experimental values for the fracture energy.
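As an illustration of the kind of micromechanics model named above, a minimal sketch of the Halpin-Tsai prediction follows; the moduli and shape parameter are assumed example values, not the thesis's measured data.

```python
# Illustrative sketch of the Halpin-Tsai micromechanics equation.
# All numeric values below are assumed for illustration only.

def halpin_tsai(E_m, E_f, phi, zeta=2.0):
    """Composite modulus E_c from matrix modulus E_m, filler modulus E_f,
    filler volume fraction phi, and filler shape parameter zeta."""
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Example: a 3 GPa epoxy matrix with 10 vol% of a 70 GPa rigid filler
E_c = halpin_tsai(E_m=3.0, E_f=70.0, phi=0.10)  # ~3.87 GPa
```

The prediction interpolates between the matrix and filler moduli; stiff fillers at modest loadings raise the composite modulus only moderately, consistent with the correlations described above.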
Towards PACE-CAD Systems
(2022)
Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD system. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored.
A pragmatic CAD system is one that can perform reliably in a day-to-day clinical setup. In this research, two of possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that the existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, realising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases in one of the largest publicly available datasets of skin lesions is successfully performed.
Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is industriously utilised in an attempt to improve the accuracy. For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which set a new state of the art for glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained classifiers with fine-grained classifiers and grades the disease in four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are also examined, and their effect on a model's classification performance is studied.
Confidence in diagnosing a disease is as important as the diagnosis itself. One of the biggest obstacles hampering the successful deployment of CAD in the real world is that a medical diagnosis cannot be readily decided based on an algorithm's output. Therefore, a hybrid CNN architecture is proposed with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy, and it also provides an estimate of uncertainty for every prediction.
Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis. The framework is called ExAID (Explainable AI for Dermatology) and relies upon two fundamental modules. The first module uses any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable, disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating the significant regions corresponding to a learned concept in the latent space of a trained image classifier.
This thesis probes many viable solutions to equip a CAD system with PACE. However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
Scientific studies point to a health-promoting and weight-regulating potential of coffee, which is mainly attributed to its high content of antioxidants. In this work, coffee extracts, coffee constituents, and coffee beverages were investigated with respect to their antioxidative efficacy. In vitro, the preventive effect of extracts from lightly (AB 1), medium (RI, AC, AB), and strongly (AB 2) roasted coffees was characterized using markers of oxidative cell damage and cellular response in the colon carcinoma cell lines HT-29 and Caco-2. For comparison, the selected native coffee constituents 5-caffeoylquinic acid (5-CQA) and trigonelline (TRIG) as well as the roasting products caffeic acid (CA), catechol (Cat), 1,2,4-trihydroxybenzene (THB), N-methylpyridinium (NMP), and methylated NMP analogues were examined. The parameters recorded were the cellular ROS level, (oxidative) DNA damage, and the protein expression of the ARE-dependent enzymes NQO1, γ-GCL, and GSR. In addition, the direct antioxidative activity was measured by TEAC and ORAC assays. The results showed a radical-scavenging property of all coffee extracts, with values of 0.9-1.5 mM Trolox (TEAC) and 2.5-2.8 mM Trolox (ORAC), respectively. The cellular ROS level in HT-29 cells was significantly reduced by the extracts AB, AC, RI, and the strongly roasted AB 2. A concentration-dependent and significant induction of ARE-dependent enzymes (NQO1, γ-GCL, and GSR) in HT-29 cells was observed for the CQA-rich AB 1, whereas the NMP-rich AB 2 was ineffective. Of the coffee constituents examined, only the phenolic compounds 5-CQA and CA showed a pronounced cell-free antioxidative activity. The cellular ROS level could be reduced by 5-CQA, whereas methylated NMP analogues were able to reduce (oxidative) DNA damage in Caco-2 cells. Like the lightly roasted AB 1, 5-CQA induced the protein expression of all enzymes examined in HT-29 cells.
Furthermore, in two consecutive human intervention studies, the antioxidative potential of different coffee beverages with a high proportion of antioxidants was characterized; in the second study, the weight-regulating effect of the study coffee was additionally examined. In the first pilot study, conducted by the Somoza group (DFA Garching), the modulation of DNA damage in the blood of volunteers was recorded after consumption of two coffee beverages rich in either chlorogenic acids or N-methylpyridinium (coffee consumption: 0.5 L per day, 4 weeks). Both coffee beverages produced a marked decrease in oxidative DNA damage in the blood of healthy volunteers. In the second intervention study, 35 male volunteers consumed, after a four-week wash-out phase, 750 ml per day of an antioxidant-rich coffee (580 mg/l CQAs; 71.7 mg/l NMP) over four weeks, followed by another four-week wash-out phase. At the beginning of the study and at the end of each phase, blood samples were taken to determine the biomarkers (oxidative) DNA damage and glutathione (total glutathione tGSH, oxidized glutathione GSSG), and bioimpedance analyses were performed to determine body composition. In addition, the volunteers' energy and nutrient intake was recorded. The results showed a significant decrease in (oxidative) DNA damage (p < 0.001), an increase in the tGSH level (p < 0.05) and in the GSH status (trend), and a decrease in the GSSG level. Furthermore, a significant decrease in the volunteers' body weight and body fat was observed during the coffee intervention compared with both wash-out phases, which was more pronounced in volunteers with a BMI < 25. The reduction of energy and nutrient intake during the coffee phase points to a coffee-dependent influence on satiety regulation.
In summary, the antioxidant-rich study coffee has a clear potential to reduce oxidative cell damage as well as a weight-regulating effect in healthy volunteers. Based on our in vitro data, the antioxidative efficacy can be partly attributed to the native coffee constituents examined (above all CQAs) and to roasting products. Other substances or substance groups not yet characterized presumably also contribute to the observed effect.
Compared to our current knowledge of neuronal excitation, little is known about the development and maturation of inhibitory circuits. Recent studies show that inhibitory circuits develop and mature in a similar way to excitatory circuits. One such similarity is development through excitation, irrespective of their inhibitory nature. In the present study, I used the inhibitory projection between the medial nucleus of the trapezoid body (MNTB) and the lateral superior olive (LSO) as a model system to unravel some aspects of the development of inhibitory synapses. In LSO neurons of the rat auditory brainstem, glycine receptor-mediated responses change from depolarizing to hyperpolarizing during the first two postnatal weeks (Kandler and Friauf 1995, J. Neurosci. 15:6890-6904). The depolarizing effect of glycine is due to a high intracellular chloride concentration ([Cl-]i), which renders the reversal potential of glycine (EGly) more positive than the resting membrane potential (Vrest). In older LSO neurons, the hyperpolarizing effect is due to a low [Cl-]i (Ehrlich et al., 1999, J. Physiol. 520:121-137). The aim of the present study was to elucidate the molecular mechanism behind Cl- homeostasis in LSO neurons, which determines the polarity of the glycine response. To this end, the role and developmental expression of Cl- cotransporters such as NKCC1 and KCC2 were investigated. Molecular biological and gramicidin perforated patch-clamp experiments revealed the role of KCC2 as an outward Cl- cotransporter in mature LSO neurons (Balakrishnan et al., 2003, J Neurosci. 23:4134-4145). NKCC1, however, does not appear to be involved in accumulating chloride in immature LSO neurons. Further experiments indicated a role of the GABA and glycine transporters (GAT1 and GLYT2) in accumulating Cl- in immature LSO neurons. Finally, experiments with hypothyroid animals suggest a possible role of thyroid hormone in the maturation of inhibitory synapses.
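The polarity switch described above follows textbook electrophysiology (background, not a result of this thesis): the glycine reversal potential is set by the chloride Nernst potential,

```latex
E_{\mathrm{Gly}} \approx E_{\mathrm{Cl}}
  = \frac{RT}{z_{\mathrm{Cl}} F}\,\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i}
  = \frac{RT}{F}\,\ln\frac{[\mathrm{Cl}^-]_i}{[\mathrm{Cl}^-]_o}
  \qquad (z_{\mathrm{Cl}} = -1),
```

so a high $[\mathrm{Cl}^-]_i$ pushes $E_{\mathrm{Gly}}$ above $V_{\mathrm{rest}}$ (depolarizing), while the low $[\mathrm{Cl}^-]_i$ of mature LSO neurons pushes it below $V_{\mathrm{rest}}$ (hyperpolarizing).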
Altogether, this thesis addressed the molecular mechanism underlying the Cl- regulation in LSO neurons and deciphered it to some extent.
This thesis deals with the development of a tractor front-loader scale which measures payload continuously, independently of the payload's center of gravity, and unaffected by the position and movements of the loader. To achieve this, a mathematical model of a common front loader is simplified, which makes it possible to identify its parameters by a repeatable and automatic procedure. By measuring accelerations as well as cylinder forces, the payload is determined continuously during the working process. Finally, a prototype was built and the scale was tested on a tractor.
Large web openings become necessary in composite steel girders whenever service lines are to be routed across the girder axis within the girder plane. The attractiveness of composite steel construction increases with the ability to verify the load-bearing capacity in the region of large web openings by calculation. Until now, however, it was not possible to exploit a strengthening of the concrete flange by installing stud rails within a design concept. Stud rails are double-headed studs, usually welded in a row onto a perforated steel strip. They are marketed by several manufacturers as shear and punching-shear reinforcement elements.
The present work investigates the influence of stud rails on the shear behaviour of the reinforced concrete flange of composite girders in the region of large web openings. To this end, 28 tests as well as computational investigations were carried out. In individual tests, the use of stud rails increased the ultimate load by more than 50 %. Among other things, the effect of transverse bending was investigated, in which vertical loading of the concrete flange outside the composite girder axis produces a negative bending moment in the transverse direction.
A design concept was developed with which the shear capacity of a concrete flange can be determined realistically, taking into account stud rails arranged in the flange. The concept makes it possible, when designing composite girders weakened by web openings, to achieve a significant increase in ultimate load with a small additional amount of reinforcement in the form of stud rails. The influence of transverse bending moments is accounted for by a conservative estimate of the load-distribution area.
Today’s pervasive availability of computing devices with wireless communication and location or inertial sensing capabilities is unprecedented. The number of smartphones sold worldwide is still growing, and increasing numbers of sensor-enabled accessories are available which a user can wear in a shoe or at the wrist for fitness tracking, or just temporarily put on to measure vital signs. Despite this availability of computing and sensing hardware, the merit of applications seems rather limited with regard to the full potential of information inherent in such sensor deployments. Most applications build upon a vertical design which encloses a narrowly defined sensor setup and algorithms specifically tailored to suit the application's purpose. Successful technologies, however, such as the OSI model, which serves as the basis for internet communication, have used a horizontal design that allows high-level communication protocols to run independently of the actual lower-level protocols and physical medium access. This thesis contributes to a more horizontal design of human activity recognition systems at two stages. First, it introduces an integrated toolchain to facilitate the entire process of building activity recognition systems and to foster sharing and reuse of individual components. Second, a novel method for automatically integrating new sensors to increase a system's performance is presented and discussed in detail.
The integrated toolchain is built around an efficient toolbox of parametrizable components for interfacing sensor hardware, synchronization and arrangement of data streams, filtering and extraction of features, classification of feature vectors, and interfacing output devices and applications. The toolbox emerged as an open-source project through several research projects and is actively used by research groups. Furthermore, the toolchain supports recording, monitoring, annotation, and sharing of large multi-modal datasets for activity recognition through a set of integrated software tools and a web-enabled database.
The method for automatically integrating a new sensor into an existing system is, at its core, a variation of well-established principles of semi-supervised learning: (1) unsupervised clustering to discover structure in the data, (2) the assumption that cluster membership is correlated with class membership, and (3) obtaining a small number of labeled data points for each cluster, from which the cluster labels are inferred. In most semi-supervised approaches, however, the labels are the ground truth provided by the user. By contrast, the approach presented in this thesis uses a classifier trained on an N-dimensional feature space (the old classifier) to provide labels for a few points in an (N+1)-dimensional feature space, which are then used to generate a new, (N+1)-dimensional classifier. The different factors that make a distribution difficult to handle are discussed, a detailed description of heuristics designed to mitigate the influence of such factors is provided, and a detailed evaluation on a set of over 3000 sensor combinations from 3 multi-user experiments that have been used by a variety of previous studies of different activity recognition methods is presented.
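The labeling step described above can be sketched as follows (a simplified illustration with hypothetical names, not the thesis's implementation): the old N-dimensional classifier labels a few representatives per cluster, and the majority label is propagated to the whole cluster to obtain training data for the (N+1)-dimensional classifier.

```python
# Simplified illustration of labeling clusters in the (N+1)-dim space
# with an old N-dim classifier; heuristics from the thesis are omitted.
import random

def integrate_new_sensor(old_classify, points_nplus1, clusters, n_labels=3):
    """points_nplus1: list of (N+1)-dim feature vectors.
    clusters: list of index lists, one per discovered cluster.
    old_classify: maps the first N features to a class label."""
    labeled = {}
    for cluster in clusters:
        # Ask the old classifier about a few representatives only ...
        sample = random.sample(cluster, min(n_labels, len(cluster)))
        votes = [old_classify(points_nplus1[i][:-1]) for i in sample]
        # ... and give the majority label to every point in the cluster.
        majority = max(set(votes), key=votes.count)
        for i in cluster:
            labeled[i] = majority
    return labeled  # training labels for the new (N+1)-dim classifier
```

The inferred labels are only as good as the cluster-class correlation, which is exactly why the thesis devotes heuristics to distributions where that assumption is weak.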
Reinforcing sandy soils with tyre rubber chips is a novel technology under investigation to optimize its engineering application. Previous studies concentrated on the static behaviour, and very few addressed the cyclic and dynamic behaviour of sand-rubber mixtures, leaving gaps that need to be addressed.
This research focuses on evaluating the static, cyclic, and dynamic behaviour of sand-rubber mixtures. The basic properties of the sands S2, S3, and S4, of the rubber chips, and of sand-rubber chip mixtures with 10/20/30% rubber chip content by dry mass were first evaluated in order to obtain the parameters essential for subsequent testing. Oedometer tests, direct shear tests with a larger 300x300 mm box, and static triaxial compression tests were performed to assess the static behaviour of the composite material. Further, dynamic cyclic triaxial tests were performed to evaluate the cyclic behaviour of saturated, dry, and wet mixtures. All specimens were first isotropically consolidated at 100 kPa. For saturated material, a static deviatoric stress of 45 kPa was imposed prior to cycling to simulate the anisotropic consolidation condition in the field. Cycling was applied stress-controlled with an amplitude of 50 kPa. Both undrained and drained tests were performed. Cyclic tests in dry or wet conditions were also performed under anisotropic consolidation with different stress amplitudes. For all cyclic tests the loading frequency was 1 Hz. With regard to the dynamic behaviour of the mixtures, resonant column tests were conducted. Calibration was first performed, yielding a frequency-dependent drive-head inertia. Wet mixture specimens were prepared at a relative density of 50% and tested at various confining stresses. Note that all specimens tested in both the triaxial and the resonant column apparatus were 100 mm in diameter. The results from the entire investigation are promising.
In summary, rubber chips in the range of 4 to 14 mm mixed with sands were found to increase the shear resistance of the mixtures. They lead to an increase of the cyclic resistance under saturated conditions, to a decrease of stiffness, and to an increase of the damping ratio. Increased confining stress increased the shear modulus reduction and decreased the damping ratio of the mixtures. Increased rubber content increased both the shear modulus reduction and the damping ratio. Several new design equations are proposed that can be used to compute the compression deformation, pore pressure ratio, maximum shear modulus, and minimum damping ratio, as well as the modulus reduction with shear strain. Finally, a chip content of around 20% to 30% by dry mass can be used to reinforce sandy soils. The use of this novel composite material in civil engineering applications could consume a large volume of scrap tyres and at the same time contribute to a cleaner environment and the conservation of natural resources.
Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial-standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore, there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
Thermoelasticity represents the fusion of the fields of heat conduction and elasticity in solids and is usually characterized by a twofold coupling: thermally induced stresses can be determined as well as temperature changes caused by deformations. Studying this mutual influence is the subject of thermoelasticity. Usually, heat conduction in solids is based on Fourier’s law, which describes a diffusive process. It predicts an unnatural infinite transmission speed for parts of local heat pulses. At room temperature, for example, these parts are strongly damped, so in such cases most engineering applications are described satisfactorily by the classical theory. In some situations, however, the predictions according to Fourier’s law fail miserably. One of these situations occurs at temperatures near absolute zero, where the phenomenon of second sound was discovered in the 20th century. Consequently, non-classical theories have attracted great research interest during recent decades. Throughout this thesis, the expression “non-classical” refers to the fact that the constitutive equation for the heat flux is not based on Fourier’s law, which hypothesizes that the heat flux is proportional to the temperature gradient. A new thermoelastic theory, on the one hand, needs to be consistent with classical thermoelastodynamics and, on the other hand, needs to describe second sound accurately. Hence, during the second half of the last century the traditional parabolic heat equation was replaced by a hyperbolic one. Its coupling with elasticity leads to non-classical thermomechanics, which allows the modeling of second sound, provides a passage to the classical theory, and additionally overcomes the paradox of infinite wave speed.
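The replacement of the parabolic equation by a hyperbolic one can be sketched explicitly. Writing \(\theta\) for the temperature, \(\mathbf{q}\) for the heat flux, \(k\) for the conductivity, \(\rho c\) for the volumetric heat capacity, and \(\tau\) for a relaxation time, a simplified rigid-conductor sketch (without the coupling to elasticity treated in the thesis) reads:

```latex
% Fourier's law inserted into the energy balance: parabolic (diffusive)
\mathbf{q} = -k\,\nabla\theta , \qquad
\rho c\,\dot{\theta} = -\operatorname{div}\mathbf{q}
\quad\Longrightarrow\quad
\rho c\,\dot{\theta} = k\,\Delta\theta .

% Cattaneo-type law with relaxation time \tau: hyperbolic (finite speed)
\tau\,\dot{\mathbf{q}} + \mathbf{q} = -k\,\nabla\theta
\quad\Longrightarrow\quad
\tau\,\rho c\,\ddot{\theta} + \rho c\,\dot{\theta} = k\,\Delta\theta ,
% with finite wave speed  c_2 = \sqrt{k/(\rho c\,\tau)} ,
% recovering the classical parabolic theory in the limit \tau \to 0 .
```

The second-order time derivative is what admits wave-like (second sound) solutions, while the first-order term damps them, consistent with the passage to the classical theory mentioned above.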
Although much effort has been put into non-classical theories, the thermoelastodynamic community has not yet agreed on one approach, and systematic research is going on worldwide. Computational methods play an important role in solving thermoelastic problems in the engineering sciences, usually owing to the complex structure of the equations at hand. This thesis aims at establishing a basic theory and numerical treatment of non-classical thermoelasticity (rather than dealing with special cases). The finite element method is already widely accepted in the field of structural solid mechanics and enjoys a growing significance in thermal analyses. This approach resorts to a finite element method in space as well as in time.
Methods for scale and orientation invariant analysis of lower dimensional structures in 3d images
(2023)
This thesis is motivated by two groups of scientific disciplines: engineering sciences and mathematics. On the one hand, engineering sciences such as civil engineering want to design sustainable and cost-effective materials with desirable mechanical properties. The material behaviour depends on physical properties and production parameters. Therefore, physical properties are measured experimentally from real samples. In our case, computed tomography (CT) is used to non-destructively gain insight into the materials’ microstructure. This results in large 3d images which yield information on geometric microstructure characteristics. On the other hand, mathematical sciences are interested in designing methods with suitable and guaranteed properties. For example, a natural assumption of human vision is to analyse images regardless of object position, orientation, or scale. This assumption is formalized through the concepts of equivariance and invariance.
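These two concepts can be stated compactly. For an operator \(\Phi\) (a filter or a network) and a group action \(T_g\) on images \(f\) (translation, rotation, or scaling), the notation below is standard and not specific to this thesis:

```latex
% Equivariance: transforming the input transforms the output the same way
\Phi(T_g f) = T_g\,\Phi(f) \quad \text{for all } g ,
% Invariance: the output does not change at all
\Phi(T_g f) = \Phi(f) \quad \text{for all } g .
```

For instance, an orientation-estimation operator should be rotation equivariant (rotating the sample rotates the estimated orientations), while a texture classifier should be invariant under the same transformations.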
In Part I, we deal with oriented structures in materials such as concrete or fiber-reinforced composites. In image processing, knowledge of the local structure orientation can be used for various tasks, e.g. structure enhancement. The idea of using banks of directed filters parameterized in the orientation space is effective in 2d. However, this class of methods is prohibitive in 3d due to the high computational burden of filtering when using a fine discretization of the unit sphere. Hence, we introduce a method for 3d pixel-wise orientation estimation and directional filtering inspired by the idea of adaptive refinement in discretized settings. Furthermore, an operator for distinguishing between isotropic and anisotropic structures is defined based on our method. The usefulness of the method is shown on 3d CT images in three different tasks: on a fiber-reinforced polymer, on concrete with cracks, and on partially closed foams. Additionally, our method is extended to construct line granulometries and to characterize fiber length and orientation distributions in fiber-reinforced polymers produced either by 3d printing or by injection moulding.
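To make the idea of pixel-wise orientation estimation concrete, here is a minimal 2d sketch based on the structure tensor, a classical alternative to the directed filter banks and adaptive spherical refinement used in the thesis (this is my own illustration, not the thesis's method):

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the dominant orientation of a 2d image from the averaged
    structure tensor J = mean of grad(f) grad(f)^T.  Returns the angle of
    maximal intensity variation; the structure itself runs perpendicular
    to it.  A 3d version would need the eigenvectors of a 3x3 tensor."""
    gy, gx = np.gradient(img.astype(float))   # gradients along axes 0 and 1
    jxx = (gx * gx).mean()
    jyy = (gy * gy).mean()
    jxy = (gx * gy).mean()
    # closed-form eigen-direction of a symmetric 2x2 tensor
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```

For an image of horizontal stripes (intensity varying only along the vertical axis), the angle of maximal variation is pi/2, i.e. the stripes themselves are horizontal.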
In Part II, we investigate how to introduce scale invariance for neural networks by using the Riesz transform. In classical convolutional neural networks, scale invariance is typically achieved by data augmentation. However, when presented with a scale far outside the range covered by the training set, the network may fail to generalize. Here, we introduce the Riesz network, a novel scale-invariant neural network. Instead of standard 2d or 3d convolutions for combining spatial information, the Riesz network is based on the Riesz transform, a scale-equivariant operator. As a consequence, this network naturally generalizes to unseen or even arbitrary scales in a single forward pass. As an application example, we consider segmenting cracks in CT images of concrete. In this context, 'scale' refers to the crack thickness, which may vary strongly even within the same sample. To demonstrate its scale invariance, the Riesz network is trained on one fixed crack width. We then validate its performance in segmenting simulated and real CT images featuring a wide range of crack widths. As an alternative to deep learning models, the Riesz transform is also used to construct a scale-equivariant scattering network, which does not require a lengthy training procedure and works with very few training examples. The mathematical foundations behind this representation are laid out and analyzed. We show that this representation, with four times fewer features than the original scattering networks of Mallat, performs comparably well on texture classification and gives superior performance for scales outside the training distribution.
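The scale equivariance exploited above stems from the Riesz transform's Fourier multiplier \(-i\,\xi_j/|\xi|\), which is unchanged when \(\xi\) is rescaled. A minimal FFT-based 2d sketch (my own illustration, not code from the thesis):

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2d image via the FFT,
    R_j f = F^{-1}( -i * xi_j / |xi| * F f ).  The multiplier
    -i * xi_j / |xi| is homogeneous of degree zero in xi, which is the
    source of the scale equivariance used by the Riesz network."""
    h, w = img.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)   # frequencies along axis 0
    fx = np.fft.fftfreq(w).reshape(1, -1)   # frequencies along axis 1
    norm = np.sqrt(fx**2 + fy**2)
    norm[0, 0] = 1.0                        # avoid 0/0 at the DC bin
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * fx / norm * F))  # horizontal component
    r2 = np.real(np.fft.ifft2(-1j * fy / norm * F))  # vertical component
    return r1, r2
```

Along a single axis the transform reduces to the Hilbert transform, so a pure cosine wave along x is mapped by the first component to the corresponding sine wave, while the second component vanishes.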
Solid particle erosion is usually undesirable, as it leads to the development of cracks and holes, to material removal, and to other degradation mechanisms that ultimately reduce the durability of the structure exposed to erosion. The main aim of this study was to characterise the erosion behaviour of polymers and polymer composites, to understand the nature and the mechanisms of the material removal, and to suggest modifications and protective strategies for the effective reduction of material removal due to erosion.
For polymers, the effects of morphology and of mechanical, thermomechanical, and fracture-mechanical properties were discussed. It was established that there is no general rule for high resistance to erosive wear: because of the different erosive wear mechanisms that can take place, wear resistance can be achieved by more than one type of material. Difficulties in optimising materials for wear reduction arise from the fact that a material can show different behaviour depending on the impact angle and the experimental conditions. Effects of polymer modification through mixing or blending with elastomers and through inclusion of nanoparticles were also discussed. Toughness modification of epoxy resin with hygrothermally decomposed polyesterurethane can be favourable for the erosion resistance. This type of modification also changes the crosslinking characteristics of the modified EP, and it was established that the crosslink density, along with the fracture energy, is a decisive parameter for the erosion response. Melt blending of thermoplastic polymers with functionalised rubbers, on the other hand, can also have a positive influence, whereas the inclusion of nanoparticles deteriorates the erosion resistance at shallow impact angles (30°).
For polymer composites, the effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence, and the number, position, and existence of interleaves were studied. Linear and inverse rules of mixture were applied in order to predict the erosion rate of a composite system as a function of the erosion rates of its constituents and their relative content. The best results were generally delivered by the inverse rule of mixture.
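The two rules of mixture discussed above can be written down directly; the symbols (fibre and matrix erosion rates er_f and er_m, fibre volume fraction v_f) follow common usage and this is an illustration, not code from the study:

```python
def er_linear(er_f, er_m, v_f):
    """Linear rule of mixture: volume-weighted arithmetic mean of the
    erosion rates of fibre (er_f) and matrix (er_m)."""
    return v_f * er_f + (1.0 - v_f) * er_m

def er_inverse(er_f, er_m, v_f):
    """Inverse rule of mixture: volume-weighted harmonic mean, which the
    study found to fit measured composite erosion rates better."""
    return 1.0 / (v_f / er_f + (1.0 - v_f) / er_m)
```

Since the harmonic mean never exceeds the arithmetic mean, the inverse rule always predicts an erosion rate closer to that of the more erosion-resistant constituent.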
A semi-empirical model, originally proposed to describe property degradation and damage growth characteristics and to predict residual properties after a single impact, was applied to the case of solid particle erosion. Theoretical predictions and experimental results were in very good agreement.
Solid particle erosion occurs when solid particles impinge on surfaces and is usually characterised by material removal which, besides the particle velocity and the impact angle, depends strongly on the respective material. In recent years, the use of polymers and composite materials in place of traditional materials has increased considerably. Polymers and polymer composites exhibit a relatively high erosion rate (ER), which considerably limits the potential use of these materials under erosive conditions.
Investigations of the erosion behaviour of selected polymers and polymer composites have shown that these systems follow different wear mechanisms, which are very complex and are not governed by a single material property. Based on the ER, the erosion behaviour can be roughly divided into two categories: brittle and ductile. Brittle erosion behaviour shows a maximum ER at 90°, whereas the maximum for ductile behaviour lies at 30°. Whether a material exhibits one or the other behaviour depends not only on its properties but also on the respective test parameters.
The aim of this research was to characterise the fundamental behaviour of polymers and composite materials under erosion, to identify the various wear mechanisms, and to determine the decisive material properties and parameters, in order to enable or improve applications of these materials under erosive conditions. The main factors influencing erosion were determined experimentally on a representative selection of polymers, elastomers, modified polymers, and fibre-reinforced composites.
Thermoplastic polymers and thermoplastic and crosslinked elastomers
Attempts to correlate the erosion resistance of selected polymers (polyethylenes and polyurethanes) with various material properties showed that there is no clear dependence either on individual parameters or on combinations of properties. Determining the material properties under the same experimental conditions as in the erosion tests might possibly lead to a better correlation between ER and material parameters.
Modified epoxy resin
Using a modified epoxy resin (EP) with varying crosslink density as an example, a correlation was found between erosion resistance and fracture energy as well as between erosion resistance and crosslink density. The modification was carried out with different fractions of a hygrothermally decomposed polyurethane (HD-PUR). The relationship between ER and crosslinking parameters is consistent with the theory of rubber elasticity.
Modification efficiency in thermosets, thermoplastics, and elastomers
Furthermore, the influence of modifications of polymers and elastomers was investigated. The system mentioned above (i.e. EP/HD-PUR) also allows the influence of toughness modification of the epoxy resin (EP) on the erosion behaviour to be studied. It was shown that this modification has a positive influence on the erosion resistance for HD-PUR contents of more than 20 wt.%. By varying the HD-PUR content, material properties ranging between those of a conventional thermoset and those of a less elastic rubber can be produced for this EP. The modified EP resin therefore represents a very good model material for studying the influence of the experimental conditions and for investigating whether different erodents lead to the same erosion mechanisms. The transition from thermoset-like to ductile behaviour was examined using four erodents. The experiments showed that such a transition occurs when very fine, angular particles (corundum) serve as the erodent. Particle size and shape are of decisive importance for the respective wear mechanisms.
The efficiency of novel thermoplastic elastomers with a co-continuous phase structure, consisting of thermoplastic polyester and rubber (functionalised NBR and EPDM rubber), was examined with respect to erosion resistance. Large fractions of functionalised rubber (more than 20 wt.%) are beneficial for the erosion resistance. Furthermore, it was investigated whether the outstanding erosion resistance of polyurethane (PUR) can be increased even further by the addition of nanosilicates. The result was that the nanoparticles have a negative effect, above all at a shallow impact angle (30°). The weak adhesion between matrix and particles facilitates the initiation and growth of cracks, which leads to faster material removal from the surface.
Fibre-reinforced composites
Fibre-reinforced composites with thermoplastic and thermoset matrices were also examined with regard to their erosive wear behaviour. The influence of fibre length and orientation was of particular interest. Short-fibre-reinforced systems have a better erosion resistance than unidirectional (UD) systems. The role of fibre orientation can only be assessed in combination with other parameters such as matrix toughness, fibre content, or fibre/matrix adhesion. For GF/PP composites, the systems eroded parallel to the drawing direction show the lowest resistance; for a GF/EP system, on the other hand, the maximum ER occurs in the perpendicular direction. Improving the interfacial shear strength has a lasting influence on the erosive wear rate: when the interfacial adhesion is sufficient, the erosion direction plays an insignificant role for the ER. It was further shown that the presence of tough interleaves leads to a distinct improvement of the erosion resistance of CF/EP composites.
A further task was to determine the role of the fibre volume fraction. Linear, inverse, and modified rules of mixture were applied, and it was found that the inverse rules of mixture describe the ER as a function of the fibre volume fraction best.
In the application range of fibre-reinforced composites, knowledge not only of the ER but also of the residual properties is required. A semi-empirical model for predicting the impact energy threshold (Uo) for the onset of strength reduction and the residual tensile strength after impact loading was applied in the investigation of erosive wear. Experimental results and theoretical predictions agreed very well, not only for thermoset CF/EP composites but also for composites with a thermoplastic matrix (GF/PP).
The aim of this study was to investigate how the protagonists at the children's and youth sports schools (Kinder- und Jugendsportschulen, KJS) of the GDR reacted to the social upheavals of the reunification years, and to determine to what extent these events influenced the schools' educational characteristics. To this end, historical archive documents were analysed and 33 contemporary witnesses were questioned in 28 qualitative interviews. A key finding was that work at the KJS continued almost unchanged into 1991. By partially extending the topic to today's elite sports schools, the text contributes to the discussion of competitive sport in Germany and its specialised schools.
In addition to researchers, the work is aimed at parents, educators, coaches, and decision-makers with an interest in competitive sport and education policy.
The homotetrameric cytosolic chaperone SecB plays a crucial role in protein translocation in Escherichia coli, in which proteins are transported across the cytoplasmic membrane into the periplasmic space of the cell. It binds nascent polypeptides, keeps them in an unfolded, translocation-competent state, and delivers them to the translocation machinery at the cytoplasmic membrane. In vitro, SecB interacts with a number of unfolded proteins, for example the bovine pancreatic trypsin inhibitor (BPTI), and in vivo with the precursor of the maltose-binding protein (preMBP). Earlier studies provided evidence for a conformational change of the chaperone induced by binding of the model substrate BPTI. The present work focuses on the complex formation between the natural substrate preMBP and SecB, and on further investigations of the conformational change induced by substrate binding.
To make purification of the chaperones more time-efficient and to further increase their purity, the various SecB genes were recloned into pET20b(+) expression vectors. In the course of this cloning, the SecB sequences were fused with a thrombin cleavage site and a His-tag. In addition, two new SecB mutants (C109 and C113) were generated.
HPLC analysis of the preMBP-SecB complex showed that 2 M GdnHCl is sufficient to unfold preMBP and that unfolded preMBP elutes from the column faster than refolded preMBP. It was further demonstrated that refolding of preMBP is completed within a few seconds when the GdnHCl concentration is reduced from 3 M to 0.1 M. Complex formation occurred only when the chaperone was provided first and preMBP was added afterwards. Subsequent analysis showed co-elution of both proteins.
For the EPR spectroscopic studies, the SecB mutants were labelled with the spin labels MTS and IOPI. Continuous-wave (cw) EPR measurements at room temperature showed that the mobility of the spin label on the wild-type protein is much more restricted than at amino acid position 90. cw-EPR measurements on the wild-type chaperone at 180 K gave first indications that neighbouring spin labels are less than 20 Å apart. Comparison with distance data calculated by molecular modeling showed that the measured distances must correspond to the short distances between the cysteines of neighbouring subunits. It was further shown that the spin label at amino acid position 312 of preMBP is located at a distance of at most 20 Å.
The DEER measurements yielded distances between amino acid positions 97 of SecB of 19.3 Å for the short distances between directly neighbouring subunits and 51.2 Å for the long distances. Binding of the substrate BPTI to the chaperone revealed a double-scissor-like widening between the dimers in the region of the substrate-binding pocket. The distance data from amino acid position 90 also confirmed the widening of the binding pocket.