Epoxy resins have achieved acceptance as adhesives, coatings, and potting compounds,
but their main application is as a matrix for reinforced composites.
However, their usefulness in this field is still limited by their brittle nature. Several
studies have addressed increasing the toughness of epoxy composites, of which the
most successful is the modification of the polymer matrix with a second, toughening
phase.
Resin Transfer Molding (RTM) is one of the most important technologies for manufacturing
fiber reinforced composites. In the last decade it has experienced renewed impetus,
owing to its suitability for producing large-surface composites with good technical
properties at relatively low cost.
This research work focuses on the development of novel modified epoxy matrices,
with enhanced mechanical and thermal properties, suitable for processing by resin
transfer molding, to manufacture Glass Fiber Reinforced Composites
(GFRCs) with improved performance in comparison to commercially available
ones.
In the first stage of the project, a neat epoxy resin (EP) was modified using two different
nano-sized ceramics, silicon dioxide (SiO2) and zirconium dioxide (ZrO2), and
micro-sized silicone rubber (SR) particles as a second filler. Series of nanocomposites
and hybrid modified epoxy resins were obtained by systematic variation of the filler
contents. The rheology and curing behavior of the modified epoxy resins were determined
in order to assess their suitability for RTM processing. The resulting matrices
were extensively characterized, qualitatively and quantitatively, to determine the effect
of each filler on the polymer properties.
It was shown that the nanoparticles confer better mechanical properties on the epoxy
resin, including modulus and toughness. It was possible to improve the tensile modulus
and toughness of the epoxy matrix simultaneously by more than 30% and 50%,
respectively, using only 8 vol.-% nano-SiO2 as filler. A similar performance was
obtained with nanocomposites containing zirconia: the epoxy matrix modified with 8 vol.-% ZrO2 showed tensile modulus and toughness improved by up to 36% and 45%,
respectively, relative to EP.
On the other hand, the addition of silicone rubber to EP and to the nanocomposites results
in superior toughness but has a slightly negative effect on modulus and strength.
Adding 3 vol.-% SR to the neat epoxy and the nanocomposites increases their
toughness by a factor of 1.5 to 2.5, but also reduces their tensile modulus
and strength by 5-10%. Consequently, when the right proportions of nanoceramic
and rubber were added to the epoxy resin, hybrid epoxy matrices with fracture
toughness three times higher than EP, and at the same time up to 20% improved modulus, were
obtained.
Extensive investigations were carried out to identify the structural mechanisms responsible
for these improvements. It was found that each type of filler induces specific
energy-dissipating mechanisms during mechanical loading and fracture,
which are closely related to its nature, morphology and, of course, to
its bonding with the epoxy matrix. When both nanoceramic and silicone rubber are
present in the epoxy formulation, a superposition of their corresponding energy-release
mechanisms is generated, which provides the matrix with an unusual balance of
properties.
Glass fiber reinforced RTM plates were produced from the modified matrices. The
structure of the resulting composites was analyzed microscopically to determine their
impregnation quality. In all cases, composites with no structural defects (i.e. voids,
delaminations) and a good surface finish were obtained. The composites were also
thoroughly characterized. As expected, the final performance of the GFRCs is strongly
determined by the matrix properties; thus, the enhancement achieved in the epoxy matrices
translates into better macroscopic GFRC properties. Composites with strength enhanced by up
to 15% and toughness improved by up to 50% were obtained from
the modified epoxy matrices.
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is a highly toxic and persistent organic pollutant, which is ubiquitously found in the environment. The prototype dioxin compound was classified as a human carcinogen by the International Agency for Research on Cancer. TCDD acts as a potent liver tumor promoter in rats, which is one of the major concerns related to TCDD exposure. There is extensive evidence that TCDD exerts anti-estrogenic effects via arylhydrocarbon receptor (AhR)-mediated induction of cytochromes P450 and interferes with the estrogen receptor alpha (ERalpha)-mediated signaling pathway. The present work was conducted to shed light on the hypothesis that enhanced activation of estradiol metabolism by TCDD-induced enzymes, mainly CYP1A1 and CYP1B1, leads to oxidative DNA damage in liver cells. Furthermore, the possible modulation by 17beta-estradiol (E2) was investigated. The effects were examined using four different AhR-responsive species- and sex-specific liver cell models: rat H4II2 and human HepG2 hepatoma cell lines as well as primary hepatocytes from male and female Wistar rats. The effective induction of CYP1A1 and CYP1B1 by TCDD was demonstrated in all liver cell models. Basal and TCDD-induced expression of CYP1B1, a key enzyme in stimulating E2 metabolism via formation of the more reactive, genotoxic 4-hydroxyestradiol, was most pronounced in rat primary hepatocytes. CYP-dependent induction of reactive oxygen species (ROS) was only observed in rodent cells. E2 induced ROS only in primary rat hepatocytes, which was associated with a weak CYP1B1 mRNA induction. Thus, E2 itself was suggested to induce its own metabolism in primary rat hepatocytes, resulting in redox cycling of catechol estradiol metabolites leading to ROS formation. In this study the role of TCDD and E2 in oxidative DNA damage was investigated for the first time in vitro in the comet assay using liver cells.
Both TCDD and E2 were shown to induce oxidative DNA base modifications only in rat hepatocytes. Additionally, direct oxidative DNA-damaging effects of the two main E2 metabolites, 4-hydroxyestradiol and 2-hydroxyestradiol, were only observed in rat hepatocytes and revealed that both damaged the DNA to the same extent. However, the induction of oxidative DNA damage by E2 could not be completely explained by the metabolic conversion of E2 via CYP1A1 and CYP1B1 and has to be investigated further. The expression of low levels of endogenous ERalpha mRNA in primary rat hepatocytes and the lack of ERalpha in the hepatoma cell lines were identified as crucial. Therefore, the effects of interference of ERalpha with AhR were examined in HepG2 cells transiently transfected with ERalpha. The over-expression of ERalpha led to enhanced AhR-mediated transcriptional activity by E2, suggesting a possible regulation of E2 levels. In turn, TCDD reduced E2-mediated ERalpha signaling, confirming the anti-estrogenic action of TCDD. Such a modulation of the combined effects of TCDD and E2 was not observed in any of the other experiments. Thus, the role of low endogenous ERalpha levels has to be investigated further in transfection experiments using rat primary hepatocytes. Overall, rat primary hepatocyte culture turned out to be the more suitable cell model to investigate metabolism in the liver, reflecting the situation in liver tissue more realistically. Nevertheless, in this work a crosstalk between ERalpha and AhR was shown for the first time in the human hepatoma cell line HepG2 by transiently transfecting ERalpha.
We study the extension of techniques from Inductive Logic Programming (ILP) to temporal logic programming languages. To this end we present two temporal logic programming languages and analyse the learnability of programs in these languages from finite sets of examples. For first-order temporal logic the following topics are analysed:
- How can we characterize the denotational semantics of programs?
- Which proof techniques are best suited?
- How complex is the learning task?
For propositional temporal logic we analyse the following topics:
- How can we use well-known techniques from model checking in order to refine programs?
- How complex is the learning task?
In both cases we present estimates for the VC-dimension of selected classes of programs.
Proteins of the intermembrane space of mitochondria are generally encoded by nuclear genes and synthesized in the cytosol. A group of small intermembrane space proteins lack classical mitochondrial targeting sequences; these proteins are instead imported in an oxidation-driven reaction that relies on the activity of two components, Mia40 and Erv1, which together constitute the mitochondrial disulfide relay system. Mia40 functions as an import receptor that interacts with incoming polypeptides via transient, intermolecular disulfide bonds. Erv1 is an FAD-binding sulfhydryl oxidase that activates Mia40 by re-oxidation, but the process by which Erv1 itself is re-oxidized has been poorly understood. Here, I show that Erv1 interacts with cytochrome c, which provides a functional link between the mitochondrial disulfide relay system and the respiratory chain. This mechanism not only increases the efficiency of mitochondrial import by the re-oxidation of Erv1 and Mia40 but also prevents the formation of deleterious hydrogen peroxide within the intermembrane space. Thus, the mitochondrial disulfide relay system is, analogous to that of the bacterial periplasm, connected to the electron transport chain of the inner membrane, which possibly allows an oxygen-dependent regulation of mitochondrial import rates. In addition, I modeled the structure of Erv1 on the basis of the Saccharomyces cerevisiae Erv2 crystal structure in order to gain insight into the molecular mechanism of Erv1. Owing to the high degree of sequence homology, various characteristics found for Erv2 are also valid for Erv1. Finally, I propose a regulatory function of the disulfide relay system on the respiratory chain: the disulfide relay system senses molecular oxygen levels in mitochondria and is thus able to adapt respiratory chain activity in order to prevent wastage of NADH and production of ROS.
This dissertation is intended to give a systematic treatment of hypersurface singularities in arbitrary characteristic which provides the necessary tools, theoretical and computational, for the purpose of classification. The thesis consists of five chapters: In Chapter 1, we introduce the background on isolated hypersurface singularities needed for our work. In Chapter 2, we formalize the notion of piecewise-homogeneous grading and discuss thoroughly non-degeneracy in arbitrary characteristic. Chapter 3 is devoted to determinacy and normal forms of isolated hypersurface singularities. In the first part, we give finite determinacy theorems in arbitrary characteristic with respect to right and contact equivalence, respectively. Furthermore, we show that the "isolated" and finite determinacy properties are equivalent. In the second part, we formalize Arnol'd's key ideas for the computation of normal forms and define the conditions (AA) and (AAC). The last part of Chapter 3 is devoted to the study of normal forms in the general setting of hypersurface singularities, imposing neither condition (A) nor Newton non-degeneracy. In Chapter 4, we present algorithms, implemented in Singular, for the explicit computation of regular bases and normal forms. In Chapter 5, we transfer some classical results on invariants over the field C of complex numbers to algebraically closed fields of characteristic zero via the Lefschetz principle.
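As a reminder of the central notion behind the determinacy theorems of Chapter 3, the standard formulation of finite right determinacy (stated here in general terms, not verbatim from the thesis) is:

```latex
% For f in the maximal ideal \mathfrak{m} \subset K[[x_1,\dots,x_n]],
% f is right k-determined if its k-jet fixes it up to coordinate change:
f \ \text{is right } k\text{-determined} \;:\Longleftrightarrow\;
\forall g \in \mathfrak{m}:\ j^k g = j^k f \;\Longrightarrow\;
g = f \circ \Phi \ \text{for some automorphism } \Phi \text{ of } K[[x_1,\dots,x_n]].
```

Contact determinacy is analogous, allowing additionally multiplication by a unit, g = u (f ∘ Φ); f is finitely determined if it is k-determined for some k, which the thesis shows to be equivalent to f having an isolated singularity, in arbitrary characteristic.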
The subject of this book is an epistemological consideration - a consideration which could be characterised as a main theme - maybe the main theme - of that part of philosophy we all know as epistemology: the nature of knowledge. But unlike most essays on the subject of knowledge, here I am going to deal with a largely overlooked account in trying to answer the epistemological question of knowledge. This is the mental state account of knowledge (Price in his 'Belief' uses the formulation ``mental acts'', and Williamson talks about a ``state of mind''). Or, to put it into the question I chose as title: is knowledge a mental state? We have to concede first that there is only a small group of philosophers who explained knowledge in terms of a mental state, particularly the `Oxford Realists'. And secondly, the acceptance of the MS thesis is low and its reception negative. There is an interesting detail here: unlike the poor interest in an epistemic theory such as the MS thesis, philosophers like Prichard or Austin (and their philosophical thinking) are not really living in the shadows of philosophical consideration. Indeed their philosophical impact is considerable, if we consider for instance Prichard's moral writings or Austin's theory of speech acts. I think we can conclude from this that the reason for the ignorance of their epistemological point of view was not a poor quality of their philosophy. Now, the question we are faced with (and that should be answered here) is: what is wrong with the MS thesis even though it is held by first-rate philosophers? Why is the epistemic thinking of Cook Wilson, Prichard and Austin met with such ignorance? I will try to explain this later on with the notion of an unreflected Platonic heritage over 2000 years of epistemic thinking - a notion which is similar to a point Hetherington has called ``epistemic absolutism''.
So, there are three main purposes which I am pursuing in this consideration: 1. To explain the reasons why there is such ignorance towards an assertion of the MS thesis. I am going to pursue this through an analysis of knowledge which will demonstrate the inadequacy of the JTB thesis as an analysis of knowledge. 2. To show that it is a mistake to ignore or at least underestimate the MS thesis in the discussion of an appropriate definition of knowledge, and to maintain that the MS thesis is the key to a general theory of knowledge. 3. Conclusion: If the first two steps are correct, the JTB thesis is insufficient to give an account of the nature of knowledge in general. A consequence of this is: all the epistemic theories which build on the JTB thesis are based on deficient assumptions. Hence their results - notably the well-known externalism/internalism debate - are insufficient, too. So, there is a need for a new theory of knowledge based on the MS thesis. In the course of my consideration I am going to justify the following three theses: i) The JTB thesis as a definition of knowledge in general is deficient, as the JTB thesis describes the propositional aspect of knowledge only. But propositional knowledge - the so-called `knowledge that' - is merely one element among others that has to be recognized in search of a theory of knowledge. ii) The status of the `knowledge that' is derivative and not ultimate. It is derived from non-propositional knowledge in order to make the non-propositional knowledge communicable to others. The mode of the `knowledge that' is indirect and thus can be stated from the third-person point of view only. This ultimate kind of knowledge - the knowledge from which the `knowledge that' is derived - is the non-propositional knowledge. Its mode is direct and hence it is restricted to the first-person point of view.
Therefore the basis of a theory of knowledge in general has to be this non-propositional aspect of knowledge. iii) Hence, taking the first two theses for granted, an appropriate theory of knowledge needs an account of non-propositional knowledge. The MS thesis will accomplish this task.
Limit theorems constitute a classical and important field in probability theory. In several applications, in particular in demographic or medical contexts, killed Markov processes suggest themselves as models for populations undergoing culling by mortality or other processes. In these situations there is general mathematical interest in the observable distribution of survivors, which is known as the Yaglom limit or quasi-stationary distribution. Previous work often focuses on discrete state spaces, commonly birth-death processes (or with some more flexible localization of the transitions), with killing only on the boundary. The central concerns of this thesis are to describe, for a given class of one-dimensional diffusion processes, the quasi-stationary distributions (if any), and to describe the convergence (or not) of the process conditioned on survival to one of these quasi-stationary distributions. Rather general diffusion processes on the half-line are considered, where 0 is allowed to be a regular or an exit boundary. Very similar techniques are applied in this work in order to derive results on the large-time behavior of an exotic measure-valued process, which is closely related to so-called point interactions, which have been widely studied in the mathematical physics literature.
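For readers unfamiliar with the terminology, the two objects named above admit the following standard formulations (a general reminder, not specific to the thesis), for a process X killed at a random time T:

```latex
% \nu is a quasi-stationary distribution if conditioning on survival preserves it:
\mathbb{P}_{\nu}\!\left(X_t \in A \mid T > t\right) = \nu(A)
\qquad \text{for all } t \ge 0 \text{ and measurable } A;
% the Yaglom limit, when it exists, is the limiting conditional distribution
% started from a point x:
\nu(A) = \lim_{t \to \infty} \mathbb{P}_{x}\!\left(X_t \in A \mid T > t\right).
```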
This dissertation seeks to provide insights into the influences of individual and contextual factors on Technical and Vocational Education and Training (TVET) teachers’ learning and professional development in Ethiopia. Specifically, this research focuses on identifying and determining the influences of teachers’ self-perception as learners and professionals, and investigates the impact of the context, process and content of their learning and experiences on their professional development. Knowledge of these factors and their impacts helps in improving the learning and professional development of TVET teachers and their professionalization. The research addresses the following five research questions. (1) How do TVET teachers perceive themselves as active learners and as professionals? And what are the implications of their perceptions for their learning and development? (2) How do TVET teachers engage themselves in learning and professional development activities? (3) What contextual factors facilitated or hindered the TVET teachers’ learning and professional development? (4) Which competencies are found critical for the TVET teachers’ learning and professional development? (5) What actions need to be considered to enhance and sustain TVET teachers’ learning and professional development in their context? It is believed that the research results are significant not only to TVET teachers, but also to school leaders, TVET teacher training institutions, education experts and policy makers, researchers and other stakeholders in the TVET sector. The theoretical perspectives adopted in this research are based on the systemic constructivist approach to professional development. An integrated approach to professional development requires that teachers’ learning and development activities be treated as adult education based on the principles of constructivism.
Professional development is considered a context-specific and long-term process in which teachers are trusted, respected and empowered as professionals. Teachers’ development activities are seen as largely collaborative, reflecting the social nature of learning. Schools that facilitate the learning and development of teachers exhibit characteristics of a learning-organisation culture in which professional collaboration, collegiality and shared leadership are practiced. This research has also drawn relevant points of view from studies and reports on vocational education and TVET teacher education programs and practices at international, continental and national levels. The research objectives and the types of research questions in this study implied the use of a qualitative inductive research approach as the research strategy. Primary data were collected from TVET teachers in four schools using one-on-one qualitative in-depth interviews. These data were analyzed using qualitative content analysis based on the inductive category development procedure. ATLAS.ti software was used to support the coding and categorization process. The research findings showed that most of the TVET teachers perceive themselves neither as professionals nor as active learners. These perceptions are found to be one of the major barriers to their learning and development. Professional collaboration in the schools is minimal and teaching is seen as an isolated individual activity; a secluded task for the teacher. Self-directed learning initiatives and individual learning projects are not strongly evident. The predominantly teacher-centered approach used in TVET teacher education and professional development programs places emphasis mainly on the development of technical competences and has limited the development of a range of competences essential to teachers’ professional development.
Moreover, factors such as the TVET school culture, society’s perception of the teaching profession, economic conditions, and weak links with industries and business sectors are among the major contextual factors that hindered the TVET teachers’ learning and professional development. A number of recommendations are put forward to improve the professional development of the TVET teachers. These include change in the TVET school culture, a paradigm shift in TVET teacher education approach and practice, and the development of educational policies that support the professionalization of TVET teachers. Areas for further theoretical research and empirical enquiry are also suggested to support the learning and professional development of TVET teachers in Ethiopia.
Most software systems are described in high-level model or programming languages. Their runtime behavior, however, is determined by the compiled code. For uncritical software, it may be sufficient to test the runtime behavior of the code. For safety-critical software, there is an additional aggravating factor: the code must satisfy the formal specification which reflects the safety policy of the software consumer, and the software producer is obliged to demonstrate, using formal verification techniques, that the code is correct with respect to this specification. In this scenario, it is of great importance that static analyses and formal methods can be applied at the source code level, because this level is more abstract and better suited for such techniques. However, the results of the analyses and the verification can only be carried over to the machine code level if we can establish the correctness of the translation. Thus, compilation is a crucial step in the development of software systems, and formally verified translation correctness is essential to close the formalization chain from high-level formal methods to the machine-code level. In this thesis, I propose an approach to certifying compilers which achieves this aim by applying techniques from mathematical logic and programming language semantics. The approach, called foundational translation validation (FTV), has the software producer implement an FTV system comprising a compiler and a specification and verification framework (SVF) implemented in higher-order logic (HOL). The most important part of the SVF is an explicit translation contract which comprises the formalizations of the source and the target languages of the compiler and the formalization of a binary translation correctness predicate corrTrans(S,T) for source programs S and target programs T.
The formalizations of the languages are realized as deep embeddings in HOL. This enables one to declare a whole program in a formalized language as a HOL constant. The predicate formally specifies when T is considered to be a correct translation of S. Its definition is explicitly based on the program semantics definitions provided by the translation contract. Subsequent to the translation, the compiler translates the source and the target programs into their syntactic representations as HOL constants, S and T, and generates a proof of corrTrans(S,T). We call a compiler which follows the FTV approach a proof-generating compiler. Our approach borrows the idea of representing programs in correctness proofs as logic constants from the foundational proof-carrying code (FPCC) approach. Novel features that distinguish our approach from other approaches to certifying compilers, such as proof-carrying code (PCC) and translation validation (TV), are the following: Firstly, the presence of an explicit translation contract formalized in HOL: PCC and TV do not formalize a translation contract explicitly. Instead, they incorporate operational semantics and a translation correctness criterion in translation validation tools at the programming language level. Secondly, the representation of programs in correctness proofs as logic constants: PCC and TV translate programs into representations as semantic abstractions that serve as inputs for translation validation tools. Thirdly, the certification of program transformation chains: Unlike the TV approach, which certifies single program transformations, the FTV approach achieves the aim of certifying whole chains of program transformations.
This is possible due to the fact that the translation contract provides, for all programming languages involved in the program transformation chain, definitions of program semantics functions which map programs to mathematical objects that are elements of a set with an (at least) partial order "<=". Then, the proof makes use of the fact that the relation "<=" is transitive. In this thesis, the feasibility of the FTV approach is exemplified by the implementation of an FTV system. The system comprises a compiler front-end that certifies its optimization phase and an accompanying SVF that is implemented in the theorem prover Isabelle/HOL. The compiler front-end translates programs in a small C-like programming language, performs three optimizations: constant folding, dead assignment elimination, and loop invariant hoisting, and generates translation certificates in the form of Isabelle/HOL theories. The main focus of the thesis is on the description of the SVF and its translation verification techniques.
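The intent of the corrTrans(S,T) predicate can be illustrated with a deliberately loose sketch. The thesis generates genuine HOL proofs of the predicate; the fragment below merely tests semantic agreement of a toy source expression and its constant-folded translation on sample environments, so all names (evaluate, const_fold, corr_trans) and the expression representation are illustrative assumptions, not the thesis's API:

```python
# Toy expression language: numbers, variable names, or ("+" | "*", lhs, rhs).
def evaluate(expr, env):
    """Denotational semantics of an expression in environment env."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, lhs, rhs = expr
    a, b = evaluate(lhs, env), evaluate(rhs, env)
    return a + b if op == "+" else a * b

def const_fold(expr):
    """Optimization phase: fold subexpressions whose operands are constants."""
    if isinstance(expr, tuple):
        op, lhs, rhs = expr
        lhs, rhs = const_fold(lhs), const_fold(rhs)
        if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
            return lhs + rhs if op == "+" else lhs * rhs
        return (op, lhs, rhs)
    return expr

def corr_trans(source, target, test_envs):
    """Testing-based stand-in for the correctness predicate: S and T must
    agree on all sampled environments (a real proof quantifies over all)."""
    return all(evaluate(source, e) == evaluate(target, e) for e in test_envs)

# (2 * 3) + x is folded to 6 + x; the translation is validated on samples.
src = ("+", ("*", 2, 3), "x")
tgt = const_fold(src)
```

Unlike this sketch, the FTV approach does not sample: the compiler emits a machine-checkable proof object for the universally quantified statement.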
Photochemical reactions are of great interest due to their importance in chemical and biological processes. Highly sensitive IR/UV double and triple resonance spectroscopy in molecular beam experiments, in combination with ab initio and DFT calculations, yields information on reaction coordinates and Intersystem Crossing (ISC) processes subsequent to photoexcitation. In general, molecular beam experiments enable the investigation of isolated, cold molecules without any influence of the environment. Furthermore, small aggregates can be analyzed in a supersonic jet by gradually adding solvent molecules such as water. Conclusions concerning the interactions in solution can be derived by investigating and fully understanding small systems with a defined number of solvent molecules. In this work the first applications of combined IR/UV spectroscopy to reactive isolated molecules and triplet states in molecular beams, without using any messenger molecules, are presented. Special focus was placed on excited-state proton transfer reactions, which can also be described as keto-enol tautomerisms. Various molecules such as 3-hydroxyflavone, 2-(2-naphthyl)-3-hydroxychromone and 2,5-dihydroxybenzoic acid have been investigated with regard to this question. In the case of 3-hydroxyflavone and 2-(2-naphthyl)-3-hydroxychromone, the IR spectra have been recorded subsequent to an excited-state proton transfer. Furthermore, the dihydrate of 3-hydroxyflavone has been analyzed concerning a possible proton transfer in the excited state: the proton transfer reaction along the water molecules (proton wire) has to be induced by raising the excitation energy. However, photoinduced reactions involve not only singlet but also triplet states. As an archetype molecule, xanthone has been analyzed. After excitation to the S2 state, ISC occurs into the triplet manifold, leading to a population of the T1 state.
The IR spectrum of the T1 state has been recorded for the first time with the UV/IR/UV technique, without any messenger molecules. Altogether, it is shown that IR/UV double and triple resonance techniques are suitable tools for analyzing reaction coordinates of photochemical processes.
This thesis deals with the application of binomial option pricing in a single-asset Black-Scholes market and its extension to multi-dimensional situations. Although the binomial approach is, in principle, an efficient method for lower-dimensional valuation problems, there are at least two main problems regarding its application: Firstly, traded options often exhibit discontinuities, so that the Berry-Esséen inequality is in general tight; i.e. conventional tree methods converge no faster than with order 1/sqrt(N). Furthermore, they suffer from an irregular convergence behaviour that impedes achieving a higher order of convergence via extrapolation methods. Secondly, in multi-asset markets conventional tree construction methods cannot ensure well-defined transition probabilities for arbitrary correlation structures between the assets. As a major aim of this thesis, we present two approaches to get binomial trees into shape in order to overcome these problems: the optimal drift model for the valuation of single-asset options and the decoupling approach to multi-dimensional option pricing. The new valuation methods are embedded into a self-contained survey of binomial option pricing, which focuses on the convergence behaviour of binomial trees. The optimal drift model is a new one-dimensional binomial scheme that can lead to convergence of order o(1/N) by exploiting the specific structure of the valuation problem under consideration. As a consequence, it has the potential to outperform benchmark algorithms. The decoupling approach is presented as a universal construction method for multi-dimensional trees. The corresponding trees are well-defined for an arbitrary correlation structure of the underlying assets. In addition, they yield a more regular convergence behaviour. In fact, the sawtooth effect can even vanish completely, so that extrapolation can be applied.
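The conventional one-dimensional scheme that the optimal drift model improves upon is the Cox-Ross-Rubinstein (CRR) tree. A minimal sketch (standard textbook construction, not the thesis's optimal drift model) pricing a European call against the Black-Scholes closed form illustrates the setup; parameter values below are arbitrary examples:

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed form for a European call (benchmark value)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S, K, r, sigma, T, n):
    """European call on an n-step Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))          # up factor
    d = 1.0 / u                                  # down factor
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction through the tree.
    values = [max(S * u ** j * d ** (n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]
```

For an at-the-money call (S=K=100, r=0.05, sigma=0.2, T=1) the tree price approaches the Black-Scholes value as n grows, but for discontinuous payoffs the error decays only slowly and oscillates with n — the sawtooth effect that the optimal drift model and the decoupling approach address.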
This PhD thesis aims at finding a global robot navigation strategy for rugged off-road terrain which is robust against inaccurate self-localization, scalable to large environments, and also cost-efficient, i.e. able to generate navigation paths which optimize a cost measure closely related to terrain traversability. In order to meet this goal, aspects of both metrical and topological navigation techniques are combined. A primarily topological map is extended with the previously lacking capability of cost-efficient path planning and map extension. Further innovations include a multi-dimensional cost measure for topological edges, a method to learn these costs based on live feedback from the robot, and a set of extrapolation methods to predict the traversability costs of untraversed edges. The thesis presents two sophisticated new image analysis techniques to optimize cost prediction based on the shape and appearance of the surrounding terrain. Experimental results indicate that the proposed global navigation system is indeed able to perform cost-efficient, large-scale path planning. At the same time, it avoids the need to maintain a fine-grained global world model, which would reduce the scalability of the approach.
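Cost-efficient planning on a topological map of this kind can be pictured as shortest-path search over edges whose multi-dimensional traversability costs are collapsed to a scalar by a weight vector. The sketch below (Dijkstra search over a hand-made graph; all names, cost dimensions and weights are illustrative assumptions, not the thesis's implementation) shows how the weighting can steer the robot onto a longer but smoother route:

```python
import heapq

def plan_path(edges, start, goal, weights):
    """Dijkstra over an undirected topological graph.
    edges:   {(u, v): {"distance": ..., "roughness": ...}}
    weights: weight per cost dimension; scalar edge cost = weighted sum."""
    graph = {}
    for (u, v), cost in edges.items():
        scalar = sum(weights[k] * cost[k] for k in weights)
        graph.setdefault(u, []).append((v, scalar))
        graph.setdefault(v, []).append((u, scalar))
    queue = [(0.0, start, [start])]   # (accumulated cost, node, path so far)
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == goal:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (total + c, nxt, path + [nxt]))
    return float("inf"), []
```

With roughness weighted as heavily as distance, a detour over smooth edges can beat the geometrically shortest route; setting the roughness weight to zero recovers plain shortest-distance planning.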
Adaptive Extraction and Representation of Geometric Structures from Unorganized 3D Point Sets
(2009)
The primary emphasis of this thesis concerns the extraction and representation of intrinsic properties of three-dimensional (3D) unorganized point clouds. The points establishing a point cloud, as it mainly emerges from LiDAR (Light Detection and Ranging) scan devices or from reconstruction based on two-dimensional (2D) image series, represent discrete samples of real-world objects. Depending on the type of scenery the data is generated from, the resulting point cloud may exhibit a variety of different structures. Especially in the case of environmental LiDAR scans, the complexity of the corresponding point clouds is relatively high. Hence, finding new techniques allowing the efficient extraction and representation of the underlying structural entities has become an important research issue of recent interest. This thesis introduces new methods regarding the extraction and visualization of structural features like surfaces and curves (e.g. ridge lines, creases) from 3D (environmental) point clouds. One main part concerns the extraction of curve-like features from environmental point data sets. It provides a new method supporting stable feature extraction by incorporating a probability-based point classification scheme that characterizes individual points regarding their affiliation to surface-, curve- and volume-like structures. Another part is concerned with surface reconstruction from (environmental) point clouds exhibiting objects of varying complexity. A new method providing multi-resolutional surface representations from regular point clouds is discussed. Following the principles of this approach, a volumetric surface reconstruction method based on the proposed classification scheme is introduced. It allows the reconstruction of surfaces from highly unstructured and noisy point data sets. Furthermore, contributions in the field of reconstructing 3D point clouds from 2D image series are provided.
In addition, a discussion concerning the most important properties of (environmental) point clouds with respect to feature extraction is presented.
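The surface-/curve-/volume-affiliation of individual points mentioned above is commonly derived from the eigenvalues of the covariance matrix of a point's local neighbourhood. The following sketch illustrates that general principle in a simplified, deterministic form; it is not the probability-based scheme of the thesis, and the threshold `ratio` is an arbitrary assumption:

```python
import math

def eigvals_sym3(a):
    """Eigenvalues (descending) of a symmetric 3x3 matrix via the
    standard trigonometric solution of the characteristic polynomial."""
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p2 = (sum((a[i][i] - q) ** 2 for i in range(3))
          + 2.0 * (a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2))
    if p2 < 1e-12:                      # matrix is (numerically) q * I
        return [q, q, q]
    p = math.sqrt(p2 / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)]
         for i in range(3)]
    detb = (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
            - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
            + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))
    r = max(-1.0, min(1.0, detb / 2.0))
    phi = math.acos(r) / 3.0
    l1 = q + 2.0 * p * math.cos(phi)
    l3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return [l1, 3.0 * q - l1 - l3, l3]

def classify_neighbourhood(points, ratio=0.25):
    """Label a local point neighbourhood as curve-, surface- or
    volume-like from the eigenvalues of its covariance matrix."""
    n = len(points)
    mean = [sum(p[k] for p in points) / n for k in range(3)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    l1, l2, l3 = eigvals_sym3(cov)
    if l1 < 1e-12:
        return "volume"                 # degenerate: all points coincide
    if l2 / l1 < ratio:
        return "curve"                  # one dominant spread direction
    if l3 / l1 < ratio:
        return "surface"                # two dominant spread directions
    return "volume"                     # spread in all three directions
```

A neighbourhood sampled from a plane yields two dominant eigenvalues ("surface"), one sampled from a line yields one ("curve"); a probability-based scheme like the one in the thesis would replace the hard thresholds by soft memberships.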
Microfibrillar reinforced composites (MFC) have attracted considerable academic and practical interest since the concept was introduced more than a decade ago. This new type of composite is created by blending two polymers with different melting temperatures and processing the blend under certain thermo-mechanical conditions to generate in-situ formed microfibrils of the higher-melting polymer in the blend. Compression molded microfibrillar composites were reported to possess excellent mechanical properties, and thus they are promising materials for different applications. In the present work, a typical immiscible polymer blend, PET/PP, was selected for the preparation of PET/PP and PET/PP/TiO2 microfibrillar reinforced composites. The objective of this study is to analyse the processing-structure-property relationship in the PET/PP based MFCs. The morphology of the PET microfibrils and the dispersion of the TiO2 nanoparticles were characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and discussed. The crystallization behaviour of PET and PP was studied by means of differential scanning calorimetry (DSC). The thermomechanical and mechanical properties of the composites were determined by dynamic mechanical thermal analysis (DMTA) and uniaxial tensile tests, and the related results were discussed as a function of the composition of the corresponding system. During stretching of the PET/PP extrudate, the dispersed PET phase was deformed into microfibrils. These microfibrils were still well preserved after compression molding of the drawn strands. The PET microfibrils therefore acted as reinforcement for the PP matrix. Compared with neat PP, the tensile properties of the PET/PP MFC were greatly improved.
For the PET/PP/TiO2 MFC, the effects of polypropylene grafted with maleic anhydride (PP-g-MA, introduced as compatibilizer) and TiO2 particles on the structure and properties of drawn strands and composites were investigated. Upon the addition of PP-g-MA, the preferential location of the TiO2 particles changed: they migrated from the dispersed PET phase to the continuous PP matrix phase. This was accompanied by structural changes of the drawn strands. The microfibril formation mechanism was also investigated. After injection molding of the microfibrillar composites, the preferential location of the TiO2 particles was still preserved. DMTA analysis of the drawn strands and tensile and impact tests of the composites demonstrated that the mechanical properties of the drawn strands and of the microfibrillar composites were strongly dependent on the respective structures of the tested materials. To further investigate the preferential location of TiO2 particles in the PET/PP blend, which was discovered during the preparation of the PET/PP/TiO2 MFCs, PET/PP/TiO2 ternary nanocomposites were prepared according to four blending procedures. The preferential location of the TiO2 nanoparticles was influenced by the blending sequence and the amount of PP-g-MA incorporated. Furthermore, it was discovered that TiO2 nanoparticles exerted a compatibilizing effect on the morphology of the composites. Three different compatibilization mechanisms of the nanoparticles were proposed, depending on the location of the nanoparticles.
Elastomers and their various composites and blends are frequently used as engineering components subjected to rolling friction. This fact alone substantiates the importance of a study addressing the rolling tribological properties of elastomers and their compounds. It is worth noting that until now research and development work on the friction and wear of rubber materials has mostly focused on abrasion and, to a lesser extent, on sliding-type loading. As tribological knowledge acquired with various material pairings excluding rubbers can hardly be transferred to rubber pairings, there is a substantial need to study the latter. Therefore, the present work aimed at investigating the rolling friction and wear properties of different kinds of elastomers against steel under unlubricated conditions. In this research, the rolling friction and wear properties of various rubber materials were studied in home-made rolling ball-on-plate test configurations under dry conditions. The materials inspected were ethylene/propylene/diene rubber (EPDM) without and with carbon black (EPDM_CB), hydrogenated acrylonitrile/butadiene rubber (HNBR) without and with carbon black/silica/multiwall carbon nanotubes (HNBR_CB/silica/MWCNT), a rubber-rubber hybrid (HNBR and fluororubber (HNBR-FKM)), and a rubber-thermoplastic blend (HNBR and cyclic butylene terephthalate oligomers (HNBR-CBT)). The dominant wear mechanisms were investigated by scanning electron microscopy (SEM) and analyzed as a function of composition and testing conditions. Differential scanning calorimetry (DSC), dynamic-mechanical thermal analysis (DMTA), atomic force microscopy (AFM), and transmission electron microscopy (TEM), along with other auxiliary measurements, were adopted to determine the phase structure and network-related properties of the rubber systems. The changes in friction and wear as a function of the type and amount of the additives were explored.
The friction process of selected rubbers was also modelled by making use of the finite element method (FEM). The results show that the incorporation of filler generally enhanced the wear resistance, hardness, stiffness (storage modulus), and apparent crosslinking of the related rubbers (EPDM-, HNBR- and HNBR-FKM based ones), but did not affect their glass transition temperature. Filling of rubbers usually reduced the coefficient of friction (COF). However, the tribological parameters also strongly depended on the test set-up and test duration. High wear loss was noticed for systems showing the occurrence of a Schallamach-type wavy pattern. The blends HNBR-FKM and HNBR-CBT had two-phase structures. In HNBR-FKM, the FKM was dispersed in the form of large micro-scaled domains in the HNBR matrix. This phase structure was not changed by the incorporation of MWCNT. It was established that the MWCNT was preferentially embedded in the HNBR matrix. Blending HNBR with FKM reduced the stiffness and the degree of apparent crosslinking of the blend, which was traced to the dilution of the cure recipe by FKM. Contrary to expectation, the coefficient of friction increased with increasing FKM content. On the other hand, the specific wear rate (Ws) changed only marginally with increasing FKM content. In the HNBR-CBT hybrids, HNBR was the matrix, irrespective of the rather high CBT content. Both the partly and the mostly polymerized CBT ((p)CBT and pCBT, respectively) in the hybrids worked as active fillers and thus increased the stiffness and hardness. The COF and Ws decreased with increasing CBT content. The FEM results with respect to the COF, obtained on systems possessing very different structures and thus properties (EPDM_30CB, HNBR-FKM 100-100 and HNBR-(p)CBT 100-100, respectively), were in accordance with the experimental results. This verifies that FEM can properly be used to model the complex viscoelastic behaviour of rubber materials under dry rolling conditions.
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time the measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The MATLAB-based analysis framework and the visualization have been integrated, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analyses with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics.
To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
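FastBit itself is a compressed bitmap index; the underlying principle (one bitmap per value bin, with range queries answered by bitwise operations) can be illustrated in a few lines. The following is a hypothetical toy, not FastBit's actual data structures or API; Python integers stand in for the compressed bitmaps:

```python
def build_bitmap_index(values, bin_edges):
    """One bitmap (a Python int) per bin: bit i is set when record i
    falls into that bin. bin_edges are right-open bin boundaries."""
    bitmaps = [0] * (len(bin_edges) + 1)
    for i, v in enumerate(values):
        b = sum(1 for e in bin_edges if v >= e)   # bin number of v
        bitmaps[b] |= 1 << i
    return bitmaps

def query_at_least(bitmaps, lo_bin):
    """OR together all bitmaps for bins >= lo_bin: a one-sided range query."""
    result = 0
    for b in bitmaps[lo_bin:]:
        result |= b
    return result

def matching_records(bitmask):
    """Decode a result bitmask back into record indices."""
    return [i for i in range(bitmask.bit_length()) if (bitmask >> i) & 1]
```

A selection over several particle attributes (e.g. momentum and position) combines by a bitwise AND of the per-attribute result masks, which is what makes this scheme fast for finding high-energy particle subsets in very large datasets.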
This thesis deals with three important aspects of optimal investment in real-world financial markets: taxes, crashes, and illiquidity. An introductory chapter reviews the portfolio problem in its historical context and motivates the theme of this work: we extend the standard modelling framework to include specific real-world features and evaluate their significance. In the first chapter, we analyze the optimal portfolio problem with capital gains taxes, assuming that taxes are deferred until the end of the investment horizon. The problem is solved with the help of a modification of the classical martingale method. The second chapter is concerned with optimal asset allocation under the threat of a financial market crash. The investor takes a worst-case attitude towards the crash, so her investment objective is to be best off in the most adverse crash scenario. We first survey the existing literature on the worst-case approach to optimal investment and then present in detail the novel martingale approach to worst-case portfolio optimization. The first part of this chapter is based on joint work with Ralf Korn. In the last chapter, we investigate optimal portfolio decisions in the presence of illiquidity. Illiquidity is understood as a period in which it is impossible to trade on financial markets. We use dynamic programming techniques in combination with abstract convergence results to solve the corresponding optimal investment problem. This chapter is based on joint work with Holger Kraft and Peter Diesinger.
This study deals with optimal control problems for glass tube drawing processes, where the aim is to control the (circular) cross-sectional area of the tube using the adjoint variable approach. The tube drawing process is modeled by four coupled nonlinear partial differential equations, which are derived from the axisymmetric Stokes equations and the energy equation using an approach based on asymptotic expansions with the inverse aspect ratio as small parameter. Existence and uniqueness of the solutions of the stationary isothermal model are also proved. By defining a cost functional, we formulate the optimal control problem. The Lagrange functional associated with the minimization problem is then introduced, and the first- and second-order optimality conditions are derived. We implemented optimization algorithms based on the steepest descent, nonlinear conjugate gradient, BFGS, and Newton approaches. In the Newton method, CG iterations are used to solve the Newton equation. Numerical results are obtained for two different cases: in the first case, the cross-sectional area is controlled over the entire time domain, and in the second case, the area at the final time is controlled. We also compared the performance of the optimization algorithms in terms of solution iterations, functional evaluations, and computation time.
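Of the four optimization approaches compared in the abstract, steepest descent is the simplest. A generic sketch with an Armijo backtracking line search, applied here to a hypothetical quadratic cost rather than the actual tube-drawing functional, looks as follows:

```python
def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimize f by steepest descent with an Armijo backtracking
    line search (sufficient-decrease parameter c = 1e-4)."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gg = sum(gi * gi for gi in g)       # squared gradient norm
        if gg ** 0.5 < tol:
            break                           # first-order optimality reached
        fx = f(x)
        t = 1.0
        # halve the step until the Armijo condition holds
        while (t > 1e-14 and
               f([xi - t * gi for xi, gi in zip(x, g)]) > fx - 1e-4 * t * gg):
            t *= 0.5
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x
```

The nonlinear CG, BFGS, and Newton-CG variants mentioned in the abstract differ only in how the descent direction is chosen; the line-search skeleton stays essentially the same.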
Product development with end-user integration is not an end in itself but a logical necessity due to the divergent types of knowledge of the user and the developer of a product. While the user is an expert with regard to the product's usage, the developer is an expert in the product's construction and functioning. For the development of high-end products, both types of expertise have always been a prerequisite. The efficient and thorough integration of the user's perspective into existing product development approaches is the core of user-centred product development. Activities that are the basic ingredients of virtually any user-centred development approach can be roughly categorized into analysis, design, and evaluation activities. Research and practice prove that the early integration of real end-users within those activities adds significant and sustainable value to product innovation. The instrumental, methodological, and procedural impact of globalization tendencies, on modern user-centred product development in particular, is the primary research focus of the field of cross-cultural user-centred product development. This research aims at the further advancement of the methodological foundations of cross-cultural user-centred product development approaches based on a stable and profound theoretical basis. The primary research objects are established user-analysis methodologies, which are mainly based on Western concepts and theories, and their applicability in the disparate cultural contexts of the Far East (China and Korea in particular). To facilitate the adaptation of abstract method characteristics to the situational context of method application, as the foundation of cross-cultural methodological advancement, a model of method localization was developed. In alignment with internationalization and localization activities within product development processes, a framework for localizing user-centred methodologies was developed.
Equivalent to internationalization activities in real product development, the abstraction of method traits from specific methodologies is a necessary first step. Methodological adaptation with the primary objective of optimizing the situational application of a methodology is then done in a second step, the step of method localization. This model of method localization and its underlying theories and principles were tested within an extensive empirical study in Germany, China, and Korea. Within this study, the applicability of six distinct user-centred product development methodologies, each with its very own profile of abstract method traits, was tested with 248 participants in total. The results clearly back the basic hypothesis of method localization, i.e. that the applicability of a user-centred methodology rises and falls with the alignment of its characteristic traits with the cross-cultural application context. Beyond that, the applicability-influencing factors identified within this study were proven to be valid indicators of the adaptation necessities and potentials of user-centred product development methodologies.
This dissertation deals with two main subjects. Both are strongly related to boundary problems for the Poisson equation and the Laplace equation, respectively. The oblique boundary problem of potential theory as well as the limit formulae and jump relations of potential theory are investigated. We divide this abstract into two parts and start with the oblique boundary problem. Here we prove existence and uniqueness results for solutions to the outer oblique boundary problem for the Poisson equation under very weak assumptions on the boundary, coefficients, and inhomogeneities. The main tools are the Kelvin transformation and the solution operator for the regular inner problem, provided in my diploma thesis. Moreover, we prove regularization results for the weak solutions of both the inner and the outer problem. We investigate the non-admissible direction for the oblique vector field, state results with stochastic inhomogeneities, and provide a Ritz-Galerkin approximation. Finally, we show that the results are applicable to problems from Geomathematics. Now we come to the limit formulae. There we combine the modern theory of Sobolev spaces with the classical theory of limit formulae and jump relations of potential theory. The convergence in Lebesgue spaces for integrable functions is already treated in the literature. The achievement of this dissertation is this convergence for the weak derivatives of higher orders. The layer functions are also elements of Sobolev spaces, and the surface is a suitably smooth two-dimensional submanifold in three-dimensional space. We consider the potential of the single layer, the potential of the double layer, and their first-order normal derivatives. The main tool in the proof in the Sobolev norm is the uniform convergence of the tangential derivatives, which is proved with the help of some results taken from the literature.
Additionally, we need a result about the limit formulae in the Lebesgue spaces, which is also taken from the literature, and a reduction result for normal derivatives of harmonic functions. Moreover, we prove the convergence in the Hölder spaces. Finally, we give an application of the limit formulae and jump relations. Based on the results proved before, we generalize a known density result for several function systems from Geomathematics, from the Lebesgue space of square-integrable measurable functions to density in Sobolev spaces. To this end, we have to prove the limit formula of the single-layer potential in dual spaces of Sobolev spaces, where the layer function is also an element of such a distribution space.
The enamide moiety is an important substructure often encountered in biologically active compounds and synthetic drugs. Furthermore, enamides and their derivatives are versatile synthetic intermediates for polymerization, [4+2] cycloaddition, cross-coupling, Heck olefination, halogenation, enantioselective addition, or asymmetric hydrogenation. Traditional syntheses of this important substrate class involve rather harsh reaction conditions such as high temperatures and/or the use of strong bases. In continuation of our work on the addition of secondary amides to alkynes, we have developed a broadly applicable protocol for the catalytic addition of N-nucleophiles such as primary amides, imides, and thioamides to terminal alkynes. The choice of ligands and additives determines the regiochemical outcome, so that with two complementary catalyst systems, both the E-anti-Markovnikov products and the Z-anti-Markovnikov products can be synthesized with high regio- and stereoselectivity.
It was recently reported that imatinib causes cell death in neonatal rat ventricular cardiomyocytes (NRVCM) by triggering endoplasmic reticulum (ER) stress and collapse of the mitochondrial membrane potential. Retroviral gene transfer of an imatinib-resistant mutant c-Abl into NRVCM appeared to alleviate imatinib-induced cell death, and it was concluded that the observed imatinib-induced cytotoxicity is mediated through direct interactions of imatinib with c-Abl. The imatinib effects were described as being specific for cardiomyocytes only, and as relevant also for the in vivo situation in man [Kerkelä et al. 2006]. The goal of the present study was to reproduce the published experiments and to further explore the dose-response relationship of imatinib-induced cell death in cardiomyocytes. Additional markers of toxicity were investigated. The following biochemical assays were applied: LDH release (membrane leakage marker), MTS reduction (marker of mitochondrial integrity), cellular ATP content (energy homoeostasis), and caspase 3/7 activity (apoptosis). The ER stress markers eIF2α (eukaryotic initiation factor 2α), XBP1 (X-box binding protein 1), and CHOP (cAMP response element-binding transcription factor (C/EBP) homologous protein) were determined at the transcriptional and protein level. Online monitoring of cell attachment, oxygen consumption, and acidification of the medium by rat heart cells (H9c2) seeded on chips (Bionas) allowed determination of the onset and reversibility of changes in cellular functions. Image analysis measured the spontaneous beating rates after imatinib treatment. The role of imatinib-induced reactive oxygen species was evaluated directly by 2',7'-dichlorofluorescein fluorescence and indirectly by means of interference experiments with antioxidants. Whether the imatinib-induced effects were specific to cardiomyocytes was evaluated in fibroblasts derived from rat heart, lung, and skin.
The specific role of c-Abl in the imatinib-induced cellular toxicity was investigated by specific gene silencing of c-Abl in NRVCM. The results demonstrated that imatinib caused concentration-dependent cytotoxicity, apoptosis, and ER stress in heart, skin, and lung fibroblasts, similar to or stronger than that observed in cardiomyocytes. Similar to the results from cardiomyocytes, ER stress markers in fibroblasts were only increased at cytotoxic concentrations of imatinib. This effect was not reversible; also, reactive oxygen species did not participate in the mechanism of the imatinib-induced cytotoxicity in NRVCM. Small interfering RNA (siRNA)-mediated reduction of c-Abl mRNA levels by 51 % and of c-Abl protein levels by 70 % neither had an effect on the spontaneous beating frequency of cardiomyocytes nor induced cytotoxicity, apoptosis, mitochondrial dysfunction, or ER stress in NRVCM. Incubation of imatinib with c-Abl siRNA-transfected NRVCM showed that reduced c-Abl protein levels did not rescue cardiomyocytes from imatinib-induced cytotoxicity. In conclusion, the results from this study do not support a specific c-Abl-mediated mechanism of cytotoxicity in NRVCM.
This thesis is devoted to two main topics (accordingly, there are two chapters): In the first chapter, we establish a tropical intersection theory with notions and tools analogous to those of its algebro-geometric counterpart. This includes tropical cycles, rational functions, intersection products of Cartier divisors and cycles, morphisms and their functors, the projection formula, and rational equivalence. The most important features of this theory are the following: - It unifies and simplifies many of the existing results of tropical enumerative geometry, which often contained involved ad-hoc computations. - It is indispensable for formulating and solving further tropical enumerative problems. - It shows deep relations to the intersection theory of toric varieties and connected fields. - The relationship between tropical and classical Gromov-Witten invariants found by Mikhalkin is made plausible from inside tropical geometry. - It is interesting in its own right as a subfield of convex geometry. In the second chapter, we study tropical gravitational descendants (i.e. Gromov-Witten invariants with incidence and "Psi-class" factors) and show that many concepts of classical Gromov-Witten theory, such as the famous WDVV equations, can be carried over to the tropical world. We use this to extend Mikhalkin's results to a certain class of gravitational descendants, i.e. we show that many of the classical gravitational descendants of P^2 and P^1 x P^1 can be computed by counting tropical curves satisfying certain incidence conditions and with prescribed valences of their vertices. Moreover, the presented theory is not restricted to plane curves and therefore provides an important tool to derive similar results in higher dimensions. A more detailed chapter synopsis can be found at the beginning of each individual chapter.
The thesis at hand deals with the numerical solution of multiscale problems arising in the modeling of processes in fluid dynamics and thermodynamics. Many of these processes, governed by partial differential equations, are relevant in engineering, geoscience, and environmental studies. More precisely, this thesis discusses the efficient numerical computation of effective macroscopic thermal conductivity tensors of high-contrast composite materials. The term "high-contrast" refers to large variations in the conductivities of the constituents of the composite. Additionally, this thesis deals with the numerical solution of Brinkman's equations. This system of equations adequately models viscous flows in (highly) permeable media. It was introduced by Brinkman in 1947 to reduce the deviations between the measurements for flows in such media and the predictions according to Darcy's model.
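For reference, Brinkman's model interpolates between the Stokes and Darcy regimes; in a common modern formulation (the exact form used in the thesis may differ in scaling), it reads

```latex
-\mu_{\mathrm{eff}}\,\Delta u \;+\; \mu\,\kappa^{-1}\,u \;+\; \nabla p \;=\; f,
\qquad \nabla \cdot u \;=\; 0,
```

where u is the velocity, p the pressure, μ the fluid viscosity, μ_eff the effective viscosity, and κ the permeability of the medium. Letting κ → ∞ recovers the Stokes equations, while dropping the Laplacian term recovers Darcy's law; the deviations Brinkman sought to correct arise precisely in highly permeable media, where neither limit is adequate.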
The goal of this work is the development and investigation of an interdisciplinary and in itself closed hydrodynamic approach to the simulation of dilute and dense granular flow. The definition of "granular flow" is a nontrivial task in itself. We say that it is either the flow of grains in a vacuum or in a fluid. A grain is an observable piece of a certain material, for example stone when we mean the flow of sand. Choosing a hydrodynamic view of granular flow, we treat the granular material as a fluid. A hydrodynamic model is developed that describes the process of flowing granular material. This is done through a system of partial differential equations and algebraic relations. This system is derived from the kinetic theory of granular gases, which is characterized by inelastic collisions, extended with approaches from soil mechanics. Solutions to the system have to be obtained to understand the process. The equations are so difficult to solve that an analytical solution is out of reach, so approximate solutions must be obtained. Hence the next step is the choice or development of a numerical algorithm to obtain approximate solutions of the model. As with every problem in numerical simulation, these two steps do not lead to a result without an implementation of the algorithm. Hence the author attempts to present this work in the following frame: to participate in and contribute to the three areas of physics, mathematics, and software implementation, and to approach the simulation of granular flow in a combined and interdisciplinary way. This work is structured as follows. A continuum model for granular flow which covers the regime of fast dilute flow as well as slow dense flow down to vanishing velocity is presented in the first chapter. This model is strongly nonlinear in the dependence of the viscosity and other coefficients on the hydrodynamic variables, and it is singular because some coefficients diverge towards the maximum packing fraction of the grains.
Hence the second difficulty, the challenging task of numerically obtaining approximate solutions for this model, is faced in the second chapter. In the third chapter we aim at the validation of both the model and the numerical algorithm through numerical experiments and investigations, and show their application to industrial problems. There we focus intensively on the shear flow experiment from the experimental and analytical work of Bocquet et al., which serves well to demonstrate the algorithm and all boundary conditions involved, and which provides a setting for analytical studies against which to compare our results. The fourth chapter rounds off the work with the implementation of both the model and the numerical algorithm in a software framework for the solution of complex rheology problems, developed as part of this thesis.
Interactive visualization of large structured and unstructured data sets is a permanent challenge for scientific visualization. Large data sets are created, for example, by magnetic resonance imaging (MRI), computed tomography (CT), computational fluid dynamics (CFD), the finite element method (FEM), and computer aided design (CAD). For visualizing those data sets, not only rasterization accelerated by specialized hardware, i.e. graphics cards, is of interest, but also ray casting, as it is perfectly suited for scientific visualization. Ray casting does not only support many rendering modes (e.g., opaque rendering, semi-transparent rendering, iso-surface rendering, maximum intensity projection, x-ray, absorption emitter model, ...) for which it allows the creation of high-quality images, but it also supports many primitives (e.g., not only triangles but also spheres, curved iso-surfaces, NURBS, implicit functions, ...). It furthermore scales basically linearly with the number of processor cores used and - this makes it highly interesting for the visualization of large data sets - it scales sublinearly with data size for static scenes. Interactive ray casting is currently not widely used within the scientific visualization community. This is mainly due to historical reasons, as just a few years ago no applicable interactive ray casters for commodity hardware existed. Interactive scientific visualization has only been possible by using graphics cards or specialized and/or expensive hardware. The goal of this work is to broaden the possibilities for interactive scientific visualization by showing that interactive CPU-based ray casting is today feasible on commodity hardware and that it may efficiently be used together with GPU-based rasterization. In this thesis it is first shown that interactive CPU-based ray casters may efficiently be integrated into already existing OpenGL frameworks.
This is achieved through an OpenGL-friendly interface that supports multiple threads and single instruction multiple data (SIMD) operations. For the visualization of rectilinear (and not necessarily Cartesian) grids, new implicit kd-trees are introduced. They have fast construction times and low memory requirements, and they allow interactive iso-surface ray tracing and maximum intensity projection of large scalar fields on today's commodity desktop machines. A new interactive SIMD ray tracing technique for large tetrahedral meshes is introduced. It is very portable and general and is therefore well suited for porting to different (future) hardware and for use in several applications. The thesis ends with a real-life commercial application which shows that CPU-based ray casting has already reached the state where it may outperform GPU-based rasterization for scientific visualization.
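Among the rendering modes named above, maximum intensity projection is the simplest to sketch: each output pixel keeps the largest scalar sample met along its viewing ray. For an axis-aligned view of a rectilinear volume this reduces to a single array reduction (the volume below is synthetic, for illustration only):

```python
import numpy as np

# Synthetic rectilinear scalar field standing in for a CT/MRI volume.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))

# Maximum intensity projection along the z-axis: every output pixel keeps
# the largest sample encountered along its (axis-aligned) viewing ray.
mip = volume.max(axis=2)

assert mip.shape == (64, 64)
# Each projected value dominates any single slice along the ray.
assert np.all(mip >= volume[:, :, 0])
```

For general camera positions a ray caster traverses the grid cells along each ray instead of reducing along an array axis, which is exactly where acceleration structures such as the implicit kd-trees above come in.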
In the context of inverse optimization, inverse versions of maximum flow and minimum cost flow problems have been thoroughly investigated. In these network flow problems there are two important problem parameters: the flow capacities of the arcs and the costs incurred by sending a unit flow on these arcs. Capacity changes for maximum flow problems and cost changes for minimum cost flow problems have been studied under several distance measures such as the rectilinear, Chebyshev, and Hamming distances. This thesis also deals with inverse network flow problems and their counterparts, tension problems, under the aforementioned distance measures. The major goals are to enrich inverse optimization theory by introducing new inverse network problems that have not yet been treated in the literature, and to extend the well-known combinatorial results of inverse network flows to more general classes of problems with inherent combinatorial properties, such as matroid flows on regular matroids and monotropic programming. To accomplish the first objective, the inverse maximum flow problem under the Chebyshev norm is analyzed and the capacity inverse minimum cost flow problem, in which only arc capacities are perturbed, is introduced. In this way, we attempt to close the gap between the capacity-perturbing inverse network problems and the cost-perturbing ones. The foremost purpose of studying inverse tension problems on networks is to achieve a well-established generalization of the inverse network problems. Since tensions are duals of network flows, carrying the theoretical results of network flows over to tensions follows quite intuitively. Using this intuitive link between network flows and tensions, a generalization to matroid flows and monotropic programs is gradually built up.
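The three distance measures named above differ only in how the perturbation of the parameter vector is aggregated: rectilinear (l1) sums the changes, Chebyshev (l-infinity) takes the largest single change, and Hamming counts the changed entries. A minimal numeric illustration (the capacity vectors are hypothetical):

```python
import numpy as np

# Hypothetical original and perturbed arc capacities of a small network.
c_original = np.array([4.0, 7.0, 2.0, 5.0])
c_perturbed = np.array([4.0, 9.0, 2.0, 3.5])
d = c_perturbed - c_original  # perturbation vector

rectilinear = np.abs(d).sum()   # l1: total amount of change
chebyshev = np.abs(d).max()     # l-infinity: largest single-arc change
hamming = np.count_nonzero(d)   # number of arcs changed at all

print(rectilinear, chebyshev, hamming)  # 3.5 2.0 2
```

An inverse problem then asks for the perturbation minimizing one of these measures subject to a prescribed flow becoming optimal or feasible.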
The manuscript is divided into seven chapters. Chapter 2 briefly introduces the reader to the elementary measures of classical continuum mechanics and thus familiarizes the reader with the employed notation. Furthermore, deeper insight into the proposed first-order computational homogenization strategy is presented. Based on the need for a discrete representative volume element (rve), Chapter 3 focuses on a proper rve generation algorithm. Therein, the algorithm itself is described in detail. Additionally, we introduce the concept of periodicity. The chapter closes with multiple representative examples. A potential-based soft particle contact method, used for the computations on the microscale level, is defined in Chapter 4. Included are a description of the discrete element method (dem) used as well as of the applied macroscopically driven Dirichlet boundary conditions. The chapter closes with the proposal of a proper solution algorithm as well as illustrative representative examples. Homogenization of the discrete microscopic quantities is discussed in Chapter 5. Therein, the focus is on the upscaling of the aggregate energy as well as on the derivation of related macroscopic stress measures. The quantities necessary for the coupling between a standard finite element method and the proposed discrete microscale are presented in Chapter 6. Therein, we turn to the derivation of the macroscopic tangent, necessary for the inclusion into standard finite element programs. Chapter 7 focuses on the incorporation of inter-particle friction. We choose to derive a variationally based formulation of inter-particle friction forces, founded on a proposed reduced incremental potential. This contribution closes with a discussion as well as an outlook.
Within this thesis we present a novel approach to the modeling of strong discontinuities in a three-dimensional finite element framework for large deformations. This novel finite element framework is modularly constructed, containing three essential parts: (i) the bulk problem, (ii) the cohesive interface problem and (iii) the crack tracking problem. Within this modular design, Chapter 2 (Continuous solid mechanics) treats the behavior of the bulk problem (i). It includes the overall description of the continuous kinematics, the required balance equations, the constitutive setting and the finite element formulation with its corresponding discretization and the solution strategy required for the emerging highly non-linear equations. Subsequently, we discuss the modeling of strong discontinuities within finite element discretization schemes in Chapter 3 (Discontinuous solid mechanics). Starting with an extension of the continuous kinematics to the discontinuous situation, we discuss the phantom-node discretization scheme based on the works of Hansbo & Hansbo. Thereby, in addition to a comparison with the extended finite element method (XFEM), importance is attached to the technical details of the adaptive introduction of the required discontinuous elements: the splitting of finite elements, the numerical integration, the visualization and the formulation of geometrically correct crack tip elements. In Chapter 4 (The cohesive crack concept), we consider the treatment of cohesive process zones and the associated treatment of cohesive tractions. By applying this approach we are able to merge all irreversible failure mechanisms accompanying crack propagation into an arbitrary traction-separation relation. Additionally, this concept ensures bounded crack tip stresses and allows the use of stress-based failure criteria for the determination of crack growth.
In summary, the use of the discontinuous elements in conjunction with cohesive traction separation allows the mesh-independent computation of crack propagation along pre-defined crack paths. Therefore, this combination is defined as the interface problem (ii) and represents the next building block in the modular design of this thesis. The description and the computation of the evolving crack surface, based on the actual status of a considered specimen, is the key issue of Chapter 5 (Crack path tracking strategies). In contrast to the two-dimensional case, where tracking the path in a C0-continuous way is straightforward, three-dimensional crack path tracking requires additional strategies. We discuss the currently available approaches regarding this issue and further compare them by means of common quality measures. In the modular design of this thesis these algorithms represent the last main part, which is classified as the crack tracking problem (iii). Finally, Chapter 6 (Representative numerical examples) verifies the finite element tool by comparing the computational results with experiments and benchmarks of engineering fracture problems in concrete. Afterwards, the finite element tool is applied to model folding-induced fracture of geological structures.
This thesis is devoted to applying symbolic methods to the problems of decoding linear codes and of algebraic cryptanalysis. The paradigm we employ here is as follows. We reformulate the initial problem in terms of systems of polynomial equations over a finite field. The solution(s) of such systems should yield a way to solve the initial problem. Our main tools for handling polynomials and polynomial systems in this paradigm are the technique of Gröbner bases and normal form reductions. The first part of the thesis is devoted to formulating and solving specific polynomial systems that reduce the problem of decoding linear codes to the problem of polynomial system solving. We analyze the existing methods (mainly for cyclic codes) and propose an original method for arbitrary linear codes that in some sense generalizes the Newton identities method widely known for cyclic codes. We investigate the structure of the underlying ideals and show how one can solve the decoding problem - both the so-called bounded decoding and the more general nearest codeword decoding - by finding reduced Gröbner bases of these ideals. The main feature of the method is that, unlike the usual Gröbner basis methods for "finite field" situations, we do not add the so-called field equations. This tremendously simplifies the underlying ideals, thus making it feasible to work with quite large code parameters. Further, we address complexity issues by giving some insight into the Macaulay matrix of the underlying systems. By making a series of assumptions we are able to provide an upper bound for the complexity coefficient of our method. We also address finding the minimum distance and the weight distribution. We provide solid experimental material and comparisons with some of the existing methods in this area. In the second part we deal with the algebraic cryptanalysis of block iterative ciphers.
Namely, we analyze the small-scale variants of the Advanced Encryption Standard (AES), which is a widely used modern block cipher. Here a cryptanalyst composes polynomial systems whose solutions should yield a secret key used by communicating parties in a symmetric cryptosystem. We analyze the systems formulated by researchers for algebraic cryptanalysis and identify the problem that conventional systems have many auxiliary variables that are not actually needed for the key recovery. Moreover, having many such auxiliary variables, specific to a given plaintext/ciphertext pair, complicates the use of several pairs, which is common in cryptanalysis. We thus provide a new system in which the auxiliary variables are eliminated via normal form reductions. The resulting system in key variables only is then solved. We present experimental evidence that such an approach is quite good for small-scale ciphers. We investigate our approach further and employ the so-called meet-in-the-middle principle to see how far one can go in analyzing just 2-3 rounds of scaled ciphers. Additional "tuning techniques" are discussed together with experimental material. Overall, we believe that the material of this part of the thesis takes a further step in the algebraic cryptanalysis of block ciphers.
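The key-variables-only idea can be illustrated at toy scale: a system of polynomial equations over GF(2) in the key bits alone, here solved by exhaustive search rather than Gröbner bases. The three equations are invented for illustration and are far simpler than any scaled AES system; over GF(2), addition is XOR and multiplication is AND:

```python
from itertools import product

# Invented toy system over GF(2) in key bits k0, k1, k2:
#   k0*k1 + k2    = 1
#   k0 + k1 + k2  = 0
#   k0*k1         = 1
def system(k0, k1, k2):
    return ((k0 * k1 ^ k2) == 1,
            (k0 ^ k1 ^ k2) == 0,
            (k0 * k1) == 1)

# Brute force over all 2^3 candidate keys; a Gröbner basis computation
# would recover the same variety symbolically.
solutions = [k for k in product((0, 1), repeat=3) if all(system(*k))]
print(solutions)  # [(1, 1, 0)] - the unique key
```

The point of the thesis' construction is that, after normal form reductions, the real systems have exactly this shape (key variables only), so that multiple plaintext/ciphertext pairs simply contribute more equations over the same variables.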
In engineering and science, a multitude of problems exhibit an inherently geometric nature. The computational assessment of such problems requires an adequate representation by means of data structures and processing algorithms. One of the most widely adopted and recognized spatial data structures is the Delaunay triangulation which has its canonical dual in the Voronoi diagram. While the Voronoi diagram provides a simple and elegant framework to model spatial proximity, the core of which is the concept of natural neighbors, the Delaunay triangulation provides robust and efficient access to it. This combination explains the immense popularity of Voronoi- and Delaunay-based methods in all areas of science and engineering. This thesis addresses aspects from a variety of applications that share their affinity to the Voronoi diagram and the natural neighbor concept. First, an idea for the generalization of B-spline surfaces to unstructured knot sets over Voronoi diagrams is investigated. Then, a previously proposed method for \(C^2\) smooth natural neighbor interpolation is backed with concrete guidelines for its implementation. Smooth natural neighbor interpolation is also one of many applications requiring derivatives of the input data. The generation of derivative information in scattered data with the help of natural neighbors is described in detail. In a different setting, the computation of a discrete harmonic function in a point cloud is considered, and an observation is presented that relates natural neighbor coordinates to a continuous dependency between discrete harmonic functions and the coordinates of the point cloud. Attention is then turned to integrating the flexibility and meritable properties of natural neighbor interpolation into a framework that allows the algorithmically transparent and smooth extrapolation of any known natural neighbor interpolant. 
Finally, essential properties are proved for a recently introduced novel finite element tessellation technique in which a Delaunay triangulation is transformed into a unique polygonal tessellation.
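The Delaunay/Voronoi duality underlying the natural neighbor concept is directly available in standard libraries; a minimal sketch using SciPy (the point set is arbitrary, with corner points added so the query lies inside the convex hull):

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)
# Random planar sites plus the unit-square corners, so that the hull
# covers [0,1]^2 and the query point below is guaranteed to be inside.
points = np.vstack([rng.random((30, 2)),
                    [[0, 0], [0, 1], [1, 0], [1, 1]]])

tri = Delaunay(points)  # Delaunay triangulation of the sites
vor = Voronoi(points)   # its dual Voronoi diagram

# The vertices of the Delaunay simplex containing a query point q are
# always among q's natural neighbors (the full set may be larger: all
# sites whose Voronoi cells would cede area to q upon insertion).
q = np.array([[0.5, 0.5]])
simplex = int(tri.find_simplex(q)[0])
candidate_neighbors = tri.simplices[simplex]
print(candidate_neighbors)  # indices of the three sites surrounding q
```

This is only the combinatorial access layer; the interpolation and extrapolation schemes developed in the thesis build the actual natural neighbor coordinates on top of it.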
Continuous stochastic control theory has found many applications in optimal investment. However, it lacks some realism, as it is based on the assumption that interventions are costless, which yields optimal strategies where the controller has to intervene at every time instant. This thesis examines two types of more realistic control methods together with possible applications. In the first chapter, we study the stochastic impulse control of a diffusion process. We suppose that the controller minimizes expected discounted costs accumulating as running and controlling costs, respectively. Each control action causes costs which are bounded from below by some positive constant. This makes continuous control impossible, as it would lead to an immediate ruin of the controller. We give a rigorous development of the relevant theory, where our guideline is to establish verification and convergence results under minimal assumptions, without focusing on the existence of solutions to the corresponding (quasi-)variational inequalities. If the impulse control problem can be characterized or approximated by (quasi-)variational inequalities, it remains to solve these equations. In Section 1.2, we solve the stochastic impulse control problem for a one-dimensional diffusion process with constant coefficients and convex running costs. Further, in Section 1.3, we solve a particular multi-dimensional example, where the uncontrolled process is given by an at least two-dimensional Brownian motion and the cost functions are rotationally symmetric. By symmetry, this problem can be reduced to a one-dimensional problem. In the last section of the first chapter, we suggest a new impulse control problem, where the controller is additionally allowed to invest his initial capital into a market consisting of a money market account and a risky asset.
The costs which arise upon controlling the diffusion process and upon trading in this market have to be paid out of the controller's bond holdings. The aim of the controller is to minimize the running costs, caused by the abstract diffusion process, without getting ruined. The second chapter is based on a paper which is joint work with Holger Kraft and Frank Seifried. We analyze the portfolio decision of an investor trading in a market where the economy switches randomly between two possible states: a normal state, where trading takes place continuously, and an illiquidity state, where trading is not allowed at all. We allow for jumps in the market prices at the beginning and at the end of a trading interruption. Section 2.1 provides an explicit representation of the investor's portfolio dynamics in the illiquidity state in an abstract market consisting of two assets. In Section 2.2 we specify this market model and assume that the investor maximizes expected utility from terminal wealth. We establish convergence results as the maximal number of liquidity breakdowns goes to infinity. In the Markovian framework of Section 2.3, we provide the corresponding Hamilton-Jacobi-Bellman equations and prove a verification result. We apply these results to study the portfolio problem for a logarithmic investor and an investor with a power utility function, respectively. Further, we extend this model to an economy with three regimes. For instance, the third state could model an additional financial crisis in which trading is still possible, but the excess return is lower and the volatility is higher than in the normal state.
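The natural benchmark against which such illiquidity results are compared is the classical Merton problem: in a frictionless market, a power-utility (CRRA) investor with relative risk aversion gamma holds the constant fraction pi* = (mu - r) / (gamma * sigma^2) of wealth in the risky asset. A minimal computation (the parameter values are hypothetical, chosen only for illustration):

```python
# Classical frictionless Merton fraction for CRRA utility:
#   pi* = (mu - r) / (gamma * sigma^2)
# Parameter values below are hypothetical.
def merton_fraction(mu, r, sigma, gamma):
    """Constant optimal risky-asset fraction in the Merton model."""
    return (mu - r) / (gamma * sigma ** 2)

mu, r = 0.08, 0.02   # risky drift and risk-free rate
sigma = 0.2          # volatility of the risky asset

print(merton_fraction(mu, r, sigma, 2.0))  # power investor: 0.75
print(merton_fraction(mu, r, sigma, 1.0))  # log investor: 1.5
```

Trading interruptions and price jumps at their endpoints distort this constant-fraction policy, which is precisely the effect the chapter quantifies.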
A series of (oligo)phenothiazine-, thiazolium salt- and sulfonic acid-functionalized organic/inorganic hybrid materials was synthesized. The organic groups were covalently bound to the inorganic surface through reactions of organosilane precursors with TEOS or with the silanol groups of the material surface. These synthetic methods are called the co-condensation process and post-grafting. The structures and the textural parameters of the generated hybrid materials were characterized by XRD, N2 adsorption-desorption measurements, SEM and TEM. The incorporation of the organic groups was verified by elemental analysis, thermogravimetric analysis, FT-IR, UV-Vis, EPR, CV, as well as by 13C CP-MAS NMR and 29Si CP-MAS NMR spectroscopy. The introduction of various organic groups endows these hybrid materials with different physical and chemical properties. The (oligo)phenothiazines provide a group of novel redox-active hybrid materials with special electronic and optical properties. The thiazolium salt modified materials were applied as heterogenized organocatalysts for the benzoin condensation and the cross-coupling of aldehydes with acylimines to yield α-amido ketones. The sulfonic acid containing materials can not only be used as Brønsted acid catalysts, but can also serve as ion-exchangeable supports for further modifications and applications.
This thesis deals with the following question: given a moduli space of coherent sheaves on a projective variety with a fixed Hilbert polynomial, find a natural construction that replaces the subvariety of those sheaves that are not locally free on their support (we call such sheaves singular) by some variety consisting of sheaves that are locally free on their support. We consider this problem for the example of the coherent sheaves on \(\mathbb P_2\) with Hilbert polynomial 3m+1.
Given a singular coherent sheaf \(\mathcal F\) with a singular curve C as its support, we replace \(\mathcal F\) by locally free sheaves \(\mathcal E\) supported on a reducible curve \(C_0\cup C_1\), where \(C_0\) is a partial normalization of C and \(C_1\) is an extra curve bearing the degree of \(\mathcal E\). These bundles resemble the bundles considered by Nagaraj and Seshadri. Many properties of the singular 3m+1 sheaves are inherited by the new sheaves we introduce in this thesis (we call them R-bundles). We consider R-bundles as natural replacements of the singular sheaves. R-bundles refine the information about 3m+1 sheaves on \(\mathbb P_2\). Namely, for every isomorphism class of singular 3m+1 sheaves there are \(\mathbb P_1\)-many equivalence classes of R-bundles. There is a variety \(\tilde M\) of dimension 10 that may be considered as the space of all the isomorphism classes of the non-singular 3m+1 sheaves on \(\mathbb P_2\) together with all the equivalence classes of all R-bundles. This variety is obtained by blowing up the moduli space of 3m+1 sheaves on \(\mathbb P_2\) along the subvariety of singular sheaves. We modify the definition of a 3m+1 family and obtain a notion of a new family over an arbitrary variety S. In particular, 3m+1 families of the non-singular sheaves on \(\mathbb P_2\) are families in this sense. New families over one point are either non-singular 3m+1 sheaves or R-bundles. For every variety S we introduce an equivalence relation on the set of all new families over S. The notion of equivalence for families over one point coincides with isomorphism for non-singular 3m+1 sheaves and with equivalence for R-bundles. We obtain a moduli functor \(\tilde{\mathcal M}:(Sch) \rightarrow (Sets)\) that assigns to every variety S the set of the equivalence classes of the new families over S.
There is a natural transformation of functors \(\tilde{\mathcal M}\rightarrow \mathcal M\) that establishes a relation between \(\tilde{\mathcal M}\) and the moduli functor \(\mathcal M\) of the 3m+1 moduli problem on \(\mathbb P_2\). There is also a natural transformation \(\tilde{\mathcal M} \rightarrow Hom(-,\tilde M)\), inducing a bijection \(\tilde{\mathcal M}(pt)\cong \tilde M\), which means that \(\tilde M\) is a coarse moduli space of the moduli problem \(\tilde{\mathcal M}\).
Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D-image data that manual processing becomes infeasible. This leads to a need for robust, flexible and automatic image analysis tools in areas such as histology or materials science, where microstructures are being investigated (e.g. cells, fiber systems). General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures, and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures. The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be easily adapted to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Using algorithms for fitting this statistical model to images results in a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm. While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three dimensional treatment. 
Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it will be demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed using an established alternative gradient-based algorithm, both in 2D and 3D. The property of robustness to noise of the proposed algorithm can be explained by the fact that a low-pass filter is used to detect local orientations. But even in the absence of noise, depending on fiber curvature and density, the average local 3D-orientation estimate can be about 9° more accurate compared to that alternative gradient-based method. Implementations of that novel orientation estimation method require repeated image filtering using anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive when using standard implementations. Therefore, the third contribution of this thesis is a novel optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere, and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve an average performance gain by factors of 3.8 and 3.5, respectively, compared to a fast Fourier transform-based implementation. The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D-analysis of cells in histological resections, and in the 2D and 3D-analysis of fibrous materials.
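The cost being addressed can be seen from the standard axis-aligned case: a d-dimensional Gaussian convolution with per-axis sigmas separates into d one-dimensional passes, and the thesis' non-orthogonal separation extends this factorization to arbitrarily oriented anisotropic kernels. The axis-aligned baseline, sketched with SciPy on a synthetic image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # synthetic test image

# An axis-aligned anisotropic Gaussian (different sigma per axis)
# separates exactly into one 1-D convolution per axis; kernels oriented
# off-axis require the non-orthogonal separation discussed above.
sigmas = (3.0, 1.0)
separable = gaussian_filter1d(
    gaussian_filter1d(image, sigmas[0], axis=0), sigmas[1], axis=1)
direct = gaussian_filter(image, sigmas)

assert np.allclose(separable, direct)
```

Each 1-D pass costs O(n) per pixel in the kernel radius n instead of O(n^d) for the full kernel, which is why a separation valid for oriented kernels pays off in the repeated filtering the orientation estimator performs.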
Proprietary polyurea-based thermosets (3P resins) were produced from polymeric methylene diphenylisocyanate (PMDI) and water glass (WG) using a phosphate emulsifier. Polyisocyanates, when combined with WG in the presence of a suitable emulsifier, result in very versatile products. WG acts in the resulting polyurea, through a special sol-gel route, as a cheap precursor of the silicate (xerogel) filler produced in-situ. The particle size of the silicate is coarse and its distribution very broad, which affects the mechanical properties of the 3P systems negatively. The research strategy was to achieve initially a fine water-in-oil type (W/O = WG/PMDI) emulsion by “hybridising” the polyisocyanate with suitable thermosetting resins (such as vinylester (VE), melamine/formaldehyde (MF) or epoxy resin (EP)). As the presently used phosphate emulsifiers may leak into the environment, the research work was directed at finding “reactive” emulsifiers which can be chemically built into the final polyurea-based thermosets. The progressive elimination of the organic phosphate, following the European Community Regulation on chemicals and their safe use (REACH), was studied, and alternative emulsifiers for the PMDI/WG systems were found. The new hybrid systems in which the role of the phosphate emulsifier has been taken over by suitable resins (VE, EP) or additives (MF) are designated 2P resins. Further, the cure behaviour (DSC, ATR-IR), chemorheology (plate/plate rheometer), morphology (SEM, AFM) and mechanical properties (flexure, fracture mechanics) have been studied accordingly. The property upgrade targeted not only the mechanical performance but also thermal and flame resistance. Therefore, emphasis was placed on improving the thermal and fire resistance (e.g. TGA, UL-94 flammability test) of the in-situ filled hybrid resins. Improvements in the fracture mechanical properties as well as in the flexural properties of the novel 3P and 2P hybrids were obtained.
This was accompanied in most cases by a pronounced reduction of the polysilicate particle size as well as by a finer dispersion. Further, the complex reaction kinetics of the reference 3P resin was studied, and some of the main reactions taking place during the curing process were established. The pot life of the hybrid resins was in most cases prolonged, which facilitates the subsequent processing of such resins. The thermal resistance was also enhanced for all the novel hybrids. However, the hybridization strategy (mostly with EP and VE) did not yield satisfactory results with respect to fire resistance. Efforts will be made in the future to overcome this problem. Finally, it was confirmed that the elimination of the organic phosphate emulsifier was feasible, yielding the so-called 2P hybrids. These, in many cases, showed improved fracture mechanical, flexural and thermal resistance properties as well as a finer and more homogeneous morphology. The novel hybrid resins with unusual characteristics (e.g. curing under wet conditions and even in water) are promising matrix materials for composites in various application fields such as infrastructure (rehabilitation of sewers), building and construction (refilling), transportation (coating of vessels, pipes of improved chemical resistance)…
Photonic crystals are inhomogeneous dielectric media with a periodic variation of the refractive index. A photonic crystal provides new tools for the manipulation of photons and has thus received great interest in a variety of fields. Photonic crystals are expected to be used in novel optical devices such as thresholdless laser diodes, single-mode light emitting diodes, small waveguides with low-loss sharp bends, small prisms, and small integrated optical circuits. In some respects they can act as "left-handed materials", which are capable of focusing transmitted waves into a sub-wavelength spot due to negative refraction. The thesis focuses on the applications of photonic crystals in communications and optical imaging:
• Photonic crystal structures for potential dispersion management in optical telecommunication systems
• 2D non-uniform photonic crystal waveguides with a square lattice for wide-angle beam refocusing using negative refraction
• 2D non-uniform photonic crystal slabs with a triangular lattice for all-angle beam refocusing
• A compact phase-shifted band-pass transmission filter based on photonic crystals