In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
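The construction described above can be illustrated with the classical Cox-Ross-Rubinstein tree. The following minimal Python sketch (the standard textbook model, not one of the advanced models developed in this thesis) prices a European call as a discounted expectation over the terminal nodes; by weak convergence, the result approaches the Black-Scholes price as the number of steps grows.

```python
import math

def crr_call(s0, strike, r, sigma, maturity, steps):
    """European call price on a Cox-Ross-Rubinstein binomial tree.

    The up/down factors are chosen so that the discrete log-price
    converges in distribution to the Black-Scholes model, hence the
    tree expectation approximates the continuous-time option price.
    """
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))       # up factor
    d = 1.0 / u                               # down factor
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    # discounted expectation of the payoff over the steps+1 terminal nodes
    return math.exp(-r * maturity) * sum(
        math.comb(steps, j) * q**j * (1.0 - q)**(steps - j)
        * max(s0 * u**j * d**(steps - j) - strike, 0.0)
        for j in range(steps + 1)
    )
```

With s0 = strike = 100, r = 0.05, sigma = 0.2 and T = 1, the computed prices oscillate around the Black-Scholes value of about 10.45 as the step number increases: exactly the slow, irregular convergence that motivates the advanced models of the thesis.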
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case, the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions that are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third-order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second-order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for a multidimensional setting.
Three-dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high-precision structured-light scanners. Lower-precision point data is acquired by real-time structured-light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two-dimensional (2d) camera images.
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU-based stereo matching method is presented to achieve a high-quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi-level approach on the GPU allows fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for the scanning of complex shaped objects. The operator of the scanner can scan complex regions more slowly, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer-aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows the reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree-based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation with a region-growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
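The thinning step can be illustrated with a flat, single-cell-size stand-in for the octree: one representative point (the centroid) is kept per cubic cell. This is a hypothetical simplification for illustration only; an octree additionally lets the cell size adapt to the local point density.

```python
from collections import defaultdict

def thin_point_cloud(points, cell_size):
    """Reduce a 3D point cloud to one centroid per cubic grid cell.

    A single-level stand-in for octree-based thinning: points are
    bucketed by the integer cell they fall into, and each bucket is
    replaced by its centroid.
    """
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        buckets[key].append(p)
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]
```

Normals and curvatures would then be estimated on this reduced set, which is much smaller than the raw scanner output.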
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures that the boundary is closed and avoids self-intersections.
This PhD thesis deals with the calculation and application of a new class of invariants that can be used to recognize patterns in tensor fields (i.e. scalar fields, vector fields and matrix fields) and, via the composition of scalar fields with delta functions, also in point clouds.
In the first chapter, an overview of existing invariants is given.
In the second chapter, the general definition of the new invariants is given: starting from a tensor field, a set of moment tensors is created by convolving the field, in a tensor-product manner, with tensor powers of the position vector of different orders. From these, rotation-invariant values are calculated via contraction of tensor products. An algorithm to obtain a complete and independent set of invariants from a given moment tensor set is described, as well as methods to make these sets of invariants invariant under translation, rotation, scaling, and affine transformations.
In the third chapter, a method to optimize the calculation of these sets of invariants is described: every invariant can be modeled as an undirected graph comprising multiple sub-graphs that represent partially contracted tensor products of the moment tensors. The computation of the sets of invariants is optimized by a clever choice of the decomposition into sub-graphs, with all paths forming a hyper-graph of sub-graphs in which each node describes a composition step. Finally, C++ source code is generated that is optimized using the symmetries of the different tensors and tensor products, and the computational effort is compared to that of other methods for calculating invariants.
The fourth chapter describes the application of the invariants to object recognition in point clouds from 3D scans. To this end, the invariants of sub-sets of the point clouds are stored for every known object. Afterwards, invariants are calculated from an unknown point cloud and looked up in the database in order to assign the point cloud to one of the known objects. Benchmarks on three 3D object databases measure runtime and recognition rate.
The study addresses the effect of multiple jet passes and other parameters, namely feedrate, water pressure and standoff distance, in waterjet peening of metallic surfaces. An analysis of surface integrity was used to evaluate the performance of the different parameters in the process. An increase in the number of jet passes and in pressure leads to higher roughness, more erosion and also higher hardness. In contrast, the feedrate has the reverse effect on those surface characteristics. There exists a specific value of the standoff distance that results in maximum surface roughness, erosion and hardness. Analysis of the surface microstructure gave good insight into the mechanism of the material removal process, involving initial and evolved damage. The waterjet peening process was also optimized based on a design-of-experiments approach. The developed empirical models showed reasonable correlations between the measured and predicted responses. A proper selection of waterjet peening parameters can thus be formulated for practical use.
Mechanical ventilation of patients with severe lung injury is an important clinical treatment to ensure proper lung oxygenation and to mitigate the extent of collapsed lung regions. While current imaging technologies such as Computed Tomography (CT) and chest X-ray allow for a thorough inspection of the thorax, they are limited to static pictures and exhibit several disadvantages, including exposure to ionizing radiation and high cost. Electrical Impedance Tomography (EIT) is a novel method to determine functional processes inside the thorax such as lung ventilation and cardiac activity. EIT reconstructs the internal electrical conductivity distribution within the thorax from voltage measurements on the body surface. Conductivity changes correlate with important clinical parameters such as lung volume and perfusion. Current EIT systems and algorithms use simplified or generalized thorax models to solve the reconstruction problem, which reduce image quality and anatomical significance. In this thesis, the development of a clinically relevant workflow to compute sophisticated three-dimensional thorax models from patient-specific CT data is described. The method allows medical experts to generate a multi-material segmentation in an interactive and fast way, while a volumetric mesh is computed automatically from the segmentation. The significantly improved image quality and anatomical precision of EIT images reconstructed with these 3D models is reported, and the impact on clinical applicability is discussed. In addition, three projects concerning quantitative CT (qCT) measurements and multi-modal 3D visualization are presented, which demonstrate the importance and productivity of interdisciplinary research groups including computer scientists and medical experts. The results presented in this thesis contribute significantly to clinical research efforts to pave the way towards improved patient-specific treatments of lung injury using EIT and qCT.
The heart is reported to show a net consumption of lactate, which may contribute up to 15% of the total body lactate disposal. In this work, the consumption of lactate was shown for the first time at the single-cell level with the new FRET-based lactate sensor Laconic.
Research published to date almost exclusively reports the monocarboxylate transporter 1 (MCT1) as the transporter responsible for myocardial lactate uptake. As this membrane transporter transports lactate together with H+ in a 1:1 stoichiometry, lactate transport is coupled to pH regulation. Consequently, interactions of MCT1 with acid/base-regulating proteins (carbonic anhydrases (CAs) and sodium bicarbonate co-transporters (NBCs)) have been described in the oocyte expression system, in skeletal muscle and in cancer cells.
In this work it is shown that the activity of extracellular CA increases lactate uptake into mouse cardiomyocytes by 27% and lactate-induced JA/B by 42.8% to 46.2%. This effect is most likely mediated via an NBC/CA interaction, because inhibition of extracellular CA reduces the HCO3−-dependent acid-extruding JA/B by 53.3% to 78.4%. This may link lactate uptake to cellular respiration. When lactate was applied in medium gassed with 100% N2, lactate-induced acidification was 12.6% faster than in medium gassed with 100% O2. Thus, CO2 produced on the pathway transferring redox energy from substrates such as glucose and lactate to ADP and phosphate via oxidative phosphorylation may support further lactate uptake. The findings of this work suggest an autoregulation of lactate uptake via CO2 release in ventricular mouse cardiomyocytes.
Researchers and analysts in modern industrial and academic environments are faced with a daunting amount of multivariate data. While there has been significant development in the areas of data mining and knowledge discovery, there is still a need for improved visualizations and generic solutions. The state of the art in visual analytics and exploratory data visualization is to incorporate more profound analysis methods while focusing on improving interactive capabilities, in order to support data analysts in gaining new insights through visual exploration and hypothesis building.
In the research field of exploratory data visualization, this thesis contributes new approaches in dimension reduction that tackle a number of shortcomings in state-of-the-art methods, such as interpretability and ambiguity. By combining methods from several disciplines, we describe how ambiguity can be countered effectively by visualizing coordinate values within a lower-dimensional embedding, thereby focusing on the display of the structural composition of high-dimensional data and on an intuitive depiction of inherent global relationships. We also describe how properties and alignment of high-dimensional manifolds can be analyzed in different levels of detail by means of a self-embedding hierarchy of local projections, each using full degree of freedom, while keeping the global context.
In the application field of air quality research, the thesis provides novel means for investigating aerosol source contributions. Triggered by this particularly challenging application problem, we instigate a new research direction in the area of visual analytics by describing a methodology for model-based visual analysis that (i) allows the scientist to be “in the loop” of computations and (ii) enables them to verify and control the analysis process, in order to steer computations towards physical meaning. Careful reflection on our work in this application has led us to derive key design choices that underlie and transcend application-specific solutions. As a result, we describe a general design methodology for computing parameters of a pre-defined analytical model that map to multivariate data. Core application areas that can benefit from our approach lie within engineering disciplines, such as civil, chemical, electrical, and mechanical engineering, as well as in geology, physics, and biology.
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
ABSTRACT
"Spin and orbital contribution to the magnetic moment of transition metal clusters and complexes"
The spin and orbital contributions to the magnetic moments of isolated iron \(Fe_n^+\) \((7 ≤ n ≤ 18)\), cobalt \(Co_n^+\) \((8 ≤ n ≤ 22)\) and nickel \(Ni_n^+\) \((7 ≤ n ≤ 17)\) clusters were investigated. Experimental access to both contributions is possible through x-ray magnetic circular dichroism (XMCD) spectroscopy, which is based on x-ray absorption spectroscopy (XAS). It exploits the fact that, for a magnetic sample, the resonant absorption cross sections for negatively and positively circularly polarized x-rays differ for the transition from a spin-orbit-split ground state to the valence level. The resulting dichroic effects contain the information about the magnetism of the investigated sample, which can be extracted from the experimental spectrum via application of the so-called sum rules. However, only the projections of the magnetic moments onto the quantization axis are experimentally accessible, which correspond to the magnetization of the sample.
We developed a method to apply XMCD spectroscopy to isolated clusters in the gas phase. A modified Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometer was used to record the XA spectra in Total Ion Yield (TIY) mode, i.e. by recording the fragmentation intensity of the clusters as a function of the x-ray energy. The clusters can be considered a superparamagnetic ensemble, so the magnetization follows a Langevin curve. Since the cluster temperature and the magnetic field are known, the intrinsic magnetic moments can be calculated by Langevin correction of the experimental magnetic moments.
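The Langevin correction mentioned above can be sketched as follows: the measured (projected) moment equals \(\mu\,L(\mu B / k_B T)\) with the Langevin function \(L(x) = \coth x - 1/x\), so the intrinsic moment \(\mu\) is recovered by numerically inverting this relation. The field, temperature and moment values below are placeholders for illustration, not the experimental parameters of the thesis.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
MU_B = 9.2740101e-24  # Bohr magneton, J/T

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, the magnetization
    curve of a superparamagnetic ensemble."""
    if abs(x) < 1e-6:
        return x / 3.0  # series expansion, avoids division blow-up
    return 1.0 / math.tanh(x) - 1.0 / x

def intrinsic_moment(measured, field, temperature):
    """Invert measured = mu * L(mu * B / (k_B * T)) for mu by bisection.

    `measured` and the returned intrinsic moment are in Bohr magnetons;
    the left-hand side is increasing in mu, so bisection is safe.
    """
    def projected(mu):
        return mu * langevin(mu * MU_B * field / (K_B * temperature))
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if projected(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At low field or high temperature the Langevin curve is far from saturation, which is why this correction matters for extracting the intrinsic moments.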
The spin and orbital magnetic moments are enhanced compared to the respective bulk values for all three investigated elements. The enhancement of the orbital contribution, by about a factor of 3-4 compared to the bulk, is more pronounced than that of the spin magnetic moment. However, compared to the atomic values, both contributions are quenched: the orbital magnetic moment only amounts to about 10-15% of the atomic value, while the spin retains about 80% of its atomic value. If the magnetic moments found for the clusters are put into perspective with respect to the atomic and bulk values by means of scaling laws, it becomes evident that the two contributions follow different interpolations between the atomic and bulk values. The spin follows the well-known trend \(n^{-1/3} \propto 1/\text{(cluster radius)}\) (n = number of atoms per cluster, assuming a spherical particle). This trend relates to the ratio of surface to inner atoms in a spherical particle. Hence, our interpretation is that the spin magnetic moment follows the surface area of the cluster. The orbital magnetic moment, on the other hand, follows \(1/n \propto 1/\text{(cluster volume)}\).
First XA spectra recorded with circularly polarized x-rays of a Single Molecule Magnet (SMM) \([Fe_4Ln_2(N_3)_4(Htea)_4(piv_6)]\) (Ln = Gd, Tb; \(H_3tea\) = triethanolamine, Hpiv = pivalic acid) are presented.
This thesis provides a fully automatic translation from synchronous programs to parallel software for different architectures, in particular, shared memory processing (SMP) and distributed memory systems. Thereby, we exploit characteristics of the synchronous model of computation (MoC) to reduce communication and to improve available parallelism and load-balancing by out-of-order (OOO) execution and data speculation.
Manual programming of parallel software requires the developers to partition a system into tasks and to add synchronization and communication. The model-based approach to development abstracts from details of the target architecture and allows decisions about the target architecture to be made as late as possible. The synchronous MoC supports this approach by abstracting from time and providing implicit parallelism and synchronization. Existing compilation techniques translate synchronous programs into synchronous guarded actions (SGAs), an intermediate format abstracting from semantic problems in synchronous languages. Compilers for SGAs analyze causality problems, ensure logical correctness and rule out schizophrenia problems. Hence, SGAs are a simplified and general starting point that keeps the synchronous MoC at the same time. The instantaneous feedback in the synchronous MoC makes the mapping of these systems to parallel software a non-trivial task. In contrast, other MoCs such as data-flow processing networks (DPNs) directly match parallel architectures. We translate the SGAs into DPNs, which represent a commonly used model for creating parallel software. DPNs have been proposed as a programming model for distributed parallel systems that have communication paths with unpredictable latencies. The purely data-driven execution of DPNs does not require global coordination, and therefore DPNs can easily be mapped to parallel software for architectures with distributed memory. The generation of efficient parallel code from DPNs challenges compiler design with two issues: to fully utilize a parallel system, communication and synchronization have to be kept low, and the utilization of the computational units has to be balanced. The variety of hardware architectures and of dynamic execution techniques in the processing units of these systems makes a statically balanced distributed execution impossible.
The synchronous MoC is still reflected in our generated DPNs, which exhibit characteristics that allow optimizations concerning the previously mentioned issues. In particular, we apply a general communication reduction and OOO execution to achieve a dynamically balanced execution, inspired by hardware design.
The present work investigated three important constructs in the field of psychology: creativity, intelligence and giftedness. The major objective was to clarify some aspects about each one of these three constructs, as well as some possible correlations between them. Of special interest were: (1) the relationship between creativity and intelligence - particularly the validity of the threshold theory; (2) the development of these constructs within average and above-average intelligent children and throughout grade levels; and (3) the comparison between the development of intelligence and creativity in above-average intelligent primary school children that participated in a special program for children classified as “gifted”, called Entdeckertag (ET), against an age-class- and-IQ matched control group. The ET is a pilot program which was implemented in 2004 by the Ministry for Education, Science, Youth and Culture of the state of Rhineland-Palatinate, Germany. The central goals of this program are the early recognition of gifted children and intervention, based on the areas of German language, general science and mathematics, and also to foster the development of a child’s creativity, social ability, and more. Five hypotheses were proposed and analyzed, and reported separately within five chapters. To analyze these hypotheses, a sample of 217 children recruited from first to fourth grade, and between the ages of six and ten years, was tested for intelligence and creativity. Children performed three tests: Standard Progressive Matrices (SPM) for the assessment of classical intelligence, Test of Creative Thinking – Drawing Production (TCT-DP) for the measurement of classical creativity, and Creative Reasoning Task (CRT) for the evaluation of convergent and divergent thinking, both in open problem spaces. 
Participants were divided according to two general cohorts: Intervention group (N = 43), composed of children participating in the Entdeckertag program, and a non-intervention group (N = 174), composed of children from the regular primary school. For the testing of the hypotheses, children were placed into more specific groups according to the particular hypothesis that was being tested. It could be concluded that creativity and intelligence were not significantly related and the threshold theory was not confirmed. Additionally, intelligence accounted for less than 1% of the variance within creativity; moreover, scores on intelligence were unable to predict later creativity scores. The development of classical intelligence and classical creativity throughout grade levels also presented a different pattern; intelligence grew increasingly and continually, whereas creativity stagnated after the third grade. Finally, the ET program proved to be beneficial for classical intelligence after two years of attendance, but no effect was found for creativity. Overall, results indicate that organizations and institutions such as schools should not look solely to intelligence performance, especially when aiming to identify and foster gifted or creative individuals.
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of the physical and chemical processes occurring in the course of an accident is an interdisciplinary problem with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of this study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and a software tool based on it. The numerical simulations allow one to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or the consequences of possible accidents. A very large number of scenarios has to be simulated while, at the same time, acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation, and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted and, when possible, analysed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the algorithm. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.
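The computational core that makes directional splitting fast is that each one-dimensional sweep reduces the implicit update to a family of independent tridiagonal systems, solvable in linear time by the Thomas algorithm. A generic sketch of that solver (a standard building block, not the thesis's full containment-flow algorithm):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system in O(n) time.

    `sub` (length n-1) is the sub-diagonal, `diag` (length n) the main
    diagonal, `sup` (length n-1) the super-diagonal.  In a directional
    splitting scheme, one such solve is performed per grid line and
    per sweep direction.
    """
    n = len(diag)
    c = [0.0] * n  # modified super-diagonal
    d = [0.0] * n  # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):          # forward elimination
        m = diag[i] - sub[i - 1] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

Because the per-line systems are independent, the sweeps parallelize and decompose naturally across subdomains, which is what makes the splitting approach attractive for the large scenario counts mentioned above.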
For many decades, the search for language classes that extend the
context-free languages enough to include various languages that arise in
practice, while still keeping as many of the useful properties that
context-free grammars have - most notably cubic parsing time - has been
one of the major areas of research in formal language theory. In this thesis
we add a new family of classes to this field, namely
position-and-length-dependent context-free grammars. Our classes use the
approach of regulated rewriting, where derivations in a context-free base
grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes, we look at the yield of each rule application, i.e., the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.
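The idea can be made concrete with a toy CYK-style recognizer for a grammar in Chomsky normal form, in which every rule application is additionally filtered by a predicate on the position and length of its yield; with the predicate always true, it degenerates to plain cubic CYK. This is a hypothetical sketch for illustration, not one of the parsing algorithms developed in the thesis:

```python
def pl_cyk(word, unary, binary, allowed, start="S"):
    """CYK recognizer with position-and-length-dependent rules.

    `unary` is a list of (lhs, terminal) rules, `binary` a list of
    (lhs, (left, right)) rules; `allowed(lhs, rhs, pos, length)`
    encodes the set of positions and lengths where the rule may be
    applied.  table[i][l] holds the nonterminals deriving word[i:i+l].
    """
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, rhs in unary:
            if rhs == ch and allowed(lhs, rhs, i, 1):
                table[i][1].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, (left, right) in binary:
                    if (left in table[i][split]
                            and right in table[i + split][length - split]
                            and allowed(lhs, (left, right), i, length)):
                        table[i][length].add(lhs)
    return n > 0 and start in table[0][n]
```

The extra predicate check is constant time per rule application, so the cubic running time of CYK is preserved as long as membership in the allowed sets is cheap to decide.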
We show that - unless the sets of allowed positions and lengths are really
complex - the languages in our classes can be parsed in the same time as
context-free grammars, using slight adaptations of well-known parsing
algorithms. We also show that they form a proper hierarchy above the
context-free languages and examine their relation to language classes
defined by other types of regulated rewriting.
We complete the treatment of the language classes by introducing pushdown
automata with position counter, an extension of traditional pushdown
automata that recognizes the languages generated by
position-and-length-dependent context-free grammars, and we examine various
closure and decidability properties of our classes. Additionally, we gather
the corresponding results for the subclasses that use right-linear and
left-linear base grammars, respectively, and for the corresponding class of automata, finite
automata with position counter.
Finally, as an application of our idea, we introduce length-dependent
stochastic context-free grammars and show how they can be employed to
improve the quality of predictions for RNA secondary structures.
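The core idea above - context-free rules whose applicability depends on the position and length of their yield - can be illustrated with a small CYK-style recognizer. This is an illustrative reconstruction under my own assumptions about how rules and predicates are encoded, not the thesis's actual adaptation of the standard parsing algorithms; the grammar is assumed to be in Chomsky normal form.

```python
def cyk_pl(word, rules, terminals, start):
    """CYK recognizer for a CNF grammar whose rules carry an
    allowed(position, length) predicate on the yield of each rule
    application (0-based position and length within the final word)."""
    n = len(word)
    table = {}                      # (i, l) -> set of nonterminals
    for i, ch in enumerate(word):
        table[(i, 1)] = {A for A, a, allowed in terminals
                         if a == ch and allowed(i, 1)}
    for l in range(2, n + 1):
        for i in range(n - l + 1):
            cell = set()
            for split in range(1, l):
                left = table[(i, split)]
                right = table[(i + split, l - split)]
                for A, B, C, allowed in rules:
                    if B in left and C in right and allowed(i, l):
                        cell.add(A)
            table[(i, l)] = cell
    return start in table[(0, n)] if n else False
```

With all predicates set to `lambda i, l: True` this is plain cubic-time CYK; restricting, say, the rule S -> A T to yields of length below four removes exactly the longer words from the generated language while keeping the parsing time bound.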
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, which is a technique for separating particles on the submicron scale. This process is part of the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g., in microbiology, chemistry, pharmaceutics and environmental analysis.
Mathematical modeling is crucial for this process: due to the very nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance of the flow over the diffusion (up to a factor of 10^6) and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
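As background on the multilevel idea, a generic Multilevel Monte Carlo estimator exploits the telescoping sum E[P_L] = E[P_0] + sum of E[P_l - P_{l-1}], spending many cheap samples on coarse levels and few on fine ones. The sketch below uses a toy quantization problem rather than the AF4 setting; the sampler, seeds and sample counts are illustrative assumptions, and the thesis's estimator for distribution functions is more elaborate.

```python
import math
import random

def mlmc(sampler, L, N0):
    """Multilevel Monte Carlo estimator of E[P_L] via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].  sampler(l, rng)
    must return the COUPLED pair (P_l, P_{l-1}) computed from the same
    random input, with P_{-1} := 0 at level 0."""
    estimate = 0.0
    for l in range(L + 1):
        N = max(N0 // 2 ** l, 100)       # fewer samples on expensive fine levels
        rng = random.Random(42 + l)      # fixed seed per level, for reproducibility
        acc = 0.0
        for _ in range(N):
            fine, coarse = sampler(l, rng)
            acc += fine - coarse
        estimate += acc / N
    return estimate

def sampler(l, rng):
    # Toy problem: X ~ N(0, 1); the level-l quantity quantizes X on a grid
    # of width 2**-l, mimicking a discretization that refines with l.
    x = rng.gauss(0.0, 1.0)
    fine = math.floor(x * 2 ** l) / 2 ** l
    coarse = math.floor(x * 2 ** (l - 1)) / 2 ** (l - 1) if l > 0 else 0.0
    return fine, coarse
```

Because the level differences have rapidly shrinking variance, most of the work happens at the coarse level; the estimate targets E[X] = 0 up to a quantization bias of order 2^-L.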
Pedestrian Flow Models
(2014)
There have been many crowd disasters caused by poor planning of events. Pedestrian models are useful for analysing the behavior of pedestrians in advance of an event so that no pedestrians will be harmed during the event. This thesis deals with pedestrian flow models on microscopic, hydrodynamic and scalar scales. Following the approach of Hughes, who describes the crowd as a thinking fluid, we use the solution of the Eikonal equation to compute the optimal path for pedestrians. We start with the microscopic model for pedestrian flow and then derive the hydrodynamic and scalar models from it. We use particle methods to solve the governing equations. Moreover, we have coupled a mesh-free particle method to a fixed grid for solving the Eikonal equation. We consider an example with a large number of pedestrians to investigate our models for different obstacle settings and different parameters. We also consider the pedestrian flow in a straight corridor and through a T-junction and compare our numerical results with experiments. A part of this work is devoted to finding a mesh-free method to solve the Eikonal equation. Most of the available methods for the Eikonal equation are restricted to either Cartesian or triangulated grids. In this context, we propose a mesh-free method to solve the Eikonal equation which is applicable to arbitrary grids and useful for complex geometries.
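For context, the classical grid-based baseline that the thesis's mesh-free method generalizes is the fast-sweeping solver on a Cartesian grid. The sketch below solves |grad T| = 1 with unit speed; it illustrates only this standard baseline, not the mesh-free scheme proposed in the thesis.

```python
import math

def fast_sweeping(n, h, sources, sweeps=4):
    """Gauss-Seidel fast-sweeping solver for the Eikonal equation
    |grad T| = 1 (unit speed) on an n x n Cartesian grid with spacing h.
    'sources' is a list of (i, j) grid indices where T = 0."""
    INF = float("inf")
    T = [[INF] * n for _ in range(n)]
    for i, j in sources:
        T[i][j] = 0.0
    orders = [(range(n), range(n)),
              (range(n - 1, -1, -1), range(n)),
              (range(n), range(n - 1, -1, -1)),
              (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(T[i - 1][j] if i > 0 else INF,
                            T[i + 1][j] if i < n - 1 else INF)
                    b = min(T[i][j - 1] if j > 0 else INF,
                            T[i][j + 1] if j < n - 1 else INF)
                    if min(a, b) == INF:
                        continue               # no upwind information yet
                    if abs(a - b) >= h:        # one-sided (upwind) update
                        t = min(a, b) + h
                    else:                      # two-sided quadratic update
                        t = 0.5 * (a + b + math.sqrt(2 * h * h - (a - b) ** 2))
                    if t < T[i][j]:
                        T[i][j] = t
    return T
```

For a point source the computed travel time is exact along the grid axes and a first-order overestimate elsewhere, which is one motivation for schemes that are not tied to a fixed Cartesian grid.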
This thesis combines mass spectrometric studies on ionic dicarboxylic acids and transition metal cluster adsorbate complexes. IR-MPD spectra of protonated and deprotonated aliphatic and aromatic dicarboxylic acids provide insights in the nature of intramolecular hydrogen bonding. Investigations of their fragmentation behavior are supported by MP2 calculations. Prior work on cobalt transition metal clusters is extended to iron and nickel and three cobalt alloys have been studied.
Perceptual grouping is an integral part of visual object recognition. It organizes elements within our visual field according to a set of heuristics (grouping principles), most of which are not well understood. To identify their temporal processing dynamics (i.e., to identify whether they rely on neuronal feedforward or recurrent activation), we introduce the primed flanker task, which rests on a firm empirical and theoretical background. In three sets of experiments, participants responded to visual stimuli that were either grouped by (1) similarity of brightness, shape, or size, (2) symmetry and closure, or (3) Good Gestalt. We investigated whether these grouping cues were effective in rapid visuomotor processing (i.e., in terms of response times, error rates, and priming effects) and whether the results met theory-driven indicators of feedforward processing. (1) In the first set of experiments with similarity cues, we varied subjective grouping strength and found that stronger grouping in the targets enhanced overall response times while stronger grouping in the primes enhanced priming effects in motor responses. We also obtained differences between rapid visuomotor processing and the subjective impression with cues of brightness and shape but not with cues of brightness and size. These results show that the primed flanker task is an objective measure for comparing different feedforward-transmitted groupings. (2) In the second set of experiments, we used the task to study grouping by symmetry and grouping by closure, which are more complex than similarity cues. We obtained results that were mostly in accordance with a feedforward model. Other factors (viewpoint, orientation of the symmetry axis) were irrelevant for the processing of symmetry cues. Thus, these experiments suggest that closure and (possibly) viewpoint-independent symmetry cues are extracted rapidly during the first feedforward wave of neuronal processing.
(3) In the third set of experiments, we used the task to study grouping by Good Gestalt (i.e., visual completion in occluded shapes). By varying the amount of occlusion, we found that the processing was in accordance with a feedforward model only when occlusion was very limited. Thus, these experiments suggest that Good Gestalt is not extracted rapidly during the first feedforward wave of neuronal processing but relies on recurrent activation. I conclude (1) that the primed flanker task is an excellent tool to identify and compare the processing characteristics of different grouping cues by behavioral means, (2) that grouping strength and other factors are strongly modulating these processing characteristics, which (3) challenges a dichotomous classification of grouping cues based on feedforward vs. recurrent processing (incremental grouping theory, Roelfsema, 2006), and (4) that a focus on temporal processing dynamics is necessary to understand perceptual grouping.
In this thesis we studied a very common but long-standing noise problem and provided a solution to it. The task is to deal with different types of noise that occur simultaneously, which we call hybrid noise. Although there are individual solutions for specific noise types, one cannot simply combine them, because each solution affects the whole speech signal. We developed an automatic speech recognition system, DANSR (Dynamic Automatic Noisy Speech Recognition System), for hybrid noisy environments. For this we had to study all aspects of speech, from the production of sounds to their recognition. Central elements are the feature vectors, to which we pay particular attention. As an additional contribution, we derived quantitative measures for psychoacoustic speech elements.
The thesis has four parts:
1) In the first part we give an introduction. Chapters 2 and 3 give an overview of speech generation and recognition by machines; noise is also considered.
2) In the second part we describe our general system for speech recognition in a noisy environment. This is contained in chapters 4-10. In chapter 4 we deal with data preparation. Chapter 5 is concerned with very strong noise and its modeling using the Poisson distribution. In chapters 5-8 we deal with parameter-based modeling. Chapter 7 is concerned with autoregressive methods in relation to the vocal tract. In chapters 8 and 9 we discuss linear prediction and its parameters. Chapter 9 also covers quadratic errors and the decomposition into sub-bands, and chapter 10 the use of Kalman filters for non-stationary colored noise. There one finds classical approaches as far as we have used and modified them, including covariance methods, the method of Burg and others.
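The autocorrelation method of linear prediction mentioned above is classically solved with the Levinson-Durbin recursion. The following sketch shows the textbook recursion together with a small AR(1) demonstration; it illustrates the standard method only, not the modifications made in the thesis, and the demo signal and seed are my own choices.

```python
import random

def levinson_durbin(r, order):
    """Levinson-Durbin recursion for the autocorrelation method of linear
    prediction: solves the Toeplitz normal equations.  r[0..order] are
    autocorrelation lags; returns (a, e) where the predictor is
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order] and e is the final
    prediction error power."""
    a, e = [], r[0]
    for m in range(1, order + 1):
        k = (r[m] - sum(a[j] * r[m - 1 - j] for j in range(m - 1))) / e
        a = [aj - k * ar for aj, ar in zip(a, reversed(a))] + [k]
        e *= 1.0 - k * k
    return a, e

def demo_ar1(rho=0.9, n=5000, seed=1):
    # Synthesize an AR(1) signal x[t] = rho*x[t-1] + white noise and
    # recover rho from its first two autocorrelation lags.
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    r = [sum(x[t] * x[t - lag] for t in range(lag, n)) / n for lag in range(2)]
    return levinson_durbin(r, 1)[0][0]
```

For an AR(1) process the order-1 predictor coefficient is simply r[1]/r[0], so the recovered value should be close to the true rho.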
3) The third part deals first with psychoacoustic questions. We look at quantitative magnitudes that describe them. This has serious consequences for the perception models. For hearing we use different scales and filters. At the center of chapters 12 and 13 one finds the features and their extraction. The features are the only elements that carry information for further use. We consider here cepstrum features, Mel frequency cepstral coefficients (MFCC), shift-invariant local trigonometric transforms (SILTT), linear predictive coefficients (LPC), linear predictive cepstral coefficients (LPCC) and perceptual linear predictive (PLP) cepstral coefficients. In chapter 13 we present our extraction methods in DANSR and how they use window techniques and the discrete cosine transform (DCT-IV) as well as their inverses.
4) The fourth part considers classification and the ultimate speech recognition. Here we use the hidden Markov model (HMM) for describing the speech process and the Gaussian mixture model (GMM) for the acoustic modeling. For the recognition we use the forward algorithm, the Viterbi search and the Baum-Welch algorithm. We also draw the connection to dynamic time warping (DTW). The remainder presents experimental results and conclusions.
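Of the HMM algorithms listed above, the Viterbi search finds the most probable hidden-state sequence for an observation sequence. The sketch below is the textbook dynamic program on a standard two-state toy HMM (the states, observations and probabilities are the common textbook example, not data from the thesis).

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: the most probable hidden-state sequence of an HMM
    for a given observation sequence.  Plain probabilities are used, which
    is fine for short sequences; switch to log-probabilities to avoid
    underflow on long ones."""
    # layer maps each state to (best probability so far, best path so far)
    layer = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        nxt = {}
        for s in states:
            best_p, best_prev = max(
                (layer[q][0] * trans_p[q][s] * emit_p[s][o], q) for q in states)
            nxt[s] = (best_p, layer[best_prev][1] + [s])
        layer = nxt
    return max(layer.values(), key=lambda t: t[0])[1]
```

In a recognizer the same recursion runs over GMM emission likelihoods instead of a discrete emission table.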
When stimulus and response overlap in a choice-reaction task, enhanced performance can be observed. This effect, the so-called Stimulus-Response Compatibility (SRC), has been shown to appear for a variety of different stimulus features such as numerical or physical size, luminance, or pitch height. While many of these SRC effects have been investigated in an isolated manner, fewer studies focus on possible interferences when more than one stimulus dimension is varied. The present thesis investigated how the SRC effect of pitch heights, the so-called SPARC effect (Spatial Pitch Associations of Response Codes), is influenced by additionally varied stimulus information. In Study 1, the pitch heights of presented tones were varied along with timbre categories under two different task and pitch range conditions and with two different response alignments. Similarly, in Study 2, pitch heights as well as numerical values were varied within sung numbers under two different task conditions. The results showed simultaneous SRC effects appearing independently of each other in both studies: In Study 1, an expected SRC effect of pitch heights with horizontal responses (i.e., a horizontal SPARC effect) was observed. More interestingly, an additional and unexpected SRC effect of timbre with response sides presented itself independently of this SPARC effect. Similar results were obtained in Study 2: Here, an SRC effect for pitch heights (SPARC) and an SRC effect for numbers (i.e., SNARC or Spatial Numerical Associations of Response Codes) were observed and again the effects did not interfere with each other. Thus, results indicate that SPARC with horizontal responses does not interfere with SRC effects of other, simultaneously varied stimulus dimensions. These findings are discussed within the principle of polarity correspondence and the dimensional overlap model as theoretical accounts for SRC effects.
In sum, it appears that the different types of information according to varied stimulus dimensions enter the decision stage of stimulus processing from separate channels.
The recognition of day-to-day activities is still a very challenging and important research topic. During recent years, a lot of research has gone into designing and realizing smart environments in different application areas such as health care, maintenance, sports or smart homes. As a result, a large number of sensor modalities were developed, different types of activity and context recognition services were implemented and the resulting systems were benchmarked using state-of-the-art evaluation techniques. However, so far hardly any of these approaches have found their way into the market and consequently into the homes of real end-users on a large scale. The reason for this is that almost all systems have one or more of the following characteristics in common: expensive high-end or prototype sensors are used which are not affordable or reliable enough for mainstream applications; many systems are deployed in highly instrumented environments or so-called "living labs", which are far from real-life scenarios and are often evaluated only in research labs; almost all systems are based on complex system configurations and/or extensive training data sets, which means that a large amount of data must be collected in order to install the system. Furthermore, many systems rely on a user- and/or environment-dependent training, which makes it even more difficult to install them on a large scale. Besides, a standardized integration procedure for the deployment of services in existing environments and smart homes has still not been defined. As a matter of fact, service providers use their own closed systems, which are not compatible with other systems, services or sensors. It is clear that these points make it nearly impossible to deploy activity recognition systems in a real daily-life environment, to make them affordable for real users and to deploy them in hundreds or thousands of different homes.
This thesis works towards the solution of the above mentioned problems. Activity and context recognition systems designed for large-scale deployment and real-life scenarios are introduced. Systems are based on low-cost, reliable sensors and can be set up, configured and trained with little effort, even by technical laymen. It is because of these characteristics that we call our approach "minimally invasive". As a consequence, large amounts of training data, which are usually required by many state-of-the-art approaches, are not necessary. Furthermore, all systems were integrated unobtrusively in real-world/similar to real-world environments and were evaluated under real-life, as well as similar to real-life conditions. The thesis addresses the following topics: First, a sub-room level indoor positioning system is introduced. The system is based on low-cost ceiling cameras and a simple computer vision tracking approach. The problem of user identification is solved by correlating modes of locomotion patterns derived from the trajectory of unidentified objects and on-body motion sensors. Afterwards, the issue of recognizing how and what mainstream household devices have been used for is considered. Based on a low-cost microphone, the water consumption of water-taps can be approximated by analyzing plumbing noise. Besides that, operating modes of mainstream electronic devices were recognized by using rule-based classifiers, electric current features and power measurement sensors. As a next step, the difficulty of spotting subtle, barely distinguishable hand activities and the resulting object interactions, within a data set containing a large amount of background data, is addressed. The problem is solved by introducing an on-body core system which is configured by simple, one-time physical measurements and minimal data collections.
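A rule-based classifier over power measurements, as mentioned above for recognizing device operating modes, can be as simple as a few thresholds on window statistics. The sketch below is purely illustrative: the mode names and all thresholds are hypothetical, whereas the thesis derives its rules from measured electric current features of real mainstream appliances.

```python
def classify_mode(watts):
    """Toy rule-based classifier mapping a window of power readings (in
    watts) to an operating mode.  The thresholds and mode labels are
    HYPOTHETICAL, for illustration only."""
    mean = sum(watts) / len(watts)
    var = sum((w - mean) ** 2 for w in watts) / len(watts)
    if mean < 1.0:
        return "off"
    if mean < 10.0:
        return "standby"
    # high power draw: distinguish steady loads from fluctuating ones
    return "active-fluctuating" if var > 100.0 else "active-steady"
```

The appeal of such rules in a "minimally invasive" setting is that they need no training data at all, only a one-time calibration of the thresholds per device class.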
The lack of large training sets is compensated by fusing the system with activity and context recognition systems that are able to reduce the search space observed. Amongst other systems, previously introduced approaches and ideas are revisited in this section. An in-depth evaluation shows the impact of each fusion procedure on the performance and run-time of the system. The approaches introduced are able to provide significantly better results than a state-of-the-art inertial system using large amounts of training data. The idea of using unobtrusive sensors has also been applied to the field of behavior analysis. Integrated smartphone sensors are used to detect behavioral changes of individuals due to medium-term stress periods. Behavioral parameters related to location traces, social interactions and phone usage were analyzed to detect significant behavioral changes of individuals during stressless and stressful time periods. Finally, as a closing part of the thesis, a standardization approach related to the integration of ambient intelligence systems (as introduced in this thesis) in real-life and large-scale scenarios is shown.
This thesis is devoted to the computational aspects of intersection theory and enumerative geometry. The first results are a Sage package Schubert3 and a Singular library schubert.lib which both provide the key functionality necessary for computations in intersection theory and enumerative geometry. In particular, we describe an alternative method for computations in Schubert calculus via equivariant intersection theory. More concretely, we propose an explicit formula for computing the degree of Fano schemes of linear subspaces on hypersurfaces. As a special case, we also obtain an explicit formula for computing the number of linear subspaces on a general hypersurface when this number is finite. This leads to a much better performance than classical Schubert calculus.
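A classical sanity check for such Fano-scheme computations is the count of lines on a general cubic surface in P^3, which is 27: it is the integral of the top Chern class of Sym^3 of the dual tautological bundle over the Grassmannian G(2,4). The sketch below performs this computation by direct coefficient extraction in the Chern roots; it does not use the thesis's Schubert3 or schubert.lib packages and is only meant to make the kind of computation concrete.

```python
def lines_on_cubic_surface():
    """Degree of the Fano scheme of lines on a general cubic surface in P^3,
    computed as the integral of c_top(Sym^3 S*) over G(2,4).  With Chern
    roots x, y of S*, c_top = prod_{k=0}^{3} (k*x + (3-k)*y); monomials are
    stored as exponent pairs (i, j) -> coefficient."""
    d = 3
    poly = {(0, 0): 1}
    for k in range(d + 1):
        nxt = {}
        for (i, j), c in poly.items():
            if k:                                   # contribution of k*x
                nxt[(i + 1, j)] = nxt.get((i + 1, j), 0) + c * k
            if d - k:                               # contribution of (d-k)*y
                nxt[(i, j + 1)] = nxt.get((i, j + 1), 0) + c * (d - k)
        poly = nxt
    # Integration over G(2,4) extracts the coefficient of the Schur
    # polynomial s_{(2,2)}(x, y) = x^2 y^2.  Since s_{(3,1)} = x^3 y +
    # x^2 y^2 + x y^3 also contains x^2 y^2, subtract its coefficient.
    return poly.get((2, 2), 0) - poly.get((3, 1), 0)
```

The same recipe with d = 2n - 3 on G(2, n+1) counts lines on general hypersurfaces of higher dimension, which is the finite case addressed by the explicit formula in the thesis.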
Another result of this thesis is related to the computation of Gromov-Witten invariants. The most powerful method for computing Gromov-Witten invariants is the localization of moduli spaces of stable maps. This method was introduced by Kontsevich in 1995. It allows us to compute Gromov-Witten invariants via Bott's formula. As an insightful application, we computed the numbers of rational curves on general complete intersection Calabi-Yau threefolds in projective spaces up to degree six. The results are all in agreement with predictions made from mirror symmetry.
According to domain-specific models of speech perception, speech is supposed to be processed distinctively compared to non-speech. This assumption is supported by many studies dealing with the processing of speech and non-speech stimuli. However, the complexity of both stimulus classes is not matched in most studies, which might be a confounding factor according to cue-specific models of speech perception. One solution is spectrally rotated speech, which has already been used in a range of fMRI and PET studies. In order to be able to investigate the role of stimulus complexity, vowels, spectrally rotated vowels and a second non-speech condition with two bands of sinusoidal waves, representing the first two formants of the vowels, were used in the present thesis. A detailed description of the creation and the properties of the whole stimulus set is given in Chapter 2 (Experiment 1) of this work. These stimuli were used to investigate the auditory processing of speech and non-speech sounds in a group of dyslexic adults and age-matched controls (Experiment 2). The results support the assumption of a general auditory deficit in dyslexia. In order to compare the sensory processing of speech and non-speech in healthy adults on the electrophysiological level, stimuli were also presented within a multifeature oddball paradigm (Experiment 3). Vowels evoked a larger mismatch negativity (MMN) compared to both non-speech stimulus types. The MMNs evoked by tones and spectrally rotated tones were compared in Experiment 4 to investigate the role of harmony. No difference in the area of the MMN was found, indicating that the results of Experiment 3 were not moderated by the harmonic structure of the vowels. All results are discussed in the context of the domain- and cue-specific models of speech perception.
A huge number of computational models and programming languages have been proposed
for the description of embedded systems. In contrast to traditional sequential programming
languages, they cope directly with the requirements for embedded systems: direct support for
concurrent computations and periodic interaction with the environment are only some of the
features they offer. Synchronous languages are one class of languages for the development of
embedded systems and they follow the fundamental principle that the execution is divided into
a sequence of logical steps. Each step is based on the simplifying assumption that the computation
of the outputs is finished as soon as the inputs are available. This rigorous abstraction leads
to well-defined deterministic parallel composition in general, and to deterministic abortion
and suspension in imperative synchronous languages in particular. These key features also
make it possible to translate programs to hardware and software, and formal verification techniques
like model checking can be applied easily.
Besides their advantages, imperative synchronous languages also have some drawbacks.
Over-synchronization is an effect caused by parallel threads that have to
synchronize at each execution step even if they do not communicate, since the synchronization
is implicitly forced by the control flow. This thesis considers the idea of clock refinement to
introduce several abstraction layers for communication and synchronization in addition to the
existing single-clock abstraction. Thereby, clocks can be refined by several independent clocks
so that a controlled amount of asynchrony between subsequent synchronization points can be
exploited by compilers. The declarations of clocks form a tree, and clocks can be defined within
the threads of the parallel statement, which allows one to do independent computations based
on these clocks without synchronizing the threads. However, the synchronous abstraction is
kept at each level of the abstraction.
Clock refinement is introduced in this thesis as an extension to the imperative synchronous
language Quartz. To this end, new program statements are introduced which allow one to define
a new clock as a refinement of an existing one and to finish a step based on a certain clock.
Examples are considered to show how the behavior of the new statements affects
the already existing statements, before the semantics of this extension is formally defined.
Furthermore, the thesis presents a compilation algorithm to translate programs to an intermediate
format, and to translate the intermediate format to a hardware description. The advantages
obtained by the new modeling feature are finally evaluated based on examples.
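The execution-model difference between single-clock synchrony and clock refinement can be sketched with a toy scheduler: threads take refined-clock micro steps locally and only align with each other at parent-clock ticks. This is purely illustrative pseudocode in Python; the thread encoding as generators and the two-level clock tree are my own simplifications, and the actual Quartz semantics is far richer.

```python
def run_synchronous(threads, macro_steps):
    """Toy scheduler illustrating clock refinement.  Each thread is a
    generator yielding the clock on which its current step ends:
    'refined' steps run locally without synchronizing the other threads,
    while every thread aligns again at each parent-clock tick."""
    trace = []
    for step in range(macro_steps):
        for name, thread in threads:
            while next(thread) == "refined":      # local micro steps
                trace.append((step, name, "refined"))
            trace.append((step, name, "parent"))  # synchronization point
    return trace

def worker(refined_steps):
    # A thread that performs 'refined_steps' refined-clock micro steps
    # before each parent-clock synchronization, forever.
    while True:
        for _ in range(refined_steps):
            yield "refined"
        yield "parent"
```

In a single-clock language both threads would be forced to synchronize at every step; here a thread with refined steps proceeds without blocking its sibling, which is the controlled asynchrony the thesis exposes to compilers.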
Enhanced information processing of phobic natural images in participants with specific phobias
(2014)
From an evolutionary point of view, it can be assumed that visual processing and rapid detection of potentially dangerous stimuli in the environment (e.g., perilous animals) is highly adaptive for all humans. In the present dissertation, I address three research questions: (1) Is information processing of threatening stimuli enhanced in individuals with specific phobias? (2) Are there any differences between the different types of phobia (e.g., spider phobia vs. snake phobia)? (3) Is the frequently reported attentional bias of individuals with specific phobias - which may contribute to an enhancement in information processing - also detectable in a prior entry paradigm? In Experiments 1 to 3 of the present thesis, non-anxious control, spider-fearful, snake-fearful, and blood-injection-injury-fearful participants took part. In each experiment we applied a response priming paradigm, which has a strong theoretical (cf. rapid-chase theory; Schmidt, Niehaus, & Nagel, 2006; Schmidt, Haberkamp, Veltkamp et al., 2011) as well as empirical background (cf. Schmidt, 2002). We show that information processing in fearful individuals is indeed enhanced for phobic images (i.e., spiders for spider-fearful participants; injuries for blood-injury-injection (BII)-fearful individuals). However, we found marked differences between the different types of phobia. In Experiments 1 and 2 (Chapters 2 and 3), spiders had a strong and specific influence in the group of spider-fearful individuals: Phobic primes entailed the largest priming effects, and phobic targets accelerated responses, both effects indicating speeded response activation by phobic images. In snake-fearful participants (Experiment 1, Chapter 2), this processing enhancement for phobic material was less pronounced and extended to both snake and spider images.
In Experiment 3 (Chapter 4), we demonstrated that early information processing of pictures of small injuries is also enhanced in BII-fearful participants, even though BII fear is unique in that BII-fearful individuals show opposite physiological reactions when confronted with the phobic stimulus compared to individuals with animal phobias. These results show that already fast visuomotor responses are further enhanced in spider- and BII-fearful participants. Results give evidence that responses are based on the first feedforward sweep of neuronal activation proceeding through the visuomotor system. I propose that the additional enhancement in spider- and BII-fearful individuals depends on a specific hardwired binding of elementary features belonging to the phobic object in fearful individuals (i.e., effortless recognition of the respective phobic object via hardwired neuronal conjunctions). I suggest that these hardwired conjunctions developed due to long-term perceptual learning processes. We also investigated the frequently reported attentional bias of phobic individuals and showed that this bias is detectable in temporal order judgments using a prior entry paradigm. I assume that perceptual learning processes might also strengthen the attentional bias, for example, by providing a more salient bottom-up signal that draws attention involuntarily. In sum, I conclude that (1) early information processing of threatening stimuli is indeed enhanced in individuals with specific phobias but that (2) differences between diverse types of phobia exist (i.e., spider- and BII-fearful participants show enhanced processing of the respective phobic object, whereas snake-fearful participants show no specific information processing enhancement for snakes); (3) the frequently reported attentional bias of spider-fearful individuals is also detectable in a prior entry paradigm.
In 2006, Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case where \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently than predicted by the popular conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and provide theoretical support for these statements.
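To make the counted objects concrete, the following brute-force sketch counts, for a small prime p, the matrices in GL_2(F_p) whose eigenspace for the eigenvalue one is nontrivial (equivalently, det(M - I) = 0 mod p). It is an illustration of the kind of eigenspace statistics involved, not a computation from the thesis, which works with far more general subgroups over factor rings.

```python
from itertools import product

def eigenvalue_one_count(p):
    """Brute-force count over GL_2(F_p): how many invertible 2x2 matrices
    have a nontrivial eigenspace for the eigenvalue one, i.e. satisfy
    det(M - I) = 0 mod p.  Returns (hits, group_order)."""
    total = hits = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 0:
            continue                       # M is not invertible
        total += 1
        if ((a - 1) * (d - 1) - b * c) % p == 0:
            hits += 1                      # 1 is an eigenvalue of M
    return hits, total
```

For p = 2 the fraction is 4/6, for p = 3 it is 21/48; heuristics in the Cohen-Lenstra spirit concern the limiting behavior of such proportions as the matrix size grows.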
In the present work, the phase transitions in different Fe/FeC systems were studied using molecular dynamics simulation with the Meyer-Entel interaction potential (and the Johnson potential for the Fe-C interaction). Fe bicrystal, thin film, Fe-C bulk and Fe-C nanowire systems were investigated to study the behaviour of the phase transition, where the energetics, dynamics and transformation pathways were analysed.
Continuum Mechanical Modeling of Dry Granular Systems: From Dilute Flow to Solid-Like Behavior
(2014)
In this thesis, we develop a granular hydrodynamic model which covers the three principal regimes observed in granular systems, i.e. the dilute flow, the dense flow and the solid-like regime. We start from a kinetic model valid at low density and extend its validity to the granular solid-like behavior. Analytical and numerical results show that this model reproduces many complex phenomena, such as slow viscoplastic motion, critical states and the pressure dip in sand piles. Finally, we formulate a 1D version of the full model and develop a numerical method to solve it. We present two numerical examples, a filling simulation and the flow on an inclined plane, in which all three regimes appear.
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern: it is a technological challenge for MCP development and will be a main performance bottleneck for MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior for both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally produced with 2D packaging, which means that there is only one layer of dies in the chip. The chip thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D heat conduction PDE, is proposed to describe the thermal behavior of each core. Therefore, the thermal behavior of the MCP is described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced technology in which at least two layers of dies are stacked in one chip. In contrast to 2D packages, a cooling system has to be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered, and the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, and the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
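The building block of such an R-C thermal model is a single node obeying C dT/dt = P - (T - T_amb)/R, whose steady-state temperature is T_amb + P*R. The sketch below integrates one node with forward Euler; it is a one-node illustration only, since the thesis couples many such blocks with the coolant channel ODEs and a controller.

```python
def simulate_rc_thermal(power, R, C, T_amb, dt, steps):
    """Forward-Euler integration of a single thermal R-C node,
    C * dT/dt = P - (T - T_amb) / R, starting from ambient temperature.
    power in W, R in K/W, C in J/K, temperatures in deg C."""
    T = T_amb
    for _ in range(steps):
        # heat stored = power injected minus heat conducted away
        T += dt * (power - (T - T_amb) / R) / C
    return T
```

For example, a block dissipating 10 W through R = 2 K/W above a 25 deg C ambient settles at 45 deg C with time constant R*C; the Euler step is stable as long as dt is well below R*C.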
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell eight-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ serve as the test objects.
Regular physical activity is essential to maintain or even improve an individual's health, and various guidelines exist on how much activity individuals should perform. It is therefore important to monitor physical activities performed during people's daily routine in order to determine how well they meet professional recommendations. This thesis pursues the goal of developing a mobile, personalized physical activity monitoring system applicable in everyday life. Of the mentioned recommendations, this thesis concentrates on monitoring aerobic physical activity. Two main objectives are defined in this context: on the one hand, to estimate the intensity of performed activities, i.e. to distinguish activities of light, moderate, or vigorous effort; on the other hand, to give a more detailed description of an individual's daily routine by recognizing basic aerobic activities (such as walk, run or cycle) and basic postures (lie, sit and stand).
With recent progress in wearable sensing and computing, the technological tools to create the envisioned physical activity monitoring system largely exist nowadays. The focus of this thesis is therefore on the development of new approaches for physical activity recognition and intensity estimation that extend the applicability of such systems. In order to make physical activity monitoring feasible in everyday life, the thesis deals with questions such as 1) how to handle a wide range of everyday, household, or sport activities and 2) how to handle various potential users. Moreover, it addresses the realistic scenario in which either the currently performed activity or the current user is unknown during the development and training phase of an activity monitoring application. To answer these questions, this thesis proposes and develops novel algorithms, models, and evaluation techniques, and performs thorough experiments to prove their validity.
The contributions of this thesis are of both theoretical and practical value. Addressing the challenge of creating robust activity monitoring systems for everyday life, the concept of other activities is introduced and various models are proposed and validated. Another key challenge is that complex activity recognition tasks exceed the potential of existing classification algorithms. This thesis therefore introduces a confidence-based extension of the well-known AdaBoost.M1 algorithm, called ConfAdaBoost.M1; thorough experiments show its significant performance improvement over commonly used boosting methods. A further major theoretical contribution is the introduction and validation of a new general concept for personalizing physical activity recognition applications, and the development of a novel algorithm (called Dependent Experts) based on this concept. A major contribution of practical value is a new evaluation technique (called leave-one-activity-out) that simulates the performance of previously unknown activities in a physical activity monitoring system. Furthermore, the creation and benchmarking of publicly available physical activity monitoring datasets within this thesis directly benefit the research community. Finally, the thesis deals with issues related to the implementation of the proposed methods, in order to realize the envisioned mobile system and integrate it into a full healthcare application for aerobic activity monitoring and support in daily life.
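The leave-one-activity-out idea can be sketched in a few lines: train with one activity entirely withheld, then observe which known class the unknown activity is absorbed by. The nearest-centroid classifier and the data layout here are stand-ins invented for illustration, not the classifiers or features used in the thesis:

```python
import numpy as np

# Sketch of leave-one-activity-out evaluation: for each activity, train a
# classifier on all other activities and record how samples of the held-out
# (previously unknown) activity are classified.
def leave_one_activity_out(X, y, activities):
    results = {}
    for held_out in activities:
        # centroid per remaining activity (stand-in for a real classifier)
        centroids = {a: X[y == a].mean(axis=0)
                     for a in activities if a != held_out}
        test_X = X[y == held_out]
        preds = [min(centroids, key=lambda a: np.linalg.norm(x - centroids[a]))
                 for x in test_X]
        results[held_out] = preds  # the known classes the unknown activity maps to
    return results
```

By construction the held-out class can never be predicted, which is exactly the situation a deployed system faces when the user performs an activity absent from the training data.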
The noise issue in manufacturing systems is widely discussed from legal and health perspectives. Based on existing laws and guidelines, various investigation methods are applied in industry, and the sound pressure level can be measured and reduced using established approaches. However, a straightforward and low-cost approach to studying noise using existing digital factory models has been lacking.
This thesis develops a novel concept for sound pressure level investigation in a virtual environment, enabling factory planners to investigate noise during the factory design and layout planning phase.
Two computer-aided tools are combined in this approach: acoustic simulation and virtual reality (VR). The former enables the planner to simulate the sound pressure level from a given factory layout and the acoustic properties of its facilities; the latter provides a visualization environment in which to view and explore the simulation results. The combination of these two tools gives planners a new way to analyze noise in a factory.
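A core piece of level arithmetic behind any such acoustic simulation is that incoherent sources combine on an energy basis, not by adding decibel values directly; a minimal sketch:

```python
import math

# Incoherent sound sources combine on an energy basis:
#   L_total = 10 * log10( sum_i 10**(L_i / 10) )
# so two equal sources raise the level by about 3 dB, not by a factor of two.
def combine_spl(levels_db):
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))
```

For instance, two machines each emitting 80 dB at the receiver combine to roughly 83 dB, which is the kind of quantity the simulation evaluates per point of the factory layout.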
To validate the simulations, acoustic measurements were carried out in a real factory, determining sound pressure level and sound intensity respectively. Furthermore, a software tool implementing the introduced concept and approach was developed; with it, the simulation results are presented in a Cave Automatic Virtual Environment (CAVE).
This thesis describes the development of the approach, the measurement of sound features, the design of the visualization framework, and the implementation of the VR software. Based on this know-how, industrial users can design their own methods and software for noise investigation and analysis.
Backward compatibility of class libraries ensures that an old implementation of a library can safely be replaced by a new implementation without breaking existing clients.
Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations.
In the object-oriented setting with inheritance and callbacks, finding such models is difficult because the interface between library implementations and clients is complex.
Furthermore, handling these models in a way to support practical reasoning requires appropriate verification tools.
This thesis proposes a formal model for library implementations and a reasoning approach for backward compatibility that is implemented using an automatic verifier. The first part of the thesis develops a fully abstract trace-based semantics for class libraries of a core sequential object-oriented language. Traces abstract from the control flow (stack) and data representation (heap) of the library implementations. The construction of a most general context is given that abstracts exactly from all possible clients of the library implementation.
Soundness and completeness of the trace semantics as well as the most general context are proven using specialized simulation relations on the operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
The second part of the thesis presents the implementation of the simulation-based proof method for an automatic verifier to check backward compatibility of class libraries written in Java. The approach works for complex library implementations, with recursion and loops, in the setting of unknown program contexts. The verification process relies on a coupling invariant that describes a relation between programs that use the old library implementation and programs that use the new library implementation. The thesis presents a specification language to formulate such coupling invariants. Finally, an application of the developed theory and tool to typical examples from the literature validates the reasoning and verification approach.
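The intuition of trace-based compatibility can be illustrated in a few lines; the counter library and the flat call/return traces below are invented for illustration and are much simpler than the thesis's fully abstract semantics, which also has to account for callbacks, inheritance, and arbitrary unknown clients:

```python
# Sketch: two library versions are (observationally) compatible for a given
# client interaction if they produce the same call/return trace, regardless
# of their internal state representation (stack/heap abstraction).
def trace(lib, calls):
    t = []
    for method, arg in calls:
        t.append(("call", method, arg))
        t.append(("ret", method, getattr(lib, method)(arg)))
    return t

class CounterV1:
    def __init__(self):
        self.n = 0
    def add(self, k):
        self.n += k
        return self.n

class CounterV2:  # new implementation: different internals, same behavior
    def __init__(self):
        self.hist = []
    def add(self, k):
        self.hist.append(k)
        return sum(self.hist)

calls = [("add", 2), ("add", 3)]
compatible = trace(CounterV1(), calls) == trace(CounterV2(), calls)
```

The thesis's most general context plays the role of quantifying over all such call sequences at once, so that trace equality implies that no client can distinguish the two implementations.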
Polychlorinated dibenzo-p-dioxins, dibenzofurans, and polychlorinated biphenyls are persistent environmental pollutants which ubiquitously occur as complex mixtures and accumulate in the food and feed chain due to their highly lipophilic properties. Of the 419 possible congeners, only 29 share a common mechanism of action and cause similar effects: the so-called dioxin-like compounds. Dioxin-like compounds evoke a broad spectrum of biochemical and toxic responses, i.e. enzyme induction, dermal toxicity, hepatotoxicity, immunotoxicity, carcinogenicity, as well as adverse effects on reproduction, development, and the endocrine system in laboratory animals and in humans. Most, if not all, of the aforementioned responses are mediated by the aryl hydrocarbon receptor. In the present work, the biochemical effects elicited by a selection of dioxin-like compounds and the non-dioxin-like PCB 153 were examined in mouse (in vivo) and in human liver cell models (in vitro). Emphasis was given to TCDD, 1-PnCDD, 4-PnCDF, PCB 118, PCB 126, and PCB 156, the main contributors to the total toxic equivalents in human blood and tissues, which likewise contribute about 90 % of the dioxin-like activity in the human food chain.
Three mouse in vivo studies were carried out to characterize the alterations in hepatic gene expression as well as the induction of hepatic xenobiotic-metabolizing enzymes after a single oral dose. Based on the results of the 3-day and 14-day mouse studies, the seven test compounds can be categorized into three classes: 'pure' AhR ligands (TCDD, 1-PnCDD, 4-PnCDF, and PCB 126), a sole CAR inducer (PCB 153), and AhR/CAR mixed-type inducers (PCB 118, PCB 156). Moreover, the analysis of hepatic gene expression patterns after a single oral dose of either TCDD or PCB 153 revealed that the altered genes fundamentally differed. Profiling of significantly altered genes led to the conclusion that the changes in gene expression were associated with different signalling pathways, namely AhR and CAR.
To investigate the role of the AhR in mediating biological responses, several experimental approaches were pursued, such as the analysis of blood plasma metabolites in Ahr knockout and wild-type mice. Genotype-specific features and similarities were determined by HPLC-MS/MS analysis. Several plasma metabolites could be identified in both genotypes, but differences were also detected. Furthermore, an in vivo experiment was performed to characterize AhR-dependent and -independent effects in female Ahr knockout and wild-type mice. For this purpose, mice received a single oral dose of TCDD and were killed 96 h later. Microarray analysis of mouse livers revealed that although the Ahr gene was knocked out in Ahr-/- mice, the number of affected genes was of the same order of magnitude as in Ahr+/+ mice, but the pattern of altered genes distinctly differed. In addition, the relative liver weights of TCDD-treated Ahr+/+ mice were significantly increased, which led to the conclusion that TCDD induced the development of hepatic steatosis in female Ahr wild-type mice.
The in vitro experiments aimed to characterize the effects elicited by selected DLCs and PCB 153 in human liver cell models, using HepG2 cells and primary human hepatocytes. In general, primary human hepatocytes were less responsive than HepG2 cells. This was observed not only in EC values derived from the EROD assay, but also in microarray analysis in terms of differently regulated genes. In vitro REPs obtained from both liver cell models widely confirmed the current TEFs, although some deviations occurred. The comparison of the TCDD-altered genes in both human cell types revealed that only a comparatively small number of genes was commonly up-regulated in both liver cell models, such as the established AhR-regulated, highly inducible cytochrome P450s 1A1, 1A2, and 1B1 as well as other AhR target genes. Although the overlap was rather small, the TCDD-induced genes could be consistently associated with the broad spectrum of established dioxin-related biological responses. The gene expression pattern in primary human hepatocytes after treatment with selected DLCs (TCDD, 1-PnCDD, 4-PnCDF, and PCB 126) and PCB 153 was additionally characterized by microarray analysis. The highest response in terms of significantly altered genes was determined for TCDD, followed by 4-PnCDF, 1-PnCDD, and PCB 126, whereas exposure to PCB 153 did not evoke any significant changes in gene expression. The pattern of significantly altered genes was very homogeneous among the four congeners. Genes associated with well-established DLC-related biological responses as well as novel dioxin-inducible target genes were identified, with an extensive overlap in the genes up-regulated by all four DLCs. In conclusion, the results from the in vitro experiments in primary human hepatocytes provided fundamental insight into the congeners' potencies and the alterations they caused in gene expression patterns.
The obtained findings imply that although the extent of enzyme inducibility varied, the gene expression patterns largely coincide. Microarray analysis identified species-specific (mouse vs. human) as well as model-specific (in vitro vs. in vivo and transformed vs. untransformed cells) differences. In order to identify novel biomarkers for AhR activation upon treatment with dioxin-like compounds, five candidates were selected based on the microarray results: ALDH3A1, TIPARP, HSD17B2, CD36, and AhRR. Eventually, ALDH3A1 turned out to be the most reliable and suitable marker for exposure to DLCs in both human liver cell models, eliciting the highest mRNA inducibility among the five chosen candidates. How these species- and cell type-specific markers are involved in the dioxin-elicited toxic responses should be further characterized in vivo and in vitro.
This thesis is divided into two parts, both of which deal with multi-class image segmentation and utilize non-smooth optimization algorithms. The topic of the first part, unsupervised segmentation, is the application of clustering to image pixels. We therefore start with an introduction of the biconvex center-based clustering algorithms c-means and fuzzy c-means, where c denotes the number of classes, and show that fuzzy c-means can be seen as an approximation of c-means in terms of power means.
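The power-mean connection can be checked numerically: for a set of distances, the power mean tends to the minimum as the exponent goes to minus infinity, which is how the smooth fuzzy objective approximates the hard minimum used by c-means. A minimal sketch:

```python
# The power mean M_p(d) = (mean(d_i**p))**(1/p) interpolates between the
# arithmetic mean (p = 1) and the minimum (p -> -inf). Fuzzy c-means
# replaces the hard min over center distances by such a smooth power mean.
def power_mean(d, p):
    return (sum(x ** p for x in d) / len(d)) ** (1 / p)
```

For d = [1, 2, 4], M_1(d) is the arithmetic mean 7/3, while M_p(d) approaches min(d) = 1 for strongly negative p.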
Since noise is omnipresent in image data, these simple clustering models are not suitable for segmentation. To this end, we introduce a general, finite-dimensional segmentation model consisting of a data term stemming from the aforementioned clustering models plus a continuous regularization term. We tackle this optimization model via an alternating minimization approach called regularized c-centers (RcC): we fix the centers and optimize the segment membership of the pixels, and vice versa. In this general setting, we prove convergence in the sense of set-valued algorithms using Zangwill's theory [172].
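Stripped of the regularizer, the alternating structure of RcC reduces to plain c-means; a one-dimensional sketch of the two alternating updates (the data and initialization are hypothetical):

```python
import numpy as np

# Alternating minimization as in c-means: with centers fixed, each point
# joins its nearest center; with memberships fixed, each center becomes
# the mean of its class. RcC follows the same scheme with an added
# regularization term in the membership update.
def c_means(X, centers, iters=20):
    for _ in range(iters):
        # membership update: nearest center
        d = np.abs(X[:, None] - centers[None, :])
        labels = d.argmin(axis=1)
        # center update: mean of each class (keep old center if class empty)
        centers = np.array([X[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(len(centers))])
    return centers, labels
```

Each half-step decreases the objective, which is the monotonicity property that the set-valued convergence argument builds on.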
Further, we present a segmentation model with a total variation regularizer. While updating the cluster centers is straightforward for fixed segment memberships of the pixels, updating the segment memberships can be solved iteratively via non-smooth convex optimization. Here, we do not iterate a convex optimization algorithm until convergence; instead, to increase efficiency, we stop as soon as we have a certain amount of decrease in the objective functional. This algorithm is a particular implementation of RcC, and the corresponding convergence theory carries over. Moreover, we demonstrate the good performance of our method on various examples, such as simulated 2D images of brain tissue and 3D volumes of two materials, namely a multi-filament composite superconductor and a carbon fiber reinforced silicon carbide ceramic. For the latter material, we exploit in our adapted model the property that two of its components have no common boundary.
The second part of the thesis is concerned with supervised segmentation. We leave the area of center-based models and investigate convex approaches related to graph p-Laplacians and reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to construct the graph. In practical experiments we show, on the one hand, image types that are better segmented by the p-Laplacian model and, on the other hand, images that are better segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach provides smoother results, while the RKHS approach often provides more accurate and detailed segmentations. Finally, we propose a novel combination of both approaches to benefit from the advantages of both models and study its performance on challenging medical image data.
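The graph construction underlying such approaches (shown here for the unnormalized Laplacian, i.e. the p = 2 case) can be sketched with Gaussian weights on pixel features; the bandwidth sigma and the dense-matrix form are illustrative choices:

```python
import numpy as np

# Sketch: Gaussian edge weights w_ij = exp(-(f_i - f_j)^2 / sigma^2) on a
# pixel graph, and the resulting unnormalized graph Laplacian L = D - W.
def graph_laplacian(features, sigma=1.0):
    f = np.asarray(features, dtype=float)
    W = np.exp(-((f[:, None] - f[None, :]) ** 2) / sigma ** 2)
    np.fill_diagonal(W, 0.0)          # no self-loops
    D = np.diag(W.sum(axis=1))        # degree matrix
    return D - W
```

The choice of weights is exactly the design dimension studied in this part: similar pixels get strong edges, so minimizing the associated Laplacian energy favors segmentations that cut only weak edges.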
In the last few years much work has been done on the investigation of Brownian motion with point interaction(s) in one and higher dimensions. Roughly speaking, a Brownian motion with point interaction is a Brownian motion whose generator is perturbed by a measure supported at a single point.
The purpose of the present work is to introduce curve interactions of the two-dimensional Brownian motion for a closed curve \(\mathcal{C}\). We understand a curve interaction as a self-adjoint extension of the restriction of the Laplacian to the set of infinitely often continuously differentiable functions with compact support in \(\mathbb{R}^{2}\) that vanish on the closed curve. We give a full description of all these self-adjoint extensions.
In the second chapter we will prove a generalization of Tanaka's formula to \(\mathbb{R}^{2}\). We define \(g\) to be a so-called harmonic single layer with continuous layer function \(\eta\) in \(\mathbb{R}^{2}\). For such a function \(g\) we prove
\begin{align}
g\left(B_{t}\right)=g\left(B_{0}\right)+\int\limits_{0}^{t}{\nabla g\left(B_{s}\right)\mathrm{d}B_{s}}+\int\limits_{0}^{t}\eta\left(B_{s}\right)\mathrm{d}L\left(s,\mathcal{C}\right)
\end{align}
where \(B_{t}\) is the usual Brownian motion in \(\mathbb{R}^{2}\) and \(L\left(t,\mathcal{C}\right)\) is the associated unique local time process of \(B_{t}\) on the closed curve \(\mathcal{C}\).
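For orientation, this generalizes the classical one-dimensional Tanaka formula, in which the curve degenerates to a single point (the origin) and the layer function to the sign:
\begin{align}
\left|B_{t}\right|=\left|B_{0}\right|+\int\limits_{0}^{t}{\mathrm{sgn}\left(B_{s}\right)\mathrm{d}B_{s}}+L_{t}
\end{align}
where \(L_{t}\) denotes the local time of the one-dimensional Brownian motion at \(0\).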
We use the generalized Tanaka formula in the following chapter to construct classes of processes related to curve interactions. In a first step we obtain the generalization of point interactions; in a second step we obtain processes that behave like a Brownian motion in the complement of \(\mathcal{C}\) and have an additional movement along the curve in the time-scale of \(L\left(t,\mathcal{C}\right)\). Such processes do not exist in the one-point case, since there no movement is possible while the Brownian motion sits in the point.
By approximating a curve interaction by operators of the form Laplacian \(+V_{n}\) with "nice" potentials \(V_{n}\), we are able to deduce the existence of superprocesses related to curve interactions.
The last step is to give an approximation of these superprocesses by a system of branching particles. This approximation gives a better understanding of the related mass creation.
The research presented in this PhD thesis is a contribution to the field of anion recognition in competitive aqueous solvent mixtures. Neutral anion receptors having a cage-type architecture have been developed on the basis of triply-linked bis(cyclopeptides) and their binding properties toward various inorganic anions have been studied.
The synthetic approaches chosen to assemble the targeted container molecules rely on dynamic chemistry under the template effect of anions such as sulfate and halides. As reversible reactions, metal-ligand exchange and thiol-disulfide exchange were used. Disulfide exchange has previously provided singly- and doubly-linked bis(cyclopeptide) receptors whose anion affinities in 2:1 acetonitrile/water mixtures approached the nanomolar range. Metal-ligand interactions had so far not been used to assemble bis(cyclopeptides) in our group. The cyclopeptide building blocks required for both approaches, namely cyclic hexapeptides containing alternating 6-aminopicolinic acid and either (2S,4S)-4-cyanoproline or (2S,4S)-4-thioproline subunits, could be synthesized successfully.
Self-assembly of the bis(cyclopeptide) held together by coordinative interactions was attempted by treating the cyclopeptide trinitrile with square-planar palladium(II) complexes. The reaction was followed with different NMR spectroscopic techniques. Unfortunately, none of the experiments provided conclusive evidence that the targeted triply-linked cage was indeed formed.
Bis(cyclopeptides) containing three dithiol-derived linkers between the cyclopeptide rings could be synthesized successfully. Two complexes were isolated, albeit in small amounts, one containing linkers derived from 1,2-ethanedithiol and the other from 1,3-benzenedithiol, each with a sulfate anion incorporated in the cavity between the cyclopeptide rings. Formation of triply-linked bis(cyclopeptides) containing different types of linkers could be achieved by performing the synthesis in the presence of different dithiols. Unfortunately, the two C3-symmetrical bis(cyclopeptides) containing a single linker type could not be isolated in analytically pure form, so that only qualitative binding studies could be performed. Investigations in this context indicate an extraordinary sulfate affinity for these bis(cyclopeptides). In particular, the affinity of the receptor containing the 1,2-ethanedithiol linkers for sulfate anions is so high that it is even able to dissolve barium sulfate under appropriate conditions and presumably exceeds the sulfate affinity of the doubly-linked bis(cyclopeptides). The sulfate anion present in the cavity of this bis(cyclopeptide) can be replaced by a large number of other anions, i.e. by selenate, perrhenate, nitrate, tetrafluoroborate, hexafluorophosphate, and halides. None of these complexes proved to be as stable as the corresponding sulfate complex. In addition, 1H-NMR spectroscopic investigations provided information about the solution structure of the bis(cyclopeptide) anion complexes. Sulfate release from the cavity of the receptor is a slow process, while exchange of other anions is significantly faster. Another interesting feature observed for the sulfate and selenate complexes of the 1,2-ethanedithiol-containing bis(cyclopeptide) is the very slow H/D exchange rate with which protons on amide groups located inside the cavity of the cage are replaced by deuterium atoms in protic deuterated solvents.
This effect, in combination with the observation that the different deuterated bis(cyclopeptide) species exhibit individual amide NH signals in the 1H-NMR spectrum, is indicative of well-defined complex geometries with strong hydrogen-bonding interactions between the anion and the amide NH groups of the receptor. Following the H/D exchange rate in the presence of various salts indicated that anion exchange proceeds via the dissociated complex and not by direct replacement of one anion by another.
Tire-soil interaction is important for the performance of off-road vehicles and for soil compaction in agricultural fields. With an analytical model, integrated in multibody-simulation software, and a Finite Element (FE) model, the forces and moments generated in the tire-soil contact patch were studied to analyze the tire performance. Simulations with these two models were performed for different tire operating conditions to evaluate the mechanical behavior of an excavator tire. For the FE model validation, a single wheel tester connected to an excavator arm was designed. Field tests were carried out to examine the tire vertical stiffness, the contact pressure at the tire-hard ground interface, the longitudinal/vertical force, and the compaction of the sandy clay of the test field under specified operating conditions. The simulation and experimental results were compared to evaluate the model quality. The Magic Formula was used to fit the curves of longitudinal and lateral forces, so that a simplified tire-soil interaction model based on the fitted Magic Formula could be established and further applied to the simulation of vehicle-soil interaction.
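The Magic Formula referred to here is Pacejka's semi-empirical tire curve in its basic form; the coefficients below are generic illustrative values, not the parameters fitted in this work:

```python
import math

# Pacejka's Magic Formula in its basic form:
#   F(s) = D * sin(C * atan(B*s - E*(B*s - atan(B*s))))
# with stiffness B, shape C, peak D, and curvature E (illustrative values).
def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    bs = B * slip
    return D * math.sin(C * math.atan(bs - E * (bs - math.atan(bs))))
```

Fitting B, C, D, E to measured longitudinal and lateral force curves yields the compact tire model that can then be reused in vehicle-soil simulations without re-running the full FE contact computation.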
The aim of this work was the extension and development of a coupled Computational Fluid Dynamics (CFD) and population balance model (PBM) solver to enable a simulation-aided design of stirred liquid-liquid extraction columns. The principal idea is to develop a new design methodology based on a CFD-PBM approach and verify it against existing data and correlations. On this basis, it should become possible to predict the separation performance of any apparatus geometry without experimental input. Reliable "experiments in silico" (computer calculations) should give the engineer a valuable and user-friendly tool for early design studies at minimal cost.
The layout of extraction columns is currently based on experimental investigations from miniplant to pilot-plant scale and a scale-up to the industrial scale. The hydrodynamic properties can be varied by geometrical adjustments of the stirrer diameter, the stirrer height, the free cross-sectional area of the stator, the compartment height, as well as the positioning and size of additional baffles. The key parameter for liquid-liquid extraction is the yield, which is mainly determined at the in- and outlets of the column. Local phenomena such as the swirl structure are influenced by geometry changes; however, these local phenomena are generally neglected in state-of-the-art design methodologies because of the complex measurement techniques they require. A geometrical optimization of the column therefore still incurs costs for validation experiments as well as for the assembly and operation of the column, which can be reduced by numerical investigations. The simulation-based layout of counter-current extraction columns, at the beginning of this work still used mainly in academia, rested on one-dimensional simulations of extraction columns and first three-dimensional simulations. The one-dimensional simulations are based on experimentally derived, geometry-dependent correlations for the axial backmixing (axial dispersion), the hold-up, the phase fraction, the droplet sedimentation, and the energy dissipation. Combining these models with droplet population balance modeling yields a description of the complex droplet-droplet interactions (droplet size) along the column height. Three-dimensional CFD simulations give local information about the flow field (velocity, swirl structure) based on a numerical mesh corresponding to the real geometry. Coupling CFD with population balance modeling further provides information about the local droplet size, and back-coupling the droplet size to the CFD (drag model) improves the local hydrodynamics (e.g. hold-up, dispersed-phase velocity). CFD provided local information about the axial dispersion coefficient for simple geometrical designs (e.g. the Rotating Disc Contactor (RDC) column). First simulations of the RDC column using a two-dimensional rotational geometry combined with population balance modeling were performed and gave local information about the droplet size for different boundary conditions (rotational speed, different column sizes).
In this work, two different column types were simulated using an extended OpenSource CFD code. The first was the RDC column, which was mainly used for code development due to its simple geometry. The Kühni DN32 column is equipped with a six-baffled stirring device and flat baffles for disturbing the flow and requires a full three-dimensional description; this column type was mainly used for experimental validation of the simulations because of its low required volumetric flow rate. The Kühni DN60 column is similar to the Kühni DN32 column with slight changes to the stirring device (four baffles) and was used for scale-up investigations. For the experimental validation of the hydrodynamics, laser-based measurement techniques such as Particle Image Velocimetry (PIV) and Laser Induced Fluorescence (LIF) were used. Good agreement was found between the experimentally derived values for velocity, hold-up, and energy dissipation, experimentally derived correlations from the literature, and the simulations with a modified Euler-Euler based OpenSource CFD code. The experimentally derived axial dispersion coefficient was further compared to Euler-Lagrange simulations. The experimentally derived correlations for the Kühni DN32 in the literature fit the simulated values, and the axial dispersion coefficient for the dispersed phase satisfied a correlation from the literature. However, due to the complexity of measuring the dispersed-phase axial dispersion coefficient, the available correlations do not agree distinctly with each other.
The modified Euler-Euler OpenSource CFD code was coupled with a one-group population balance model. The implementation was validated against the analytical solution of the population balance equation for constant breakage and coalescence kernels. A further validation of the population balance transport equation was performed by comparing the results of a five-compartment section to those of the commercial CFD code FLUENT using the Quadrature Method of Moments (QMOM).
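The kind of analytical check mentioned here can be reproduced for the total number density with a constant coalescence kernel and no breakage; the kernel value a and the initial density N0 below are illustrative:

```python
# Sketch: for a constant coalescence kernel a (and no breakage), the total
# number density obeys dN/dt = -a * N^2 / 2, with analytical solution
#   N(t) = N0 / (1 + a * N0 * t / 2).
# A simple Euler integration should reproduce this closed form.
def euler_coalescence(N0, a, t_end, dt=1e-4):
    N, t = N0, 0.0
    while t < t_end:
        N += dt * (-a * N * N / 2.0)
        t += dt
    return N

N_num = euler_coalescence(N0=1.0, a=1.0, t_end=2.0)
N_ana = 1.0 / (1.0 + 1.0 * 1.0 * 2.0 / 2.0)  # analytical value 0.5
```

Matching the numerical moments of the implemented population balance against such closed-form solutions is the standard sanity check before coupling the model to the CFD solver.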
For the simulation of the droplet-droplet interactions in liquid-liquid extraction columns, several breakage and coalescence models are available in the literature. The models were compared to each other using the one-group population balance model in Matlab, which allows the determination of the minimum stable droplet diameter at a certain energy dissipation. Based on this representation, it was possible to determine the parameters of a specific breakage and coalescence model combination that allowed the simulation of a Kühni miniplant column at different rotational speeds. The resulting simulated droplet size was in very good agreement with the experimentally derived droplet size from the literature. Several column designs of the DN32 were investigated by changing the compartment height and the axial stirrer position. It could be shown that lowering the stirrer position increases the phase fraction inside the compartment. At the same time, the droplet size decreases inside the compartment, which allows a higher mass transfer due to a higher available interfacial area. However, the shift is expected to result in earlier flooding of the column due to a compressed flow structure underneath the stirring device. In a next step, the code was further extended by mass transfer equations based on the two-film theory, and mass transfer coefficient models for the dispersed and continuous phases were investigated for the RDC column design.
A first mass transfer simulation of a full miniplant column was performed. The change in concentration was accounted for via the concentration dependence of the mixture density, viscosity, and interfacial tension, which affects the calculation of the droplet size. The results of the column simulation were compared to own experimental data; it could be shown that the concentration profile along the column height can be predicted by the presented CFD/population balance/mass transfer code. The droplet size decreases along the column height in accordance with the interfacial tension, and the simulated droplet size at the outlet is in good agreement with the experimentally derived one.
Besides a mono-dispersed droplet size, high breakage may lead to the generation of small satellite droplets, while coalescence underneath the stator leads to larger droplets inside the column and hence to a change of the hold-up and of the flooding point. A multi-phase code was extended by the Sectional Quadrature Method of Moments (SQMOM), allowing the modeling of bimodal or multimodal droplet size distributions. The implementation was in good agreement with the analytical solution. In addition, the simulation of an RDC column section showed the different distributions of the smaller and larger droplets: the smaller droplets tend to follow the continuous-phase flow structure and are more widely distributed inside the column, whereas the larger droplets tend to rise directly through the column and show only a low influence on the continuous-phase flow.
The current results strengthen the case for using CFD in the layout of liquid-liquid extraction columns in the future. The coupling of CFD/PBM and mass transfer using an open-source CFD code allows the investigation of computationally intensive column designs (e.g. pilot plant columns). Furthermore, the coupled code enhances the accuracy of the hydrodynamic simulations and leads to a better understanding of counter-current liquid-liquid extraction columns. The obtained correlations were finally used as input for one-dimensional mass transfer simulations, where a perfect fit of the concentration profiles at varied boundary conditions could be obtained. By using the multi-scale approach, the computational time for mass transfer simulations could be reduced to minutes. In the future, with increasing computational power, a further extension of the multiphase CFD/SQMOM model including mass transfer equations will provide an efficient tool to model multimodal and multivariate systems such as bubble column reactors.
Due to tremendous improvements in high-performance computing resources as well as numerical advances, computational simulation has become a common tool for modern engineers. Nowadays, the simulation of complex physics is increasingly substituting for a large number of physical experiments. While the vast compute power of large-scale high-performance systems has enabled the simulation of more complex numerical equations, handling the ever-increasing amount of data with high spatial and temporal resolution poses new challenges to scientists. Huge hardware and energy costs call for efficient utilization of high-performance systems. However, the increasing complexity of simulations raises the risk of failures that force a single simulation to be restarted multiple times. Computational steering is a promising approach to interact with running simulations and could prevent such simulation crashes. Moreover, the gap between the amount of data that can be computed and the amount that can be processed keeps widening; extreme-scale simulations produce more data than can even be stored. In this thesis, I propose several methods that enhance the process of steering, exploring, visualizing, and analyzing ongoing numerical simulations.
There is a growing trend toward ever larger wireless sensor networks (WSNs) consisting of thousands or tens of thousands of sensor nodes (e.g., [91, 79]). We believe this trend will continue, and thus scalability plays a crucial role in all protocols and mechanisms for WSNs. Another trend in many modern WSN applications is the time-sensitive delivery of information from sensors to sinks. In particular, WSNs are a central part of the vision of cyber-physical systems, and as these are basically closed-loop systems, many WSN applications will have to operate under stringent timing requirements. Hence, it is crucial to develop algorithms that minimize the worst-case delay in WSNs. In addition, almost all WSNs consist of battery-powered nodes, and thus energy efficiency clearly remains another premier goal in order to keep network lifetime high. This dissertation presents and evaluates designs for WSNs using multiple sinks to achieve high lifetime and low delay. Firstly, we investigate random and deterministic node placement strategies for large-scale and time-sensitive WSNs. In particular, we focus on tiling-based deterministic node placement strategies and analyze their effects on coverage, lifetime, and delay performance under both exact placement and stochastically disturbed placement. Next, we present sink placement strategies, which constitute the main contributions of this dissertation. Static sinks will be placed, and mobile sinks will be given a trajectory. A proper sink placement strategy can improve the performance of a WSN significantly. In general, optimal sink placement with lifetime maximization is an NP-hard problem; the problem is even harder if delay is taken into account. In order to achieve both lifetime and delay goals, we focus on the problem of placing multiple (static) sinks such that the maximum worst-case delay is minimized while keeping the energy consumption as low as possible.
Different target networks may need a corresponding sink placement strategy under differing levels of a priori assumptions. Therefore, we first develop an algorithm based on the Genetic Algorithm (GA) paradigm for known sensor node locations. For a network where global information is not feasible, we introduce a self-organized sink placement (SOSP) strategy. While GA-based sink placement achieves a near-optimal solution, SOSP provides a good sink placement strategy with lower communication overhead. How to plan the trajectories of many mobile sinks in very large WSNs in order to simultaneously achieve lifetime and delay goals had not been treated so far in the literature. Therefore, we delve into this difficult problem and propose a heuristic framework using multiple orbits for the sinks' trajectories. The framework is designed based on geometric arguments to achieve both high lifetime and low delay. In simulations, we compare two different instances of our framework, one conceived based on a load-balancing argument and one based on a distance-minimization argument, with a set of different competitors spanning from statically placed sinks to battery-state-aware strategies. We find that our heuristics outperform the competitors in both lifetime and delay. Furthermore, and probably even more importantly, the heuristic, while keeping its good delay and lifetime performance, scales well with an increasing number of sinks. In brief, the goal of this dissertation is to show that carefully placing nodes and sinks in conventional WSNs, as well as planning trajectories in mobility-enabled WSNs, really pays off for large-scale and time-sensitive WSNs.
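The GA-based sink placement is described here only at a high level. A minimal sketch of the idea follows, using the worst-case sensor-to-nearest-sink distance as a delay proxy for the fitness; all parameters, operators, and names are illustrative assumptions, not the dissertation's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(200, 2))   # known sensor node locations (assumed)
K = 3                                          # number of sinks to place

def worst_case(sinks):
    """Delay proxy: worst-case distance from any sensor to its nearest sink."""
    d = np.linalg.norm(sensors[:, None, :] - sinks[None, :, :], axis=2)
    return d.min(axis=1).max()

# GA over flattened sink coordinates, with elitism
pop = rng.uniform(0, 100, size=(40, K * 2))
f0 = min(worst_case(p.reshape(K, 2)) for p in pop)   # best initial fitness
for gen in range(60):
    fit = np.array([worst_case(p.reshape(K, 2)) for p in pop])
    pop = pop[np.argsort(fit)]
    elite = pop[:10]
    kids = []
    for _ in range(30):
        # uniform crossover between two random elites + Gaussian mutation
        a, b = elite[rng.integers(10, size=2)]
        mask = rng.random(K * 2) < 0.5
        child = np.where(mask, a, b) + rng.normal(0.0, 2.0, K * 2)
        kids.append(np.clip(child, 0, 100))
    pop = np.vstack([elite] + kids)

best = pop[0].reshape(K, 2)
print(f"worst-case distance of best placement: {worst_case(best):.1f}")
```

Elitism guarantees that the best placement never degrades across generations, which is what the assertion-style comparison against the initial population exploits.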
Constructing accurate earth models from seismic data is a challenging task. Traditional methods rely on ray-based approximations of the wave equation and reach their limit in geologically complex areas. Full waveform inversion (FWI), on the other hand, seeks to minimize the misfit between modeled and observed data without such approximations.
While superior in accuracy, FWI uses a gradient-based iterative scheme that also makes it very computationally expensive. In this thesis we analyse and test an Alternating Direction Implicit (ADI) scheme in order to reduce the cost of the two-dimensional time-domain algorithm for solving the acoustic wave equation. The ADI scheme can be seen as an intermediate between explicit and implicit finite difference modeling schemes. Compared to fully implicit schemes, the ADI scheme only requires the solution of much smaller matrices and is thus less computationally demanding. Using ADI we can handle coarser discretizations than with an explicit method. Although the order of convergence and the CFL conditions for the examined explicit method and the ADI scheme are comparable, we observe that the ADI scheme is less prone to dispersion. Further, our algorithm is efficiently parallelized with vectorization and threading techniques. In a numerical comparison, we demonstrate a runtime advantage of the ADI scheme over an explicit method of the same accuracy.
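Each implicit sweep of an ADI step reduces to independent tridiagonal systems along grid lines (one sweep along rows, one along columns). A minimal sketch of the Thomas algorithm used for such solves, illustrative only and not the thesis implementation:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, and c the super-diagonal
    (c[-1] unused). In an ADI sweep, each grid line yields one such
    system, e.g. (I - r*Dxx) u* = rhs along every row."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The O(n) cost per line, rather than a full 2D implicit solve, is what makes the ADI scheme cheaper than a fully implicit method.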
With the modeling in place, we test and compare several inverse schemes in the second part of the thesis. With the goal of avoiding local minima and improving the speed of convergence, we use different misfit functions and hierarchical approaches. In several tests, we demonstrate superior results for the L1 norm compared to the L2 norm, especially in the presence of noise. Furthermore, we show positive effects of applying three different multiscale approaches to the inverse problem. These methods focus on low frequencies, early recording times, or far offsets during early iterations of the minimization and then proceed iteratively towards the full problem. We achieve the best results with the frequency-based multiscale scheme, for which we also provide a heuristic method of choosing iteratively increasing frequency bands.
Finally, we demonstrate the effectiveness of the different methods first on the Marmousi model and then on an extract of the 2004 BP model, where we are able to recover both high contrast top salt structures and lower contrast inclusions accurately.
Recent progress in the field of consumer electronics, driven by display technologies as well as the sector of mobile, hand-held devices, enables new ways of presenting information to users and new ways of user interaction, thereby providing a basis for user-centered applications and work environments.
My thesis focuses on how arbitrary display environments can be utilized to improve the user experience, regarding the perception of information, and to provide intuitive interaction possibilities. On the one hand, advances in display technologies provide the basis for new ways of visualizing content and of collaborative work; on the other hand, fast-moving developments in the consumer market, especially the market for smartphones, offer the potential to enhance usability in terms of interaction and can therefore provide additional benefit for users.
Tiled display setups, combining both large screen real estate and high resolution, provide new possibilities for visualizing large datasets and facilitate collaboration in front of a large screen area. Furthermore, these display setups present several advantages over traditional single-user workspace environments: contrary to single-user workspaces, multiple users are able to explore a dataset displayed on a tiled display system at the same time, thus allowing new forms of collaborative work. This enables face-to-face discussions, adding further value. Large displays also exploit the user's spatial memory, allowing physical navigation without the need to switch between different windows to explore information.
With Tiled++ I contributed a versatile approach to address the bezel problem, one of the top ten research challenges in the field of LCD-based tiled wall setups. The Tiled++ approach creates a large high-resolution Focus & Context screen, combining high-resolution focus areas with low-resolution context information projected onto the bezel area.
Additionally, user interaction poses an important challenge, especially regarding the utilization of large tiled displays, since traditional keyboard-and-mouse interaction devices have reached their limits. My focus in this thesis is on mobile HCI: devices like mobile phones are utilized to interact with large displays, since they feature various interaction modalities and preserve user mobility.
Large public displays, as a modernized form of traditional bulletin boards, also enable new ways of handling information, displaying content, and user interaction. Deployed in hot spots, Digital Interactive Public Pinboards can provide an adequate answer to pressing issues such as disaster and crisis management, for both responders and citizens, as well as new ways of handling information flow (contribution, distribution, and access). My contribution to the research field of public display environments is the conception and implementation of an easy-to-use and easy-to-set-up architecture that overcomes shortcomings of current approaches and covers the needs of aid personnel.
Although a niche, Virtual Reality (VR) environments can provide additional value for visualizing specific content. Disciplines such as the earth sciences and geology, mechanical engineering, design, and architecture can benefit from VR environments. In order to accommodate the variety of users, I introduce a more intuitive and user-friendly interaction metaphor, the ARC metaphor.
Visualization challenges stem from the need to cope with ever more complex datasets and to bridge the gap between comprehensibility and loss of information. Furthermore, the visualization approach has to be appropriate for its audience, which is crucial when working in interdisciplinary teams where levels of prior knowledge differ: users have to be able to grasp the visualized content in a fast and reliable way. My contributions here are visualization approaches in the field of supportive visualization.
Finally, my work illuminates how the synthesis of visualization, interaction, and display technologies enhances the user experience. I promote a holistic view: the user is brought back into the focus of attention and provided with a tool set that supports them without overextending the abilities of, for example, non-expert users, a crucial factor in the increasingly interdisciplinary field of computer science.
The use of trading stops is a common practice in financial markets for a variety of reasons: it provides a simple way to control losses on a given trade, while also ensuring that profit-taking is not deferred indefinitely; and it allows opportunities to consider reallocating resources to other investments. In this thesis, it is explained why the use of stops may be desirable in certain cases.
This is done by proposing a simple objective to be optimized. Some simple and commonly used rules for the placing and use of stops are investigated, consisting of fixed or moving barriers with fixed transaction costs. It is shown how to identify optimal levels at which to set stops, and the performances of different rules and strategies are compared. Uncertainty about, and changes in, the drift parameter of the investment are also incorporated.
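As an illustration of the kind of rule studied, a Monte Carlo sketch of a fixed stop barrier with a fixed transaction cost on a geometric Brownian motion follows; all parameter values are assumptions for illustration, not the thesis's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Geometric Brownian motion parameters (illustrative assumptions)
S0, mu, sigma = 100.0, 0.08, 0.25
T, n, npaths = 1.0, 252, 20000
stop = 90.0    # hypothetical fixed stop barrier
cost = 0.1     # fixed transaction cost charged on liquidation

dt = T / n
increments = (mu - 0.5 * sigma**2) * dt \
    + sigma * np.sqrt(dt) * rng.standard_normal((npaths, n))
paths = S0 * np.exp(np.cumsum(increments, axis=1))

# Liquidate the first time the price touches or crosses the barrier,
# otherwise hold to maturity.
hit = paths <= stop
stopped = hit.any(axis=1)
first = hit.argmax(axis=1)
payoff = np.where(stopped, paths[np.arange(npaths), first] - cost,
                  paths[:, -1])
print(f"mean terminal value with stop: {payoff.mean():.2f}")
```

Varying the barrier level (or replacing it with a trailing barrier) and comparing the resulting objective values is the kind of comparison carried out analytically in the thesis.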
Cyanobacteria are the only prokaryotes with the ability to conduct oxygenic photosynthesis and therefore had a major influence on the evolution of life on earth. Their diverse morphology was traditionally the basis for taxonomy and classification. For example, the genus Chroococcidiopsis has been classified within the order Pleurocapsales based on a unique reproduction mode via baeocytes. Recent phylogenetic results suggested a closer relationship of this genus to the order Nostocales. However, these studies were based mostly on the highly conserved 16S rRNA gene and a small selection of Chroococcidiopsis strains. One aim of the present thesis was to investigate the evolutionary relationships of the genus Chroococcidiopsis, the Pleurocapsales and the remaining cyanobacteria using the 16S rRNA, rpoC1 and gyrB genes. Both the single-gene and the multigene analyses of 97 strains clearly showed a separation of the genus Chroococcidiopsis from the Pleurocapsales. Furthermore, a sister relationship between the genus Chroococcidiopsis and the order Nostocales was confirmed. Consequently, the monogeneric family Chroococcidiopsidaceae Geitler ex. Büdel, Donner & Kauff familia nova is justified. The phylogenetic analyses also revealed the polyphyly of the remaining Pleurocapsales, as the strain Pleurocapsa PCC 7327 was always separated from the other strains. This is supported by differences in their metabolism, ecology and physiology.
A second aim of this study was to investigate the thylakoid arrangement of Chroococcidiopsis and a selection of cyanobacterial strains. The investigation of 13 strains with low-temperature scanning electron microscopy revealed two previously unknown thylakoid arrangements within Chroococcidiopsis (parietal and stacked). This result revised the knowledge of the thylakoid arrangement in this genus; previously, only a coiled arrangement was known for three strains. Based on the data of 66 strains, thylakoid arrangement was tested as a potential feature for the morphological identification of cyanobacteria. The results showed a strong relationship between the group assignment of cyanobacteria and their thylakoid arrangements. Hence, it is in general possible to infer from this phenotypic character the affiliation with a particular family, order or genus.
The third aim of this study was to investigate biogeographical patterns of the worldwide distributed genus Chroococcidiopsis. The phylogenetic analysis suggested that the genus does not show biogeographical patterns, which is in contrast with a recent study on hypolithically living Chroococcidiopsis strains and with the majority of phylogeographic analyses of microorganisms. Further analysis showed no separation of different life strategies within the genus. These results could be related to the genetic markers utilized, which may not contain biogeographical information. Hence the present study can neither exclude nor prove the possibility of biogeographic and life-strategy patterns in the genus Chroococcidiopsis.
Future research should focus on finding appropriate genetic markers to investigate the evolutionary relationships and biogeographical patterns within Chroococcidiopsis.
This thesis deals with generalized inverses, multivariate polynomial interpolation and approximation of scattered data. Moreover, it covers the lifting scheme, which basically links the aforementioned topics. For instance, determining filters for the lifting scheme is connected to multivariate polynomial interpolation. More precisely, sets of interpolation sites are required that can be interpolated by a unique polynomial of a certain degree. In this thesis a new class of such sets is introduced and elements from this class are used to construct new and computationally more efficient filters for the lifting scheme.
Furthermore, a method to approximate multidimensional scattered data is introduced which is based on the lifting scheme. A major task in this method is to solve an ordinary linear least squares problem which possesses a special structure. Exploiting this structure yields better approximations and therefore this particular least squares problem is analyzed in detail. This leads to a characterization of special generalized inverses with partially prescribed image spaces.
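The lifting scheme mentioned above splits a signal into even and odd samples, predicts the odd samples from the even ones, and updates the even samples. A minimal Haar-type predict/update pair with perfect reconstruction, illustrative only and not the new filters constructed in the thesis:

```python
import numpy as np

def lift_forward(x):
    """One Haar-type lifting step: split, predict, update."""
    s = x[0::2].astype(float).copy()   # even samples
    d = x[1::2].astype(float).copy()   # odd samples
    d -= s            # predict: odd sample approximated by its even neighbor
    s += d / 2.0      # update: even channel keeps the running mean
    return s, d

def lift_inverse(s, d):
    """Invert the step by undoing update and predict, then merging."""
    s = s - d / 2.0
    d = d + s
    x = np.empty(s.size + d.size)
    x[0::2] = s
    x[1::2] = d
    return x
```

Because each lifting step is inverted by replaying the same operations with reversed signs and order, perfect reconstruction holds regardless of the filters chosen, which is what makes designing more efficient predict/update filters attractive.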
The application behind the subject of this thesis is multiscale simulations of highly heterogeneous particle-reinforced composites with large jumps in their material coefficients. Such simulations are used, e.g., for the prediction of elastic properties. As the underlying microstructures have very complex geometries, a discretization by means of finite elements typically involves very finely resolved meshes. The latter results in discretized linear systems of more than \(10^8\) unknowns which need to be solved efficiently. However, the variation of the material coefficients even on very small scales causes the failure of most available methods when solving the arising linear systems. While robust domain decomposition methods have been developed for scalar elliptic problems of multiscale character, their extension and application to 3D elasticity problems still need to be established.
The focus of the thesis lies in the development and analysis of robust overlapping domain decomposition methods for multiscale problems in linear elasticity. The method combines corrections on local subdomains with a global correction on a coarser grid. As the robustness of the overall method is mainly determined by how well small scale features of the solution can be captured on the coarser grid levels, robust multiscale coarsening strategies need to be developed which properly transfer information between fine and coarse grids.
We carry out a detailed and novel analysis of two-level overlapping domain decomposition methods for elasticity problems. The study also provides a concept for the construction of multiscale coarsening strategies to robustly solve the discretized linear systems, i.e. with iteration numbers independent of variations in the Young's modulus and the Poisson ratio of the underlying composite. The theory also captures anisotropic elasticity problems and allows applications to multi-phase elastic materials with non-isotropic constituents in two and three spatial dimensions.
Moreover, we develop and construct new multiscale coarsening strategies and show why they should be preferred over standard ones on several model problems. In a parallel implementation (MPI) of the developed methods, we present applications to real composites and robustly solve discretized systems of more than \(200\) million unknowns.
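The combination of overlapping local subdomain corrections can be illustrated on a 1D Poisson model problem; the sketch below runs an alternating overlapping Schwarz iteration (illustrative only: the thesis treats 3D elasticity with an additional coarse-grid correction, which a 1D model cannot capture):

```python
import numpy as np

# Model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, exact u = x(1-x)/2.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)          # initial guess; already satisfies the outer BCs

def solve_patch(u, lo, hi):
    """Dirichlet solve of -u'' = f on grid points lo..hi-1, taking the
    boundary values u[lo-1] and u[hi] from the current iterate."""
    m = hi - lo
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f[lo:hi].copy()
    rhs[0] += u[lo - 1] / h**2
    rhs[-1] += u[hi] / h**2
    u[lo:hi] = np.linalg.solve(A, rhs)

# Alternating Schwarz over two subdomains overlapping on roughly (0.35, 0.65)
for _ in range(30):
    solve_patch(u, 1, 65)        # left subdomain (0, 0.65)
    solve_patch(u, 35, n - 1)    # right subdomain (0.34, 1)

err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
print(f"max error after Schwarz iteration: {err:.2e}")
```

In 1D the error contracts geometrically with a rate set by the overlap width; for the multiscale elasticity problems of the thesis, robustness with respect to coefficient jumps additionally requires the carefully constructed coarse spaces discussed above.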
Factorization of multivariate polynomials is a cornerstone of many applications in computer algebra. The standard approach goes back to an algorithm by Zassenhaus, who used it in 1969 to factorize univariate polynomials over \(\mathbb{Z}\). Later, Musser generalized it to the multivariate case. Subsequently, the algorithm was refined and improved.
In this work, every step of the algorithm is described, along with the problems that arise in these steps.
In doing so, we restrict to the coefficient domains \(\mathbb{F}_{q}\), \(\mathbb{Z}\), and \(\mathbb{Q}(\alpha)\) while focussing on a fast implementation. The author has implemented almost all algorithms mentioned in this work in the C++ library factory which is part of the computer algebra system Singular.
In addition, a new bound on the coefficients of a factor of a multivariate polynomial over \(\mathbb{Q}(\alpha)\) is proven which does not require \(\alpha\) to be an algebraic integer. This bound is used to compute Hensel lifting and recombination of factors in a modular fashion. Furthermore, several sub-steps are improved.
Finally, an overview of the capability of the implementation is given, which includes benchmark examples as well as randomly generated input intended to give an impression of the average performance.
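For a sense of what such a factorization routine computes, here is an illustration using SymPy (not the factory/Singular implementation described in this work), touching the coefficient domains mentioned above:

```python
from sympy import symbols, factor, factor_list, sqrt, expand

x, y = symbols('x y')

# Multivariate factorization over Q
p = x**4 - y**4
print(factor(p))   # factors as (x - y)(x + y)(x^2 + y^2)

# Factorization over a finite field F_5
print(factor(x**2 + 1, modulus=5))

# Factorization over the algebraic extension Q(sqrt(2))
print(factor(x**2 - 2, extension=sqrt(2)))
```

The hard part hidden behind such calls, and the subject of this work, is doing Hensel lifting and factor recombination efficiently, with coefficient bounds controlling the modular computations.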
This thesis reports on investigations of the structure and reactivity of dipeptide-alkali metal complexes, a series of ruthenium-bearing catalysts, dysprosium-based single molecule magnets and organometallic di-cobalt complexes. A variety of experimental and theoretical methods were used, depending on the problem: collision-induced dissociation (CID), hydrogen/deuterium (H/D) exchange reactions, gas phase reactions with \(D_2\), infrared multiple-photon dissociation (IR-MPD) and the determination of minimum energy structures, IR absorption spectra, transition states and electronic transitions based on density functional theory (DFT).
A case study was carried out to explore the influence of alkali metal ions on the gas phase structure of the dipeptide Carnosine. CID experiments on protonated Carnosine and its alkali metal complexes in an ion trap resulted in different fragmentation pathways depending on the size of the alkali metal. The complexation of small ions (\(Li^+\) and \(Na^+\)) promoted the cleavage of bonds in the molecule's backbone under CID, while \(Rb^+\)- and \(Cs^+\)-Carnosine complexes underwent the exclusive loss of the alkali metal. CID breakdown curves reflected the different binding behavior of the alkali ions to Carnosine. Gas phase H/D exchange reactions with \(D_2O\) resulted in the exchange of several protons of the protonated dipeptide, while its alkali metal complexes underwent no exchange reactions. DFT-derived energetic minimum isomers exhibited only charge-solvated tridentate structures, whereas both salt-bridge and charge-solvated binding motifs are reported in the literature for complexes of alkali metal ions and oligopeptides. This study was published in a similar version as a paper in Zeitschrift für Physikalische Chemie.
A combination of the four dipeptides Carnosine, Anserine, GlyHis and HisGly with alkali metal ions was investigated with the help of CID, IR-MPD spectroscopy and H/D exchange reactions with \(ND_3\). The aim of the survey was to elucidate the influence of the methyl group at the histidine ring, of the peptide sequence and of the chain length on the binding motifs of the alkali ions. The experimental results were compared to DFT-derived energetic minimum isomers. Moderate agreement was found between DFT-predicted IR absorptions and the IR-MPD spectra. A systematic nomenclature was developed reflecting all binding motifs of the four dipeptides to alkali ions. Carnosine complexes all alkali metal ions in a uniform motif. DFT-derived energetic minimum isomers of the three other dipeptides showed strong conformational changes with increasing size of the alkali ion. The most favored binding motif of all peptides was the tridentate complexation of the alkali ion by a carboxylic and an amidic oxygen atom, while the electron-donating nitrogen atom belongs either to the histidine ring or to the amine group. The ability to form hydrogen bonds in a certain binding motif is essential for the preference of the histidine or amine nitrogen atom as an electron donor. The charge-solvated binding motif is the most common among all isomers found. Several structures exhibited hydrogen-bonded protons; these can be interpreted as intermediates between the charge-solvated and the salt-bridge binding motif. CID breakdown curves of the cationic complexes of the dipeptides with \(K^+\), \(Rb^+\) and \(Cs^+\) resulted in a fair agreement of \(E^{50\%}_{com}\) values with DFT-derived Gibbs free binding energies. CID led to multiple fragments of the \(Li^+\) and \(Na^+\) dipeptide complexes and to an insufficient correlation between the \(E^{50\%}_{com}\) values and the metal-dipeptide free binding enthalpies.
Gas phase H/D exchange reactions of the protonated dipeptides with \(ND_3\) resulted in the exchange of all labile protons with comparable relative partial rate constants. The assumption of coexisting single and double exchange reactions per single collision enhanced the quality of the pseudo-first-order kinetic fits of the experimentally derived data. The \(Li^+\), \(Na^+\) and \(K^+\) complexes of the dipeptides exhibited a reduction in the number of exchanged protons, significantly lower rate constants for H/D exchange and only single exchange reactions.
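The pseudo-first-order kinetic fits mentioned above amount to fitting an exponential decay of the parent-ion intensity under excess reagent gas. A minimal sketch on synthetic, purely hypothetical data (not the thesis's measurements or its multi-channel fit model):

```python
import numpy as np

# Synthetic pseudo-first-order decay of the parent-ion intensity under
# excess ND3 (hypothetical numbers, for illustration only).
k_true = 0.8                      # assumed rate constant [s^-1]
t = np.linspace(0.0, 5.0, 20)     # reaction times [s]
intensity = np.exp(-k_true * t)   # normalized parent-ion intensity

# Pseudo-first-order kinetics: ln I(t) = -k t, so k is minus the slope
# of a linear fit to the log-intensity.
k_fit = -np.polyfit(t, np.log(intensity), 1)[0]
print(f"fitted rate constant: {k_fit:.3f} s^-1")
```

The actual analysis in the thesis fits coupled sequential exchanges (single and double exchange per collision), but each channel is still governed by this exponential form.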
The complexation of the doubly charged transition metal cation \(Zn^{2+}\) by deprotonated Carnosine led to crucial conformational changes with respect to the alkali metal complexes. Earlier DFT calculations on the gas phase structure of \([Carn-H,Zn^{II}]^+\) were now compared to IR-MPD spectra. The IR-MPD spectra exhibited several of the DFT-predicted IR absorptions, while the overall agreement in the position of bands is only partially satisfactory. The complex \([Carn-H,Zn^{II}]^+\) was furthermore used to study the band-dependent enhancement of fragmentation efficiency by application of a resonant 2-color IR-MPD pump/probe scheme. In the literature, it is assumed that the slopes of linear fits to the log-log plot of experimental data (fragmentation efficiency vs. laser pulse energy) correlate with the number of photons needed for fragmentation. No reasonable number of photons for the fragmentation of the molecule was derived with this approach. However, it could be shown that the number of photons of the pump laser needed for fragmentation is reduced by the use of a second IR color. Changing the delay between the pump and probe laser pulses influenced the shape of the absorption bands. Irradiation with the probe laser pulse before the pump laser caused a heating of the molecule, which resulted in a broadening of bands. No broadening was observed when the probe laser was applied simultaneously with or after the pump laser. CID and IR-MPD fragmentation channels differed in their relative abundance. Furthermore, the relative abundances of fragments were specific to the excited vibrational motions. This study provides essential approaches for the further study of the mechanism of resonant 2-color IR-MPD spectroscopy.
Several ruthenium catalysts for transfer hydrogenation reactions were synthesized by L. Ghoochany (research group W. Thiel, TU Kaiserslautern). CID measurements on isotopically labeled species led to the following conclusion about the activation process of the catalyst: a nitrogen-ruthenium bond is broken, the pyrimidine ring of the substituted 2-R-4-(2-pyridinyl)pyrimidine ligand rotates by about 160°, and a carbon-ruthenium bond is formed under subsequent loss of an HCl (or DCl) molecule. The mass spectrometer's CID amplitude was calibrated with a set of “thermometer ions”. CID breakdown curves were used for the determination of \(E^{50\%}_{com}\) values of three differently substituted catalysts. Finally, activation energies were estimated by means of the calibration. The resulting activation energies showed a qualitative correlation to DFT-derived activation energies. These results are part of a manuscript which was submitted to Chemistry – A European Journal and is currently in the review process. Further studies on this series of transition metal complexes included CID on ligand-exchanged species, 1- and 2-color IR-MPD spectroscopy, gas phase reactions with \(D_2\) and DFT-based modeling of the reaction coordinate of the \(D_2\) insertion. The exchange of the anionic chlorido ligand in solution led to three complexes with different fragmentation thresholds. CID-derived activation amplitudes correspond well to the order predicted by the hard/soft acids/bases (HSAB) concept. 1-color IR-MPD experiments on two complexes showed only a few bands. Resonant 2-color IR-MPD increased the overall fragmentation efficiency and uncovered several dark bands. DFT-derived IR absorption spectra correlate well with the IR-MPD spectra, while some bands are still not observable. Gas phase reactions with \(D_2\) showed an increase of the mass of the activated complex by +4 m/z.
This was interpreted in terms of an incorporation of a \(D_2\) molecule under heterolytic cleavage of the \(D_2\) molecule and can be regarded as the back reaction of the activation. The reaction coordinate of the \(D_2\) incorporation was modeled with DFT at the B3LYP/cc-pVTZ level of theory, and different activation energies were derived depending on the substituent. Reactions of three differently substituted complexes with \(D_2\) resulted in different relative partial rate constants. The comparison with rate constants derived from transition state theory showed a qualitative but not quantitative correlation with the experimental results. This study contributes to our ongoing work on the assignment and isolation of reaction intermediates in the gas phase.
A series of dysprosium-based complexes was synthesized by A. Bhunia (research group P. W. Roesky, KIT) and studied within the collaborative research center SFB/TRR 88 “3MET”. We contributed to this work with ESI-MS, CID and experiments on H/D exchange reactions with \(ND_3\) in the gas phase. These complexes consist of a central triply charged dysprosium cation and two identical salen-type ligands which allow for the complexation of up to two transition metals. The monometallic dysprosium complex shows single molecule magnet (SMM) behavior in SQUID measurements, while the incorporation of two doubly charged manganese cations leads to ferromagnetic behavior. The interaction of the terminal amine groups with the manganese ions hindered the H/D exchange reaction with \(ND_3\) in the gas phase. Alternatively, the terminal amine groups of the monometallic dysprosium complex allow for the binding of two \(Ni^{2+}(tren)\) complexes. ESI-MS studies showed anionic as well as cationic complexes due to deprotonation or protonation in solution. CID studies led to fragmentation schemes which correlate quite well with the predicted structures of the complexes. These results are part of two publications in Inorganic Chemistry and Dalton Transactions. Further studies on this series of mono-, di- and trimetallic complexes are reported in this thesis. H/D exchange reactions with \(D_2O\) in solution yielded an exchange of all labile protons for the cationic complexes. Anionic complexes underwent a partial or a complete exchange of labile protons. A comparison of 1- and 2-color IR-MPD spectra of anionic and cationic complexes as well as of H/D-exchanged species allowed for the assignment of vibrational bands. Furthermore, preferred protonation sites were derived by comparing the results of IR-MPD experiments and H/D exchange reactions in solution and in the gas phase.
This study contributes to our ongoing work on the determination of magnetic properties of isolated ions in the gas phase at the Helmholtz-Zentrum Berlin.
The complex \([(^4CpCo)_2(\mu-C_2Ph_2)]\) (\(^4Cp\) = tetraisopropyl-cyclopentadiene) was synthesized by J. Becker (research group H. Sitzmann, TU Kaiserslautern). The cationic complex and several reaction products were characterized by ESI-MS. Some of the experimental data contributed to the diploma thesis of J. Becker. The cationic reaction products and the complex itself were subject to IR spectroscopic characterization. The IR-MPD efficiency changed crucially with modification of the complex, yielding \([(^4CpCo)_2(\mu-C_2Ph_2)X]^+ (X=H, (H+CH_3CN), Cl, O)\). The contribution of the various fragmentation channels to the overall fragmentation efficiency was studied in detail. An increase in photon flux resulted in a saturation of the preferred \(C_2Ph_2\) loss, with additional alkyl fragments arising from the \(^4Cp\) rings. Several absorption bands were found in the mid- and near-IR region. A model system from the literature was used to identify seemingly suitable levels of DFT theory by reference to X-ray crystal structure data. The B3LYP and B97D functionals with cc-pVDZ and Stuttgart 1997 ECP basis sets were selected for calculations on the complex \([(^4CpCo)_2(\mu-C_2Ph_2)]^+\) and its reaction products. An elongation of the Co-Co bond distance was observed for the cationic reaction products with \(Cl^-\) and \(O^{2-}\). Calculations with B3LYP and B97D resulted in different electronic ground states. We did not obtain good agreement between the calculated vibrational modes and the recorded IR-MPD spectra. DFT predicted more absorption bands than observed, especially those corresponding to aliphatic symmetric \(CH_n (n=2, 3)\) and aromatic CH stretch motions. Future 2-color IR-MPD experiments might resolve the currently prevailing discrepancies. TD-DFT calculations yielded several electronic transitions that do not correspond to the IR-MPD spectra. The chosen levels of theory for the DFT and TD-DFT calculations do not seem to be appropriate.
The IR-MPD spectra have to be remeasured so that they can be normalized to the photon flux. Furthermore, a different strategy has to be developed for ab initio calculations on the complexes under study.
A combination of various methods applied to isolated ions in the gas phase and in solution allowed for the study of their structure, binding energies and reactivity. 1- and 2-color IR-MPD spectroscopy combined with DFT-predicted absorption spectra of different isomers enabled an assignment of vibrational bands and binding motifs of the molecules. The derived results are important for further studies on the binding behavior of peptides and the reaction behavior of metal complexes.
This work shall provide a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised, which includes principles for the modeling, analysis, design, and realization of low cost but high performance and intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized taking into consideration the limited communication, computing, memory, and energy resources of the low performance, low power, and low cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity) and which are based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs) are investigated. The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. Obtained results show the advantages of the proposed cross-design approach compared to standard approaches which are less flexible.
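The genetic-algorithm side of the optimization approaches mentioned above can be illustrated with a minimal sketch. The cost models below (`energy`, `control_cost`) and the two parameters (transmit power, sampling period) are hypothetical stand-ins for the thesis' energy and control-performance predictions; only the penalty-based GA structure for constrained minimization is the point.

```python
import random

# Hypothetical cost models -- stand-ins for the actual energy and
# control-performance predictions, chosen only for illustration.
def energy(params):
    tx_power, sample_period = params
    return tx_power ** 2 / sample_period        # more power, faster sampling -> more energy

def control_cost(params):
    tx_power, sample_period = params
    return sample_period / tx_power             # slower sampling, weaker link -> worse control

def fitness(params, limit=2.0):
    # Penalize violations of the control-performance constraint.
    penalty = 1e3 * max(0.0, control_cost(params) - limit)
    return energy(params) + penalty

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = tuple(max(0.05, (x + y) / 2 + rng.gauss(0, 0.05))
                          for x, y in zip(a, b))  # crossover + mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

The penalty turns the constrained problem into an unconstrained one, so the same loop can also absorb, e.g., stability conditions checked via LMIs as feasibility penalties.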
This thesis combined gas phase mass spectrometric investigations of ionic transition metal clusters that are either homogeneous \((Nb_n^{+/-}, Co_n^{+/-})\) or heterogeneous \(([Co_nPt_m]^{+/-})\), of their organometallic reaction products, and of organic molecules (aspartame and Asp-Phe) and their alkali metal ion adducts. At the Paris FEL facility CLIO, a newly installed FT-ICR mass spectrometer was modified by inclusion of an ion bender that allows for the use of additional ion sources beyond the installed ESI source. The installation of an LVAP metal cluster source served to produce metal cluster adsorbate complex ions of the type \([Nb_n(C_6H_6)]^{+/-}\). IR-MPD of the complexes \([Nb_n(C_6H_6)]^{+/-} (n = 18, 19)\) resulted in \([Nb_n(C_6)]^{+/-} (n = 18, 19)\) fragments. The spectra are broad, possibly because of vibronic / electronic transitions. In Kaiserslautern, the capabilities of the LVAP source were extended by adding a gas pick-up unit. Complex gases containing C-H bonds tend to break apart within the cluster-forming plasma, whereas more stable gases like CO seem to attach at least partially intact. Metal cluster production with argon tagged onto the cluster failed when argon was introduced through the pick-up source, but succeeded when argon was used as the expansion gas. A new mass spectrometer concept with an additional multipole collision cell for metal cluster adsorbate formation is currently under construction; subsequent cooling shall enable high-resolution IR-MPD spectra of transition metal cluster adsorbate complexes. Prior work on reactions of transition metal clusters with benzene was extended by investigating the reactions with benzene and benzene-d6 of size-selected cationic cobalt clusters \(Co_n^+\) and anionic cobalt clusters \(Co_n^-\) in the size range \(n = 3 - 28\), and of bimetallic cobalt platinum clusters \([Co_nPt_m]^{+/-}\) in the size range \(n + m \le 8\).
Dehydrogenation by cationic cobalt clusters \(Co_n^+\) is sparse, whereas it is effective in small bimetallic clusters \([Co_nPt_m]^+ (n + m \le 3)\). Thus single platinum atoms promote benzene dehydrogenation while further cobalt atoms quench it. Dehydrogenation is ubiquitous in reactions of anionic cobalt clusters. Mixed triatomic clusters \([Co_2Pt_1]^-\) and \([Co_1Pt_2]^-\) are special in causing effective reactions and single dehydrogenation through some kind of cooperativity, while \([Co_nPt_{1,2}]^- (n \ge 3)\) do not react at all. Kinetic isotope effects KIE(n) in the total reaction rates are inverse and in part large, whereas dehydrogenation isotope effects DIE(n) are normal. A multistep model of adsorption and stepwise dehydrogenation from the precursor adsorbate proves suitable to rationalize the observed KIEs and DIEs in principle. Particular insights into the effects of charge and of cluster size are largely beyond this model. Some DFT calculations, though preliminary, lend strong support to the otherwise assumed structures and enthalpies. More insight into the causes of the observed effects of charge, size and composition of both pure and mixed clusters shall arise from ongoing high-level ab initio modeling (of especially the \(n + m = 3\) case for mixed clusters). The influence of the methyl ester group in the molecules aspartame (Asp-PheOMe) and Asp-Phe was explored. To this end, their protonated and deprotonated species and their complexes with alkali metal ions were investigated with different mass spectrometric techniques. Gas-phase H/D exchange with \(ND_3\) proved that in both molecules all acidic NH and OH binding motifs exchange their hydrogen atom and that simultaneous multiple exchange occurs. Kinetic studies revealed that with alkali metal ions attached, the speed of the first exchange step decreases with increasing ion size.
The additional OH of the carboxylic COOHPhe group in Asp-Phe increases the exchange speed by a constant value. CID experiments yielded water and the protonated Asp-Phe anhydride as main fragments from the protonated molecules, neutral Asp anhydride and \([Phe M]^+ / [PheOMe M]^+\) for attached \(Li^+\) and \(Na^+\), and neutral aspartame / Asp-Phe and ionic \(M^+\) for attached \(K^+\), \(Rb^+\) and \(Cs^+\). The threshold energy \(E_{CID}\), indicating ion stability, decreases with increasing ion size. For aspartame, fragmentation occurs at lower \(E_{CID}\) values for complexes with \(H^+\), \(Li^+\) and \(Na^+\) than for the Asp-Phe analogues. Complexes with \(K^+\), \(Rb^+\) and \(Cs^+\) give the same \(E_{CID}\) value for aspartame and Asp-Phe. IR-MPD investigations led to the same fragments as the CID experiments. In combination with quantum mechanical calculations, a change in the preferred structure from a charge-solvated, tridentate type for complexes with small alkali metal ions (\(Li^+\)) to a salt-bridge type structure for large alkali metal ions (\(Cs^+\)) could be confirmed. The calculations reveal nearly no structural differences between aspartame and Asp-Phe for the cationized species. Deprotonation of the additional COOHPhe group in Asp-Phe is preferred over other acidic positions. A better experimental distinction between the possible (calculated) structure types would arise from additional FEL IR-MPD measurements in the energy range of 600 to 1800 \(cm^{-1}\). The comparison of the \(E_{CID}\) values with calculated fragmentation energies shows that not only for the alkali metal complexes with \(K^+\), \(Rb^+\) and \(Cs^+\), but also for \(Li^+\) and \(Na^+\), the breaking of all metal atom bonds is part of the transition state. The lower \(E_{CID}\) values for aspartame with small cations may be explained in terms of internal energy: aspartame is the larger molecule, possesses more internal energy and can be regarded as the larger heat bath. Less energy is needed for fragmentation if the Phe part with the additional methyl ester group is involved in the fragmentation process.
This thesis is concerned with tropical moduli spaces, which are an important tool in tropical enumerative geometry. The main result is a construction of tropical moduli spaces of rational tropical covers of smooth tropical curves and of tropical lines in smooth tropical surfaces. The construction of a moduli space of tropical curves in a smooth tropical variety is reduced to the case of smooth fans. Furthermore, we point out relations to intersection theory on suitable moduli spaces of algebraic curves.
This thesis is concerned with a phase field model for brittle fracture.
The high potential of phase field modeling in computational fracture mechanics lies in the generality of the approach and the straightforward numerical implementation, combined with a good accuracy of the results in the sense of continuum fracture mechanics.
However, despite the convenient numerical application of phase field fracture models, a detailed understanding of the physical properties is crucial for a correct interpretation of the numerical results. Therefore, the driving mechanisms of crack propagation and nucleation in the proposed phase field fracture model are explored by a thorough numerical and analytical investigation in this work.
The main purpose of this study was to improve the physical modelling of compressed materials, especially fibrous materials. Fibrous materials are finding increasing industrial application, and most of these materials are compressed for their respective applications. In such situations we are interested in how the fibres are arranged, e.g. according to which distribution. For a given material it is possible to obtain a three-dimensional image via micro computed tomography. Since some physical parameters, e.g. the fibre lengths or the local fibre directions, can be estimated from the image by other methods, it is beneficial to improve the physical modelling by adjusting these parameters on the basis of the image.
In this thesis, we present a new maximum-likelihood approach for estimating the parameters of a parametric distribution on the unit sphere which generalizes several well-known distributions, e.g. the von Mises-Fisher distribution and the Watson distribution, and provides a better fit for some models. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an Expectation-Maximization setting. Furthermore, we introduce a non-parametric variant of the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
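As an illustration of maximum-likelihood fitting of a directional distribution, the sketch below fits a von Mises-Fisher distribution, one of the special cases mentioned above, using the standard closed-form approximation for the concentration parameter. It is a minimal stand-in, not the thesis' more general estimator; the synthetic data and all names are chosen for illustration only.

```python
import numpy as np

def fit_vmf(samples):
    """Approximate MLE for a von Mises-Fisher distribution on the unit
    sphere (Banerjee-style approximation for the concentration kappa)."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    resultant = x.sum(axis=0)
    r_norm = np.linalg.norm(resultant)
    mu = resultant / r_norm                 # MLE of the mean direction
    r_bar = r_norm / n                      # mean resultant length in [0, 1]
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa

# Toy data: unit vectors clustered around the north pole of S^2.
rng = np.random.default_rng(0)
raw = rng.normal([0.0, 0.0, 5.0], 1.0, size=(2000, 3))
data = raw / np.linalg.norm(raw, axis=1, keepdims=True)
mu, kappa = fit_vmf(data)
```

For mixtures, this fit would sit inside the M-step of an EM loop, with responsibilities weighting each sample's contribution to the resultant vector.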
In recent years, recommender systems have been widely used for a variety of different kinds of items such as books, movies, and music. However, current recommendation approaches have often been criticized for suffering from overspecialization, thus not sufficiently considering a user's diverse topics of interest. In this thesis we present a novel approach to extracting contextualized user profiles which enable recommendations that take into account a user's full range of interests. The method applies algorithms from the domain of topic detection and tracking to automatically identify diverse user interests and to represent them with descriptive labels. That way, manual annotations of interest topics by the users, e.g. from a predefined domain taxonomy, are no longer required. The approach has been tested in two scenarios. First, we implemented a content-based recommender system for an Enterprise 2.0 resource sharing platform where the contextualized user interest profiles have been used to generate recommendations with a high degree of inter-topic diversity. In an effort to harness the collective intelligence of the users, the resources in the system were described by making use of user-generated metadata. The evaluation experiments show that our approach is likely to capture a multitude of diverse interest topics per user. The labels extracted are specific to these topics and can be used to retrieve relevant on-topic resources. Second, a slightly adapted variation of the algorithm has been used to target music recommendations based on the user's current mood. In this scenario music artists are described using freely available Semantic Web data from the Linked Open Data cloud, thus not requiring expensive metadata annotations by experts. The evaluation experiments conducted show that many users have a multitude of different preferred music styles. However, a correlation between these music styles and music mood categories could not be observed.
An integration of our proposed user profiles with existing user model ontologies seems promising for enabling context-sensitive recommendations.
Data integration aims at providing uniform access to heterogeneous data managed by distributed source systems. Data sources can range from legacy systems, databases, and enterprise applications to web-scale data management systems. The materialized approach to data integration extracts data from the sources, transforms and consolidates the data, and loads it into an integration system, where it is persistently stored and can be queried and analyzed.
To support materialized data integration, so-called Extract-Transform-Load (ETL) systems have been built and are widely used to populate data warehouses today. While ETL is considered state of the art in enterprise data warehousing, a new paradigm known as MapReduce has recently gained popularity for web-scale data transformations, such as web indexing or PageRank computation.
The input data of both ETL and MapReduce programs keeps changing over time, while business transactions are processed or the web is crawled, for instance. Hence, the results of ETL and MapReduce programs become stale and need to be recomputed from time to time. Recurrent computations over changing input data can be performed in two ways: the result may either be recomputed from scratch or recomputed in an incremental fashion. The idea behind the latter approach is to update the existing result in response to incremental changes in the input data. This is typically more efficient than full recomputation, because reprocessing unchanged portions of the input data can often be avoided.
Incremental recomputation techniques have been studied by the database research community mainly in the context of the maintenance of materialized views and have been adopted by all major commercial database systems today. However, neither today's ETL tools nor MapReduce support incremental recomputation techniques. The situation of ETL and MapReduce programmers today is thus quite comparable to the situation of database programmers in the early 1990s. This thesis makes an effort to transfer incremental recomputation techniques into the ETL and MapReduce environments. This poses interesting research challenges, because these environments differ fundamentally from the relational world with regard to query and programming models, change data capture, transactional guarantees and consistency models. However, as this thesis will show, incremental recomputations are feasible in ETL and MapReduce and may lead to considerable efficiency improvements.
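The contrast between the two recomputation strategies can be sketched on a toy grouped-sum "view": because sums are self-maintainable, an incremental update only has to touch the keys affected by the delta, while the full recomputation rescans everything. This is a minimal illustration of the principle, not any particular ETL or MapReduce system's API.

```python
from collections import defaultdict

def full_recompute(rows):
    """Recompute a grouped sum from scratch over all input rows."""
    result = defaultdict(int)
    for key, value in rows:
        result[key] += value
    return dict(result)

def apply_delta(result, inserts, deletes):
    """Incrementally maintain the grouped sum: touch only changed keys."""
    out = dict(result)
    for key, value in inserts:
        out[key] = out.get(key, 0) + value
    for key, value in deletes:
        out[key] = out.get(key, 0) - value
        if out[key] == 0:
            del out[key]            # drop groups that became empty
    return out

base = [("a", 3), ("b", 5), ("a", 2)]
state = full_recompute(base)        # materialized result: {'a': 5, 'b': 5}
inserts = [("b", 1), ("c", 4)]
deletes = [("a", 2)]
incremental = apply_delta(state, inserts, deletes)
reference = full_recompute(base + inserts + [(k, -v) for k, v in deletes])
```

The efficiency argument is visible even here: `apply_delta` does work proportional to the delta, `full_recompute` to the whole input.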
Hydrogels are covalently or ionically cross-linked, hydrophilic three-dimensional polymer networks, which exist in our bodies in the form of biological gels such as the vitreous humour that fills the interior of the eyes. Poly(N-isopropylacrylamide) (poly(NIPAAm)) hydrogels are attracting growing interest in biomedical applications because, among other reasons, they exhibit a well-defined lower critical solution temperature (LCST) in water, around 31–34 °C, which is close to body temperature. This is considered to be of great interest in drug delivery, cell encapsulation, and tissue engineering applications. In this work, the poly(NIPAAm) hydrogel was synthesized by free radical polymerization. The hydrogel properties and the dimensional changes accompanying the volume phase transition of the thermosensitive poly(NIPAAm) hydrogel were investigated in terms of Raman spectra, swelling ratio, and hydration. The thermal swelling/deswelling changes that occur at different equilibrium temperatures and in different solutions (phenol, ethanol, propanol, and sodium chloride) were investigated on the basis of the Raman spectra. In addition, Raman spectroscopy was employed to evaluate the diffusion behavior of bovine serum albumin (BSA) and phenol through the poly(NIPAAm) network. The mutual diffusion coefficient \(D_{mut}\) for the hydrogel/solvent system was determined successfully using Raman spectroscopy at different solute concentrations. Moreover, the mechanical properties of the hydrogel, investigated by uniaxial compression tests, were used to characterize the hydrogel and to determine the collective diffusion coefficient through the hydrogel. The solute release coupled with shrinking of the hydrogel particles was modelled with a two-dimensional diffusion model with moving boundary conditions. Accounting for a variable diffusion coefficient leads to a better description of the kinetic curve in the case of significant deformation around the LCST. Good agreement between experimental and calculated data was obtained.
Palladium-Catalyzed C–C Bond Formations via Activation of Carboxylic Acids and Their Derivatives
(2013)
Applications of carboxylic acids and their derivatives in transition metal-catalyzed cross-coupling reactions regioselectively forming Csp3-Csp2 and Csp2-Csp2 bonds were explored in this thesis. Several important organic building blocks such as aryl acetates, diaryl acetates, imines, ketones, biaryls, styrenes and polysubstituted alkenes were successfully accessed from carboxylic acids and their derivatives by means of C–H activation and decarboxylative cross-couplings.
An efficient and practical protocol for the synthesis of biologically important ethyl 2-arylacetates through the dealkoxycarbonylative cross-coupling reaction between aryl halides and malonates was developed. Activation of the alpha-proton of alkyl esters by a copper catalyst allowed the deprotonation of esters even in the presence of mild bases, leading to a straightforward and efficient approach to alkyl alpha-diarylacetates from simple alkyl acetates and aryl halides.
The addition of a primary amine into the coupling reaction of alpha-oxocarboxylic acids and aryl halides led to an unprecedented low-temperature redox-neutral decarboxylative coupling process, providing a green and efficient method for the preparation of azomethines, in which all three substituents can be independently varied. A minor modification of this protocol allowed us to easily access the corresponding ketones.
The decarboxylative coupling of robust aryl mesylates as well as polysubstituted alkenyl mesylates using our customized imidazolyl phosphine ligands was realized, further expanding the scope of carbon electrophiles in decarboxylative coupling reactions. Variation of the ligands led to two complementary protocols, providing the corresponding biaryls and polysubstituted olefins in high yields.
The use of a new class of pyrimidinyl phosphine ligands dramatically reduced the reaction temperatures of decarboxylative cross-coupling reactions between aromatic carboxylic acids and aryl or alkenyl triflates. The new catalyst system for the first time allowed efficient decarboxylative biaryl synthesis at only 100 °C, representing a significant achievement in redox-neutral decarboxylative coupling reactions.
There is growing international concern about the need to rethink the university so that it might remain relevant in modern society. In the traditional task division at universities, knowledge is the main resource. Universities make use of both the cognitive and the informational approach, and each approach is expected to improve overall university performance. To use the informational approach effectively, universities should apply tools from knowledge management. To use the cognitive approach effectively, universities must update their teaching-learning strategies to incorporate some of the recent advances in neuroscience and the biology of knowledge, specifically from neurobiology and autopoiesis. Within this frame, the main contribution of this work is the result of merging pedagogy and biology towards an ideal future university. This goal was achieved through an exploratory study conducted to identify opportunities and difficulties in improving the teaching-learning process for the future of higher education in Honduras. The Delphi study was used as a predictive method. Nineteen Honduran experts participated in this study, and two rounds were necessary to achieve consensus.
The multi-disciplinary approach of this research addresses three different fields whose core element is knowledge. First, input from the present field of higher education is used to speak about the future. Second, input is taken from the biology of knowledge and its contributions from neurobiology and autopoiesis, which allow modifying and completing the already existing learning theories with a biological basis. Third, input is taken from the knowledge process, which is traditionally used as an organizational tool and is here translated to the individual level. The exploration shows that the experts are concerned about all the missions and responsibilities of universities, but they agree that changes should primarily take place in the teaching dimension. Even though they are not aware of the possible contributions of biology, they suggest new forms of teaching that favor skills development, values, pertinent knowledge, and personal development over short-term content. The resulting BRAIN Model encompasses the ideal future of higher education regarding teaching and learning, according to the experts' answers. It provides a useful guide that any teaching reform should take into account for a holistic, integral, and therefore more efficient learning task.
Fluid extraction is a typical chemical process where two types of fluids are mixed together. The high complexity of this process, which involves droplet coalescence, breakup, mass transfer, and counter-current flow, often makes design difficult. The industrial design of these processes is still based on expensive mini-plant and pilot plant experiments. Therefore, there is a strong need for research into the simulation of fluid-fluid interaction processes using computational fluid dynamics (CFD).
Previous multi-phase fluid simulations have focused on the development of models that couple mass and momentum using the Navier-Stokes equations. Recent population balance models (PBM) have proved to be important methods for analyzing droplet breakage and collisions. A combination of CFD and PBM facilitates the simulation of flow properties by solving coupled equations, and the calculation of droplet sizes and numbers. In our study, we successfully coupled an Euler-Euler CFD model with the breakup and coalescence models proposed by Luo and Svendsen (59).
The simulation output of extraction columns provides a mathematical understanding of how fluids are mixed inside a mixing device. This mixing process shows that the dispersed phase of a flow generates large blobs and bubbles. Current mathematical simulation results often fail to provide an intuitive representation of how well two different types of fluid interact, so intuitive and physically plausible visualization techniques are in high demand to help chemical engineers explore and analyze bubble column simulation data. In chapter 3, we present the visualization tools we developed for extraction column data.
Fluid interfaces and free surfaces are topics of growing interest in the field of multi-phase computational fluid dynamics. However, the analysis of the flow field relative to the material interface shape and topology is a challenging task. In chapter 5, we present a technique that facilitates the visualization and analysis of complex material interface behaviors over time. To achieve this, we track the surface parameterization of time-varying material interfaces and identify locations where there are interactions between the material interfaces and fluid particles. Splatting and surface visualization techniques produce an intuitive representation of the derived interface stability. Our results demonstrate that the interaction of a flow field with a material interface can be understood using appropriate extraction and visualization techniques, and that our techniques can help the analysis of mixing and material interface consistency.
In addition to texture-based methods for surface analysis, the interface of a two-phase fluid can be considered as an implicit function of the density or volume fraction values. High-level visualization techniques such as topology-based methods can reveal the hidden structure underlying simple simulation data, which will enhance and advance our understanding of multi-fluid simulation data. Recent feature-based visualization approaches have explored the possibility of using Reeb graphs to analyze scalar field topologies (19, 107). In chapter 6, we present a novel interpolation scheme for interpolating point-based volume fraction data and we further explore the implicit fluid interface using a topology-based method.
Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)
This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to index 3, which renders the time integration a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized second-order system. For the nonlinear model reduction to be efficient, we approximate the nonlinearity following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
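The proper orthogonal decomposition step can be sketched in a few lines: the reduced basis consists of the leading left singular vectors of a snapshot matrix, and full states are projected onto that basis. The data below are synthetic, rank-3 by construction; a real application would use displacement snapshots from the finite element simulation.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: the leading r left singular
    vectors of the snapshot matrix serve as the reduced basis."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r]

rng = np.random.default_rng(0)
# Synthetic snapshots with an exact rank-3 structure plus tiny noise.
modes = rng.normal(size=(200, 3))          # 200 "degrees of freedom"
coeffs = rng.normal(size=(3, 50))          # 50 snapshots in time
snapshots = modes @ coeffs + 1e-6 * rng.normal(size=(200, 50))

V = pod_basis(snapshots, 3)
reduced = V.T @ snapshots                  # project 200 DOFs down to 3
reconstructed = V @ reduced
error = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

The singular values `s` also indicate how many modes to keep: a sharp drop after the r-th value signals that a rank-r basis captures the dynamics.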
This thesis is separated into three main parts: Development of Gaussian and White Noise Analysis, Hamiltonian Path Integrals as White Noise Distributions, Numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis, Donsker's delta function plays a key role. We also investigate this generalized function in Chapter 2. Moreover, we show by giving a counterexample that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look to generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematical rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators which are related to applications of Feynman integrals, such as differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of so-called convolution operators. We generalize the central homomorphy theorem to regular generalized functions.
We generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives. With the help of this, we consider products of generalized functions in Chapter 5. We show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the square-integrable functions.
In Chapter 5 we discuss products of generalized functions, and the Wick formula is revisited. We investigate under which conditions and on which spaces the Wick formula can be generalized. At the end of the chapter we consider the product of Donsker's delta function with a generalized function with the help of a measure transformation. Here, issues such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the $T$-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we can show that the generalized expectation (generating functional at zero) gives the Greens function to the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we can show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
In Chapter 8 we give an outlook on how the scaling approach, which is successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether this expression solves the Schrödinger equation. At least for quadratic potentials we obtain the correct physics.
In the last chapter, we focus on the numerical analysis of polymer chains driven by fractional Brownian motion (fBm). Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve long-range dependence in the interactions of the monomers inside a polymer chain. Here a Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded volume effect into account.
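The correlation-matrix discretization mentioned above can be sketched as follows: sample fBm values on a uniform time grid via a Cholesky factorization of their covariance matrix. This is only an illustrative Python sketch (the function name and parameters are chosen here for illustration); the Metropolis updates and the excluded-volume interaction of the actual polymer algorithm are omitted.

```python
import numpy as np

def fbm_path(n_steps, hurst, dt=1.0, rng=None):
    """Sample one fractional Brownian motion path on a uniform grid by
    Cholesky factorization of the fBm covariance matrix."""
    rng = rng or np.random.default_rng(0)
    t = dt * np.arange(1, n_steps + 1)
    h2 = 2.0 * hurst
    # Cov(B_H(s), B_H(t)) = (s^(2H) + t^(2H) - |t - s|^(2H)) / 2
    cov = 0.5 * (t[:, None] ** h2 + t[None, :] ** h2
                 - np.abs(t[:, None] - t[None, :]) ** h2)
    cholesky = np.linalg.cholesky(cov)
    return cholesky @ rng.standard_normal(n_steps)
```

For Hurst parameter H > 1/2 the increments are positively correlated, which is what produces the long-range dependence along the chain; H = 1/2 recovers standard Brownian motion.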
Many real life problems have multiple spatial scales. In addition to the multiscale nature one has to take uncertainty into account. In this work we consider multiscale problems with stochastic coefficients.
We combine multiscale methods, e.g., mixed multiscale finite elements or homogenization, which are used for deterministic problems, with stochastic methods such as multi-level Monte Carlo or polynomial chaos methods.
The work is divided into three parts.
In the first two parts we study homogenization with different stochastic methods. To this end, we consider elliptic stationary diffusion equations with stochastic coefficients.
The last part is devoted to the study of mixed multiscale finite elements in combination with multi-level Monte Carlo methods, applied to multi-phase flow and transport equations.
The automatic analysis and retrieval of technical line drawings is hindered by many challenges, such as the large amount of contextual clutter around the symbols within the drawings, degradation, transformations of the symbols in drawings, large databases of drawings, and large alphabets of symbols. The core tasks required for the analysis of technical line drawings are symbol recognition, spotting and retrieval. Current systems for these tasks perform poorly due to these challenges. This dissertation
presents a number of methods that address these challenges. These methods achieve both
accurate and efficient symbol spotting and retrieval in technical line drawings, and perform
significantly better than state-of-the-art methods on the same problems. An overview of
the key contributions of this dissertation is given in the following.
First, this dissertation presents a geometric matching-based method for symbol recognition
and spotting. The method performs recognition in the presence of large amounts of contextual clutter, and provides precise localization of the recognized symbols. On standard
databases such as GREC-2005 and GREC-2011, the method achieves up to 10% higher
recall and up to 28% higher precision than state-of-the-art methods on the spotting task,
and achieves up to 7% higher recognition accuracy on the isolated recognition task. The
method is based on a geometric matching approach, which is flexible enough to incorporate
improvements on the matching strategy, feature types and information on the features. The
method also includes an adaptive preprocessing algorithm that deals with a wide variety
of noise types.
In order to improve the performance of the spotting method when dealing with degraded
drawings, two novel methods are presented in this dissertation. Both methods are based on
combining geometric matching with machine learning techniques. The geometric matching
is used to automatically generate training data that contain information on how well the
features of the queries are matched in both the true and the false matches found by the
spotting method. The first method learns the feature weights of the different query symbols
by linear discriminant analysis (LDA). The weighted query features are used in the spotting
method and result in 27% higher average precision than the original method, with a speedup
factor of 2. The second method uses SVM classification as a post-spotting step to distinguish
the true from the false matches in the spotting method. The use of the classification step
further improves the average precision of the spotting method by 20.6%.
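The LDA step described above can be illustrated with a small Fisher-discriminant sketch: given feature vectors collected from true and false matches, the discriminant direction serves as per-feature weights. The feature layout and normalization below are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def lda_feature_weights(X_true, X_false):
    """Fisher discriminant direction for two classes (true vs. false
    matches); its entries can serve as per-feature query weights."""
    mu_t, mu_f = X_true.mean(axis=0), X_false.mean(axis=0)
    # Within-class scatter; a small ridge term keeps the solve stable.
    S_w = np.cov(X_true.T) + np.cov(X_false.T)
    w = np.linalg.solve(S_w + 1e-6 * np.eye(S_w.shape[0]), mu_t - mu_f)
    return w / np.abs(w).sum()   # normalize the weight magnitudes to sum to 1
```

Features that separate true from false matches well receive large weights, while uninformative features are suppressed.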
This dissertation also presents methods for content analysis of line drawings. First, a
method for accurate and consistent detection (95.8%) of regions of interest (ROIs) is presented. The method is based on statistical feature grouping. The ROI-finding method is
identified as an important part of a symbol retrieval system: the better the detected ROIs, the higher the performance of a retrieval system. The ROI-finding method is also used to improve the performance of the geometric matching-based spotting system.
Second, a symbol clustering method for building a compact and accurate representation of
a large database of technical drawings is presented. This method uses the output from the
ROI-finding method as input, and uses geometric matching as a similarity measure. The
method achieves high accuracy (90.1% recall, 94.3% precision) in forming clusters of symbols. The representatives of the clusters (34 symbols) are used as key entries to a symbol
index, which is identified as the outcome of an off-line stage of a symbol retrieval system.
Finally, an efficient and high performing large scale symbol retrieval system is presented
in this dissertation. The system follows the bag of visual words (BoVW) model, but uses methods suited to line drawings. The system uses the symbol index to
represent a database of drawings. During the on-line query retrieval stage, the query is
analyzed by the ROI-finding method, matched with the key entries of the symbol index via
geometric matching, and finally, a spatial verification step is performed on the retrieved
matches. The system achieves a query lookup time that is independent of the size of the
database, and is instead dependent on the size of the symbol index. The system achieves up
to 10% higher recall and up to 28% higher precision than state-of-the-art spotting systems
on similar databases.
Overall, these contributions are major advancements in graphics recognition research. The hope is that such contributions will provide the basis for the development of reliable and accurate applications for browsing, querying and classifying line drawings, for the benefit of end users.
In this study, two outstanding subgroups of organic-inorganic hybrid materials have been investigated. The first part covers the design, synthesis, characterization and application of seven novel Metal Organic Frameworks (MOFs) containing functionalized biphenyl dicarboxylates as linkers. In the second part, the surface modification of the metal oxides ZrO2, TiO2 and Al2O3 using phosphonate derivatives is reported.
Firstly, three functionalized MOF structures, ZnBrBPDC, ZnNO2BPDC and ZnNH2BPDC, were synthesised using 4,4'-biphenyldicarboxylic acid derivatives with different functional groups (-Br, -NO2, -NH2). Powder X-ray diffraction (PXRD) measurements indicated that the synthesised MOFs possess the interpenetrated IRMOF-9 structure with a cubic topology, which was also confirmed by single crystal X-ray measurements. The chemical structure of the MOF materials was further proved by solid state NMR and IR measurements. N2 adsorption measurements showed Type I isotherms for all three structures with large surface areas. TGA measurements of the evacuated samples were in good agreement with the elemental analysis data. The results proved that their thermal stability lies between 325 °C and 450 °C.
Adsorption properties of these MOF structures were tested using light alkanes (CH4, C2H6, C3H8, and n-C4H10) at three different temperatures. For all adsorbents, the maximum uptakes were observed at 273 K. When the temperature was increased, the amount of the adsorbed gas decreased. All three MOFs showed strong affinities for n-butane. The lowest uptakes were observed for CH4.
The effect of functional groups on the IRMOF series was also examined by synthesizing amide functionalized biphenyl linkers. For this purpose, four different linkers containing amides with different alkyl chains (C1-C4) were synthesized and used for the synthesis of four new MOF structures ZnAcBPDC, ZnPrBPDC, ZnBuBPDC and ZnPeBPDC.
PXRD measurements of ZnAcBPDC indicated that the structure contains two different phases. PXRD patterns of ZnPrBPDC, ZnBuBPDC and ZnPeBPDC revealed non-interpenetrated structures, which were further proved by single crystal X-ray measurements. The chemical structure of the MOF materials was further confirmed by X-ray spectroscopy, solid state NMR and IR measurements.
N2 adsorption measurements of the MOF structures were carried out using different activation methods. For all four MOFs, Type I isotherms were obtained. ZnAcBPDC showed the highest BET surface area. ZnAcBPDC and ZnBuBPDC were tested for their alkane, alkene and CO2 adsorption capacities.
In the second part of the work, the surface modification of three different metal oxides, ZrO2, TiO2 and Al2O3, was performed. For this purpose, three different fluorescent phosphonate derivatives containing thiophene units were first synthesized from their halo derivatives in a four-step synthesis and then used as coupling molecules for the surface modification. Nine different surfaces were obtained (38@TiO2, 39@TiO2, 40@TiO2, 38@Al2O3, 39@Al2O3, 40@Al2O3, 38@ZrO2, 39@ZrO2, 40@ZrO2).
All three modified metal oxide surfaces were characterized using elemental analysis, solid state NMR and IR spectroscopy. The BET surface areas of the materials were determined by N2 adsorption measurements. TGA was used to determine the stability of the surfaces. Maximum loadings were obtained for ZrO2 surfaces.
Due to the strong luminescence of the coupling molecules, the modified surfaces were checked for their light emission. All ZrO2 and Al2O3 surfaces showed fluorescence, with the exception of 40@Al2O3. For the modified TiO2 surfaces, on the other hand, no fluorescence could be observed.
I report on two experiments, which were designed to test theoretical predictions about individual behavior in a duopolistic setting. With quantity being the choice variable, a simultaneous Cournot game and a sequential Stackelberg game were tested over two periods. The key feature of both models was that players were able to lower marginal cost for period two if they successfully outperformed their competition in period one in terms of profit. Experimental results suggest that in the Cournot game players are very competitive in period one but become Cournot players in period two. In the Stackelberg game Cournot play is modal, suggesting that players have preferences for equality in payoffs, which may be brought about by punishment of Stackelberg followers and fear of punishment of Stackelberg leaders. Overall, players earned more money in the Stackelberg game than in the Cournot game.
Generic layout analysis, the process of decomposing a document image into homogeneous regions for a collection of diverse document images, has many important applications in document image analysis and understanding, such as preprocessing of degraded warped, camera-captured document images, high performance layout analysis of document images containing complex cursive scripts, and word spotting in historical document images at page level. Many tasks in this field, such as generic text line extraction, have so far remained elusive goals, still beyond the reach of the state-of-the-art methods [NJ07, LSZT07, KB06]. This thesis addresses this problem by presenting generic, domain-independent text line extraction and text and non-text segmentation methods, and then describes some important applications that were developed based on these methods. An overview of the key contributions of this thesis is as follows.
The first part of this thesis presents a generic text line extraction method using a combination of matched filtering and ridge detection techniques, which are commonly used in computer vision. Unlike the state-of-the-art text line extraction methods in the literature, the generic text line extraction method can be equally and robustly applied to a large variety of document image classes including scanned and camera-captured documents, binary and grayscale documents, typed-text and handwritten documents, historical and contemporary documents, and documents containing different scripts. Different standard datasets are selected for performance evaluation that belong to different categories of document images such as the UW-III [GHHP97] dataset of scanned documents, the ICDAR 2007 [GAS07] and the UMD [LZDJ08] datasets of handwritten documents, the DFKI-I [SB07] dataset of camera-captured documents, Arabic/Urdu script documents dataset, and German calligraphic (Fraktur) script historical documents dataset. The generic text line extraction method achieves 86% (n = 23,763 text lines in 650 documents) text line detection accuracy which is better than the aggregate accuracy of 73% of the best performing domain-specific state-of-the-art methods. To the best of the author's knowledge, it is the first general-purpose text line extraction method that can be equally used for a diverse collection of documents.
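The combination of matched filtering and ridge detection can be pictured with a minimal sketch: smooth the page with an anisotropic Gaussian so that each dark text line becomes a horizontal valley, then mark rows that are local intensity minima along the vertical direction. The filter scales and the simple minima test below are illustrative assumptions; the actual method is considerably more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def text_line_ridges(gray, sigma_x=8.0, sigma_y=2.0):
    """Simplified matched filtering + ridge detection: smooth with an
    anisotropic Gaussian (wide in x, narrow in y) so each dark text line
    becomes a horizontal valley, then keep pixels that are local intensity
    minima along the vertical direction."""
    smoothed = gaussian_filter(np.asarray(gray, dtype=float),
                               sigma=(sigma_y, sigma_x))
    interior = smoothed[1:-1, :]
    is_min = (interior < smoothed[:-2, :]) & (interior < smoothed[2:, :])
    mask = np.zeros(smoothed.shape, dtype=bool)
    mask[1:-1, :] = is_min
    return mask
```

Because the smoothing scale, not a binarization threshold, drives the detection, the same idea carries over to grayscale, camera-captured and handwritten pages.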
This thesis also presents an active contour (snake) based curled text line extraction method for warped, camera-captured document images. The presented approach is applied to DFKI-I [SB07] dataset of camera-captured, Latin script document images for curled text line extraction. It achieves above 95% (n = 3,091 text lines in 102 documents) text line detection accuracy, which is significantly better than the competing state-of-the-art curled text line extraction methods. The presented text line extraction method can also be applied to document images containing different scripts like Chinese, Devanagari, and Arabic after small modifications.
The second part of this thesis presents an improved version of the state-of-the-art multiresolution morphology (Leptonica) based text and non-text segmentation method [Blo91], which is a domain-independent page segmentation approach and can be equally applied to a diverse collection of binarized document images. It is demonstrated that the presented improvements result in an increase in segmentation accuracy from 93% to 99% (n = 113 documents).
This thesis also introduces a discriminative learning based approach for page segmentation, where a self-tunable multi-layer perceptron (MLP) classifier [BS10] is trained for distinguishing between text and non-text connected components. Unlike other classification based page segmentation approaches in the literature, the connected components based discriminative learning approach is faster than pixel based classification methods and does not require a prior block segmentation method. A segmentation accuracy of 96% (n = 113 documents) is achieved in comparison to the state-of-the-art multiresolution morphology (Leptonica) based page segmentation method [Blo91] that achieves a segmentation accuracy of 93%. In addition to text and non-text segmentation of Latin script documents, the presented approach can also be adapted for document images containing other scripts as well as for other specialized layout analysis tasks such as digit and non-digit segmentation [HBSB12], orientation detection [RBSB09], and body-text and side-note segmentation [BAESB12].
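A toy version of such a connected-component classifier can be sketched with simulated per-component features and a small MLP. The features (height, width, aspect ratio, ink density) and their distributions are assumptions for illustration; the real system extracts its features from actual connected components.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
# Simulated "text" components: small boxes with moderate ink density.
text = np.column_stack([rng.normal(20, 4, n), rng.normal(14, 4, n),
                        rng.normal(0.7, 0.1, n), rng.normal(0.35, 0.05, n)])
# Simulated "non-text" components (graphics): large, sparse boxes.
nontext = np.column_stack([rng.normal(120, 30, n), rng.normal(150, 40, n),
                           rng.normal(1.0, 0.3, n), rng.normal(0.10, 0.04, n)])
X = np.vstack([text, nontext])
y = np.array([1] * n + [0] * n)
# Standardize the features, then train a small MLP to separate the classes.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(Xs, y)
```

Classifying components rather than pixels keeps the number of classifier evaluations per page small, which is where the reported speed advantage comes from.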
Finally, this thesis presents important applications of the two generic layout analysis techniques, ridge-based text line extraction method and the multi-resolution morphology based text and non-text segmentation method, discussed above. First, a complete preprocessing pipeline is described for removing different types of degradations from grayscale warped, camera-captured document images that includes removal of grayscale degradations such as non-uniform shadows and blurring through binarization, noise cleanup applying page frame detection, and document rectification using monocular dewarping. Each of these preprocessing steps shows significant improvement in comparison to the analyzed state-of-the-art methods in the literature. Second, a high performance layout analysis method is described for complex Arabic script document images written in different languages such as Arabic, Urdu, and Persian and different styles for example Naskh and Nastaliq. The presented layout analysis system is robust against different types of document image degradations and shows better performance for text and non-text segmentation, text line extraction, and reading order determination on a variety of Arabic and Urdu document images as compared to the state-of-the-art methods. It can be used for large scale Arabic and Urdu documents' digitization processes. These applications demonstrate that the layout analysis methods, ridge-based text line extraction and the multi-resolution morphology based text and non-text segmentation, are generic and can be applied easily to a large collection of diverse document images.
Changing environmental conditions, such as temperature shifts or access to nutrients, require dedicated genetic adaptation programs, especially in sessile organisms like plants. One such highly conserved mechanism, which among other things protects against temperature spikes, is the heat shock response (HSR) controlled by heat shock factors (HSFs). During the HSR, specific heat stress proteins (HSPs, chaperones) are produced in increased amounts and protect proteins from denaturation. In plants, a highly complex regulatory network consisting of more than 20 HSFs has evolved, which allows the HSR to be precisely fine-tuned to the respective stress conditions.
This high degree of complexity, however, makes the plant HSR considerably harder to study. To understand the basic principles of the HSR in plants, we therefore turned to a simpler model organism that is closely related to plants but contains only a single HSF (HSF1): the unicellular green alga Chlamydomonas reinhardtii. In this work, three approaches were pursued.
First, various chemical compounds that inhibit different steps in the activation and deactivation of the HSR were used to elucidate its regulation. It was found that phosphorylation of HSF1 plays a decisive role in activating the HSR, that the triggering event is the accumulation of misfolded proteins, and that cytosolic HSP90A plays an important modulating role in the HSR.
Second, changes in all transcripts were measured using microarrays, primarily to identify plant-specific processes that must be specifically adjusted to elevated temperatures. Chlorophyll biosynthesis and protein transport into the chloroplast were thereby identified as new, plant-specific targets of the stress response. Furthermore, it could be shown directly that HSF1 also regulates plastid chaperones, in contrast to mitochondrial chaperones, which are controlled separately.
Finally, the expression of genes important for the stress response (HSF1/HSP70B) was specifically suppressed in order to study their influence on the HSR in more detail. For this purpose, I developed a system novel to this unicellular green alga, based on the RNAi mechanism, that allows even essential genes to be specifically silenced depending on the nitrogen source in the growth medium. This system made it possible to show that HSF1 increases the expression of its own RNA during stress, and does so specifically in order to further amplify the stress response. It could further be shown that the chloroplast chaperone HSP70B is a protein essential for cell growth, which can now be studied in more detail with the inducible RNAi system. The HSP70B-mediated assembly and disassembly of the VIPP1 protein was found to be decisive for its function in the cell. Furthermore, it was shown that HSP70B is probably responsible for the folding of one or more as yet unknown enzymes of arginine biosynthesis or nitrogen fixation, and that these processes probably constitute the essential function of HSP70B.
The main topic of this thesis is to define and analyze a multilevel Monte Carlo algorithm for path-dependent functionals of the solution of a stochastic differential equation (SDE) which is driven by a square integrable, \(d_X\)-dimensional Lévy process \(X\). We work with standard Lipschitz assumptions and denote by \(Y=(Y_t)_{t\in[0,1]}\) the \(d_Y\)-dimensional strong solution of the SDE.
We investigate the computation of expectations \(S(f) = \mathrm{E}[f(Y)]\) using randomized algorithms \(\widehat S\). In particular, we are interested in the relation between the error and the computational cost of \(\widehat S\), where \(f:D[0,1] \to \mathbb{R}\) ranges over the class \(F\) of measurable functionals on the space of càdlàg functions on \([0,1]\) that are Lipschitz continuous with respect to the supremum norm.
We consider as error \(e(\widehat S)\) the worst case of the root mean square error over the class of functionals \(F\). The computational cost of an algorithm \(\widehat S\), denoted \(\mathrm{cost}(\widehat S)\), should represent the runtime of the algorithm on a computer. We work in the real number model of computation and further suppose that evaluations of \(f\) are possible for piecewise constant functions in time units according to its number of breakpoints.
We state strong error estimates for an approximate Euler scheme on a random time discretization. With these strong error estimates, the multilevel algorithm leads to upper bounds for the convergence order of the error with respect to the computational cost. The main results can be summarized in terms of the Blumenthal-Getoor index of the driving Lévy process, denoted by \(\beta\in[0,2]\). For \(\beta <1\) and no Brownian component present, we almost reach convergence order \(1/2\), which means that there exists a sequence of multilevel algorithms \((\widehat S_n)_{n\in \mathbb{N}}\) with \(\mathrm{cost}(\widehat S_n) \leq n\) such that \( e(\widehat S_n) \precsim n^{-1/2}\). Here, \( \precsim\) denotes a weak asymptotic upper bound, i.e. the inequality holds up to an unspecified positive constant. If \(X\) has a Brownian component, the order has an additional logarithmic term, in which case we reach \( e(\widehat S_n) \precsim n^{-1/2} \, (\log(n))^{3/2}\).
For the special subclass of \(Y\) being the Lévy process itself, we also provide a lower bound which, up to a logarithmic term, recovers the order \(1/2\); i.e., neglecting logarithmic terms, the multilevel algorithm is order optimal for \(\beta <1\).
An empirical error analysis via numerical experiments matches the theoretical results and completes the analysis.
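The multilevel idea can be illustrated by a generic sketch: couple Euler schemes on nested time grids and telescope the expectation over levels. The sketch below uses a plain geometric Brownian motion driver rather than a general Lévy process, and the drift, volatility, and sample-size schedule are illustrative assumptions.

```python
import numpy as np

def mlmc_estimate(payoff, levels, n_samples, y0=1.0, mu=0.05, sigma=0.2, rng=None):
    """Multilevel Monte Carlo estimate of E[payoff(Y_1)] for a geometric
    Brownian motion dY = mu*Y dt + sigma*Y dW via coupled Euler schemes:
    E[f(Y_L)] = E[f(Y_0)] + sum_{l=1}^{L} E[f(Y_l) - f(Y_{l-1})]."""
    rng = rng or np.random.default_rng(0)
    estimate = 0.0
    for level in range(levels + 1):
        n = n_samples[level]
        m_fine = 2 ** level
        dt_fine = 1.0 / m_fine
        dw = rng.standard_normal((n, m_fine)) * np.sqrt(dt_fine)
        y_fine = np.full(n, y0)
        for k in range(m_fine):
            y_fine = y_fine + mu * y_fine * dt_fine + sigma * y_fine * dw[:, k]
        if level == 0:
            estimate += payoff(y_fine).mean()
        else:
            # Coarse path reuses the same Brownian increments, pairwise summed,
            # so the level difference has small variance.
            y_coarse = np.full(n, y0)
            dt_coarse = 2.0 * dt_fine
            for k in range(0, m_fine, 2):
                y_coarse = (y_coarse + mu * y_coarse * dt_coarse
                            + sigma * y_coarse * (dw[:, k] + dw[:, k + 1]))
            estimate += (payoff(y_fine) - payoff(y_coarse)).mean()
    return estimate
```

Using many cheap samples on coarse levels and few expensive samples on fine levels is what lets the error decay at nearly the Monte Carlo rate per unit of cost.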
This thesis is devoted to furthering the tropical intersection theory as well as to applying the
developed theory to gain new insights about tropical moduli spaces.
We use piecewise polynomials to define tropical cocycles that generalise the notion of tropical Cartier divisors to higher codimensions, introduce an intersection product of cocycles with tropical cycles and use the connection to toric geometry to prove a Poincaré duality for certain cases. Our
main application of this Poincaré duality is the construction of intersection-theoretic fibres under a
large class of tropical morphisms.
We construct an intersection product of cycles on matroid varieties which are a natural
generalisation of tropicalisations of classical linear spaces and the local blocks of smooth tropical
varieties. The key ingredient is the ability to express a matroid variety contained in another matroid variety by a piecewise polynomial that is given in terms of the rank functions of the corresponding
matroids. In particular, this enables us to intersect cycles on the moduli spaces of n-marked abstract
rational curves. We also construct a pull-back of cycles along morphisms of smooth varieties, relate
pull-backs to tropical modifications and show that every cycle on a matroid variety is rationally
equivalent to its recession cycle and can be cut out by a cocycle.
Finally, we define families of smooth rational tropical curves over smooth varieties and construct a tropical fibre product in order to show that every morphism of a smooth variety to the moduli space of abstract rational tropical curves induces a family of curves over the domain of the morphism.
This leads to an alternative, inductive way of constructing moduli spaces of rational curves.
The safety of embedded systems is becoming more and more important nowadays. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of the systems. The Component Fault Tree (CFT) provides an advanced modeling concept for adapting the traditional FTs to the hierarchical architecture model in system design. Minimal Cut Set (MCS) analysis is a method that works for qualitative analysis based on the FTs. Each MCS represents a minimal combination of component failures of a system called basic events, which may together cause the top-level system failure. The ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically represented in summary views, e.g., data tables and histograms. There is little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved using an iterative process, called the safety improvement process, based on FTs taking relevant constraints into account, e.g., cost. Typically, relevant data regarding the safety improvement process are presented across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively facilitate these analyses.
We propose a set of visualization approaches to address the issues mentioned above and facilitate those analyses in terms of the representations.
Contribution:
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data of the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs for effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach helps identify vulnerable components, taking the hierarchies of the system architecture into account, and supports investigating the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the process, and facilitates constructing and analyzing solutions for improving the safety of a system.
Using our visualization approaches, the MCS analysis, the importance analysis, and the safety improvement process based on the CFT can be facilitated.
The scientific aim of this work was to synthesize and characterize new bidentate and tridentate phosphine ligands and their corresponding palladium complexes, and to examine their application as homogeneous catalysts. Subsequently, a part of the obtained palladium catalysts was immobilized and used as heterogeneous catalysts.
Pyrimidinyl functionalized diphenyl phosphine ligands were synthesized by ring closure of [2-(3-dimethylamino-1-oxoprop-2-en-yl)phenyl]diphenylphosphine with an excess of substituted guanidinium salts. Furthermore, to increase the electron density at the phosphorus centre, the two aryl substituents on the phosphanyl group were exchanged for two alkyl substituents. Electron-rich pyrimidinyl functionalized dialkyl phosphine ligands were synthesized from pyrimidinyl functionalized bromobenzene in a process involving lithiation followed by reaction with a chlorodialkylphosphine.
Starting from the newly synthesized diaryl phosphine ligands, their corresponding palladium complexes were synthesized. I was able to show that slight changes at the amino group of [(2-aminopyrimidin-4-yl)aryl]phosphines lead to pronounced differences in the stability and catalytic activity of the corresponding palladium(II) complexes. With a P,C coordination mode, the palladium complex rapidly catalyzes the Suzuki coupling reaction of phenylboronic acid with aryl bromides even at room temperature at low catalyst loading.
Using the NH2 group of the aminopyrimidine as a potential site for the introduction of another substituent, bidentate and tridentate ligands containing phosphorus atoms connected to the aminopyrimidine group, as well as their corresponding palladium complexes, were synthesized and characterized.
Two ligands, [2- and 4-(4-(2-amino)pyrimidinyl)phenyl]diphenylphosphine (containing an NH2 group), functionalized with an ethoxysilane group were synthesized. The palladium complexes based on these ligands were prepared and immobilized on commercial silica and MCM-41. Using elemental analysis, FT-IR, solid state 31P, 13C and 29Si CP-MAS NMR spectroscopy, XRD and N2 adsorption, the success of the immobilization was confirmed and the structure of the heterogenized catalysts was investigated.
The resulting heterogeneous catalysts were applied for the Suzuki reaction and exhibited excellent activity, selectivity and reusability.
Predicting secondary structures of RNA molecules is one of the fundamental problems of computational structural biology and thus a challenging task. Existing prediction methods basically use the dynamic programming principle and are either based on a general thermodynamic model or on a specific probabilistic model, traditionally realized by a stochastic context-free grammar. To date, the applied grammars were rather simple and small, and despite the fact that statistical approaches have become increasingly appreciated over the past years, a corresponding sampling algorithm based on a stochastic RNA structure model has not yet been devised. In addition, basically all popular state-of-the-art tools for computational structure prediction have the same worst-case time and space requirements of O(n^3) and O(n^2) for sequence length n, limiting their applicability for practical purposes due to the often quite large sizes of native RNA molecules. Accordingly, the prime demand imposed by biologists on computational prediction procedures is a reduced waiting time for results that are not significantly less accurate.
We address all of these issues here, describing algorithms and performing comprehensive studies based on sophisticated stochastic context-free grammars of similar complexity to those underlying thermodynamic prediction approaches, where all of our methods indeed make use of the concept of sampling. We also employ approximation techniques known from theoretical computer science in order to achieve a heuristic worst-case speedup for RNA folding.
Particularly, we start by describing a way for deriving a sequence-independent random sampler for an arbitrary class of RNAs by means of (weighted) unranking. The resulting algorithm may generate any secondary structure of a given fixed size n in only O(n·log(n)) time, where the results are observed to be accurate, validating its practical applicability.
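The unranking idea can be illustrated with a much simplified uniform variant (a sketch only: the thesis uses weighted unranking and reaches O(n·log(n)) per sample, which the naive recursion below does not): count all pseudoknot-free secondary structures of a given size with a minimum hairpin length, draw a random rank, and decode it recursively.

```python
import random

THETA = 3  # minimum number of unpaired bases enclosed by a pair (hairpin size)

def count_structures(n):
    """s[k] = number of pseudoknot-free secondary structures on k bases."""
    s = [1] * (n + 1)
    for k in range(THETA + 2, n + 1):
        # Base k is either unpaired, or paired with base j (1-based),
        # which requires k - j - 1 >= THETA enclosed bases.
        s[k] = s[k - 1] + sum(s[j - 1] * s[k - j - 1] for j in range(1, k - THETA))
    return s

def unrank(r, n, s, offset=0):
    """Decode rank r (0 <= r < s[n]) into a structure on bases offset+1..offset+n."""
    if n <= THETA + 1:
        return set()
    if r < s[n - 1]:                      # case 1: last base unpaired
        return unrank(r, n - 1, s, offset)
    r -= s[n - 1]
    for j in range(1, n - THETA):         # case 2: last base paired with j
        block = s[j - 1] * s[n - j - 1]
        if r < block:
            a, b = divmod(r, s[n - j - 1])
            return ({(offset + j, offset + n)}
                    | unrank(a, j - 1, s, offset)           # structure left of j
                    | unrank(b, n - j - 1, s, offset + j))  # enclosed structure
        r -= block
    raise ValueError("rank out of range")

def sample_structure(n):
    """Draw a uniformly random secondary structure of size n via unranking."""
    s = count_structures(n)
    return unrank(random.randrange(s[n]), n, s)
```

Since ranks partition the structure space, `unrank` is a bijection between `{0, ..., s[n]-1}` and the set of structures, so drawing a uniform rank yields a uniform structure; weighted unranking replaces the plain counts by weighted ones.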
With respect to RNA folding, we present a novel probabilistic sampling algorithm that generates statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method actually samples the possible foldings from a distribution implied by a suitable (traditional or length-dependent) grammar. Notably, we also propose several (new) ways for obtaining predictions from generated samples. Both variants have the same worst-case time and space complexities of O(n^3) and O(n^2) for sequence length n. Nevertheless, evaluations of our sampling methods show that they are actually capable of producing accurate (prediction) results.
In an attempt to resolve the long-standing problem of reducing the time complexity of RNA folding algorithms without sacrificing much of the accuracy of the results, we devised a heuristic statistical sampling method that can be implemented to require only O(n^2) time for generating a fixed-size sample of candidate structures for a given sequence of length n. Since a reasonable prediction can still efficiently be obtained from the generated sample set, this approach reduces the worst-case time complexity by a linear factor compared to all existing exact methods. Notably, we also propose a novel (heuristic) sampling strategy, as opposed to the one commonly applied for statistical sampling, which may produce more accurate results in particular settings. A validation of our heuristic sampling approach by comparison to several leading RNA secondary structure prediction tools indicates that it is capable of producing competitive predictions, but may require the consideration of large sample sizes.
Filtering, Approximation and Portfolio Optimization for Shot-Noise Models and the Heston Model
(2012)
We consider a continuous-time market model in which stock returns satisfy a stochastic differential equation with stochastic drift, e.g. following an Ornstein-Uhlenbeck process. The driving noise of the stock returns consists not only of Brownian motion but also of a jump part (shot noise or a compound Poisson process). The investor's objective is to maximize expected utility of terminal wealth under partial information, which means that the investor observes stock prices but does not observe the drift process. Since the drift of the stock prices is unobservable, it has to be estimated using filtering techniques. For example, if the drift follows an Ornstein-Uhlenbeck process and there is no jump part, Kalman filtering can be applied and optimal strategies can be computed explicitly. Also in other cases, such as for an underlying
Markov chain, finite-dimensional filters exist. But for certain jump processes (e.g. shot noise) or certain nonlinear drift dynamics, explicit computations based on discrete observations are no longer possible, or finite-dimensional filters cease to exist. The same computational difficulties apply to the optimal strategy, since it depends on the filter. In this case the model may be approximated by a model in which the filter is known and can be computed: e.g., we use statistical linearization for nonlinear drift processes, finite-state Markov chain approximations for the drift process, and/or diffusion approximations for small jumps in the noise term.
In the approximating models, filters and optimal strategies can often be computed explicitly. We analyze and compare different approximation methods, in particular in view of performance of the corresponding utility maximizing strategies.
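For the Ornstein-Uhlenbeck drift without jumps mentioned above, the filter is the classical Kalman filter. A minimal discretized sketch (parameter names and values are illustrative, not those of the thesis):

```python
import numpy as np

def kalman_drift_filter(returns, dt, kappa, theta, sig_mu, sig_s, m0, p0):
    """Scalar Kalman filter estimating an unobserved OU drift from stock returns.

    State:       mu_{k+1} = mu_k + kappa*(theta - mu_k)*dt + sig_mu*sqrt(dt)*w_k
    Observation: r_k      = mu_k*dt + sig_s*sqrt(dt)*v_k
    """
    a = 1.0 - kappa * dt          # state transition coefficient
    q = sig_mu ** 2 * dt          # state noise variance
    h = dt                        # observation coefficient
    r = sig_s ** 2 * dt           # observation noise variance
    m, p = m0, p0                 # filter mean and variance
    est = []
    for y in returns:
        # prediction step
        m = a * m + kappa * theta * dt
        p = a * a * p + q
        # update step with the observed return y
        k_gain = p * h / (h * p * h + r)
        m = m + k_gain * (y - h * m)
        p = (1.0 - k_gain * h) * p
        est.append(m)
    return np.array(est)
```

The filter mean then replaces the unobservable drift in the utility-maximizing strategy; in the jump or nonlinear cases this closed-form recursion is exactly what breaks down, motivating the approximations above.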
The discrete nature of the dispersed phase (a swarm of droplets) in stirred and pulsed liquid-liquid extraction columns makes the mathematical modelling of such a complex system a tedious task. The dispersed phase is considered as a population of droplets distributed randomly with respect to their internal properties (such as droplet size and solute concentration) at a specific location in space. Hence, the population balance equation has emerged as the natural mathematical framework to model and describe such complex behaviour. However, the resulting model is complicated: due to the inherent nonlinearities in the convective and diffusive terms, as well as the appearance of many integrals in the source term, an analytical solution does not exist except for particular cases. Therefore, numerical solutions are resorted to in general.
In part of this doctoral thesis, a rigorous mathematical model based on the bivariate population balance framework (the basis of LLECMOD, the "Liquid-Liquid Extraction Column Module") is developed for the steady-state and dynamic simulation of pulsed (sieve-plate and packed) liquid-liquid extraction columns. The model simulates the coupled hydrodynamics and mass transfer for these columns. It is programmed in Visual Digital FORTRAN and integrated into the LLECMOD program, within which the user can simulate different types of extraction columns, including stirred and pulsed ones. LLECMOD rests on stable, robust numerical algorithms based on an extended version of the fixed-pivot technique of Attarakih et al., 2003 (extended to take interphase solute transfer into account) and on advanced computational fluid dynamics numerical methods. Experimentally validated correlations, based on single-droplet and droplet-swarm experiments in laboratory-scale devices, are used for the estimation of the droplet terminal velocity in extraction columns. Additionally, recently published correlations for the turbulent energy dissipation and the droplet breakage and coalescence frequencies, as used in this version of LLECMOD, are discussed. Moreover, a coalescence model from the literature, derived from a stochastic description, has been modified to fit the deterministic population model. As a case study, LLECMOD is used here to simulate the steady-state performance of pulsed extraction columns under different operating conditions, including the pulsation intensity and the volumetric flow rates. The pulsation intensity (affecting the holdup, the mean droplet diameter and the solute concentration) is found to have a more pronounced effect on systems of high interfacial tension.
On the other hand, the variation of the volumetric flow rates has a substantial effect on the holdup, mean droplet diameter and solute concentration profiles for chemical systems of low interfacial tension. Two chemical test systems recommended by the European Federation of Chemical Engineering (water-acetone (solute)-n-butyl acetate and water-acetone (solute)-toluene) and an industrial test system are used in the simulation. Model predictions are successfully validated against steady-state and transient experimental data, where good agreement is achieved. The simulated results (holdup, mean droplet diameter and mass transfer profiles), compared to the experimental data, show that LLECMOD is a powerful simulation tool that can efficiently predict the dynamic and steady-state performance of pulsed extraction columns.
In another part of this doctoral thesis, the steady-state performance of extraction columns is studied within the population balance framework, taking into account the effect of the dispersed-phase inlet condition (light or heavy phase dispersed) and the direction of mass transfer (from the continuous to the dispersed phase and vice versa). LLECMOD, a program that uses multivariate population balance models, is extended to take into account the direction of mass transfer and the dispersed-phase inlet. As a case study, LLECMOD is used to simulate pilot-plant RDC columns, where the steady-state mean flow properties (dispersed-phase holdup and mean droplet diameter) and the solute concentration profiles are compared to the available experimental data. Three chemical systems were used: sulpholane-benzene-n-heptane, water-acetone-toluene and water-acetone-n-butyl acetate. The dispersed-phase inlet and the direction of mass transfer, as well as the physical properties of the chemical system, are found to have a profound effect on the steady-state performance of the RDC column. For example, the mean droplet diameter is found to remain invariant when the heavy phase is dispersed, and the extractor efficiency is higher when the direction of mass transfer is from the continuous to the dispersed phase. LLECMOD predictions are found to be in good agreement with the available experimental data concerning the dispersed-phase holdup, the mean droplet diameter and the solute concentration profiles in both phases.
In a further part of this doctoral thesis, a mathematical model is developed for liquid extraction columns based on the multivariate population balance equation (PBE) and the primary secondary particle method (PSPM) introduced by Attarakih, 2010 (US Patent Application: 0100106467). The model is extended to include the momentum balance for the dispersed phase, which eliminates the need for the often conflicting correlations used in estimating the terminal velocity of single droplets and droplet swarms. The resulting mathematical model is complex due to the integral nature of the population balance equation. To reduce this complexity while retaining most of the information contained in the continuous population balance equation, the concept of the PSPM is used. The secondary particle can be envisaged as a fluid particle carrying information about the distribution as it evolves in space and time, while the primary particles carry the mean properties of the population, such as the total droplet concentration, the mean droplet diameter, the dispersed-phase holdup and so on. This information reflects the particle-particle interactions (breakage and coalescence) and transport (convection and diffusion). The developed model is discretized in space using a first-order upwind method, combined with a semi-implicit first-order scheme in time, and is used to simulate a pilot-plant RDC extraction column. Here, the effect of the number of primary particles (classes) on the predicted solution is investigated. Numerical results show that the solution converges quickly as the number of primary particles is increased; the terminal droplet velocity of the individual primary particles is found to be the quantity most sensitive to this number.
Other mean population properties, like the droplet mean diameter, the mean holdup and the concentration profiles, are also found to converge along the column height as the number of primary particles is increased. The predicted steady-state profiles (droplet diameter, holdup and concentration) along a pilot RDC extraction column are compared to the experimental data, where good agreement is achieved.
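The first-order upwind space discretization used above can be conveyed with a toy scalar advection equation (a sketch only, with an explicit time step; the thesis combines the upwind method with a semi-implicit scheme and the full population balance source terms):

```python
import numpy as np

def upwind_advection(u0, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c*u_x = 0 with c > 0.

    The inflow value u[0] is kept fixed; the scheme is explicit in time
    and monotone under the CFL condition c*dt/dx <= 1.
    """
    assert c > 0 and c * dt / dx <= 1.0, "CFL condition violated"
    u = u0.copy()
    nu = c * dt / dx
    for _ in range(steps):
        # each cell takes its flux from the upwind (left) neighbour
        u[1:] = u[1:] - nu * (u[1:] - u[:-1])
    return u
```

Being monotone, the scheme introduces no spurious oscillations in the transported profile, at the price of some numerical diffusion; this robustness is why first-order upwinding is a common choice for convection-dominated population balance transport.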
In addition, a robust and rigorous mathematical model based on the bivariate population balance equation is developed to predict the steady-state and dynamic behaviour of the interacting hydrodynamics and mass transfer in Kühni extraction columns. This model is likewise extended to include the momentum balance for the calculation of the droplet velocity. The effects of step changes in the important input variables (such as volumetric flow rates, rotational speed, inlet solute concentrations, etc.) on the output variables (dispersed-phase holdup, mean droplet diameter and the concentration profiles) are investigated.
The last part of this doctoral thesis is devoted to transient problems. The unsteady-state analysis reveals that the largest time constant (slowest response) is due to the mass transfer. In contrast, the hydrodynamic response of the dispersed-phase holdup is very fast compared to the mass transfer, owing to the relatively fast motion of the dispersed droplets with respect to the continuous phase. The dynamic behaviour of the dispersed and continuous phases shows a lag time that increases with the distance from the feed points of both phases. Moreover, the solute concentration response shows a highly nonlinear behaviour for both positive and negative step changes in the input variables. The simulation results are in good agreement with the experimental ones and show the usefulness of the model.
In this thesis we consider the problem of maximizing the growth rate under proportional and fixed costs in a framework with one bond and one stock, which is modeled as a jump diffusion with compound Poisson jumps. Following the approach of [1], we prove that in this framework it is optimal for an investor to follow a CB-strategy, whose boundaries depend only on the parameters of the underlying stock and bond. For an investor who follows a CB-strategy given by the stopping times \((\tau_i)_{i\in\mathbb N}\) and impulses \((\eta_i)_{i\in\mathbb N}\), it is then natural to ask how often he has to rebalance. In other words, we want to obtain the limit of the average inter-trading times
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^n(\tau_{i+1}-\tau_{i}).
\]
We obtain this limit, which is given by the expected first exit time of the risky fraction process from some interval under the invariant measure of the Markov chain \((\eta_i)_{i\in\mathbb N}\), using the ergodic theorems of von Neumann and Birkhoff. In general, it is difficult to obtain the expected first exit time for a process with jumps: when the process crosses a boundary of the interval, an overshoot may occur, which makes the distribution hard to determine. Nevertheless, if the process has only negative jumps, the first exit time can be obtained using scale functions. The main difficulty of this approach is that the scale functions are in general known only up to their Laplace transforms. In [2] and [3] a closed-form expression for the scale function of a Lévy process with phase-type distributed jumps is obtained. Phase-type distributions form a rich class of positive-valued distributions, containing the exponential, hyperexponential, Erlang, hyper-Erlang and Coxian distributions. Since the scale function is then given in closed form, we can differentiate it and obtain the expected first exit time explicitly via fluctuation identities.
[1] Irle, A. and Sass, J.: Optimal portfolio policies under fixed and proportional transaction costs, Advances in Applied Probability 38, 916-942.
[2] Egami, M. and Yamazaki, K.: On scale functions of spectrally negative Lévy processes with phase-type jumps, working paper, July 3.
[3] Egami, M. and Yamazaki, K.: Precautionary measures for credit risk management in jump models, working paper, June 17.
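When no closed-form scale function is at hand, the expected first exit time can at least be estimated by simulation. A crude Euler Monte Carlo sketch for a jump diffusion with only negative, exponentially distributed jumps (all parameters are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def first_exit_time(x0, lo, hi, mu, sigma, lam, jump_mean,
                    dt=1e-2, t_max=50.0):
    """Euler simulation of dX = mu*dt + sigma*dW - dJ, where J is a
    compound Poisson process with exponential jump sizes (spectrally
    negative case), run until X leaves the interval (lo, hi)."""
    x, t = x0, 0.0
    while lo < x < hi and t < t_max:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < lam * dt:              # a negative jump arrives
            x -= rng.exponential(jump_mean)
        t += dt
    return t

# crude Monte Carlo estimate of the expected first exit time
est = np.mean([first_exit_time(0.0, -1.0, 1.0, mu=0.05, sigma=0.3,
                               lam=0.5, jump_mean=0.2)
               for _ in range(100)])
```

Such a simulation gives a rough benchmark against which the explicit scale-function formulas for phase-type (here: exponential) jumps can be checked.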
The goal of this work is to develop a simulation-based algorithm, allowing the prediction
of the effective mechanical properties of textiles on the basis of their microstructure
and corresponding properties of fibers. This method can be used for optimization of the
microstructure, in order to obtain a better stiffness or strength of the corresponding fiber
material later on. An additional aspect of the thesis is that we take into account the microcontacts between fibers of the textile. A further aspect is accounting for the thickness of thin fibers in the textile. Introducing an additional asymptotics with respect to a small parameter, the ratio between the thickness and the representative length of the fibers, allows a reduction of the local contact problems between fibers to one-dimensional problems, which reduces the numerical computations significantly.
A fiber composite material with periodic microstructure and multiple frictional microcontacts
between fibers is studied. The textile is modeled by introducing small geometrical
parameters: the periodicity of the microstructure and the characteristic
diameter of fibers. The contact linear elasticity problem is considered. A two-scale
approach is used for obtaining the effective mechanical properties.
An algorithm using asymptotic two-scale homogenization for the computation of the effective mechanical properties of textiles with periodic rod or fiber microstructure is proposed. The algorithm is based on successively passing to the asymptotic limits with respect to the in-plane period and the characteristic diameter of the fibers. This leads to an equivalent homogenized problem and reduces the dimension of the auxiliary problems. Subsequent numerical simulations of the cell problems yield the effective material properties of the textile.
The homogenization of the boundary conditions on the vanishing out-of-plane interface of a textile or fiber-structured layer has been studied. By introducing additional auxiliary functions into the formal asymptotic expansion for a heterogeneous plate, the corresponding auxiliary and homogenized problems for a nonhomogeneous Neumann boundary condition were deduced. The boundary condition is incorporated into the right-hand side of the homogenized problem via effective out-of-plane moduli.
FiberFEM, a C++ finite element code for solving contact elasticity problems, is
developed. The code is based on the implementation of the algorithm for the contact
between fibers, proposed in the thesis.
Numerical examples of the homogenization of geotextiles and wovens are obtained by applying the developed algorithm. The effective material moduli are computed numerically using the finite element solutions of the auxiliary contact problems obtained by FiberFEM.
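The flavor of the auxiliary cell problems can be conveyed by the simplest possible case, a one-dimensional periodic laminate, where the cell problem can be solved by hand (a toy sketch only; the thesis treats three-dimensional contact elasticity):

```python
import numpy as np

def effective_coefficient_1d(a_cells):
    """Homogenized coefficient of -d/dx( a(x/eps) du/dx ) for a 1D
    periodic laminate with piecewise-constant a on equal sub-cells.

    The cell problem forces the flux a(y)*(1 + w'(y)) to be constant
    over the unit cell, hence a_eff is the harmonic mean of a."""
    a = np.asarray(a_cells, dtype=float)
    return a.size / np.sum(1.0 / a)
```

Note that the effective coefficient is the harmonic mean, not the arithmetic mean of the constituents: for a laminate with coefficients 1 and 4 one obtains 1.6, well below the naive average 2.5, which is the typical softening effect of homogenization in the direction across the layers.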
This thesis deals with the relationship between no-arbitrage and (strictly) consistent price processes for a financial market with proportional transaction costs in a discrete-time model. The exact mathematical statement behind this relationship is formulated in the so-called Fundamental Theorem of Asset Pricing (FTAP). Among the many proofs of the FTAP without transaction costs there is also an economically intuitive utility-based approach. It relies on the fact that the investor can maximize his expected utility from terminal wealth. This approach is rather constructive, since the equivalent martingale measure is then given by the marginal utility evaluated at the optimal terminal payoff.
However, in the presence of proportional transaction costs such a utility-based approach for the existence of consistent price processes is missing in the literature. So far, rather deep methods from functional analysis or from the theory of random sets have been used to show the FTAP under proportional transaction costs.
For the sake of existence of a utility-maximizing payoff we first concentrate on a generic single-period model with only one risky asset. The marginal utility evaluated at the optimal terminal payoff yields the first component of a consistent price process. The second component is given by the bid-ask prices depending on the investor's optimal action. Even more is true: nearby this consistent price process there are many strictly consistent price processes. Their exact structure allows us to apply this utility-maximizing argument in a multi-period model. In a backward induction we adapt the given bid-ask prices in such a way that the strictly consistent price processes found from maximizing utility can be extended to terminal time. In addition, possible arbitrage opportunities of the second kind, which can be present for the original bid-ask process, vanish. So far, the notion of arbitrage opportunities of the second kind has been investigated only in models with strictly positive transaction costs in every state; in our model, transaction costs need not be present in every state.
For a model with finitely many risky assets a similar idea is applicable. However, in the single-period case we need to develop new methods compared to the case with only one risky asset, for mainly two reasons. Firstly, it is not at all obvious how to obtain a consistent price process from the utility-maximizing payoff, since the consistent price process has to be found for all assets simultaneously. Secondly, we need to show directly that the so-called vector space property for null payoffs implies the robust no-arbitrage condition. Once this step is accomplished, we can a priori use prices with a smaller spread than the original ones, so that the consistent price process found from the utility-maximizing payoff is strictly consistent for the original prices.
To make the results applicable for the multi-period case we assume that the prices are given by compact and convex random sets. Then the multi-period case is similar to the case with only one risky asset but more demanding with regard to technical questions.
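The central object can be made concrete in a toy single-period, two-state market with bid-ask spreads (all numbers are hypothetical): a consistent price process is a price lying inside the bid-ask interval in every node that is a martingale under some equivalent measure. A naive grid search suffices to exhibit one; the thesis instead constructs such processes from marginal utility.

```python
from itertools import product

# Hypothetical single-period market: bid/ask today and in two states tomorrow.
bid0, ask0 = 9.5, 10.5
bid1, ask1 = {"up": 11.4, "down": 8.6}, {"up": 12.6, "down": 9.4}

def find_consistent_price_process(grid=40):
    """Grid search for (q, S0, S1) with bid <= S <= ask in every node and
    S0 = q*S1(up) + (1-q)*S1(down), i.e. S is a martingale under Q=(q, 1-q)."""
    qs = [i / grid for i in range(1, grid)]              # equivalent: 0 < q < 1
    su_vals = [bid1["up"] + j * (ask1["up"] - bid1["up"]) / grid
               for j in range(grid + 1)]
    sd_vals = [bid1["down"] + j * (ask1["down"] - bid1["down"]) / grid
               for j in range(grid + 1)]
    for q, su, sd in product(qs, su_vals, sd_vals):
        s0 = q * su + (1 - q) * sd
        if bid0 <= s0 <= ask0:                           # S0 inside its spread
            return q, s0, {"up": su, "down": sd}
    return None                                          # would signal arbitrage
```

If no such triple existed for any choice of prices inside the spreads, the FTAP under transaction costs would assert the presence of an arbitrage opportunity in this little market.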
Image restoration and enhancement methods that respect important features such as edges play a fundamental role in digital image processing. In the last decades a large
variety of methods have been proposed. Nevertheless, the correct restoration and
preservation of, e.g., sharp corners, crossings or texture in images is still a challenge, in particular in the presence of severe distortions. Moreover, in the context of image denoising many methods are designed for the removal of additive Gaussian noise, and their adaptation to other types of noise occurring in practice usually requires additional effort.
The aim of this thesis is to contribute to these topics and to develop and analyze new
methods for restoring images corrupted by different types of noise:
First, we present variational models and diffusion methods which are particularly well
suited for the restoration of sharp corners and X junctions in images corrupted by
strong additive Gaussian noise. For their deduction we present and analyze different
tensor based methods for locally estimating orientations in images and show how to
successfully incorporate the obtained information in the denoising process. The advantageous
properties of the obtained methods are shown theoretically as well as by
numerical experiments. Moreover, the potential of the proposed methods is demonstrated
for applications beyond image denoising.
Afterwards, we focus on variational methods for the restoration of images corrupted
by Poisson and multiplicative Gamma noise. Here, different methods from the literature
are compared and the surprising equivalence between a standard model for
the removal of Poisson noise and a recently introduced approach for multiplicative
Gamma noise is proven. Since this Poisson model has not been considered for multiplicative
Gamma noise before, we investigate its properties further for more general
regularizers including also nonlocal ones. Moreover, an efficient algorithm for solving
the involved minimization problems is proposed, which can also handle an additional
linear transformation of the data. The good performance of this algorithm is demonstrated
experimentally and different examples with images corrupted by Poisson and
multiplicative Gamma noise are presented.
In the final part of this thesis new nonlocal filters for images corrupted by multiplicative
noise are presented. These filters are deduced in a weighted maximum likelihood
estimation framework and for the definition of the involved weights a new similarity measure for the comparison of data corrupted by multiplicative noise is applied. The
advantageous properties of the new measure are demonstrated theoretically and by
numerical examples. Besides, denoising results for images corrupted by multiplicative
Gamma and Rayleigh noise show the very good performance of the new filters.
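The flavor of such a nonlocal filter can be sketched in one dimension. The patch distance below uses the symmetric ratio r + 1/r, which is small exactly when two intensities agree up to the multiplicative fluctuation (a simplified stand-in for the similarity measure developed in the thesis), and the weighted maximum likelihood estimate under Gamma noise reduces to a weighted arithmetic mean:

```python
import numpy as np

def nl_filter_multiplicative(f, patch=3, search=7, h=0.5):
    """Nonlocal filter for multiplicative noise (1D sketch).

    For each position, neighbours in a search window are weighted by a
    patch similarity adapted to multiplicative noise, and the output is
    the weighted mean (the weighted ML estimate for Gamma noise)."""
    n = len(f)
    out = np.empty(n)
    ph, sh = patch // 2, search // 2
    fp = np.pad(f, ph, mode='edge')          # fp[i:i+patch] is patch around i
    for i in range(n):
        lo, hi = max(0, i - sh), min(n, i + sh + 1)
        w = []
        for j in range(lo, hi):
            r = fp[i:i + patch] / fp[j:j + patch]
            d = np.mean(r + 1.0 / r - 2.0)   # >= 0, and 0 for identical patches
            w.append(np.exp(-d / h))
        w = np.array(w)
        out[i] = np.sum(w * f[lo:hi]) / np.sum(w)
    return out
```

The weighted mean arises because maximizing the weighted Gamma log-likelihood sum over candidate intensities u yields u = sum(w*f)/sum(w); for other multiplicative noise models (e.g. Rayleigh) the weighted ML estimate takes a different form.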
This thesis addresses challenges faced by small package shipping companies and investigates the integration of 1) service consistency and driver knowledge aspects and 2) the utilization of electric vehicles into the route planning of small package shippers. We use Operations Research models and solution methods to gain insights into the newly arising problems and thus support managerial decisions concerning these issues.
Due to their N-glycosidase activity, ribosome-inactivating proteins (RIPs) are attractive candidates as antitumor and antiviral agents in medical and biological research. In the present study, we have successfully cloned two different truncated gelonins into pET-28a(+) vectors and expressed intact recombinant gelonin (rGel), recombinant C-terminally truncated gelonin (rC3-gelonin) and recombinant N- and C-terminally truncated gelonin (rN34C3-gelonin). Biological experiments showed that all of these recombinant gelonins have no inhibiting effect on MCF-7 cell lines. These data suggest that the truncated gelonins still have a structure that does not allow for internalization into cells. Furthermore, truncation of gelonin leads to a partial or complete loss of N-glycosidase as well as DNase activity compared to intact rGel. Our data suggest that the C- and N-terminal amino acid residues are involved in the catalytic and cytotoxic activities of rGel. Consequently, intact gelonin rather than truncated gelonin should be selected as the toxin in an immunoconjugate.
In the second part, an immunotoxin composed of gelonin, a basic protein of 30 kDa isolated from the Indian plant Gelonium multiflorum, and the cytotoxic drug MTX has been studied as a potential tool for delivering gelonin into the cytoplasm of cells. On average, about 5 molecules of MTX were coupled to one molecule of gelonin. The MTX-gelonin conjugate is able to reduce the viability of MCF-7 cells in a dose-dependent manner (ID50, 10 nM), as shown by the MTT assay, and significantly induces direct and oxidative DNA damage, as shown by the alkaline comet assay. However, in an in-vitro translation assay the MTX-gelonin conjugate (IC50, 50.5 ng/ml) is less toxic than gelonin alone (IC50, 4.6 ng/ml). It can be concluded that the positive charge plays an important role in the N-glycosidase activity of gelonin. Furthermore, conjugation of MTX with gelonin through its α- and γ-carboxyl groups leads to a partial loss of its anti-folate activity compared to free MTX. Taken together, these results indicate that conjugation of MTX to gelonin permits delivery of gelonin into the cytoplasm of cancer cells and exerts a measurable toxic effect.
In the third part, we have isolated and characterized two type I ribosome-inactivating proteins (RIPs), gelonin and GAP31, from seeds of Gelonium multiflorum. Both proteins exhibit RNA N-glycosidase activity. The amino acid sequences of gelonin and GAP31 were identified by MALDI and ESI mass spectrometry. Gelonin and GAP31 peptides obtained by proteolytic digestion (trypsin and Arg-C) are consistent with the amino acid sequences published by Rosenblum and Huang, respectively. Further structural characterization of gelonin and GAP31 (tryptic and Arg-C peptide mapping) showed that the two RIPs have 96% sequence similarity. Thus, these two proteins are most probably isoforms arising from the same gene by alternative splicing. The ESI-MS analysis of gelonin and GAP31 revealed at least three different post-translationally modified forms. A standard plant paucidomannosidic N-glycosylation pattern (GlcNAc2Man2-5Xyl0-1 and GlcNAc2Man6-12Fuc1-2Xyl0-2) was identified using electrospray ionization MS for gelonin on N196 and for GAP31 on N189, respectively. Based on these results, both proteins are located in the vacuoles of Gelonium multiflorum seeds.
The scientific aim of this work was to synthesize and characterize new bidentate, tridentate and multidentate ligands and to apply them in heterogeneous catalysis. For each type of ligand, new methods of synthesis were developed. Starting from 1,1'-(pyridine-2,6-diyl)diethanone and dimethylpyridine-2,6-dicarboxylate, different bispyrazolpyridines were synthesized, and novel ruthenium complexes of the type (L)(NNN)RuCl2 could be obtained. The complexes with L = triphenylphosphine turned out to be highly efficient catalyst precursors for the transfer hydrogenation of aromatic ketones. Introduction of a butyl group in the 5-positions of the pyrazoles leads to a pronounced increase in catalytic activity.
To find a method for the synthesis of bispyrimidinepyridines, different reactants and conditions were tried, and it was found that these tridentate ligands can be obtained by mixing and grinding the tetraketone with guanidinium carbonate and silica, which plays the role of a catalyst in this ring-closing reaction.
The bidentate 2-amino-4-(2-pyridinyl)pyrimidines were synthesized from different substrates according to the desired substituent on the pyrimidine ring.
Reacting these bidentate ligands with the ruthenium(II) precursor [(η6-cymene)Ru(Cl)(μ2-Cl)]2 gave cationic ruthenium(II) complexes of the type [(η6-cymene)Ru(Cl)(adpm)]Cl (adpm = chelating 2-amino-4-(2-pyridinyl)pyrimidine ligand). By stirring the freshly prepared complexes with either NaBPh4, NaBF4 or KPF6, the chloride anion was exchanged for other anions (BF4-, PF6-, BPh4-). Some of these ruthenium complexes showed remarkable activities in the transfer hydrogenation of ketones even in the absence of base, which led to detailed investigations of the mechanism of this reaction. Based on the observed activities and with the help of ESI-MS experiments and DFT calculations, a mechanism was proposed for the transfer hydrogenation of acetophenone in the absence of base: a C-H bond activation at the pyrimidine ring has to occur to activate the catalyst.
The palladium complexes of the bidentate N,N ligands were examined in coupling reactions; as expected, they did not show exceptional activities.
Multidentate ligands, having pyrimidine groups as relatively soft donors for late transition metals and simultaneously possessing a binding position for a hard Lewis-acid, could be obtained using the new synthesized bidentate and tridentate ligands.
Dealing with information in modern times requires users to cope with hundreds of thousands of documents, such as articles, emails, Web pages, or news feeds.
Among all information sources, the World Wide Web presents information seekers with the greatest challenges:
it offers more natural language text than anyone is capable of reading.
The key idea of this research is to provide users with adaptable filtering techniques that support them in filtering out the specific information items they need.
Its realization focuses on developing an Information Extraction system
that adapts to a domain of concern by interpreting the contained formalized knowledge.
Utilizing the Resource Description Framework (RDF), which is the Semantic Web's formal language for exchanging information,
allows extending information extractors to incorporate the given domain knowledge.
Because of this, formal information items from the RDF source can be recognized in the text.
The application of RDF allows a further investigation of operations on recognized information items, such as disambiguating and rating the relevance of these.
Switching between different RDF sources allows changing the application scope of the Information Extraction system from one domain of concern to another.
An RDF-based Information Extraction system can be triggered to extract specific kinds of information entities by providing it with formal RDF queries in terms of the SPARQL query language.
Representing extracted information in RDF extends the information coverage of the Semantic Web and provides a formal view on a text from the perspective of the RDF source.
In detail, this work presents the extension of existing Information Extraction approaches by incorporating the graph-based nature of RDF.
Here, pre-processing of RDF sources allows the extraction of statistical information models dedicated to supporting specific information extractors.
These information extractors refine standard extraction tasks, such as Named Entity Recognition, by using the information provided by the pre-processed models.
The post-processing of extracted information items enables representing the results in RDF format or as lists that can be ranked or filtered by relevance.
Post-processing also comprises the enrichment of originating natural language text sources with extracted information items by using annotations in RDFa format.
The results of this research extend the state-of-the-art of the Semantic Web.
This work contributes approaches for computing customizable and adaptable RDF views on the natural language content of Web pages.
Finally, due to the formal nature of RDF, machines can interpret these views allowing developers to process the contained information in a variety of applications.
The goal of this thesis is to find ways to improve the analysis of hyperspectral Terahertz images. Although it would be desirable to have methods that can be applied to all spectral regions, this is impossible. Depending on the spectroscopic technique, both the way the data is acquired and the characteristics to be detected differ. For these reasons, methods have to be developed or adapted to be especially suitable for the THz range and its applications, which include in particular the security sector and the pharmaceutical industry.
Because in many applications the volume of spectra to be organized is high, manual data processing is difficult. Especially in hyperspectral imaging, the literature is concerned with various forms of data organization such as feature reduction and classification. In all these methods, the necessary user input should be minimized while the adaptation to the specific application should be maximized.
Therefore, this work aims at automatically segmenting or clustering THz-TDS data. To achieve this, we propose a course of action that makes the methods adaptable to different kinds of measurements and applications. State of the art methods will be analyzed and supplemented where necessary, and improvements and new methods will be proposed. This course of action includes preprocessing methods to make the data comparable. Furthermore, feature reduction that represents chemical content in about 20 channels instead of the initial hundreds will be presented. Finally, the data will be segmented by efficient hierarchical clustering schemes. Various application examples will be shown.
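The final clustering step can be illustrated with a minimal sketch. The code below implements naive single-linkage agglomerative clustering in pure Python on made-up two-channel "spectra" (the thesis uses efficient hierarchical schemes on roughly 20-channel reduced features; the data and cluster count here are illustrative only):

```python
# Minimal agglomerative (single-linkage) clustering sketch in pure Python.
# The feature vectors stand in for dimensionality-reduced THz spectra.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, k):
    """Merge the closest pair of clusters until only k clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(points[a], points[b])
                               for a in clusters[ij[0]] for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# two well-separated groups of "spectra"
pixels = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
print(single_linkage(pixels, 2))  # [[0, 1], [2, 3]]
```

A production implementation would of course use an optimized hierarchical scheme rather than this quadratic-per-merge search, but the grouping principle is the same.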
Further work should include a final classification of the detected segments. It is not discussed here as it strongly depends on specific applications.
Mechanisms underlying the biological effects of coffee and its constituents are incompletely understood. Many effects have been attributed solely to caffeine, neglecting that coffee is a mixture of many chemical substances. Some authors suggest that the main mechanism of action of caffeine is to antagonize adenosine receptors (AR); a second effect is the inhibition of phosphodiesterases with the subsequent accumulation of cAMP and an intensification of the effects of catecholamines. Although the inhibition of phosphodiesterases may contribute to the actions of caffeine, there is growing evidence that most pharmacological effects of this xanthine result from antagonism of AR.
One of the main objectives of this work was to investigate whether substances other than caffeine in coffee may influence the homeostasis of intracellular cyclic nucleotides in vitro and in vivo. The influence of selected coffee compounds, extracts and brews on key elements involved in the adenosine receptor-mediated signaling pathway has been investigated.
A further aim of this work was also to determine if coffee or some coffee constituents may have a stimulatory effect on the cellular heme oxygenase activity (HO-activity). Two coffee extracts, a slightly (AB1) and an intensively roasted coffee (AB2), were studied along with selected individual compounds. Caffeine and low substituted pyrazines showed no effect on the HO-activity, while NMP, pyrazines with a greater substitution pattern such as Tetramethylpyrazine (TMP) and 2-Ethyl-3,5(6)-dimethylpyrazine (2-E-3,5-DMP) and both coffee extracts significantly induced the HO-activity in liver hepatocellular carcinoma (HepG2), intestinal colo-rectal adenocarcinoma (Caco-2) and in some instances in monocytic leukemia (MM6) cells.
It was found that caffeine, theophylline, coffee extracts from conventional or functional coffees, pyrazines (2,3-DE-6-MP, 2-Isobutyl-3-methoxyP), 5-CQA and caffeic acid all significantly inhibited the basal cytoplasmatic PDE activity in lysates of lung tumour xenograft cells (LXFL529L) and human platelets. To a somewhat lesser extent, PDE inhibition was also found in experiments performed with paraxanthine and other pyrazines (2-E-3,5-DMP, TMP and 2-E-5-MP). Thus the degree of roasting has a considerable impact on the constituents that influence PDE activity. Caffeine, coffee polyphenols, some pyrazines and further, as yet unknown, roasting products appear to represent the main modulating constituents.
In two coffee intervention studies, a short-term (8 weeks) and a long-term study (24 weeks), comprising 8 and 84 healthy volunteers respectively, we examined extracellular key elements of the adenosine pathway including plasma adenosine levels and adenosine deaminase activity. Additionally, we studied the intracellular cAMP concentration and the PDE activity in platelets as surrogate biomarkers of adipocytes.
Results of in vitro experiments had suggested that the concentrations of caffeine and coffee extracts required to obtain a half maximal inhibition were in the upper range of physiological conditions. Yet, it was demonstrated for the first time in vivo that moderate consumption of coffee can modulate the activity of platelet phosphodiesterases in humans in long and short term. In both studies, the first exposure to coffee showed a strong inhibition (p<0.001) of the PDE activity in the platelet lysates of the participants while the second coffee phase showed no or a slight effect when compared with the first coffee intervention.
In both studies a significant increase (p<0.001) in intraplatelet cAMP concentrations during the wash-out phase (after the first coffee phase) was observed. This response could be due to inhibition of the PDE activity in the previous phase extending into the wash-out phase. However, the behavior of cAMP in the following study phases cannot be easily explained. It may be hypothesized that this effect is attributable to adaptive responses to the PDE inhibition. One possibility is the modulation of the expression of membrane-bound adenosine receptors in platelet precursors, which still have a nucleus. This may potentially influence adenylate cyclase activity in mature platelets. For the observed effects, ingredients of coffee other than caffeine also appear to play a role. The findings suggest that monitoring of cAMP homeostasis in platelets is not a useful surrogate biomarker for effects in other tissues.
Neither the activity of adenosine deaminase nor the adenosine concentrations in plasma were markedly modulated by the coffee consumption in either trial. This may reflect the fact that adenosine is subject to quick and effective enzymatic turnover by phosphorylation (adenosine kinase) or deamination (adenosine deaminase), allowing its concentration to be kept within a well-balanced homeostasis. However, it is also well known that considerable variability exists in the responses to coffee drinking. In part, such variability is due to caffeine tolerance, but there is also evidence for a genetic background.
Altogether the data reported here provide further evidence for the perception that coffee consumption is associated with beneficial health effects demonstrated for the cAMP enhancement in platelets, known to counteract platelet aggregation. The effects observed for the influence of cellular heme oxygenase (HO) are in line with the well documented antioxidative activity of coffee and its constituents.
Paper production is a problem of significant importance for society and a challenging topic for scientific investigation. This study is concerned with the simulation of the pressing section of a paper machine. We aim at the development of an advanced mathematical model of the pressing section, which is able to recover the behavior of the fluid flow within the paper-felt sandwich observed in laboratory experiments.
From the modeling point of view the pressing of the paper-felt sandwich is a complex process, since one has to deal with two-phase flow in moving and deformable porous media. To account for the solid deformations, we use developments from the PhD thesis by S. Rief, where the elasticity model is stated and discussed in detail. The flow model, which accounts for the movement of water within the paper-felt sandwich, is described with the help of two flow regimes: single-phase water flow and two-phase air-water flow. The model for the saturated flow consists of Darcy's law and mass conservation. The second regime is described by the Richards approach together with dynamic capillary effects. The model for the dynamic capillary pressure-saturation relation proposed by Hassanizadeh and Gray is adapted to the needs of the paper manufacturing process.
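In symbols, the two flow regimes can be sketched as follows (a generic textbook form with assumed notation: \(v\) Darcy velocity, \(K\) permeability, \(\mu\) viscosity, \(\phi\) porosity, \(S\) water saturation, \(k_r\) relative permeability, \(p_w, p_a\) water and air pressures, \(\tau\) the dynamic coefficient; not the thesis's exact formulation):

```latex
% saturated regime: Darcy's law and mass conservation
v = -\frac{K}{\mu}\,\nabla p_w, \qquad \nabla\cdot v = 0;
% unsaturated regime: Richards-type equation with the
% Hassanizadeh--Gray dynamic capillary relation
\phi\,\partial_t S - \nabla\cdot\Bigl(\frac{K\,k_r(S)}{\mu}\,\nabla p_w\Bigr) = 0,
\qquad p_a - p_w = p_c(S) - \tau\,\partial_t S.
```

Setting \(\tau = 0\) recovers the static capillary pressure-saturation relation, which is the comparison case studied below.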
We started the development of the flow model with mathematical modeling in the one-dimensional case. The one-dimensional flow model is derived from a two-dimensional one by an averaging procedure in the vertical direction. The model is numerically studied and verified against measurements. Some theoretical investigations are performed to prove the convergence of the discrete solution to the continuous one. For completeness of the studies, the models with the static and dynamic capillary pressure-saturation relations are considered. Existence, compactness and convergence results are obtained for both models.
Then, a two-dimensional model is developed, which accounts for a multilayer computational domain and formation of the fully saturated zones. For discretization we use a non-orthogonal grid resolving the layer interfaces and the multipoint flux approximation O-method. The numerical experiments are carried out for parameters which are typical for the production process. The static and dynamic capillary pressure-saturation relations are tested to evaluate the influence of the dynamic capillary effect.
The last part of the thesis is an investigation of the validity range of the Richards’ assumption for the two-dimensional flow model with the static capillary pressure-saturation relation. Numerical experiments show that the Richards’ assumption is not the best choice in simulating processes in the pressing section.
Standard bases are one of the main tools in computational commutative algebra. In 1965
Buchberger presented a criterion for such bases and thus was able to introduce a first approach for their computation. Since the basic version of this algorithm is rather inefficient,
as it processes a lot of useless data during its execution, active research on
improvements of such algorithms is quite important.
In this thesis we introduce the reader to the area of computational commutative algebra with a focus on so-called signature-based standard basis algorithms. We not only
present the basic version of Buchberger's algorithm, but give an extensive discussion of different attempts at optimizing standard basis computations, from several sorting algorithms
for internal data up to different reduction processes. Afterwards the reader gets a complete
introduction to the origin of signature-based algorithms in general, explaining the underlying ideas in detail. Furthermore, we give an extensive discussion in terms of correctness,
termination, and efficiency, presenting various variants of signature-based standard basis algorithms.
Whereas Buchberger and others found criteria to discard useless computations which
are completely based on the polynomial structure of the elements considered, Faugère presented a first signature-based algorithm in 2002, the F5 Algorithm. This algorithm is famous for generating much less computational overhead during its execution. Within this
thesis we not only present Faugère’s ideas, we also generalize them and end up with several
different, optimized variants of his criteria for detecting redundant data.
Not being focused solely on theory, we also present practical
aspects, comparing the performance of various implementations of those algorithms in the
computer algebra system Singular over a wide range of example sets.
In the end we give a rather extensive overview of recent research in this area of computational commutative algebra.
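As a minimal illustration of the S-polynomial reductions that Buchberger's algorithm performs, and that signature-based criteria try to avoid, consider the ideal \(I=\langle x^2,\; xy+y^2\rangle\) under the lexicographic order with \(x>y\) (a standard textbook example, not taken from the thesis):

```latex
S(x^2,\, xy+y^2) \;=\; y\cdot x^2 \;-\; x\cdot(xy+y^2) \;=\; -xy^2
\;\;\xrightarrow{\;xy+y^2\;}\;\; y^3,
```

so \(y^3\) enters the basis; the remaining S-polynomials \(S(x^2, y^3)\) and \(S(xy+y^2,\, y^3)\) reduce to zero, and \(\{x^2,\ xy+y^2,\ y^3\}\) is a standard basis. Signature-based criteria such as those of F5 aim to predict such zero reductions in advance and skip them.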
Development of New Methods for the Synthesis of Aldehydes, Arenes and Trifluoromethylated Compounds
(2012)
In the 1st project, the successful development of a 2nd-generation palladium catalyst for the selective hydrogenation of carboxylic acids to aldehydes was accomplished. This project was done in cooperation with Dipl.-Chem. Thomas Fett from Boehringer Ingelheim, Austria. The new catalyst is highly effective for the conversion of diversely functionalized aromatic, heteroaromatic and aliphatic carboxylic acids to the corresponding aldehydes in the presence of pivalic anhydride at 5 bar hydrogen pressure, which was otherwise achieved either at 30 bar of hydrogen pressure or by using waste-intensive hypophosphite bases as reducing agents. Our method has increased the synthetic importance of this valuable transformation. Selective hydrogenation of carboxylic acids to the corresponding aldehydes is now possible with industrial hydrogenation equipment as well as laboratory-scale glass autoclaves. It might also convince synthetic organic chemists to use this transformation for routine aldehyde synthesis in the laboratory.
In the 2nd project, a microwave-assisted Cu-catalyzed protodecarboxylation of arenecarboxylic acids to arenes was achieved. This work was done in collaboration with Dipl.-Chem. Filipe Manjolinho under the supervision of Dr. Nuria Rodríguez. In the presence of 1-5 mol% of an inexpensive CuI/1,10-phenanthroline catalyst generated in situ under microwave radiation, diversely functionalized arene and heteroarene carboxylic acids have been decarboxylated to the corresponding arenes in good yields at 190 °C in 5-15 min. The loss of volatile arenes with the release of CO2 is controlled by the use of sealed high pressure resistant microwave vessels. These reactions are highly beneficial for parallel synthesis in drug discovery due to their short reaction time. Microwave technology will also help in the future to develop more effective catalysts for protodecarboxylation reactions.
Based on the microwave assisted protodecarboxylation strategy, decarboxylative coupling of arenecarboxylic acids with aryl triflates and tosylates was also conducted under microwave radiation which provided higher yields of the corresponding biphenyls from deactivated substrates in short reaction time compared to the conventional heating.
In the 3rd project, crystalline potassium (trifluoromethyl)trimethoxyborate was successfully applied to the synthesis of benzotrifluorides under oxidative conditions. This project was done in cooperation with Dipl.-Chem. Annette Buba. In the presence of Cu(OAc)2 and molecular oxygen, arylboronates were coupled with K+[CF3B(OMe)3] in DMSO at 60 °C. A variety of benzotrifluorides was synthesized in good yields under the optimized reaction conditions. This protocol for the oxidative trifluoromethylation of arylboronates is the basis for the development of a decarboxylative trifluoromethylation reaction of arenecarboxylic acids.
The 4th project discloses the simple and straightforward synthesis of trifluoromethylated alcohols by nucleophilic addition of potassium (trifluoromethyl)trimethoxyborate to carbonyl compounds. This project was done in cooperation with Dr. Thomas Knauber and Dipl. Chem. Annette Buba. In the presence of K+[CF3B(OMe)3] in THF at 60 °C, diversely functionalized aldehydes and ketones were successfully converted into the corresponding trifluoromethylated alcohols.
The 3rd and 4th projects demonstrate the successful establishment of crystalline and shelf-stable potassium (trifluoromethyl)trimethoxyborate as a highly versatile CF3 source in nucleophilic trifluoromethylation reactions. These new protocols are characterized by their user-friendliness and broad applicability under mild reaction conditions; thus they are beneficial for the late-stage introduction of the CF3 group into organic molecules.
On Gyroscopic Stabilization
(2012)
This thesis deals with systems of the form
\(
M\ddot x+D\dot x+Kx=0\;, \; x \in \mathbb R^n\;,
\)
with a positive definite mass matrix \(M\), a symmetric damping matrix \(D\) and a positive definite stiffness
matrix \(K\).
If the equilibrium in the system is unstable, a small disturbance is enough to set the system in motion again. The motion of the system sustains itself, an effect which is called self-excitation or self-induced vibration. The reason behind this effect is the presence of negative damping, which results for example from dry friction.
Negative damping implies that the damping matrix \(D\) is indefinite or negative definite. Throughout our work, we assume \(D\) to be indefinite, and that the system possesses both stable and unstable modes and thus is unstable.
It is now the idea of gyroscopic stabilization to mix the modes of a system with indefinite damping such
that the system is stabilized without introducing further
dissipation. This is done by adding gyroscopic forces \(G\dot x\) with a suitable
skew-symmetric matrix \(G\) to the left-hand side. We call \(G=-G^T\in\mathbb R^{n\times n}\) a gyroscopic stabilizer for
the unstable system, if
\(
M\ddot x+(D+ G)\dot x+Kx=0
\)
is asymptotically stable. We show the existence of \(G\) in space dimensions three and four.
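The stability question can be made concrete with a small numerical check. The following sketch is a hypothetical two-dimensional toy example with \(M=I\) (the thesis itself treats dimensions three and four, and all numbers here are illustrative); it applies the Routh-Hurwitz criterion to the quartic characteristic polynomial of \(\det(\lambda^2 M+\lambda(D+G)+K)\):

```python
# Routh-Hurwitz check of gyroscopic stabilization for a 2-D toy example.
# With M = I, D = diag(d1, d2) indefinite, K = diag(k1, k2) and
# G = [[0, g], [-g, 0]], the characteristic polynomial works out to
#   l^4 + (d1+d2) l^3 + (k1+k2+d1*d2+g^2) l^2 + (d1*k2+d2*k1) l + k1*k2.

def asymptotically_stable(d1, d2, k1, k2, g):
    a = d1 + d2
    b = k1 + k2 + d1 * d2 + g * g
    c = d1 * k2 + d2 * k1
    d = k1 * k2
    # Hurwitz conditions for l^4 + a*l^3 + b*l^2 + c*l + d
    return a > 0 and d > 0 and a * b - c > 0 and (a * b - c) * c - a * a * d > 0

# damping indefinite (d2 < 0): unstable without gyroscopic forces ...
print(asymptotically_stable(3.0, -1.0, 1.0, 2.0, g=0.0))  # False
# ... but asymptotically stable with a suitable gyroscopic term
print(asymptotically_stable(3.0, -1.0, 1.0, 2.0, g=2.0))  # True
```

The check shows the mixing effect described above: the same indefinite damping matrix yields an unstable system for \(g=0\) and an asymptotically stable one once the gyroscopic coupling is strong enough.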
In this thesis we outline Kerner's 3-phase traffic flow theory, which states that the flow of vehicular traffic occurs in three phases, i.e. the free flow, synchronized flow and wide moving jam phases.
A macroscopic 3-phase traffic model of the Aw-Rascle type is derived from the microscopic Speed Adaptation 3-phase traffic model
developed by Kerner and Klenov [J. Phys. A: Math. Gen., 39 (2006), pp. 1775-1809].
We derive the same macroscopic model from the kinetic traffic flow model of Klar and Wegener [SIAM J. Appl. Math., 60 (2000), pp. 1749-1766] as well as that of Illner, Klar and Materne [Comm. Math. Sci., 1 (2003), pp. 1-12].
In the above stated derivations, the 3-phase traffic theory is constituted in the macroscopic model through a relaxation term.
This serves as an incentive to modify the relaxation term of the `switching curve' model of Greenberg,
Klar and Rascle [SIAM J. Appl. Math., 63 (2003), pp. 818-833] to obtain another macroscopic 3-phase traffic model, which is still of the Aw-Rascle type.
By specifying the relaxation term differently we obtain three kinds of models, namely the macroscopic Speed Adaptation,
the Switching Curve and the modified Switching Curve models.
To demonstrate the capability of the derived macroscopic traffic models to reproduce the features of 3-phase traffic theory, we simulate a
multi-lane road that has a bottleneck. We consider a stationary and a moving bottleneck.
The results of the simulations for the three models are compared.
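In generic form, an Aw-Rascle-type model with relaxation can be sketched as follows (a textbook form with assumed notation: \(\rho\) density, \(v\) speed, \(p\) the traffic pressure, \(V\) the equilibrium speed, \(T\) the relaxation time; not the thesis's exact equations):

```latex
\partial_t \rho + \partial_x(\rho v) = 0, \qquad
\partial_t\bigl(v + p(\rho)\bigr) + v\,\partial_x\bigl(v + p(\rho)\bigr)
= \frac{V(\rho, v) - v}{T}.
```

The three variants discussed above (macroscopic Speed Adaptation, Switching Curve, and modified Switching Curve) then correspond to different choices of the relaxation target \(V\).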
This thesis generalizes the Cohen-Lenstra heuristic for the class groups of real quadratic
number fields to higher class groups. A "good part" of the second class group is defined.
In general this is a non-abelian proper factor group of the second class group. Properties
of those groups are described, and a probability distribution on the set of those groups is
introduced and proposed as a generalization of the Cohen-Lenstra heuristic for real quadratic
number fields. The calculation of number field tables which contain information about
higher class groups is explained, and the tables are compared to the heuristic. The agreement
is close. A program which can create an internet database for number field tables is
presented.
The increasing complexity of modern SoC designs makes tasks of SoC formal verification
a lot more complex and challenging. This motivates the research community to develop
more robust approaches that enable efficient formal verification for such designs.
It is a common scenario to apply a correctness-by-integration strategy while a SoC
design is being verified. This strategy assumes formal verification to be implemented in
two major steps. First of all, each module of a SoC is considered and verified separately
from the other blocks of the system. In the second step, when functional correctness
has been successfully proved for every individual module, the communication behavior
between all the modules of the SoC has to be verified. In industrial applications, SAT/SMT-based interval property checking (IPC) has become widely adopted for SoC verification. Using IPC approaches, a verification engineer can solve a wide range of important verification problems and prove the functional correctness of diverse complex components in a modern SoC design. However, there exist critical parts of a design where formal methods often lack robustness. State-of-the-art property checkers fail to prove correctness for the data path of an industrial central processing unit (CPU). In particular, arithmetic circuits of a realistic size (32 bits or 64 bits), especially those implementing multiplication algorithms, are well-known examples where SAT/SMT-based
formal verification may reach its capacity very fast. In such cases, formal verification
is replaced with simulation-based approaches in practice. Simulation is a good methodology that can discover a high proportion of the bugs hidden in a SoC design. However, in contrast to formal methods, a simulation-based technique cannot guarantee the absence of errors in a design. Thus, simulation may still miss so-called corner-case bugs in the design. This may potentially lead to additional and very expensive costs in terms of time, effort, and investment spent on redesigns, refabrications, and reshipments of new chips.
The work of this thesis concentrates on studying and developing robust algorithms
for solving hard arithmetic decision problems. Such decision problems often originate from RTL property checking of data-path designs. Proving properties of those
designs can be performed efficiently by solving SMT decision problems formulated in
the quantifier-free logic over fixed-size bit vectors (QF-BV).
This thesis firstly proposes an effective algebraic approach based on Gröbner basis theory that allows such arithmetic problems to be decided efficiently. Secondly, for the case of custom-designed components, this thesis describes a sophisticated modeling technique required to restore the necessary arithmetic description from these components. Further, this thesis also explains how methods from computer algebra and the modeling techniques can be integrated into a common SMT solver. Finally, a new QF-BV SMT solver is introduced.
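A toy illustration of the kind of QF-BV decision problem involved (the formula and bit width are hypothetical, and a real flow would hand the query to an SMT solver rather than enumerate): a property over bit vectors is valid exactly when its negation has no satisfying assignment, which for a single 8-bit variable can be checked by brute force.

```python
# Brute-force stand-in for a QF-BV validity query: check that the
# shift-and-add circuit (x << 1) + x equals 3*x for every 8-bit word.

WIDTH = 8
MASK = (1 << WIDTH) - 1  # modular bit-vector arithmetic

def property_holds(x):
    return ((x << 1) + x) & MASK == (3 * x) & MASK

# valid iff the negation has no satisfying assignment
counterexamples = [x for x in range(1 << WIDTH) if not property_holds(x)]
print(counterexamples)  # []
```

Enumeration is hopeless at 32 or 64 bits, which is precisely why algebraic techniques such as Gröbner bases are brought into the SMT solver for such arithmetic problems.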
In urban planning, both measuring and communicating sustainability are among the most recent concerns. Therefore, the primary emphasis of this thesis concerns establishing metrics and visualization techniques in order to deal with indicators of sustainability.
First, this thesis provides a novel approach for measuring and monitoring two indicators of sustainability - urban sprawl and carbon footprints – at the urban neighborhood scale. By designating different sectors of relevant carbon emissions as well as different household categories, this thesis provides detailed information about carbon emissions in order to estimate impacts of daily consumption decisions and travel behavior by household type. Regarding urban sprawl, a novel gridcell-based indicator model is established, based on different dimensions of urban sprawl.
Second, this thesis presents a three-step-based visualization method, addressing predefined requirements for geovisualizations and visualizing those indicator results, introduced above. This surface-visualization combines advantages from both common GIS representation and three-dimensional representation techniques within the field of urban planning, and is assisted by a web-based graphical user interface which allows for accessing the results by the public.
In addition, by focusing on local neighborhoods, this thesis provides an alternative approach in measuring and visualizing both indicators by utilizing a Neighborhood Relation Diagram (NRD), based on weighted Voronoi diagrams. Thus, the user is able to a) utilize original census data, b) compare direct impacts of indicator results on the neighboring cells, and c) compare both indicators of sustainability visually.
This research work focuses on the generation of a high resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns by causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to the problem of increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been in practice in rural floodplains for quite a long time, the use of the same approach in urban floodplains is still in its infancy. The reason is mainly the lack of a high resolution topographic model describing urban surface characteristics properly.
High resolution surface data describing hydrologic and hydraulic properties of complex urban areas are the prerequisite to more accurately describing and simulating the flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high resolution Digital Surface Model (DSM) of any study area. Processing the high-density and large volume of unstructured LiDAR data into fine resolution spatial databases is a difficult and time-consuming task when relying on human intervention alone. The application of robust algorithms in processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as accuracy.
This research work presents a number of techniques pertaining to processing, filtering and classification of LiDAR point data in order to achieve higher degree of automation and accuracy towards generating a high resolution urban surface model. This research work also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g. grasslands, bare soil and impervious surfaces).
While the accurate characterization of various surface types contributes to the better modelling of rainfall runoff processes, the LiDAR-derived fine resolution DSM serves as input to 2D hydraulic models and is capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high resolution spatial databases aiming at improving the accuracy of hydrologic and hydraulic databases of urban drainage systems. Later, these databases are given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the performance of the hydraulic capacity of urban drainage systems and 3) predict and visualize the surface flooding scenarios in order to take necessary flood protection measures.
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases, without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive for each pixel a probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected over critical regions if they fulfill certain conditions ensuring the affiliation to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap can be bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide the estimation of all parameters needed for fitting this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber-reinforced materials.
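As a rough sketch of the first modeling step, the following toy code replaces the multivariate von Mises-Fisher orientation distribution with a 2-D Gaussian turning angle (an assumption made purely for brevity); the parameter `sigma` controls the bending, and the packing step is omitted:

```python
import math
import random

# Toy 2-D analogue of a bent-fiber random walk. Each step turns the
# current direction by a small Gaussian angle; sigma = 0 yields a
# perfectly straight fiber, larger sigma yields stronger bending.

def bent_fiber(n_steps, step=1.0, sigma=0.1, seed=0):
    rng = random.Random(seed)
    theta = rng.uniform(0.0, 2.0 * math.pi)  # random initial direction
    x, y = 0.0, 0.0
    points = [(x, y)]
    for _ in range(n_steps):
        theta += rng.gauss(0.0, sigma)       # small random turn
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        points.append((x, y))
    return points

fiber = bent_fiber(100, sigma=0.05)
print(len(fiber))  # 101
```

In the thesis's 3-D setting the turning step is drawn from the multivariate von Mises-Fisher distribution instead, and a force-biased packing pass subsequently removes overlaps between fibers.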
Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)
Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics,
computer vision, truss design, geometric modeling, and many others. Many polynomial
systems have solution sets, called algebraic varieties, which may have several irreducible
components. A fundamental problem of numerical algebraic geometry is to decompose
such an algebraic variety into its irreducible components. The witness point sets are
the natural numerical data structure to encode irreducible algebraic varieties.
Sommese, Verschelde and Wampler represented the irreducible algebraic decomposition of
an affine algebraic variety \(X\) as a union of finitely many disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure \(i\)-dimensional components, and the \(W_{ij}\) represent the \(i\)-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
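The idea behind witness point sets can be illustrated on a toy example: a witness set of a pure-dimensional component is obtained by intersecting it with a generic linear space of complementary dimension, and the number of witness points equals the component's degree. A minimal sketch (not the homotopy-continuation machinery used by the actual algorithms) for the degree-2 curve \(x^2+y^2-1=0\) sliced with a random line:

```python
import cmath
import random

def witness_points_circle(seed=0):
    """Witness point set for x^2 + y^2 - 1 = 0: intersect the curve with
    a generic line y = a*x + b and collect the intersection points."""
    rng = random.Random(seed)
    a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    # Substituting y = a*x + b gives (1 + a^2) x^2 + 2ab x + (b^2 - 1) = 0.
    A, B, C = 1 + a * a, 2 * a * b, b * b - 1
    disc = cmath.sqrt(B * B - 4 * A * C)
    xs = [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
    return [(x, a * x + b) for x in xs]

pts = witness_points_circle()
# Two witness points: the component has degree 2.
assert len(pts) == 2
for x, y in pts:
    assert abs(x * x + y * y - 1) < 1e-9
```

For a general variety the slice is taken with complex coefficients and the points are tracked numerically, which is where SINGULAR and BERTINI come in.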
We modify this concept, partially using Gröbner bases, triangular sets, the local dimension, and
the so-called zero sum relation. We present in the second chapter the corresponding
algorithms and their implementations in SINGULAR. We give some examples and timings,
which show that the modified algorithms are more efficient if the number of variables is not
too large. For a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety
based on the concept of the deflation of an algebraic variety.
Building on the modified algorithms mentioned above, we present in the third chapter an
algorithm and its implementation in SINGULAR to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate in the fourth
chapter some numerical algebraic algorithms.
In the last chapter we present two SINGULAR libraries. The first library is used to compute
the numerical irreducible decomposition and the embedded components of an algebraic variety.
The second library contains procedures implementing the algorithms of the fourth chapter to test
inclusion and equality of two algebraic varieties, to compute the degree of a pure
\(i\)-dimensional component, and to compute the local dimension.
Today, polygonal models occur everywhere in graphical applications, since they are easy
to render and process, and a huge set of tools exists for the generation and manipulation
of polygonal data. However, modern scanning devices that allow high quality, large scale
acquisition of complex real world models often deliver a large set of points as the
resulting data structure of the scanned surface. A direct triangulation of these point
clouds does not always result in good models. They often contain problems such as holes,
self-intersections and non-manifold structures. In addition, important surface structures
such as sharp corners and edges are often lost during a usual surface reconstruction.
It is therefore worthwhile to stay a little longer in the point-based world: to analyze
the point cloud data with respect to such features, and afterwards to apply a surface
reconstruction method that is known to construct continuous and smooth surfaces,
extended so as to reconstruct sharp features.
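One cheap indicator for sharp features in point cloud data (a heuristic stand-in for illustration, not the method developed in the thesis) is the offset between a point and the centroid of its neighborhood: on a flat patch the centroid stays on the surface, while across a crease it is pulled away from it.

```python
import math

def centroid_offset(points, idx, radius):
    """Distance from a point to the centroid of its radius-neighborhood.
    Near zero on flat patches; large across sharp creases."""
    nbrs = [q for q in points if math.dist(q, points[idx]) <= radius]
    centroid = tuple(sum(q[k] for q in nbrs) / len(nbrs) for k in range(3))
    return math.dist(points[idx], centroid)

# Sample the surface z = |x|, which has a sharp crease along the y-axis.
step = 0.05
grid = [i * step for i in range(-20, 21)]
cloud = [(x, y, abs(x)) for x in grid for y in grid]

edge_idx = cloud.index((0.0, 0.0, 0.0))                      # on the crease
flat_idx = cloud.index((10 * step, 0.0, abs(10 * step)))     # on a flat face
r = 0.2
assert centroid_offset(cloud, edge_idx, r) > 3 * centroid_offset(cloud, flat_idx, r)
```

Thresholding such an indicator marks candidate feature points, which can then be handled separately during surface reconstruction.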
For computational reasons, the spline interpolation of the Earth's gravitational potential is usually done in a spherical framework. In this work, however, we investigate a spline method with respect to the real Earth. We are concerned with developing real Earth oriented strategies and methods for the determination of the Earth's gravitational potential. For this purpose we introduce the reproducing kernel Hilbert space of Newton potentials on and outside a given regular surface, with a reproducing kernel defined as a Newton integral over its interior. We first give an overview of the results achieved thus far concerning approximations on regular surfaces using surface potentials (Chapter 3). The main results are contained in the fourth chapter, where we take a closer look at the Earth's gravitational potential, the Newton potentials and their characterization in the interior and the exterior space of the Earth. We also present the \(L^2\)-decomposition for regions in \(\mathbb{R}^3\) in terms of distributions as the main strategy to impose the Hilbert space structure on the space of potentials on and outside a given regular surface. The properties of the Newton potential operator are investigated in relation to the closed subspace of harmonic density functions. After these preparations, in the fifth chapter we are able to construct the reproducing kernel Hilbert space of Newton potentials on and outside a regular surface. The spline formulation for the solution of interpolation problems corresponding to a set of bounded linear functionals is given, and the corresponding convergence theorems are proven. The spline formulation reflects the specifics of the Earth's surface, due to the representation of the reproducing kernel (of the solution space) as a Newton integral over the inner space of the Earth.
Moreover, the approximating potential functions have the same domain of harmonicity as the actual Earth's gravitational potential, i.e., they are harmonic outside and continuous on the Earth's surface. This is a step forward in comparison to the spherical harmonic spline formulation, which involves functions harmonic down to the Runge sphere. The sixth chapter deals with the representation of the used kernel in the spherical case. It turns out that in the case of a spherical Earth, this kernel can be considered a kind of generalization of spherically oriented kernels, such as the Abel-Poisson or the singularity kernel. We also investigate the existence of a closed expression of the kernel; however, at this point it remains unknown to us. Therefore, in Chapter 7, we consider certain discretization methods for integrals over regions in \(\mathbb{R}^3\), in connection with the theory of the multidimensional Euler summation formula for the Laplace operator. We discretize the Newton integral over the real Earth (representing the spline function) and give a priori estimates for approximate integration when using this discretization method. The last chapter summarizes our results and gives some directions for future research.
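While the real Earth kernel has no known closed form, the spline interpolation scheme itself can be illustrated with one of the spherical kernels mentioned above. A minimal sketch using the Abel-Poisson kernel: solve \(K a = y\) for the spline coefficients, after which the spline reproduces the data at the nodes (the nodes, values and kernel parameter are made up for illustration):

```python
import math

def abel_poisson(t, h=0.7):
    """Abel-Poisson kernel on the sphere as a function of t = xi . eta."""
    return (1 - h * h) / (4 * math.pi * (1 + h * h - 2 * h * t) ** 1.5)

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Interpolation nodes on the unit sphere and sampled data values.
nodes = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0)]
data = [1.0, 2.0, 0.5, -1.0]

# Solve K a = y for the spline coefficients.
K = [[abel_poisson(dot(p, q)) for q in nodes] for p in nodes]
coeff = solve(K, data)

def spline(xi):
    return sum(a * abel_poisson(dot(xi, p)) for a, p in zip(coeff, nodes))

for p, y in zip(nodes, data):
    assert abs(spline(p) - y) < 1e-8
```

The strict positive definiteness of the kernel guarantees that the system \(K a = y\) is solvable; the thesis's kernel plays the same role but is represented as a Newton integral over the Earth's interior.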
The present dissertation contains theoretical studies on the topic of high energy deposition in matter. The work focuses on electronic excitation and relaxation processes on ultrafast timescales. Energy deposition by means of intense ultrashort (femtosecond) laser pulses and by means of swift heavy ion irradiation have certain similarities: the final observable material modifications result from a number of processes on different timescales. First, the electronic excitation by photoabsorption or by ion impact takes place on subfemtosecond timescales. Then these excited electrons propagate and redistribute their energy, interacting among themselves and exciting secondary generations of electrons; this typically takes place on femtosecond timescales. On the order of tens to hundreds of femtoseconds, the excited electrons are usually thermalized. The energy exchange with the lattice atoms lasts up to tens of picoseconds. The lattice temperature can reach the melting point; the material then cools down and recrystallizes, forming the final modified nanostructures, which are observed experimentally. The processes of each step form the initial conditions for the following step. Thus, to describe the final phase transition and the formation of nanostructures, one has to start from the very beginning and follow through all the steps.
The present work focuses on the early stages of the energy dissipation after its deposition, taking place in the electronic subsystems of excited materials. Different models applicable to different excitation mechanisms will be presented: in this thesis I start from the description of high energy excitation (electron energies of \(\sim\) keV), then focus on excitations to intermediate electron energies (\(\sim\) 100 eV), and finally come down to electron excitations of a few eV (visible light). The results will be compared with experimental observations.
For high energy material excitation, assumed to be caused by irradiation with swift heavy ions, the classical Asymptotical Trajectory Monte-Carlo (ATMC) method is applied to describe the excitation of electrons by the impact of the projectile, the initial kinetics of electrons, secondary electron creation, and the Auger redistribution of holes. I first simulate the early stage (first tens of fs) of the kinetics of the electronic subsystem (in a silica target, SiO\(_2\)) in tracks of ions decelerated in the electronic stopping regime. It will be shown that a well pronounced front of excitation in the electronic and ionic subsystems is formed due to the propagation of electrons, which cannot be described by models based on diffusion mechanisms (e.g. parabolic equations of heat diffusion). On later timescales, the thermalization time of electrons can be estimated as the time when the particle and energy propagation changes from ballistic to diffusive. As soon as the electrons are thermalized, one can apply the Two Temperature Model. It will be demonstrated how to combine the MC output with the two temperature model. The results of this combination demonstrate that secondary ionizations play a very important role in the track formation process, leading to energy stored in the hole subsystem. This energy storage causes a significant delay of heating and prolongs the timescales of lattice modifications up to tens of picoseconds.
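Once the electrons are thermalized, the two temperature model mentioned above reduces to two coupled ODEs for the electron and lattice temperatures. A minimal explicit-Euler sketch with illustrative, unfitted parameters (the source term stands in for the energy delivered by the MC-simulated electron kinetics):

```python
import math

def two_temperature(t_end, dt, Ce, Cl, G, source):
    """Explicit Euler integration of the two temperature model:
        Ce dTe/dt = -G (Te - Tl) + S(t)
        Cl dTl/dt =  G (Te - Tl)
    All quantities are in arbitrary consistent units; the parameters are
    illustrative and not fitted to any real material."""
    Te, Tl, t = 300.0, 300.0, 0.0
    while t < t_end:
        dTe = (-G * (Te - Tl) + source(t)) / Ce
        dTl = (G * (Te - Tl)) / Cl
        Te, Tl = Te + dt * dTe, Tl + dt * dTl
        t += dt
    return Te, Tl

# Gaussian source pulse: energy deposited into the electron subsystem.
pulse = lambda t: 1e4 * math.exp(-((t - 0.5) / 0.1) ** 2)

Te, Tl = two_temperature(t_end=10.0, dt=1e-3, Ce=1.0, Cl=10.0, G=1.0, source=pulse)
# The electrons heat first, then equilibrate with the lattice.
assert Tl > 300.0 and abs(Te - Tl) < 1.0
```

The coupling constant `G` sets the electron-phonon equilibration timescale; energy temporarily stored in the hole subsystem, as described above, effectively delays the source term.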
For intermediate excitation energies (XUV-VUV laser pulse excitation of materials) I applied a Monte-Carlo simulation, modified where necessary and extended to take into account the electronic band structure and the Pauli principle for electrons within the conduction band. I apply the new method to semiconductors and to metals, using solid silicon and aluminum as examples, respectively.
It will be demonstrated that for the case of semiconductors the final kinetic energy of free electrons is much less than the total energy provided by the laser pulse, due to the energy spent to overcome ionization potentials. It was found that the final total number of electrons excited by a single photon is significantly less than \(\hbar \omega / E_{gap}\). The concept of an 'effective energy gap' is introduced for collective electronic excitation, which can be applied to estimate the free electron density after high-intensity VUV laser pulse irradiation.
For metals, experimentally observed spectra of photons emitted from irradiated aluminum can be explained well with our results. At the characteristic time of photon emission due to the radiative decay of an \(L\)-shell hole (\(t < 60\) fs), the distribution function of the electrons is not yet fully thermalized. This distribution consists of two main branches: a low energy part in the form of a distorted Fermi distribution, and a long high energy tail. Therefore, the experimentally observed spectra show two different branches: the \(L\)-shell radiation emission reflects the low energy distribution, while the Bremsstrahlung spectrum reflects the high energy (non-thermalized) tail. The comparison with experiments demonstrated good agreement of the calculated spectra with the observed ones.
For the irradiation of a semiconductor with low energy photons (visible light), a statistical model named the "extended multiple rate equation" is proposed. Based on the earlier developed multiple rate equation, the model additionally includes the interaction of electrons with the phononic subsystem of the lattice and allows for a direct determination of the conditions for crystal damage. Our model effectively describes the dynamics of the electronic subsystem, dynamical changes in the optical properties, and lattice heating; the results are in very good agreement with experimental measurements of the transient reflectivity and the damage fluence threshold of silicon irradiated with a femtosecond laser pulse.
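The extended multiple rate equation tracks several discrete electron energy levels plus the phonon coupling. As a heavily simplified illustration of the underlying bookkeeping, a single rate equation for the conduction-band electron density with multiphoton ionization, avalanche and decay terms can be integrated as follows (all coefficients are placeholders, not the thesis's fitted model):

```python
import math

def free_electron_density(intensity, t_end, dt, sigma_k, k, alpha, tau):
    """Explicit Euler integration of a single rate equation
        dn/dt = sigma_k I(t)^k + alpha I(t) n - n / tau
    (k-photon ionization + avalanche - decay), a strong simplification
    of the extended multiple rate equation."""
    n, t = 0.0, 0.0
    while t < t_end:
        I = intensity(t)
        n += dt * (sigma_k * I ** k + alpha * I * n - n / tau)
        t += dt
    return n

# Normalized Gaussian intensity envelope of the laser pulse.
pulse = lambda t: math.exp(-((t - 1.0) / 0.3) ** 2)

n_final = free_electron_density(pulse, t_end=3.0, dt=1e-4,
                                sigma_k=1e-2, k=2, alpha=2.0, tau=5.0)
assert n_final > 0.0
```

Comparing the resulting density to a critical value is the usual way such models estimate a damage threshold; the extended model instead resolves the electron energy distribution and the lattice heating explicitly.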
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Provided that linear equations are solvable over the ring of coefficients, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. This class of rank tests may loosely be described as being based on counting the linear extensions of given partial orders. In order to apply these tests to actual data, we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
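The quantity at the heart of these rank tests, the number of linear extensions of a partial order, can be computed by brute force for small posets (a sketch of the combinatorial object only, not the thesis's algorithms, which must scale to real data):

```python
from itertools import permutations

def linear_extensions(n, relations):
    """Count the linear orders of 0..n-1 compatible with a partial order,
    given as (a, b) pairs meaning 'a must precede b'. Brute force over
    all n! permutations, so only suitable for small n."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# A 4-element "diamond" poset: 0 below 1 and 2, both below 3.
assert linear_extensions(4, [(0, 1), (0, 2), (1, 3), (2, 3)]) == 2
# An antichain admits all n! orders as linear extensions.
assert linear_extensions(3, []) == 6
```

A rank test of the class described above assigns data to the partial orders whose linear extensions contain the observed ranking, so efficient counting and enumeration are the computational core.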