## Dissertation


With the burgeoning computing power available, multiscale modelling and simulation has become increasingly capable of capturing the details of physical processes on different scales. The mechanical behavior of solids is oftentimes the result of interactions between multiple spatial and temporal scales and is hence a typical phenomenon of interest exhibiting multiscale characteristics. At the most basic level, properties of solids can be attributed to atomic interactions and crystal structure, which can be described on the nano scale. Mechanical properties at the macro scale are modeled using continuum mechanics, in terms of stresses and strains. Continuum models, although they offer an efficient way of studying material properties, are not accurate enough and lack the microstructural information behind the microscopic mechanisms that cause the material to behave the way it does. Atomistic models are concerned with phenomena at the level of the lattice, thereby allowing investigation of detailed crystalline and defect structures; yet the length scales of interest are inevitably far beyond the reach of full atomistic computation, which is prohibitively expensive. This makes multiscale models necessary. A possible avenue to this end is coupling different length scales, the continuum and the atomistic, in accordance with standard procedures. This is done by recourse to the Cauchy-Born rule, and in so doing we aim at a model that is efficient and reasonably accurate in mimicking physical behaviors observed in nature or in the laboratory. In this work, we focus on concurrent coupling based on energetic formulations that link the continuum to the atomistics. At the atomic scale, we describe the deformation of the solid by the displaced positions of the atoms that make up the solid; at the continuum level, the deformation of the solid is described by the displacement field that minimizes the total energy.
In the coupled continuum-atomistic model, a continuum formulation is retained as the overall framework of the problem and the atomistic feature is introduced by way of the constitutive description, with the Cauchy-Born rule establishing the point of contact. The entire formulation is made in the framework of nonlinear elasticity, and all simulations are carried out in a quasistatic setting. The model gives a direct account of measurable features of the microstructures developed by crystals through sequential lamination.
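The Cauchy-Born rule mentioned above can be sketched in a few lines: every lattice vector is assumed to deform affinely with the macroscopic deformation gradient F, so a continuum strain energy density is obtained by summing an interatomic potential over the deformed neighbor shell. The square lattice, Lennard-Jones potential, and nearest-neighbor truncation below are illustrative simplifications, not the model of this work:

```python
import math

def lj(r):
    """Lennard-Jones pair potential with eps = sigma = 1."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def cauchy_born_energy_density(F, a=2.0 ** (1.0 / 6.0)):
    """Strain energy density W(F) of a 2D square lattice under the
    Cauchy-Born rule: every lattice vector r deforms affinely to F r.
    Only the four nearest neighbors are summed; the factor 1/2
    accounts for each bond being shared by two atoms."""
    neighbors = [(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)]
    W = 0.0
    for x, y in neighbors:
        dx = F[0][0] * x + F[0][1] * y
        dy = F[1][0] * x + F[1][1] * y
        W += 0.5 * lj(math.hypot(dx, dy))
    return W / (a * a)  # energy per unit (undeformed) area

W_ref = cauchy_born_energy_density([[1.0, 0.0], [0.0, 1.0]])    # undeformed
W_shear = cauchy_born_energy_density([[1.0, 0.1], [0.0, 1.0]])  # simple shear
```

Since the lattice spacing sits at the pair-potential minimum, any shear raises the energy density, which is the kind of constitutive response the coupled model feeds back to the continuum.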

This dissertation tried to provide insights into the influences of individual and contextual factors on Technical and Vocational Education and Training (TVET) teachers’ learning and professional development in Ethiopia. Specifically, this research focused on identifying and determining the influences of teachers’ self-perception as learners and professionals, and on investigating the impact of the context, process and content of their learning and experiences on their professional development. Knowledge of these factors and their impacts helps in improving the learning and professional development of the TVET teachers and their professionalization. This research tried to provide answers to the following five research questions. (1) How do TVET teachers perceive themselves as active learners and as professionals, and what are the implications of their perceptions for their learning and development? (2) How do TVET teachers engage themselves in learning and professional development activities? (3) What contextual factors facilitated or hindered the TVET teachers’ learning and professional development? (4) Which competencies are found critical for the TVET teachers’ learning and professional development? (5) What actions need to be considered to enhance and sustain TVET teachers’ learning and professional development in their context? It is believed that the research results are significant not only to the TVET teachers, but also to school leaders, TVET teacher training institutions, education experts and policy makers, researchers and other stakeholders in the TVET sector. The theoretical perspectives adopted in this research are based on the systemic constructivist approach to professional development. An integrated approach to professional development requires that the teachers’ learning and development activities be treated as adult education based on the principles of constructivism.
Professional development is considered a context-specific and long-term process in which teachers are trusted, respected and empowered as professionals. Teachers’ development activities are seen as largely collaborative activities, portraying the social nature of learning. Schools that facilitate the learning and development of teachers exhibit the characteristics of a learning organisation culture, where professional collaboration, collegiality and shared leadership are practiced. This research has also drawn relevant points of view from studies and reports on vocational education and TVET teacher education programs and practices at international, continental and national levels. The research objectives and the types of research questions in this study implied the use of a qualitative inductive research approach as a research strategy. Primary data were collected from TVET teachers in four schools using a one-on-one qualitative in-depth interview method. These data were analyzed using a Qualitative Content Analysis method based on the inductive category development procedure. ATLAS.ti software was used to support the coding and categorization process. The research findings showed that most of the TVET teachers perceive themselves neither as professionals nor as active learners. These perceptions are found to be one of the major barriers to their learning and development. Professional collaborations in the schools are minimal, and teaching is seen as an isolated individual activity; a secluded task for the teacher. Self-directed learning initiatives and individual learning projects are not strongly evident. The predominantly teacher-centered approach used in TVET teacher education and professional development programs puts emphasis mainly on the development of technical competences and has limited the development of a range of competences essential to teachers’ professional development.
Moreover, factors such as the TVET school culture, society’s perception of the teaching profession, economic conditions, and weak links with industries and business sectors are among the major contextual factors that hindered the TVET teachers’ learning and professional development. A number of recommendations are put forward to improve the professional development of the TVET teachers. These include changes in the TVET school culture, a paradigm shift in the TVET teacher education approach and practice, and the development of educational policies that support the professionalization of TVET teachers. Areas for further theoretical research and empirical enquiry are also suggested to support the learning and professional development of the TVET teachers in Ethiopia.

The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, an ellipsoid or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated with Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated with spherical boundaries. Again, a tree algorithm which uses a numerical integration rule on bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.
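For reference, the Cauchy-Navier equation mentioned above, governing the displacement field \(\mathbf{u}\) of a homogeneous, isotropic, linearly elastic body with Lamé parameters \(\lambda, \mu\) and vanishing body forces, reads

```latex
\mu \,\Delta \mathbf{u} + (\lambda + \mu)\, \nabla \left( \nabla \cdot \mathbf{u} \right) = \mathbf{0}.
```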

In this work the (Ti, Al, Si)N system was investigated. The main point of the investigation was to study the possibility of obtaining nanocomposite coating structures by the deposition of multilayer films of TiN and AlSiN. This aims at understanding the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effects of temperature on microstructural changes during annealing of the coatings at 600 °C. The surface hardness, the elastic modulus, and the diffusion and composition of the multilayers were the test tools for the comparison between the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber, to load the workpiece, and the main vacuum chamber, where the sputtering deposition of the thin film coatings takes place. The workpiece is moved on a carriage travelling on a rail between the two chambers to the position of the magnetrons by step motors. The chambers are separated by a self-constructed rectangular gate controlled manually from outside the chamber. The chamber was sealed for vacuum use using glue and screws. Therefore, different types of glue were tested, not only for their ability to develop a uniform thin layer in the gap between the aluminum plates to seal the chamber for vacuum use, but also for low outgassing rates, which make them suitable for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the inner surface outgassing rate.
Therefore, the throughput outgassing rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid was able to reduce its outgassing rate by a factor of 6 compared with a non-machined sample surface of the same material. The reduction of the porous oxide layer on top of the aluminum surface by a pickling process with HNO3 acid, and its protection by producing another passive, non-porous oxide layer using an anodizing process, protect the surface for a longer time and minimize the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O) and the rest are N2, H2 and CO, so a liquid-nitrogen water vapour trap can enhance the chamber pump-down process. As a result it was possible to construct a chamber that can be pumped down, using a turbo molecular pump (450 L/s), to the range of 1×10⁻⁶ mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m². This is a good base pressure for the process of sputtering deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti, Al, Si)N system could be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN. Secondary neutral mass spectrometry (SNMS) of the test samples shows that complete diffusion between the different deposited thin film coating layers in each sample takes place, even at low substrate deposition temperature.
The high magnetic flux of the unbalanced magnetrons and the high sputtering power were able to produce a high ion-to-atom flux, which gives high mobility to the coated atoms. The interaction between the high mobility of the coated atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns for this system show that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN, and another, unknown amorphous phase, which can be SiNx, AlN, or a combination of Ti-Al-Si-N. As a result we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin films using the constructed vacuum chamber.
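The quoted base pressure follows from the standard throughput balance p = Q/S, where the gas load Q is the specific outgassing rate times the inner surface area. A minimal sketch with the chamber's 1.6 m² surface and 450 L/s pump; the specific outgassing rate is an assumed, illustrative order of magnitude, not a value measured in this work:

```python
def base_pressure(q_specific, area_m2, pump_speed_l_s):
    """Ultimate pressure from the throughput balance p = Q / S,
    with total outgassing load Q = q_specific * A."""
    return q_specific * area_m2 / pump_speed_l_s

# Assumed specific outgassing rate for treated aluminium,
# in mbar*L / (s*m^2) -- illustrative only.
q = 3e-4
p = base_pressure(q, area_m2=1.6, pump_speed_l_s=450.0)  # ~1e-6 mbar
```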

Typically software engineers implement their software according to the design of the software structure. Relations between classes and interfaces, such as method-call relations and inheritance relations, are essential parts of a software structure. Accordingly, analyzing several types of relations will benefit the static analysis of the software structure. The tasks of this analysis include, but are not limited to: understanding of (legacy) software, checking guidelines, improving product lines, finding structure, or re-engineering of existing software. Graphs with multi-type edges are a possible representation for these relations, considering them as edges, while nodes represent the classes and interfaces of the software. This multi-type edge graph can then be mapped to visualizations. However, the visualizations have to deal with the multiplicity of relation types and with scalability, and at the same time they should enable software engineers to recognize visual patterns.
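A multi-type edge graph of this kind can be sketched as a small typed-edge container, where each per-type edge set corresponds to one tile of a small-multiples view; the class names and edge types below are illustrative:

```python
from collections import defaultdict

class MultiTypeGraph:
    """Graph whose edges carry a type label (e.g. 'calls', 'inherits');
    nodes stand for classes and interfaces of a software system."""
    def __init__(self):
        self.edges = defaultdict(set)   # edge type -> set of (src, dst)

    def add_edge(self, src, dst, kind):
        self.edges[kind].add((src, dst))

    def layer(self, kind):
        """All edges of one type -- one tile in a small-multiples view."""
        return sorted(self.edges[kind])

g = MultiTypeGraph()
g.add_edge("ListImpl", "List", "inherits")
g.add_edge("Client", "ListImpl", "calls")
g.add_edge("Client", "List", "calls")
```

Keeping one edge set per relation type makes it cheap to render, filter, or lay out each relation independently.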
To advance the usage of visualizations for analyzing the static structure of software systems, I tracked different development phases of the interactive multi-matrix visualization (IMMV), concluding with an extended user study. In the extended user study, visual structures found with IMMV and with PNLV were determined and classified systematically into four categories: high degree, within-package edges, cross-package edges, and no edges. In addition to the structures found with these handy tools, other structures that are interesting for software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended approach for graph layout was presented that improves the quality of the decomposition and the drawing of directed graphs according to their topology, based on rigorous definitions. The extension involves describing and analyzing the algorithms for decomposition and drawing in detail, giving polynomial time and space complexity. I then handled visualizing graphs with multi-type edges using small multiples, where each tile is dedicated to one edge type, utilizing the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software. Finally, I applied this approach to four software systems to show its usefulness.

In this thesis, we have dealt with two modeling approaches to credit risk, namely the structural (firm value) approach and the reduced form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process at a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. Therefore, on the one hand it is possible to obtain closed form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed form solutions for the firm's corporate securities, such as the equity value, debt value and total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed form solutions for corporate securities. As a result, we have chosen a jump-diffusion process with double exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced form models, we have assumed that the default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the ones in Schönbucher (2005).
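The firm-value dynamics described above can be sketched as a simple Euler-type simulation of a jump-diffusion with double-exponentially distributed jump heights (a Kou-type model); all parameter values below are illustrative, not taken from the thesis:

```python
import math
import random

def kou_path(v0, mu, sigma, lam, p_up, eta_up, eta_dn, T, n, rng):
    """One discretized path of a jump-diffusion for the firm value with
    double-exponentially distributed jump heights. At most one jump per
    time step is allowed in this simple scheme."""
    dt = T / n
    x = math.log(v0)
    path = [v0]
    for _ in range(n):
        # diffusion part of the log firm value
        x += (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # jump part: Poisson arrival approximated by a Bernoulli trial
        if rng.random() < lam * dt:
            if rng.random() < p_up:
                x += rng.expovariate(eta_up)   # upward jump
            else:
                x -= rng.expovariate(eta_dn)   # downward jump
        path.append(math.exp(x))
    return path

rng = random.Random(42)
path = kou_path(100.0, 0.05, 0.2, 1.0, 0.4, 10.0, 5.0, 1.0, 252, rng)
```

In the structural approach, default along such a path would be declared at the first time the firm value crosses the given boundary from above.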

In 2006 Jeffrey Achter proved that the distribution of divisor class groups of degree 0 of function fields with a fixed genus and the distribution of eigenspaces in symplectic similitude groups are closely related to each other. Gunter Malle proposed that there should be a similar correspondence between the distribution of class groups of number fields and the distribution of eigenspaces in certain matrix groups. Motivated by these results and suggestions, we study the distribution of eigenspaces corresponding to the eigenvalue one in some special subgroups of the general linear group over factor rings of rings of integers of number fields and derive some conjectural statements about the distribution of \(p\)-parts of class groups of number fields over a base field \(K_{0}\). Our main interest lies in the case that \(K_{0}\) contains the \(p\)th roots of unity, because in this situation the \(p\)-parts of class groups seem to behave differently than predicted by the popular conjectures of Henri Cohen and Jacques Martinet. In 2010, based on computational data, Malle succeeded in formulating a conjecture in the spirit of Cohen and Martinet for this case. Here, using our investigations of the distribution in matrix groups, we generalize the conjecture of Malle to a more abstract level and establish a theoretical backup for these statements.

Acidic zeolites like H-Y, H-ZSM-5, H-MCM-22 and H-MOR were found to be selective adsorbents for the removal of thiophene from toluene or n-heptane as solvent. The competitive adsorption of toluene is found to influence the adsorption capacity for thiophene and is more predominant when high-alumina zeolites are used as adsorbents. This behaviour is also reflected by the results of the adsorption of thiophene on H-ZSM-5 zeolites with varied nSi/nAl ratios (viz. 13, 19 and 36) from toluene and n-heptane as solvents, respectively. UV-Vis spectroscopic results show that the oligomerization of thiophene leads to the formation of dimers and trimers on these zeolites. The oligomerization in acid zeolites is regarded as dependent on the geometry of the pore system of the zeolites. Sulphur-containing compounds with more than one ring, viz. benzothiophene, which are also present in substantial amounts in certain hydrocarbon fractions, are not adsorbed on H-ZSM-5 zeolites. This is obvious, as the diameter of the pore aperture of zeolite H-ZSM-5 is smaller than the molecular size of benzothiophene. Metal ion-exchanged FAU-type zeolites are found to be promising adsorbents for the removal of sulphur-containing compounds from model solutions. The introduction of Cu+, Ni2+, Ce3+, La3+ and Y3+ ions into zeolite Na+-Y by aqueous ion-exchange substantially improves the adsorption capacity for thiophene from toluene or n-heptane as solvent. More than the absolute content of Cu+ ions, the presence of Cu+ ions at the sites exposed to the supercages is believed to influence the adsorption of thiophene on Cu+-Y zeolite. It was shown experimentally for the case of Cu+-Y and Ce3+-Y that the supercages present in the FAU zeolite allow access for bulkier sulphur-containing compounds (viz. benzothiophene, dibenzothiophene and dimethyl dibenzothiophene). These bulkier compounds compete with thiophene and are preferentially adsorbed on Cu+-Y zeolite.
IR spectroscopic results revealed that the adsorption of thiophene on Na+-Y, Cu+-Y and Ni2+-Y is primarily a result of the interaction of thiophene via pi-complexation between the C=C double bond (of thiophene) and the metal ions (in the zeolite framework). A different mode of interaction of thiophene with Ce3+, La3+ and Y3+ metal ions was observed in the IR spectra of thiophene adsorbed on Ce3+-Y, La3+-Y and Y3+-Y zeolites, respectively. On these adsorbents, thiophene is believed to interact via a lone electron pair of the sulphur atom with the metal ions present in the adsorbent (M-S interaction). The experimental results show that there is a large difference in the thiophene adsorption capacities of pi-complexation adsorbents (like Cu+-Y, Ni2+-Y) between the model solution with toluene as solvent and the model solution with n-heptane as solvent. The lower capacity of these zeolites for the adsorption of thiophene from toluene than from n-heptane as solvent is a clear indication that toluene competes by interacting with the adsorbent in a way similar to thiophene. The difference in thiophene adsorption capacities is very low in the case of the adsorbents Ce3+-Y, La3+-Y and Y3+-Y, which are believed to interact with thiophene predominantly by a direct M3+-S bond (thiophene interacting with the metal ion via a lone pair of electrons). TG-DTA analysis was used to study the regeneration behaviour of the adsorbents. Acid zeolites can be regenerated by simply heating at 400 °C in a flow of nitrogen, whereas on the metal ion-exchanged zeolites thiophene is chemically adsorbed on the metal ion, and it is not possible to regenerate them by heating under an inert gas flow alone. The only way to regenerate these adsorbents is to burn off the adsorbate, which eventually brings about an undesired emission of SOx.
The exothermic peaks appearing at different temperatures in the heat flow profiles of Cu+-Y, Ce3+-Y, La3+-Y and Y3+-Y also indicate that two different types of interaction are present, as revealed by IR spectroscopy. One major difficulty in reducing the sulphur content in fuels to values below 10 ppm is the inability to remove alkyl dibenzothiophenes, viz. 4,6-dimethyl dibenzothiophene, by the existing catalytic hydrodesulphurization technique. Cu+-Y and Ce3+-Y were found in the present study to adsorb this compound from toluene to a certain extent. To meet the stringent regulations on sulphur content, selective adsorption by zeolites could be a valuable post-purification method after the catalytic hydrodesulphurization unit.

Dealing with information in modern times requires users to cope with hundreds of thousands of documents, such as articles, emails, Web pages, or news feeds.
Above all information sources, the World Wide Web presents information seekers with great challenges.
It offers more text in natural language than one is capable of reading.
The key idea of this research is to provide users with adaptable filtering techniques, supporting them in filtering out the specific information items they need.
Its realization focuses on developing an Information Extraction system,
which adapts to a domain of concern, by interpreting the contained formalized knowledge.
Utilizing the Resource Description Framework (RDF), which is the Semantic Web's formal language for exchanging information,
allows extending information extractors to incorporate the given domain knowledge.
Because of this, formal information items from the RDF source can be recognized in the text.
The application of RDF allows a further investigation of operations on recognized information items, such as disambiguating them and rating their relevance.
Switching between different RDF sources allows changing the application scope of the Information Extraction system from one domain of concern to another.
An RDF-based Information Extraction system can be triggered to extract specific kinds of information entities by providing it with formal RDF queries in terms of the SPARQL query language.
Representing extracted information in RDF extends the coverage of the Semantic Web's information degree and provides a formal view on a text from the perspective of the RDF source.
In detail, this work presents the extension of existing Information Extraction approaches by incorporating the graph-based nature of RDF.
Hereby, the pre-processing of RDF sources allows extracting statistical information models dedicated to supporting specific information extractors.
These information extractors refine standard extraction tasks, such as the Named Entity Recognition, by using the information provided by the pre-processed models.
The post-processing of extracted information items enables representing these results in RDF format or lists, which can now be ranked or filtered by relevance.
Post-processing also comprises the enrichment of originating natural language text sources with extracted information items by using annotations in RDFa format.
The results of this research extend the state-of-the-art of the Semantic Web.
This work contributes approaches for computing customizable and adaptable RDF views on the natural language content of Web pages.
Finally, due to the formal nature of RDF, machines can interpret these views allowing developers to process the contained information in a variety of applications.
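A minimal sketch of the recognition idea: entity labels with their URIs, as they could be pre-extracted from an RDF source, drive a simple dictionary-based Named Entity Recognition pass. The labels and URIs below are invented for illustration and do not come from a real dataset:

```python
# Gazetteer of entity labels mapped to URIs, as it could be extracted
# from an RDF source beforehand (illustrative entries).
LABELS = {
    "Berlin": "http://example.org/resource/Berlin",
    "Alan Turing": "http://example.org/resource/Alan_Turing",
}

def recognize(text, labels):
    """Return (surface form, URI, offset) for each known label found
    in the text, ordered by position."""
    hits = []
    for surface, uri in labels.items():
        start = text.find(surface)
        if start != -1:
            hits.append((surface, uri, start))
    return sorted(hits, key=lambda hit: hit[2])

hits = recognize("Alan Turing never lectured in Berlin.", LABELS)
```

Each hit links a text span to a formal RDF resource, which is the kind of output that can then be disambiguated, ranked, or serialized as RDFa annotations.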

The main theme of this thesis is about Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs like bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining set for maximal (maximum) clique
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notations used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which enables one to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
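The notion of a defining set for a coloring can be made concrete with a small backtracking check: a partial k-coloring is a defining set precisely when it extends to exactly one proper k-coloring. A sketch, with an illustrative graph and parameters:

```python
def extensions(adj, k, partial):
    """Count the proper k-colorings of a graph that extend a partial
    coloring. A partial coloring is a defining set of a k-coloring
    iff exactly one proper extension exists."""
    free = [v for v in sorted(adj) if v not in partial]

    def count(i, coloring):
        if i == len(free):
            return 1
        v, total = free[i], 0
        used = {coloring[u] for u in adj[v] if u in coloring}
        for c in range(k):
            if c not in used:
                coloring[v] = c
                total += count(i + 1, coloring)
                del coloring[v]
        return total

    return count(0, dict(partial))

# 4-cycle 0-1-2-3-0 with k = 2: fixing a single vertex already
# pins down the whole coloring, so {0: 0} is a defining set.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
n_ext = extensions(C4, 2, {0: 0})
```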

Hydrogels are covalently or ionically cross-linked, hydrophilic three-dimensional
polymer networks, which exist in our bodies in a biological gel form such as the vitreous
humour that fills the interior of the eyes. Poly(N-isopropylacrylamide) (poly(NIPAAm))
hydrogels are attracting increasing interest in biomedical applications because, among other reasons, they
exhibit a well-defined lower critical solution temperature (LCST) in water, around 31–34°C,
which is close to the body temperature. This is considered to be of great interest in drug
delivery, cell encapsulation, and tissue engineering applications. In this work, the
poly(NIPAAm) hydrogel is synthesized by free radical polymerization. Hydrogel properties
and the dimensional changes accompanied with the volume phase transition of the
thermosensitive poly(NIPAAm) hydrogel were investigated in terms of Raman spectra,
swelling ratio, and hydration. The thermal swelling/deswelling changes that occur at different equilibrium temperatures and in different solutions (phenol, ethanol, propanol, and sodium chloride) were investigated based on the Raman spectra. In addition, Raman spectroscopy has
been employed to evaluate the diffusion aspects of bovine serum albumin (BSA) and phenol
through the poly(NIPAAm) network. The determination of the mutual diffusion coefficient,
\(D_{mut}\), for the hydrogel/solvent system was achieved successfully using Raman spectroscopy at
different solute concentrations. Moreover, the mechanical properties of the hydrogel, which
were investigated by uniaxial compression tests, were used to characterize the hydrogel and to
determine the collective diffusion coefficient through the hydrogel. The solute release coupled
with shrinking of the hydrogel particles was modelled with a two-dimensional diffusion model with moving boundary conditions. The influence of the variable diffusion coefficient is observed and leads to a better description of the kinetic curve in the case of significant deformation around the LCST. Good agreement between experimental and calculated data was obtained.
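The determination of a diffusion coefficient from release data can be sketched with the classic early-time Fickian approximation for a slab, M_t/M_inf ≈ (4/L)·sqrt(D·t/π). This simplified one-dimensional relation is for illustration only; it is not the moving-boundary model of this work, and the numbers are assumed magnitudes:

```python
import math

def early_time_fraction(D, L, t):
    """Early-time Fickian approximation for a slab of thickness L:
    M_t / M_inf ~ (4 / L) * sqrt(D * t / pi), valid for fractions < ~0.6."""
    return (4.0 / L) * math.sqrt(D * t / math.pi)

def d_from_slope(slope, L):
    """Invert the relation: the slope of M_t/M_inf vs sqrt(t) gives D."""
    return math.pi * (slope * L / 4.0) ** 2

D_true = 1e-10   # m^2/s, an assumed magnitude for a solute in a hydrogel
L = 1e-3         # slab thickness in m (assumed)
slope = early_time_fraction(D_true, L, 1.0)   # fraction at t = 1 s
D_est = d_from_slope(slope, L)
```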

An Optical Character Recognition (OCR) system plays an important role in the digitization of data acquired as images from a variety of sources. Although the area is very well explored for Latin languages, some of the languages based on the Arabic cursive script are not yet explored. This is due to many factors, most importantly the unavailability of proper data sets and the complexities posed by cursive scripts. The Pashto language is one such language, which needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and contains a huge written collection. These written materials present contents of simple to complex nature, and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic ones, w.r.t. the cursive script, and (ii) specific ones, w.r.t. the Pashto language. Generic complexities are cursiveness, context dependency, and breaker character anomalies, as well as space anomalies. Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed contents. The dataset is manually transcribed and has the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, and connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNN). The proposed Pashto text recognition is based on the Long Short-Term Memory (LSTM) network and realizes a character recognition rate of 90.78% on Pashto real hand-scribed images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
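A character recognition rate like the one reported above is commonly computed as one minus the normalized Levenshtein distance between ground truth and prediction; a minimal sketch with illustrative sample strings (the exact evaluation protocol of the thesis may differ):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def char_recognition_rate(truth, predicted):
    """100 * (1 - normalized edit distance), in percent."""
    return 100.0 * (1.0 - edit_distance(truth, predicted) / max(len(truth), 1))

# One substituted character in a six-character word:
rate = char_recognition_rate("pashto", "pashtu")
```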
The impact of this thesis is not specific to the Pashto language; it also benefits other cursive languages like Arabic, Urdu, and Persian. The main reason is that the Pashto character set is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight into, and proposes solutions for, almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among roughly 70 languages that mainly use the Arabic script, is deeply analyzed. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how society may benefit from it. The main advantages are bringing historical and almost vanished documents back to life and creating opportunities to explore, analyze, translate, share, and understand Pashto content globally. The second is the advancement and exploration of the technical aspects: this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding its character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.

This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that each type of information is processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets, rather a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of document digitization method (scanned, hyper-spectral imaging, and camera captured) used.
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, with fewer than half of the detections being false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on the document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
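The recall and precision figures reported throughout this thesis follow the standard detection definitions; a minimal sketch with hypothetical counts (not taken from the thesis' evaluation):

```python
def precision_recall(tp, fp, fn):
    """Precision = fraction of detections that are correct;
    recall = fraction of ground-truth objects that were found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical example: 8 correct detections, 2 false positives,
# 2 missed objects.
p, r = precision_recall(tp=8, fp=2, fn=2)
```

So a recall of 83% and precision of 73% means that 83% of the stamps were found, and 73% of all reported detections were actual stamps.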
The third contribution in scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components, with text often overlapping the graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect text touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 - 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds like printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is substantial when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled datasets of camera-captured document images. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets, in English and Russian, shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. The evaluation results show that existing OCRs, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed, based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), that outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. These include automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.

Towards A Non-tracking Web
(2016)

Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it necessitates the publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.

This research work focuses on the generation of a high-resolution digital surface model featuring complex urban surface characteristics in order to enrich the database for runoff simulations of urban drainage systems. The discussion of global climate change and its possible consequences has taken centre stage over the last decade. Global climate change has triggered more erratic weather patterns by causing severe and unpredictable rainfall events in many parts of the world. The incidence of more frequent rainfall has led to increased flooding in urban areas. The increased property values of urban structures and threats to people's personal safety have hastened the demand for a detailed urban drainage simulation model for accurate flood prediction. Although the 2D hydraulic modelling approach has been used in rural floodplains for quite a long time, its use in urban floodplains is still in its infancy. The reason is mainly the lack of a high-resolution topographic model properly describing urban surface characteristics.
High-resolution surface data describing the hydrologic and hydraulic properties of complex urban areas are the prerequisite to more accurately describing and simulating flood water movement and thereby taking adequate measures against urban flooding. Airborne LiDAR (Light Detection and Ranging) is an efficient way of generating a high-resolution Digital Surface Model (DSM) of any study area. Processing the high-density and large volume of unstructured LiDAR data towards fine-resolution spatial databases is a difficult and time-consuming task when relying on human intervention alone. The application of robust algorithms for processing this massive volume of data can significantly reduce the data processing time and thereby increase the degree of automation as well as the accuracy.
This research work presents a number of techniques for processing, filtering, and classifying LiDAR point data in order to achieve a higher degree of automation and accuracy in generating a high-resolution urban surface model. It also describes the use of ancillary datasets such as aerial images and topographic maps in combination with LiDAR data for feature detection and surface characterization. The integration of various data sources facilitates detailed modelling of street networks and accurate detection of various urban surface types (e.g., grasslands, bare soil, and impervious surfaces).
While the accurate characterization of various surface types contributes to better modelling of rainfall-runoff processes, the LiDAR-derived fine-resolution DSM serves as input to 2D hydraulic models and is capable of simulating surface flooding scenarios in cases where the sewer systems are surcharged.
Thus, this research work develops high-resolution spatial databases aiming at improving the accuracy of hydrologic and hydraulic databases of urban drainage systems. These databases are then given as input to standard flood simulation software in order to: 1) test the suitability of the databases for running the simulation; 2) assess the performance of the hydraulic capacity of urban drainage systems; and 3) predict and visualize surface flooding scenarios in order to take the necessary flood protection measures.

The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adaptive and predictive abilities, these methods are more robust and easier to train than traditional methods such as word lists and rule-based models. They improve the output of recognition systems and make it more accessible to users for browsing and reading. These techniques are required especially for historical books, which might otherwise take years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods that combine high performance with high accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions, used for alignment purposes, is suggested. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is the use of LSTM networks for normalizing historical orthography; a size-normalization alignment is implemented to equalize the lengths of the strings before the training phase. Using LSTM networks as a language model to improve recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below.
Context-dependent confusion rules are a new technique to build an error model for Optical Character Recognition (OCR) correction. The rules are extracted from the OCR confusions that appear in the recognition outputs and are translated into edit operations, e.g., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted in the form of rules with respect to the context of the incorrect string, in order to build an error model using WFSTs. The context-dependent rules assist the language model in finding the best candidate corrections. They avoid the calculations that occur in searching the language model, and they also enable the language model to correct incorrect words by using context-dependent confusion rules. The context-dependent error model is applied to the University of Washington (UWIII) dataset and to the Nastaleeq-script Urdu dataset. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%, performing better than the state-of-the-art single rule-based approach, which yields an error rate of 1.0%.
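The extraction of edit operations with the Levenshtein algorithm can be sketched as follows. This is a simplified illustration (single characters, no context window, hypothetical helper name), not the thesis' WFST-based implementation:

```python
def edit_operations(ocr, truth):
    """Fill a Levenshtein matrix and backtrack it to list the edit
    operations (substitutions, insertions, deletions) that turn the
    OCR output into the ground truth."""
    m, n = len(ocr), len(truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ocr[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           d[i][j] == d[i - 1][j - 1] + (0 if ocr[i - 1] == truth[j - 1] else 1):
            if ocr[i - 1] != truth[j - 1]:
                ops.append(("sub", ocr[i - 1], truth[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ocr[i - 1], ""))
            i -= 1
        else:
            ops.append(("ins", "", truth[j - 1]))
            j -= 1
    return list(reversed(ops))

# Example confusion: the OCR read "top" as "t0p".
ops = edit_operations("t0p", "top")  # one substitution: "0" -> "o"
```

Each such operation, together with its surrounding context, would then become one confusion rule in the WFST error model.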
This thesis describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment faces many challenges: first, the transcriptions might contain modifications and layout variations relative to the original book; second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, especially since line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows the alignment to respect the varieties of hyphenated words at the line breaks of the OCR documents. In this work, several approaches are implemented for the alignment, such as text-segment, page-wise, and book-wise approaches. The approaches are evaluated on a dataset of German calligraphic (Fraktur) script historical documents from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach yields an error rate of 2.33% without a hyphenation model and an error rate of 2.0% with a hyphenation model. Dehyphenation methods are presented to remove the hyphens from the transcription. They provide the transcription in a readable and reflowable format to be used for alignment purposes. We consider the task as a classification problem and classify the hyphens in the given patterns as hyphens for line breaks, combined words, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Trees classifier performs best, returning an accuracy of 98% on the UWIII dataset and 97% on Fraktur script.
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German of the 14th-16th centuries to modern forms in New High German, and applied to the Luther bible. It performs better than the rule-based word-list approaches and provides a transcription for various purposes such as part-of-speech tagging and n-grams. In addition, two new techniques are presented for aligning the OCR results and normalizing string sizes by adding Character-Epsilons or Appending-Epsilons; they allow deletion and insertion at the appropriate positions in the strings. In normalizing historical wordforms to modern wordforms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, the LSTM returns 88% and the combined rule-based method 76%. In normalizing modern wordforms to historical wordforms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unknown data.
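The size-normalization step can be pictured as padding the shorter string with epsilon symbols so that source and target have equal length before training; a hypothetical sketch of the Appending-Epsilons variant (the symbol `ε`, the helper name, and the example wordform pair are illustrative, not the thesis' exact scheme):

```python
EPS = "ε"  # placeholder symbol, assumed not to occur in the alphabet

def append_epsilons(src, tgt):
    """Pad the shorter of the two strings with epsilon symbols at the
    end so both have equal length (Appending-Epsilons variant)."""
    n = max(len(src), len(tgt))
    return src.ljust(n, EPS), tgt.ljust(n, EPS)

# Illustrative Early New High German -> modern pair of unequal length:
pair = append_epsilons("thun", "tun")  # ("thun", "tunε")
```

The Character-Epsilons variant would instead insert the epsilons at character positions determined by an alignment, which lets the network learn deletions and insertions mid-word.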
In this thesis, a deep investigation has been carried out into constructing high-performance language models for improving recognition systems. A new method to construct a language model using LSTM is designed to correct OCR results and is applied to the UWIII and Urdu scripts. The LSTM approach outperforms the state of the art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%. On the Nastaleeq-script Urdu dataset, the LSTM reduces the error rate from 6.9% to 1.58%.
Finally, the integration of multiple recognition outputs can give higher performance than a single recognition system. Therefore, a new method for combining the results of OCR systems using WFSTs and LSTM is explored. It uses multiple OCR outputs and votes for the best output, improving the OCR results and performing better than the ISRI tool and Pairwise of Multiple Sequence alignment. The purpose is to provide a correct transcription that can be used for digitizing books, linguistic purposes, n-grams, and part-of-speech tagging. The method consists of two alignment steps. First, two recognition systems are aligned using WFSTs. The transducers are designed to be flexible and compatible with the different symbols at line and page breaks, in order to avoid segmentation and misrecognition errors. The LSTM model is then used to vote for the best candidate correction of the two systems and to improve the incorrect tokens produced during the first alignment. The approaches are evaluated on OCR outputs of the English UWIII and the historical German Fraktur datasets, obtained from state-of-the-art OCR systems. The experiments show that the error rate of ISRI-Voting is 1.45%, the error rate of Pairwise of Multiple Sequence alignment is 1.32%, the error rate of the Line-to-Page alignment is 1.26%, and the LSTM approach achieves the best performance with an error rate of 0.40%.
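The voting idea can be sketched as a position-wise majority vote over the aligned outputs; a simplified illustration that assumes the hypotheses are already aligned to equal length (in the thesis this alignment is done with WFSTs) and uses a hypothetical first-system tie-breaking rule:

```python
from collections import Counter

def vote(outputs):
    """Majority vote per character position over equal-length, aligned
    OCR outputs; ties resolve to the earliest system's hypothesis."""
    assert len({len(o) for o in outputs}) == 1, "outputs must be aligned"
    result = []
    for chars in zip(*outputs):
        counts = Counter(chars)
        best = max(counts.values())
        # keep the first system's character among the most frequent ones
        result.append(next(c for c in chars if counts[c] == best))
    return "".join(result)

# Three hypothetical aligned OCR hypotheses for the word "test":
combined = vote(["te5t", "test", "tesl"])  # "test"
```

In the thesis, an LSTM language model takes the place of this naive tie-breaking, choosing the best candidate among the systems' disagreements.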
The purpose of this thesis is to contribute methods providing correct transcriptions corresponding to the original book. This is considered the first step towards an accurate and more effective use of the documents in digital libraries.

Numerical Algorithms in Algebraic Geometry with Implementation in Computer Algebra System SINGULAR
(2011)

Polynomial systems arise in many applications: robotics, kinematics, chemical kinetics, computer vision, truss design, geometric modeling, and many others. Many polynomial systems have solution sets, called algebraic varieties, consisting of several irreducible components. A fundamental problem of numerical algebraic geometry is to decompose such an algebraic variety into its irreducible components. Witness point sets are the natural numerical data structure for encoding irreducible algebraic varieties. Sommese, Verschelde, and Wampler represented the irreducible algebraic decomposition of an affine algebraic variety \(X\) as a union of finite disjoint sets \(\cup_{i=0}^{d}W_i=\cup_{i=0}^{d}\left(\cup_{j=1}^{d_i}W_{ij}\right)\), called the numerical irreducible decomposition. The \(W_i\) correspond to the pure \(i\)-dimensional components, and the \(W_{ij}\) represent the \(i\)-dimensional irreducible components. The numerical irreducible decomposition is implemented in BERTINI.
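As a hypothetical illustration (not an example from the thesis), the variety \(X = V(xz,\, yz) \subset \mathbb{C}^3\) is the union of a plane and a line, giving:

```latex
% X decomposes into a 2-dimensional and a 1-dimensional irreducible component:
X = V(xz,\, yz) = \{z = 0\} \cup \{x = y = 0\},
\qquad
W = W_1 \cup W_2, \quad W_1 = W_{11}, \quad W_2 = W_{21},
% where W_{21} is a single witness point obtained by cutting the plane
% {z = 0} with a generic line, and W_{11} a single witness point obtained
% by cutting the line {x = y = 0} with a generic plane (both components
% have degree one).
```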
We modify this concept, partially using Gröbner bases, triangular sets, local dimension, and the so-called zero sum relation. In the second chapter we present the corresponding algorithms and their implementations in SINGULAR. We give examples and timings, which show that the modified algorithms are more efficient if the number of variables is not too large; for a large number of variables BERTINI is more efficient.
Leykin presented an algorithm to compute the embedded components of an algebraic variety based on the concept of the deflation of an algebraic variety. Building on the modified algorithm mentioned above, we present in the third chapter an algorithm, and its implementation in SINGULAR, to compute the embedded components.
The irreducible decomposition of algebraic varieties allows us to formulate some numerical algebraic algorithms in the fourth chapter.
In the last chapter we present two SINGULAR libraries. The first library computes the numerical irreducible decomposition and the embedded components of an algebraic variety. The second library contains the procedures of the algorithms of the last chapter: testing inclusion and equality of two algebraic varieties, computing the degree of a pure \(i\)-dimensional component, and computing the local dimension.

The Context and Its Importance: In safety and reliability analysis, the information generated by Minimal Cut Set (MCS) analysis is large.
The top-level event (TLE), the root of the fault tree (FT), represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree qualitatively, and quantitatively when accompanied by quantitative measures.
The information reveals the bottlenecks in the fault tree design, leading to the identification of weaknesses of the system being examined.
Safety analysis (which contains the MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans, causing injuries or even death during system use.
Minimal Cut Set (MCS) analysis is performed using computers and generates a lot of information.
This phase is called MCS analysis I in this thesis.
The information is then analyzed by analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
The goal of my thesis was to develop interactive visualizations supporting MCS analysis II of a single fault tree (FT).
The Methodology: As safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field, and no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities,
I conducted multiple studies using different methods and data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I, with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people, using empirical strategies.
The Spiral tool supports analysts with different kinds of vision, i.e., full vision and the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) of the requirements obtained from the triangulation and fills 37 out of 39 (95%) of the gaps. Its usability was rated high (better than their best currently used tools) by the users of the safety and reliability tools RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool, and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding its degree of support for MCS analysis II was rated higher than that of the FaultTree+ tool. The time spent discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all equal order) was less than a minute, while achieving 99.5% accuracy. The scalability of the Spiral visualization was above 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issues. In addition, the tool:
- provides the 3D model view of the system being analyzed;
- allows changing the coloring of MCSs according to the color vision of the user;
- supports selecting a BE (i.e., multi-selection of its MCSs), so that the BE's NoO and quality can be observed;
- provides two interaction speeds for panning and zooming in the MCS, BE, and model views;
- provides an MCS, a BE, and a physical tab to support analysis starting from the MCSs, the BEs, or the physical parts.
It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and it provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I evaluated all visualizations and the interaction widget both objectively and subjectively, and finally also evaluated the final Spiral visualization tool, both objectively and subjectively, regarding its perceived quality and its degree of support for MCS analysis II.

This research explores the development of web-based reference software for the characterisation of surface roughness from two-dimensional surface data. Reference software used for the verification of surface characteristics makes the evaluation methods easier for clients to apply. The algorithms used in this software are based on international ISO standards. Most software used in industrial measuring instruments may show variations in the calculated parameters due to numerical differences in the calculation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering, and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis process is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile and characterise the surface roughness.
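As an illustration of the parameter-calculation step, the ISO 4287 amplitude parameters Ra (arithmetic mean deviation) and Rq (root mean square deviation) can be computed from a sampled profile. This is a minimal sketch that uses simple mean subtraction in place of the standard's full fitting and filtering chain:

```python
def roughness_params(profile):
    """Compute Ra (arithmetic mean deviation) and Rq (root mean square
    deviation) of a sampled profile after removing the mean line.
    Simplification: the mean line is taken as the arithmetic mean of
    the samples, not a fitted/filtered reference line."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [z - mean for z in profile]          # deviations from mean line
    ra = sum(abs(z) for z in dev) / n          # Ra = (1/n) * sum |z_i|
    rq = (sum(z * z for z in dev) / n) ** 0.5  # Rq = sqrt((1/n) * sum z_i^2)
    return ra, rq
```

Such closed-form definitions are what make a reference implementation useful: any deviation in an instrument's computed Ra or Rq can be traced to its fitting and filtering stages rather than to the parameter formulas themselves.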

Tropical intersection theory
(2010)

This thesis consists of five chapters: Chapter 1 contains the basics of the theory and is essential for the rest of the thesis. Chapters 2-5 are to a large extent independent of each other and can be read separately.
- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry, such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles, and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan, then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular, we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphism.
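In its simplest form, for plane tropical curves, the tropical Bézout theorem mentioned in Chapter 1 states that the stable intersection of two curves of degrees d and e in R^2 consists of d·e points counted with multiplicity; in the notation of intersection products,

```latex
\deg(C \cdot D) \;=\; \deg(C) \cdot \deg(D) \;=\; d \cdot e .
```

(The thesis proves a more general statement for cycles in R^n; the plane-curve form here is only an illustration.)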

Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. The HCI domain, in turn, is concerned with the usability of the designed visualization from the human perspective. Designing a visualization system therefore requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the inner behavior of complex data sets in less time.
Graphs are widely used data structures to represent the relations between data elements in complex applications. Due to the diversity of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help visualize this rich data type. Some of these algorithms target large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users' perspective and should at the same time fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles leads to graph visualizations that help users understand and explore the complex behavior of critical systems.
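A force-directed (spring-embedder) method of the kind referenced above can be sketched in a few lines. This is a minimal Fruchterman-Reingold-style illustration with an invented toy graph and parameters; it is not a layout algorithm from the thesis.

```python
import numpy as np

def spring_layout(edges, n, iters=200, k=1.0, seed=0):
    """Minimal Fruchterman-Reingold-style spring embedder:
    all node pairs repel, edge endpoints attract, step size cools."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n, 2))
    for it in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]           # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        # repulsion ~ k^2 / d between every pair of nodes
        disp = ((k * k / dist**2)[:, :, None] * diff).sum(axis=1)
        for u, v in edges:                                 # attraction ~ d^2 / k
            d = pos[u] - pos[v]
            f = (np.linalg.norm(d) / k) * d
            disp[u] -= f
            disp[v] += f
        step = 0.1 * (1.0 - it / iters)                    # cooling schedule
        disp_len = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += step * disp / disp_len
    return pos

# small illustrative graph: a 4-cycle with one chord
pos = spring_layout([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], n=4)
```

Aesthetic criteria such as uniform edge lengths and node separation emerge from the balance of the two forces.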
In this thesis, we utilize graph visualization techniques to model the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users' perspective.
The core contribution of this thesis is a framework, called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes some of the safety aspects (e.g., CFT models) of embedded systems, but also helps engineers and experts in analyzing the system's safety-critical situations. For this, the framework provides a 2D-plus-3D environment in which the 2D part shows the graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part shows the system's 3D model. Both views are integrated smoothly within the 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted several studies with real end users as well as with general users. Results of the main study, which targeted the overall ESSAVis framework, show a high acceptance ratio as well as higher accuracy and better performance when the visual support of the framework is used.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users' perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising research direction in which visualization designers can benefit from the richness of 3D technologies for visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating the many challenges around designing visualizations for educational purposes.

The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of "virtual material design". New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparably fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability of belonging to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring that they belong to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature have at least one weakness in either achieving high volume fractions, producing non-overlapping fibers, or controlling the bending or the orientation distribution. This gap can be bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide the estimation of all parameters needed for fitting this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the analysis and modeling of fiber-reinforced materials.
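The first modelling step above, a random walk whose step directions follow a von Mises-Fisher distribution centred on the previous direction, can be sketched as follows. This is a simplified illustration, not the thesis' implementation: it uses the standard inverse-CDF sampler for the 3D (Fisher) case, the step length and concentration parameter are invented for the example, and the force-biased packing step is not reproduced.

```python
import numpy as np

def sample_vmf3(mu, kappa, rng):
    """Draw one direction from the von Mises-Fisher distribution on the
    2-sphere with mean direction mu and concentration kappa."""
    u = rng.uniform()
    # inverse-CDF sampling of w = cos(angle to mu), valid on S^2
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(max(0.0, 1.0 - w * w))
    v = np.array([r * np.cos(phi), r * np.sin(phi), w])  # relative to e_z
    # rotate e_z onto mu (Rodrigues' formula)
    ez = np.array([0.0, 0.0, 1.0])
    axis = np.cross(ez, mu)
    s = np.linalg.norm(axis)          # sin of the rotation angle
    if s < 1e-12:                     # mu is (anti)parallel to e_z
        return v if mu[2] > 0 else -v
    axis /= s
    c = np.dot(ez, mu)                # cos of the rotation angle
    return v * c + np.cross(axis, v) * s + axis * np.dot(axis, v) * (1 - c)

def random_walk_fiber(n_steps, kappa, step=1.0, seed=0):
    """Bent fiber as a random walk whose step directions follow a vMF
    distribution centred on the previous direction; larger kappa means
    straighter fibers."""
    rng = np.random.default_rng(seed)
    d = np.array([0.0, 0.0, 1.0])
    pts = [np.zeros(3)]
    for _ in range(n_steps):
        d = sample_vmf3(d, kappa, rng)
        pts.append(pts[-1] + step * d)
    return np.array(pts)

fiber = random_walk_fiber(n_steps=200, kappa=50.0)
```

The bending level is controlled by kappa; a second stage (not shown) would pack many such fibers without overlap.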

In DS-CDMA, spreading sequences are allocated to users to separate different links, namely the base station to user in the downlink and the user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis, the method of allocating spreading sequences based on users' channel condition is investigated in order to improve the performance of the downlink. Different methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through a mathematical model, and a proof-of-concept model using real-world channel measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
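The periodic correlation properties described above can be illustrated with a maximal-length sequence (m-sequence) generated by a linear feedback shift register. The polynomial and length below are illustrative choices, not the sequences studied in the thesis; in ±1 form, an m-sequence of period N has periodic autocorrelation N at zero shift and -1 at every other shift.

```python
def mseq(n_bits, taps, state):
    """Fibonacci LFSR producing a maximal-length (m-)sequence when the
    feedback taps correspond to a primitive polynomial over GF(2)."""
    out = []
    for _ in range(n_bits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def periodic_corr(a, b, shift):
    """Periodic (cyclic) correlation of two +-1 sequences at a given shift."""
    n = len(a)
    return sum(a[i] * b[(i + shift) % n] for i in range(n))

# degree-5 primitive feedback polynomial -> period 2^5 - 1 = 31
bits = mseq(31, taps=[5, 3], state=[1, 0, 0, 0, 0])
seq = [1 if b else -1 for b in bits]

peak = periodic_corr(seq, seq, 0)                               # = 31
sidelobes = [periodic_corr(seq, seq, k) for k in range(1, 31)]  # all = -1
```

The flat, minimal sidelobes are what makes such sequences attractive for frame synchronisation.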
In cryptography, secret keys are used to ensure confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the broadcast nature of the channel necessitates robust key management systems for the secure functioning of the network. Physical layer security is a novel method of profitably utilising the random and reciprocal variations of the wireless channel to extract a secret key. By measuring the characteristics of the wireless channel within its coherence time, reciprocal variations of the channel can be observed between a pair of nodes. Using these reciprocal characteristics of the channel, a common shared secret key is extracted between a pair of nodes. The process of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel variations are measured and quantised to obtain a preliminary key as a vector of bits (0, 1). Due to errors in measurement, quantisation, and additive Gaussian noise, disagreements exist between the bits of the preliminary keys. These errors are corrected using error detection and correction methods to obtain a synchronised key at both nodes. Further, by the method of secure hashing, the entropy of the key is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation. Instead of quantising the channel measurements directly, if their reciprocity is first enhanced and they are then quantised appropriately, the key generation process can be made efficient and fast. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. They are appropriately quantised by binary and adaptive quantisation. The entire process of key generation, from measuring the channel profile to obtaining a secure key, is then validated using real-world channel measurements. The performance evaluation is done by comparing the methods in terms of bit disagreement rate, key generation rate, test of randomness, robustness test, and eavesdropper test. An architecture, KeyBunch, for effectively deploying physical layer security in mobile and vehicular ad-hoc networks is also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
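The measurement-quantisation-disagreement part of the pipeline above can be sketched as follows. The Gaussian channel model, the noise level, and the median-threshold quantiser are illustrative assumptions, not the measurements or quantisers used in the thesis; information reconciliation and privacy amplification are omitted.

```python
import numpy as np

def quantise(samples):
    """Binary quantisation against the median, a simple stand-in for the
    binary/adaptive quantisers discussed in the thesis."""
    thr = np.median(samples)
    return (samples > thr).astype(int)

def bit_disagreement_rate(ka, kb):
    """Fraction of positions where the two preliminary keys differ."""
    return np.mean(ka != kb)

rng = np.random.default_rng(1)
h = rng.normal(size=1000)                  # reciprocal channel profile
alice = h + 0.1 * rng.normal(size=1000)    # independent measurement noise
bob = h + 0.1 * rng.normal(size=1000)

key_a = quantise(alice)
key_b = quantise(bob)
bdr = bit_disagreement_rate(key_a, key_b)  # small but nonzero
```

Reciprocity enhancement (e.g. filtering before quantisation) aims exactly at driving this disagreement rate down before reconciliation.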

The main aim of this work was to obtain an approximate solution of seismic traveltime tomography problems with the help of splines based on reproducing kernel Sobolev spaces. In order to be able to apply the spline approximation concept to surface wave as well as to body wave tomography problems, the spherical spline approximation concept was extended to the case where the domain of the function to be approximated is an arbitrary compact set in R^n and a finite number of discontinuity points is allowed. We present applications of this spline method to seismic surface wave as well as body wave tomography, and discuss the theoretical and numerical aspects of such applications. Moreover, we ran numerous numerical tests that justify the theoretical considerations.
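The spline approximation referred to above follows the standard reproducing-kernel pattern. As a hedged sketch (the notation is illustrative, not the thesis' exact formulation): given data functionals F_i (e.g. traveltime integrals) and values y_i, the spline is the minimum-norm interpolant in the Sobolev space H with reproducing kernel K,

```latex
s \;=\; \operatorname*{argmin}\bigl\{\, \|f\|_{H} \;:\; f \in H,\ \mathcal{F}_i f = y_i,\ i = 1,\dots,N \,\bigr\},
\qquad
s(x) \;=\; \sum_{j=1}^{N} a_j\, \bigl(\mathcal{F}_j K\bigr)(x),
```

where F_j acts on one variable of K and the coefficients a_j solve the linear system with entries F_i F_j K and right-hand side y_i.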

We investigate the long-term behaviour of diffusions on the non-negative real numbers under killing at some random time. Killing can occur at zero as well as in the interior of the state space. The diffusion follows a stochastic differential equation driven by a Brownian motion. The diffusions we are working with will almost surely be killed. In large parts of this thesis we only assume the drift coefficient to be continuous. Further, we suppose that zero is regular and that infinity is natural. We condition the diffusion on survival up to time t and let t tend to infinity looking for a limiting behaviour.
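A minimal simulation of such a killed diffusion can be sketched with the Euler-Maruyama scheme; the constant drift, time horizon, and step size below are illustrative assumptions, not the setting of the thesis. Conditioning on survival up to time t corresponds to restricting attention to the paths still alive at t.

```python
import numpy as np

def simulate_killed(b, x0, T, dt, n_paths, seed=0):
    """Euler-Maruyama for dX = b(X) dt + dW on the non-negative half-line,
    killing each path the first time it reaches zero (killing in the
    interior is omitted in this sketch)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    alive, killed = [], 0
    for _ in range(n_paths):
        x, dead = x0, False
        for _ in range(n_steps):
            x = x + b(x) * dt + np.sqrt(dt) * rng.normal()
            if x <= 0.0:
                dead = True
                break
        if dead:
            killed += 1
        else:
            alive.append(x)
    return killed / n_paths, np.array(alive)

# drift pushing towards the boundary: most paths are killed before T
frac_killed, survivors = simulate_killed(b=lambda x: -1.0, x0=1.0,
                                         T=5.0, dt=0.01, n_paths=200)
```

The empirical law of `survivors` approximates the conditional distribution given survival up to time T.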

An autoregressive ARCH model with possible exogenous variables is treated. We estimate the conditional volatility of the model by applying feedforward networks to the residuals and prove consistency and asymptotic normality for the estimates under conditions on the growth rate of the feedforward network complexity. Recurrent neural network estimates of GARCH and Value-at-Risk are studied. We prove consistency and asymptotic normality for the recurrent neural network ARMA estimator under conditions on the growth rate of the recurrent network complexity. We also overcome the estimation problem in discrete-time stochastic variance models by feedforward networks and the introduction of new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and Value-at-Risk. We tested these distributions, together with other new distributions on the GARCH family of models, against distributions commonly used in the financial market, such as the Normal Inverse Gaussian, normal, and Student's t distributions. As an application of the models, some German stocks are studied and the different approaches are compared, together with the most common method, a GARCH(1,1) fit.
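As a baseline for the models discussed above, a GARCH(1,1) process can be simulated directly from its defining recursion; the parameter values are illustrative, and Gaussian innovations are assumed (the thesis also considers heavier-tailed innovation distributions such as the Normal Inverse Gaussian).

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) process:
       sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
       eps_t = sqrt(sigma2_t) * z_t with z_t standard normal."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * rng.normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.normal()
    return eps, sigma2

eps, sigma2 = simulate_garch11(omega=0.05, alpha=0.1, beta=0.85, n=5000)
```

For a zero-mean normal model, a one-day 95% Value-at-Risk estimate at time t would be 1.645 * sqrt(sigma2[t]); the network-based estimators in the thesis replace this parametric recursion.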

A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple, non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets, because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and, thus, is a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees - either considering XML structure in isolation or combined with textual information -, mechanisms to support the selection of relevant information from XML trees and its organization into a suitable format for similarity calculation, and efficient algorithms for large-scale identification of similar, set-represented objects.
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
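The set-similarity core of such a join can be sketched as follows; the tokenised records and the threshold are invented for the illustration, and the naive all-pairs loop stands in for the optimized, filter-based algorithms the thesis develops.

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_join(records, threshold):
    """Naive self-join: report pairs of set-represented records whose
    Jaccard similarity reaches the threshold. (Practical systems prune
    candidate pairs, e.g. with prefix filtering, before verifying.)"""
    out = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            s = jaccard(records[i], records[j])
            if s >= threshold:
                out.append((i, j, s))
    return out

# token sets from hypothetical XML fragments describing authors
records = [
    {"name", "john", "smith", "univ", "kaiserslautern"},
    {"name", "jon", "smith", "univ", "kaiserslautern"},
    {"name", "maria", "garcia", "univ", "madrid"},
]
pairs = similarity_join(records, threshold=0.6)   # the fuzzy duplicate pair
```

In the framework itself, the token sets would additionally encode structural information extracted from the XML trees.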

The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and a good way to avoid metabolic interference. This process is called metabolic channeling, and it describes the (in)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate molecule 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated a metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is based on the affinity to their substrate/product DHBV. The already high affinity of both enzymes for each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be channeled rapidly in order to be correctly reduced further to PEB. Fluorescence titration experiments and transfer assays confirmed the enhancing effect of DHBV on its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with that of standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to PebS, which can also convert BV IXα via DHBV to PEB as a single enzyme. The product and intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work revealed that PebA and PebB interact via a proximity channeling process where the intermediate DHBV plays an important role for the interaction. It also highlights the importance of substrate channeling in the synthesis of PEB to optimize the flux of intermediates through this metabolic pathway.

Since their invention in the 1980s, behaviour-based systems have become very popular among roboticists. Their component-based nature facilitates the distributed implementation of systems, fosters reuse, and allows for early testing and integration. However, the distributed approach necessitates the interconnection of many components into a network in order to realise complex functionalities. This network is crucial to the correct operation of the robotic system. There are few sound design techniques for behaviour networks, especially if the systems are to realise task sequences. Therefore, the quality of the resulting behaviour-based systems is often highly dependent on the experience of their developers.
This dissertation presents a novel integrated concept for the design and verification of behaviour-based systems that realise task sequences. Part of this concept is a technique for encoding task sequences in behaviour networks. Furthermore, the concept provides guidance to developers of such networks. Based on a thorough analysis of methods for defining sequences, Moore machines have been selected for representing complex tasks. With the help of the structured workflow proposed in this work and the developed accompanying tool support, Moore machines defining task sequences can be transferred automatically into corresponding behaviour networks, resulting in less work for the developer and a lower risk of failure.
Due to the common integration of automatically and manually created behaviour-based components, a formal analysis of the final behaviour network is reasonable. For this purpose, the dissertation at hand presents two verification techniques and justifies the selection of model checking. A novel concept for applying model checking to behaviour-based systems is proposed according to which behaviour networks are modelled as synchronised automata. Based on such automata, properties of behaviour networks that realise task sequences can be verified or falsified. Extensive graphical tool support has been developed in order to assist the developer during the verification process.
Several examples are provided in order to illustrate the soundness of the presented design and verification techniques. The applicability of the integrated overall concept to real-world tasks is demonstrated using the control system of an autonomous bucket excavator. It can be shown that the proposed design concept is suitable for developing sophisticated behaviour networks and that the presented verification technique allows for verifying real-world behaviour-based systems.
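A Moore machine of the kind used above to define task sequences can be sketched in a few lines; the states, inputs, and outputs below are hypothetical and only illustrate the formalism (outputs depend on the current state alone), not the excavator control system.

```python
class MooreMachine:
    """Minimal Moore machine: the output depends only on the current state."""
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions   # (state, input symbol) -> next state
        self.outputs = outputs           # state -> output
        self.state = start

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.outputs[self.state]

# hypothetical task sequence: drive to a target, grasp, return
transitions = {
    ("idle", "start"): "driving",
    ("driving", "arrived"): "grasping",
    ("grasping", "grasped"): "returning",
    ("returning", "arrived"): "idle",
}
outputs = {"idle": "wait", "driving": "drive",
           "grasping": "grasp", "returning": "drive"}

m = MooreMachine(transitions, outputs, start="idle")
trace = [m.step(s) for s in ["start", "arrived", "grasped", "arrived"]]
# trace == ["drive", "grasp", "drive", "wait"]
```

In the proposed workflow, such a machine would be translated automatically into a corresponding behaviour network.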

When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low-cost and performance.
If one of these design goals is approached further, it usually leads to a recession of one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing amount of ambient and aware technology installations in today's environments - no matter whether in private homes, offices or factory environments.
Part of this technology are sensors that are suitable to assess the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware-environment research.
It shows how data from aware environments can be used to approach the above-mentioned goal of establishing safe, performant, and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.
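The probabilistic treatment of unreliable aware-environment sensors can be illustrated with a single-cell discrete Bayes filter for "person present"; the hit and false-alarm rates of the motion detector are assumed values for the sketch, not measured characteristics of any deployed sensor.

```python
def bayes_update(prior, detected, p_detect_present=0.7, p_detect_absent=0.1):
    """One Bayes update of P(person present) from a binary motion-detector
    reading with given hit and false-alarm rates (illustrative values)."""
    if detected:
        like_present, like_absent = p_detect_present, p_detect_absent
    else:
        like_present, like_absent = 1 - p_detect_present, 1 - p_detect_absent
    num = like_present * prior
    return num / (num + like_absent * (1 - prior))

# repeated noisy readings sharpen the belief despite sensor unreliability
belief = 0.5
for reading in [True, True, False, True]:
    belief = bayes_update(belief, reading)
# belief is now close to 1
```

Beliefs like this one would feed the local and global path planners; if the sensors fail entirely, the belief simply stays uninformative and the robot falls back to its own perception.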

The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. Potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with regard to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating the shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and is parametrised for the application. Combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
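As a toy illustration of extracting a dominant texture orientation from an areal topography, the sketch below uses a structure-tensor estimate on a synthetic striped texture. Note the substitution: the thesis evaluates orientation with the Radon transform; the structure tensor is a simpler stand-in chosen here only to keep the example short.

```python
import numpy as np

def texture_orientation(img):
    """Dominant gradient orientation (radians) via the averaged structure
    tensor: 0.5 * atan2(2*Jxy, Jxx - Jyy)."""
    gy, gx = np.gradient(img)              # axis 0 = rows = y, axis 1 = x
    jxx = (gx * gx).mean()
    jyy = (gy * gy).mean()
    jxy = (gx * gy).mean()
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# synthetic grinding-like texture: stripes whose normal points at 30 degrees
theta = np.deg2rad(30.0)
y, x = np.mgrid[0:256, 0:256]
img = np.cos(0.3 * (x * np.cos(theta) + y * np.sin(theta)))

est = texture_orientation(img)   # close to theta (modulo pi)
```

In the micro lead application, the angle of interest is the small deviation of such an estimate from the calibrated circumferential direction.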

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimation of topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data. Statistics of Betti numbers can provide further connectivity information. This work presents a method for topological feature analysis of exploration data based on so-called persistent homology. Loosely, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distil it into persistent features, and to understand how it is organized on a large scale, or at least to obtain low-dimensional information that can point to areas of interest. The algorithm for computing the persistent Betti numbers via barcodes is realized in the computer algebra system "Singular" as part of this work.
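For the 0-dimensional part (connected components, i.e. clustering), the barcode described above can be computed with a union-find pass over pairwise distances sorted by length. This pure-Python sketch on a made-up 1D point cloud illustrates births at radius zero and deaths at merge radii; the thesis' actual computation is realized in "Singular".

```python
from itertools import combinations

def h0_barcode(points):
    """0-dimensional persistence of a 1D point cloud: every component is
    born at radius 0; a bar dies whenever two components merge, which
    happens at the corresponding edge length (union-find over edges
    sorted by length). Returns the finite death times."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted((abs(points[i] - points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)     # one bar [0, d) ends at this merge
    return deaths                # plus one essential bar [0, infinity)

# two tight clusters: two short bars, then one long bar until they merge
deaths = h0_barcode([0.0, 1.0, 10.0, 11.0])   # [1.0, 1.0, 9.0]
```

The long-lived bar (death 9.0) signals that the data has two well-separated clusters, exactly the kind of persistent feature the method looks for.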

Within the last decades, a remarkable development in materials science has taken place -- nowadays, materials are no longer constructed merely as inert structures but rather designed for certain predefined functions. This innovation was accompanied by the appearance of smart materials with reliable recognition, discrimination, and the capability of action as well as reaction. Even though ferroelectric materials serve smartly in real applications, they also possess several restrictions at high-performance usage. The behavior of these materials is almost linear under the action of low electric fields or low mechanical stresses, but exhibits a strongly non-linear response under high electric fields or mechanical stresses. High electromechanical loading conditions result in a change of the spontaneous polarization direction within individual domains, which is commonly referred to as domain switching. The aim of the present work is to develop a three-dimensional coupled finite element model to study the rate-independent and rate-dependent behavior of piezoelectric materials, including domain switching, based on a micromechanical approach. The proposed model is first elaborated within a two-dimensional finite element setting for piezoelectric materials and subsequently extended to the three-dimensional case. This work starts with the development of a micromechanical model for ferroelectric materials. Ferroelectric materials exhibit ferroelectric domain switching, i.e. the reorientation of domains, which occurs under purely electrical loading. For the simulation, a bulk piezoceramic material is considered and each grain is represented by one finite element. In reality, the grains in the bulk ceramic material are randomly oriented. This property is taken into account by applying a random orientation as well as a uniform distribution to the individual elements.
Poly-crystalline ferroelectric materials in the un-poled virgin state can consequently be characterized by randomly oriented polarization vectors. The energy reduction of individual domains is adopted as a criterion for the initiation of domain switching processes. The macroscopic response of the bulk material is predicted by classical volume-averaging techniques. In general, domain switching does not only depend on external loads but also on neighboring grains, which is commonly denoted as the grain boundary effect. These effects are incorporated into the developed framework via a phenomenologically motivated probabilistic approach by relating the actual energy level to a critical energy level. Subsequently, the order of the chosen polynomial function is optimized so that simulations match measured data. A rate-dependent polarization framework is proposed, which is applied to cyclic electrical loading at various frequencies. The reduction in free energy of a grain is used as a criterion for the onset of the domain switching processes. Nucleation in new grains and propagation of the domain walls during domain switching are modeled by a linear kinetics theory. The simulated results show that the macroscopic coercive field increases with increasing loading frequency and that the remanent polarization increases at lower loading amplitudes. The second part of this work is focused on ferroelastic domain switching, which refers to the reorientation of domains under purely mechanical loading. Under sufficiently high mechanical loading, the strain directions within single domains reorient with respect to the applied loading direction. The reduction in free energy of a grain is used as a criterion for the domain switching process. The macroscopic response of the bulk material is computed for the hysteresis curve (stress vs. strain), whereby uni-axial and quasi-static loading conditions are applied to the bulk material specimen.
Grain boundary effects are addressed by incorporating the developed probabilistic approach into this framework, and the order of the polynomial function is optimized so that simulations match measured data. Rate-dependent domain switching effects are captured for various frequencies and mechanical loading amplitudes by means of the developed volume fraction concept, which relates the particular time interval to the switching portion. The final part of this work deals with ferroelectric and ferroelastic domain switching, i.e. the reorientation of domains under coupled electromechanical loading. If the free energy for combined electromechanical loading exceeds the critical energy barrier, elements are allowed to switch. Firstly, hysteresis and butterfly curves under purely electrical loading are discussed. Secondly, additional mechanical loads in axial and lateral directions are applied to the specimen. The simulated results show that an increasing compressive stress results in enlarged domain switching ranges and that the hysteresis and butterfly curves flatten at higher mechanical loading levels.
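The switching criterion above (a domain switches when the energetic driving force exceeds a barrier) can be caricatured by a scalar model of independent domains with distributed coercive fields. This is a deliberately crude illustration of how a macroscopic hysteresis loop emerges from many individual switching events, not the thesis' three-dimensional coupled finite element model; all numbers are invented.

```python
import numpy as np

def hysteresis_loop(e_fields, e_coercive):
    """Scalar toy model of ferroelectric switching: each domain carries a
    polarization of +1 or -1 and flips when the applied field exceeds its
    own coercive field in the opposite direction. Returns the averaged
    (macroscopic) polarization at each applied field value."""
    p = -np.ones_like(e_coercive)               # start fully poled 'down'
    loop = []
    for e in e_fields:
        p[(e >= e_coercive) & (p < 0)] = 1.0    # switch up
        p[(-e >= e_coercive) & (p > 0)] = -1.0  # switch down
        loop.append(p.mean())
    return np.array(loop)

rng = np.random.default_rng(0)
ec = rng.uniform(0.5, 1.5, size=1000)           # distributed coercive fields
field = np.concatenate([np.linspace(0, 2, 50),      # ramp up
                        np.linspace(2, -2, 100),    # sweep down
                        np.linspace(-2, 2, 100)])   # sweep up again
p = hysteresis_loop(field, ec)
```

Because switched domains stay switched until the opposite threshold is reached, the averaged polarization traces out a hysteresis loop with a nonzero remanent value at zero field.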

Wetting of a solid surface with liquids is an important parameter in chemical engineering processes such as distillation, absorption and desorption. The degree of wetting in packed columns contributes mainly to generating the effective interfacial area and thus to enhancing the heat and mass transfer processes. In this work the wetting of solid surfaces was studied experimentally and virtually through three-dimensional CFD simulations using the multiphase-flow VOF model implemented in the commercial software FLUENT, which can be used to simulate stratified flows [1]. Rivulet flow, a special case of film flow that is mostly found in packed columns, is discussed. Wetting of flat and wavy solid metal plates with a liquid rivulet flow was simulated and experimentally validated. The local rivulet thickness was measured with an optically assisted mechanical sensor whose needle is moved perpendicular to the plate surface by a step motor and in the other two directions by two micrometers. The measured and simulated rivulet profiles were compared to selected theoretical models found in the literature, such as Duffy & Moffatt [2], Towell & Rothfeld [3] and Al-Khalil et al. [4]. The velocity field in a cross section of a rivulet flow and the non-dimensional maximum and mean velocity values for the vertical flat plate were also compared with the models of Al-Khalil et al. [4] and Allen & Biggin [5]. A few CFD simulations for the wavy plate case were compared to the experimental findings and to the Towell model for a flat plate [3]. In the second stage of this work, 3-D CFD simulations and an experimental study were performed for the wetting of a structured packing element and a packing sheet consisting of three elements of the type Rombopak 4M, a product of the company Kuhni, Switzerland. The hydrodynamic parameters of a packed column, i.e. 
the degree of wetting, the interfacial area and the liquid hold-up, have been extracted from the CFD simulations for different liquid systems and liquid loads. The flow patterns and the degree of wetting have been compared to those of the experiments, where the experimental values for the degree of wetting were estimated from snapshots of the flow on the packing sheet in a test rig. A new model to describe the hydrodynamics of packed columns equipped with Rombopak 4M was derived with the help of the CFD simulation results. The model predicts the degree of wetting, the specific or interfacial area and the liquid hold-up at different flow conditions. It was compared to Billet & Schultes [6], to the SRP model of Rocha et al. [7-9], to Shi & Mersmann [10] and others. Since the pressure drop is one of the most important parameters in packed columns, especially for vacuum operating columns, a few CFD simulations were performed to estimate the dry pressure drop in a structured and a flat packing element and were compared to the experimental results. Good agreement was found between the experimental and the CFD simulation results on the one hand, and between the simulations and theoretical models for rivulet flow on an inclined plate on the other. The flow patterns and liquid spreading behaviour on the packing element agree well with the experimental results. The VOF (Volume of Fluid) model was found to be very sensitive to different liquid properties and can be used for the optimization of packing geometries and for revealing critical details of wetting and film flow. An extension of this work to perform CFD simulations of the flow inside a block of the packing, to obtain a detailed picture of the interaction between the liquid and the packing surfaces, is recommended as a further perspective.

The polydisperse nature of the turbulent droplet swarm in agitated liquid-liquid contacting equipment makes its mathematical modelling and the solution methodologies a rather sophisticated process. This polydispersion can be modelled as a population of droplets randomly distributed with respect to some internal properties at a specific location in space, using the population balance equation as a mathematical tool. However, the analytical solution of such a mathematical model is hardly obtainable except for particular idealized cases, and hence numerical solutions are resorted to in general. This is due to the inherent nonlinearities in the convective and diffusive terms as well as the appearance of many integrals in the source term. In this work two conservative discretization methodologies for both internal (droplet state) and external (spatial) coordinates are extended and efficiently implemented to solve the population balance equation (PBE) describing the hydrodynamics of liquid-liquid contacting equipment. The internal-coordinate conservative discretization techniques of Kumar and Ramkrishna (1996a, b), originally developed for the solution of the PBE in simple batch systems, are extended to continuous flow systems and validated against analytical solutions as well as published experimental droplet interaction functions and hydrodynamic data. In addition to these methodologies, we present a conservative discretization approach for droplet breakage in batch and continuous flow systems, which is found to have convergence characteristics identical to the method of Kumar and Ramkrishna (1996a). Apart from the specific discretization schemes, the numerical solution of droplet population balance equations by discretization is known to suffer from inherent finite domain errors (FDE). 
Two approaches that minimize the total FDE during the solution of the discrete PBEs using approximate optimal moving (for batch systems) and fixed (for continuous systems) grids are introduced (Attarakih, Bart & Faqir, 2003a). As a result, significant improvements are achieved in predicting the number densities and the zeroth and first moments of the population. For spatially distributed populations (such as extraction columns) the resulting system of partial differential equations is spatially discretized in conservative form using a simplified first-order upwind scheme as well as first- and second-order non-oscillatory central differencing schemes (Kurganov & Tadmor, 2000). This spatial discretization avoids the characteristic decomposition of the convective flux based on approximate Riemann solvers and the operator splitting technique required by classical upwind schemes (Karlsen et al., 2001). The time variable is discretized using an implicit, strongly stable approach that is formulated by careful lagging of the nonlinear parts of the convective and source terms. The present algorithms are tested against analytical solutions of the simplified PBE through many case studies. In all these case studies the discrete models converge successfully to the available analytical solutions, and to solutions on relatively fine grids when an analytical solution is not available. This is accomplished by deriving five analytical solutions of the PBE in a continuous stirred tank and a liquid-liquid extraction column for special cases of the breakage and coalescence functions. As a special case, these algorithms are implemented in a Windows computer code called LLECMOD (Liquid-Liquid Extraction Column Module) to simulate the hydrodynamics of general liquid-liquid extraction columns (LLEC). 
The user input dialog makes LLECMOD a user-friendly program that enables the user to select grids, column dimensions, flow rates, velocity models, simulation parameters, the chemical components of the dispersed and continuous phases, and droplet phase space-time solvers. The graphical output within the Windows environment adds a distinctive feature to the program and makes it easy to examine and interpret the results quickly. Moreover, the dynamic model of the dispersed phase is carefully treated to correctly predict the oscillatory behavior of the LLEC hold-up. In this context, a continuous velocity model corresponding to the manipulation of the inlet continuous flow rate through the control of the dispersed phase level is derived to eliminate this behavior.
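The conservative first-order upwind discretization used for the spatial coordinate can be illustrated by a generic finite-volume transport step. This is a minimal sketch of the scheme's conservation property, not the LLECMOD implementation; a uniform grid, positive velocity and zero inflow are assumed for illustration.

```python
def upwind_step(u, v, dx, dt):
    # One explicit conservative step for u_t + (v u)_x = 0 with v > 0:
    # the cell-face flux F_{i-1/2} is taken from the upwind (left) cell.
    flux = [0.0] + [v * ui for ui in u]  # zero inflow at the left boundary
    return [ui - dt / dx * (flux[i + 1] - flux[i]) for i, ui in enumerate(u)]

u0 = [0.0, 1.0, 1.0, 0.0, 0.0]                # initial cell averages
u1 = upwind_step(u0, v=1.0, dx=0.1, dt=0.05)  # CFL number 0.5
```

Because the update is written in flux form, the total droplet number changes only through the boundary fluxes; this is exactly the conservation property the conservative discretization is designed to guarantee.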

We discuss some first steps towards experimental design for neural network regression which, at present, is too complex to treat fully in general. We encounter two difficulties: the nonlinearity of the models together with the high parameter dimension on one hand, and the common misspecification of the models on the other hand.
Regarding the first problem, we restrict our consideration to neural networks with only one or two neurons in the hidden layer and a univariate input variable. We prove some results regarding locally D-optimal designs, and present a numerical study using the concept of maximin optimal designs.
Regarding the second problem, we examine the effects of misspecification on optimal experimental designs.
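To make the local D-optimality criterion concrete, the sketch below evaluates the determinant of the Fisher information matrix for a hypothetical one-neuron model η(x) = c·σ(ax + b) at assumed nominal parameter values; a locally D-optimal design maximizes this determinant over all designs, and a design with fewer support points than parameters is visibly singular. The model, the nominal parameters and the candidate designs are illustrative assumptions, not the designs studied in the thesis.

```python
import math

def grad(x, a, b, c):
    # Gradient of eta(x) = c * sigma(a*x + b) with respect to (a, b, c).
    s = 1.0 / (1.0 + math.exp(-(a * x + b)))
    ds = s * (1.0 - s)
    return [c * ds * x, c * ds, s]

def info_matrix(design, a, b, c):
    # M(xi) = sum_i w_i g(x_i) g(x_i)^T for design points (x_i, w_i).
    M = [[0.0] * 3 for _ in range(3)]
    for x, w in design:
        g = grad(x, a, b, c)
        for i in range(3):
            for j in range(3):
                M[i][j] += w * g[i] * g[j]
    return M

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

nominal = (1.0, 0.0, 1.0)                          # assumed local parameter guess
three_point = [(-2.0, 1/3), (0.0, 1/3), (2.0, 1/3)]
one_point = [(0.0, 1.0)]                           # fewer support points than parameters
```

Here det3(info_matrix(three_point, *nominal)) is strictly positive while the one-point design yields a singular information matrix, reflecting that a locally D-optimal design for a three-parameter model needs at least three support points.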

This dissertation focuses on the evaluation of the technical and environmental sustainability of water distribution systems based on scenario analysis. A decision support system is created to assist in the decision-making process and to visualize the results of the sustainability assessment for current and future populations and scenarios. First, a methodology is developed to assess the technical and environmental sustainability of the current and future water distribution system scenarios. Then, scenarios are produced to evaluate alternative solutions for the current water distribution system as well as future populations and water demand variations. Finally, a decision support system is proposed using a combination of several visualization approaches to increase the data readability and robustness of the sustainability evaluations of the water distribution system.
The technical sustainability of a water distribution system is measured using the sustainability index methodology, which is based on the reliability, resiliency and vulnerability performance criteria. Hydraulic efficiency and water quality requirements are represented using the nodal pressure and water age parameters, respectively. The U.S. Environmental Protection Agency's EPANET software is used to perform the hydraulic (i.e. nodal pressure) and water quality (i.e. water age) analyses in a case study. In addition, the environmental sustainability of a water network is evaluated using the “total fresh water use” and “total energy intensity” indicators. For each scenario, multi-criteria decision analysis is used to combine the technical and environmental sustainability criteria for the study area.
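As an illustration of how reliability, resiliency and vulnerability can combine into a single sustainability index, consider the sketch below. It follows one common formulation (a satisfactory state defined by a minimum threshold, and the index taken as the product of the three criteria); the exact definitions used in the dissertation may differ, and the time series and threshold are illustrative, not data from the case study.

```python
def sustainability_index(values, threshold):
    # Satisfactory time steps: the parameter value meets the threshold
    # (e.g. nodal pressure above the required minimum).
    ok = [v >= threshold for v in values]
    reliability = sum(ok) / len(ok)
    # Resiliency: probability that a failure step is followed by recovery.
    failures = [i for i in range(len(ok) - 1) if not ok[i]]
    resiliency = sum(ok[i + 1] for i in failures) / len(failures) if failures else 1.0
    # Vulnerability: mean relative shortfall over the failure steps.
    deficits = [threshold - v for v, good in zip(values, ok) if not good]
    vulnerability = sum(deficits) / len(deficits) / threshold if deficits else 0.0
    return reliability * resiliency * (1.0 - vulnerability)

pressures = [30.0, 28.0, 18.0, 26.0, 30.0, 29.0]  # illustrative nodal pressures
si = sustainability_index(pressures, threshold=20.0)
```

The same function applies unchanged to water age (with the inequality reversed), which is why a single index can summarize both hydraulic and water quality performance per scenario.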
The technical and environmental sustainability assessment methodology is first applied to the baseline scenario (i.e. the current water distribution system). Critical locations where hydraulic efficiency and water quality problems occur in the current system are identified. Two major scenario options are considered to increase the sustainability at these critical locations. These scenarios focus on creating alternative systems in order to test and verify the technical and environmental sustainability methodology rather than on obtaining the best solution for the current and future water distribution systems. The first scenario is a traditional approach to increasing the hydraulic efficiency and water quality; it includes using additional network components such as booster pumps, valves, etc. The second scenario is based on using a reclaimed water supply to meet the non-potable water demand and fire flow. The fire flow simulation is specifically included in the sustainability assessment since regulations have a significant impact on urban water infrastructure design. Eliminating the fire flow requirement from potable water distribution systems would help save fresh water resources as well as reduce detention times.
The decision support system is created to visualize the results of each scenario and to compare these results with each other effectively. The EPANET software is a powerful tool for conducting hydraulic and water quality analyses, but its visualization capabilities are limited for decision support purposes. Therefore, in this dissertation, the hydraulic and water quality simulations are completed using the EPANET software, and the results for each scenario are visualized by combining several visualization techniques in order to provide better data readability. The first technique introduced here uses small multiple maps instead of the animation technique to visualize the nodal pressure and water age parameters. This technique eliminates change blindness and provides easy comparison of time steps. In addition, a procedure is proposed to aggregate the nodes along the edges in order to simplify the water network. A circle view technique is used to visualize two values of a single parameter (i.e. the nodal pressure or water age). The third approach is based on fitting the water network into a grid representation, which helps eliminate the irregular geographic distribution of the nodes and improves the visibility of each circle view. Finally, a prototype of an interactive decision support tool is proposed for the current population and water demand scenarios. Interactive tools enable analysis of the aggregated nodes and provide information about the results of each of the current water distribution scenarios.

For the last decade, optimization of beam orientations in intensity-modulated radiation therapy (IMRT) has been shown to improve the treatment plan. Unfortunately, the quality of a set of beam orientations depends heavily on its corresponding beam intensity profiles. Usually, a stochastic selector is used for optimizing the beam orientations, and a single-objective inverse treatment planning algorithm is then used for the optimization of the beam intensity profiles. The overall time needed to solve the inverse planning problem for every random selection of beam orientations becomes excessive. Recently, considerable improvement has been made in optimizing beam intensity profiles by using multi-objective inverse treatment planning. Such an approach yields a variety of beam intensity profiles for every selection of beam orientations, making the dependence between the beam orientations and their intensity profiles less important. This thesis takes advantage of this property to accelerate the optimization process through an approximation of the intensity profiles that is reused across multiple selections of beam orientations, saving a considerable amount of calculation time. A dynamic algorithm (DA) and an evolutionary algorithm (EA) for beam orientation optimization in IMRT planning are presented. The DA automatically mimics the beam's-eye-view and observer's-view methods known from conventional conformal radiation therapy. The EA is based on a dose-volume histogram evaluation function introduced in an attempt to minimize the deviation between the mathematical and clinical optima. To illustrate the efficiency of the algorithms, they have been applied to different clinical examples. In comparison to standard equally spaced beam plans, improvements are reported for both algorithms in all clinical examples, even when, for some cases, fewer beams are used. A smaller number of beams is always desirable without compromising the quality of the treatment plan. 
It results in a shorter treatment delivery time, which reduces potential errors in terms of patient movements and decreases discomfort.
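The dose-volume histogram (DVH) evaluation underlying the EA can be reduced to a simple primitive: the fraction of a structure's volume receiving at least a given dose. The sketch below computes this cumulative-DVH point for equal-volume voxels; the dose values are illustrative, and the actual evaluation function in the thesis scores whole plans against clinical DVH criteria.

```python
def dvh_point(voxel_doses, dose):
    # Cumulative DVH: fraction of the structure volume (equal-volume
    # voxels assumed) receiving at least `dose`.
    return sum(1 for d in voxel_doses if d >= dose) / len(voxel_doses)

target = [58.0, 60.0, 61.0, 59.5]   # illustrative target voxel doses in Gy
organ = [10.0, 25.0, 5.0, 40.0]     # illustrative organ-at-risk voxel doses

coverage = dvh_point(target, 59.0)  # fraction of target receiving >= 59 Gy
sparing = dvh_point(organ, 30.0)    # fraction of organ receiving >= 30 Gy
```

An evaluation function built from such DVH points can reward target coverage and penalize organ exposure, which is how a clinically motivated score replaces a purely mathematical objective.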

In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms.
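For the unconstrained case, the minimal decomposition time admits a well-known closed form: the maximum over the rows of \(A\) of the summed positive increments along the row. The sketch below evaluates this quantity with an \({\mathcal O}(NM)\) scan, consistent with the complexity stated above; it is a sketch of the standard bound only, not the thesis algorithms for the shape-matrix case or for the cardinality problem.

```python
def min_decomposition_time(A):
    # Minimal sum of coefficients alpha_k over all decompositions of the
    # nonnegative integer matrix A into 0-1 matrices with the strict
    # consecutive-ones property (no further constraints): the maximum
    # over rows of the summed positive left-to-right increments.
    best = 0
    for row in A:
        padded = [0] + list(row)
        t = sum(max(0, padded[j + 1] - padded[j]) for j in range(len(row)))
        best = max(best, t)
    return best

A = [[0, 2, 3, 1],
     [1, 1, 4, 2]]
```

For this example the second row dominates with summed positive increments 1 + 3 = 4, so no decomposition can have total time below 4, and one achieving it exists in the unconstrained case.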

In the first part of this work, called Simple node singularity, we compute matrix factorizations of all isomorphism classes, up to shift, of rank-one and rank-two graded indecomposable maximal Cohen--Macaulay (shortly MCM) modules over the affine cone of the simple node singularity. Subsection 2.2 contains a description, by their matrix factorizations, of all rank-two graded MCM R-modules whose sheafification on the projective cone of R is stable. A general description of such modules of any rank over a projective curve of arithmetic genus 1 is also given, using their matrix factorizations. The non-locally free rank-two MCM modules are computed using an algorithm, presented in the Introduction of this work, that gives a matrix factorization of any extension of two MCM modules over a hypersurface. In the second part, called Fermat surface, all graded rank-two MCM modules over the affine cone of the Fermat surface are classified. For the classification of the orientable rank-two graded MCM R-modules, we use a description of orientable modules (over normal rings) in terms of codimension-two Gorenstein ideals, due to Herzog and Kühl. It is proven (in Section 4) that they have skew-symmetric matrix factorizations (over any normal hypersurface ring). For the classification of the non-orientable rank-two MCM R-modules, we use a similar idea as in the orientable case, except that the ideal is no longer Gorenstein.

Due to their N-glycosidase activity, ribosome-inactivating proteins (RIPs) are attractive candidates as antitumor and antiviral agents in medical and biological research. In the present study, we have successfully cloned two different truncated gelonins into pET-28a(+) vectors and expressed intact recombinant gelonin (rGel), recombinant C-terminally truncated gelonin (rC3-gelonin) and recombinant N- and C-terminally truncated gelonin (rN34C3-gelonin). Biological experiments showed that none of these recombinant gelonins has an inhibitory effect on MCF-7 cell lines. These data suggest that the truncated gelonins still have a structure that does not allow internalization into cells. Furthermore, truncation of gelonin leads to a partial or complete loss of N-glycosidase as well as DNase activity compared to intact rGel. Our data suggest that the C- and N-terminal amino acid residues are involved in the catalytic and cytotoxic activities of rGel. In addition, intact gelonin, rather than truncated gelonin, should be selected as the toxin in an immunoconjugate.
In the second part, an immunotoxin composed of gelonin, a basic protein of 30 kDa isolated from the Indian plant Gelonium multiflorum, and the cytotoxic drug MTX has been studied as a potential tool for delivering gelonin into the cytoplasm of cells. The results of many experiments showed that, on average, about 5 molecules of MTX were coupled to one molecule of gelonin. The MTX-gelonin conjugate is able to reduce the viability of MCF-7 cells in a dose-dependent manner (ID50 = 10 nM), as shown by the MTT assay, and significantly induces direct and oxidative DNA damage, as shown by the alkaline comet assay. In the in-vitro translation assay, however, the MTX-gelonin conjugate has an IC50 of 50.5 ng/ml and is thus less toxic than gelonin alone (IC50 = 4.6 ng/ml). It can be concluded that the positive charge plays an important role in the N-glycosidase activity of gelonin. Furthermore, conjugation of MTX with gelonin through the α- and γ-carboxyl groups leads to a partial loss of its anti-folate activity compared to free MTX. Taken together, these results indicate that conjugation of MTX to gelonin permits delivery of the gelonin into the cytoplasm of cancer cells and exerts a measurable toxic effect.
In the third part, we have isolated and characterized two type I ribosome-inactivating proteins (RIPs), gelonin and GAP31, from seeds of Gelonium multiflorum. Both proteins exhibit RNA-N-glycosidase activity. The amino acid sequences of gelonin and GAP31 were identified by MALDI and ESI mass spectrometry. The gelonin and GAP31 peptides obtained by proteolytic digestion (trypsin and Arg-C) are consistent with the amino acid sequences published by Rosenblum and Huang, respectively. Further structural characterization of gelonin and GAP31 (tryptic and Arg-C peptide mapping) showed that the two RIPs have 96% sequence similarity. Thus, these two proteins are most probably isoforms arising from the same gene by alternative splicing. The ESI-MS analysis of gelonin and GAP31 revealed at least three different post-translationally modified forms. A standard plant paucidomannosidic N-glycosylation pattern (GlcNAc2Man2-5Xyl0-1 and GlcNAc2Man6-12Fuc1-2Xyl0-2) was identified by electrospray ionization MS for gelonin on N196 and for GAP31 on N189, respectively. Based on these results, both proteins are located in the vacuoles of Gelonium multiflorum seeds.

In this thesis we developed a desynchronization design flow with the goal of easing the development effort of distributed embedded systems. The starting point of this design flow is a network of synchronous components. By transforming this synchronous network into a dataflow process network (DPN), we ensure that important properties which are difficult or theoretically impossible to analyze directly on DPNs are preserved by construction. In particular, both deadlock-freeness and buffer boundedness can be preserved after desynchronization. For the correctness of desynchronization, we developed a criterion consisting of two properties: a global property that demands the correctness of the synchronous network, and a local property that requires the latency-insensitivity of each local synchronous component. As the global property is also a correctness requirement of synchronous systems in general, we take this property as an assumption of our desynchronization. The local property, however, is in general not satisfied by all synchronous components, and therefore needs to be verified before desynchronization. In this thesis we developed a novel technique for the verification of the local property that can be carried out very efficiently. Finally, we developed a model transformation method that translates a set of synchronous guarded actions – an intermediate format for synchronous systems – into an asynchronous actor description language (CAL). Our theorem ensures that, once the correctness verification has passed, the generated DPN of asynchronous processes (or actors) preserves the functional behavior of the original synchronous network. Moreover, by the correctness of the synchronous network, our theorem guarantees that the derived DPN is deadlock-free and can be implemented with only finitely bounded buffers.

Epoxy belongs to a category of high-performance thermosetting polymers which have been used extensively in industrial and consumer applications. Highly cross-linked epoxy polymers offer excellent mechanical properties, adhesion, and chemical resistance. However, unmodified epoxies are prone to brittle fracture and crack propagation due to their highly crosslinked structure. As a result, epoxies are normally toughened to ensure the usability of these materials in practical applications.
This research work focuses on the development of novel modified epoxy matrices with enhanced mechanical, fracture-mechanical and thermal properties, suitable for processing by filament winding technology, to manufacture composite-based calender roller covers with improved performance in comparison to commercially available products.
In the first stage, a neat epoxy resin (EP) was modified using three different high-functionality epoxy resins with two types of hardeners, i.e. amine-based (H1) and anhydride-based (H2). A series of hybrid epoxy resins was obtained by systematic variation of the high-functionality epoxy resin contents in the reference epoxy system. The resulting matrices were characterized by their tensile properties and the best system was chosen for each hardener, i.e. amine and anhydride. For the tailored amine-based system (MEP_H1) a 14 % improvement was measured for bulk samples; similarly, for the tailored anhydride-based system (MEP_H2) an 11 % improvement was measured when tested at 23 °C.
Further, the tailored epoxy systems (MEP_H1 and MEP_H2) were modified using a specially designed block copolymer (BCP) and core-shell rubber nanoparticles (CSR). A series of nanocomposites was obtained by systematic variation of the filler contents. The resulting matrices were extensively characterized qualitatively and quantitatively to reveal the effect of each filler on the polymer properties. It was shown that the BCP confers better fracture properties to the epoxy resin at low filler loading without degrading the other mechanical properties. These characteristics were accompanied by ductility and temperature stability. All composites were tested at 23 °C and at 80 °C to understand the effect of temperature on the mechanical and fracture properties.
Examinations of fractured specimen surfaces provided information about the mechanisms responsible for the reinforcement. Nanoparticles generate several energy-dissipating mechanisms in the epoxy, e.g. plastic deformation of the matrix, cavitation, void growth, debonding and crack pinning. These were closely related to the microstructure of the materials, whose characteristics were verified by microscopy methods (SEM and AFM). The microstructure of the neat epoxy-hardener system was strongly influenced by the nanoparticles and the resulting interfacial interactions. The interaction of the nanoparticles with a different hardener system results in a different morphology, which ultimately influences the mechanical and fracture-mechanical properties of the nanocomposites. Hybrid toughening using combinations of block copolymer / core-shell rubber nanoparticles and block copolymer / TiO2 nanoparticles was investigated in the epoxy systems. It was found that the addition of a rigid phase together with a soft phase recovers the loss of strength caused in the nanocomposites by the softer phase.
In order to clarify the relevant relationships, the microstructural and mechanical properties were correlated. The Counto, Halpin-Tsai and Lewis-Nielsen equations were used to calculate the modulus of the composites, and the predicted moduli fit the measured values well. Modeling was also done to predict the toughening contributions from the block copolymers and the core-shell rubber nanoparticles. There was good agreement between the predicted and experimental values for the fracture energy.
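As an example of the micromechanical models mentioned, the Halpin-Tsai estimate of the composite modulus can be sketched as below; the moduli, volume fraction and shape factor used are illustrative values, not the measured data of this work.

```python
def halpin_tsai(E_m, E_f, phi, zeta=2.0):
    # Halpin-Tsai estimate of the composite modulus E_c from the matrix
    # modulus E_m, the filler modulus E_f, the filler volume fraction phi
    # and the shape factor zeta (geometry dependent; 2 is a common choice
    # for roughly spherical particles).
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

E_composite = halpin_tsai(E_m=3.0, E_f=230.0, phi=0.05)  # GPa, illustrative
```

The prediction reduces to the matrix modulus at zero filler content and grows monotonically with the volume fraction for stiff fillers, which is why comparing it against measured moduli is a meaningful consistency check.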

Compared to our current knowledge of neuronal excitation, little is known about the development and maturation of inhibitory circuits. Recent studies show that inhibitory circuits develop and mature in a way similar to excitatory circuits. One such similarity is development through excitation, irrespective of their inhibitory nature. In the present study, I used the inhibitory projection between the medial nucleus of the trapezoid body (MNTB) and the lateral superior olive (LSO) as a model system to unravel some aspects of the development of inhibitory synapses. In LSO neurons of the rat auditory brainstem, glycine receptor-mediated responses change from depolarizing to hyperpolarizing during the first two postnatal weeks (Kandler and Friauf 1995, J. Neurosci. 15:6890-6904). The depolarizing effect of glycine is due to a high intracellular chloride concentration ([Cl-]i), which results in a reversal potential of glycine (EGly) more positive than the resting membrane potential (Vrest). In older LSO neurons, the hyperpolarizing effect is due to a low [Cl-]i (Ehrlich et al., 1999, J. Physiol. 520:121-137). The aim of the present study was to elucidate the molecular mechanism behind Cl- homeostasis in LSO neurons, which determines the polarity of the glycine response. To this end, the role and developmental expression of Cl- cotransporters such as NKCC1 and KCC2 were investigated. Molecular biological and gramicidin perforated patch-clamp experiments revealed the role of KCC2 as an outward Cl- cotransporter in mature LSO neurons (Balakrishnan et al., 2003, J Neurosci. 23:4134-4145). NKCC1, however, does not appear to be involved in accumulating chloride in immature LSO neurons. Further experiments indicated a role of the GABA and glycine transporters (GAT1 and GLYT2) in accumulating Cl- in immature LSO neurons. Finally, the experiments with hypothyroid animals suggest a possible role of thyroid hormone in the maturation of the inhibitory synapse. 
Altogether, this thesis addressed the molecular mechanisms underlying Cl- regulation in LSO neurons and deciphered them to some extent.

This thesis deals with the development of a tractor front loader scale which measures the payload continuously, independent of the payload's center of gravity and unaffected by the position and movements of the loader. To achieve this, a mathematical model of a common front loader is simplified, which makes it possible to identify its parameters by a repeatable and automatic procedure. By measuring accelerations as well as cylinder forces, the payload is determined continuously during the working process. Finally, a prototype was built and the scale was tested on a tractor.

Today’s pervasive availability of computing devices equipped with wireless communication and location or inertial sensing capabilities is unprecedented. The number of smartphones sold worldwide is still growing, and increasing numbers of sensor-enabled accessories are available which a user can wear in the shoe or at the wrist for fitness tracking, or just temporarily put on to measure vital signs. Despite this availability of computing and sensing hardware, the merit of applications seems rather limited with regard to the full potential of the information inherent in such sensor deployments. Most applications build upon a vertical design which encloses a narrowly defined sensor setup and algorithms specifically tailored to suit the application’s purpose. Successful technologies, however, such as the OSI model, which serves as the base for internet communication, have used a horizontal design that allows high-level communication protocols to be run independently of the actual lower-level protocols and physical medium access. This thesis contributes to a more horizontal design of human activity recognition systems at two stages. First, it introduces an integrated toolchain to facilitate the entire process of building activity recognition systems and to foster sharing and reuse of individual components. At a second stage, a novel method for the automatic integration of new sensors to increase a system’s performance is presented and discussed in detail.
The integrated toolchain is built around an efficient toolbox of parametrizable components for interfacing sensor hardware, synchronizing and arranging data streams, filtering and extracting features, classifying feature vectors, and interfacing output devices and applications. The toolbox emerged as an open-source project through several research projects and is actively used by research groups. Furthermore, the toolchain supports recording, monitoring, annotation, and sharing of large multi-modal data sets for activity recognition through a set of integrated software tools and a web-enabled database.
The method for automatically integrating a new sensor into an existing system is, at its core, a variation of well-established principles of semi-supervised learning: (1) unsupervised clustering to discover structure in data, (2) the assumption that cluster membership is correlated with class membership, and (3) obtaining a small number of labeled data points for each cluster, from which the cluster labels are inferred. In most semi-supervised approaches, however, the labels are the ground truth provided by the user. By contrast, the approach presented in this thesis uses a classifier trained on an N-dimensional feature space (the old classifier) to provide labels for a few points in an (N+1)-dimensional feature space, which are then used to generate a new, (N+1)-dimensional classifier. The different factors that make a distribution difficult to handle are discussed, a detailed description of heuristics designed to mitigate the influence of such factors is provided, and a detailed evaluation on a set of over 3000 sensor combinations from 3 multi-user experiments that have been used by a variety of previous studies of different activity recognition methods is presented.
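The label-transfer idea described above can be sketched as follows. This is an illustrative toy example, not the thesis' actual system: the data, the choice of k-means for step (1), the nearest-neighbour classifiers, and the majority vote over five sampled points per cluster are all assumptions made for the sketch.

```python
# Sketch: an "old" classifier on N features labels a few points per
# cluster of (N+1)-dimensional data; cluster labels are inferred by
# majority vote; a new (N+1)-dimensional classifier is then trained.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# "Old" system: a classifier trained on N = 2 features with ground truth.
X_old = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [4.0, 4.0]], 100, axis=0)
y_old = np.repeat([0, 1], 100)
old_clf = KNeighborsClassifier().fit(X_old, y_old)

# A new sensor adds one feature dimension; no ground truth is available.
X_new = np.hstack([X_old, rng.normal(size=(200, 1))])

# (1) Unsupervised clustering discovers structure in the (N+1)-dim data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_new)

# (2)+(3) The old classifier labels a few points per cluster; each
# cluster's label is inferred by majority vote over those points.
labels = np.empty(len(X_new), dtype=int)
for c in np.unique(clusters):
    idx = np.flatnonzero(clusters == c)
    sample = rng.choice(idx, size=min(5, len(idx)), replace=False)
    votes = old_clf.predict(X_new[sample, :2])  # old classifier sees N dims only
    labels[idx] = np.bincount(votes).argmax()

# Train the new, (N+1)-dimensional classifier on the inferred labels.
new_clf = KNeighborsClassifier().fit(X_new, labels)
```

Note that `y_old` is used only to train the old classifier, never to label the new feature space; the thesis' heuristics address exactly the cases where clusters and classes align less cleanly than in this toy data.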

Divide-and-conquer is a common strategy for managing the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC is decomposed into several modules, and every module is verified separately. Usually an SoC module is reactive: it interacts with the modules in its environment. This interaction is normally modeled by environment constraints, which are applied when verifying the SoC module. Environment constraints are assumed to be always true during the verification of the individual modules of a system. Therefore, the correctness of the environment constraints is essential for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, assume-guarantee reasoning rules can be employed.
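For reference, a classical assume-guarantee rule from the literature has the following shape (this is the textbook form, not the new rule proposed in this thesis):

```latex
% If M_1 guarantees A under no assumptions, and M_2 guarantees G
% under the assumption A, then the composition satisfies G:
\frac{\;\langle \mathit{true} \rangle\, M_1 \,\langle A \rangle
      \qquad
      \langle A \rangle\, M_2 \,\langle G \rangle\;}
     {\;\langle \mathit{true} \rangle\, M_1 \parallel M_2 \,\langle G \rangle\;}
```

Rules of this circular or non-circular flavor presuppose a semantics for assumptions and guarantees that, as discussed next, does not directly carry over to constraints written in an industrial property language.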
However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified in an industry-standard property language such as SystemVerilog Assertions (SVA).
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend to shift more of the functionality in SoCs from the hardware to the hardware-dependent software (HWDS), which is a crucial component of an SoC since other software layers, such as the operating system, are built on it. Therefore, there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.

Thermoelasticity represents the fusion of the fields of heat conduction and elasticity in solids and is usually characterized by a twofold coupling: thermally induced stresses can be determined as well as temperature changes caused by deformations. Studying this mutual influence is the subject of thermoelasticity. Usually, heat conduction in solids is based on Fourier’s law, which describes a diffusive process. It predicts an unnatural, infinite transmission speed for parts of local heat pulses. At room temperature, for example, these parts are strongly damped; thus, in these cases most engineering applications are described satisfactorily by the classical theory. However, in some situations the predictions according to Fourier’s law fail miserably. One of these situations occurs at temperatures near absolute zero, where the phenomenon of second sound was discovered in the 20th century. Consequently, non-classical theories have experienced great research interest during the recent decades. Throughout this thesis, the expression “non-classical” refers to the fact that the constitutive equation of the heat flux is not based on Fourier’s law. Fourier’s classical theory hypothesizes that the heat flux is proportional to the temperature gradient. A new thermoelastic theory, on the one hand, needs to be consistent with classical thermoelastodynamics and, on the other hand, needs to describe second sound accurately. Hence, during the second half of the last century the traditional parabolic heat equation was replaced by a hyperbolic one. Its coupling with elasticity leads to non-classical thermomechanics, which allows the modeling of second sound, provides a passage to the classical theory, and additionally overcomes the paradox of infinite wave speed.
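The archetypal example of such a hyperbolic replacement (shown here for rigid heat conduction, without the elastic coupling; the thesis' actual constitutive choice may differ) is the Maxwell–Cattaneo law, which augments Fourier's law with a relaxation term:

```latex
% Fourier's law and the resulting classical, parabolic heat equation:
\mathbf{q} = -\kappa \,\nabla\theta
\quad\Longrightarrow\quad
\dot{\theta} = \alpha \,\Delta\theta
% Maxwell--Cattaneo law and the resulting hyperbolic heat equation:
\tau \,\dot{\mathbf{q}} + \mathbf{q} = -\kappa \,\nabla\theta
\quad\Longrightarrow\quad
\tau \,\ddot{\theta} + \dot{\theta} = \alpha \,\Delta\theta
```

Here \(\tau > 0\) is a relaxation time; the hyperbolic equation propagates thermal disturbances with the finite speed \(\sqrt{\alpha/\tau}\) and recovers the classical theory in the limit \(\tau \to 0\).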
Although much effort has been put into non-classical theories, the thermoelastodynamic community has not yet agreed on one approach, and systematic research is going on worldwide. Computational methods play an important role in solving thermoelastic problems in the engineering sciences, usually due to the complex structure of the equations at hand. This thesis aims at establishing a basic theory and numerical treatment of non-classical thermoelasticity (rather than dealing with special cases). The finite element method is already widely accepted in the field of structural solid mechanics and enjoys a growing significance in thermal analyses. This approach resorts to a finite element method in space as well as in time.

The nowadays increasing number of fields in which large quantities of data are collected generates an emergent demand for methods for extracting relevant information from huge databases. Among the various existing data mining models, decision trees are widely used since they represent a good trade-off between accuracy and interpretability. However, one of their main problems is that they are very unstable, which complicates the process of knowledge discovery because users are disturbed by the different decision trees generated from almost the same input learning samples. In the current work, binary tree classifiers are analyzed and partially improved. The analysis of tree classifiers ranges from their topology, seen from the graph theory point of view, to the creation of a new tree classification model that combines decision trees and soft comparison operators (Mlynski, 2003), with the purpose of not only overcoming the well-known instability problem of decision trees, but also of conferring the ability to deal with uncertainty. In order to study and compare the structural stability of tree classifiers, we propose an instability coefficient based on the notion of Lipschitz continuity and offer a metric to measure the proximity between decision trees. The thesis converges towards its main part with the presentation of our model “Soft Operators Decision Tree” (SODT). Mainly, we describe its construction, its application, and the consistency of the mathematical formulation behind it. Finally, we show the results of the implementation of SODT and compare numerically the stability and accuracy of a SODT and a crisp DT. The numerical simulations support the stability hypothesis, and a smaller tendency to overfit the training data is observed with SODT than with a crisp DT.
A further aspect of this inclusion of soft operators is that we choose them such that the resulting goodness function (used by this method) is differentiable, which allows the best split points to be calculated by means of gradient descent methods. The main drawback of SODT is the incorporation of the imprecision factor, which increases the complexity of the algorithm.
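The idea of a differentiable goodness function can be illustrated with a minimal sketch. The logistic membership function, the fuzzy Gini index, and the numeric gradient below are assumptions made for the illustration; the thesis' actual SODT operators and goodness function may differ.

```python
# Sketch: a soft split assigns each sample a membership sigma((x - t)/s)
# to the right branch, making a Gini-style goodness function
# differentiable in the threshold t, so the best split point can be
# found by plain gradient descent instead of exhaustive search.
import numpy as np

def soft_gini(t, x, y, s=0.5):
    """Soft-split impurity: weighted fuzzy Gini of the two branches."""
    m = 1.0 / (1.0 + np.exp(-(x - t) / s))   # membership in right branch
    total = len(x)
    imp = 0.0
    for w in (m, 1.0 - m):                    # fuzzy weights of each branch
        n = w.sum()
        if n > 1e-9:
            p1 = (w * y).sum() / n            # fuzzy class-1 proportion
            imp += (n / total) * 2.0 * p1 * (1.0 - p1)
    return imp

# Toy 1-D data: class 0 around 0, class 1 around 4 (boundary near 2).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

t, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):                          # gradient descent on t
    grad = (soft_gini(t + eps, x, y) - soft_gini(t - eps, x, y)) / (2 * eps)
    t -= lr * grad
# t converges near the class boundary (around 2.0)
```

A crisp split would make the impurity a step function of t; the soft membership smooths it out, which is precisely what enables the gradient-based search for split points mentioned above.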

Photochemical reactions are of great interest due to their importance in chemical and biological processes. Highly sensitive IR/UV double and triple resonance spectroscopy in molecular beam experiments, in combination with ab initio and DFT calculations, yields information on reaction coordinates and on Intersystem Crossing (ISC) processes subsequent to photoexcitation. In general, molecular beam experiments enable the investigation of isolated, cold molecules without any influence of the environment. Furthermore, small aggregates can be analyzed in a supersonic jet by gradually adding solvent molecules like water. Conclusions concerning the interactions in solution can be derived by investigating and fully understanding small systems with a defined number of solvent molecules. In this work, the first applications of combined IR/UV spectroscopy to reactive isolated molecules and triplet states in molecular beams, without using any messenger molecules, are presented. A special focus was placed on excited-state proton transfer reactions, which can also be described as keto-enol tautomerisms. Various molecules such as 3-hydroxyflavone, 2-(2-naphthyl)-3-hydroxychromone and 2,5-dihydroxybenzoic acid have been investigated with regard to this question. In the case of 3-hydroxyflavone and 2-(2-naphthyl)-3-hydroxychromone, the IR spectra have been recorded subsequent to an excited-state proton transfer. Furthermore, the dihydrate of 3-hydroxyflavone has been analyzed concerning a possible proton transfer in the excited state: the proton transfer reaction along the water molecules (proton wire) has to be induced by raising the excitation energy. However, photoinduced reactions involve not only singlet but also triplet states. As an archetypal molecule, xanthone has been analyzed. After excitation to the S2 state, ISC occurs into the triplet manifold, leading to a population of the T1 state.
The IR spectrum of the T1 state has been recorded for the first time using the UV/IR/UV technique without using any messenger molecules. Altogether it is shown that IR/UV double and triple resonance techniques are suitable tools to analyze reaction coordinates of photochemical processes.

This thesis combines gas-phase mass spectrometric investigations of ionic transition metal clusters that are either homogeneous \((Nb_n^{+/-}, Co_n^{+/-})\) or heterogeneous \(([Co_nPt_m]^{+/-})\), of their organometallic reaction products, and of organic molecules (aspartame and Asp-Phe) and their alkali metal ion adducts. At the Paris FEL facility CLIO, a newly installed FT-ICR mass spectrometer has been modified by the inclusion of an ion bender that allows for the usage of additional ion sources beyond the installed ESI source. The installation of an LVAP metal cluster source served to produce metal cluster adsorbate complex ions of the type \([Nb_n(C_6H_6)]^{+/-}\). IR-MPD of the complexes \([Nb_n(C_6H_6)]^{+/-} (n = 18, 19)\) resulted in \([Nb_n(C_6)]^{+/-} (n = 18, 19)\) fragments. The spectra are broad, possibly because of vibronic/electronic transitions. In Kaiserslautern, the capabilities of the LVAP source were extended by adding a gas pick-up unit. Complex gases containing C-H bonds otherwise break within the cluster-forming plasma; more stable gases like CO seem to attach at least partially intact. Metal cluster production with argon tagged onto the cluster failed when introducing argon through the pick-up source, but succeeded when using argon as the expansion gas. A new mass spectrometer concept with an additional multipole collision cell for metal cluster adsorbate formation is currently under construction; subsequent cooling shall yield high-resolution IR-MPD spectra of transition metal cluster adsorbate complexes. Prior work on reactions of transition metal clusters with benzene was extended by investigating the reactions with benzene and benzene-d6 of size-selected cationic cobalt clusters \(Co_n^+\) and anionic cobalt clusters \(Co_n^-\) in the size range \(n = 3 - 28\), and of bimetallic cobalt platinum clusters \([Co_nPt_m]^{+/-}\) in the size range \(n + m \le 8\).
Dehydrogenation by cationic cobalt clusters \(Co_n^+\) is sparse, whereas it is effective in small bimetallic clusters \([Co_nPt_m]^+ (n + m \le 3)\). Thus single platinum atoms promote benzene dehydrogenation while further cobalt atoms quench it. Dehydrogenation is ubiquitous in reactions of anionic cobalt clusters. The mixed triatomic clusters \([Co_2Pt_1]^-\) and \([Co_1Pt_2]^-\) are special in causing effective reactions and single dehydrogenation through some kind of cooperativity, while \([Co_nPt_{1,2}]^- (n \ge 3)\) do not react at all. Kinetic isotope effects KIE(n) in the total reaction rates are inverse and, in part, large; dehydrogenation isotope effects DIE(n) are normal. A multistep model of adsorption and stepwise dehydrogenation from the precursor adsorbate proves suitable to rationalize the observed KIEs and DIEs in principle. Particular insights into the effects of charge and of cluster size are largely beyond this model. Some DFT calculations, though preliminary, lend strong support to the otherwise assumed structures and enthalpies. More insight into the cause of the observed effects of charge, size, and composition of both pure and mixed clusters shall arise from ongoing high-level ab initio modeling (especially of the \(n + m = 3\) case for mixed clusters). The influence of the methyl ester group in the molecules aspartame (Asp-PheOMe) and Asp-Phe has been explored. To this end, their protonated and deprotonated species and their complexes with alkali metal ions attached were investigated with different mass spectrometric techniques. Gas-phase H-/D-exchange with \(ND_3\) has proven that in both molecules all acidic NH and OH binding motifs exchange their hydrogen atom and that simultaneous multi-exchange occurs. Kinetic studies revealed that, with alkali metal ions attached, the speed of the first exchange step decreases with increasing ion size.
The additional OH of the carboxylic COOHPhe group in Asp-Phe increases the exchange speed by a constant value. CID experiments yielded water and the protonated Asp-Phe anhydride as main fragments of the protonated molecules; neutral Asp anhydride and \([Phe M]^+ / [PheOMe M]^+\) for \(Li^+\) and \(Na^+\) attached; and neutral aspartame / Asp-Phe and ionic \(M^+\) for \(K^+\), \(Rb^+\) and \(Cs^+\) attached. The threshold energy \(E_{CID}\), indicating ion stability, decreases with increasing ion size. For aspartame, fragmentation occurs at lower \(E_{CID}\) values for complexes with \(H^+\), \(Li^+\) and \(Na^+\) than for the Asp-Phe analogues. Complexes with \(K^+\), \(Rb^+\) and \(Cs^+\) give the same \(E_{CID}\) value for aspartame and Asp-Phe. IR-MPD investigations led to the same fragments as the CID experiments. In combination with quantum mechanical calculations, a change in the preferred structure from a charge-solvated, tridentate type for complexes with small alkali metal ions (\(Li^+\)) to a salt-bridge-type structure for large alkali metal ions (\(Cs^+\)) could be confirmed. The calculations thereby reveal nearly no structural differences between aspartame and Asp-Phe for the cationized species. Deprotonation of the additional COOHPhe group in Asp-Phe is preferred over other acidic positions. A better experimental distinction between possible (calculated) structure types would arise from additional FEL IR-MPD measurements in the energy range of 600 to 1800 \(cm^{-1}\). The comparison of the \(E_{CID}\) values with calculated fragmentation energies proves that not only for alkali metal complexes with \(K^+\), \(Rb^+\) and \(Cs^+\), but also for \(Li^+\) and \(Na^+\), the breaking of all metal atom bonds is part of the transition state. The lower \(E_{CID}\) values for aspartame with small cations may be explained in terms of internal energy: aspartame is a larger molecule, possesses more internal energy and can be regarded as the larger heat bath.
Less energy is needed for fragmentation if the Phe part with the additional methyl ester group is involved in the fragmentation process.

This thesis provides a fully automatic translation from synchronous programs to parallel software for different architectures, in particular, shared memory processing (SMP) and distributed memory systems. Thereby, we exploit characteristics of the synchronous model of computation (MoC) to reduce communication and to improve available parallelism and load-balancing by out-of-order (OOO) execution and data speculation.
Manual programming of parallel software requires developers to partition a system into tasks and to add synchronization and communication. The model-based approach to development abstracts from details of the target architecture and allows decisions about the target architecture to be made as late as possible. The synchronous MoC supports this approach by abstracting from time and by providing implicit parallelism and synchronization. Existing compilation techniques translate synchronous programs into synchronous guarded actions (SGAs), an intermediate format that abstracts from semantic problems in synchronous languages. Compilers for SGAs analyze causality problems, ensure logical correctness, and rule out schizophrenia problems. Hence, SGAs are a simplified and general starting point while preserving the synchronous MoC. The instantaneous feedback in the synchronous MoC makes the mapping of these systems to parallel software a non-trivial task. In contrast, other MoCs such as data-flow process networks (DPNs) directly match parallel architectures. We translate the SGAs into DPNs, which represent a commonly used model for creating parallel software. DPNs have been proposed as a programming model for distributed parallel systems that have communication paths with unpredictable latencies. The purely data-driven execution of DPNs does not require global coordination, and therefore DPNs can easily be mapped to parallel software for architectures with distributed memory. The generation of efficient parallel code from DPNs challenges compiler design with two issues: to utilize a parallel system well, communication and synchronization have to be kept low, and the utilization of the computational units has to be balanced. The variety of hardware architectures and the dynamic execution techniques in the processing units of these systems make a statically balanced distributed execution impossible.
The synchronous MoC is still reflected in our generated DPNs, which exhibit characteristics that allow optimizations concerning the previously mentioned issues. In particular, we apply a general communication reduction and OOO execution to achieve a dynamically balanced execution, which is inspired by hardware design.
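The purely data-driven execution that makes DPNs attractive for distributed targets can be sketched in a few lines. This is a generic Kahn-style process network, not the thesis' generated code; the node functions and channel layout are illustrative assumptions.

```python
# Sketch of a data-flow process network: nodes communicate only through
# FIFO channels and fire as soon as input data is available, so no
# global coordination is needed -- the property that eases mapping to
# parallel and distributed-memory architectures.
import threading
import queue

def source(out, values):
    for v in values:
        out.put(v)
    out.put(None)                      # end-of-stream token

def scale(inp, out, k):
    while (v := inp.get()) is not None:
        out.put(k * v)                 # fires purely data-driven
    out.put(None)

def collect(inp, sink):
    while (v := inp.get()) is not None:
        sink.append(v)

a, b = queue.Queue(), queue.Queue()    # FIFO channels between nodes
result = []
threads = [
    threading.Thread(target=source, args=(a, [1, 2, 3])),
    threading.Thread(target=scale, args=(a, b, 10)),
    threading.Thread(target=collect, args=(b, result)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# result == [10, 20, 30], regardless of thread scheduling
```

Because each node blocks only on its own input channel, the same network could run on separate machines with the queues replaced by network links, which is exactly why DPNs suit distributed memory.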

Nowadays, one of the major objectives in the geosciences is the determination of the gravitational field of our planet, the Earth. A precise knowledge of this quantity is not just interesting in its own right; it is indeed a key point for a vast number of applications. The important question is how to obtain a good model of the gravitational field on a global scale. The only applicable solution, both in cost and in data coverage, is the usage of satellite data. We concentrate on the highly precise measurements that will be obtained by GOCE (Gravity Field and Steady-State Ocean Circulation Explorer, launch expected 2006). This satellite has a gradiometer onboard which returns the second derivatives of the gravitational potential. Mathematically, we have to deal with several obstacles. The first one is that the noise in the different components of these second derivatives differs over several orders of magnitude, i.e., a straightforward solution of this outer boundary value problem will not work properly. Furthermore, we are not interested in the data at satellite height but want to know the field at the Earth's surface; thus we need a regularization (downward continuation) of the data. These two problems are tackled in the thesis and are now described briefly. Split operators: We have to solve an outer boundary value problem at the height of the satellite track. Classically, one can handle first-order side conditions which are not tangential to the surface, and second derivatives pointing in the radial direction, by employing integral and pseudo-differential equation methods. We present a different approach: we classify all first- and purely second-order operators under whose application a harmonic function stays harmonic. This task is done by using modern algebraic methods for solving systems of partial differential equations symbolically. Now we can treat the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions.
The only additional work which has to be done is an inversion of the differential operator, i.e., integration. In particular, we are capable of dealing with derivatives which are tangential to the boundary. Auto-regularization: The second obstacle is finding a proper regularization procedure. This is complicated by the fact that we are facing stochastic rather than deterministic noise. The main question is how to find an optimal regularization parameter, which is impossible without additional knowledge. However, we could show that with a very limited amount of additional information, which is also obtainable in practice, we can regularize in an asymptotically optimal way. In particular, we showed that the knowledge of two input data sets allows an order-optimal regularization procedure even under the hard conditions of Gaussian white noise and an exponentially ill-posed problem. A last, but rather simple, task is combining data from different derivatives, which can be done by a weighted least-squares approach using the information we obtained from the regularization procedure. A practical application to the downward-continuation problem for simulated gravitational data is shown.
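To see why downward continuation is exponentially ill-posed, consider the standard spherical-harmonic picture (a textbook sketch; the thesis' asymptotically optimal parameter choice from two data sets is considerably more refined than the Tikhonov filter shown here):

```latex
% A potential coefficient of spherical-harmonic degree n observed at
% satellite radius R + h relates to its value at the Earth radius R by
F_n(R+h) \;=\; \Big(\tfrac{R}{R+h}\Big)^{n+1} F_n(R),
% so naive inversion multiplies the data -- and its noise -- by
% ((R+h)/R)^{n+1}, which grows exponentially in n.  A regularized
% (Tikhonov-type) inversion damps this amplification:
F_n^{\gamma}(R) \;=\;
\frac{\big(\tfrac{R}{R+h}\big)^{n+1}}
     {\big(\tfrac{R}{R+h}\big)^{2(n+1)} + \gamma}\; F_n(R+h),
```

where the regularization parameter \(\gamma > 0\) balances data fit against noise amplification; choosing it optimally under stochastic noise is exactly the auto-regularization problem addressed above.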

The recognition of day-to-day activities is still a very challenging and important research topic. During recent years, a lot of research has gone into designing and realizing smart environments in different application areas such as health care, maintenance, sports, or smart homes. As a result, a large number of sensor modalities were developed, different types of activity and context recognition services were implemented, and the resulting systems were benchmarked using state-of-the-art evaluation techniques. However, so far hardly any of these approaches have found their way into the market and consequently into the homes of real end-users on a large scale. The reason is that almost all systems have one or more of the following characteristics in common: expensive high-end or prototype sensors are used which are not affordable or reliable enough for mainstream applications; many systems are deployed in highly instrumented environments or so-called "living labs", which are far from real-life scenarios and are often evaluated only in research labs; almost all systems are based on complex system configurations and/or extensive training data sets, which means that a large amount of data must be collected in order to install the system. Furthermore, many systems rely on user- and/or environment-dependent training, which makes it even more difficult to install them on a large scale. Besides, a standardized integration procedure for the deployment of services in existing environments and smart homes has still not been defined. As a matter of fact, service providers use their own closed systems, which are not compatible with other systems, services, or sensors. It is clear that these points make it nearly impossible to deploy activity recognition systems in a real daily-life environment, to make them affordable for real users, and to deploy them in hundreds or thousands of different homes.
This thesis works towards the solution of the above-mentioned problems. Activity and context recognition systems designed for large-scale deployment and real-life scenarios are introduced. The systems are based on low-cost, reliable sensors and can be set up, configured, and trained with little effort, even by technical laymen. It is because of these characteristics that we call our approach "minimally invasive". As a consequence, the large amounts of training data usually required by many state-of-the-art approaches are not necessary. Furthermore, all systems were integrated unobtrusively into real-world or close-to-real-world environments and were evaluated under real-life, as well as close-to-real-life, conditions. The thesis addresses the following topics: First, a sub-room-level indoor positioning system is introduced. The system is based on low-cost ceiling cameras and a simple computer vision tracking approach. The problem of user identification is solved by correlating modes of locomotion patterns derived from the trajectories of unidentified objects and from on-body motion sensors. Afterwards, the issue of recognizing how and what mainstream household devices have been used for is considered. Based on a low-cost microphone, the water consumption of water taps can be approximated by analyzing plumbing noise. Besides that, the operating modes of mainstream electronic devices were recognized by using rule-based classifiers, electric current features, and power measurement sensors. As a next step, the difficulty of spotting subtle, barely distinguishable hand activities, and the resulting object interactions, within a data set containing a large amount of background data is addressed. The problem is solved by introducing an on-body core system which is configured by simple, one-time physical measurements and minimal data collections.
The lack of large training sets is compensated by fusing the system with activity and context recognition systems that are able to reduce the observed search space. Amongst other systems, previously introduced approaches and ideas are revisited in this section. An in-depth evaluation shows the impact of each fusion procedure on the performance and run-time of the system. The approaches introduced are able to provide significantly better results than a state-of-the-art inertial system using large amounts of training data. The idea of using unobtrusive sensors has also been applied to the field of behavior analysis. Integrated smartphone sensors are used to detect behavioral changes of individuals due to medium-term stress periods. Behavioral parameters related to location traces, social interactions, and phone usage were analyzed to detect significant behavioral changes of individuals during stress-free and stressful time periods. Finally, as a closing part of the thesis, a standardization approach related to the integration of ambient intelligence systems (as introduced in this thesis) into real-life and large-scale scenarios is shown.

Large displays are becoming more and more popular due to dropping prices. Their size and high resolution leverage collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big-data applications increases. The increased screen size and other properties of large displays pose new challenges to human-computer interaction with these screens. This includes issues such as limited scalability with the number of users and the diversity of input devices in general, leading to increased learning efforts for users, and more.
Using smart phones and tablets as interaction devices for large displays can solve many of these issues. Since they are almost ubiquitous today, users can bring their own device. This approach scales well with the number of users. These mobile devices are easy and intuitive to use and allow for new interaction metaphors, as they feature a wide array of input and output capabilities, such as touch screens, cameras, accelerometers, microphones, speakers, Near-Field Communication, WiFi, etc.
This thesis presents a concept to solve the issues posed by large displays. We show proofs of concept with specialized approaches demonstrating the viability of the concept. A generalized, eyes-free technique using smartphones or tablets to interact with any kind of large display, regardless of hardware or software, then overcomes the limitations of the specialized approaches. This is implemented in a large display application that is designed to run under a multitude of environments, including both 2D and 3D display setups. A special visualization method is used to combine 2D and 3D data in a single visualization.
Additionally, the thesis presents several approaches to solve common issues with large display interaction, such as target sizes on large displays getting too small, expensive tracking hardware, and eyes-free interaction through virtual buttons. These methods provide alternatives and context for the main contribution.

This thesis covers two important fields in financial mathematics, namely continuous-time portfolio optimisation and credit risk modelling. We analyse optimisation problems for portfolios of call and put options on the stock and/or the zero-coupon bond issued by a firm with default risk, using the martingale approach for dynamic optimisation problems. Our findings show that the riskier the option gets, the smaller the proportion of his wealth the investor allocates to the risky asset. Further, we analyse Credit Default Swap (CDS) market quotes on the Eurobonds issued by the Turkish sovereign in order to build the term structure of the sovereign credit risk. Two methods are introduced and compared for bootstrapping the risk-neutral probabilities of default (PD) in an intensity-based (or reduced-form) credit risk modelling approach. We compare the market-implied PDs with the actual PDs reported by credit rating agencies based on historical experience. Our results highlight the market price of the sovereign credit risk depending on the assigned rating category in the sampling period. Finally, we find an optimal leverage strategy for delivering the payments promised by a Constant Proportion Debt Obligation (CPDO). The problem is solved via the introduction and explicit solution of a stochastic control problem, transforming the related Hamilton-Jacobi-Bellman equation into its dual. Contrary to industry practice, the optimal leverage function we derive is a non-linear function of the CPDO asset value. The simulations show promising behaviour of the optimal leverage function compared with the one popular among practitioners.

Thermoplastic polymer-polymer composites consist of a polymeric matrix and a
polymeric reinforcement. The combination of these materials offers outstanding
mechanical properties at lower weight than standard fiber reinforced materials.
Furthermore, when both polymeric components originate from the same family or, ideally, from the same polymer, their degree of sustainability is higher than that of standard fiber-reinforced composites.
A challenge of polymer-polymer composites is the subsequent processing of their
semi-finished materials by heating techniques. Since the fibers are made of meltable
thermoplastic, the reinforcing fiber structure might be lost during the heating process.
Hence, the mechanical properties of an overheated polymer-polymer composite
would decline and finally end up even lower than those of the neat matrix. Lowering
the process temperature to manage the heating challenge is not reasonable, since it
would increase the cycle time. Therefore, this work pursues the adaptation of a
fast and selective heating method for use with polymer-polymer composites.
Inductively activatable particles, so-called susceptors, were distributed in the
matrix to evoke local heating in the matrix when exposed to an alternating
magnetic field. In this way, the energy input to the fibers is limited.
The experimental series revealed the induction particle heating effect to be mainly
related to susceptor material, susceptor fraction, susceptor distribution as well as
magnetic field strength, coupling distance, and heating time. Proper heating was
achieved with ferromagnetic particles at a filler content of only 5 wt-% in HDPE as
well as in the respective polymer-fiber-reinforced composites. The study included
an analysis of the susceptor impact on mechanical and thermal matrix properties
as well as a degradation evaluation. The susceptors were found to have only a
marginal impact on matrix properties. Furthermore, a semi-empirical simulation
of the particle induction heating was applied, which served to investigate intrinsic
melting processes.
The results of both the experimental and the analytic study were successfully
transferred to a thermoforming process with a polymer-polymer material, which
had been preheated by means of particle induction.

Stochastic Network Calculus (SNC) emerged from two branches in the late 90s:
the theory of effective bandwidths and its predecessor, the Deterministic Network
Calculus (DNC). As such, SNC’s goal is to analyze queueing networks and support
their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses
inequalities to circumvent complex situations, such as stochastic dependencies or
non-Poisson arrivals. Leaving behind the objective of computing exact distributions,
SNC derives stochastic performance bounds. Such a bound would, for example,
guarantee a system’s maximal queue length that is violated only with a known,
small probability.
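Such a bound can be sketched for the MGF-based branch with a toy example (hypothetical parameters, not code from the thesis): for i.i.d. slotted arrivals with moment generating function M(θ) served at constant rate C, a Chernoff-type argument gives P(q ≥ b) ≤ e^(−θb) / (1 − M(θ)e^(−θC)) whenever M(θ)e^(−θC) < 1, and the free parameter θ is optimized numerically.

```python
import math

def backlog_bound(mgf, C, b, thetas):
    """Best Chernoff-type bound on P(backlog >= b) over a grid of theta values."""
    best = 1.0  # a probability bound above 1 is vacuous
    for theta in thetas:
        rho = mgf(theta) * math.exp(-theta * C)  # per-slot decay factor
        if rho < 1.0:  # stability condition for the geometric sum
            best = min(best, math.exp(-theta * b) / (1.0 - rho))
    return best

# Hypothetical Bernoulli arrivals: one packet per slot with probability p
p, C = 0.3, 0.5
mgf = lambda th: 1.0 - p + p * math.exp(th)
thetas = [0.05 * k for k in range(1, 100)]
bound_10 = backlog_bound(mgf, C, 10.0, thetas)
```

The bound decays geometrically in the backlog level b, which is exactly the kind of tail guarantee described above.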
This work includes several contributions towards the theory of SNC. They are
sorted into five main contributions:
(1) The first chapters give a self-contained introduction to deterministic
network calculus and its two branches of stochastic extensions. The focus lies on
the notion of network operations, which allow one to derive performance bounds
and to simplify complex scenarios.
(2) The author created the first open-source tool to automate the steps of
calculating and optimizing MGF-based performance bounds. The tool automatically
calculates end-to-end performance bounds via a symbolic approach; in a second
step, this solution is numerically optimized. A modular design allows users to
implement their own functions, such as traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development
of a statistical network calculus. In many applications the properties of the
involved elements are largely unknown. To that end, assumptions about the
underlying processes are made and backed by measurement-based statistical methods.
This thesis presents a way to integrate possible modeling errors into the bounds
of SNC. As a byproduct, a dynamic view on the system is obtained that allows
SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds:
while deterministic bounds hold for all times of the analyzed system, this is not
true for probabilistic bounds. Stochastic bounds, although valid for every time t,
only hold for one time instance at a time; sample path bounds are only achieved
by using Boole’s inequality. This thesis presents an alternative method by
adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds
for a window flow controller. The corresponding problem for DNC was solved
over a decade ago, but it remained open for SNC. This thesis presents two
methods for a successful application of SNC to the window flow controller.

Efficient time integration and nonlinear model reduction for incompressible hyperelastic materials
(2013)

This thesis deals with the time integration and nonlinear model reduction of nearly incompressible materials that have been discretized in space by mixed finite elements. We analyze the structure of the equations of motion and show that a differential-algebraic system of index 1 with a singular perturbation term needs to be solved. In the limit case the index may jump to 3, which turns the time integration into a difficult problem. For the time integration we apply Rosenbrock methods and study their convergence behavior for a test problem, which highlights the importance of the well-known Scholz conditions for this problem class. Numerical tests demonstrate that such linear-implicit methods are an attractive alternative to established time integration methods in structural dynamics. In the second part we combine the simulation of nonlinear materials with a model reduction step. We use the method of proper orthogonal decomposition and apply it to the discretized system of second order. For the nonlinear model reduction to be efficient, we approximate the nonlinearity by following the lookup approach. In a practical example we show that large CPU time savings can be achieved. This work prepares the ground for including such finite element structures as components in complex vehicle dynamics applications.
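To illustrate why linearly implicit methods are attractive for stiff problems, here is a minimal sketch (my own toy example, not code from the thesis) of the simplest Rosenbrock-type scheme, the linearly implicit Euler step, on the scalar test equation y' = λy: only one linear solve with the Jacobian is needed per step, yet stiff decay is captured where explicit Euler blows up.

```python
def ros1_step(f, J, y, h):
    """One linearly implicit Euler (simplest Rosenbrock-type) step:
    y_new = y + h * (I - h*J)^(-1) * f(y), written here for the scalar case."""
    return y + h * f(y) / (1.0 - h * J)

# Stiff scalar test problem y' = -1000*y, y(0) = 1, with a step size far
# above the explicit-Euler stability limit h < 2/1000.
lam, h = -1000.0, 0.1
f = lambda y: lam * y
y_ros, y_exp = 1.0, 1.0
for _ in range(50):
    y_ros = ros1_step(f, lam, y_ros, h)  # stays stable and decays
    y_exp = y_exp + h * f(y_exp)         # explicit Euler: oscillates and explodes
```

The same structure, with a linear system in place of the scalar division, is what makes Rosenbrock methods cheap per step compared to fully implicit schemes.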

In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.

This thesis deals with diverse problems concerning inflation-linked products. To start with, two models for inflation are presented: a geometric Brownian motion for the consumer price index itself and an extended Vasicek model for the inflation rate. For both suggested models the pricing formulas of inflation-linked products are derived using risk-neutral valuation techniques. As a result, Black-Scholes-type closed-form solutions are calculated for a call option on the inflation index, for the Brownian motion model and the inflation evolution of the extended Vasicek model, as well as for an inflation-linked bond. These results have already been presented in Korn and Kruse (2004) [17]. In addition to these inflation-linked products, for both inflation models the pricing formulas of a European put option on inflation, an inflation cap and floor, an inflation swap and an inflation swaption are derived. Consequently, based on the derived pricing formulas and assuming a geometric Brownian motion process for the inflation index, different continuous-time portfolio problems as well as hedging problems are studied using martingale techniques as well as stochastic optimal control methods. These utility optimization problems are continuous-time portfolio problems in different financial market setups, additionally with a positive lower bound constraint on the final wealth of the investor. Summarizing all the optimization problems studied in this work yields a complete picture of the inflation-linked market and of both counterparts among the market participants, sellers as well as buyers of inflation-linked financial products. One interesting result worth mentioning here is the fact that a regular risk-averse investor would like to sell and not buy inflation-linked products, due, for example, to the high price of inflation-linked bonds and their underperformance compared to conventional risk-free bonds.
The relevance of this observation is proved by investigating a simple optimization problem for the extended Vasicek process, where as a result the inflation-linked bond still underperforms the conventional bond. This situation does not change when one switches to an optimization of expected utility from the purchasing power, because in its nature this is only a change of measure with a different deflator. The negativity of the optimal portfolio process for a normal investor is in itself an interesting aspect, but it does not affect the optimality of handling inflation-linked products compared to the situation where these products are not included in the investment portfolio. In the following, hedging problems are considered, modeling the other half of the inflation market, namely the buyers of inflation-linked products. Natural buyers of these inflation-linked products are obviously institutions that have future payment obligations connected to inflation. That is why we consider problems of hedging inflation-indexed payment obligations with different financial assets. The role of inflation-linked products in the hedging portfolio is shown to be very important by analyzing two alternative optimal hedging strategies, where in the first one the investor is allowed to trade an inflation-linked bond and in the second one he is not allowed to include an inflation-linked bond in his hedging portfolio. Technically this is done by restricting our original financial market, which consists of a conventional bond, the inflation index and a stock correlated with the inflation index, to one where the inflation index is excluded. As a whole, this thesis presents a wide view on inflation-linked products: inflation modeling, pricing aspects of inflation-linked products, various continuous-time portfolio problems with inflation-linked products, as well as hedging of inflation-related payment obligations.
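A Black-Scholes-type closed-form price of the kind derived for the geometric Brownian motion inflation model can be sketched as follows (a generic Black-Scholes call on an index following a GBM, with hypothetical parameter values; this is the standard textbook formula, not necessarily the exact variant of Korn and Kruse (2004)):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_on_index(I0, K, r, sigma, T):
    """Black-Scholes-type price of a European call on an index following a GBM."""
    d1 = (math.log(I0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return I0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: index at 100, strike 100, 5% rate, 20% volatility, 1 year
price = call_on_index(100.0, 100.0, 0.05, 0.2, 1.0)
```

With the consumer price index in the role of the underlying, this is the structural template behind the closed-form solutions mentioned above.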

Test rig optimization
(2014)

Designing good test rigs for fatigue life tests is a common task in the automotive
industry. The problem of finding an optimal test rig configuration and actuator
load signals can be formulated as a mathematical program. We introduce a new
optimization model that includes multi-criteria, discrete and continuous aspects.
At the same time we manage to avoid the necessity of dealing with the
rainflow-counting (RFC) method. RFC is an algorithm that extracts load cycles
from an irregular time signal. As a mathematical function it is non-convex and
non-differentiable and, hence, makes optimization of the test rig intractable.
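To make concrete what the model avoids, here is a simplified three-point rainflow sketch (my own illustration, not the thesis' formulation; unlike ASTM-style RFC it counts every closed range as a full cycle and returns the leftover ranges as residuals). It shows why RFC is awkward inside an optimizer: its output depends on the signal through comparisons and deletions rather than through any smooth formula.

```python
def rainflow(signal):
    """Simplified rainflow counting: returns (closed cycle ranges, residual ranges)."""
    # Reduce the signal to its turning points (local extrema).
    tp = [signal[0]]
    for x in signal[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x  # still moving in the same direction
        elif x != tp[-1]:
            tp.append(x)
    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng >= y_rng:        # the inner pair forms a closed cycle
                cycles.append(y_rng)
                del stack[-3:-1]      # remove the two points of that cycle
            else:
                break
    residual = [abs(b - a) for a, b in zip(stack, stack[1:])]
    return cycles, residual
```

Even this simplified variant is piecewise-defined and non-differentiable in the signal values, which is exactly the obstruction the block-structured model sidesteps.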
The block structure of the load signals is assumed from the beginning. This
greatly reduces the complexity of the problem without shrinking the feasible set.
We also optimize with respect to the actuators’ positions, which makes it possible
to take torques into account and thus extends the feasible set. As a result, the
new model gives significantly better results compared with other approaches to
test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of convex
problems on cones. Numerical methods for optimization usually need constraints
and a starting point. We describe an algorithm that detects each cone and an
interior point of it in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every instance
of the state vector, a sum of functions has to be maximized. We propose a new
branch-and-bound technique that uses local maxima of every summand.

Specification of asynchronous circuit behaviour becomes more complex as the
complexity of today’s System-on-a-Chip (SOC) designs increases. This also causes
the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification
of asynchronous circuit behaviour – to become bigger and more complex, which
makes it more difficult, sometimes even impossible, to synthesize an asynchronous
circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00].
It has, therefore, been suggested to decompose the STG as a first step; this leads
to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis
effort by possibly avoiding state explosion or by allowing the use of library
elements. Decomposition approaches for STGs were presented in [VW02] [KKT93]
[Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based
on that of Chu [Chu87a] but is much more generally applicable than the ones in
[KKT93] and [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with the Petri net background, described in chapter 2.
It starts with a class of Petri nets called place/transition (P/T) nets. Then STGs,
a subclass of P/T nets, are reviewed. Background on net decomposition is
presented in chapter 3. It begins with the structural decomposition of P/T nets
for analysis purposes – liveness and boundedness of the net. Then the STG
decomposition for synthesis from [VW02] is described.
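As a quick illustration of the P/T net background (the generic textbook firing rule, not code from the dissertation): a transition is enabled if each of its input places holds enough tokens, and firing it moves tokens from the input to the output places.

```python
def enabled(marking, pre, t):
    """A transition t is enabled if every input place carries enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre[t].items())

def fire(marking, pre, post, t):
    """Fire t: consume tokens from input places, produce tokens on output places."""
    if not enabled(marking, pre, t):
        raise ValueError(f"transition {t} is not enabled")
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m

# Toy net: transition t1 moves a token from place p1 to place p2
pre  = {"t1": {"p1": 1}}
post = {"t1": {"p2": 1}}
m0 = {"p1": 1, "p2": 0}
m1 = fire(m0, pre, post, "t1")
```

STGs refine this picture by labelling transitions with signal edges; the firing rule itself stays the same.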
The decomposition method from [VW02] can still be improved to deal with
STGs from real applications and to give better decomposition results. Improvements
to [VW02] that yield better decomposition results and increase the efficiency of
the algorithm are discussed in chapter 4. These improvement ideas were suggested
in [KVWB04], and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find
an output block component. A large amount of work has to be done to reduce
an initial specification until the final component is found. This reduction is not
always possible, which causes inputs initially classified as irrelevant to become
relevant inputs for the component. But under certain conditions (e.g. if structural
auto-conflicts turn out to be non-dynamic) some of them can be reclassified as
irrelevant. If this is not done, the specifications become unnecessarily large, which
in turn leads to unnecessarily large implemented circuits. Instead of reduction, a
new approach, presented in chapter 5, decomposes the original net into structural
components first. An initial output block component is found by composing the
structural components. Then, a final output block component is obtained by net
reduction.
Since we deal with the structure of a net most of the time, it is useful to have
a structural abstraction of the net. A structural abstraction algorithm [Kan03] is
presented in chapter 6. It can improve the performance of finding an output block
component in most cases [War05] [Taw04]. Also, the structure graph is in most
cases smaller than the net itself. This increases the efficiency of the decomposition
algorithm, because it allows the transitions contained in a node of the structure
graph to be contracted at the same time if the structure graph is used as the
internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous
circuit design. Application to speed-independent circuits is discussed first. After
that, 3D circuits synthesized from extended burst mode (XBM) specifications are
discussed. An algorithm for translating STG specifications to XBM specifications
was first suggested in [BEW99]. This algorithm first derives the state machine
from the STG specification and then translates the state machine into an XBM
specification. An XBM specification, though it is a state machine, allows some
concurrency. This concurrency can be translated directly, without deriving all of
the possible states. An algorithm which directly translates STGs to XBM
specifications is presented in chapter 7.3.1. Finally DESI, a tool to decompose
STGs, and its decomposition results are presented.

Fast Internet content delivery relies on two layers of caches on the request path. Firstly, content delivery networks (CDNs) seek to answer user requests before they traverse slow Internet paths. Secondly, aggregation caches in data centers seek to answer user requests before they traverse slow backend systems. The key challenge in managing these caches is the high variability of object sizes, request patterns, and retrieval latencies. Unfortunately, most existing literature focuses on caching with low (or no) variability in object sizes and ignores the intricacies of data center subsystems.
This thesis seeks to fill this gap with three contributions. First, we design a new caching system, called AdaptSize, that is robust under high object size variability. Second, we derive a method (called Flow-Offline Optimum or FOO) to predict the optimal cache hit ratio under variable object sizes. Third, we design a new caching system, called RobinHood, that exploits variances in retrieval latencies to deliver faster responses to user requests in data centers.
The techniques proposed in this thesis significantly improve the performance of CDN and data center caches. On two production traces from one of the world's largest CDNs, AdaptSize achieves 30-91% higher hit ratios than widely-used production systems, and 33-46% higher hit ratios than state-of-the-art research systems. Further, AdaptSize reduces the latency by more than 30% at the median, the 90th percentile and the 99th percentile.
We evaluate the accuracy of our FOO analysis technique on eight different production traces spanning four major Internet companies.
We find that FOO's error is at most 0.3%. Further, FOO reveals that the gap between online policies and OPT is much larger than previously thought: 27% on average, and up to 43% on web application traces.
We evaluate RobinHood with production traces from a major Internet company on a 50-server cluster. We find that RobinHood improves the 99th-percentile latency by more than 50% over existing caching systems.
As load imbalances grow, RobinHood's latency improvement can exceed 2x. Further, we show that RobinHood is robust against server failures and adapts to automatic scaling of backend systems.
The results of this thesis demonstrate the power of guiding the design of practical caching policies using mathematical performance models and analysis. These models are general enough to find application in other areas of caching design and future challenges in Internet content delivery.
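The size-variability challenge can be made concrete with a toy sketch (my own illustration; AdaptSize itself is reported to admit objects with probability exp(−size/c) and to tune c online using a performance model, which is far more elaborate than this): an LRU cache behind a size-based probabilistic admission filter.

```python
import math
import random
from collections import OrderedDict

class SizeAwareLRU:
    """LRU cache that admits a missed object with probability exp(-size / c)."""
    def __init__(self, capacity, c, seed=0):
        self.capacity, self.c = capacity, c
        self.cache = OrderedDict()  # key -> size, least recently used first
        self.used = 0
        self.rng = random.Random(seed)

    def request(self, key, size):
        """Return True on a hit; on a miss, maybe admit the object."""
        if key in self.cache:
            self.cache.move_to_end(key)
            return True
        if size <= self.capacity and self.rng.random() < math.exp(-size / self.c):
            while self.used + size > self.capacity:  # evict least recently used
                _, evicted = self.cache.popitem(last=False)
                self.used -= evicted
            self.cache[key] = size
            self.used += size
        return False
```

With c very large the filter admits everything and the cache degenerates to plain LRU; with c small, large objects are rarely admitted, protecting many small objects from being evicted by a single large one.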

Herbivory is discussed as a key agent in maintaining the dynamics and stability of tropical forested ecosystems. Accordingly, increasing attention has been paid to the factors that structure tropical herbivore communities. The aim of this study was (1) to describe the diversity, density, distribution and host range of the phasmid community (Phasmatodea) of a moist neotropical forest in Panamá, and (2) to experimentally assess bottom-up and top-down factors that may regulate populations of the phasmid Metriophasma diocles. The phasmid community of Barro Colorado Island was poor in species and low in density. Phasmids mainly occurred along forest edges, and the restricted host ranges of phasmid species reflected the successional status of their host plants. Only M. diocles, which fed on early and late successional plants, occurred regularly in the forest understory. A long generation time combined with comparably low fecundity translated into a low biotic potential of M. diocles. However, the modeled potential population density increased exponentially and exceeded the realized densities of this species already after one generation, indicating that control factors continuously affect natural populations of M. diocles. Egg hatching failure decreased potential population growth by 10 % but had no marked effect at larger temporal scales. Interspecific differences in defensive physical and chemical leaf traits of M. diocles host plants, among them leaf toughness, supposedly the most effective anti-herbivore defense, seemed not to affect adult female preference and nymph performance. As an alternative to these defenses, I suggest that the pattern of differential preference and performance may be based on interspecific differences in qualitative toxic compounds or in the nutritive quality of leaves. The significant rejection by nymphs of leaf tissue with a low artificial increase of natural phenol contents indicated a qualitative defensive pathway in Piper evolution. In M. diocles, oviposition may not be linked to nymph performance, because the evolutionarily predicted relation between adult female preference and nymph performance was missing. Consequently, the recruitment of nymphs into the reproductive adult phase may be crucially affected by differential nymph performance. Neonate M. diocles nymphs suffered strong predation pressure when exposed to natural levels of predation. Concluding from significantly increased predation-related mortality at night, I argue that arthropods may be the main predators of this nocturnal herbivore. The migratory behavior of nymphs seemed not to reflect predation avoidance. Instead, I provide first evidence that host plant quality may trigger off-plant migration. In conclusion, I suggest that predation pressure, with its direct effects on nymph survival, may be a stronger factor regulating M. diocles populations than direct and indirect effects of host plant quality, particularly because slow growth and off-host migration may both feed back into an increase of predation-related mortality.

Industrial design has a long history. With the introduction of Computer-Aided Engineering, industrial design was revolutionised. Due to the newly found support, the design workflow changed, and with the introduction of virtual prototyping, new challenges arose. These new engineering problems have triggered
new basic research questions in computer science.
In this dissertation, I present a range of methods which support different components of the virtual design cycle, from modifications of a virtual prototype and optimisation of said prototype, to analysis of simulation results.
Starting with a virtual prototype, I support engineers by supplying intuitive discrete normal vectors which can be used to interactively deform the control mesh of a surface. I provide and compare a variety of different normal definitions which have different strengths and weaknesses. The best choice depends on
the specific model and on an engineer’s priorities. Some methods have higher accuracy, whereas other methods are faster.
I further provide an automatic means of surface optimisation in the form of minimising total curvature. This minimisation reduces surface bending, and therefore, it reduces material expenses. The best results can be obtained for analytic surfaces; however, the technique can also be applied to real-world examples.
Moreover, I provide engineers with a curvature-aware technique to optimise mesh quality. This helps to avoid degenerated triangles which can cause numerical issues. It can be applied to any component of the virtual design cycle: as a direct modification of the virtual prototype (depending on the surface definition), during optimisation, or dynamically during simulation.
Finally, I have developed two different particle relaxation techniques that both support two components of the virtual design cycle. The first component for which they can be used is discretisation. To run computer simulations on a model, it has to be discretised. Particle relaxation uses an initial sampling,
and it improves it with the goal of uniform distances or curvature-awareness. The second component for which they can be used is the analysis of simulation results. Flow visualisation is a powerful tool in supporting the analysis of flow fields through the insertion of particles into the flow, and through tracing their movements. The particle seeding is usually uniform, e.g. for an integral surface, one could seed on a square. Integral surfaces undergo strong deformations, and they can have highly varying curvature. Particle relaxation redistributes the seeds on the surface depending on surface properties like local deformation or curvature.
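One common discrete normal definition of the kind compared in this work can be sketched as follows (a generic area-weighted scheme, not necessarily one of the exact definitions from the thesis): each vertex normal is the normalized sum of the un-normalized face cross products of its incident triangles, which implicitly weights each face by its area.

```python
import math

def vertex_normals(vertices, triangles):
    """Area-weighted vertex normals for a triangle mesh
    (vertices: list of 3-tuples, triangles: list of index triples)."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        u = [b[d] - a[d] for d in range(3)]
        v = [c[d] - a[d] for d in range(3)]
        # Cross product u x v: its length equals twice the triangle area.
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        for idx in (i, j, k):
            for d in range(3):
                acc[idx][d] += n[d]
    result = []
    for n in acc:
        norm = math.sqrt(sum(x * x for x in n)) or 1.0
        result.append(tuple(x / norm for x in n))
    return result

# Flat unit square in the z = 0 plane, two counter-clockwise triangles
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
normals = vertex_normals(verts, tris)
```

Angle-weighted or uniform averaging are obvious variants; which definition behaves best depends on the model, mirroring the trade-offs discussed above.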

In the present work the concept of decarboxylative couplings and the strategy to use carboxylates as directing groups for C-H functionalizations have been decisively improved in three ways. These concepts emphasize the multifaceted nature of aromatic carboxylic acids as expedient starting materials in homogeneous catalysis to construct highly desirable molecular scaffolds in a straightforward fashion.
In the first project, the restriction of decarboxylative biaryl synthesis to exclusively couple aryl halides with ortho-substituted benzoic acids has been overcome by a holistic optimization of a Cu/Pd bimetallic catalyst system. This provides the long-postulated proof that decarboxylative cross-couplings are not intrinsically limited by the different decarboxylation propensities of benzoic acids or hampered by excess halides, accessing for the first time the entire spectrum of aromatic carboxylic acids as starting materials for decarboxylative biaryl synthesis. The second project uses the carboxyl moiety as a directing group for the ortho-arylation with aryl bromides and -chlorides catalyzed by comparatively inexpensive ruthenium. The carboxylic acid group remains untouched after the ortho-functionalization, opening the possibility of a wealth of further diversifications via decarboxylative ipso-substitutions. Within the same project, a Cu/Ru bimetallic catalyst system was found to be able to switch the decarboxylative biaryl coupling from the ipso- to the ortho-position, complementing the Cu/Pd system developed in the first project. In a third project, a redox-neutral C-C bond formation revealed the full synthetic potential of the carboxyl group. The COOH moiety acts as a classical directing group for the C-H hydroarylation of internal alkynes to form highly desirable 2-vinyl benzoic acids. With propargylic alcohols the hydroarylation is followed by an in situ esterification, showing that after easing the C-H cleavage, the directing group can be transformed into another functional group, thus acting as a transformable directing group. Most importantly, a fascinating new reaction mode is activated by embedding the decarboxylation within the C-H functionalization event. This mode of action is capable of solving regioselectivity issues that inherently occur when dealing with carboxylates as directing groups.
A so-called deciduous directing group is cast off simultaneously within the C-H functionalization event, resulting in an inherently monoselective pathway.
These methods were developed with the permanent goal of ensuring high sustainability. They require neither pre-functionalized starting materials nor additional oxidants and provide access to a number of chemically relevant molecules from abundant, inexpensive and toxicologically innocuous educts.

Proteins of the intermembrane space of mitochondria are generally encoded by nuclear genes and synthesized in the cytosol. A group of small intermembrane space proteins lack classical mitochondrial targeting sequences; instead, these proteins are imported in an oxidation-driven reaction that relies on the activity of two components, Mia40 and Erv1. Both proteins constitute the mitochondrial disulfide relay system. Mia40 functions as an import receptor that interacts with incoming polypeptides via transient, intermolecular disulfide bonds. Erv1 is an FAD-binding sulfhydryl oxidase that activates Mia40 by re-oxidation, but the process by which Erv1 itself is re-oxidized has been poorly understood. Here, I show that Erv1 interacts with cytochrome c, which provides a functional link between the mitochondrial disulfide relay system and the respiratory chain. This mechanism not only increases the efficiency of mitochondrial import by the re-oxidation of Erv1 and Mia40 but also prevents the formation of deleterious hydrogen peroxide within the intermembrane space. Thus, the mitochondrial disulfide relay system is, analogous to that of the bacterial periplasm, connected to the electron transport chain of the inner membrane, which possibly allows an oxygen-dependent regulation of mitochondrial import rates. In addition, I modeled the structure of Erv1 on the basis of the Saccharomyces cerevisiae Erv2 crystal structure in order to gain insight into the molecular mechanism of Erv1. Given the high degree of sequence homology, various characteristics found for Erv2 are also valid for Erv1. Finally, I propose a regulatory function of the disulfide relay system on the respiratory chain. The disulfide relay system senses molecular oxygen levels in mitochondria and is thus able to adapt respiratory chain activity in order to prevent wastage of NADH and production of ROS.

In engineering and science, a multitude of problems exhibit an inherently geometric nature. The computational assessment of such problems requires an adequate representation by means of data structures and processing algorithms. One of the most widely adopted and recognized spatial data structures is the Delaunay triangulation which has its canonical dual in the Voronoi diagram. While the Voronoi diagram provides a simple and elegant framework to model spatial proximity, the core of which is the concept of natural neighbors, the Delaunay triangulation provides robust and efficient access to it. This combination explains the immense popularity of Voronoi- and Delaunay-based methods in all areas of science and engineering. This thesis addresses aspects from a variety of applications that share their affinity to the Voronoi diagram and the natural neighbor concept. First, an idea for the generalization of B-spline surfaces to unstructured knot sets over Voronoi diagrams is investigated. Then, a previously proposed method for \(C^2\) smooth natural neighbor interpolation is backed with concrete guidelines for its implementation. Smooth natural neighbor interpolation is also one of many applications requiring derivatives of the input data. The generation of derivative information in scattered data with the help of natural neighbors is described in detail. In a different setting, the computation of a discrete harmonic function in a point cloud is considered, and an observation is presented that relates natural neighbor coordinates to a continuous dependency between discrete harmonic functions and the coordinates of the point cloud. Attention is then turned to integrating the flexibility and meritable properties of natural neighbor interpolation into a framework that allows the algorithmically transparent and smooth extrapolation of any known natural neighbor interpolant. 
Finally, essential properties are proved for a recently introduced novel finite element tessellation technique in which a Delaunay triangulation is transformed into a unique polygonal tessellation.
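The empty-circumcircle property that characterizes the Delaunay triangulation can be made concrete with the classical "lifted" incircle determinant. The following sketch (illustrative only, not code from the thesis) decides whether a query point lies inside the circumcircle of a counter-clockwise triangle:

```python
def incircle(a, b, c, d):
    # Sign of the 3x3 lifted determinant: positive iff d lies strictly
    # inside the circumcircle of the counter-clockwise triangle (a, b, c),
    # negative iff strictly outside, zero iff the four points are cocircular.
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = rows
    return (ax * (by * cz - bz * cy)
            - ay * (bx * cz - bz * cx)
            + az * (bx * cy - by * cx))

# A triangulation is Delaunay exactly when no input point falls strictly
# inside the circumcircle of any of its triangles.
```

In flip-based or plane-sweep constructions, this predicate drives the local edge-flip decisions that restore the Delaunay property.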

In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
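For reference, the plain binomial method described above can be sketched in a few lines; this is the standard Cox-Ross-Rubinstein tree for a European call (an illustrative baseline, not code or parameters from the thesis):

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial tree price of a European call with
    # spot S0, strike K, rate r, volatility sigma, maturity T, n steps.
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # Discounted expectation over the binomially distributed terminal nodes.
    price = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * q**j * (1.0 - q)**(n - j)
        payoff = max(S0 * u**j * d**(n - j) - K, 0.0)
        price += prob * payoff
    return math.exp(-r * T) * price
```

The slow, oscillating convergence in the number of steps that motivates the asymptotic-expansion analysis becomes visible when consecutive tree sizes are compared against the Black-Scholes limit.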
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions
for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for the multidimensional setting.

This thesis is divided into three main parts: the development of Gaussian and White Noise Analysis, Hamiltonian path integrals as White Noise distributions, and numerical methods for polymers driven by fractional Brownian motion.
Throughout this thesis, Donsker's delta function plays a key role; we investigate this generalized function in Chapter 2. Moreover, we show by giving a counterexample that the general definition for complex kernels does not hold.
In Chapter 3 we take a closer look at generalized Gauss kernels and generalize these concepts to the case of vector-valued White Noise. These results are the basis for Hamiltonian path integrals of quadratic type. The core result of this chapter gives conditions under which pointwise products of generalized Gauss kernels and certain Hida distributions have a mathematically rigorous meaning as distributions in the Hida space.
In Chapter 4 we discuss operators which are related to applications of Feynman integrals, such as differential operators, scaling, translation and projection. We show the relation of these operators to differential operators, which leads to the well-known notion of convolution operators. We generalize the central homomorphy theorem to regular generalized functions.
We generalize the concept of complex scaling to scaling with bounded operators and discuss the relation to generalized Radon-Nikodym derivatives. With the help of this, we consider products of generalized functions in Chapter 5. We show that the projection operator from the Wick formula for products with Donsker's delta is not closable on the square-integrable functions.
In Chapter 5 we discuss products of generalized functions and revisit the Wick formula. We investigate under which conditions and to which spaces the Wick formula can be generalized. At the end of the chapter we consider products of Donsker's delta function with a generalized function with the help of a measure transformation; here, issues such as measurability are also addressed.
In Chapter 6 we characterize Hamiltonian path integrands for the free particle, the harmonic oscillator and the charged particle in a constant magnetic field as Hida distributions. This is done in terms of the T-transform and with the help of the results from Chapter 3. For the free particle and the harmonic oscillator we also investigate the momentum space propagators. At the same time, the T-transform of the constructed Feynman integrands provides us with their generating functional. In Chapter 7, we show that the generalized expectation (the generating functional at zero) gives the Green's function of the corresponding Schrödinger equation.
Moreover, with the help of the generating functional we show that the canonical commutation relations for the free particle and the harmonic oscillator in phase space are fulfilled. This confirms, on a mathematically rigorous level, the heuristics developed by Feynman and Hibbs.
In Chapter 8 we give an outlook on how the scaling approach, which is successfully applied in the Feynman integral setting, can be transferred to the phase space setting. We give a mathematically rigorous meaning to a construction analogous to the scaled Feynman-Kac kernel. It remains open whether the expression solves the Schrödinger equation; at least for quadratic potentials we obtain the right physics.
In the last chapter, we focus on the numerical analysis of polymer chains driven by fractional Brownian motion (fBm). Instead of complicated lattice algorithms, our discretization is based on the correlation matrix. Using fBm, one can achieve a long-range dependence in the interaction of the monomers inside a polymer chain. A Metropolis algorithm is used to create the paths of a polymer driven by fBm, taking the excluded volume effect into account.
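A minimal sketch of the correlation-matrix discretization mentioned above, assuming a Cholesky factorization of the fBm covariance \(\operatorname{Cov}(B_H(t),B_H(s)) = \tfrac12\,(t^{2H}+s^{2H}-|t-s|^{2H})\); the Metropolis step and the excluded-volume interaction of the actual thesis are omitted:

```python
import math, random

def fbm_path(n, H, T=1.0, seed=0):
    # Sample a fractional Brownian motion path on an equidistant grid
    # by multiplying the Cholesky factor of the covariance matrix with
    # a vector of independent standard normals.
    t = [T * (i + 1) / n for i in range(n)]
    C = [[0.5 * (ti**(2*H) + tj**(2*H) - abs(ti - tj)**(2*H))
          for tj in t] for ti in t]
    # Plain Cholesky factorization C = L L^T (C is positive definite
    # for distinct time points).
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    path = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
    return t, path
```

Paths generated this way would then feed the Metropolis acceptance step of the polymer simulation.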

In this thesis we classify simple coherent sheaves on Kodaira fibers of types II, III and IV (cuspidal and tacnode cubic curves and a plane configuration of three concurrent lines). Indecomposable vector bundles on smooth elliptic curves were classified in 1957 by Atiyah. In works of Burban, Drozd and Greuel it was shown that the categories of vector bundles and coherent sheaves on cycles of projective lines are tame. It turns out that all other degenerations of elliptic curves are vector-bundle-wild. Nevertheless, we prove that the category of coherent sheaves on an arbitrary reduced plane cubic curve (including the mentioned Kodaira fibers) is brick-tame. The main technical tool of our approach is the representation theory of bocses. Although this technique has mainly been used for purely theoretical purposes, we illustrate its computational potential for investigating tame behavior in wild categories. In particular, it allows us to prove that a simple vector bundle on a reduced cubic curve is determined by its rank, multidegree and determinant, generalizing Atiyah's classification. Our approach leads to an interesting class of bocses, which can be wild but are brick-tame.

Gröbner bases are one of the most powerful tools in computer algebra and commutative algebra, with applications in algebraic geometry and singularity theory. From the theoretical point of view, these bases can be computed over any field using Buchberger's algorithm. In practice, however, the computational efficiency depends on the arithmetic of the coefficient field.
In this thesis, we consider Gröbner bases computations over two types of coefficient fields. First, consider a simple extension \(K=\mathbb{Q}(\alpha)\) of \(\mathbb{Q}\), where \(\alpha\) is an algebraic number, and let \(f\in \mathbb{Q}[t]\) be the minimal polynomial of \(\alpha\). Second, let \(K'\) be the algebraic function field over \(\mathbb{Q}\) with transcendental parameters \(t_1,\ldots,t_m\), that is, \(K' = \mathbb{Q}(t_1,\ldots,t_m)\). In particular, we present efficient algorithms for computing Gröbner bases over \(K\) and \(K'\). Moreover, we present an efficient method for computing syzygy modules over \(K\).
To compute Gröbner bases over \(K\), starting from the ideas of Noro [35], we proceed by adjoining \(f\) to the ideal under consideration, adding \(t\) as an extra variable. But instead of avoiding superfluous S-pair reductions by inverting algebraic numbers, we achieve the same goal by applying modular methods as in [2,4,27], that is, by inferring information in characteristic zero from information in characteristic \(p > 0\). For suitable primes \(p\), the minimal polynomial \(f\) is reducible over \(\mathbb{F}_p\). This allows us to apply modular methods once again, on a second level, with respect to the
modular factors of \(f\). The algorithm thus resembles a divide and conquer strategy and
is in particular easily parallelizable. Moreover, using a similar approach, we present an algorithm for computing syzygy modules over \(K\).
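The lifting step at the heart of such modular methods, recovering a rational coefficient from its image modulo a prime, can be sketched as follows (Wang's rational reconstruction via the extended Euclidean algorithm; an illustration, not the SINGULAR implementation):

```python
from math import gcd, isqrt

def rational_reconstruction(a, p):
    # Lift a residue a (mod p) to a fraction x/y with |x|, y <= sqrt(p/2).
    # This is the step that infers characteristic-zero coefficients from
    # computations in characteristic p. Returns (x, y) or None.
    bound = isqrt(p // 2)
    r0, s0 = p, 0
    r1, s1 = a % p, 1
    while r1 > bound:                 # run the extended Euclidean algorithm
        q = r0 // r1                  # until the remainder is small enough;
        r0, r1 = r1, r0 - q * r1      # the invariant r_i = s_i * a (mod p)
        s0, s1 = s1, s0 - q * s1      # makes r1/s1 a candidate fraction
    if s1 == 0 or abs(s1) > bound or gcd(r1, abs(s1)) != 1:
        return None
    return (r1, s1) if s1 > 0 else (-r1, -s1)
```

In a modular Gröbner basis computation, each coefficient of the basis mod \(p\) (or mod a product of primes, after Chinese remaindering) is lifted this way and the result is then verified or used probabilistically.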
On the other hand, to compute Gröbner bases over \(K'\), our new algorithm first specializes the parameters \(t_1,\ldots,t_m\) to reduce the problem from \(K'[x_1,\ldots,x_n]\) to \(\mathbb{Q}[x_1,\ldots,x_n]\). The algorithm then computes a set of Gröbner bases of specialized ideals. From this set of Gröbner bases with coefficients in \(\mathbb{Q}\), it obtains a Gröbner basis of the input ideal using sparse multivariate rational interpolation.
In their current state, these algorithms are probabilistic in the sense that, as for other modular Gröbner basis computations, an effective final verification test is only known for homogeneous ideals or for local monomial orderings. The presented timings show that for most examples, our algorithms, which have been implemented in SINGULAR [17], are considerably faster than other known methods.

Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
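For the simplest curve shapes these two bounds have well-known closed forms; the sketch below (standard textbook NC results for a token-bucket arrival curve \(\alpha(t) = b + rt\) and a rate-latency service curve \(\beta(t) = R \cdot \max(t - T, 0)\), not specific to this thesis) computes both:

```python
def nc_bounds(b, r, R, T):
    # Deterministic Network Calculus bounds for a token-bucket flow
    # (burst b, sustained rate r) crossing a rate-latency server
    # (service rate R, latency T), assuming the stability condition r <= R.
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    delay_bound = T + b / R      # max horizontal deviation between curves
    backlog_bound = b + r * T    # max vertical deviation between curves
    return delay_bound, backlog_bound
```

For example, a flow with burst b = 2 Mb and rate r = 5 Mb/s crossing a server with R = 10 Mb/s and latency T = 0.1 s is delayed by at most 0.3 s and queues at most 2.5 Mb.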
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and most significantly the algNC is extended with optimization principles. They allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.

Today's ubiquity of visual content, driven by the availability of broadband Internet, low-priced storage, and the omnipresence of camera-equipped mobile devices, conveys much of our thinking and feeling as individuals and as a society. As a result, video repositories are growing at enormous rates, with content now being embedded and shared through social media. To make use of this new form of social multimedia, concept detection, the automatic mapping between semantic concepts and video content, has to be extended such that concept vocabularies are synchronized with current real-world events, systems can perform scalable concept learning with thousands of concepts, and high-level information such as sentiment can be extracted from visual content. To meet these demands, the following three contributions are made in this thesis: (i) concept detection is linked to trending topics, (ii) visual learning from web videos is presented, including the proper treatment of tags as concept labels, and (iii) the extension of concept detection with adjective noun pairs for sentiment analysis is proposed.
In order for concept detection to satisfy users' current information needs, the notion of fixed concept vocabularies has to be reconsidered. This thesis presents a novel concept learning approach built upon dynamic vocabularies, which are automatically augmented with trending topics mined from social media. Once discovered, trending topics are evaluated by forecasting their future progression to predict high impact topics, which are then either mapped to an available static concept vocabulary or trained as individual concept detectors on demand. It is demonstrated in experiments on YouTube video clips that by a visual learning of trending topics, improvements of over 100% in concept detection accuracy can be achieved over static vocabularies (n=78,000).
To remove manual efforts related to training data retrieval from YouTube and noise caused by tags being coarse, subjective and context-dependent, this thesis suggests an automatic concept-to-query mapping for the retrieval of relevant training video material, and active relevance filtering to generate reliable annotations from web video tags. Here, the relevance of web tags is modeled as a latent variable, which is combined with an active learning label refinement. In experiments on YouTube, active relevance filtering is found to outperform both automatic filtering and active learning approaches, leading to a reduction of required label inspections by 75% as compared to an expert annotated training dataset (n=100,000).
Finally, it is demonstrated that concept detection can serve as a key component to infer the sentiment reflected in visual content. To extend concept detection for sentiment analysis, adjective noun pairs (ANP) as novel entities for concept learning are proposed in this thesis. First, a large-scale visual sentiment ontology consisting of 3,000 ANPs is automatically constructed by mining the web. From this ontology a mid-level representation of visual content, SentiBank, is trained to encode the visual presence of 1,200 ANPs. This novel approach of visual learning is validated in three independent experiments on sentiment prediction (n=2,000), emotion detection (n=807) and pornographic filtering (n=40,000). SentiBank is shown to outperform known low-level feature representations (sentiment prediction, pornography detection) or to perform comparably to state-of-the-art methods (emotion detection).
Altogether, these contributions extend state-of-the-art concept detection approaches such that concept learning can be done autonomously from web videos on a large-scale, and can cope with novel semantic structures such as trending topics or adjective noun pairs, adding a new dimension to the understanding of video content.

This thesis aims at an overall improvement of diffusion coefficient predictions. For this reason, the theoretical determination of diffusion, viscosity, and thermodynamics in liquid systems is discussed. Furthermore, the experimental determination of diffusion coefficients is also part of this work. All investigations presented are carried out for organic binary liquid mixtures. Diffusion coefficient data of 9 highly nonideal binary mixtures are reported over the whole concentration range at various temperatures (25, 30, and 35 °C). All mixtures, investigated in a Taylor dispersion apparatus, consist of an alcohol (ethanol, 1-propanol, or 1-butanol) dissolved in hexane, cyclohexane, carbon tetrachloride, or toluene. The uncertainty of the reported data is estimated to be within \(3\cdot 10^{-11}\,\mathrm{m^2\,s^{-1}}\). To compute the thermodynamic correction factor, an excess Gibbs energy model is required. Therefore, the applicability of COSMOSPACE to binary VLE predictions is thoroughly investigated. For this purpose, a new method is developed to determine the required molecular parameters such as segment types, areas, volumes, and interaction parameters. So-called sigma profiles, which describe the screening charge densities appearing on a molecule's surface, form the basis of this approach. To improve the prediction results, a constrained two-parameter fitting strategy is also developed. These approaches are crucial to guarantee the physical significance of the segment parameters. Finally, the prediction quality of this approach is compared to the findings of the Wilson model, UNIQUAC, and the a priori predictive method COSMO-RS for a broad range of thermodynamic situations. The results show that COSMOSPACE yields results of similar quality to the Wilson model, while both perform much better than UNIQUAC and COSMO-RS. Since viscosity also influences the diffusion process, a new mixture viscosity model has been developed on the basis of Eyring's absolute reaction rate theory.
The nonidealities of the mixture are accounted for with the thermodynamically consistent COSMOSPACE approach. The required model and component parameters are derived from sigma profiles, which form the basis of the a priori predictive method COSMO-RS. To improve the model performance, two segment parameters are determined from a least-squares analysis of experimental viscosity data, where a constrained optimisation procedure is applied. In this way the parameters retain their physical meaning. Finally, the viscosity calculations of this approach are compared to the findings of the Eyring-UNIQUAC model for a broad range of chemical mixtures. These results show that the new Eyring-COSMOSPACE approach is superior to the frequently employed Eyring-UNIQUAC method. Finally, on the basis of Eyring's absolute reaction rate theory, a new model for the Maxwell-Stefan diffusivity has been developed. This model, an extension of the Vignes equation, describes the concentration dependence of the diffusion coefficient in terms of the diffusivities at infinite dilution and an additional excess Gibbs energy contribution. This energy part allows the explicit consideration of thermodynamic nonidealities within the modelling of this transport property. If the same set of interaction parameters, which has been derived from VLE data, is applied for this part and for the thermodynamic correction, a theoretically sound modelling of VLE and diffusion can be achieved. The influence of viscosity and thermodynamics on the model accuracy is thoroughly investigated. For this purpose, diffusivities of 85 binary mixtures consisting of alkanes, cycloalkanes, halogenated alkanes, aromatics, ketones, and alcohols are computed. The average relative deviation between experimental data and computed values is approximately 8 %, depending on the choice of the \(g^E\)-model. These results indicate that this model is superior to some widely used methods.
In summary, it can be said that the new approach facilitates the prediction of diffusion coefficients. The final equation is mathematically simple and universally applicable, and the prediction quality is as good as that of other recently developed models, without requiring additional parameters such as pure component physical property data, self-diffusion coefficients, or mixture viscosities. In contrast to many other models, the influence of the mixture viscosity can be omitted. Though a viscosity model is not required for the prediction of diffusion coefficients with the new equation, the models presented in this work allow a consistent modelling of diffusion, viscosity, and thermodynamics in liquid systems.
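The structure of such a Vignes-type model can be sketched numerically; in the illustration below the excess Gibbs energy contribution is replaced by a hypothetical one-parameter Margules correction factor (the function names, the parameter A and the sample diffusivities are illustrative, not values from the thesis):

```python
def gamma_margules(x1, A):
    # Thermodynamic correction factor Gamma = 1 + x1 * d(ln gamma1)/dx1
    # for a one-parameter Margules model ln(gamma1) = A*(1 - x1)^2,
    # which evaluates to 1 - 2*A*x1*(1 - x1).
    return 1.0 - 2.0 * A * x1 * (1.0 - x1)

def fick_diffusivity(x1, D12_inf, D21_inf, gamma):
    # Vignes interpolation of the Maxwell-Stefan diffusivity between the
    # two infinite-dilution values, multiplied by the thermodynamic factor:
    #   D_Fick = (D12_inf)^(1-x1) * (D21_inf)^x1 * Gamma
    x2 = 1.0 - x1
    return D12_inf**x2 * D21_inf**x1 * gamma
```

At the composition limits the interpolation reduces to the respective infinite-dilution diffusivity, and for an ideal mixture (A = 0) the correction factor is identically one.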

This dissertation is intended to give a systematic treatment of hypersurface singularities in arbitrary characteristic which provides the necessary tools, theoretically and computationally, for the purpose of classification. The thesis consists of five chapters: In Chapter 1, we introduce the background on isolated hypersurface singularities needed for our work. In Chapter 2, we formalize the notion of piecewise-homogeneous grading and discuss thoroughly non-degeneracy in arbitrary characteristic. Chapter 3 is devoted to determinacy and normal forms of isolated hypersurface singularities. In the first part, we give finite determinacy theorems in arbitrary characteristic with respect to right and contact equivalence, respectively. Furthermore, we show that the properties of being isolated and being finitely determined are equivalent. In the second part, we formalize Arnol'd's key ideas for the computation of normal forms and define the conditions (AA) and (AAC). The last part of Chapter 3 is devoted to the study of normal forms in the general setting of hypersurface singularities, imposing neither condition (A) nor Newton non-degeneracy. In Chapter 4, we present algorithms, implemented in Singular, for the explicit computation of regular bases and normal forms. In Chapter 5, we transfer some classical results on invariants over the field \(\mathbb{C}\) of complex numbers to algebraically closed fields of characteristic zero, a transfer known as the Lefschetz principle.

This thesis generalizes the Cohen-Lenstra heuristic for the class groups of real quadratic number fields to higher class groups. A "good part" of the second class group is defined; in general this is a non-abelian proper factor group of the second class group. Properties of those groups are described, and a probability distribution on the set of those groups is introduced and proposed as a generalization of the Cohen-Lenstra heuristic for real quadratic number fields. The calculation of number field tables which contain information about higher class groups is explained, and the tables are compared to the heuristic; the agreement is close. A program which can create an internet database for number field tables is presented.

Lung cancer, mainly caused by tobacco smoke, is the leading cause of cancer mortality. Large efforts in prevention and cessation have reduced smoking rates in the U.S. and other countries. Nevertheless, since 1990, rates have remained constant and it is believed that most of those currently smoking (~25%) are addicted to nicotine and therefore unable to stop smoking. An alternative strategy to reduce lung cancer mortality is the development of chemopreventive mixtures used to reduce cancer risk. Before entering clinical trials, it is crucial to know the efficacy, toxicity and the molecular mechanism by which the active compounds prevent carcinogenesis. 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), N-nitrosonornicotine (NNN) and benzo[a]pyrene (B[a]P) are among the most carcinogenic compounds in tobacco smoke. All have been widely used as model carcinogens and their tumorigenic activities are well established. It is believed that formation of DNA adducts is a crucial step in carcinogenesis. NNK and NNN form 4-hydroxy-1-(3-pyridyl)-1-butanone (HPB)-releasing and methylating adducts, while B[a]P forms B[a]P-tetraol-releasing adducts. Different isothiocyanates (ITCs) are able to prevent NNK-, NNN- or B[a]P-induced tumor formation, but relatively little is known about the mechanism of these preventive effects. In this thesis, the influence of different ITCs on adduct formation from NNK plus B[a]P and from NNN was evaluated. Using an A/J mouse lung tumor model, it was first shown that the formation of HPB-releasing, O6-mG and B[a]P-tetraol-releasing adducts was not affected when NNK and B[a]P were given individually or in combination by gavage. Using the same model, the effects of different mixtures of PEITC and BITC, given by gavage or in the diet, on DNA adduct formation were evaluated. Dietary treatment with phenethyl isothiocyanate (PEITC) or PEITC plus benzyl isothiocyanate (BITC) reduced levels of HPB-releasing adducts by 40-50%.
This is consistent with a previously shown 40% inhibition of tumor multiplicity for the same treatment. In the gavage treatments with ITCs, PEITC appeared to reduce HPB-releasing DNA adducts, while BITC counteracted these effects. Levels of O6-mG were minimally affected by any of the treatments. Levels of B[a]P-tetraol-releasing adducts were reduced by gavaged PEITC and BITC, 120 h after the last carcinogen treatment, while dietary treatment had no effect. We then extended our investigation to F-344 rats by using a similar ITC treatment protocol as in the mouse model. NNK was given in the drinking water and B[a]P in the diet. Dietary PEITC reduced the formation of HPB-releasing globin and DNA adducts in lung but not in liver, while levels of B[a]P-tetraol-releasing adducts were unaffected. Additionally, the effects of PEITC, 3-phenylpropyl isothiocyanate, and their N-acetylcysteine conjugates in the diet on adducts from NNN in drinking water were evaluated in rat esophageal DNA and globin. Using a protocol known to inhibit NNN-induced esophageal tumorigenesis, the levels of HPB-releasing adducts were unaffected by the ITC treatment. The observations that dietary PEITC inhibited the formation of HPB-releasing DNA adducts only in mice where the control levels were above 1 fmol/µg DNA, and that adduct levels in rat lung were reduced to levels seen in liver, lead to the conclusion that in mice and rats there are at least two activation pathways of NNK. One is PEITC-sensitive and responsible for the high adduct levels in lung and presumably also for the higher carcinogenicity of NNK in lung. The other is PEITC-insensitive and responsible for the remaining adduct levels and tumorigenicity. In conclusion, our results demonstrated that the preventive mechanism by which ITCs inhibit carcinogenesis is only in part due to inhibition of DNA adduct formation and that other mechanisms are involved.
There is a large body of evidence indicating that induction of apoptosis may be a mechanism by which ITCs prevent tumor formation, but further studies are required.

Embedded systems have become ubiquitous in everyday life, and especially in the automotive industry. New applications challenge their design by introducing a new class of problems that are based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane, which contains polygons to represent the objects of the environment. Usual implementations of computational geometry algorithms cannot be directly used for safety-critical systems. First, a strict analysis of their correctness is indispensable, and second, nonfunctional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised at first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane sweep and polygon algorithms. These layers not only divide the whole system into manageable parts but also make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
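The lowest layer described above, a geometry kernel on top of rational numbers, can be illustrated with the exact orientation predicate; this sketch uses Python's Fraction as a stand-in for the thesis' formally verified kernel:

```python
from fractions import Fraction

def orientation(p, q, r):
    # Exact 2D orientation predicate over rational coordinates:
    # +1 = counter-clockwise turn, -1 = clockwise turn, 0 = collinear.
    px, py = map(Fraction, p)
    qx, qy = map(Fraction, q)
    rx, ry = map(Fraction, r)
    # Sign of the 2x2 determinant of (q - p) and (r - p); computed
    # exactly, so it never suffers floating-point sign errors near
    # degenerate configurations.
    det = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (det > 0) - (det < 0)
```

Plane-sweep and polygon algorithms built on such exact predicates inherit their robustness, which is one motivation for the layered design.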

Product development with end-user integration is not an end in itself but a logical necessity due to the divergent types of knowledge held by the user and the developer of a product. While the user is an expert with regard to the product's usage, the developer is an expert in the product's construction and functioning. For the development of high-end products, both types of expertise have always been a prerequisite. The efficient and thorough integration of the user's perspective into existing product development approaches is the core of user-centred product development. Activities that are the basic ingredient of any user-centred development approach can be roughly categorized into analysis, design and evaluation activities. Research and practice prove that the early integration of real end-users within those activities adds significant and sustainable value to product innovation. The instrumental, methodological and procedural impact of globalization tendencies, on modern user-centred product development in particular, is the primary research focus of the field of cross-cultural user-centred product development. This research aims at the further advancement of the methodological foundations of cross-cultural user-centred product development approaches on a stable and profound theoretical basis. Primary research objects are established user-analysis methodologies, which are mainly based on Western concepts and theories, and their applicability in the disparate cultural contexts of the Far East (China and Korea in particular). To facilitate the adaptation of abstract method characteristics to the situational context of method application, as a foundation of cross-cultural methodological advancement, a model of method localization was developed. In alignment with internationalization and localization activities within product development processes, a framework for localizing user-centred methodologies was developed.
Equivalent to internationalization activities in real product development, the abstraction of method traits from specific methodologies is necessary in a first step. Methodological adaptation with the primary objective of optimizing the situational application of a methodology is done in a second step: method localization. This model of method localization and its underlying theories and principles were tested in an extensive empirical study in Germany, China and Korea. Within this study, the applicability of six distinct user-centred product development methodologies, each with its own profile of abstract method traits, was tested with 248 participants in total. The results clearly back the basic hypothesis of method localization, i.e. that the applicability of a user-centred methodology rises and falls with the alignment of its characteristic traits with the cross-cultural application context. Moreover, the applicability-influencing factors identified within this study proved to be valid indicators of the adaptation necessities and potentials of user-centred product development methodologies.

This PhD thesis aims at finding a global robot navigation strategy for rugged off-road terrain which is robust against inaccurate self-localization, scalable to large environments, and also cost-efficient, i.e. able to generate navigation paths which optimize a cost measure closely related to terrain traversability. To meet this goal, aspects of both metrical and topological navigation techniques are combined. A primarily topological map is extended with the previously lacking capability of cost-efficient path planning and map extension. Further innovations include a multi-dimensional cost measure for topological edges, a method to learn these costs from live feedback from the robot, and a set of extrapolation methods to predict the traversability costs of untraversed edges. The thesis presents two sophisticated new image analysis techniques to optimize cost prediction based on the shape and appearance of the surrounding terrain. Experimental results indicate that the proposed global navigation system is indeed able to perform cost-efficient, large-scale path planning. At the same time, it avoids the need to maintain a fine-grained global world model, which would reduce the scalability of the approach.

As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, the economic realization of such systems is a challenge for the chip-making industry. This is, however, barely possible today, as the ability to design and verify such complex systems has not kept up with the rapid technological development. Owing to this productivity gap, a design methodology based mainly on pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. Cost-effectively, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows the use of backtrack search procedures such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, featuring regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackle these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL). This work describes a new approach for property checking on the RTL, preserving the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence from the search space. Several approaches to symmetry reduction in search problems, based on the invariance of a function under permutation of its variables, have been proposed previously. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases. 
Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), either all of them are removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we have found a counter-example with either \(x'=0\) or \(x'=1\). Extending this basic idea to vectors of variables allows elevating it to the RTL. There, self-similarities in the function representation, resulting from the preserved regular structure, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs that were previously intractable. The main contributions of this work are a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
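The definition above can be checked by brute force on small examples. The following sketch is ours, not from the thesis: it enumerates all permutations of the remaining variables, so it is only feasible for tiny functions, but it makes the condition \(f|_{x=0} = \pi(f|_{x=1})\) concrete.

```python
from itertools import permutations, product

def symmetrical_values(f, n, x):
    """Check whether 0 and 1 are symmetrical values for variable index x
    in the n-variable Boolean function f (a Python callable).
    Returns a witness permutation of the remaining variables, or None."""
    rest = [i for i in range(n) if i != x]

    def cofactor(val):
        # truth table of f with variable x fixed to val
        table = {}
        for bits in product((0, 1), repeat=len(rest)):
            full = [0] * n
            full[x] = val
            for i, b in zip(rest, bits):
                full[i] = b
            table[bits] = f(*full)
        return table

    f0, f1 = cofactor(0), cofactor(1)
    for perm in permutations(range(len(rest))):
        # does permuting the remaining variables map f|x=1 onto f|x=0?
        if all(f0[bits] == f1[tuple(bits[p] for p in perm)] for bits in f0):
            return perm
    return None

# f = (x AND y) OR (NOT x AND z): the cofactors z and y coincide up to
# swapping y and z, so 0 and 1 are symmetrical values for x.
f = lambda x, y, z: (x & y) | ((1 - x) & z)
```

For f = x AND y, in contrast, the cofactors are the constant 0 and y, so no permutation exists and the variable cannot be removed.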

In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time take less time. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is a stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important topic; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that the damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
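To illustrate why block loads make the damage smooth, here is a minimal sketch under common textbook assumptions (a Basquin S-N curve with Palmgren-Miner summation; the function names, parameters and values are illustrative, not those of the thesis). With block loads the total damage is a plain power-law sum over the blocks, with a closed-form gradient, whereas cycle-counted damage admits no such derivative.

```python
import numpy as np

def total_damage(amplitudes, cycles, s_ref=400.0, n_ref=1e6, k=5.0):
    """Palmgren-Miner damage of a block-load program under a Basquin
    S-N curve N(S) = n_ref * (s_ref / S)**k.  With block loads no
    cycle counting is needed and D is smooth in the amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    cycles = np.asarray(cycles, dtype=float)
    return np.sum(cycles * (amplitudes / s_ref) ** k / n_ref)

def damage_gradient(amplitudes, cycles, s_ref=400.0, n_ref=1e6, k=5.0):
    """Closed-form gradient dD/dS_i -- exactly what cycle-counting
    based damage computations do not provide."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    cycles = np.asarray(cycles, dtype=float)
    return cycles * k * amplitudes ** (k - 1) / (s_ref ** k * n_ref)
```

The analytic gradient agrees with a finite-difference check, which is what makes damage usable inside a gradient-based (WSDP) objective.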
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the plane-angle interval to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters for the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in GMMs are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the conditions given for the unimodality of GMMs by Aprausheva et al. in [1].
Using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions of the sum into clusters. Each cluster obtained is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate point of the maximum of each cluster is used as the starting point for a fixed-point equation on the original damage function to get the actual maximum total damage at each iteration.
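The fixed-point equation can be sketched as follows: setting the derivative of a sum of Gaussian functions to zero and solving for the angle yields a weighted-mean update. The weights, centres, widths and starting point below are purely illustrative, not data from the thesis.

```python
import numpy as np

w = np.array([1.0, 0.8])      # illustrative weights (not probabilities)
mu = np.array([0.0, 5.0])     # centres of the Gaussian damage terms
sig = np.array([1.0, 1.2])    # widths

def g(t):
    """Sum of Gaussian functions approximating total damage at angle t."""
    return np.sum(w * np.exp(-(t - mu) ** 2 / (2 * sig ** 2)))

def fixed_point_step(t):
    """Setting g'(t) = 0 and solving for t gives the fixed-point map
    t <- sum(a_i mu_i / sig_i^2) / sum(a_i / sig_i^2),
    with a_i the Gaussian terms evaluated at t."""
    a = w * np.exp(-(t - mu) ** 2 / (2 * sig ** 2))
    return np.sum(a * mu / sig ** 2) / np.sum(a / sig ** 2)

t = 0.3                        # start near a cluster's approximate maximum
for _ in range(100):
    t_new = fixed_point_step(t)
    if abs(t_new - t) < 1e-12:
        break
    t = t_new
```

Started near a cluster's approximate maximum, the iteration converges to a critical point of the original sum; in the thesis's scheme this recovers the actual maximum from the cluster-wise starting points.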
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage takes fewer iterations and is more reliable than using discretization.

The applications behind the subject of this thesis are multiscale simulations of highly heterogeneous particle-reinforced composites with large jumps in their material coefficients. Such simulations are used, e.g., for the prediction of elastic properties. As the underlying microstructures have very complex geometries, a discretization by means of finite elements typically involves very finely resolved meshes. This results in discretized linear systems of more than \(10^8\) unknowns which need to be solved efficiently. However, the variation of the material coefficients even on very small scales causes most available methods to fail when solving the arising linear systems. While robust domain decomposition methods have been developed for scalar elliptic problems of multiscale character, their extension and application to 3D elasticity problems still needs to be established.
The focus of the thesis lies in the development and analysis of robust overlapping domain decomposition methods for multiscale problems in linear elasticity. The method combines corrections on local subdomains with a global correction on a coarser grid. As the robustness of the overall method is mainly determined by how well small-scale features of the solution can be captured on the coarser grid levels, robust multiscale coarsening strategies need to be developed which properly transfer information between fine and coarse grids.
We carry out a detailed and novel analysis of two-level overlapping domain decomposition methods for elasticity problems. The study also provides a concept for the construction of multiscale coarsening strategies to robustly solve the discretized linear systems, i.e. with iteration numbers independent of variations in the Young's modulus and the Poisson ratio of the underlying composite. The theory also captures anisotropic elasticity problems and allows applications to multi-phase elastic materials with non-isotropic constituents in two and three spatial dimensions.
Moreover, we develop and construct new multiscale coarsening strategies and show why they should be preferred over standard ones on several model problems. In a parallel implementation (MPI) of the developed methods, we present applications to real composites and robustly solve discretized systems of more than \(200\) million unknowns.
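The two-level structure (local overlapping solves plus a coarse-grid correction) can be illustrated on a toy problem. The sketch below applies a two-level additive Schwarz preconditioner within conjugate gradients to a 1D Poisson model, not to the 3D elasticity systems of the thesis; all sizes, subdomain choices and the coarse space are illustrative.

```python
import numpy as np

n = 63
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # 1D Poisson matrix
b = np.ones(n)

# two overlapping subdomains (index ranges with generous overlap)
subs = [np.arange(0, 36), np.arange(28, 63)]

# coarse space: piecewise-linear hat functions on every 8th fine node
coarse = np.arange(7, 63, 8)
x = np.arange(1, n + 1) * h
xc = np.concatenate(([0.0], x[coarse], [1.0]))
P = np.zeros((n, len(coarse)))
for j in range(len(coarse)):
    hat = np.zeros(len(xc)); hat[j + 1] = 1.0
    P[:, j] = np.interp(x, xc, hat)
A0 = P.T @ A @ P                                    # Galerkin coarse matrix

def precond(r):
    """Two-level additive Schwarz: local solves + coarse correction."""
    z = np.zeros_like(r)
    for idx in subs:                                # overlapping local solves
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    z += P @ np.linalg.solve(A0, P.T @ r)           # coarse-grid correction
    return z

# preconditioned conjugate gradients
x_sol = np.zeros(n); r = b.copy(); z = precond(r); p = z.copy()
rz = r @ z
for it in range(50):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x_sol += alpha * p; r -= alpha * Ap
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
    z = precond(r); rz_new = r @ z
    p = z + (rz_new / rz) * p; rz = rz_new
```

The coarse correction is what carries global information across subdomains; the thesis's contribution lies in choosing robust (multiscale) coarse spaces for heterogeneous elasticity, which this scalar toy does not attempt.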

Generic layout analysis – the process of decomposing document images from a diverse collection into homogeneous regions – has many important applications in document image analysis and understanding, such as the preprocessing of degraded, warped, camera-captured document images, high-performance layout analysis of document images containing complex cursive scripts, and word spotting in historical document images at page level. Many goals in this field, like a generic text line extraction method, have so far remained elusive, still beyond the reach of the state-of-the-art methods [NJ07, LSZT07, KB06]. This thesis addresses this problem by presenting generic, domain-independent text line extraction and text and non-text segmentation methods, and then describes some important applications that were developed based on these methods. An overview of the key contributions of this thesis is as follows.
The first part of this thesis presents a generic text line extraction method using a combination of matched filtering and ridge detection techniques, which are commonly used in computer vision. Unlike the state-of-the-art text line extraction methods in the literature, the generic text line extraction method can be equally and robustly applied to a large variety of document image classes including scanned and camera-captured documents, binary and grayscale documents, typed-text and handwritten documents, historical and contemporary documents, and documents containing different scripts. Different standard datasets are selected for performance evaluation that belong to different categories of document images such as the UW-III [GHHP97] dataset of scanned documents, the ICDAR 2007 [GAS07] and the UMD [LZDJ08] datasets of handwritten documents, the DFKI-I [SB07] dataset of camera-captured documents, Arabic/Urdu script documents dataset, and German calligraphic (Fraktur) script historical documents dataset. The generic text line extraction method achieves 86% (n = 23,763 text lines in 650 documents) text line detection accuracy which is better than the aggregate accuracy of 73% of the best performing domain-specific state-of-the-art methods. To the best of the author's knowledge, it is the first general-purpose text line extraction method that can be equally used for a diverse collection of documents.
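As a caricature of the matched-filtering-plus-ridge-detection idea, the following sketch smooths a synthetic binary "page" anisotropically (strongly along the writing direction, weakly across it) and reads off text-line centres as local maxima of the smoothed row profile. This illustrates the principle only; it is not the thesis implementation, and all sizes and thresholds are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic page: two horizontal "text lines" of noisy ink on white
rng = np.random.default_rng(1)
page = np.zeros((100, 100))
page[20:26, 5:95] = rng.random((6, 90)) < 0.6   # line 1
page[60:66, 5:95] = rng.random((6, 90)) < 0.6   # line 2

# matched filter: smooth strongly along x, weakly along y,
# so each text line blurs into one smooth horizontal ridge
smoothed = gaussian_filter(page, sigma=(2.0, 8.0))

# ridge detection (1D caricature): local maxima of the row profile
profile = smoothed.mean(axis=1)
ridges = [y for y in range(1, 99)
          if profile[y] > profile[y - 1] and profile[y] >= profile[y + 1]
          and profile[y] > 0.1 * profile.max()]
```

The actual method detects ridges in 2D on the filtered image, which is what lets it follow curled and skewed lines rather than only horizontal ones.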
This thesis also presents an active contour (snake) based curled text line extraction method for warped, camera-captured document images. The presented approach is applied to the DFKI-I [SB07] dataset of camera-captured, Latin script document images for curled text line extraction. It achieves above 95% (n = 3,091 text lines in 102 documents) text line detection accuracy, which is significantly better than competing state-of-the-art curled text line extraction methods. The presented text line extraction method can also be applied, after small modifications, to document images containing different scripts like Chinese, Devanagari, and Arabic.
The second part of this thesis presents an improved version of the state-of-the-art multiresolution morphology (Leptonica) based text and non-text segmentation method [Blo91], which is a domain-independent page segmentation approach and can be equally applied to a diverse collection of binarized document images. It is demonstrated that the presented improvements result in an increase in segmentation accuracy from 93% to 99% (n = 113 documents).
This thesis also introduces a discriminative learning based approach for page segmentation, where a self-tunable multi-layer perceptron (MLP) classifier [BS10] is trained to distinguish between text and non-text connected components. Unlike other classification based page segmentation approaches in the literature, this connected-component based approach is faster than pixel based classification methods and does not require a prior block segmentation method. A segmentation accuracy of 96% (n = 113 documents) is achieved, in comparison to the state-of-the-art multiresolution morphology (Leptonica) based page segmentation method [Blo91], which achieves a segmentation accuracy of 93%. In addition to text and non-text segmentation of Latin script documents, the presented approach can also be adapted for document images containing other scripts as well as for other specialized layout analysis tasks such as digit and non-digit segmentation [HBSB12], orientation detection [RBSB09], and body-text and side-note segmentation [BAESB12].
Finally, this thesis presents important applications of the two generic layout analysis techniques discussed above, the ridge-based text line extraction method and the multiresolution morphology based text and non-text segmentation method. First, a complete preprocessing pipeline is described for removing different types of degradations from grayscale warped, camera-captured document images; it includes the removal of grayscale degradations such as non-uniform shadows and blurring through binarization, noise cleanup by applying page frame detection, and document rectification using monocular dewarping. Each of these preprocessing steps shows significant improvement in comparison to the analyzed state-of-the-art methods in the literature. Second, a high performance layout analysis method is described for complex Arabic script document images written in different languages, such as Arabic, Urdu, and Persian, and different styles, for example Naskh and Nastaliq. The presented layout analysis system is robust against different types of document image degradations and shows better performance for text and non-text segmentation, text line extraction, and reading order determination on a variety of Arabic and Urdu document images as compared to the state-of-the-art methods. It can be used for large-scale Arabic and Urdu document digitization processes. These applications demonstrate that the layout analysis methods, ridge-based text line extraction and multiresolution morphology based text and non-text segmentation, are generic and can be applied easily to a large collection of diverse document images.

This thesis is devoted to applying symbolic methods to the problems of decoding linear codes and of algebraic cryptanalysis. The paradigm we employ here is as follows. We reformulate the initial problem in terms of systems of polynomial equations over a finite field. The solution(s) of such systems should yield a way to solve the initial problem. Our main tools for handling polynomials and polynomial systems in this paradigm are the technique of Gröbner bases and normal form reductions. The first part of the thesis is devoted to formulating and solving specific polynomial systems that reduce the problem of decoding linear codes to the problem of polynomial system solving. We analyze the existing methods (mainly for cyclic codes) and propose an original method for arbitrary linear codes that in some sense generalizes the Newton identities method widely known for cyclic codes. We investigate the structure of the underlying ideals and show how one can solve the decoding problem – both the so-called bounded decoding and the more general nearest codeword decoding – by finding reduced Gröbner bases of these ideals. The main feature of the method is that, unlike usual Gröbner basis methods for "finite field" situations, we do not add the so-called field equations. This tremendously simplifies the underlying ideals, thus making it feasible to work with quite large code parameters. Further, we address complexity issues by giving some insight into the Macaulay matrix of the underlying systems. By making a series of assumptions, we are able to provide an upper bound for the complexity coefficient of our method. We also address finding the minimum distance and the weight distribution. We provide solid experimental material and comparisons with some of the existing methods in this area. In the second part we deal with the algebraic cryptanalysis of block iterative ciphers. 
Namely, we analyze the small-scale variants of the Advanced Encryption Standard (AES), which is a widely used modern block cipher. Here a cryptanalyst composes polynomial systems whose solutions should yield the secret key used by the communicating parties in a symmetric cryptosystem. We analyze the systems formulated by researchers for algebraic cryptanalysis and identify the problem that conventional systems have many auxiliary variables that are not actually needed for the key recovery. Moreover, having many such auxiliary variables, specific to a given plaintext/ciphertext pair, complicates the use of several pairs, which is common in cryptanalysis. We thus provide a new system in which the auxiliary variables are eliminated via normal form reductions. The resulting system in key variables only is then solved. We present experimental evidence that such an approach is quite good for small-scale ciphers. We investigate our approach further and employ the so-called meet-in-the-middle principle to see how far one can go in analyzing just 2-3 rounds of scaled ciphers. Additional "tuning techniques" are discussed together with experimental material. Overall, we believe that the material of this part of the thesis takes a step forward in the algebraic cryptanalysis of block ciphers.
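The elimination of auxiliary variables can be illustrated on a toy system. Below, a two-equation "cipher-like" system over GF(2) with one auxiliary intermediate u is reduced, via a lexicographic Gröbner basis with u ordered first, to a relation in the key variable alone. The equations are invented purely for illustration; note that, as in the thesis, no field equations are added.

```python
from sympy import symbols, groebner

k, u = symbols('k u')          # k: key variable, u: auxiliary intermediate

# toy round equations over GF(2): u = k + 1, and an output relation u*k + u = 0
system = [u + k + 1, u * k + u]

# lex order with u > k eliminates u: the reduced basis contains a
# polynomial in k alone, from which the key candidate can be read off
G = groebner(system, u, k, order='lex', modulus=2)
key_only = [p for p in G.exprs if p.free_symbols <= {k}]
```

Here the eliminated relation is satisfied by k = 1, the "recovered key"; in the thesis the same effect is achieved by normal form reductions on far larger systems from scaled AES rounds.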

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor’s portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is ’large’ in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor’s strategy, which could finally influence the asset prices. Since the investor’s strategy only impacts the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model. 
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions could change. The fact that the investor’s influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities – constant, step and affine intensity functions. In each case we derive the large investor’s optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, which we show admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Further, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For three kinds of intensity functions – constant, step and affine – we give the optimal solution and verify that the corresponding ODE system admits a unique global solution. In the case of strategy-dependent intensity functions we distinguish three particular kinds of this dependency – portfolio dependency, consumption dependency and combined portfolio and consumption dependency. The corresponding results for an investor having logarithmic utility are shown in Section 5. 
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor’s optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also dwell on the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.

This study deals with optimal control problems for glass tube drawing processes, where the aim is to control the (circular) cross-sectional area of the tube by using the adjoint variable approach. The process of tube drawing is modeled by four coupled nonlinear partial differential equations. These equations are derived from the axisymmetric Stokes equations and the energy equation by using an approach based on asymptotic expansions with the inverse aspect ratio as small parameter. The existence and uniqueness of the solutions of the stationary isothermal model is also proved. By defining a cost functional, we formulate the optimal control problem. Then the Lagrange functional associated with the minimization problem is introduced and the first and second order optimality conditions are derived. We implement optimization algorithms based on the steepest descent, nonlinear conjugate gradient, BFGS, and Newton approaches. In the Newton method, CG iterations are introduced to solve the Newton equation. Numerical results are obtained for two different cases. In the first case, the cross-sectional area over the entire time domain is controlled, and in the second case, the area at the final time is controlled. We also compare the performance of the optimization algorithms in terms of solution iterations, functional evaluations and computation time.
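The flavour of the algorithm comparison can be sketched on a toy objective. The quadratic below merely stands in for the actual cost functional of the tube-drawing problem; it illustrates why Newton-type steps typically need far fewer iterations than steepest descent.

```python
import numpy as np

# toy objective J(x) = 0.5 x^T A x - b^T x  (stand-in for the cost functional)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

# steepest descent with exact line search (exact for quadratics)
x_sd = np.zeros(2); sd_iters = 0
g = grad(x_sd)
while np.linalg.norm(g) > 1e-10:
    step = (g @ g) / (g @ (A @ g))     # exact minimizing step length
    x_sd -= step * g
    g = grad(x_sd); sd_iters += 1

# Newton's method: for a quadratic, a single step solves grad = 0 exactly
x_nt = np.zeros(2)
x_nt -= np.linalg.solve(A, grad(x_nt))
```

In the thesis the Newton equation is itself solved by inner CG iterations, since forming and factoring the Hessian of the PDE-constrained functional directly is impractical.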

Computer Vision (CV) problems, such as image classification and segmentation, have traditionally been solved by manual construction of feature hierarchies or incorporation of other prior knowledge. However, noisy images, varying viewpoints and lighting conditions, and clutter in real-world images make these problems challenging. Such tasks cannot be efficiently solved without learning from data. Therefore, many Deep Learning (DL) approaches have recently been successful for various CV tasks, for instance image classification, object recognition and detection, action recognition, video classification, and scene labeling. The main focus of this thesis is to investigate a purely learning-based approach, particularly Multi-Dimensional LSTM (MD-LSTM) recurrent neural networks, to tackle the challenging CV tasks of classification and segmentation on 2D and 3D image data. Due to the structural nature of MD-LSTM, the network learns directly from raw pixel values and takes the complex spatial dependencies of each pixel into account. This thesis provides several key contributions in the fields of CV and DL.
Several MD-LSTM network architectural options are suggested based on the type of input and output, as well as the required task. Besides the main layers – an input layer, a hidden layer, and an output layer – several additional layers can be added, such as a collapse layer and a fully connected layer. First, a single Two-Dimensional LSTM (2D-LSTM) is directly applied to texture images for segmentation and shows improvement over other texture segmentation methods. Moreover, a 2D-LSTM layer with a collapse layer is applied to image classification on texture and scene images and provides accurate classification results. In addition, a deeper model with a fully connected layer is introduced to deal with more complex images for scene labeling and outperforms other state-of-the-art methods, including deep Convolutional Neural Networks (CNN). Here, several input and output representation techniques are introduced to achieve robust classification. Randomly sampled windows used as input are transformed in scale and rotation, and their outputs are integrated to obtain the final classification. To achieve multi-class image classification on scene images, several pruning techniques are introduced. This framework provides good results in automatic web-image tagging. The next contribution is an investigation of 3D data with MD-LSTM. The traditional cuboid order of computations in MD-LSTM is re-arranged in a pyramidal fashion. The resulting Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) is easy to parallelize, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM was tested on 3D biomedical volumetric images and achieved the best known pixel-wise brain image segmentation results as well as competitive results on Electron Microscopy (EM) data for membrane segmentation.
To validate the framework, several challenging databases for classification and segmentation are proposed to overcome the limitations of current databases. First, scene images randomly collected from the web are used for scene understanding, i.e., a web-scene image dataset for multi-class image classification. To achieve multi-class image classification, the training and testing images are generated in different settings. For training, images belong to a single pre-defined category and are trained as in regular single-class image classification. For testing, however, images containing multiple classes are randomly collected by a web-image search engine by querying the categories. All scene images include noise, background clutter and unrelated contents, and are also diverse in quality and resolution. This setting makes it possible to evaluate the database for real-world applications. Secondly, an automated blob-mosaics texture dataset generator is introduced for segmentation. Random 2D Gaussian blobs are generated and filled with random material textures. These textures contain diverse changes in illumination, scale, rotation, and viewpoint. The generated images are very challenging, since the related regions are hard to separate even visually.
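The generator idea can be sketched as follows: random 2D Gaussian blobs partition the image (each pixel joins the blob with the strongest response), and every region is filled with a randomly oriented grating as a simple stand-in for a material texture. All parameters are illustrative, and real material textures would replace the gratings.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
K = 4                                          # number of blobs
centers = rng.uniform(8, 56, size=(K, 2))
widths = rng.uniform(6, 14, size=K)

yy, xx = np.mgrid[0:H, 0:W].astype(float)
resp = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * s**2))
                 for (cy, cx), s in zip(centers, widths)])
labels = resp.argmax(axis=0)                   # each pixel joins its dominant blob

# fill every region with a random oriented sinusoidal grating ("texture")
mosaic = np.zeros((H, W))
for lab in range(K):
    theta = rng.uniform(0, np.pi)
    freq = rng.uniform(0.2, 0.6)
    tex = np.sin(freq * (xx * np.cos(theta) + yy * np.sin(theta)))
    mosaic[labels == lab] = tex[labels == lab]
```

The label map doubles as the segmentation ground truth, which is what makes such generated mosaics convenient for benchmarking.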
Overall, the contributions in this thesis are major advancements in the direction of solving image analysis problems with Long Short-Term Memory (LSTM) without the need for any extra processing or manually designed steps. We aim at improving the presented framework to achieve the ultimate goal of accurate fine-grained image analysis and human-like understanding of images by machines.

Human forest modification is among the largest global drivers of terrestrial degradation
of biodiversity, species interactions, and ecosystem functioning. One of the most
pertinent components, forest fragmentation, has a long history in ecological research
across the globe, particularly in lower latitudes. However, we still know little how
fragmentation shapes temperate ecosystems, irrespective of the ancient status quo of
European deforestation. Furthermore, its interaction with another pivotal component
of European forests, silvicultural management, are practically unexplored. Hence,
answering the question how anthropogenic modification of temperate forests affects
fundamental components of forest ecosystems is essential basic research that has
been neglected thus far. Most basal ecosystem elements are plants and their insect
herbivores, as they form the energetic basis of the tropic pyramid. Furthermore, their
respective biodiversity, functional traits, and the networks of interactions they
establish are key for a multitude of ecosystem functions, not least ecosystem stability.
Hence, the thesis at hand aimed to disentangle this complex system of
interdependencies of human impacts, biodiversity, species traits and inter-species
interactions.
The first step lay in understanding how woody plant assemblages are shaped by
human forest modification. For this purpose, field investigations in 57 plots in the
hyperfragmented cultural landscape of the Northern Palatinate highlands (SW
Germany) were conducted, censusing > 4,000 tree/shrub individuals from 34 species.
Use of novel, integrative indices for different types of land-use allowed an accurate
quantification of biotic responses. Intriguingly, woody tree/shrub communities reacted
strikingly positively to forest fragmentation, with increases in alpha and beta diversity,
as well as a proliferation of heat/drought/light-adapted pioneer species. By contrast,
managed interior forests were homogenized and constrained in biodiversity, with a
dominance of shade/cold-adapted commercial tree species. Comparisons with recently
unmanaged stands (> 40 years) revealed first indications of nascent conversion to old-growth
conditions, with larger variability in light conditions and subsequent
community composition. Reactions to microclimatic conditions, the relationship
between associated species traits and the corresponding species pool, as well as
facilitative/constraining effects by foresters were discussed as underlying mechanisms.
Reactions of herbivore assemblages to forest fragmentation and the subsequent
changes in host plant communities were assessed by comprehensive sampling of >
1,000 live herbivores from 134 species in the forest understory. Diversity was,
similarly to plant communities, higher in fragmentation-affected habitats, particularly
in edges of continuous control forests. Furthermore, average trophic specialization
showed an identical pattern. Mechanistically, benefits from microclimatic conditions,
host availability, as well as pronounced niche differentiation are deemed responsible.
While communities were heterogeneous, with no segregation across habitats (small
forest fragments, edges, and interior of control forests), vegetation diversity, herbivore
diversity, as well as trophic specialization were identified to shape community
composition. This probably reflected a gradient from generalist/species-poor to
specialist/species-rich herbivore assemblages.
Insect studies conducted in forest systems are doomed to incompleteness
without considering ‘the last biological frontier’, the tree canopies. To access their
biodiversity, their relationship to edge effects, and their conservation value, the
arboricolous arthropod fauna of 24 beech (Fagus sylvatica) canopies was sampled via
insecticidal knockdown (‘fogging’). This resulted in an exhaustive collection of > 46,000
specimens from 24 major taxonomic/functional groups. Abundance distributions were
markedly negative exponential, indicating high abundance variability in tree crowns.
Individuals of six pertinent orders were identified to species level, returning > 3,100
individuals from 175 species and 52 families. This high diversity differed only marginally
across habitats, with slightly higher species richness in edge canopies. However,
communities in edge crowns were noticeably more heterogeneous than those in the
forest interior, possibly due to higher variability in environmental edge conditions. In
total, 49 species with protective value were identified, of which only one showed
habitat preferences (for near-natural interior forests). Among them, six species (all
beetles, Coleoptera) were classified as ‘priority species’ for conservation efforts. Hence,
beech canopies of the Northern Palatinate highlands can be considered strongholds of
insect biodiversity, incorporating many species of particular protective value.
The intricacy of plant-herbivore interaction networks and their relationship to
forest fragmentation is largely unexplored, particularly in Central Europe. Illumination
of this matter is all the more important, as ecological networks are highly relevant for
ecosystem stability, particularly in the face of additional anthropogenic disturbances,
such as climate change. Hence, plant-herbivore interaction networks (PHNs) were
constructed from woody plants and their associated herbivores, sampled alive in the
understory. Herbivory verification was achieved using no-choice-feeding assays, as well
as literature references. In total, networks across small forest fragments, edges, and
the forest interior consisted of 696 interactions. Network complexity and trophic niche
redundancy were compared across habitats using a rarefaction-like resampling
procedure. PHNs in fragmentation-affected forest habitats were significantly more
complex, as well as more redundant in their realized niches, despite being composed of
relatively more specialist species. Furthermore, network robustness to climate change
was quantified utilizing four different scenarios for climate change susceptibility of
involved plants. In this procedure, remaining herbivores in the network were measured
upon successive loss of their host plant species. Consistently, PHNs in edges (and to a
smaller degree in small fragments) withstood primary extinction of plant species
longer, making them more robust. This was attributed to the high prevalence of
heat/drought-adapted species, as well as to beneficial effects of network topology
(complexity and redundancy). Consequently, strong correlative relationships were
found between realized niche redundancy and climate change robustness of PHNs.
This was the first time both that biologically realistic extinctions (instead of, e.g.,
random extinctions) were used to measure network robustness, and that topological
network parameters were identified as potential indicators of network robustness
against climate change.
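The robustness measurement described above, i.e. tracking how many herbivores survive as their host plants go extinct in a prescribed (e.g. climate-susceptibility-ordered) sequence, can be sketched as follows. The toy network and species names are invented for illustration; the thesis works with the empirical 696-interaction PHNs.

```python
def robustness(network, extinction_order):
    """Fraction-remaining curve for herbivores as host plants are lost.

    network: dict mapping each plant to the set of herbivores feeding on it.
    extinction_order: plants sorted by (e.g. climate change) susceptibility.
    Returns the area under the herbivore-survival curve (trapezoidal rule);
    1.0 = perfectly robust, 0.0 = immediate total collapse.
    """
    herbivores = set().union(*network.values())
    total = len(herbivores)
    net = {p: set(hs) for p, hs in network.items()}
    remaining = [1.0]
    for plant in extinction_order:
        net.pop(plant, None)                    # primary extinction
        alive = set().union(*net.values()) if net else set()
        remaining.append(len(alive) / total)    # surviving herbivores
    n = len(remaining) - 1
    return sum((remaining[i] + remaining[i + 1]) / 2 for i in range(n)) / n

# hypothetical 3-plant, 3-herbivore network
phn = {"oak": {"a", "b"}, "beech": {"b", "c"}, "birch": {"c"}}
r = robustness(phn, ["birch", "beech", "oak"])
```

A herbivore only disappears once all of its hosts are gone, so redundant (overlapping) trophic niches keep the curve high for longer, which is exactly the correlation between niche redundancy and robustness reported above.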
In synthesis, in the light of global biotic degradation due to human forest
modification, a differentiated view is necessary. Ecosystems react
differently to anthropogenic disturbances, and it seems the particular features present
in Central European forests (ancient deforestation, extensive management, and, most
importantly, high richness in open-forest plant species) cause patterns partly opposed
to those of other biomes. Lenient microclimates and diverse plant communities facilitate
equally diverse herbivore assemblages, and hence complex and robust networks, in
contrast to the forest interior. Therefore, in the reality of extensively used cultural
landscapes, fragmentation-affected forest ecosystems, particularly forest edges, can be
perceived as reservoirs of biodiversity and ecosystem functionality. Nevertheless, as
practically all forest habitats considered in this thesis are under human cultivation,
recommendations for ecological enhancement of all forest habitats are discussed.

European economic, social and territorial cohesion is one of the fundamental aims of the European Union (EU). It seeks both to reduce the effects of internal borders and to enhance European integration. In order to facilitate territorial cohesion, linking the member states by means of efficient cross-border transport infrastructures and services is an important factor. Historically, many cross-border transport challenges have existed in everyday life, hampering smooth passenger and freight flows within the EU.
Two EU policies, namely European Territorial Cooperation (ETC) and the Trans-European Transport Networks (TEN-T), promote enhancing cross-border transport through cooperation in soft spaces. This dissertation seeks to explore the influence of these two EU policies on cross-border transport and further European integration.
Based on an analysis of European, national and cross-border policy and planning documents, surveys with TEN-T Corridor Coordinators and INTERREG Secretariats and a high number of elite interviews, the dissertation will investigate how the objectives of the two EU policies were formally implemented in both soft spaces and the EU member states as well as which practical implementations have taken place. Thereby, the initiated Europeanisation and European integration processes will be evaluated. The analysis is conducted in nine preliminary case studies and two in-depth case studies. The cases comprise cross-border regions funded by the ETC policy that are crossed by a TEN-T corridor. The in-depth analysis explores the Greater Region Saar-Lor-Lux+ and the Brandenburg-Lubuskie region. The cases are characterised by different initial situations.
The research determined that the two EU policies support cross-border transport on different levels and, further, that they need to be better intertwined in order to make effective use of their complementarities. Moreover, it became clear that the EU policies have a distinct influence on domestic policy and planning documents of different administrative levels and countries as well as on the practical implementation. The final implementation of the EU objectives and the cross-border transport initiatives was strongly influenced by the member states’ initial situations – particularly, the regional and local transport needs. This dissertation concludes that the two EU policies cannot remove the entirety of the cross-border transport-related challenges. However, in addition to their financial investments in concrete projects, they promote the importance of cross-border transport and facilitate cooperation, learning and exchange processes. These are all of high relevance to cross-border transport development, driven by member states, as well as to further European integration.
The dissertation recommends that the EU's transport planning competences beyond the TEN-T network should not be enlarged in the future; rather, further transnational transport development tasks should be decentralised to transnational transport planning committees that are aware of regional needs and can coordinate a joint transport development strategy. The latter should be implemented with the support of additional EU funds for secondary and tertiary cross-border connections. Moreover, the potential complementarities of the transnational regions and transport corridors, as well as of the two EU policy fields, should be better exploited by improving communication. This means that soft spaces, the TEN-T and ETC policies, as well as the domestic transport ministries and the domestic administrations responsible for the two EU policies, need to intensify their cooperation. Furthermore, future ETC projects are recommended to focus on topics that add value for the whole cross-border region, or that can be applied in different territorial contexts, rather than investing in small-scale, scattered, expensive infrastructures and services that benefit only a small part of the region. Additionally, the dissemination of project results should be enhanced so that the developed tools can be accessed by potential users and the benefits become more visible to wider society, even though they might not be measurable in numbers. The research also points to another success factor for more concrete outputs: the frequent involvement of transport and spatial planners in transnational projects could strengthen the link to planning practice. Besides that, advanced training regarding planning culture could reduce cooperation barriers.

The lattice Boltzmann method (LBM) is a numerical solver for the Navier-Stokes equations, based on an underlying molecular dynamics model. Recently, it has been extended towards the simulation of complex fluids. We use the asymptotic expansion technique to investigate the standard scheme, the initialization problem and possible developments towards moving boundary and fluid-structure interaction problems. At the same time, it will be shown how mathematical analysis can be used to understand and improve the algorithm. First of all, we elaborate the tool of asymptotic analysis, proposing a general formulation of the technique and explaining the methods and the strategy we use for the investigation. A first standard application to the LBM is described, which leads to the approximation of the Navier-Stokes solution starting from the lattice Boltzmann equation. Next, we extend the analysis to investigate the origin and dynamics of initial layers. A class of initialization algorithms to generate accurate initial values within the LB framework is described in detail. Starting from existing routines, we are able to improve the schemes in terms of efficiency and accuracy. Then we study the features of a simple moving boundary LBM. In particular, we concentrate on the initialization of new fluid nodes created by variations of the computational fluid domain. An overview of existing choices is presented. Performing a careful analysis of the problem, we propose a modified algorithm which produces satisfactory results. Finally, to set up an LBM for fluid-structure interaction, efficient routines to evaluate forces are required. We describe the momentum exchange algorithm (MEA). Precise accuracy estimates are derived, and the analysis leads to the construction of an improved method to evaluate the interface stresses. In conclusion, we test the resulting code and validate the results of the analysis on several simple benchmarks.
From the theoretical point of view, we have developed in this thesis a general formulation of the asymptotic expansion, which is expected to offer a more flexible tool in the investigation of numerical methods. The main practical contribution offered by this work is the detailed analysis of the numerical method. It allows us to understand and improve the algorithms, and to construct new routines which can serve as starting points for future research.
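As a point of reference for the scheme under analysis, a minimal D2Q9 lattice Boltzmann step, with BGK collision followed by streaming on a periodic grid, can be sketched as follows. The grid size and relaxation time are illustrative, and the boundary and initialization issues that are the actual subject of the thesis are omitted.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order (low-Mach) Maxwellian equilibrium distribution."""
    cu = np.einsum('qd,xyd->qxy', C, u)          # c_q . u per cell
    usq = np.einsum('xyd,xyd->xy', u, u)         # |u|^2 per cell
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + streaming step with periodic boundaries."""
    rho = f.sum(axis=0)                                   # density moment
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]   # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau               # collide
    for q in range(9):                                    # stream
        f[q] = np.roll(f[q], shift=tuple(C[q]), axis=(0, 1))
    return f

# initialize a uniform fluid at rest and advance one step
f = equilibrium(np.ones((16, 16)), np.zeros((16, 16, 2)))
f = lbm_step(f)
```

Mass is conserved exactly by both collision and streaming, which is one of the structural properties the asymptotic expansion exploits when recovering the Navier-Stokes limit.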

Knowing the extent to which we rely on technology, one may think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, the reasons why program correctness is difficult often come hand in hand with possible solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, the instructions of each of a program's concurrent components are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC's atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a weaker memory model, implemented in SPARC and in Intel x86 multiprocessors, that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. To be precise, due to TSO's unbounded store buffers, a program's state space under TSO might be infinite. This, for example, turns reachability under SC (a PSPACE-complete task) into a non-primitive-recursive-complete problem under TSO. This thesis develops verification techniques targeting TSO-relaxed programs. To be precise, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs, as well as state-reducing methods for speeding up such heuristics. In a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO's buffers along instruction sequences suggested by an oracle. The oracle's deciding characteristic is that if it returns the empty sequence, then the program's SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers.
Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space reducing methods, including dynamic and Cartesian partial order reduction.
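The operational behavior of TSO's store buffers, the source of the unbounded state space discussed above, can be sketched as a per-thread FIFO buffer with store-to-load forwarding. The class and the litmus test below are an illustrative toy model, not the thesis's formal semantics.

```python
from collections import deque

class TSOThread:
    """Per-thread FIFO store buffer modelling TSO semantics."""

    def __init__(self, memory):
        self.memory = memory
        self.buffer = deque()          # pending (address, value) stores

    def store(self, addr, val):
        self.buffer.append((addr, val))   # buffered, not yet globally visible

    def load(self, addr):
        # store-to-load forwarding: the newest buffered value wins
        for a, v in reversed(self.buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def flush_one(self):
        # the memory system drains the buffer in FIFO order, at any time
        if self.buffer:
            a, v = self.buffer.popleft()
            self.memory[a] = v

# Dekker-style litmus test: both threads can read 0 under TSO
mem = {}
t1, t2 = TSOThread(mem), TSOThread(mem)
t1.store('x', 1); t2.store('y', 1)
r1, r2 = t1.load('y'), t2.load('x')   # both 0: an outcome impossible under SC
```

Because flushes may be delayed arbitrarily, the buffers can grow without bound, which is precisely why reachability becomes so much harder than under SC.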

Mechanical and electrical properties of carbon nanofiber–ceramic nanoparticle–polymer composites
(2010)

The present research is focused on the manufacturing and analysis of composites consisting of a thermosetting polymer reinforced with fillers of nanometric dimensions. The materials chosen were an epoxy resin matrix and two different kinds of fillers: electrically conductive carbon nanofibers (CNFs) and ceramic titanium dioxide (TiO2) and aluminium oxide (Al2O3) nanoparticles. In an initial step of the work, in order to understand the effect that each kind of filler had when added separately to the polymer matrix, CNF–EP and ceramic nanoparticle–EP composites were manufactured and tested. Each type of filler was dispersed in the polymer matrix using a different dispersion technology: CNFs were dispersed in the resin with the aid of a three-roll calender (TRC), whereas a torus bead mill (TML) was used in the ceramic nanoparticle case. Calendering proved to be an efficient method to disperse the untreated CNFs in the polymer matrix. The study of the physical properties of the CNF composites at different states of dispersion showed that the tensile strength and the maximum sustained strain were more sensitive to the state of dispersion of the nanofibers than the elastic modulus, fracture toughness, impact energy and electrical conductivity (for filler loadings above the percolation threshold of the system). Rheological investigation of the uncured CNF–epoxy mixture at different stages of dispersion indicated the formation of an interconnected nanofiber network within the matrix after the initial steps of calendering. CNF–EP composites showed better mechanical performance than the unmodified polymer matrix. However, the tensile modulus and strength of the CNF composites suffered from the presence of remaining nanofiber clusters and did not reach theoretically predicted values. Fracture toughness and resistance against impact did not seem to be as sensitive to the state of nanofiber dispersion and improved consistently with the incorporation of the CNFs.
The electrical conductivity of the CNF composites showed a percolative enhancement of eight orders of magnitude with increasing nanofiber content. The percolation threshold for the achieved level of CNF dispersion was found to be 0.14 vol.%. It was also determined that, for these composites, the main mechanism of electrical transmission was electron tunnelling. Ceramic nanoparticle–EP composites were manufactured using TiO2 and Al2O3 particles as fillers in the epoxy matrix. Mechanical dispersion of the nanoparticles in the liquid polymer by means of a torus bead mill dissolver led to homogeneous distributions of particles in the matrix. Remaining particle agglomerates had a mean size of 80 nm. However, micrometre-sized agglomerates could clearly be observed in the microscopic analysis of the composites, especially in the TiO2 case. The inclusion of the nanoparticles in the epoxy resin resulted in a general improvement of the modulus, strength, maximum sustained strain, fracture toughness and impact energy of the polymer matrix. The nanoparticles were able to overcome the stiffness/toughness trade-off. On the other hand, nanoparticle–EP composites showed lower electrical conductivity than the neat epoxy. In general, there were no significant differences between the incorporation of TiO2 or Al2O3 particles. Based on the previous results, CNFs and nanoparticles were combined as fillers to create a nanocomposite that could benefit from the electrical properties provided by the conductive CNFs and, at the same time, have improved mechanical performance thanks to the presence of the well-dispersed ceramic nanoparticles. Nanoparticles and CNFs were dispersed separately to create two batches, which were then blended together in a dissolver mixer. This method proved effective in creating well-dispersed CNF–nanoparticle–epoxy composites, which showed improved electrical and mechanical properties compared with the neat polymer matrix.
The well-dispersed ceramic nanofillers were able to introduce additional energy-dissipating mechanisms in the CNF–EP composites that resulted in an improvement of their mechanical performance. With high volume loadings of nanoparticles, most of the reinforcement came from the presence of the nanoparticles in the polymer matrix. Therefore, the observed trends were, in essence, similar to those observed in the ceramic nanoparticle–EP composites. The enhancement in the mechanical performance of the CNF composites with the inclusion of ceramic nanoparticles came at the price of an increase in the percolation threshold and a reduction of the electrical conductivity of the CNF–nanoparticle–EP composites compared with the CNF–EP materials. A modified Weber and Kamal fiber contact model (FCM) was used to explain the electrical behaviour of the CNF–nanoparticle–EP composites once percolation was achieved. This model was able to fit the experimentally measured conductivity of these composites rather accurately.
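The percolative behavior described above is commonly characterized by the classical scaling law sigma = sigma0 * (phi - phi_c)^t above the threshold. The fiber contact model used in the thesis is a different, more detailed model; the sketch below only illustrates fitting the generic power law, using the reported phi_c = 0.14 vol.% and otherwise synthetic data.

```python
import math

def fit_percolation(phi, sigma, phi_c=0.0014):
    """Least-squares fit of log(sigma) = log(sigma0) + t * log(phi - phi_c).

    Classical percolation scaling above the threshold; phi_c = 0.14 vol.%
    (as a volume fraction) is the value reported for the CNF-epoxy system.
    """
    xs = [math.log(p - phi_c) for p in phi]
    ys = [math.log(s) for s in sigma]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # ordinary least-squares slope and intercept in log-log space
    t = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return t, math.exp(my - t * mx)

# synthetic data generated with t = 2.0, sigma0 = 1.0 (illustrative only)
phi = [0.003, 0.005, 0.01, 0.02]
sigma = [(p - 0.0014) ** 2.0 for p in phi]
t, s0 = fit_percolation(phi, sigma)
```

A log-log fit of this kind is how the critical exponent t and prefactor sigma0 are typically extracted from measured conductivity-versus-loading data.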

Proprietary polyurea-based thermosets (3P resins) were produced from polymeric methylene diphenyl isocyanate (PMDI) and water glass (WG) using a phosphate emulsifier. Polyisocyanates, when combined with WG in the presence of a suitable emulsifier, result in very versatile products. In the resulting polyurea, WG acts through a special sol-gel route as a cheap precursor of the silicate (xerogel) filler produced in situ. The particle size of the silicate is coarse and its distribution very broad, which affects the mechanical properties of the 3P systems negatively. The research strategy was to achieve initially a fine water-in-oil type (W/O = WG/PMDI) emulsion by “hybridising” the polyisocyanate with suitable thermosetting resins (such as vinylester (VE), melamine/formaldehyde (MF) or epoxy resin (EP)). As the presently used phosphate emulsifiers may leak into the environment, the research work was directed at finding “reactive” emulsifiers which can be chemically built into the final polyurea-based thermosets. The progressive elimination of the organic phosphate, following the European Community Regulation on chemicals and their safe use (REACH), was studied, and alternative emulsifiers for the PMDI/WG systems were found. The new hybrid systems, in which the role of the phosphate emulsifier has been taken over by suitable resins (VE, EP) or additives (MF), are designated 2P resins. Further, the cure behaviour (DSC, ATR-IR), chemorheology (plate/plate rheometer), morphology (SEM, AFM) and mechanical properties (flexure, fracture mechanics) were studied accordingly. The property upgrade targeted not only the mechanical performance but also thermal and flame resistance. Therefore, emphasis was placed on improving the thermal and fire resistance (e.g. TGA, UL-94 flammability test) of the in-situ filled hybrid resins. Improvements in the fracture mechanical properties as well as in the flexural properties of the novel 3P and 2P hybrids were obtained.
This was accompanied in most cases by a pronounced reduction of the polysilicate particle size as well as by a finer dispersion. Further, the complex reaction kinetics of the reference 3P system was studied, and some of the main reactions taking place during the curing process were established. The pot life of the hybrid resins was, in most cases, prolonged, which facilitates the subsequent processing of such resins. The thermal resistance was also enhanced for all the novel hybrids. However, the hybridization strategy (mostly with EP and VE) did not give satisfactory results with respect to fire resistance. Efforts will be made in the future to overcome this problem. Finally, it was confirmed that the elimination of the organic phosphate emulsifier is feasible, yielding the so-called 2P hybrids. These, in many cases, showed improved fracture mechanical, flexural and thermal resistance properties as well as a finer and more homogeneous morphology. The novel hybrid resins of unusual characteristics (e.g. curing under wet conditions and even in water) are promising matrix materials for composites in various application fields such as infrastructure (rehabilitation of sewers), building and construction (refilling), and transportation (coating of vessels, pipes of improved chemical resistance)…

The aim of this work was to synthesize and characterize new bidentate N,N,P-ligands and their corresponding heterobimetallic complexes. These bidentate pyridylpyrimidine aminophosphine ligands were synthesized by ring closure of two different enaminones (3-(dimethylamino)-1-(pyridin-2-yl)-prop-2-en-1-one or 3-(dimethylamino)-1-(pyridin-2-yl)-but-2-en-1-one) with an excess of guanidinium salts in the presence of base. The novel phosphine-functionalized guanidinium salts were prepared from 2-(diphenylphosphinyl)ethylamine or 3-(diphenylphosphinyl)propylamine. These bidentate N,N,P-ligands contain hard and soft donor sites, which allows the coordination of two different metal centers and the formation of bimetallic complexes. Such bimetallic complexes can exhibit unique behavior as a result of cooperation between the two metal atoms. First, the gold(I) complexes of all four ligands were synthesized. The gold center coordinates only to the phosphorus atom, as proved by X-ray crystallography and 31P NMR spectroscopy. In addition to the gold(I) monometallic complexes, the trans-coordinated rhodium complex of the (2-amino)pyridylpyrimidine aminophosphine ligand was successfully prepared and characterized by NMR and IR spectroscopy. Reacting the mono-gold(I) complexes with different metal salts such as Pd(PhCN)2Cl2, ZnCl2, and the [Ru(p-cymene)Cl2] dimer gave the target heterobimetallic complexes. The second metal center coordinates to the N,N donor site, as proved by NMR spectroscopy and ESI-MS measurements. The Au(I) and Au-Zn complexes of the N,N,P-ligands were examined as catalysts for the hydroamidation of cyclohexene with p-toluenesulfonamide; they did not show activity under the tested conditions. Further studies are necessary to understand the catalytic activities and the cooperativity between the two metal atoms.
In addition, bi- and trimetallic complexes with the rhodium compound could be synthesized and tested in different organic transformations. Furthermore, the synthesis of the chiral hydroxy[2.2]paracyclophane substituted with five different aminopyrimidines was accomplished. These aminopyrimidine ligands were synthesized by a cyclization reaction of the hydroxy[2.2]paracyclophane-substituted enaminone with an excess of the corresponding guanidinium salts under basic conditions. In the last part of this work, kinetic studies of the cyclopalladation reaction of the 2-(arylaminopyrimidin-4-yl)pyridine ligands with Pd(PhCN)2Cl2 were performed. These measurements were carried out using UV-Vis spectroscopy. The spectral studies of the cyclometallation step showed that the reaction follows second-order kinetics. In addition, a full kinetic investigation was performed at different temperatures and the activation parameters of complex formation were calculated.
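The kinetic analysis mentioned above rests on the integrated second-order rate law, 1/[A] = 1/[A]0 + kt, and on extracting activation parameters from rate constants measured at different temperatures. The sketch below uses invented illustrative data, not the measured UV-Vis series, and the two-point Arrhenius formula stands in for the full activation-parameter analysis.

```python
import math

def second_order_k(times, conc):
    """Rate constant from the integrated second-order law 1/[A] = 1/[A]0 + k*t.

    Linear regression of 1/[A] against time; the slope is k.
    """
    x, y = times, [1.0 / c for c in conc]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

def arrhenius_Ea(k1, T1, k2, T2, R=8.314):
    """Activation energy (J/mol) from rate constants at two temperatures."""
    return R * math.log(k2 / k1) / (1 / T1 - 1 / T2)

# illustrative data: [A] decaying with k = 0.05 L mol^-1 s^-1, [A]0 = 1 mol/L
times = [0, 100, 200, 400]
conc = [1 / (1 + 0.05 * t) for t in times]
k = second_order_k(times, conc)
```

In practice the concentrations would come from UV-Vis absorbances via the Beer-Lambert law, and an Eyring plot over several temperatures would yield the activation enthalpy and entropy.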

The last couple of years have marked the entire field of information technology with the introduction of a new global resource, called data. Certainly, one can argue that large amounts of information and highly interconnected, complex datasets have been available since the dawn of the computer and even centuries before. However, it is only in the last few years that digital data has exponentially expanded, diversified and become interconnected across an overwhelming range of domains, generating an entire universe of zeros and ones. This universe represents a source of information with the potential of advancing a multitude of fields and sparking valuable insights. In order to obtain this information, the data needs to be explored, analyzed and interpreted.
While a large set of problems can be addressed through automatic techniques from fields like artificial intelligence, machine learning or computer vision, there are various datasets and domains that still rely on the human intuition and experience in order to parse and discover hidden information. In such instances, the data is usually structured and represented in the form of an interactive visual representation that allows users to efficiently explore the data space and reach valuable insights. However, the experience, knowledge and intuition of a single person also has its limits. To address this, collaborative visualizations allow multiple users to communicate, interact and explore a visual representation by building on the different views and knowledge blocks contributed by each person.
In this dissertation, we explore the potential of subjective measurements and user emotional awareness in collaborative scenarios, as well as support flexible and user-centered collaboration in information visualization systems running on tabletop displays. We commence by introducing the concept of user-centered collaborative visualization (UCCV) and highlighting the context in which it applies. We continue with a thorough overview of the state-of-the-art in the areas of collaborative information visualization, subjectivity measurement and emotion visualization, combinable tabletop tangibles, as well as browsing history visualizations. Based on a new web browser history visualization for exploring users' parallel browsing behavior, we introduce two novel user-centered techniques for supporting collaboration in co-located visualization systems. To begin with, we inspect the particularities of detecting user subjectivity through brain-computer interfaces, and present two emotion visualization techniques for touch and desktop interfaces. These visualizations offer real-time or post-task feedback about the users' affective states, both in single-user and collaborative settings, thus increasing the emotional self-awareness and the awareness of other users' emotions. For supporting collaborative interaction, a novel design for tabletop tangibles is described together with a set of specifically developed interactions for supporting tabletop collaboration. These ring-shaped tangibles minimize occlusion, support touch interaction, can act as interaction lenses, and describe logical operations through nesting. The visualization and the two UCCV techniques are each evaluated individually, capturing a set of advantages and limitations of each approach.
Additionally, the collaborative visualization supported by the two UCCV techniques is collectively evaluated in three user studies that offer insight into the specifics of interpersonal interaction and task transition in collaborative visualization. The results show that the proposed collaboration support techniques not only improve the efficiency of the visualization, but also help maintain the collaboration process and aid a balanced social interaction.

This work provides a foundation for the cross-design of wireless networked control systems with limited resources. A cross-design methodology is devised, which includes principles for the modeling, analysis, design, and realization of low-cost but high-performance and intelligent wireless networked control systems. To this end, a framework is developed in which control algorithms and communication protocols are jointly designed, implemented, and optimized, taking into consideration the limited communication, computing, memory, and energy resources of the low-performance, low-power, and low-cost wireless nodes used. A special focus of the proposed methodology is on the prediction and minimization of the total energy consumption of the wireless network (i.e. maximization of the lifetime of the wireless nodes) under control performance constraints (e.g. stability and robustness) in dynamic environments with uncertainty in resource availability, through the joint (offline/online) adaptation of communication protocol parameters and control algorithm parameters according to the traffic and channel conditions. Appropriate optimization approaches are investigated that exploit the structure of the optimization problems to be solved (e.g. linearity, affinity, convexity) and are based on Linear Matrix Inequalities (LMIs), Dynamic Programming (DP), and Genetic Algorithms (GAs). The proposed cross-design approach is evaluated on a testbed consisting of a real lab plant equipped with wireless nodes. The obtained results show the advantages of the proposed cross-design approach compared to standard, less flexible approaches.
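One simple instance of the trade-off described above, maximizing node lifetime by transmitting as rarely as possible subject to a control performance constraint, can be sketched as follows. All names, numbers, and the single stability bound are illustrative stand-ins for the thesis's LMI/DP/GA-based joint optimization.

```python
def pick_sampling_period(periods, energy_per_tx, battery_j,
                         max_period_for_stability):
    """Choose the longest sampling period that keeps the loop stable.

    Longer periods mean fewer transmissions per second and hence a
    longer node lifetime; the stability bound caps how far the period
    may be stretched. All parameters are hypothetical.
    """
    feasible = [h for h in periods if h <= max_period_for_stability]
    if not feasible:
        return None, 0.0
    h = max(feasible)                  # fewest transmissions while stable
    tx_per_s = 1.0 / h
    lifetime_s = battery_j / (tx_per_s * energy_per_tx)
    return h, lifetime_s

# illustrative numbers: 2 mJ per transmission, 10 kJ battery
h, life = pick_sampling_period([0.01, 0.05, 0.1, 0.5],
                               energy_per_tx=2e-3, battery_j=10_000.0,
                               max_period_for_stability=0.1)
```

The thesis's framework additionally adapts protocol parameters online and handles uncertainty in resource availability, which this one-dimensional grid search deliberately leaves out.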