### Refine

#### Year of publication

- 2010 (22)

#### Document Type

- Doctoral Thesis (22)

#### Language

- English (22)

#### Keywords

- Erwarteter Nutzen (2)
- Numerische Strömungssimulation (2)
- Portfolio Selection (2)
- Stochastische dynamische Optimierung (2)
- Abstraction (1)
- Abstraktion (1)
- Additionsreaktion (1)
- Algebraische Geometrie (1)
- Algorithmus (1)
- Buffer Zone Method (1)

The aim of this study is to describe the consolidation in the thermoplastic tape placement
process in order to obtain high-quality structures, making the process viable for automotive
and aerospace industrial applications. The major barrier in this technique is the very
short residence time of the material under the consolidation roller, which makes it difficult
to accomplish complete polymer diffusion in the bonded region. Hence, an investigation is
performed to find the optimal manufacturing parameters through extensive material, process,
and product testing and through process simulation.
The temperature distribution and convective heat transfer under the hot gas torch are
mapped out experimentally. The bonding process inside the laminate is the combined effect
of the development of intimate contact Dic between the layers (tapes) and the resulting
polymer diffusion Dh at the contacted sections. Three energy levels are identified based on
the combinations of process velocity and hot gas flow. For the low-energy parameter
combinations, the energy input to the incoming tape and substrate material is limited and
results in incomplete intimate contact, which restricts the bonding process. On the other
hand, a high energy input can increase the bonding degree Db up to 97 %, but it also
activates thermal degradation. It is found that the rates of polymer healing (diffusion)
and polymer crosslinking follow Arrhenius laws with activation energies of 43 kJ/mol and
276 kJ/mol, respectively. Polymer crosslinking at high temperature exposure hinders the
polymer diffusion process and reduces strength development. Therefore, parameter
combinations at the intermediate energy level provide the opportunity for continuous
interlaminar strength improvement throughout the lay-up process.
The deformation of the tape edges is identified as the dictating factor for the laminate's
transverse strength. Tape placement with a slight overlap reinforces the transverse joint
by more than 10 % compared to a pure matrix joint. Finally, the simulation tool developed
in this research work is used to identify the existing limitations to achieving full
consolidation. A parameter study shows that extended consolidation, either by means of an
additional pass or by increasing the consolidation length, widens the high-strength
(over 90 %) bonding degree Db contour. Thus, a high lay-up velocity (up to 7 m/min) is
viable for industrial production rates.
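
The Arrhenius temperature dependence reported for healing and crosslinking can be sketched numerically. The activation energies are taken from the text; the pre-exponential factors and temperatures below are purely illustrative placeholders, not values from the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, Ea_kJ, T):
    """Rate constant k = A * exp(-Ea / (R T)); Ea in kJ/mol, T in kelvin."""
    return A * math.exp(-Ea_kJ * 1000.0 / (R * T))

# Activation energies from the study; prefactors A = 1 are illustrative only.
EA_HEALING = 43.0      # kJ/mol, polymer healing (diffusion)
EA_CROSSLINK = 276.0   # kJ/mol, polymer crosslinking

for T in (600.0, 700.0):  # illustrative processing temperatures in K
    k_heal = arrhenius_rate(1.0, EA_HEALING, T)
    k_cross = arrhenius_rate(1.0, EA_CROSSLINK, T)
    print(f"T={T:.0f} K  healing rate ~ {k_heal:.3e}  crosslink rate ~ {k_cross:.3e}")
```

Because the crosslinking activation energy is much larger, its rate grows far more steeply with temperature, which is consistent with high-energy settings triggering degradation while intermediate settings favour continued healing.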

In recent years the consumption of polymer based composites in many engineering
fields where friction and wear are critical issues has increased enormously. Satisfying
the growing industrial needs can be successful only if the costly, labor-intensive and
time-consuming cycle of manufacturing, followed by testing, and additionally followed
by further trial-and-error compounding is reduced or even avoided. Therefore, the
objective is to get in advance as much fundamental understanding as possible of the
interaction between various composite components and that of the composite against
its counterface. Sliding wear of polymers and polymer composites involves very
complex and highly nonlinear processes. Consequently, to develop analytical models
for the simulation of the sliding wear behavior of these materials is extremely difficult
or even impossible. It necessitates simplifying hypotheses and thus compromising
accuracy. An alternative way, discussed in this work, is an artificial neural network
based modeling. The principal benefit of artificial neural networks (ANNs) is their ability
to learn patterns through a training experience from experimentally generated data
using self-organizing capabilities.
Initially, the potential of using ANNs for the prediction of friction and wear properties
of polymers and polymer composites was explored using already published friction
and wear data of 101 independent fretting wear tests of polyamide 46 (PA 46) composites.
For comparison, ANNs were also applied to model the mechanical properties
of polymer composites using a commercial data bank of 93 pairs of independent Izod
impact, tension and bending tests of polyamide 66 (PA 66) composites. Different
stages in the development of ANN models such as selection of optimum network
configuration, multi-dimensional modeling, training and testing of the network were
addressed at length. The results of neural network predictions appeared viable and
very promising for their application in the field of tribology.
A case example was subsequently presented to model the sliding friction and wear
properties of polymer composites by using newly measured datasets of polyphenylene
sulfide (PPS) matrix composites. The composites were prepared by twin-screw
extrusion and injection molding. The dataset investigated was generated from
pin-on-disc testing in dry sliding conditions under various contact pressures and sliding speeds. Initially the focus was placed on exploring the possible synergistic effects
between traditional reinforcements and particulate fillers, with special emphasis on
sub-micro TiO2 particles (300 nm average diameter) and short carbon fibers (SCFs).
Subsequently, the lubricating contributions of graphite (Gr) and polytetrafluoroethylene
(PTFE) in these multiphase materials were also studied. ANNs were trained
using a conjugate gradient with Powell/Beale restarts (CGB) algorithm as well as a
variable learning rate backpropagation (GDX) algorithm in order to learn composition-property
relationships between the inputs and outputs of the system. Likewise, the
influence of the operating parameters (contact pressure (p) and sliding speed (v))
was also examined. The incorporation of short carbon fibers and sub-micro TiO2
particles resulted in both lower friction and a greatly improved wear resistance
of the PPS composites within the low and medium pv-range. The mechanical
characterization and surface analysis after wear testing revealed that this beneficial
tribological performance could be explained by the following phenomena: (i)
enhanced mechanical properties through the inclusion of short carbon fibers, (ii)
favorable protection of the short carbon fibers by the sub-micro particles diminishing
fiber breakage and removal, (iii) self-repairing effects with the sub-micro particles, (iv)
formation of quasi-spherical transfer particles free to roll at the tribological contact.
Still, in the high pv-range stick-slip sliding motion was observed with these hybrid
materials. The adverse stick-slip behavior could be effectively eliminated through the
additional inclusion of solid lubricant reservoirs (Gr and PTFE), analogous to the
lubricants used in real ball bearings. Likewise, solid lubricants improved the wear resistance
of the multiphase system PPS/SCF/TiO2 in the high pv-range (≥ 9 MPa·m/s).
Yet, their positive effect, especially that of graphite, was limited to certain volume
fractions and loading conditions. The optimum results were obtained by blending in
comparatively low amounts of Gr and PTFE (≈ 5 vol.% of each additive). The introduction
of softer sub-micro particles did not bring the desired ball-bearing effect and
fiber protection. The ANN prediction profiles for the PPS tribo-compounds exhibited very
good or even perfect agreement with the measured results, demonstrating that the
target of achieving a well-trained network was reached. The results of employing a
validation test dataset indicated that the trained neural network acquired enough
generalization capability to extend what it had learned about the training patterns to
data from the same knowledge domain that it had not seen before. The optimal brain
surgeon (OBS) algorithm was employed to prune the network topology by eliminating
non-useful weights and biases, in order to determine whether the pruned network
performed better than the fully-connected network.
Pruning resulted in accuracy gains over the fully-connected network, but induced
higher computational cost in coding the data in the required format. Within an importance
analysis, the sensitivity of the network response variable (frictional coefficient
or specific wear rate) to characteristic mechanical and thermo-mechanical input variables
was examined. The goal was to study the relationships between the diverse
input variables and the characteristic tribological parameters for a better understanding
of the sliding wear process with these materials. Finally, it was demonstrated that
the well-trained networks might be applied to visualize what will happen if a certain
filler is introduced into a composite, or what the impact of the testing conditions
on the frictional coefficient and specific wear rate is. In this way, they might be a
helpful tool for design engineers and materials experts to explore materials and to
make reasoned selection and substitution decisions early in the design phase, when
such decisions incur the least cost.
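
The variable-learning-rate training mentioned above can be illustrated with a minimal sketch. The GDX idea (grow the learning rate while the error decreases, shrink it when it increases) is applied to a tiny one-hidden-layer network; the synthetic data merely stands in for composition/condition inputs and a tribological output and is not the PPS dataset from the study.

```python
import numpy as np

# Illustrative sketch of variable-learning-rate backpropagation (the idea
# behind GDX) on a tiny one-hidden-layer network; data and sizes invented.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 3))                 # e.g. filler fraction, p, v
y = X @ np.array([[0.5], [-0.3], [0.2]]) + 0.1     # synthetic target property

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, prev_loss = 0.1, np.inf

for _ in range(500):
    h = np.tanh(X @ W1 + b1)                       # forward pass
    out = h @ W2 + b2
    err = out - y
    loss = float(np.mean(err ** 2))
    # GDX-style adaptation: grow the rate on improvement, shrink otherwise
    lr = min(lr * 1.05, 1.0) if loss < prev_loss else lr * 0.7
    prev_loss = loss
    # backward pass (mean-squared-error gradients)
    g_out = 2.0 * err / len(X)
    gW2, gb2 = h.T @ g_out, g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final training MSE: {loss:.4f}")
```

The adaptive rate is what distinguishes GDX from plain gradient descent; the forward/backward passes themselves are the standard backpropagation used in the ANN models described above.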

In robotics, information is often regarded as a means to an end. The question of how to structure information and how to bridge the semantic gap between different levels of abstraction in a uniform way is still widely regarded as a technical issue. Ignoring these challenges appears to lead robotics into a stasis similar to that experienced by the software industry in the late 1960s. From the beginning of the software crisis until today, numerous methods, techniques, and tools for managing the increasing complexity of software systems have evolved. The attempt to transfer several of these ideas to applications in robotics has yielded various control architectures, frameworks, and process models. These attempts mainly provide modularisation schemata which suggest how to decompose a complex system into less complex subsystems. The schematisation of representation and information flow, however, is mostly ignored. In this work, a set of design schemata is proposed which is embedded into an action/perception-oriented design methodology to promote thorough abstractions between distinct levels of control. Action-oriented design decomposes control systems top-down, and sensor data is extracted from the environment as required. This comes with the problem that information is often condensed in a premature fashion. That way, sensor processing becomes dependent on the control system design, resulting in a monolithic system structure with limited options for reuse. In contrast, perception-oriented design constructs control systems bottom-up, starting with the extraction of environment information from sensor data. The extracted entities are placed into structures which evolve with the development of the sensor processing algorithms. In consequence, the control system is strictly dependent on the sensor processing algorithms, which again results in a monolithic system. In their particular domains, both design approaches have great advantages but fail to create inherently modular systems.
The design approach proposed in this work combines the strengths of action orientation and perception orientation into one coherent methodology without inheriting their weaknesses. More precisely, design schemata for representation, translation, and fusion of environmental information are developed which establish thorough abstraction mechanisms between components. The explicit introduction of abstractions particularly supports extensibility and scalability of robot control systems by design.
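
The decoupling idea can be sketched in a few lines. The class and function names below are hypothetical illustrations of the general pattern (explicit per-level representations bridged by translators), not the thesis's actual schemata.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: environment information lives in explicit
# representations per abstraction level, and translators bridge the levels,
# so neither sensor processing nor control depends on the other's internals.

@dataclass
class Representation:
    level: str      # e.g. "raw", "feature", "symbolic"
    entities: dict  # extracted environment information

Translator = Callable[[Representation], Representation]

def fuse(reps: List[Representation]) -> Representation:
    """Fuse same-level representations by merging their entities."""
    merged = {}
    for r in reps:
        merged.update(r.entities)
    return Representation(reps[0].level, merged)

# bottom-up translation: raw range readings -> obstacle features
def ranges_to_obstacles(raw: Representation) -> Representation:
    obstacles = [d for d in raw.entities["ranges"] if d < 0.5]
    return Representation("feature", {"obstacles": obstacles})

raw = Representation("raw", {"ranges": [0.3, 1.2, 0.4]})
features = ranges_to_obstacles(raw)
print(features.entities["obstacles"])  # → [0.3, 0.4]
```

A controller that consumes only `Representation` objects of a given level can be reused unchanged when the sensor processing behind them is swapped, which is the modularity property the combined methodology aims for.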

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor's portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is 'large' in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor's strategy, which could ultimately influence the asset prices. Since the investor's strategy impacts only the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model.
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions could change. The fact that the investor's influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities: constant, step, and affine intensity functions. In each case we derive the large investor's optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, of which we show that it admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Furthermore, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For the three kinds of intensity functions (constant, step, and affine) we give the optimal solution and verify that the corresponding ODE system admits a unique global solution. In the case of strategy-dependent intensity functions we distinguish three particular kinds of this dependency: portfolio dependency, consumption dependency, and combined portfolio and consumption dependency. The corresponding results for an investor with logarithmic utility are shown in Section 5.
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor's optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also dwell on the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would have to face if he neglected his influence on the market. Section 8 concludes the thesis.
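
For orientation, the classical small-investor problem recalled in Section 2 can be stated compactly; the notation below is generic textbook notation, not necessarily the thesis's own.

```latex
% Classical Merton problem (small investor, CRRA utility), generic form:
\[
  V(t,x) \;=\; \sup_{(\pi,c)}\,
  \mathbb{E}\!\left[\int_t^T e^{-\rho s}\, U(c_s)\,ds
  \;+\; e^{-\rho T}\, U(X_T) \,\middle|\, X_t = x\right],
  \qquad U(x) = \frac{x^{1-\gamma}}{1-\gamma},
\]
% where $\pi_t$ is the proportion of wealth in the risky assets and $c_t$
% the consumption rate. In the large-investor model of this thesis the
% regime-shift intensities $\lambda_{ij}(\pi_t, c_t)$ additionally depend
% on the investor's strategy $(\pi, c)$.
```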

Point defects in piezoelectric materials – continuum mechanical modelling and numerical simulation
(2010)

The topic of this work is the continuum mechanical modelling of point defects in piezoelectric materials. Devices containing piezoelectric material, and especially ferroelectrics, require high precision and are exposed to a high number of electrical and mechanical load cycles. As a result, the relevant material properties may degrade with an increasing number of load cycles. This phenomenon is called electric fatigue. The transported ionic and electronic charge carriers can interact with each other, as well as with structural elements (grain boundaries, inhomogeneities) or with material interfaces (domain walls). A reduced domain wall mobility also reduces the electromechanical coupling effect, which leads to the electric fatigue effect. The materials considered here are barium titanate and lead zirconate titanate (PZT), in which oxygen vacancies are the most mobile and most frequently occurring defect species. Intentionally introduced foreign atoms (dopants) can adjust the material properties to their field of application by forming electric dipoles with the vacancies. Agglomerations of point defects can strongly influence the domain wall motion: the domain wall can be slowed down or even stopped by the locally varying fields in the vicinity of the clusters. Accumulations of point defects can be detected at electrodes, at pores, or in the bulk of fatigued samples. The present thesis focuses on the self-interaction behaviour of point defects in the bulk. A micromechanical continuum model is used to show the qualitative and quantitative interaction behaviour of defects in a static setup and during drift processes. The modelling neglects the ferroelectric switching mechanisms but is applicable to every piezoelectric material. The underlying differential equations are solved by means of analytical (Green's functions) and numerical (finite differences with discrete Fourier transform) methods, depending on the boundary conditions.
The defects are introduced as localised eigenstrains, as electric charges, and as electric dipoles. The required defect parameters are obtained by comparison with atomistic methods (lattice statics). No standardised procedures are available for the parameter identification. In this thesis, the mechanical parameter is obtained by comparing the relaxation volumes of the atomic lattice and of the continuum solution. Parameters for isotropic and anisotropic defect descriptions are identified. The strength of the electric defect is obtained by comparing the electric internal energies of the atomistic and continuum descriptions. The appearing singularities are eliminated by taking into account only the energy difference between an infinite crystal and a periodic cell. Both identification processes are carried out for the cubic structure of barium titanate, which decouples the mechanical and the electrical problem. The defect interaction is analysed by means of configurational forces. The mechanical defect parameter generates a directional short-range attraction between defects. The electrical defect parameter produces the long-range Coulomb interaction, which predicts a repulsion of two like charges. Additionally, the interaction with defect dipoles is taken into account. It is shown that defect agglomeration is possible for any static defect configuration. Finally, defect drift is simulated using a thermodynamically motivated migration law based on configurational forces. In this context, the migration of point defects due to self-interaction and the influence of external fields are investigated.
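
The "finite differences with discrete Fourier transform" approach mentioned above can be sketched for the simplest relevant case: a periodic Poisson equation for the potential of compensating point charges. Grid size, charge placement, and units are illustrative, not the thesis's actual setup.

```python
import numpy as np

# Spectral solver for the periodic Poisson equation  laplacian(phi) = -rho,
# the kind of problem arising for defect fields under periodic boundary
# conditions. All numbers are illustrative.
n, L = 64, 1.0
rho = np.zeros((n, n))
rho[n // 4, n // 4] = 1.0           # positive point charge
rho[3 * n // 4, 3 * n // 4] = -1.0  # compensating negative charge

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2
k2_safe = k2.copy()
k2_safe[0, 0] = 1.0                 # avoid dividing by zero at the mean mode

phi_hat = np.fft.fft2(rho) / k2_safe
phi_hat[0, 0] = 0.0                 # fix the potential's mean to zero
phi = np.real(np.fft.ifft2(phi_hat))

# residual check: the spectral Laplacian of phi must reproduce -rho
residual = np.real(np.fft.ifft2(-k2 * np.fft.fft2(phi))) + rho
print(f"max residual: {np.abs(residual).max():.2e}")
```

In Fourier space the Laplacian becomes a multiplication by $-k^2$, so the periodic cell is solved in one transform pair; this is why the numerical branch of the thesis pairs finite differences with the discrete Fourier transform.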

Wireless Sensor Networks (WSNs) are dynamically arranged networks typically composed of a large number of arbitrarily distributed sensor nodes with computing capabilities, contributing to at least one common application. The main characteristic of these networks is that they are functionally constrained due to a scarce availability of resources and a strong dependence on uncontrollable environmental factors. These conditions introduce severe restrictions on the applicability of classic real-time methods aiming at guaranteeing time-bounded communications. Existing real-time solutions tend to apply concepts that were not originally conceived for sensor networks, idealizing realistic application scenarios and overlooking important design limitations. This results in a number of misleading practices contributing to approaches of restricted validity in real-world scenarios. Resolving the confrontation between WSNs and real-time objectives starts with a review of the basic fundamentals of existing approaches. In doing so, this thesis presents an alternative approach based on a generalized timeliness notion suited to the particularities of WSNs. The new conceptual notion allows the definition of feasible real-time objectives, opening a new scope of possibilities not constrained to idealized systems. The core of this thesis is based on the definition and application of Quality of Service (QoS) trade-offs between timeliness and other significant QoS metrics. The analysis of local and global trade-offs provides a step-by-step methodology identifying the correlations between these quality metrics. This association enables the definition of alternative trade-off configurations (set points) influencing the quality performance of the network at selected instants of time. With the basic grounds established, the above concepts are embedded in a simple routing protocol constituting a proof of concept for the validity of the presented analysis.
Extensive evaluations under realistic scenarios are carried out in simulation environments as well as on real testbeds, validating the consistency of this approach.

In this work, we develop a framework for analyzing an executive's own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company, whose value he can directly influence with work effort. The executive's utility-maximizing personal investment and work effort strategy is derived in closed form for logarithmic and power utility, and for exponential utility in the case of zero interest rates. Additionally, a utility indifference rationale is applied to determine his fair compensation. Being unconstrained by performance contracting, the executive's work effort strategy establishes a base case for theoretical or empirical assessment of the benefits or otherwise of constraining executives with performance contracting. Further, we consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position in a smaller listed company with the possibility to directly affect the company's share price. She invests in the financial market, including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic and power utility. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary; the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries, as well as the IT sector.

This thesis deals with the solution of special problems arising in financial engineering or financial mathematics. The main focus lies on commodity indices. Chapter 1 addresses an issue important for financial engineering practice: developing well-suited models for certain assets (here: commodity indices). A descriptive analysis of the Dow Jones-UBS commodity index compared to the Standard & Poor's 500 stock index provides first insights into some features of the corresponding distributions. Statistical tests of normality and mean reversion then help us in setting up a model for commodity indices. Additionally, chapter 1 encompasses a thorough introduction to commodity investment, the history of commodities trading, and the most important derivatives, namely futures and European options on futures. Chapter 2 proposes a model for commodity indices and derives fair prices for the most important derivatives in the commodity markets. It is a Heston model supplemented with a stochastic convenience yield. The Heston model belongs to the class of stochastic volatility models and is currently widely used in stock markets. For the application in the commodity markets, the stochastic convenience yield is included in the drift of the instantaneous spot return process. Motivated by the results of chapter 1, it seems reasonable to model the convenience yield by a mean-reverting Ornstein-Uhlenbeck process. Since trading desks only apply and consider models with closed-form solutions for options, I derive such formulas for commodity futures by solving the corresponding partial differential equation. Additionally, semi-closed-form formulas for European options on futures are determined. The Cauchy problem with respect to these options is more challenging than the first one, but a solution can be provided.
Unlike equities, which typically entitle the holder to a continuing stake in a corporation, commodity futures contracts normally specify a certain date for the delivery of the underlying physical commodity. In order to avoid the delivery process and maintain a futures position, nearby contracts must be sold and contracts that have not yet reached the delivery period must be purchased (so-called rolling). Optimal trading days for selling and buying futures are determined by applying statistical tests for stochastic dominance. Besides the optimization of the rolling procedure for commodity futures, chapter 3 is dedicated to the optimization of the weightings of the commodity futures that make up the index. To this end, I apply the Markowitz approach, or mean-variance optimization. Mean-variance optimization penalizes upside and downside risk equally, whereas most investors do not mind upside risk. To overcome this, I consider in the next step other risk measures, namely Value-at-Risk and Conditional Value-at-Risk. The Conditional Value-at-Risk is generalized to discontinuous cumulative distribution functions of the loss. For continuous loss distributions, the Conditional Value-at-Risk at a given confidence level is defined as the expected loss exceeding the Value-at-Risk. Loss distributions associated with finite sampling or scenario modeling are, however, discontinuous. Various risk measures involving discontinuous loss distributions are introduced and compared. I then apply the theoretical results to the field of portfolio optimization with commodity indices. Furthermore, I examine the behavior of these risk measures graphically. For this purpose, I consider the risk measures as functions of the confidence level. Based on a special discrete loss distribution, the graphs demonstrate the different properties of these risk measures.
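
The distinction between VaR and CVaR for discrete (sampled) loss distributions can be made concrete. The sketch below uses the standard Rockafellar-Uryasev form of CVaR, which remains valid when the loss distribution has atoms; the sample losses are illustrative, not data from the thesis.

```python
import numpy as np

def var_cvar(losses, alpha):
    """VaR and CVaR of an equally weighted discrete loss sample."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    # VaR: the alpha-quantile via the inverted empirical CDF
    var = losses[int(np.ceil(alpha * n)) - 1]
    # CVaR (Rockafellar-Uryasev): VaR plus mean excess scaled by tail mass
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
    return var, cvar

losses = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 10.0]
v, c = var_cvar(losses, alpha=0.9)
print(v, c)  # → 6.0 10.0
```

Note that CVaR is never below VaR and, unlike variance, only penalizes the loss tail, which is the motivation stated above for moving beyond mean-variance optimization.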
The goal of the first section of chapter 4 is to apply the mathematical concept of excursions to the creation of optimal highly automated or algorithmic trading strategies. The idea is to consider the gain of the strategy together with the excursion time it takes to realize the gain. In this section I derive formulas for the Ornstein-Uhlenbeck process. I show that the corresponding formulas can be evaluated quite fast, since the only special function appearing in them is the so-called imaginary error function. This function is already implemented in many programs, such as Maple. My main contribution to this topic is the optimization of the trading strategy for Ornstein-Uhlenbeck processes via the Banach fixed-point theorem. The second section of chapter 4 deals with statistical arbitrage strategies, long-horizon trading opportunities that generate a riskless profit. The results of this section provide an investor with a tool to investigate empirically whether some strategies (for example momentum strategies) constitute statistical arbitrage opportunities or not.
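
The mean-reverting Ornstein-Uhlenbeck dynamics underlying both the convenience-yield model and the excursion analysis can be simulated in a few lines with the Euler-Maruyama scheme; all parameter values below are illustrative.

```python
import numpy as np

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
#   dX_t = kappa * (theta - X_t) dt + sigma dW_t
# (illustrative parameters, not calibrated values from the thesis).
rng = np.random.default_rng(1)
kappa, theta, sigma = 2.0, 0.0, 0.3
dt, n_steps, n_paths = 0.01, 2000, 500

x = np.full(n_paths, 1.0)  # start all paths away from the long-run mean
for _ in range(n_steps):
    drift = kappa * (theta - x) * dt
    diffusion = sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    x += drift + diffusion

# the stationary law is N(theta, sigma^2 / (2 * kappa))
print(f"sample mean {x.mean():.3f}, sample variance {x.var():.4f}")
```

After a few mean-reversion times the paths forget their starting point and fluctuate around `theta` with variance `sigma**2 / (2 * kappa)`; excursions away from this mean are exactly the objects whose durations the excursion formulas describe.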

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. The estimation of topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data. Statistics of Betti numbers can provide further connectivity information. In this work, a method for the topological feature analysis of exploration data is presented, based on so-called persistent homology. Loosely speaking, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distil it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information which can point to areas of interest. As part of this work, the algorithm for computing the persistent Betti numbers via barcodes is implemented in the computer algebra system "Singular".
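
The correspondence between clustering and the zeroth homology group can be illustrated directly: at a fixed scale epsilon, the zeroth Betti number of a point cloud is the number of connected components of the graph linking points closer than epsilon. The tiny cloud below is invented for illustration; varying epsilon and recording component lifetimes is the barcode idea.

```python
import math

def betti0(points, eps):
    """Number of connected components of the eps-neighbourhood graph
    (the zeroth Betti number at scale eps), via union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            if math.dist(p, points[j]) < eps:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(len(points))})

cloud = [(0, 0), (0.1, 0), (0.2, 0.1), (5, 5), (5.1, 5)]
print(betti0(cloud, 0.5), betti0(cloud, 10.0))  # → 2 1
```

The two clusters merge into one component once epsilon exceeds the gap between them; the interval of scales over which a component survives is its bar in the barcode.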

A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets, because the structure, besides the textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and is thus a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees, considering XML structure either in isolation or combined with textual information; mechanisms to support the selection of relevant information from XML trees and the organization of this information into a suitable format for similarity calculation; and efficient algorithms for the large-scale identification of similar, set-represented objects.
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.

A classical conjecture in the representation theory of finite groups, the McKay conjecture, states that for any finite group and prime number p, the number of complex irreducible characters of degree prime to p is equal to the corresponding number for the normalizer of a Sylow p-subgroup. Recently, a reduction theorem was proved by Isaacs, Malle and Navarro: if all simple groups are “good”, then the McKay conjecture holds. In this work we are concerned with the problem of goodness for finite groups of Lie type in their defining characteristic. A simple group is called “good” if certain equivariant bijections between the involved character sets exist. We present a structural approach to the construction of such a bijection by utilizing the so-called Steinberg map. This yields very natural bijections, and we prove most of the desired properties.
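In symbols, the McKay conjecture asserts that for a finite group G, a prime p, and a Sylow p-subgroup P of G,

```latex
\bigl|\{\chi \in \operatorname{Irr}(G) : p \nmid \chi(1)\}\bigr|
\;=\;
\bigl|\{\psi \in \operatorname{Irr}(N_G(P)) : p \nmid \psi(1)\}\bigr|.
```

A tiny sanity check: for G = S_3 and p = 2, the irreducible character degrees of S_3 are 1, 1, 2, so the left-hand side is 2; a Sylow 2-subgroup P is generated by a transposition and is self-normalizing, and the group of order 2 has two linear characters, so the right-hand side is also 2.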

A number of natural products are known that contain an enamide as a key structural feature. This functionality is a very important subunit in various biologically active products and pharmaceutical drug lead compounds. In addition, enamides serve as highly versatile synthetic intermediates, particularly in pericyclic reactions, the formation of heterocycles, cross-coupling, and asymmetric synthesis. As a result, several protocols have been devised for the preparation of enamides. Traditional syntheses, such as the condensation of aldehydes and ketones with amides or the reaction of hydroxylamines with acetic anhydride, require harsh conditions and yield mixtures of E/Z products. Several metal-catalyzed approaches have also been investigated, such as the isomerization of N-allylamides and the catalytic cross-coupling of amides with vinyl halides or pseudohalides. These protocols proceed under milder conditions but suffer from the limited availability of the starting materials. The research described in this dissertation focuses on the efficient and atom-economic preparation of enamides and thioenamides from readily available starting materials. We developed catalyst systems, generated in situ from bis(2-methallyl)-cycloocta-1,5-diene-ruthenium(II), phosphines, and a Lewis acid or base, that efficiently catalyze the addition of primary amides and thioamides to terminal alkynes with exclusive formation of the anti-Markovnikov products in high yield and stereoselectivity under mild reaction conditions. The generality of the newly developed methodologies is demonstrated by broad functional-group tolerance. Furthermore, Markovnikov products were formed via the phosphine-catalyzed addition of cyclic amides to phenylacetylene derivatives. The hydroamidation protocol for primary amides was successfully used in the synthesis of naturally occurring compounds, such as alatamide, lansiumamide A, botryllamides C and E, and the key intermediate in the synthesis of aristolactam.
To investigate the reaction mechanism, the addition of various amides and carboxylic acids to terminal alkynes was performed with deuterium-labeled starting materials and monitored by in situ NMR and GC-MS studies.

This thesis deals with the numerical study of multiscale problems arising in the modelling of fluid flow in plain and porous media. Many of these processes, governed by partial differential equations, are relevant in engineering, industry, and environmental studies. The overall task of modelling and simulating the filtration-related multiscale processes is interdisciplinary, employing physics, mathematics, and computer programming. Keeping these challenges in mind, the main focus is to overcome the limitations of accuracy, speed and memory and to develop novel, efficient numerical algorithms which could, in part or in whole, be utilized by those working in the field of porous media. This work has essentially four parts. A single-grid basic algorithm and a corresponding parallel algorithm to solve the macroscopic Navier-Stokes-Brinkmann model are discussed. An upscaling subgrid algorithm is derived and numerically tested for the same model. Moving a step further along the line of multiscale methods, an iterative Multiscale Finite Volume (iMSFV) method is developed for the Stokes-Darcy system. Finally, the last part of the thesis deals with ways to incorporate changes occurring at a different (meso) scale level. The flow equations are coupled with the Convection-Diffusion-Reaction (CDR) equation, which models the transport and capturing of particle concentrations. By employing the numerical method for the coupled flow and transport problem, we study the interplay between the flow velocity and filtration.

The main focus of this dissertation is the synthesis and characterization of recent zeolites with different pore architectures. The unique shape-selective properties of zeolites are important in various chemical processes, and new zeolites containing novel internal pore architectures are of high interest, since they could lead to further improvement of existing processes or open the way to new applications. This dissertation is organized in the following way: The first part is focused on the synthesis of selected recent zeolites with different pore architectures and their modification to the acidic and bifunctional forms. The second part comprises the characterization of the physicochemical properties of the prepared zeolites by selected physicochemical methods, viz. powder X-ray diffractometry (XRD), N2 adsorption, thermogravimetric analysis (TGA/DTA/MS), ultraviolet-visible (UV-Vis) spectroscopy, atomic absorption spectroscopy (AAS), infrared (IR) spectroscopy, scanning electron microscopy (SEM), 27Al and 29Si magic angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, temperature-programmed reduction (TPR), temperature-programmed desorption of pyridine (pyridine TPD) and adsorption experiments with hydrocarbon adsorptives. The third part of this work is devoted to the application of test reactions, i.e., the acid-catalyzed disproportionation of ethylbenzene and the bifunctional hydroconversion of n-decane, to characterize the pore size and architecture of the prepared zeolites; these reactions are known to be valuable tools for exploring the pore structure of zeolites. Finally, an additional test, viz. the competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene, has been applied to probe the location of noble metals in medium-pore zeolites.
The synthesis of the following zeolite molecular sieves was successfully performed in the frame of this thesis (ranked according to the largest window size in the respective structure): • 14-MR pores: UTD-1, CIT-5, SSZ-53 and IM-12 • 12-MR pores: ITQ-21 and MCM-68 • 10-MR pores: SSZ-35 and MCM-71 All of them were obtained as pure phases (except zeolite MCM-71, which contains a minor impurity phase that is hard to avoid and is also present in the samples shown in the patent literature). The synthesis conditions are very critical with respect to the formation of the zeolite with a given structure; the recommended synthesis recipes are included in this work. Among the 14-MR zeolites, the aluminosilicates UTD-1 (nSi/nAl = 28), CIT-5 (nSi/nAl = 116) and SSZ-53 (nSi/nAl = 55), with unidimensional extra-large pore openings formed from 14-MR rings, exhibit promising catalytic properties with high thermal stability, and they possess strong Brønsted-acid sites. By contrast, the germanosilicate IM-12, with a structure containing 14-MR channels intersecting with 12-MR channels, is unstable toward moisture. It was found that UTD-1 and SSZ-53 are highly active catalysts for the acid-catalyzed disproportionation of ethylbenzene and for n-decane hydroconversion due to their high Brønsted acidity. Concerning their pore structures, the two applied test reactions suggest that UTD-1, CIT-5 and SSZ-53 contain a very open pore system (12-MR or larger), because the product distributions are not hampered by too small pores. ITQ-21, a germanoaluminosilicate zeolite with a three-dimensional pore system and large spherical cages accessible through six 12-MR windows, can be synthesized with nSi/nAl ratios between 27 and >200. It possesses a large amount of Brønsted-acid sites. The aluminosilicate zeolite MCM-68 (nSi/nAl = 9) is an extremely active catalyst in the disproportionation of ethylbenzene and in n-decane hydroconversion.
This is due to the presence of a high density of strong Brønsted-acid sites in its structure. The disproportionation of ethylbenzene suggests that MCM-68 is a large-pore (i.e., at least 12-MR) zeolite, in agreement with its crystallographic structure. In the hydroconversion of n-decane, the presence of tribranched and ethylbranched isomers and a high isopentane yield of 58 % in the hydrocracked products suggest the presence of large (12-MR) pores in its structure. By contrast, a relatively high value for CI* (modified constraint index) of 2.9 suggests the presence of medium (10-MR) pores. As a whole, the results are in line with the crystallographic structure of MCM-68. SSZ-35, a 10-MR zeolite, can be synthesized in a broad range of nSi/nAl ratios between 11 and >500. This zeolite is interesting in terms of shape selectivity resulting from its unusual pore system of unidimensional channels alternating between 10-MR windows and large 18-MR cages. This thermally very stable zeolite contains both strong Brønsted- and strong Lewis-acid sites. The disproportionation of ethylbenzene classifies SSZ-35 as a large-pore zeolite. In the hydroconversion of n-decane, the suppression of bulky ethyloctanes and propylheptane clearly suggests the presence of 10-MR sections in the pore system. By contrast, the low CI* values of 1.2-2.3 and the high isopentane yields of 56-60 % in the hydrocracked products suggest that SSZ-35 also possesses larger intracrystalline voids, i.e., the 18-MR cages. The results from the catalytic characterization are in good agreement with the crystallographic structure of zeolite SSZ-35. It was also found that the nSi/nAl ratio influences the crystallite size and therefore the external surface area. As a consequence, product selectivities are also influenced: the sample with the lowest nSi/nAl ratio, i.e., the smallest crystallites, produces larger amounts of the relatively bulky products.
The formation of these products probably results from the higher conversion, or they are preferentially formed on the external surface of the catalyst. Zeolite MCM-71 (nSi/nAl = 8) possesses an extremely thermally stable structure and contains a high concentration of Brønsted-acid sites. Its structure allows for the separation of n-alkanes from branched alkanes by selective adsorption. MCM-71 exhibits unique shape-selective properties in the product distribution of ethylbenzene disproportionation, which differs from that obtained with the medium-pore zeolite SSZ-35. All reaction parameters classify MCM-71 as a medium-pore zeolite, in good agreement with its reported structure consisting of a two-dimensional network of elliptical 10-MR channels and orthogonal sinusoidal 8-MR channels. The competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene was exploited to show that the major part of the noble metal is located inside the intracrystalline void volume of the medium-pore zeolite SSZ-35.

The aim of this thesis was to link Computational Fluid Dynamics (CFD) and Population Balance Modelling (PBM) to obtain a combined model for the prediction of counter-current liquid-liquid extraction columns. Parts of the doctoral thesis project were done in close cooperation with the Fraunhofer ITWM, whose in-house CFD code, the Finite Pointset Method (FPM), was further developed for two-phase simulations and used for the CFD-PBM coupling. In parallel, the coupling and all simulations were also carried out in the commercial CFD code Fluent. For the solution methods of the PBM there was a close cooperation with Prof. Attarakih from Al-Balqa Applied University in Amman, Jordan, who developed a new adaptive method, the Sectional Quadrature Method of Moments (SQMOM). At the beginning of the project, there was a lack of two-phase liquid-liquid CFD simulations and their experimental validation in the literature. Therefore, stand-alone CFD simulations without PBM were carried out both in FPM and Fluent to test the predictivity of CFD for stirred liquid-liquid extraction columns. The simulations were validated by Particle Image Velocimetry (PIV) measurements. The two-phase PIV measurements became possible by using an iso-optical system, in which the refractive indices of both liquid phases are identical. These investigations were done in segments of two Rotating Disc Contactors with 150 mm and 450 mm diameter to validate CFD at lab and at industrial scale. CFD results for the aqueous-phase velocities, hold-up, droplet rising velocities and turbulent energy dissipation were compared to experimental data. The results show that CFD can predict most phenomena, and there was overall good agreement. In the next steps, different solution methods for the PBM, e.g. the SQMOM and the Quadrature Method of Moments (QMOM), were implemented, varied and tested in Fluent and FPM in a two-fluid model.
In addition, different closures for coalescence and breakage were implemented to predict drop size distributions and Sauter mean diameters in the RDC DN150 column. These results show that a prediction of the droplet size distribution is possible even when no adjustable parameters are used. A combined multi-fluid CFD-PBM model was developed by means of the SQMOM to overcome the drawbacks of the two-fluid approach. The benefits of the multi-fluid approach could be shown, but so could its high computational load. Therefore, finally, the One Primary One Secondary Particle Method (OPOSPM), a very simple and efficient special case of the SQMOM, was introduced into CFD to simulate a full pilot-plant column of the RDC DN150. The OPOSPM offers the possibility of a one-equation model for the solution of the PBM in CFD. The predicted results for the mean droplet diameter and the dispersed-phase hold-up agree well with literature data. The results also show that the new CFD-PBM model is very efficient from a computational point of view (requiring half the computation time of the QMOM and a fifth of that of the method of classes). The overall results give rise to the expectation that the coupled CFD-PBM model will lead to a better, faster and more cost-efficient layout of counter-current extraction columns in the future.

Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)

The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High-performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case, the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis. Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by exploiting the redundant information in the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is more compact and transparent than other model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution, which indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm, the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can then be constructed to generate the residual. This method cannot isolate sensor faults; therefore, a special decision-making logic (DML) is designed, based on the analysis of the individual residual signals, to isolate the fault. The reconfigurable controller is designed with the backstepping technique.
The backstepping method is a recursive Lyapunov-based approach that can deal with nonlinear systems. Some system variables are considered as ``virtual controls'' during the design procedure; the feedback control laws and the associated Lyapunov function are then constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as a fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modeled as a system uncertainty and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller and can be modeled as a parameter uncertainty of the controller; all parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration, the pre-specified control performance can be recovered. The FDI-integrated FTC based on the backstepping technique was implemented successfully on the electro-hydraulic testbed. On-line robust FDI and controller reconfiguration can be achieved; the tracking performance of the controlled system is guaranteed and the considered faults can be tolerated. However, the problem of a theoretical robustness analysis for the time delay caused by the fault diagnosis remains open.
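The recursive design can be illustrated on a textbook double integrator (not the electro-hydraulic model of the thesis): the first state is stabilized by a virtual control, the deviation from it becomes the second design variable, and the composite Lyapunov function certifies stability of the full loop.

```python
def backstepping_control(x1, x2, k1=2.0, k2=2.0):
    """Backstepping law for the chain x1' = x2, x2' = u.
    Step 1: the virtual control alpha = -k1*x1 stabilizes x1 via V1 = x1^2/2.
    Step 2: with z2 = x2 - alpha and alpha' = -k1*x2, the law
    u = -x1 - k2*z2 + alpha' gives V = V1 + z2^2/2 with
    V' = -k1*x1^2 - k2*z2^2 <= 0."""
    alpha = -k1 * x1            # virtual control for the x1-subsystem
    z2 = x2 - alpha             # deviation from the virtual control
    alpha_dot = -k1 * x2        # d/dt of alpha along trajectories
    return -x1 - k2 * z2 + alpha_dot

def simulate(x1=1.0, x2=-0.5, dt=1e-3, steps=20_000):
    """Forward-Euler simulation of the closed loop over 20 seconds."""
    for _ in range(steps):
        u = backstepping_control(x1, x2)
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return x1, x2
```

Both states converge to the origin for any positive gains, which is the property the step-by-step Lyapunov construction guarantees.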

Mrázek et al. [25] proposed a unified approach to curve estimation which combines localization and regularization. Franke et al. [10] used that approach to discuss the case of the regularized local least-squares (RLLS) estimate. In this thesis we will use the unified approach of Mrázek et al. to study some asymptotic properties of local smoothers with regularization. In particular, we shall discuss the Huber M-estimate and its limiting cases towards the L2 and the L1 cases. For the regularization part, we will use quadratic regularization. Then, we will define a more general class of regularization functions. Finally, we will do a Monte Carlo simulation study to compare different types of estimates.
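A concrete instance of such a regularized local M-smoother can be sketched as follows. The Huber weights interpolate between the L2 case (delta → ∞, plain local least squares) and the L1 case (delta → 0, approaching a local median); the quadratic regularizer pulling the estimate toward a reference value m_prev is an illustrative choice, not necessarily the penalty studied in the thesis.

```python
import numpy as np

def huber_irls_weight(r, delta):
    """IRLS weights w(r) = psi(r)/r for the Huber rho:
    quadratic core (weight 1), linear tails (weight delta/|r|)."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > delta
    w[mask] = delta / a[mask]
    return w

def local_huber_smoother(x, y, x0, h, delta, lam=0.0, m_prev=0.0, iters=50):
    """Local constant Huber M-estimate at x0, with Gaussian kernel
    localization (bandwidth h) and quadratic regularization
    lam*(m - m_prev)^2, solved by iteratively reweighted least squares."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)   # localization weights
    m = float(np.median(y))                  # robust starting value
    for _ in range(iters):
        r = y - m
        w = k * huber_irls_weight(r, delta)
        m = (np.sum(w * y) + lam * m_prev) / (np.sum(w) + lam)
    return m
```

On data with a single gross outlier, the estimate stays close to the bulk of the observations, unlike the plain local least-squares fit.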

We tackle the problem of obtaining statistics on the content and structure of XML documents by using summaries which can provide cardinality estimations for XML query expressions. Our focus is a data-centric processing scenario in which a query engine processes such query expressions. We provide three new summary structures, called LESS (Leaf-Element-in-Subtree), LWES (Level-Wide Element Summarization), and EXsum (Element-centered XML Summarization), which are designed to underpin the estimation process in an XML query optimizer. Each of these collects structural statistical information about XML documents, and the latter (EXsum) additionally gathers statistics on document content. Estimation procedures and/or heuristics for specific types of query expressions are developed for each proposed approach. We have incorporated and implemented our proposals in XTC, a native XML database management system (XDBMS). With this common implementation base, we present an empirical and comparative study in which our proposals are stressed against others published in the literature, which were also incorporated into XTC. Furthermore, an analysis is made based on criteria pertinent to the query optimization process.
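As a toy illustration of the general idea of summarizing document structure for cardinality estimation (a bare per-level tag count loosely in the spirit of level-wide summarization; the actual LESS/LWES/EXsum structures of the thesis are considerably more elaborate):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_level_summary(xml_text):
    """Count how often each element name occurs at each document level."""
    root = ET.fromstring(xml_text)
    summary = defaultdict(int)   # (level, tag) -> count
    stack = [(root, 1)]
    while stack:
        node, level = stack.pop()
        summary[(level, node.tag)] += 1
        stack.extend((child, level + 1) for child in node)
    return summary

def estimate(summary, path):
    """Estimate the cardinality of a rooted path like '/bib/book/title'
    by looking up the count of its last tag at the path's depth.
    Without parent-child correlation this is only an upper bound,
    which is exactly the estimation error richer summaries reduce."""
    steps = [s for s in path.split('/') if s]
    return summary.get((len(steps), steps[-1]), 0)
```

For documents where a tag occurs at one level under a single parent, the estimate is exact; otherwise it overestimates, motivating structures that also record path context.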

Wetlands are special areas that offer habitat for terrestrial and aquatic life. They serve as nesting sites for amphibians and breeding regions for birds, and thus support a wide diversity of species. Wetlands also have special importance for the ecosystem: they hinder erosion, they absorb contaminants from water and thereby contribute to clean, potable water, and they prevent flooding. Wetlands must therefore be maintained and conserved, especially since they are vanishing very rapidly due to contamination, excessive agriculture, urban sprawl, dams, etc. This PhD thesis contributes to solving the problems of wetlands affected by urbanization, especially in metropolitan areas. The growth of cities requires more land for settlements; more settlements bring about more urban sprawl, and more urban sprawl deteriorates more natural regions. In this cycle, wetlands too suffer the effects of urbanization. Hence, precautions should be developed to protect wetlands from these effects, and such precautions should include the anticipation of the effects of urbanization. An important tool for conserving wetlands and protecting these regions from cities is land use and land-use planning within city and regional planning. The first step of land-use planning is the determination of settlement appropriateness, which helps to choose correct locations for settlement so that wetlands are affected as little as possible by urban sprawl. This PhD thesis develops a method based on buffer zones around wetlands and thresholds in the basins of wetlands; the method is examined in two case study areas, Mogan Lake and Büyükçekmece Lake. Based on the results for Mogan and Büyükçekmece Lake, the method is generalized to other comparable wetlands that lie near cities and are affected by urban sprawl.

Prostate cancer preferentially metastasizes to the skeleton and abundant evidence exists that osteoblasts specifically support the metastatic process, including cancer stem cell niche formation. At early stages of bone metastasis, crosstalk of prostate cancer cells and osteoblasts through soluble molecules results in a decrease of cancer cell proliferation, accompanied by altered adhesive properties and increased expression of bone-specific genes, or osteomimicry. Osteoblasts synthesize a plethora of biologically active factors, which comprise the unique bone microenvironment. By means of quantitative real-time RT-PCR it was determined that exposure to the osteoblast secretome induced gene expression changes in prostate cancer cells, including the upregulation of osteomimetic genes such as BMP2, AP, COL1A1, OPG and RANKL. IL6 and TGFbeta1 signaling pathway components also became upregulated at early time points. Moreover, osteoblast-released IL6 and TGFbeta1 contributed to the upregulation of OPG mRNA in LNCaP. Thus, the earliest response of prostate cancer cells to osteoblast-released factors, which ultimately cause metastatic cells to assume an osteomimetic phenotype, involved activation of paracrine and autocrine IL6 and TGFbeta signaling. On the other hand, a microarray analysis showed that osteoblasts exposed to the secretome of prostate cancer cells exhibited gene expression alterations suggestive of repressed proliferation, decreased matrix synthesis and inhibited immune response, which together indicate enhanced preosteocytic differentiation. TGFbeta signaling, known to inhibit osteoblast maturation, was strongly suppressed, as shown by elevated expression of negative regulators, downregulation of pathway components and of numerous target genes. 
Transcriptional downregulation of osteoblast inhibitory molecules such as DKK1 and FST also occurred, with concomitant upregulation of the osteoinductive molecules ADM, STC1 and BMP2, and of the transcription factors CBFA1 and HES1, which promote osteoblast differentiation. Finally, the mRNA encoding NPPB, the precursor of a molecule implicated in the inhibition of TGFbeta effects, in bone formation and in stem cell maintenance, became upregulated after coculture both in osteoblasts and in prostate cancer cells. These results provide an insight into potential mechanisms of dysregulated bone formation in metastatic prostate cancer, as well as mechanisms by which osteoblasts might enhance the invasive, osteomimetic and stem cell-like properties of the tumor cells. In particular, the differential modulation of TGFbeta signaling in prostate cancer cells and osteoblasts appears to merit further research.

Mechanical and electrical properties of carbon nanofiber–ceramic nanoparticle–polymer composites
(2010)

The present research is focused on the manufacturing and analysis of composites consisting of a thermosetting polymer reinforced with fillers of nanometric dimensions. The materials were chosen to be an epoxy resin (EP) matrix and two different kinds of fillers: electrically conductive carbon nanofibers (CNFs) and ceramic titanium dioxide (TiO2) and aluminium oxide (Al2O3) nanoparticles. In an initial step of the work, in order to understand the effect that each kind of filler had when added separately to the polymer matrix, CNF–EP and ceramic nanoparticle–EP composites were manufactured and tested. Each type of filler was dispersed in the polymer matrix using a different dispersion technology: CNFs were dispersed in the resin with the aid of a three-roll calender (TRC), whereas a torus bead mill (TML) was used for the ceramic nanoparticles. Calendering proved to be an efficient method to disperse the untreated CNFs in the polymer matrix. The study of the physical properties of the CNF composites showed that the tensile strength and the maximum sustained strain were more sensitive to the state of dispersion of the nanofibers than the elastic modulus, fracture toughness, impact energy and electrical conductivity (for filler loadings above the percolation threshold of the system). Rheological investigation of the uncured CNF–epoxy mixture at different stages of dispersion indicated the formation of an interconnected nanofiber network within the matrix after the initial steps of calendering. CNF–EP composites showed better mechanical performance than the unmodified polymer matrix. However, the tensile modulus and strength of the CNF composites suffered from the presence of remaining nanofiber clusters and did not reach theoretically predicted values. Fracture toughness and resistance against impact did not seem to be as sensitive to the state of nanofiber dispersion and improved consistently with the incorporation of the CNFs.
The electrical conductivity of the CNF composites showed a percolative enhancement of eight orders of magnitude with increasing nanofiber content. The percolation threshold for the achieved level of CNF dispersion was found to be 0.14 vol.%. It was also determined that, for these composites, the main mechanism of electrical transmission was electron tunnelling. Ceramic nanoparticle–EP composites were manufactured using TiO2 and Al2O3 particles as fillers in the epoxy matrix. Mechanical dispersion of the nanoparticles in the liquid polymer by means of a torus bead mill dissolver led to homogeneous distributions of particles in the matrix. Remaining particle agglomerates had a mean size of 80 nm; however, micrometer-sized agglomerates could clearly be observed in the microscopic analysis of the composites, especially in the TiO2 case. The inclusion of the nanoparticles in the epoxy resin resulted in a general improvement of the modulus, strength, maximum sustained strain, fracture toughness and impact energy of the polymer matrix. The nanoparticles were able to overcome the stiffness/toughness trade-off. On the other hand, nanoparticle–EP composites showed lower electrical conductivity than the neat epoxy. In general, there were no significant differences between the incorporation of TiO2 and Al2O3 particles. Based on the previous results, CNFs and nanoparticles were combined as fillers to create a nanocomposite that could benefit from the electrical properties provided by the conductive CNFs and, at the same time, have improved mechanical performance thanks to the presence of the well-dispersed ceramic nanoparticles. Nanoparticles and CNFs were dispersed separately to create two batches, which were then blended together in a dissolver mixer. This method proved effective in creating well-dispersed CNF–nanoparticle–epoxy composites with improved electrical and mechanical properties compared with the neat polymer matrix.
The well-dispersed ceramic nanofillers introduced additional energy-dissipating mechanisms into the CNF–EP composites that resulted in an improvement of their mechanical performance. At high volume loadings of nanoparticles, most of the reinforcement came from the presence of the nanoparticles in the polymer matrix; therefore, the observed trends were, in essence, similar to the ones observed in the ceramic nanoparticle–EP composites. The enhancement in the mechanical performance of the CNF composites with the inclusion of ceramic nanoparticles came at the price of an increase in the percolation threshold and a reduction of the electrical conductivity of the CNF–nanoparticle–EP composites compared with the CNF–EP materials. A modified Weber and Kamal fiber contact model (FCM) was used to explain the electrical behaviour of the CNF–nanoparticle–EP composites once percolation was achieved. This model was able to fit the experimentally measured conductivity of these composites rather accurately.
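The percolative conductivity jump described above is conventionally modelled by the power law σ = σ0(φ − φc)^t above the threshold, and extracting (σ0, φc, t) from measured data is a simple fitting exercise: a grid search over candidate thresholds φc combined with a linear least-squares fit in log-log space. The sketch below is generic and uses illustrative synthetic data, not the fiber contact model or the thesis measurements.

```python
import numpy as np

def fit_percolation(phi, sigma, phic_grid=None):
    """Fit sigma = sigma0 * (phi - phic)**t above the percolation threshold.
    For each candidate phic, fit log(sigma) linearly against log(phi - phic)
    and keep the candidate with the smallest squared residual.
    Returns (sigma0, phic, t)."""
    if phic_grid is None:
        # thresholds strictly below the smallest measured loading
        phic_grid = np.linspace(0.0, phi.min() * 0.999, 200)
    best = None
    for phic in phic_grid:
        x = np.log(phi - phic)
        y = np.log(sigma)
        t, log_s0 = np.polyfit(x, y, 1)      # slope = exponent t
        resid = y - (t * x + log_s0)
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, np.exp(log_s0), phic, t)
    return best[1], best[2], best[3]
```

With clean synthetic data the fit recovers the threshold and the exponent to within the grid resolution.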

Tropical intersection theory
(2010)

This thesis consists of five chapters: Chapter 1 contains the basics of the theory and is essential for the rest of the thesis. Chapters 2-5 are to a large extent independent of each other and can be read separately. - Chapter 1: Foundations of tropical intersection theory In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem. - Chapter 2: Tropical cycles with real slopes and numerical equivalence In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail. - Chapter 3: Tropical intersection products on smooth varieties We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties. 
- Chapter 4: Weil and Cartier divisors under tropical modifications First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors. - Chapter 5: Chern classes of tropical vector bundles We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphisms.
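The tropical Bézout theorem proved in Chapter 1 can be stated concretely for plane curves: for tropical curves C and D in R^2 of degrees d and e, the stable intersection has total multiplicity

```latex
\deg(C \cdot D) \;=\; \deg(C)\cdot\deg(D) \;=\; d\,e .
```

For instance, two tropical lines (degree-1 curves, each consisting of three rays in the directions (-1,0), (0,-1) and (1,1)) in general position meet in exactly one point, of multiplicity one.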