### Refine

#### Year of publication

- 2010 (56)

#### Document Type

- Report (26)
- Doctoral Thesis (22)
- Preprint (4)
- Bachelor Thesis (1)
- Diploma Thesis (1)
- Master's Thesis (1)
- Periodical Part (1)

#### Language

- English (56)

#### Keywords

- Erwarteter Nutzen (2)
- Lagrangian mechanics (2)
- Numerische Strömungssimulation (2)
- Portfolio Selection (2)
- Stochastische dynamische Optimierung (2)
- numerical upscaling (2)
- optimal control (2)
- portfolio choice (2)
- work effort (2)
- Abstraction (1)

We consider a highly qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market including the share of the smaller listed company. The utility maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic utility. The power utility case is discussed as well. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer a lower salary: the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries, and the IT sector.
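
The closed-form log-utility results are in the spirit of the classical Merton problem; for orientation (and without the work-effort and own-company-share features of the model above), the textbook infinite-horizon log-utility benchmark with subjective discount rate $\rho$ prescribes a constant risky fraction and consumption proportional to wealth:

$$\pi^{*}=\frac{\mu-r}{\sigma^{2}},\qquad c^{*}_{t}=\rho\,W_{t},$$

where $\mu$ and $\sigma$ are the risky asset's drift and volatility, $r$ is the riskless rate, and $W_t$ is current wealth.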

We introduce a class of models for time series of counts which include INGARCH-type models as well as log linear models for conditionally Poisson distributed data. For those processes, we formulate simple conditions for stationarity and weak dependence with a geometric rate. The coupling argument used in the proof serves as a template for a similar treatment of integer-valued time series models based on other types of thinning operations.
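
As a minimal illustration of this model class (not the paper's general formulation), the conditionally Poisson INGARCH(1,1) recursion can be simulated directly; the condition α + β < 1 checked below is the standard first-order stationarity condition, and all parameter values are illustrative:

```python
import numpy as np

def simulate_ingarch(omega, alpha, beta, n, seed=0):
    """Simulate an INGARCH(1,1) count series:
    Y_t | past ~ Poisson(lam_t),  lam_t = omega + alpha*lam_{t-1} + beta*Y_{t-1}."""
    assert alpha + beta < 1, "alpha + beta < 1 is the standard stationarity condition"
    rng = np.random.default_rng(seed)
    lam = omega / (1 - alpha - beta)   # stationary mean as starting intensity
    y = np.empty(n, dtype=int)
    for t in range(n):
        y[t] = rng.poisson(lam)
        lam = omega + alpha * lam + beta * y[t]
    return y

counts = simulate_ingarch(omega=1.0, alpha=0.3, beta=0.4, n=20000)
# the stationary mean of Y_t is omega / (1 - alpha - beta) = 1 / 0.3
print(counts.mean())
```

The long-run sample mean should be close to ω/(1 − α − β) ≈ 3.33 for these parameters.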

We present a two-scale finite element method for solving Brinkman’s and Darcy’s equations. These systems of equations model fluid flows in highly porous and porous media, respectively. The method uses a recently proposed discontinuous Galerkin FEM for Stokes’ equations by Wang and Ye and the concept of subgrid approximation developed by Arbogast for Darcy’s equations. In order to reduce the “resonance error” and to ensure convergence to the global fine solution the algorithm is put in the framework of alternating Schwarz iterations using subdomains around the coarse-grid boundaries. The discussed algorithms are implemented using the Deal.II finite element library and are tested on a number of model problems.

Universal Shortest Paths
(2010)

We introduce the universal shortest path problem (Univ-SPP), which generalizes both classical and new shortest path problems. Starting with the definition of the even more general universal combinatorial optimization problem (Univ-COP), we show that a variety of objective functions for general combinatorial problems can be modeled if all feasible solutions have the same cardinality. Since this assumption is, in general, not satisfied when considering shortest paths, we give two alternative definitions for Univ-SPP: one based on a sequence of cardinality-constrained subproblems, the other using an auxiliary construction to establish uniform length for all paths between source and sink. Both alternatives are shown to be (strongly) NP-hard, and they can be formulated as quadratic integer or mixed-integer linear programs. On graphs with specific assumptions on edge costs and path lengths, the second version of Univ-SPP can be solved as a classical sum shortest path problem.
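
The role of the objective function can be illustrated on a toy graph (this is only a sketch of the general idea, not the paper's Univ-SPP formulation): the same label-setting search, parameterized by a monotone objective over the tuple of edge costs, returns different optimal paths for the sum and bottleneck objectives.

```python
import heapq

def best_path(graph, s, t, objective):
    """Generic label-setting (Dijkstra-like) search; 'objective' maps the tuple
    of edge costs on a path to a scalar and must be monotone under extension."""
    heap = [(objective(()), s, ())]
    best = {}
    while heap:
        val, u, costs = heapq.heappop(heap)
        if u in best:
            continue
        best[u] = (val, costs)
        for v, c in graph.get(u, []):
            if v not in best:
                heapq.heappush(heap, (objective(costs + (c,)), v, costs + (c,)))
    return best[t]

graph = {'s': [('a', 1), ('b', 4)], 'a': [('t', 6)], 'b': [('t', 4)]}
sum_val, sum_costs = best_path(graph, 's', 't', sum)                      # s-a-t: 1+6 = 7
bot_val, bot_costs = best_path(graph, 's', 't', lambda cs: max(cs, default=0))  # s-b-t: max(4,4) = 4
print(sum_val, sum_costs, bot_val, bot_costs)
```

The sum objective prefers s-a-t, while the bottleneck objective prefers s-b-t, although both searches run on the same graph.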

Tropical intersection theory
(2010)

This thesis consists of five chapters. Chapter 1 contains the basics of the theory and is essential for the rest of the thesis; chapters 2-5 are to a large extent independent of each other and can be read separately.

- Chapter 1: Foundations of tropical intersection theory. In this first chapter we set up the foundations of a tropical intersection theory covering many concepts and tools of its counterpart in algebraic geometry, such as affine tropical cycles, Cartier divisors, morphisms of tropical cycles, pull-backs of Cartier divisors, push-forwards of cycles and an intersection product of Cartier divisors and cycles. Afterwards, we generalize these concepts to abstract tropical cycles and introduce a concept of rational equivalence. Finally, we set up an intersection product of cycles and prove that every cycle is rationally equivalent to some affine cycle in the special case that our ambient cycle is R^n. We use this result to show that rational and numerical equivalence agree in this case and prove a tropical Bézout's theorem.
- Chapter 2: Tropical cycles with real slopes and numerical equivalence. In this chapter we generalize our definitions of tropical cycles to polyhedral complexes with non-rational slopes. We use this new definition to show that if our ambient cycle is a fan then every subcycle is numerically equivalent to some affine cycle. Finally, we restrict ourselves to cycles in R^n that are "generic" in some sense and study the concept of numerical equivalence in more detail.
- Chapter 3: Tropical intersection products on smooth varieties. We define an intersection product of tropical cycles on tropical linear spaces L^n_k and on other, related fans. Then, we use this result to obtain an intersection product of cycles on any "smooth" tropical variety. Finally, we use the intersection product to introduce a concept of pull-backs of cycles along morphisms of smooth tropical varieties and prove that this pull-back has all expected properties.
- Chapter 4: Weil and Cartier divisors under tropical modifications. First, we introduce "modifications" and "contractions" and study their basic properties. After that, we prove that under some further assumptions a one-to-one correspondence of Weil and Cartier divisors is preserved by modifications. In particular, we can prove that on any smooth tropical variety we have a one-to-one correspondence of Weil and Cartier divisors.
- Chapter 5: Chern classes of tropical vector bundles. We give definitions of tropical vector bundles and rational sections of tropical vector bundles. We use these rational sections to define the Chern classes of such a tropical vector bundle. Moreover, we prove that these Chern classes have all expected properties. Finally, we classify all tropical vector bundles on an elliptic curve up to isomorphism.

The Train Marshalling Problem consists of rearranging an incoming train in a marshalling yard in such a way that cars with the same destinations appear consecutively in the final train and the number of needed sorting tracks is minimized. Besides an initial roll-in operation, just one pull-out operation is allowed. This problem was introduced by Dahlhaus et al. who also showed that the problem is NP-complete. In this paper, we provide a new lower bound on the optimal objective value by partitioning an appropriate interval graph. Furthermore, we consider the corresponding online problem, for which we provide upper and lower bounds on the competitiveness and a corresponding optimal deterministic online algorithm. We provide an experimental evaluation of our lower bound and algorithm which shows the practical tightness of the results.
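
The two permitted yard operations can be sketched in a few lines (the instance and track assignment below are illustrative): cars roll in onto their assigned tracks in incoming order, a single pull-out concatenates the tracks, and a checker verifies that equal destinations end up consecutive.

```python
def marshal(destinations, assignment, k):
    """Roll-in: each car (in incoming order) goes onto its assigned track;
    pull-out: the k tracks are concatenated in order to form the outgoing train."""
    tracks = [[] for _ in range(k)]
    for dest, tr in zip(destinations, assignment):
        tracks[tr].append(dest)
    return [d for track in tracks for d in track]

def destinations_consecutive(train):
    """Check that all cars with the same destination appear consecutively."""
    finished, prev = set(), None
    for d in train:
        if d != prev:
            if d in finished:
                return False
            if prev is not None:
                finished.add(prev)
            prev = d
    return True

incoming   = [1, 2, 1, 3, 2, 3]   # destinations in incoming order
assignment = [1, 0, 1, 1, 0, 1]   # a feasible assignment using k = 2 tracks
outgoing = marshal(incoming, assignment, k=2)
print(outgoing, destinations_consecutive(outgoing))
```

For this instance two tracks suffice: track 0 collects [2, 2] and track 1 collects [1, 1, 3, 3], so the pull-out yields [2, 2, 1, 1, 3, 3].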

The purpose of exploration in the oil industry is to "discover" an oil-containing geological formation from exploration data. In the context of this PhD project, this geological formation plays the role of a geometrical object which may have any shape. The exploration data may be viewed as a "cloud of points", that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimating topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data, and the statistics of Betti numbers provide further connectivity information. This work presents a method for topological feature analysis of exploration data based on so-called persistent homology. Loosely speaking, this is the homology of a growing space that captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distill it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information which can point to areas of interest. As part of this work, the algorithm for computing the persistent Betti numbers via barcodes was implemented in the computer algebra system "Singular".
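
As a small sketch of the clustering/zeroth-homology connection mentioned above (a naive O(n²) illustration, not the barcode algorithm implemented in Singular): the number of connected components of the ε-neighborhood graph of a point cloud, computed here with a union-find over pairwise distances, is the rank of the zeroth homology group at scale ε.

```python
import numpy as np

def betti0(points, eps):
    """Number of connected components (clusters) of the eps-neighborhood graph:
    the zeroth Betti number of the point cloud at scale eps."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)   # union the two components
    return len({find(i) for i in range(n)})

pts = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0], [5.4, 5.0]])
print(betti0(pts, 1.0))    # two well-separated clusters
print(betti0(pts, 10.0))   # at a large scale everything is connected
```

Sweeping ε and recording where components merge is exactly the Betti-0 part of a persistence barcode.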

We present a rigorous derivation of the equations and interface conditions for ion, charge, and heat transport in Li-ion insertion batteries. The derivation is based exclusively on universally accepted principles of nonequilibrium thermodynamics and the assumption of a one-step intercalation reaction at the interface of electrolyte and active particles. Without loss of generality, the transport in the active particle is assumed to be isotropic. The electrolyte is described as a fully dissociated salt in a neutral solvent. The presented theory is valid for transport on a spatial scale for which local charge neutrality holds, i.e., beyond the scale of the diffuse double layer. Charge neutrality is explicitly used to determine the correct set of thermodynamically independent variables. The theory guarantees strictly positive entropy production. The various contributions to the Peltier coefficients for the interface between the active particles and the electrolyte, as well as the contributions to the heat of mixing, are obtained as a result of the theory.

The main focus of this dissertation is the synthesis and characterization of more recent zeolites with different pore architectures. The unique shape-selective properties of the zeolites are important in various chemical processes and the new zeolites containing novel internal pore architectures are of high interest, since they could lead to further improvement of existing processes or open the way to new applications. This dissertation is organized in the following way: The first part is focused on the synthesis of selected recent zeolites with different pore architectures and their modification to the acidic and bifunctional forms. The second part comprises the characterization of the physicochemical properties of the prepared zeolites by selected physicochemical methods, viz. powder X-ray diffractometry (XRD), N2 adsorption, thermogravimetric analysis (TGA/DTA/MS), ultraviolet-visible (UV-Vis) spectroscopy, atomic absorption spectroscopy (AAS), infrared (IR) spectroscopy, scanning electron microscopy (SEM), 27Al and 29Si magic angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, temperature-programmed reduction (TPR), temperature-programmed desorption of pyridine (pyridine TPD) and adsorption experiments with hydrocarbon adsorptives. The third part of this work is devoted to the application of test reactions, i.e., the acid catalyzed disproportionation of ethylbenzene and the bifunctional hydroconversion of n-decane, to characterize the pore size and architecture of the prepared zeolites. They are known to be valuable tools for exploring the pore structure of zeolites. Finally, an additional test, viz. the competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene, has been applied to probe the location of noble metals in medium pore zeolite. 
The synthesis of the following zeolite molecular sieves was successfully performed in the frame of this thesis (ranked according to the largest window size in the respective structure):

- 14-MR pores: UTD-1, CIT-5, SSZ-53 and IM-12
- 12-MR pores: ITQ-21 and MCM-68
- 10-MR pores: SSZ-35 and MCM-71

All of them were obtained as pure phases (except zeolite MCM-71, with a minor impurity phase that is hard to avoid and also present in samples shown in the patent literature). The synthesis conditions are very critical with respect to the formation of the zeolite with a given structure; the recommended synthesis recipes are included in this work. Among the 14-MR zeolites, the aluminosilicates UTD-1 (nSi/nAl = 28), CIT-5 (nSi/nAl = 116) and SSZ-53 (nSi/nAl = 55), with unidimensional extra-large pore openings formed from 14-MR rings, exhibit promising catalytic properties with high thermal stability, and they possess strong Brønsted-acid sites. By contrast, the germanosilicate IM-12, with a structure containing 14-MR channels intersecting with 12-MR channels, is unstable toward moisture. It was found that UTD-1 and SSZ-53 zeolites are highly active catalysts for the acid catalyzed disproportionation of ethylbenzene and the n-decane hydroconversion due to their high Brønsted acidity. With regard to their pore structures, the two applied test reactions suggest that UTD-1, CIT-5 and SSZ-53 zeolites contain a very open pore system (12-MR or larger) because the product distributions are not hampered by too small pores. ITQ-21, a germanoaluminosilicate zeolite with a three-dimensional pore system and large spherical cages accessible through six 12-MR windows, can be synthesized with nSi/nAl ratios between 27 and >200. It possesses a large amount of Brønsted-acid sites. The aluminosilicate zeolite MCM-68 (nSi/nAl = 9) is an extremely active catalyst in the disproportionation of ethylbenzene and in the n-decane hydroconversion.
This is due to the presence of a high density of strong Brønsted-acid sites in its structure. The disproportionation of ethylbenzene suggests that MCM-68 is a large pore (i.e., at least 12-MR) zeolite, in agreement with its crystallographic structure. In the hydroconversion of n-decane, the presence of tribranched and ethylbranched isomers and a high isopentane yield of 58 % in the hydrocracked products suggest the presence of large (12-MR) pores in its structure. By contrast, a relatively high value for CI* (modified constraint index) of 2.9 suggests the presence of medium (10-MR) pores in its structure. As a whole, the results are in line with the crystallographic structure of MCM-68. SSZ-35, a 10-MR zeolite, can be synthesized in a broad range of nSi/nAl ratios between 11 and >500. This zeolite is interesting in terms of shape selectivity resulting from its unusual pore system having unidimensional channels alternating between 10-MR windows and large 18-MR cages. This thermally very stable zeolite contains both strong Brønsted- and strong Lewis-acid sites. The disproportionation of ethylbenzene classifies SSZ-35 as a large pore zeolite. In the hydroconversion of n-decane, the suppression of bulky ethyloctanes and propylheptane clearly suggests the presence of 10-MR sections in the pore system. By contrast, the low CI* values of 1.2-2.3 and the high isopentane yields of 56-60 % in the hydrocracked products suggest that SSZ-35 also possesses larger intracrystalline voids, i.e., the 18-MR cages. The results from the catalytic characterization are in good agreement with the crystallographic structure of zeolite SSZ-35. It was also found that the nSi/nAl ratio influences the crystallite size and therefore the external surface area. As a consequence, product selectivities are also influenced: the sample with the lowest nSi/nAl ratio, i.e. the smallest crystallite size, produces larger amounts of the relatively bulky products.
The formation of these products probably results from the higher conversion, or they are preferentially formed on the external surface area of the catalyst. Zeolite MCM-71 (nSi/nAl = 8) possesses an extremely thermally stable structure and contains a high concentration of Brønsted-acid sites. Its structure allows for the separation of n-alkanes from branched alkanes by selective adsorption. MCM-71 exhibits unique shape-selective properties in the product distribution of ethylbenzene disproportionation, different from that obtained with the medium pore zeolite SSZ-35. All reaction parameters classify MCM-71 as a medium pore zeolite, in good agreement with its reported structure consisting of a two-dimensional network of elliptical 10-MR channels and orthogonal sinusoidal 8-MR channels. The competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene was exploited to show that the major part of the noble metal is located inside the intracrystalline void volume of the medium pore zeolite SSZ-35.

We tackle the problem of obtaining statistics on the content and structure of XML documents by using summaries which may provide cardinality estimations for XML query expressions. Our focus is a data-centric processing scenario in which we use a query engine to process such query expressions. We provide three new summary structures called LESS (Leaf-Element-in-Subtree), LWES (Level-Wide Element Summarization), and EXsum (Element-centered XML Summarization), which are targeted at supporting the estimation process in an XML query optimizer. Each of these collects structural statistical information of XML documents, and the latter (EXsum) gathers, in addition, statistics on document content. Estimation procedures and/or heuristics for specific types of query expressions are developed for each proposed approach. We have incorporated and implemented our proposals in XTC, a native XML database management system (XDBMS). With this common implementation base, we present an empirical and comparative study in which our proposals are stressed against others published in the literature, which were also incorporated into XTC. Furthermore, an analysis is made based on criteria pertinent to a query optimizer.
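
A toy version of a level-wide structural summary (much simpler than LESS, LWES, or EXsum; names and document below are illustrative) can be built by counting element tags per document level; a path step's cardinality is then estimated by a simple lookup.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def level_summary(xml_text):
    """Toy structural summary: per document level, count the occurrences
    of each element tag (in the spirit of a level-wide summarization)."""
    root = ET.fromstring(xml_text)
    summary = Counter()
    stack = [(root, 0)]
    while stack:
        node, lvl = stack.pop()
        summary[(lvl, node.tag)] += 1
        stack.extend((child, lvl + 1) for child in node)
    return summary

doc = "<lib><book><title/></book><book><title/><author/></book></lib>"
s = level_summary(doc)
# a cardinality estimate for the path /lib/book/title is the stored count at level 2
print(s[(2, 'title')])
```

Such a summary is lossy by design: it trades exactness for a footprint small enough to live inside a query optimizer.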

In this paper, a multi-period supply chain network design problem is addressed. Several aspects of practical relevance are considered, such as those related to the financial decisions that must be accounted for by a company managing a supply chain. The decisions to be made comprise the location of the facilities, the flow of commodities, and the investments in activities alternative to those directly related to the supply chain design. Uncertainty is assumed for demand and interest rates and is described by a set of scenarios; hence, for the entire planning horizon, a tree of scenarios is built. A target is set for the return on investment, and the risk of falling below it is measured and accounted for. The service level is also measured and included in the objective function. The problem is formulated as a multi-stage stochastic mixed-integer linear programming problem. The goal is to maximize the total financial benefit. An alternative formulation based upon the paths in the scenario tree is also proposed. A methodology for measuring the value of the stochastic solution in this problem is discussed. Computational tests using randomly generated data show that the stochastic approach is worth considering in this type of problem.
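
The "value of the stochastic solution" idea mentioned above can be illustrated on a toy single-product newsvendor with three demand scenarios (not the paper's network design model; all numbers are illustrative): compare the expected profit of the plan optimized for mean demand with the true stochastic optimum.

```python
import numpy as np

def profit(order, demand, price=10.0, cost=6.0):
    """Newsvendor profit: revenue on sold units minus purchase cost."""
    return price * np.minimum(order, demand) - cost * order

demands = np.array([50, 100, 150])     # demand scenarios
probs   = np.array([0.2, 0.3, 0.5])    # scenario probabilities

# expected-value problem: plan for the mean demand
order_ev = demands @ probs                       # = 115
ev_profit = probs @ profit(order_ev, demands)

# stochastic problem: choose the order maximizing expected profit
orders = np.arange(0, 201)
exp_profits = np.array([probs @ profit(q, demands) for q in orders])
order_sp = orders[exp_profits.argmax()]
vss = exp_profits.max() - ev_profit              # value of the stochastic solution
print(order_sp, round(vss, 2))
```

Here the deterministic plan orders 115 units for an expected profit of 285, while the stochastic optimum orders 100 units for 300, so the value of the stochastic solution is 15.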

In recent years the consumption of polymer based composites in many engineering
fields where friction and wear are critical issues has increased enormously. Satisfying
the growing industrial needs can be successful only if the costly, labor-intensive and
time-consuming cycle of manufacturing, followed by testing, and additionally followed
by further trial-and-error compounding is reduced or even avoided. Therefore, the
objective is to get in advance as much fundamental understanding as possible of the
interaction between various composite components and that of the composite against
its counterface. Sliding wear of polymers and polymer composites involves very
complex and highly nonlinear processes. Consequently, to develop analytical models
for the simulation of the sliding wear behavior of these materials is extremely difficult
or even impossible. It necessitates simplifying hypotheses and thus compromising
accuracy. An alternative way, discussed in this work, is an artificial neural network
based modeling. The principal benefit of artificial neural networks (ANNs) is their ability
to learn patterns through a training experience from experimentally generated data
using self-organizing capabilities.
Initially, the potential of using ANNs for the prediction of friction and wear properties
of polymers and polymer composites was explored using already published friction
and wear data of 101 independent fretting wear tests of polyamide 46 (PA 46) composites.
For comparison, ANNs were also applied to model the mechanical properties
of polymer composites using a commercial data bank of 93 pairs of independent Izod
impact, tension and bending tests of polyamide 66 (PA 66) composites. Different
stages in the development of ANN models such as selection of optimum network
configuration, multi-dimensional modeling, training and testing of the network were
addressed at length. The results of neural network predictions appeared viable and
very promising for their application in the field of tribology.
A case example was subsequently presented to model the sliding friction and wear
properties of polymer composites by using newly measured datasets of polyphenylene
sulfide (PPS) matrix composites. The composites were prepared by twin-screw extrusion and injection molding. The dataset investigated was generated from
pin-on-disc testing in dry sliding conditions under various contact pressures and sliding speeds. Initially the focus was placed on exploring the possible synergistic effects
between traditional reinforcements and particulate fillers, with special emphasis on
sub-micro TiO2 particles (300 nm average diameter) and short carbon fibers (SCFs).
Subsequently, the lubricating contributions of graphite (Gr) and polytetrafluoroethylene
(PTFE) in these multiphase materials were also studied. ANNs were trained
using a conjugate gradient with Powell/Beale restarts (CGB) algorithm as well as a
variable learning rate backpropagation (GDX) algorithm in order to learn composition-property relationships between the inputs and outputs of the system. Likewise, the
influence of the operating parameters (contact pressure (p) and sliding speed (v))
was also examined. The incorporation of short carbon fibers and sub-micro TiO2
particles resulted in both a lower friction and a great improvement in the wear resistance
of the PPS composites within the low and medium pv-range. The mechanical
characterization and surface analysis after wear testing revealed that this beneficial
tribological performance could be explained by the following phenomena: (i)
enhanced mechanical properties through the inclusion of short carbon fibers, (ii)
favorable protection of the short carbon fibers by the sub-micro particles diminishing
fiber breakage and removal, (iii) self-repairing effects with the sub-micro particles, (iv)
formation of quasi-spherical transfer particles free to roll at the tribological contact.
Still, in the high pv-range stick-slip sliding motion was observed with these hybrid
materials. The adverse stick-slip behavior could be effectively eliminated through the
additional inclusion of solid lubricant reservoirs (Gr and PTFE), analogous to the
lubricants used in real ball bearings. Likewise, solid lubricants improved the wear resistance
of the multiphase system PPS/SCF/TiO2 in the high pv-range (≥ 9 MPa·m/s).
Yet, their positive effect, especially that of graphite, was limited up to certain volume
fraction and loading conditions. The optimum results were obtained by blending
comparatively low amounts of Gr and PTFE (≈ 5 vol.% from each additive). An introduction
of softer sub-micro particles did not bring the desired ball bearing effect and
fiber protection. The ANN prediction profiles for PPS tribo-compounds exhibited very
good or even perfect agreement with the measured results demonstrating that the
target of achieving a well trained network was reached. The results of employing a
validation test dataset indicated that the trained neural network acquired enough
generalization capability to extend what it has learned about the training patterns to
data that it has not seen before from the same knowledge domain. The optimal brain surgeon (OBS) algorithm was employed to prune the network topology by eliminating non-useful weights and biases in order to determine whether the performance of the pruned network was better than that of the fully-connected network.
Pruning resulted in accuracy gains over the fully-connected network, but induced
higher computational cost in coding the data in the required format. Within an importance
analysis, the sensitivity of the network response variable (frictional coefficient
or specific wear rate) to characteristic mechanical and thermo-mechanical input variables
was examined. The goal was to study the relationships between the diverse
input variables and the characteristic tribological parameters for a better understanding
of the sliding wear process with these materials. Finally, it was demonstrated that
the well-trained networks might be applied to visualize what will happen if a certain
filler is introduced into a composite, or what impact the testing conditions have
on the frictional coefficient and specific wear rate. In this way, they might be a
helpful tool for design engineers and materials experts to explore materials and to
make reasoned selection and substitution decisions early in the design phase, when
changes incur the least cost.
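
A minimal sketch of the ANN approach on synthetic data (illustrative inputs and target, not the measured PPS datasets; plain full-batch gradient descent rather than the CGB/GDX algorithms used in the work): a one-hidden-layer network learns a smooth composition-to-property mapping.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "composition -> property" data: 2 inputs (filler fractions), 1 output
X = rng.uniform(0, 1, size=(200, 2))
y = (0.5 + 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1]).reshape(-1, 1)

# one hidden tanh layer trained by gradient descent on the mean squared error
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    err = (H @ W2 + b2) - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)    # backpropagation through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(mse)
```

After training, the network reproduces the synthetic composition-property surface with a small mean squared error on the training set; generalization would of course have to be checked on held-out data, as the thesis does with its validation set.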

We study global and local robustness properties of several estimators for shape and scale in a generalized Pareto model. The estimators considered in this paper cover maximum likelihood estimators, skipped maximum likelihood estimators, moment-based estimators, Cramér-von-Mises minimum distance estimators, and, as a special case of quantile-based estimators, the Pickands estimator as well as variants of the latter tuned for a higher finite sample breakdown point (FSBP) and lower variance. We further consider an estimator matching the population median and median of absolute deviations to the empirical ones (MedMad); again, in order to improve its FSBP, we propose a variant using a suitable asymmetric Mad as constituent, which may be tuned to achieve an expected FSBP of 34%. These estimators are compared to one-step estimators distinguished as optimal in the shrinking neighborhood setting, i.e., the most bias-robust estimator minimizing the maximal (asymptotic) bias and the estimator minimizing the maximal (asymptotic) MSE. For each of these estimators, we determine the FSBP, the influence function, as well as statistical accuracy measured by asymptotic bias, variance, and mean squared error, all evaluated uniformly on shrinking convex contamination neighborhoods. Finally, we check these asymptotic theoretical findings against finite sample behavior by an extensive simulation study.
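
The Pickands estimator mentioned above has a simple closed form in three upper order statistics; the sketch below checks it on a simulated GPD sample (inverse-CDF sampling; the choice k = 2000 and the seed are illustrative).

```python
import numpy as np

def pickands_xi(sample, k):
    """Pickands estimator of the GPD shape parameter xi, built from three
    upper order statistics (ascending sort, 0-based indexing)."""
    x = np.sort(sample)
    n = len(x)
    assert 4 * k <= n
    return np.log((x[n - k] - x[n - 2 * k]) / (x[n - 2 * k] - x[n - 4 * k])) / np.log(2)

rng = np.random.default_rng(42)
xi, sigma = 0.5, 1.0
u = rng.uniform(size=100000)
gpd = sigma / xi * (u ** (-xi) - 1.0)   # inverse-CDF sample from GPD(xi, sigma)
est = pickands_xi(gpd, k=2000)
print(est)   # should be close to the true shape 0.5
```

For an exact GPD sample the population version of this quantile ratio equals 2^ξ, so the estimator is consistent; its comparatively high variance is one reason the paper studies tuned variants.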

Simulation of multibody systems (MBS) is an inherent part of the development and design of complex mechanical systems. Moreover, simulation during operation has gained in importance in recent years, e.g. for HIL, MIL, or monitoring applications. In this paper we discuss the numerical simulation of multibody systems on different platforms. The main section of this paper deals with the simulation of an established truck model [9] on different platforms: one microcontroller and two real-time processor boards. In addition to numerical C-code, the latter platforms provide the possibility to build the model with a commercial MBS tool, which is also investigated. A survey of different ways of generating code and equations of MBS models is given and discussed concerning handling, possible limitations, and performance. The presented benchmarks are processed under the terms of on-board real-time applications. A further important restriction, caused by the real-time requirement, is a fixed integration step size. Hence, carefully chosen numerical integration algorithms are necessary, especially in the case of closed loops in the model. We investigate linearly implicit time integration methods with fixed step size, so-called Rosenbrock methods, and compare them with respect to their accuracy and performance on the tested processors.
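
The linearly implicit idea with a fixed step size can be sketched with the simplest Rosenbrock-type member, the linearly implicit Euler scheme: one linear solve per step, no Newton iteration, and stability on stiff problems where explicit Euler with the same step blows up. The scalar test problem below is illustrative, not the truck model of the paper.

```python
import numpy as np

def lin_implicit_euler(f, jac, y0, t0, t1, h):
    """Linearly implicit Euler (simplest Rosenbrock-type scheme):
    y_{n+1} = y_n + h * (I - h J)^{-1} f(t_n, y_n),
    with exactly one linear solve per fixed step."""
    y, t = np.atleast_1d(np.asarray(y0, dtype=float)), t0
    I = np.eye(len(y))
    while t < t1 - 1e-12:
        J = jac(t, y)
        y = y + h * np.linalg.solve(I - h * J, f(t, y))
        t += h
    return y

# stiff scalar test problem: y' = -50 (y - cos t); the solution relaxes onto ~cos t
f = lambda t, y: -50.0 * (y - np.cos(t))
jac = lambda t, y: np.array([[-50.0]])
y_end = lin_implicit_euler(f, jac, [0.0], 0.0, 2.0, h=0.1)

# explicit Euler with the same fixed step diverges, since |1 + h*lambda| = 4
ye, te = 0.0, 0.0
for _ in range(20):
    ye += 0.1 * (-50.0) * (ye - np.cos(te))
    te += 0.1

print(float(y_end[0]), abs(ye))
```

With h·λ = −5, explicit Euler is far outside its stability region, while the linearly implicit step remains stable and tracks the slow solution, which is exactly the property that makes Rosenbrock methods attractive under a fixed real-time step size.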

Point defects in piezoelectric materials – continuum mechanical modelling and numerical simulation
(2010)

The topic of this work is the continuum mechanical modelling of point defects in piezoelectric materials. Devices containing piezoelectric material, and especially ferroelectrics, require a high precision and are exposed to a high number of electrical and mechanical load cycles. As a result, the relevant material properties may decrease with increasing load cycles. This phenomenon is called electric fatigue. The transported ionic and electric charge carriers can interact with each other, as well as with structural elements (grain boundaries, inhomogeneities) or with material interfaces (domain walls). A reduced domain wall mobility also reduces the electromechanical coupling effect, which leads to the electric fatigue effect. The materials considered here are barium titanate and lead zirconate titanate (PZT), in which oxygen vacancies are the most mobile and most frequently appearing defect species. Intentionally introduced foreign atoms (dopants) can adjust the material properties according to their field of application by generating electric dipoles with the vacancies. Agglomerations of point defects can strongly influence the domain wall motion. The domain wall can be slowed down or even be stopped by the locally varying fields in the vicinity of the clusters. Accumulations of point defects can be detected at electrodes, pores, or in the bulk of fatigued samples. The present thesis focuses on the self-interaction behaviour of point defects in the bulk. A micromechanical continuum model is used to show the qualitative and quantitative interaction behaviour of defects in a static setup and during drift processes. The modelling neglects the ferroelectric switching mechanisms, but is applicable to every piezoelectric material. The underlying differential equations are solved by means of analytical (Green's functions) and numerical (finite differences with discrete Fourier transform) methods, depending on the boundary conditions.
The defects are introduced as localised eigenstrains, as electric charges, and as electric dipoles. The required defect parameters are obtained by comparisons with atomistic methods (lattice statics). There are no standardised procedures available for the parameter identification. In this thesis, the mechanical parameter is obtained by a comparison of the relaxation volumes of the atomic lattice and the continuum solution. Parameters for isotropic and anisotropic defect descriptions are identified. The strength of the electric defect is obtained by a comparison of the electric internal energies of atomistics and continuum. The appearing singularities are eliminated by taking only the energy difference of an infinite crystal and a periodic cell into account. Both identification processes are carried out for the cubic structure of barium titanate, which decouples the mechanical and the electrical problem. The defect interaction is analysed by means of configurational forces. The mechanical defect parameter generates a directional short-range attraction between defects. An electrical defect parameter produces the long-range Coulomb interaction, which predicts a repulsion of two like charges. Additionally, an interaction with defect dipoles is taken into account. It is shown that defect agglomeration is possible for any static defect configuration. Finally, defect drift is simulated using a thermodynamically motivated migration law based on configurational forces. In this context, the migration of point defects due to self-interaction and the influence of external fields are investigated.
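
A minimal periodic sketch of the discrete-Fourier-transform approach for the electric part of the problem (a spectral Laplacian on a 2D periodic cell; not the thesis's exact finite-difference scheme, and all grid parameters are illustrative): solve −∇²φ = ρ for a point charge by dividing by k² in Fourier space, with the k = 0 mode gauged to zero (zero-mean potential, i.e. a neutralizing background charge).

```python
import numpy as np

n, L = 64, 1.0
h = L / n
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2          # unit point charge on the periodic grid

k = 2 * np.pi * np.fft.fftfreq(n, d=h)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

rho_hat = np.fft.fft2(rho)
phi_hat = np.zeros_like(rho_hat)
np.divide(rho_hat, k2, out=phi_hat, where=k2 > 0)   # -lap(phi) = rho; k = 0 mode stays zero
phi = np.fft.ifft2(phi_hat).real
print(phi[n // 2, n // 2])                # the potential peaks at the defect site
```

Superposing shifted copies of this discrete Green's function is what makes the evaluation of many-defect interaction fields cheap on a periodic cell.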

This thesis deals with the solution of special problems arising in financial engineering and financial mathematics. The main focus lies on commodity indices. Chapter 1 addresses an issue important for financial engineering practice: developing well-suited models for certain assets (here: commodity indices). A descriptive analysis of the Dow Jones-UBS commodity index compared to the Standard & Poor's 500 stock index provides first insights into features of the corresponding distributions. Statistical tests for normality and mean reversion then help us set up a model for commodity indices. Additionally, chapter 1 encompasses a thorough introduction to commodity investment, the history of commodities trading and the most important derivatives, namely futures and European options on futures. Chapter 2 proposes a model for commodity indices and derives fair prices for the most important derivatives in the commodity markets. It is a Heston model supplemented with a stochastic convenience yield. The Heston model belongs to the class of stochastic volatility models and is currently widely used in stock markets. For the application to commodity markets, the stochastic convenience yield is included in the drift of the instantaneous spot return process. Motivated by the results of chapter 1, it seems reasonable to model the convenience yield by a mean-reverting Ornstein-Uhlenbeck process. Since trading desks only apply and consider models with closed-form solutions for options, I derive such formulas for commodity futures by solving the corresponding partial differential equation. Additionally, semi-closed-form formulas for European options on futures are determined. The Cauchy problem with respect to these options is more challenging than the first one, but a solution can still be provided.
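A mean-reverting Ornstein-Uhlenbeck convenience yield of the kind proposed here can be simulated exactly from its Gaussian transition density, which avoids discretisation bias. The parameter values below are illustrative, not calibrated to the Dow Jones-UBS index.

```python
import numpy as np

# Exact simulation of the OU process
#   d delta_t = kappa * (theta - delta_t) dt + sigma dW_t,
# a sketch of the mean-reverting convenience yield model.
def simulate_ou(delta0, kappa, theta, sigma, dt, n_steps, rng):
    """Sample an OU path using the exact one-step transition."""
    delta = np.empty(n_steps + 1)
    delta[0] = delta0
    decay = np.exp(-kappa * dt)                       # mean-reversion factor
    sd = sigma * np.sqrt((1 - decay**2) / (2 * kappa))  # exact step std dev
    for t in range(n_steps):
        delta[t + 1] = theta + (delta[t] - theta) * decay + sd * rng.standard_normal()
    return delta

rng = np.random.default_rng(0)
# Illustrative parameters: kappa, theta, sigma are placeholders.
path = simulate_ou(delta0=0.10, kappa=2.0, theta=0.02,
                   sigma=0.05, dt=1 / 252, n_steps=2520, rng=rng)
```

Over long horizons the path fluctuates around the long-run mean theta, which is the mean-reversion behaviour the statistical tests of chapter 1 are designed to detect.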
Unlike equities, which typically entitle the holder to a continuing stake in a corporation, commodity futures contracts normally specify a certain date for the delivery of the underlying physical commodity. In order to avoid the delivery process and maintain a futures position, nearby contracts must be sold and contracts that have not yet reached the delivery period must be purchased (so-called rolling). Optimal trading days for selling and buying futures are determined by applying statistical tests for stochastic dominance. Besides the optimization of the rolling procedure for commodity futures, chapter 3 is dedicated to the optimization of the weightings of the commodity futures that make up the index. To this end, I apply the Markowitz approach, i.e. mean-variance optimization. Mean-variance optimization penalizes upside and downside risk equally, whereas most investors do not mind upside risk. To overcome this, I consider in the next step other risk measures, namely Value-at-Risk and Conditional Value-at-Risk. The Conditional Value-at-Risk is generalized to discontinuous cumulative distribution functions of the loss. For continuous loss distributions, the Conditional Value-at-Risk at a given confidence level is defined as the expected loss exceeding the Value-at-Risk. Loss distributions associated with finite sampling or scenario modeling are, however, discontinuous. Various risk measures involving discontinuous loss distributions are introduced and compared. I then apply the theoretical results to the field of portfolio optimization with commodity indices. Furthermore, I uncover graphically the behavior of these risk measures by considering them as functions of the confidence level. Based on a special discrete loss distribution, the graphs demonstrate the different properties of these risk measures.
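For discontinuous (scenario) loss distributions, Conditional Value-at-Risk can be sketched via the Rockafellar-Uryasev representation, whose minimiser is attained at the Value-at-Risk. The function below assumes equally weighted scenarios and is an illustration of that standard construction, not the thesis's exact generalization.

```python
import numpy as np

def cvar(losses, alpha):
    """CVaR at confidence level alpha for an equally weighted
    scenario (discontinuous) loss distribution, via
      CVaR_alpha = min_c  c + E[(L - c)^+] / (1 - alpha),
    evaluated at the lower alpha-quantile (the VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha, method="inverted_cdf")  # lower quantile
    return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
```

For example, with four equally likely losses 1, 2, 3, 4 and alpha = 0.5, the worst half is {3, 4}, so the CVaR is 3.5.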
The goal of the first section of chapter 4 is to apply the mathematical concept of excursions to the creation of optimal highly automated or algorithmic trading strategies. The idea is to consider the gain of the strategy together with the excursion time it takes to realize that gain. In this section I derive the corresponding formulas for the Ornstein-Uhlenbeck process and show that they can be evaluated quite fast, since the only special function appearing in them is the so-called imaginary error function. This function is already implemented in many programs, such as Maple. My main contribution to this topic is the optimization of the trading strategy for Ornstein-Uhlenbeck processes via the Banach fixed-point theorem. The second section of chapter 4 deals with statistical arbitrage strategies, i.e. long-horizon trading opportunities that generate a riskless profit. The results of this section provide an investor with a tool to investigate empirically whether some strategies (for example momentum strategies) constitute statistical arbitrage opportunities or not.
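A Banach fixed-point optimization can be sketched generically: iterate a contraction mapping until the increments fall below a tolerance, with convergence guaranteed by the contraction property. The map used below (x → cos x) is a toy placeholder for the thesis's actual map; the imaginary error function itself is also available outside Maple, e.g. as scipy.special.erfi.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Banach fixed-point iteration: for a contraction f, the sequence
    x, f(x), f(f(x)), ... converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# Toy example: x -> cos(x) is a contraction near its unique fixed
# point (the Dottie number, approx. 0.739085).
root = fixed_point(math.cos, 1.0)
```

The same iteration scheme applies to any contraction, which is what makes the fixed-point argument attractive for optimizing the trading threshold numerically.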

We present some optimality results for robust Kalman filtering. To this end, we introduce the general setup of state space models, which will not be limited to a Euclidean or time-discrete framework. We pose the problem of state reconstruction and review the existing classical algorithms in this context. We then extend the ideal-model setup to allow for outliers, which in this context may be system-endogenous or -exogenous, inducing the somewhat conflicting goals of tracking and attenuation. In quite a general framework, we solve the corresponding minimax MSE-problems for both types of outliers separately, resulting in saddle-points consisting of an optimally-robust procedure and a corresponding least favorable outlier situation. Still insisting on recursivity, we obtain an operational solution, the rLS filter, and variants of it. Exactly robust-optimal filters would need knowledge of certain hard-to-compute conditional means in the ideal model; things would be much easier if these conditional means were linear. Hence, it is important to quantify the deviation of the exact conditional mean from linearity. We obtain a somewhat surprising characterization of linearity of the conditional expectation in this setting. Combining both optimal filter types (for the system-endogenous and -exogenous situations), we arrive at a delayed hybrid filter which is able to treat both types of outliers simultaneously. Keywords: robustness, Kalman filter, innovation outlier, additive outlier
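A schematic version of one robustified correction step, assuming the guiding idea is to clip the norm of the classical Kalman correction at a tuning radius b so that an additive outlier has bounded influence; the model matrices and the radius below are illustrative, and the exact rLS construction is the subject of the thesis.

```python
import numpy as np

def rls_step(x_pred, P_pred, y, H, R, b):
    """One Huber-type robustified Kalman correction step (sketch):
    compute the classical correction, then clip its norm at b."""
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # classical Kalman gain
    dx = K @ (y - H @ x_pred)                 # classical correction
    norm = np.linalg.norm(dx)
    if norm > b:                              # bound outlier influence
        dx = dx * (b / norm)
    x_filt = x_pred + dx
    P_filt = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_filt, P_filt
```

For small innovations the step coincides with the classical Kalman update; only large (outlier-suspect) innovations are damped.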

In this work, we develop a framework for analyzing an executive’s own-company stockholding and work effort preferences. The executive, characterized by risk aversion and work effectiveness parameters, invests his personal wealth without constraint in the financial market, including the stock of his own company, whose value he can directly influence with work effort. The executive’s utility-maximizing personal investment and work effort strategy is derived in closed form for logarithmic and power utility, and for exponential utility in the case of zero interest rates. Additionally, a utility indifference rationale is applied to determine his fair compensation. Being unconstrained by performance contracting, the executive’s work effort strategy establishes a base case for theoretical or empirical assessment of the benefits or otherwise of constraining executives with performance contracting. Further, we consider a highly-qualified individual with respect to her choice between two distinct career paths. She can choose between a mid-level management position in a large company and an executive position within a smaller listed company with the possibility to directly affect the company’s share price. She invests in the financial market, including the share of the smaller listed company. The utility-maximizing strategy from consumption, investment, and work effort is derived in closed form for logarithmic and power utility. Conditions for the individual to pursue her career with the smaller listed company are obtained. The participation constraint is formulated in terms of the salary differential between the two positions. The smaller listed company can offer less salary: the salary shortfall is offset by the possibility to benefit from her work effort by acquiring own-company shares. This gives insight into aspects of optimal contract design. Our framework is applicable to the pharmaceutical and financial industries, as well as the IT sector.

The scope of this paper is to enhance the model of the own-company stockholder (given in Desmettre, Gould and Szimayer (2010)), who can voluntarily performance-link his personal wealth to his management success by acquiring stocks of his own company, whose value he can directly influence by spending work effort. The executive is thereby characterized by a risk aversion parameter and the two work effectiveness parameters, inverse work productivity and disutility stress. We extend the model to a constant absolute risk aversion framework using an exponential utility/disutility set-up. A closed-form solution is given for the optimal work effort an executive will apply, and we derive the executive's optimal investment strategies. Furthermore, we determine an up-front fair cash compensation by applying a utility indifference rationale. Our study shows that the results previously obtained are, to a large extent, robust with respect to the choice of the utility/disutility set-up.

In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states or regimes. The investor’s portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is ’large’ in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor’s strategy, which could in turn influence the asset prices. Since the investor’s strategy only impacts the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model.
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions could change. The fact that the investor’s influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities: constant, step and affine intensity functions. In each case we derive the large investor’s optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, which we show admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Further, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB-system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For three kinds of intensity functions (constant, step and affine) we give the optimal solution and verify that the corresponding ODE-system admits a unique global solution. In the case of strategy-dependent intensity functions we distinguish three particular kinds of this dependency: portfolio-dependency, consumption-dependency and combined portfolio- and consumption-dependency. The corresponding results for an investor having logarithmic utility are shown in Section 5.
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor’s optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also dwell on the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.
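Coupled ODE systems of the kind mentioned above are typically integrated numerically. Below is a sketch with a classical Runge-Kutta (RK4) scheme for two hypothetical value-function coefficients coupled through constant regime shift intensities q12, q21; the right-hand side is an illustrative linear coupling, not the thesis's actual system.

```python
import numpy as np

# Hypothetical constant regime shift intensities (illustrative).
q12, q21 = 0.5, 0.3

def rhs(t, f):
    """Toy linear coupling of the two regime-dependent coefficients."""
    f1, f2 = f
    return np.array([q12 * (f2 - f1), q21 * (f1 - f2)])

def rk4(rhs, f0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration on [t0, t1]."""
    h = (t1 - t0) / n
    t, f = t0, np.asarray(f0, dtype=float)
    for _ in range(n):
        k1 = rhs(t, f)
        k2 = rhs(t + h / 2, f + h / 2 * k1)
        k3 = rhs(t + h / 2, f + h / 2 * k2)
        k4 = rhs(t + h, f + h * k3)
        f = f + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return f

f_T = rk4(rhs, [1.0, 0.0], 0.0, 10.0, 1000)
```

For this toy coupling the two components relax toward a common equilibrium at rate q12 + q21, mirroring how the two regimes' coefficients interact through the transition intensities.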