In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the considered investor is small and thus does not influence the asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states, or regimes. The investor's portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is 'large' in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, then the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which affects market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor's strategy, which could ultimately influence the asset prices. Since the investor's strategy affects only the regime shift intensities, the asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model. Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions could change. The fact that the investor's influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three different kinds of strategy-dependent regime shift intensities: constant, step, and affine intensity functions. In each case we derive the large investor's optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, which we show to admit a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Furthermore, the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities, is discussed. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB system, we present a verification theorem that is necessary to verify the optimality of the solutions to the investment problem that we derive later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4.
For three kinds of intensity functions (constant, step, and affine) we give the optimal solution and verify that the corresponding ODE system admits a unique global solution. In the case of the strategy-dependent intensity functions we distinguish three particular kinds of this dependency: portfolio dependency, consumption dependency, and combined portfolio and consumption dependency. The corresponding results for an investor with logarithmic utility are shown in Section 5. In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor's optimal strategy when only the position in one of those two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also examine the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.
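As a rough sketch of the type of control problem described above (in generic notation of our own, not necessarily that of the thesis), the large investor solves

\[
V(t,x,i) \;=\; \sup_{(\pi,c)} \mathbb{E}\Bigl[\int_t^T U_1(c_s)\,ds \;+\; U_2\bigl(X_T^{\pi,c}\bigr) \,\Big|\, X_t = x,\; Z_t = i\Bigr],
\]

where \(X^{\pi,c}\) is the wealth process under portfolio proportions \(\pi\) and consumption rate \(c\), and \(Z_t \in \{1,2\}\) is the market regime, whose transition intensities \(\lambda_{ij}(\pi_t,c_t)\) depend on the investor's strategy. CRRA utility refers to the power case \(U(x)=x^{\gamma}/\gamma\) (\(\gamma<1\), \(\gamma\neq 0\)) treated in Section 4 and the logarithmic case \(U(x)=\log x\) of Section 5.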
In robotics, information is often regarded as a means to an end. The question of how to structure information and how to bridge the semantic gap between different levels of abstraction in a uniform way is still widely regarded as a technical issue. Ignoring these challenges appears to lead robotics into a stasis similar to that experienced by the software industry in the late 1960s. From the beginning of the software crisis until today, numerous methods, techniques, and tools for managing the increasing complexity of software systems have evolved. The attempt to transfer several of these ideas to applications in robotics yielded various control architectures, frameworks, and process models. These attempts mainly provide modularisation schemata which suggest how to decompose a complex system into less complex subsystems. The schematisation of representation and information flow, however, is mostly ignored. In this work, a set of design schemata is proposed which is embedded into an action/perception-oriented design methodology to promote thorough abstractions between distinct levels of control. Action-oriented design decomposes control systems top-down, and sensor data is extracted from the environment as required. This comes with the problem that information is often condensed prematurely. In this way, sensor processing depends on the control system design, resulting in a monolithic system structure with limited options for reusability. In contrast, perception-oriented design constructs control systems bottom-up, starting with the extraction of environment information from sensor data. The extracted entities are placed into structures which evolve with the development of the sensor processing algorithms. In consequence, the control system is strictly dependent on the sensor processing algorithms, which again results in a monolithic system. In their particular domains, both design approaches have great advantages but fail to create inherently modular systems. The design approach proposed in this work combines the strengths of action orientation and perception orientation into one coherent methodology without inheriting their weaknesses. More precisely, design schemata for representation, translation, and fusion of environmental information are developed which establish thorough abstraction mechanisms between components. The explicit introduction of abstractions particularly supports extensibility and scalability of robot control systems by design.
We tackle the problem of obtaining statistics on the content and structure of XML documents by using summaries which provide cardinality estimates for XML query expressions. Our focus is a data-centric processing scenario in which we use a query engine to process such query expressions. We provide three new summary structures called LESS (Leaf-Element-in-Subtree), LWES (Level-Wide Element Summarization), and EXsum (Element-centered XML Summarization), which are designed to underpin the estimation process in an XML query optimizer. Each of these collects structural statistical information about XML documents, and the latter (EXsum) gathers, in addition, statistics on document content. Estimation procedures and/or heuristics for specific types of query expressions are developed for each proposed approach. We have incorporated and implemented our proposals in XTC, a native XML database management system (XDBMS). With this common implementation base, we present an empirical and comparative study in which our proposals are stressed against others published in the literature, which are also incorporated into XTC. Furthermore, an analysis is made based on criteria pertinent to query optimization.
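To illustrate the general idea of a structural summary (a toy sketch only; the names and layout here are our own and are not the actual LESS/LWES/EXsum structures), one can count element tags per document level and use the counts for crude path-cardinality estimates:

```python
from xml.etree import ElementTree as ET
from collections import defaultdict

def level_wide_summary(xml_text):
    """Count occurrences of each element tag per tree level (root = level 0)."""
    counts = defaultdict(int)  # (level, tag) -> number of elements
    def walk(node, level):
        counts[(level, node.tag)] += 1
        for child in node:
            walk(child, level + 1)
    walk(ET.fromstring(xml_text), 0)
    return counts

doc = "<lib><book><title/><author/></book><book><title/></book></lib>"
summary = level_wide_summary(doc)
# A crude cardinality estimate for the path /lib/book/title:
print(summary[(2, "title")])  # -> 2
```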
A prime motivation for using XML to directly represent pieces of information is the ability to support ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets, because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and, thus, is a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees, considering XML structure either in isolation or combined with textual information; mechanisms to support the selection of relevant information from XML trees and the organization of this information into a suitable format for similarity calculation; and efficient algorithms for the large-scale identification of similar, set-represented objects. Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
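As a minimal illustration of the last contribution (a naive all-pairs sketch, not the thesis's optimized algorithms, which would prune candidates with techniques such as prefix filtering), a set-similarity join reports all pairs of set-represented objects whose Jaccard similarity reaches a threshold:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def similarity_join(sets, threshold):
    """Naive all-pairs set-similarity join returning index pairs."""
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(sets), 2)
            if jaccard(a, b) >= threshold]

# Token sets derived, e.g., from XML fragments describing the same person:
records = [{"john", "smith", "ny"}, {"jon", "smith", "ny"}, {"mary", "jones"}]
print(similarity_join(records, 0.5))  # -> [(0, 1)]
```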
The purpose of exploration in the oil industry is to 'discover' an oil-containing geological formation from exploration data. In the context of this PhD project, this oil-containing geological formation plays the role of a geometrical object, which may have any shape. The exploration data may be viewed as a 'cloud of points', that is, a finite set of points related to the geological formation surveyed in the exploration experiment. Extensions of topological methodologies, such as homology, to point clouds are helpful in studying them qualitatively and are capable of resolving the underlying structure of a data set. Estimation of the topological invariants of the data space is a good basis for asserting the global features of the simplicial model of the data. For instance, the basic statistical idea of clustering corresponds to the dimension of the zeroth homology group of the data. Statistics of Betti numbers can provide further connectivity information. In this work, a method for the topological feature analysis of exploration data is presented, based on so-called persistent homology. Loosely speaking, this is the homology of a growing space, which captures the lifetimes of topological attributes in a multiset of intervals called a barcode. Constructions from algebraic topology make it possible to transform the data, to distill it into persistent features, and then to understand how it is organized on a large scale, or at least to obtain low-dimensional information which can point to areas of interest. Within the scope of this work, the algorithm for computing the persistent Betti numbers via barcodes is implemented in the computer algebra system "Singular".
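Purely to illustrate the barcode idea (using the open-source gudhi library as a stand-in here, not the Singular implementation developed in the thesis):

```python
import numpy as np
import gudhi  # pip install gudhi

# Sample points roughly on a circle; one long-lived 1-dimensional hole is expected.
theta = np.random.uniform(0, 2 * np.pi, 60)
points = np.column_stack([np.cos(theta), np.sin(theta)])

# Build a Vietoris-Rips filtration and compute its persistence intervals.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
# Each entry is (dimension, (birth, death)): one bar of the barcode.
for dim, (birth, death) in st.persistence():
    print(dim, birth, death)
```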
This thesis deals with the solution of special problems arising in financial engineering or financial mathematics. The main focus lies on commodity indices. Chapter 1 addresses an issue that is important in financial engineering practice: developing well-suited models for certain assets (here: commodity indices). A descriptive analysis of the Dow Jones-UBS commodity index compared to the Standard & Poor's 500 stock index provides first insights into some features of the corresponding distributions. Statistical tests of normality and mean reversion then help us in setting up a model for commodity indices. Additionally, chapter 1 encompasses a thorough introduction to commodity investment, the history of commodities trading, and the most important derivatives, namely futures and European options on futures. Chapter 2 proposes a model for commodity indices and derives fair prices for the most important derivatives in the commodity markets. It is a Heston model supplemented with a stochastic convenience yield. The Heston model belongs to the class of stochastic volatility models and is currently widely used in stock markets. For the application in the commodity markets, the stochastic convenience yield is included in the drift of the instantaneous spot return process. Motivated by the results of chapter 1, it seems reasonable to model the convenience yield by a mean-reverting Ornstein-Uhlenbeck process. Since trading desks only apply and consider models with closed-form solutions for options, I derive such formulas for commodity futures by solving the corresponding partial differential equation. Additionally, semi-closed-form formulas for European options on futures are determined. The Cauchy problem with respect to these options is more challenging than the first one; a solution can nevertheless be provided. Unlike equities, which typically entitle the holder to a continuing stake in a corporation, commodity futures contracts normally specify a certain date for the delivery of the underlying physical commodity. In order to avoid the delivery process and maintain a futures position, nearby contracts must be sold and contracts that have not yet reached the delivery period must be purchased (so-called rolling). Optimal trading days for selling and buying futures are determined by applying statistical tests for stochastic dominance. Besides the optimization of the rolling procedure for commodity futures, chapter 3 is dedicated to the optimization of the weightings of the commodity futures that make up the index. To this end, I apply the Markowitz approach, or mean-variance optimization. Mean-variance optimization penalizes upside and downside risk equally, whereas most investors do not mind upside risk. To overcome this, I consider in the next step other risk measures, namely Value-at-Risk and Conditional Value-at-Risk. The Conditional Value-at-Risk is generalized to discontinuous cumulative distribution functions of the loss. For continuous loss distributions, the Conditional Value-at-Risk at a given confidence level is defined as the expected loss exceeding the Value-at-Risk. Loss distributions associated with finite sampling or scenario modeling are, however, discontinuous. Various risk measures involving discontinuous loss distributions are introduced and compared. I then apply the theoretical results to the field of portfolio optimization with commodity indices. Furthermore, I uncover graphically the behavior of these risk measures.
For this purpose, I consider the risk measures as a function of the confidence level. Based on a special discrete loss distribution, the graphs demonstrate the different properties of these risk measures. The goal of the first section of chapter 4 is to apply the mathematical concept of excursions to the creation of optimal highly automated or algorithmic trading strategies. The idea is to consider the gain of the strategy and the excursion time it takes to realize the gain. In this section I calculate the corresponding formulas for the Ornstein-Uhlenbeck process. I show that these formulas can be evaluated quite fast, since the only special function appearing in them is the so-called imaginary error function. This function is already implemented in many programs, such as Maple. My main contribution on this topic is the optimization of the trading strategy for Ornstein-Uhlenbeck processes via the Banach fixed-point theorem. The second section of chapter 4 deals with statistical arbitrage strategies, long-horizon trading opportunities that generate a riskless profit. The results of this section provide an investor with a tool to investigate empirically whether some strategies (for example momentum strategies) constitute statistical arbitrage opportunities or not.
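For reference, the continuous-distribution definition mentioned above reads (in standard notation, not taken from the thesis):

\[
\mathrm{CVaR}_\alpha(L) \;=\; \mathbb{E}\bigl[L \mid L \ge \mathrm{VaR}_\alpha(L)\bigr],
\qquad
\mathrm{VaR}_\alpha(L) \;=\; \inf\{\ell \in \mathbb{R} : \mathbb{P}(L \le \ell) \ge \alpha\},
\]

so that \(\mathrm{CVaR}_\alpha\) averages the losses in the worst \(1-\alpha\) tail. For discontinuous loss distributions this simple conditional expectation must be generalized, which is precisely the issue treated in chapter 3.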
Point defects in piezoelectric materials – continuum mechanical modelling and numerical simulation
(2010)
The topic of this work is the continuum mechanical modelling of point defects in piezoelectric materials. Devices containing piezoelectric material, and especially ferroelectrics, require high precision and are exposed to a high number of electrical and mechanical load cycles. As a result, the relevant material properties may degrade with increasing load cycles. This phenomenon is called electric fatigue. The transported ionic and electronic charge carriers can interact with each other, as well as with structural elements (grain boundaries, inhomogeneities) or with material interfaces (domain walls). A reduced domain wall mobility also reduces the electromechanical coupling effect, which leads to the electric fatigue effect. The materials considered here are barium titanate and lead zirconate titanate (PZT), in which oxygen vacancies are the most mobile and most frequently appearing defect species. Intentionally introduced foreign atoms (dopants) can adjust the material properties according to their field of application by forming electric dipoles with the vacancies. Agglomerations of point defects can strongly influence the domain wall motion. The domain wall can be slowed down or even stopped by the locally varying fields in the vicinity of the clusters. Accumulations of point defects can be detected at electrodes, at pores, or in the bulk of fatigued samples. The present thesis focuses on the self-interaction behaviour of point defects in the bulk. A micromechanical continuum model is used to show the qualitative and quantitative interaction behaviour of defects in a static setup and during drift processes. The modelling neglects the ferroelectric switching mechanisms, but is applicable to every piezoelectric material. The underlying differential equations are solved by means of analytical (Green's functions) and numerical (finite differences with discrete Fourier transform) methods, depending on the boundary conditions. The defects are introduced as localised eigenstrains, as electric charges, and as electric dipoles. The required defect parameters are obtained by comparisons with atomistic methods (lattice statics). There are no standardised procedures available for the parameter identification. In this thesis, the mechanical parameter is obtained by a comparison of the relaxation volumes of the atomic lattice and the continuum solution. Parameters for isotropic and anisotropic defect descriptions are identified. The strength of the electric defect is obtained by a comparison of the electric internal energies of atomistics and continuum. The appearing singularities are eliminated by taking into account only the energy difference between an infinite crystal and a periodic cell. Both identification processes are carried out for the cubic structure of barium titanate, which decouples the mechanical and the electrical problem. The defect interaction is analysed by means of configurational forces. The mechanical defect parameter generates a directional short-range attraction between defects. An electrical defect parameter produces the long-range Coulomb interaction, which predicts a repulsion of two like charges. Additionally, an interaction with defect dipoles is taken into account. It is shown that defect agglomeration is possible for any static defect configuration. Finally, defect drift is simulated using a thermodynamically motivated migration law based on configurational forces. In this context, the migration of point defects due to self-interaction and the influence of external fields are investigated.
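A thermodynamically motivated migration law of the kind described typically relates the defect velocity linearly to the configurational driving force. As a generic sketch in our own notation (assuming isotropic mobility, not the specific law of the thesis):

\[
\mathbf{v} \;=\; M\,\mathbf{F}^{\mathrm{conf}},
\]

where \(\mathbf{F}^{\mathrm{conf}}\) is the configurational force acting on the defect and \(M \ge 0\) a temperature-dependent mobility; the defect position is then updated by integrating \(\dot{\mathbf{x}} = \mathbf{v}\) in time.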
Mrázek et al. [25] proposed a unified approach to curve estimation which combines localization and regularization. Franke et al. [10] used that approach to discuss the case of the regularized local least-squares (RLLS) estimate. In this thesis we use the unified approach of Mrázek et al. to study some asymptotic properties of local smoothers with regularization. In particular, we discuss the Huber M-estimate and its limiting cases towards the L2 and the L1 estimates. For the regularization part, we use quadratic regularization. We then define a more general class of regularization functions. Finally, we carry out a Monte Carlo simulation study to compare the different types of estimates.
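For orientation, the Huber loss underlying the M-estimate has the standard form

\[
\rho_\delta(u) \;=\;
\begin{cases}
\tfrac{1}{2}u^2, & |u| \le \delta,\\[2pt]
\delta\bigl(|u| - \tfrac{\delta}{2}\bigr), & |u| > \delta,
\end{cases}
\]

which reproduces the L2 loss \(\tfrac{1}{2}u^2\) as \(\delta \to \infty\) and, after rescaling by \(1/\delta\), the L1 loss \(|u|\) as \(\delta \to 0\); this is the sense in which the L2 and L1 estimates arise as limiting cases.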
Model-based fault diagnosis and fault-tolerant control for a nonlinear electro-hydraulic system
(2010)
The work presented in this thesis discusses model-based fault diagnosis and fault-tolerant control with application to a nonlinear electro-hydraulic system. High-performance control with guaranteed safety and reliability for electro-hydraulic systems is a challenging task due to the high nonlinearity and the system uncertainties. This thesis develops a diagnosis-integrated fault-tolerant control (FTC) strategy for the electro-hydraulic system. In the fault-free case the nominal controller is in operation to achieve the best performance. If a fault occurs, the controller is automatically reconfigured based on the fault information provided by the diagnosis system. Fault diagnosis and the reconfigurable controller are the key parts of the proposed methodology. Both system and sensor faults are studied in the thesis. Fault diagnosis consists of fault detection and isolation (FDI). Model-based residual generation is realized by calculating the redundant information from the system model and the available signals. In this thesis a differential-geometric approach is employed, which gives a general formulation of the FDI problem and is among the most compact and transparent of the various model-based approaches. The principle of residual construction with the differential-geometric method is to find an unobservable distribution, which indicates the existence of a system transformation with which the unknown system disturbance can be decoupled. With the observability codistribution algorithm, the local weak observability of the transformed system is ensured. A fault detection observer for the transformed system can then be constructed to generate the residual. This method cannot isolate sensor faults; therefore, a special decision-making logic (DML) is designed, based on the individual signal analysis of the residuals, to isolate the faults. The reconfigurable controller is designed with the backstepping technique. The backstepping method is a recursive Lyapunov-based approach and can deal with nonlinear systems. Some system variables are considered as 'virtual controls' during the design procedure. The feedback control laws and the associated Lyapunov function can then be constructed by following a step-by-step routine. For the electro-hydraulic system, an adaptive backstepping controller is employed to compensate for the impact of the unknown external load in the fault-free case. As soon as a fault is identified, the controller can be reconfigured according to the new model of the faulty system. A system fault is modeled as an uncertainty of the system and can be tolerated by parameter adaptation. A sensor fault acts on the system via the controller; it can be modeled as a parameter uncertainty of the controller. All parameters coupled with the faulty measurement are replaced by their approximations. After the reconfiguration, the pre-specified control performance can be recovered. The FDI-integrated FTC based on the backstepping technique has been implemented successfully on the electro-hydraulic testbed. On-line robust FDI and controller reconfiguration can be achieved; the tracking performance of the controlled system is guaranteed and the considered faults can be tolerated. However, the theoretical robustness analysis for the time delay caused by the fault diagnosis remains an open problem.
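To sketch the recursive idea for readers unfamiliar with backstepping (a textbook two-step example, not the thesis's electro-hydraulic design): for the integrator chain \(\dot{x}_1 = x_2\), \(\dot{x}_2 = u\), one first treats \(x_2\) as a virtual control and picks \(\alpha(x_1) = -k_1 x_1\). With the error \(z_2 = x_2 - \alpha(x_1)\) and the Lyapunov function \(V = \tfrac{1}{2}x_1^2 + \tfrac{1}{2}z_2^2\), the control law

\[
u \;=\; \dot{\alpha} - x_1 - k_2 z_2 \;=\; -k_1 x_2 - x_1 - k_2 z_2, \qquad k_1, k_2 > 0,
\]

yields \(\dot{V} = -k_1 x_1^2 - k_2 z_2^2 \le 0\). The same step-by-step construction extends to the nonlinear hydraulic dynamics, with parameter adaptation added to handle the unknown load and the modeled faults.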
The aim of this thesis was to link Computational Fluid Dynamics (CFD) and Population Balance Modelling (PBM) to obtain a combined model for the prediction of counter-current liquid-liquid extraction columns. Parts of the doctoral thesis project were done in close cooperation with the Fraunhofer ITWM. Their in-house CFD code, the Finite Pointset Method (FPM), was further developed for two-phase simulations and used for the CFD-PBM coupling. The coupling and all simulations were also carried out in parallel in the commercial CFD code Fluent. For the solution methods of the PBM there was a close cooperation with Prof. Attarakih from the Al-Balqa Applied University in Amman, Jordan, who developed a new adaptive method, the Sectional Quadrature Method of Moments (SQMOM). At the beginning of the project, there was a lack of two-phase liquid-liquid CFD simulations and their experimental validation in the literature. Therefore, stand-alone CFD simulations without PBM were carried out both in FPM and in Fluent to test the predictivity of CFD for stirred liquid-liquid extraction columns. The simulations were validated by Particle Image Velocimetry (PIV) measurements. The two-phase PIV measurements were possible when using an iso-optical system, in which the refractive indices of both liquid phases are identical. These investigations were done in segments of two Rotating Disc Contactors with 150 mm and 450 mm diameter to validate CFD at lab and at industrial scale. CFD results for the aqueous phase velocities, hold-up, droplet rising velocities, and turbulent energy dissipation were compared to experimental data. The results show that CFD can predict most phenomena, and there was overall good agreement. In the next steps, different solution methods for the PBM, e.g. the SQMOM and the Quadrature Method of Moments (QMOM), were implemented, varied, and tested in Fluent and FPM in a two-fluid model. In addition, different closures for coalescence and breakage were implemented to predict drop size distributions and Sauter mean diameters in the RDC DN150 column. These results show that a prediction of the droplet size distribution is possible, even when no adjustable parameters are used. A combined multi-fluid CFD-PBM model was developed by means of the SQMOM to overcome the drawbacks of the two-fluid approach. The benefits of the multi-fluid approach could be shown, but so could the high computational load. Therefore, finally, the One Primary One Secondary Particle Method (OPOSPM), a very simple and efficient special case of the SQMOM, was introduced into the CFD code to simulate a full pilot-plant column of the RDC DN150. The OPOSPM offers the possibility of a one-equation model for the solution of the PBM in CFD. The predicted results for the mean droplet diameter and the dispersed-phase hold-up agree well with literature data. The results also show that the new CFD-PBM model is very efficient from a computational point of view (about two times faster than the QMOM and five times faster than the method of classes). The overall results give rise to the expectation that the coupled CFD-PBM model will lead to a better, faster, and more cost-efficient layout of counter-current extraction columns in the future.
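For orientation, a population balance for the droplet number density \(n(v,\mathbf{x},t)\) (with internal coordinate \(v\), e.g. droplet volume) has the generic schematic form (our notation, not the specific formulation of the thesis):

\[
\frac{\partial n}{\partial t} + \nabla_{\mathbf{x}} \cdot (\mathbf{u}\, n)
\;=\; B_{\mathrm{break}} - D_{\mathrm{break}} + B_{\mathrm{coal}} - D_{\mathrm{coal}},
\]

where the right-hand side collects birth and death terms due to droplet breakage and coalescence. Moment methods such as the QMOM, SQMOM, and OPOSPM solve transport equations for a small number of (sectional) moments of \(n\) instead of the full density, which is what makes the CFD coupling computationally tractable.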