In the classical Merton investment problem of maximizing the expected utility from terminal wealth and intermediate consumption, stock prices are independent of the investor who is optimizing his investment strategy. This is reasonable as long as the investor under consideration is small and thus does not influence asset prices. However, for an investor whose actions may affect the financial market, the framework of the classical investment problem turns out to be inappropriate. In this thesis we provide a new approach to the field of large investor models. We study the optimal investment problem of a large investor in a jump-diffusion market which is in one of two states, or regimes. The investor's portfolio proportions as well as his consumption rate affect the intensity of transitions between the different regimes. Thus the investor is 'large' in the sense that his investment decisions are interpreted by the market as signals: if, for instance, the large investor holds 25% of his wealth in a certain asset, the market may regard this as evidence that the corresponding asset is priced incorrectly, and a regime shift becomes likely. More specifically, the large investor as modeled here may be the manager of a big mutual fund, a big insurance company or a sovereign wealth fund, or the executive of a company whose stocks are in his own portfolio. Typically, such investors have to disclose their portfolio allocations, which impacts market prices. But even if a large investor does not disclose his portfolio composition, as is the case for several hedge funds, the other market participants may speculate about the investor's strategy, which could ultimately influence asset prices. Since the investor's strategy only affects the regime shift intensities, asset prices do not necessarily react instantaneously. Our model is a generalization of the two-state version of the Bäuerle-Rieder model.
Hence, like the Bäuerle-Rieder model, it is suitable for long investment periods during which market conditions may change. The fact that the investor's influence enters the intensities of the transitions between the two states enables us to solve the investment problem of maximizing the expected utility from terminal wealth and intermediate consumption explicitly. We present the optimal investment strategy for a large investor with CRRA utility for three kinds of strategy-dependent regime shift intensities: constant, step and affine intensity functions. In each case we derive the large investor's optimal strategy in explicit form, dependent only on the solution of a system of coupled ODEs, which we show admits a unique global solution. The thesis is organized as follows. In Section 2 we review the classical Merton investment problem of a small investor who does not influence the market. Further, we discuss the Bäuerle-Rieder investment problem, in which the market states follow a Markov chain with constant transition intensities. Section 3 introduces the aforementioned investment problem of a large investor. Besides the mathematical framework and the HJB system, we present a verification theorem that is needed to verify the optimality of the solutions to the investment problem derived later on. The explicit derivation of the optimal investment strategy for a large investor with power utility is given in Section 4. For three kinds of intensity functions (constant, step and affine) we give the optimal solution and verify that the corresponding ODE system admits a unique global solution. For the strategy-dependent intensity functions we distinguish three particular kinds of dependency: portfolio dependency, consumption dependency, and combined portfolio and consumption dependency. The corresponding results for an investor with logarithmic utility are shown in Section 5.
In the subsequent Section 6 we consider the special case of a market consisting of only two correlated stocks besides the money market account. We analyze the investor's optimal strategy when only the position in one of the two assets affects the market state, whereas the position in the other asset is irrelevant for the regime switches. Various comparisons of the derived investment problems are presented in Section 7. Besides comparing the particular problems with each other, we also examine the sensitivity of the solution with respect to the parameters of the intensity functions. Finally, we consider the loss the large investor would face if he neglected his influence on the market. Section 8 concludes the thesis.
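The classical small-investor benchmark reviewed in Section 2 admits a well-known closed form for CRRA utility. A minimal sketch with hypothetical market parameters (single risky asset, no jumps or regime switching, so this is the baseline the thesis generalizes, not its large-investor solution):

```python
def merton_fraction(mu, r, sigma, gamma):
    """Classical Merton proportion of wealth held in the risky asset
    for CRRA (power) utility with relative risk aversion gamma:
    pi* = (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Hypothetical parameters: 8% drift, 2% risk-free rate,
# 20% volatility, risk aversion 2.
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=2.0)
print(pi_star)  # 0.75: hold 75% of wealth in the risky asset
```

Note that the optimal fraction is constant in time and wealth, which is exactly the structure the regime-switching and large-investor extensions perturb.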
In robotics, information is often regarded as a means to an end. The question of how to structure information and how to bridge the semantic gap between different levels of abstraction in a uniform way is still widely regarded as a technical issue. Ignoring these challenges appears to lead robotics into a stasis similar to the one experienced by the software industry in the late 1960s. From the beginning of the software crisis until today, numerous methods, techniques, and tools for managing the increasing complexity of software systems have evolved. Attempts to transfer several of these ideas to applications in robotics yielded various control architectures, frameworks, and process models. These attempts mainly provide modularisation schemata which suggest how to decompose a complex system into less complex subsystems. The schematisation of representation and information flow, however, is mostly ignored. In this work, a set of design schemata is proposed which is embedded into an action/perception-oriented design methodology to promote thorough abstractions between distinct levels of control. Action-oriented design decomposes control systems top-down, and sensor data is extracted from the environment as required. This comes with the problem that information is often condensed prematurely. Sensor processing thus depends on the control system design, resulting in a monolithic system structure with limited options for reuse. In contrast, perception-oriented design constructs control systems bottom-up, starting with the extraction of environment information from sensor data. The extracted entities are placed into structures which evolve with the development of the sensor processing algorithms. In consequence, the control system is strictly dependent on the sensor processing algorithms, which again results in a monolithic system.
In their particular domain, both design approaches have great advantages but fail to create inherently modular systems. The design approach proposed in this work combines the strengths of action orientation and perception orientation into one coherent methodology without inheriting their weaknesses. More precisely, design schemata for representation, translation, and fusion of environmental information are developed which establish thorough abstraction mechanisms between components. The explicit introduction of abstractions particularly supports extensibility and scalability of robot control systems by design.
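A hypothetical sketch of one such translation schema between two levels of abstraction: raw range readings are lifted into an obstacle-level representation that higher control layers can consume without knowing the sensor details. All names here are illustrative, not taken from the thesis:

```python
# Illustrative translation schema between a sensor-near and a
# control-near representation; the control layer depends only on
# the Obstacle type, not on how ranges were measured.
from dataclasses import dataclass

@dataclass
class RangeReading:          # low-level, sensor-near representation
    angle_deg: float
    distance_m: float

@dataclass
class Obstacle:              # higher-level, control-near representation
    angle_deg: float
    distance_m: float

def translate(readings, max_range_m=2.0):
    """Translation schema: keep only readings close enough to matter
    and re-express them as obstacles for the control layer."""
    return [Obstacle(r.angle_deg, r.distance_m)
            for r in readings if r.distance_m <= max_range_m]

scan = [RangeReading(0, 0.8), RangeReading(45, 3.5), RangeReading(90, 1.2)]
obstacles = translate(scan)
print(len(obstacles))  # 2: the 3.5 m reading is beyond the horizon
```

The design point is that either side of the translation can be replaced independently, which is the reusability the combined methodology aims at.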
Untersuchungen zum Zugtragverhalten hochduktiler Faserbetone mit zusätzlicher Textilbewehrung
(2010)
Highly ductile fibre-reinforced concrete exhibits strain-hardening tensile behaviour with pronounced multiple cracking and an ultimate strain of up to 5 %. Owing to the very small crack widths of less than 0.1 mm under service conditions, highly ductile fibre-reinforced concretes can be regarded as quasi-impermeable to water. Based on these material properties, the idea arose to bridge joints, e.g. between precast concrete elements in building construction, with a cover made of highly ductile fibre-reinforced concrete. In this way, movements of the precast elements, e.g. due to temperature changes, could be accommodated without wide cracks forming on the upper side of the cover. However, previous investigations [Mechtcherine 2007/1] showed that the long-term behaviour of the highly ductile PVA fibre-reinforced concrete used in the present work under sustained load is insufficient. To improve the load-bearing behaviour under sustained load, an additional light textile reinforcement was therefore used. The present work investigates the tensile load-bearing behaviour of a highly ductile short-fibre concrete with additional textile reinforcement under short-term and long-term tensile loading. Based on the test results, a simple empirical design concept for fibre-reinforced concrete with additional textile reinforcement is developed. Building on this design concept, follow-up work, which should address, among other things, the behaviour under cyclic loading, should enable the development of a joint cover between precast concrete floor slabs.
It has been observed that, to understand the biological function of certain RNA molecules, one has to study joint secondary structures of interacting pairs of RNA. In this thesis, a new approach for predicting the joint structure is proposed and implemented. For this, we introduce the class of m-dimensional context-free grammars --- an extension of stochastic context-free grammars to multiple dimensions --- and present an Earley-style semiring parser for this class. Additionally, we develop and thoroughly discuss an implementation variant of Earley parsers tailored to efficiently handle dense grammars, which include the grammars used for structure prediction. A recently proposed partitioning scheme for joint secondary structures is transferred into a two-dimensional context-free grammar, which in turn is used as a stochastic model for RNA-RNA interaction. This model is trained on actual data and then used to predict the most likely joint structures for given RNA molecules. While this technique has been widely used for secondary structure prediction of single molecules, RNA-RNA interaction has hardly been approached this way in the past. Although our parser has O(n^3 m^3) time complexity and O(n^2 m^2) space complexity for two RNA molecules of sizes n and m, it remains practically applicable for typical sizes if enough memory is available. Experiments show that our parser is much more efficient for this application than classical Earley parsers. Moreover, the predicted joint structures are comparable in quality to current energy minimization approaches.
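The dynamic-programming flavor of secondary structure prediction can be illustrated by the much simpler Nussinov base-pair maximization, a far cruder relative of the SCFG approach described above (shown only to convey the O(n^3) interval-DP structure, not the thesis's method):

```python
# Nussinov-style dynamic program: maximize the number of
# Watson-Crick/wobble base pairs in a single RNA sequence.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def max_base_pairs(seq):
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]       # dp[i][j]: best for seq[i..j]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]            # position i stays unpaired
            for k in range(i + 1, j + 1):  # or pair i with some k
                if (seq[i], seq[k]) in PAIRS:
                    inner = dp[i + 1][k - 1] if k > i + 1 else 0
                    outer = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + inner + outer)
            dp[i][j] = best
    return dp[0][n - 1]

print(max_base_pairs("GGGAAACCC"))  # 3: the Gs pair with the Cs
```

An SCFG parser replaces the max over pairings with a semiring operation over grammar rules, which is what raises the complexity to O(n^3 m^3) in the two-sequence case.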
Ever since Mark Weiser's vision of Ubiquitous Computing, the importance of context has increased in the computer science domain. Future Ambient Intelligent Environments will assist humans in their everyday activities, even without them being constantly aware of it. Objects in such environments will have small computers embedded into them which can predict human needs from the current context and adapt their behavior accordingly. This vision equally applies to future production environments. In modern factories, workers and technical staff members are confronted with a multitude of devices from various manufacturers, all with different user interfaces, interaction concepts and degrees of complexity. Production processes are highly dynamic; whole modules can be exchanged or restructured. Both factors force users to continuously change their mental model of the environment. This complicates their workflows and leads to avoidable user errors or slips in judgement. In an Ambient Intelligent Production Environment these challenges have to be addressed. The SmartMote is a universal control device for ambient intelligent production environments like the SmartFactoryKL. It copes with the problems mentioned above by integrating all user interfaces into a single, holistic, mobile device. Following an automated Model-Based User Interface Development (MBUID) process, it generates a fully functional graphical user interface from an abstract task-based description of the environment at run-time. This work introduces an approach to integrating context, namely the user's location, as an adaptation basis into the MBUID process. A Context Model is specified which stores location information in a formal and precise way. Connected sensors continuously update the model with new values. The model is complemented by a reasoning component which uses an extensible set of rules.
These rules are used to derive more abstract context information from basic sensor data and to provide this information to the MBUID process. The feasibility of the approach is shown using the example of Interaction Zones, which let developers describe different task models depending on the user's location. Using the context model to determine when a user enters or leaves a zone, the generator can adapt the graphical user interface accordingly. Context-awareness and the ability to adapt to the current context of use are key requirements of applications in ambient intelligent environments. The approach presented here provides a clear procedure and extension scheme for the consideration of additional context types. As context has a significant influence on the overall user experience, this results not only in better usefulness but also in improved usability of the SmartMote.
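A minimal hypothetical sketch of such a reasoning rule: raw position readings are lifted to the abstract fact "the user is in zone X", which a UI generator could then use to select a task model. All names are illustrative and not part of the actual SmartMote system:

```python
# Hypothetical rule-based context reasoning: sensed positions are
# mapped to named interaction zones; the derived zone name is the
# abstract context fact handed to the UI generation process.
class Zone:
    def __init__(self, name, x_min, x_max, y_min, y_max):
        self.name = name
        self.box = (x_min, x_max, y_min, y_max)

    def contains(self, x, y):
        x_min, x_max, y_min, y_max = self.box
        return x_min <= x <= x_max and y_min <= y <= y_max

def active_zone(zones, position):
    """Reasoning rule: the first zone containing the position wins."""
    x, y = position
    for zone in zones:
        if zone.contains(x, y):
            return zone.name
    return None  # user is outside every interaction zone

zones = [Zone("filling-module", 0, 2, 0, 2), Zone("mixer", 3, 5, 0, 2)]
print(active_zone(zones, (1.0, 1.5)))  # filling-module
print(active_zone(zones, (4.2, 0.5)))  # mixer
```

A zone change detected between two successive sensor updates would then trigger regeneration of the user interface for the newly active task model.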
Wireless sensor networks are the driving force behind many popular and interdisciplinary research areas, such as environmental monitoring, building automation, healthcare and assisted living applications. Requirements like compactness, high integration of sensors, flexibility, and power efficiency often conflict and cannot all be fulfilled at once by state-of-the-art node platforms. In this paper, we present and analyze AmICA: a flexible, compact, easy-to-program, and low-power node platform. Developed from scratch and comprising a node, a basic communication protocol, and a debugging toolkit, it supports user-friendly rapid application development. The general-purpose nature of AmICA was evaluated in two practical applications with diametrically opposed requirements. Our analysis shows that AmICA nodes are 67% smaller than BTnodes, have five times more sensors than Mica2Dot, and consume 72% less energy in sleep mode than the state-of-the-art TelosB mote.
In this work a three-dimensional contact elasticity problem for a thin fiber and a rigid foundation is studied. We describe the contact condition by a linear Robin condition (in the sense of penalized and linearized non-penetration and friction conditions).
The dimension of the problem is reduced by an asymptotic approach. Scaling the Robin parameters appropriately, we obtain a recurrent chain of Neumann-type boundary value problems which are considered only on the microscopic scale. The problem for the leading term is a homogeneous Neumann problem; hence the leading term depends only on the slow variable. This motivates the choice of a multiplicative ansatz in the asymptotic expansion.
The theoretical results are illustrated with numerical examples performed with a commercial finite-element software tool.
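A linearized Robin-type contact condition of the kind referred to above can be sketched in generic form (a sketch under assumed notation; the exact form and the scaling of the penalty parameters are part of the analysis in the work itself):

```latex
% Generic linearized Robin-type condition on the contact boundary
% \Gamma_c: penalized non-penetration in the normal direction and
% penalized friction in the tangential direction.
\sigma(u)\,n \;+\; \alpha_n\,(u\cdot n)\,n \;+\; \alpha_t\,u_t \;=\; 0
\qquad \text{on } \Gamma_c,
```

where sigma(u) denotes the stress tensor, n the outward unit normal, u_t the tangential displacement, and alpha_n, alpha_t the (assumed) penalty parameters whose scaling drives the asymptotic chain of Neumann problems.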
The topic was developed in two parts. The first, The European Treatise Writers, addressed the situation of the architecture books published in Europe during the 15th, 16th, and early 17th centuries, and also touched on the first efforts to theorize the art of building during Antiquity and the Middle Ages. This systematization yielded an overall view of the evolution of architectural theory, written in Spanish and drawn from a bibliography mostly published in foreign languages. The second part, The Repercussion in New Spain, sought to verify the influence of the theoretical texts imported from the metropolis on the sixteenth-century colony, in light of the circulation of architecture books, the training of the master builders (alarifes), and the construction dates of the selected buildings. The division into two sections followed from the statement of the topic, as it made a clear and orderly treatment easier, and it was justified by the achievement of the stated objective: for the first part, a systematization of the European architectural treatises of the Renaissance and Mannerism that could serve as a basis for future research on this subject; for the second, the verification of the application of these theoretical manuals in New Spain.
We tackle the problem of obtaining statistics on the content and structure of XML documents by using summaries which provide cardinality estimations for XML query expressions. Our focus is a data-centric processing scenario in which a query engine processes such query expressions. We provide three new summary structures called LESS (Leaf-Element-in-Subtree), LWES (Level-Wide Element Summarization), and EXsum (Element-centered XML Summarization), which are designed to underpin the estimation process in an XML query optimizer. Each of them collects structural statistics of XML documents, and the last (EXsum) additionally gathers statistics on document content. Estimation procedures and/or heuristics for specific types of query expressions are developed for each proposed approach. We have incorporated and implemented our proposals in XTC, a native XML database management system (XDBMS). On this common implementation base, we present an empirical and comparative study in which our proposals are stressed against others published in the literature, which are also incorporated into XTC. Furthermore, an analysis is made based on criteria pertinent to a query optimizer.
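The idea behind a level-wide element summary can be conveyed with a toy sketch (far simpler than the actual LWES structure, and not its implementation): count how often each element name occurs at each depth of a document, so an optimizer could estimate the cardinality of a simple child-axis path step.

```python
# Toy level-wide element summary: map (depth, tag) -> occurrence count.
# A lookup like summary[(1, "book")] then answers "how many /lib/book
# elements are there?" without touching the document again.
import xml.etree.ElementTree as ET
from collections import Counter

def level_wide_summary(xml_text):
    root = ET.fromstring(xml_text)
    summary = Counter()
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        summary[(depth, node.tag)] += 1
        stack.extend((child, depth + 1) for child in node)
    return summary

doc = "<lib><book><title/></book><book><title/><title/></book></lib>"
s = level_wide_summary(doc)
print(s[(1, "book")])   # 2 books at depth 1
print(s[(2, "title")])  # 3 titles at depth 2
```

The real structures refine this with per-path and per-axis statistics (and, for EXsum, content statistics), but the lookup pattern during optimization is analogous.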