## Doctoral Thesis

### Refine

#### Faculty / Organisational entity

- Fachbereich Mathematik (179)
- Fachbereich Informatik (88)
- Fachbereich Maschinenbau und Verfahrenstechnik (55)
- Fachbereich Chemie (38)
- Fachbereich Elektrotechnik und Informationstechnik (37)
- Fachbereich Biologie (19)
- Fachbereich Sozialwissenschaften (12)
- Fachbereich ARUBI (5)
- Fachbereich Physik (4)
- Fraunhofer (ITWM) (4)

#### Year of publication

#### Document Type

- Doctoral Thesis (442)

#### Language

- English (442)

#### Keywords

- Visualisierung (10)
- finite element method (5)
- Algebraische Geometrie (4)
- Finite-Elemente-Methode (4)
- Navier-Stokes-Gleichung (4)
- Numerische Strömungssimulation (4)
- Optimization (4)
- Computeralgebra (3)
- Computergraphik (3)
- Finanzmathematik (3)

- Development of nano/micro hybrid susceptor sheet for induction heating applications (2016)
- Thermoplastic composite materials are widely used in the automotive and aerospace industries. Because of limits on shape complexity, different components need to be joined, whether by mechanical fasteners, adhesive bonding, or both; these methods, however, have several limitations. Thermoplastics melt on heating and regain their shape on cooling, which makes them ideal for joining through fusion bonding by induction heating. Joining non-conducting or non-magnetic thermoplastic composites requires an additional material that can generate heat under induction: polymers are neither conductive nor magnetic, so they have no inherent potential for inductive heating, but a susceptor sheet containing conductive materials (e.g. carbon fiber) or magnetic materials (e.g. nickel) can generate heat during induction. The main issues in induction heating are non-homogeneous and uncontrolled heating. In this work, it was observed that the heat generated by a susceptor sheet depends on its filler, the filler concentration and dispersion, the coil, the magnetic field strength, and the coupling distance. Combining different fillers not only increased the heating rate but also changed the heating mechanism: a heating rate of 40 °C/s was achieved with 15 wt.-% nickel-coated short carbon fibers and 3 wt.-% multiwalled carbon nanotubes, whereas nickel-coated short carbon fibers alone (15 wt.-%) attained 24 °C/s. Electrical conductivity, thermal conductivity, and magnetic properties were also measured. The results showed that electrical percolation was reached around 15 wt.-% with fibers and (13-6) wt.-% with hybrid fillers. Since the fibers were unidirectionally aligned, induction heating tests were also performed with susceptor sheets oriented parallel and perpendicular to the fiber direction, and with perforated sheets. The susceptor sheet showed homogeneous and fast heating and can be used for joining non-conductive or non-magnetic thermoplastic composites.

- Verification & Performance Measurement for Transport Protocol Parallel Routing of an AUTOSAR Gateway System (2016)
- A wide range of methods and techniques have been developed over the years to manage the increasing complexity of automotive Electrical/Electronic systems. Standardization is one such complexity-managing technique; it aims to minimize costs, avoid compatibility problems, and improve the efficiency of development processes. A well-known and widely practiced standard in the automotive industry is AUTOSAR (Automotive Open System Architecture), a common standard among OEMs (Original Equipment Manufacturers), suppliers, and other involved companies. It was originally developed with the goal of simplifying the overall development and integration of Electrical/Electronic artifacts from different functional domains, such as hardware, software, and vehicle communication. However, the AUTOSAR standard, in its current state, cannot manage the problems in some areas of system development. The validation and optimization of system configurations, handled in this thesis, are examples of such areas, for which the AUTOSAR standard so far offers no mature solutions. Generally, systems developed on the basis of AUTOSAR must be configured so that all defined requirements are met. In most cases, the number of configuration parameters and their possible settings is large, especially if the developed system is complex, with modules from various knowledge domains. The verification process can then consume a lot of resources, since testing all possible combinations of configuration settings, and ideally finding the optimal configuration variant, requires a very large number of test cases. This is referred to in the literature as the combinatorial explosion problem. Combinatorial testing is an active and promising area of functional testing that offers ideas for solving it: the focus is on covering interaction errors by selecting a sample of system input parameters or configuration settings for test case generation. However, industrial acceptance of combinatorial testing is still weak because of the scarcity of real industrial examples. This thesis attempts to fill this gap between industry and academia and to emphasize the effectiveness of combinatorial testing in verifying complex configurable systems. Its particular intention is to provide a new, applicable combinatorial testing approach to fight the combinatorial explosion problem that emerged during the verification and performance measurement of transport protocol parallel routing of an AUTOSAR gateway. The proposed approach has been validated and evaluated on two real industrial examples of AUTOSAR gateways with multiple communication buses and two different degrees of complexity to illustrate its applicability.
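The core idea of combinatorial (here: pairwise) testing described above can be sketched with a small greedy covering-suite generator. The parameter names and values below (`bus`, `payload`, `routing`) are invented for illustration and are not taken from the thesis:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily build a test suite covering every pair of parameter values.

    `parameters` maps a parameter name to its list of possible settings.
    The result is usually far smaller than the full cartesian product
    that exhaustive testing would require.
    """
    names = sorted(parameters)
    # Every value pair that must occur together in at least one test case.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    suite = []
    while uncovered:
        # Pick the full assignment covering the most still-uncovered pairs.
        best, best_pairs = None, set()
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            covered = {
                ((a, va), (b, vb))
                for ((a, va), (b, vb)) in uncovered
                if case[a] == va and case[b] == vb
            }
            if len(covered) > len(best_pairs):
                best, best_pairs = case, covered
        suite.append(best)
        uncovered -= best_pairs
    return suite

params = {"bus": ["CAN", "FlexRay", "Ethernet"],
          "payload": ["small", "large"],
          "routing": ["parallel", "serial"]}
suite = pairwise_suite(params)
print(len(suite), "test cases instead of",
      len(params["bus"]) * len(params["payload"]) * len(params["routing"]))
```

Even in this toy instance the suite is roughly half the size of the exhaustive product; for realistic gateway configurations with dozens of parameters the reduction is what makes verification tractable.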

- Centimeter-Level Accuracy Path Tracking Control of Tractors and Actively Steered Implements (2015)
- Accurate path tracking control of tractors has become a key technology for automation in agriculture. Increasingly sophisticated solutions, however, have revealed that accurate path tracking control of implements is at least equally important. This work therefore focuses on accurate path tracking control of both tractors and implements. The latter, as a prerequisite for improved control, are equipped with steering actuators such as steerable wheels or a steerable drawbar, i.e. the implements are actively steered. This work contributes both new plant models and new control approaches for such tractor-implement combinations. The plant models comprise dynamic vehicle models, accounting for the forces and moments causing the vehicle motion, as well as simplified kinematic descriptions. All models have been derived in a systematic and automated manner to allow for variants of implements and actuator combinations. Path tracking controller design begins with a comprehensive overview and discussion of existing approaches in related domains. Two new approaches are proposed that combine the systematic setup and tuning of a linear-quadratic regulator with the simplicity of a static output feedback approximation. The first approach ensures accurate path tracking on slopes and curves by including integral control for a selection of controlled variables. The second instead ensures this by adding disturbance feedforward control based on side-slip estimation with a non-linear kinematic plant model and an extended Kalman filter. For both approaches, a feedforward control approach for curved path tracking has been newly derived. In addition, a straightforward extension of the control accounting for the implement orientation has been developed. All control approaches have been validated in simulations and in experiments carried out with a mid-size tractor and a custom-built demonstrator implement.

- Model-based Design of Embedded Systems by Desynchronization (2016)
- In this thesis we developed a desynchronization design flow with the goal of easing the development effort for distributed embedded systems. The starting point of this design flow is a network of synchronous components. By transforming this synchronous network into a dataflow process network (DPN), we ensure that important properties that are difficult or theoretically impossible to analyze directly on DPNs are preserved by construction. In particular, both deadlock-freeness and buffer boundedness can be preserved after desynchronization. For the correctness of desynchronization, we developed a criterion consisting of two properties: a global property that demands the correctness of the synchronous network, and a local property that requires the latency-insensitivity of each local synchronous component. As the global property is a correctness requirement of synchronous systems in general, we take it as an assumption of our desynchronization. The local property, however, is in general not satisfied by all synchronous components and therefore needs to be verified before desynchronization. In this thesis we developed a novel technique for verifying the local property that can be carried out very efficiently. Finally, we developed a model transformation that translates a set of synchronous guarded actions, an intermediate format for synchronous systems, to an asynchronous actor description language (CAL). Our theorem ensures that, once the correctness verification has passed, the generated DPN of asynchronous processes (or actors) preserves the functional behavior of the original synchronous network. Moreover, by the correctness of the synchronous network, our theorem guarantees that the derived DPN is deadlock-free and can be implemented with only finitely bounded buffers.
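The notion of a dataflow process network with bounded buffers can be illustrated with a toy example (this is not the thesis's CAL-based flow; the actors, their step functions, and the round-robin schedule are invented for illustration):

```python
from collections import deque

class Fifo:
    """A FIFO channel with an explicit capacity bound."""
    def __init__(self, bound):
        self.q, self.bound = deque(), bound
    def can_put(self): return len(self.q) < self.bound
    def can_get(self): return bool(self.q)
    def put(self, v): self.q.append(v)
    def get(self): return self.q.popleft()

def producer(fifo, inputs):
    # Firing rule: fire only while an input token AND buffer space exist.
    while inputs and fifo.can_put():
        fifo.put(inputs.pop(0) * 2)      # this actor's step function

def consumer(fifo, outputs):
    # Firing rule: fire only while a token is available on the channel.
    while fifo.can_get():
        outputs.append(fifo.get() + 1)   # this actor's step function

fifo, outputs = Fifo(bound=1), []
inputs = [1, 2, 3]
# Round-robin scheduling of the two actors. A buffer bound of 1 mimics the
# lockstep execution of the original synchronous network, so the run neither
# deadlocks nor needs unbounded buffering.
while inputs or fifo.can_get():
    producer(fifo, inputs)
    consumer(fifo, outputs)
print(outputs)  # → [3, 5, 7]
```

The actors communicate only through the channel and block on its firing conditions, which is exactly the asynchronous execution model into which the synchronous network is translated.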

- Monoids as Storage Mechanisms (2016)
- Automata theory has given rise to a variety of automata models that consist of a finite-state control and an infinite-state storage mechanism. The aim of this work is to provide insights into how the structure of the storage mechanism influences the expressiveness and the analyzability of the resulting model. To this end, it presents generalizations of results about individual storage mechanisms to larger classes; these generalizations characterize the storage mechanisms for which the given result remains true and those for which it fails. In order to speak of classes of storage mechanisms, we need an overarching framework that accommodates each of the concrete storage mechanisms we wish to address. Such a framework is provided by the model of valence automata, in which the storage mechanism is represented by a monoid. Since the monoid serves as a parameter specifying the storage mechanism, our aim translates into the question: for which monoids does the given (automata-theoretic) result hold? As a first result, we present an algebraic characterization of those monoids over which valence automata accept only regular languages. In addition, it turns out that for each monoid, this is the case if and only if valence grammars, an analogous grammar model, can generate only context-free languages. Furthermore, we are concerned with closure properties: we study which monoids result in a Boolean closed language class. For every language class that is closed under rational transductions (in particular, those induced by valence automata), we show: if the class is Boolean closed and contains any non-regular language, then it already includes the whole arithmetical hierarchy. This work also introduces the class of graph monoids, which are defined by finite graphs. By choosing appropriate graphs, one can realize a number of prominent storage mechanisms, as well as combinations and variants thereof. Examples are pushdowns, counters, and Turing tapes.
We can therefore relate the structure of the graphs to computational properties of the resulting storage mechanisms. In the case of graph monoids, we study (i) the decidability of the emptiness problem, (ii) which storage mechanisms guarantee semilinear Parikh images, (iii) when silent transitions (i.e. those that read no input) can be avoided, and (iv) which storage mechanisms permit the computation of downward closures.
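A minimal sketch of the valence-automaton idea, using the monoid \((\mathbb{Z}, +)\) as storage (i.e. a blind one-counter automaton): a run accepts if it ends in a final state with accumulated monoid value equal to the identity 0. Even with a single counter and two control states this recognizes the non-regular language \(\{a^n b^n : n \ge 0\}\). The encoding below is an illustration, not code from the thesis:

```python
def accepts(word):
    """Valence automaton over (Z, +) for { a^n b^n : n >= 0 }.

    Control states: "A" (reading a's), "B" (reading b's), both final.
    Each transition multiplies (here: adds) a monoid element onto the
    storage value; acceptance requires the value to be the identity 0.
    """
    state, value = "A", 0
    for ch in word:
        if state == "A" and ch == "a":
            value += 1                 # transition labelled with +1
        elif state == "A" and ch == "b":
            state, value = "B", value - 1   # switch phase, labelled with -1
        elif state == "B" and ch == "b":
            value -= 1                 # transition labelled with -1
        else:
            return False               # no transition available
    return value == 0                  # accept iff storage equals identity

print(accepts("aabb"), accepts("aab"))  # → True False
```

Replacing \((\mathbb{Z}, +)\) with another monoid (a free group for pushdowns, products for several counters, and so on) changes the storage mechanism without touching the automaton scaffolding, which is precisely what makes the monoid a useful parameter.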

- Hecke algebras of type A: Auslander--Reiten quivers and branching rules (2016)
- The thesis consists of two parts. In the first part we consider the stable Auslander--Reiten quiver of a block \(B\) of a Hecke algebra of the symmetric group at a root of unity in characteristic zero. The main theorem states that if the ground field is algebraically closed and \(B\) is of wild representation type, then the tree class of every connected component of the stable Auslander--Reiten quiver \(\Gamma_{s}(B)\) of \(B\) is \(A_{\infty}\). The main ingredient of the proof is a skew group algebra construction over a quantum complete intersection. Also, for these algebras the stable Auslander--Reiten quiver is computed in the case where the defining parameters are roots of unity. As a result, the tree class of every connected component of the stable Auslander--Reiten quiver is \(A_{\infty}\). In the second part of the thesis we are concerned with branching rules for Hecke algebras of the symmetric group at a root of unity. We give a detailed survey of the theory initiated by I. Grojnowski and A. Kleshchev, describing the Lie-theoretic structure that the Grothendieck group of finite-dimensional modules over a cyclotomic Hecke algebra carries. A decisive role in this approach is played by various functors that give branching rules for cyclotomic Hecke algebras that are independent of the underlying field. We give a thorough definition of divided power functors that will enable us to reformulate the Scopes equivalence of a Scopes pair of blocks of Hecke algebras of the symmetric group. As a consequence we prove that two indecomposable modules that correspond under this equivalence have a common vertex. In particular, we verify the Dipper--Du Conjecture in the case where the blocks under consideration have finite representation type.

- Getting Ready to Read: Promoting Children's Emergent Literacy Through Shared Book Reading in a German Context (2016)
- The present study investigated the effects of two methods of shared book reading on children's emergent literacy skills, namely language skills (expressive vocabulary and semantic skills) and grapheme awareness, i.e. before the alphabetic phase of reading acquisition (Lachmann & van Leeuwen, 2014), in home and kindergarten contexts. The following two shared book reading methods were investigated. Method I, literacy enrichment: 200 extra children's books were distributed in kindergartens, and children were encouraged every week to borrow a book to take home and read with their parents; in addition, a letter was sent to the parents encouraging them to read the books with their children at home frequently. Method II, teacher training: kindergarten teachers participated in structured training that included formal instruction on how to promote child language development through shared book reading; the training was an adaptation of the Heidelberger Interaktionstraining für pädagogisches Fachpersonal zur Förderung ein- und mehrsprachiger Kinder - HIT (Buschmann & Jooss, 2011). The effects of the two methods in combination were also investigated. Three questions were addressed: what effect do Method I (literacy enrichment), Method II (teacher training), and the combination of both methods have on (1) children's expressive vocabulary, (2) children's semantic skills, and (3) children's grapheme awareness? Accordingly, 69 children aged 3;0 to 4;8 years were recruited from four kindergartens in the city of Kaiserslautern, Germany. The kindergartens were assigned as follows: kindergarten 1, Method I (N = 13); kindergarten 2, Method II (N = 18); kindergarten 3, combination of both methods (N = 17); kindergarten 4, control group (N = 21). Half of the participants (N = 35) were reported to have a migration background. All groups were similar with regard to socioeconomic status and literacy activities at home. In a pre-/posttest design, children performed three tests: expressive vocabulary (AWST-R 3-5; Kiese-Himmel, 2005), semantic skills (SETK 3-5, subtest ESR; Grimm, 2001), and grapheme awareness, a task developed to test children's familiarity with grapheme forms. The intervention period lasted six months. The data were analyzed with IBM SPSS Statistics version 22. Regarding language skills, Method I showed no significant effects on children's expressive vocabulary or semantic skills. Method II showed significant effects on children's expressive vocabulary, and children with a migration background benefited more from the method; for semantic skills, no significant effects were found. The combination of both methods had no significant effects on children's language skills. For grapheme awareness, however, the results showed positive effects for Method I, for Method II, and for the combination of both methods. The combination, as indicated by a large effect size, proved more effective than Method I or Method II alone. Moreover, the results indicated that for grapheme awareness, all children (regardless of age, gender, or migration background) benefited equally in all three intervention groups. Overall, it can be concluded from the present study that, by providing access to good books, Method I may help parents involve themselves actively in the development of their child's literacy skills. However, access to books alone proved insufficient to improve language skills; it is therefore suggested to combine such access with additional support for parents on how to improve their language interactions with their children. With respect to Method II, the present study suggests that shared book reading supported by professional training is an important tool for children's language development. For grapheme awareness, it is concluded that, with the combination of the two methods, high exposure to shared book reading helps children informally learn about the surface characteristics of print, acquire some familiarity with the visual characteristics of letters, and learn to differentiate them from other visual patterns. Finally, the study points organizations and institutions, as well as future research, to the importance of programs that give children more contact with adequate language interaction and more experience with print through shared book reading.

- New Aspects of Inflation Modeling (2016)
- Inflation modeling is a very important tool for conducting an efficient monetary policy. This doctoral thesis reviewed inflation models, in particular Phillips curve models of inflation dynamics. We focused on a well-known and widely used model, the three-equation new Keynesian model, a system of equations consisting of a new Keynesian Phillips curve (NKPC), an investment and saving (IS) curve, and an interest rate rule, and gave a detailed derivation of these equations. The interest rate rule in this model is normally determined by a Lagrangian method that solves an optimal control problem constrained by a standard discrete-time NKPC, which describes the inflation dynamics, and an IS curve, which represents the output gap dynamics. In contrast to the real world, this method assumes that policy makers intervene continuously, i.e. the costs resulting from changes in interest rates are ignored. We also showed that approximation errors are made when non-linear equations are log-linearized in the derivation of the standard discrete-time NKPC. We agreed with other researchers, as discussed in this thesis, that ignoring such log-linear approximation errors and the costs of altering interest rates when determining the interest rate rule can lead to a suboptimal rule and hence to non-optimal paths of the output gap and the inflation rate. To overcome this problem, we proposed a stochastic optimal impulse control method: we formulated the central bank's problem as a stochastic optimal impulse control problem that accounts for the costs of interest rate changes and for the approximation error terms. To formulate this problem, we first transformed the standard discrete-time NKPC and the IS curve into their high-frequency and hence continuous-time versions, in which the error terms are described by a zero-mean Gaussian white noise with finite, constant variance. We then used the quasi-variational inequality approach to solve analytically a special case of the central bank's problem, in which the inflation rate is on target and the central bank has to optimally control the output gap dynamics. This method yields an optimal control band in which the output gap process has to be maintained, together with an optimal control strategy, comprising the optimal intervention size and the optimal intervention time, that keeps the process inside the band. Finally, using a numerical example, we examined the impact of some model parameters on the optimal control strategy. The results show that an increase in the output gap volatility, as well as in the fixed and proportional costs of interest rate changes, widens the optimal control band; the optimal intervention then requires the central bank to wait longer before undertaking another control action.
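For reference, the three-equation new Keynesian model the abstract derives has, in its standard textbook log-linearized form (notation here is the conventional one and may differ from the thesis):

```latex
\begin{aligned}
\pi_t &= \beta\,\mathbb{E}_t[\pi_{t+1}] + \kappa\, x_t
        && \text{(NKPC)}\\
x_t   &= \mathbb{E}_t[x_{t+1}]
        - \tfrac{1}{\sigma}\bigl(i_t - \mathbb{E}_t[\pi_{t+1}] - r_t^{\,n}\bigr)
        && \text{(IS curve)}\\
i_t   &= \phi_\pi\,\pi_t + \phi_x\, x_t
        && \text{(interest rate rule)}
\end{aligned}
```

where \(\pi_t\) is inflation, \(x_t\) the output gap, \(i_t\) the nominal interest rate, and \(r_t^{\,n}\) the natural real rate; the thesis's point is that the third equation is usually obtained by optimal control under the first two, ignoring intervention costs and log-linearization errors.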

- Recursive Utility and Stochastic Differential Utility: From Discrete to Continuous Time (2016)
- In this thesis, mathematical research questions related to recursive utility and stochastic differential utility (SDU) are explored. First, a class of backward equations under nonlinear expectations is investigated: existence and uniqueness of solutions are established, and the issues of stability and discrete-time approximation are addressed. It is then shown that backward equations of this class naturally appear as a continuous-time limit in the context of recursive utility with nonlinear expectations. Then, the Epstein-Zin parametrization of SDU is studied. The focus is on specifications with both relative risk aversion and elasticity of intertemporal substitution greater than one. A concave utility functional is constructed and a utility gradient inequality is established. Finally, consumption-portfolio problems with recursive preferences and unspanned risk are investigated. The investor's optimal strategies are characterized by a specific semilinear partial differential equation. The solution of this equation is constructed by a fixed point argument, and a corresponding efficient and accurate method to calculate optimal strategies numerically is given.
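The Epstein-Zin recursion referred to above takes, in its standard discrete-time form (with \(\gamma\) the relative risk aversion, \(\psi\) the elasticity of intertemporal substitution, and \(\beta\) the time-preference weight; notation assumed here, not copied from the thesis):

```latex
V_t = \Bigl[(1-\beta)\, c_t^{\,1-1/\psi}
      + \beta\,\Bigl(\mathbb{E}_t\bigl[V_{t+1}^{\,1-\gamma}\bigr]\Bigr)^{\!\frac{1-1/\psi}{1-\gamma}}
      \Bigr]^{\frac{1}{1-1/\psi}}
```

The case studied in the thesis, \(\gamma > 1\) and \(\psi > 1\), separates risk aversion from the willingness to substitute consumption over time, which a time-additive expected utility (where \(\psi = 1/\gamma\)) cannot do.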

- Utility-Based Risk Measures and Time Consistency of Dynamic Risk Measures (2016)
- This thesis deals with risk measures based on utility functions and with time consistency of dynamic risk measures. It is therefore aimed at readers interested in both the theory of static and dynamic financial risk measures in the sense of Artzner, Delbaen, Eber and Heath [7], [8] and the theory of preferences in the tradition of von Neumann and Morgenstern [134]. A main contribution of this thesis is the introduction of optimal expected utility (OEU) risk measures as a new class of utility-based risk measures. We introduce OEU, investigate its main properties and its applicability to risk measurement, and put it in perspective against alternative risk measures and notions of certainty equivalents. To the best of our knowledge, OEU is the only existing utility-based risk measure that is non-trivial and coherent if the utility function u has constant relative risk aversion. We present several different risk measures that can be derived with special choices of u and illustrate that OEU reacts more sensitively to slight changes in the probability of a financial loss than value at risk (V@R) and average value at risk. Further, we propose implied risk aversion as a coherent rating methodology for retail structured products (RSPs). Implied risk aversion is based on optimal expected utility risk measures and, in contrast to standard V@R-based ratings, takes into account both the upside potential and the downside risks of such products. In addition, implied risk aversion is easily interpreted in terms of an individual investor's risk aversion: a product is attractive (unattractive) for an investor if its implied risk aversion is higher (lower) than his individual risk aversion. We illustrate this approach in a case study with more than 15,000 warrants on the DAX® and find that implied risk aversion is able to identify favorable products; in particular, implied risk aversion is not necessarily increasing with respect to the strikes of call warrants.
Another main focus of this thesis is on consistency of dynamic risk measures. To this end, we study risk measures on the space of distributions, discuss concavity on the level of distributions and slightly generalize Weber's [137] findings on the relation of time consistent dynamic risk measures to static risk measures to the case of dynamic risk measures with time-dependent parameters. Finally, this thesis investigates how recursively composed dynamic risk measures in discrete time, which are time consistent by construction, can be related to corresponding dynamic risk measures in continuous time. We present different approaches to establish this link and outline the theoretical basis and the practical benefits of this relation. The thesis concludes with a numerical implementation of this theory.
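The two benchmark risk measures the abstract compares OEU against can be computed from a loss sample with the standard empirical estimators (the OEU measure itself is not reproduced here; the loss data below are invented for illustration):

```python
def value_at_risk(losses, alpha):
    """Empirical V@R at level alpha: the alpha-quantile of the loss sample.

    Losses are positive numbers; larger means worse. This uses the simple
    order-statistic estimator xs[floor(n * alpha)].
    """
    xs = sorted(losses)
    k = int(len(xs) * alpha)
    return xs[min(k, len(xs) - 1)]

def average_value_at_risk(losses, alpha):
    """Empirical AV@R: the mean of the losses at or beyond V@R.

    Unlike V@R, this tail average reacts to how bad the tail losses are,
    which is one reason it is coherent while V@R is not.
    """
    var = value_at_risk(losses, alpha)
    tail = [x for x in losses if x >= var]
    return sum(tail) / len(tail)

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(value_at_risk(losses, 0.8))           # → 9
print(average_value_at_risk(losses, 0.8))   # → 9.5
```

AV@R always dominates V@R at the same level; the thesis's point is that OEU is more sensitive still to small shifts in the loss probability than either of these.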