Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios
(2021)
This dissertation consists of three independent parts: the yield curve shapes generated by interest rate models, yield curve forecasting, and the application of the chance-risk classification to a portfolio of pension products. As a component of the capital market model, the yield curve influences the chance-risk classification, which was introduced to improve the comparability of pension products and to strengthen consumer protection. Consequently, all three topics have a major impact on this essential safeguard.
Firstly, we focus on the yield curve shapes attainable in Vasicek interest rate models. We extend the existing studies on the attainable yield curve shapes in the one-factor Vasicek model by an analysis of the curvature. Further, we show that the two-factor Vasicek model can explain significantly more effects observed in the market than its one-factor variant, among them the occurrence of dipped yield curves.
We further introduce a general change-of-measure framework for the Monte Carlo simulation of the Vasicek model under a subjective measure. It can be used to avoid an unrealistically high frequency of inverted yield curves as the simulation horizon grows.
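As a minimal illustration of how such yield curves are generated (a hedged sketch, not the author's implementation; model parameters, maturities, and the simulation horizon are purely illustrative), the one-factor Vasicek short rate can be simulated and mapped to zero-coupon yields via the standard closed-form bond price:

```python
import numpy as np

# Hedged sketch: one-factor Vasicek model dr = kappa*(theta - r) dt + sigma dW,
# with the standard closed-form zero-coupon bond price used to turn a simulated
# short rate into a yield curve. All parameter values below are illustrative.
kappa, theta, sigma, r0 = 0.86, 0.05, 0.01, 0.03

def vasicek_short_rate(n_paths, n_steps, dt, rng):
    """Simulate r_T by exact transition sampling of the Vasicek process."""
    r = np.full(n_paths, r0)
    decay = np.exp(-kappa * dt)
    std = sigma * np.sqrt((1.0 - np.exp(-2.0 * kappa * dt)) / (2.0 * kappa))
    for _ in range(n_steps):
        r = theta + (r - theta) * decay + std * rng.standard_normal(n_paths)
    return r

def yield_curve(r, maturities):
    """Zero yields y(tau) = -log P(tau) / tau from the closed-form bond price."""
    tau = np.asarray(maturities, dtype=float)
    B = (1.0 - np.exp(-kappa * tau)) / kappa
    A = np.exp((theta - sigma**2 / (2.0 * kappa**2)) * (B - tau)
               - sigma**2 * B**2 / (4.0 * kappa))
    P = A * np.exp(-np.outer(r, B))          # one row per simulated path
    return -np.log(P) / tau

rng = np.random.default_rng(0)
r_one_year = vasicek_short_rate(n_paths=10_000, n_steps=252, dt=1/252, rng=rng)
curves = yield_curve(r_one_year, maturities=[1, 2, 5, 10, 20, 30])
print(curves.mean(axis=0))   # average simulated yield curve one year ahead
```

Depending on the parameters, the simulated curves take normal, inverse, or humped shapes; as noted above, dipped curves require the two-factor variant.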
Secondly, we examine different time series models, including machine learning algorithms, for forecasting the yield curve. We consider statistical time series models such as autoregression and vector autoregression and compare their performance with that of a multilayer perceptron, a fully connected feed-forward neural network. For this purpose, we develop an extended approach for the hyperparameter optimization of the perceptron which is based on standard procedures like Grid and Random Search but allows searching a larger hyperparameter space. Our investigation shows that multilayer perceptrons outperform the statistical models for long forecast horizons.
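The abstract does not spell out the forecasting pipeline; purely as a hedged sketch of the Random Search baseline it builds on, a multilayer perceptron could be tuned on lagged yields roughly as follows (the feature construction, parameter ranges, and placeholder data are assumptions for illustration, not the thesis's actual setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

# Placeholder yield series; in practice this would be an observed yield history.
rng = np.random.default_rng(1)
yields = 0.02 + np.cumsum(rng.normal(0.0, 0.001, 1000))

# Autoregressive framing: forecast h steps ahead from the last p observations.
p, h = 12, 6
X = np.stack([yields[i:i + p] for i in range(len(yields) - p - h + 1)])
y = yields[p + h - 1:]

param_distributions = {
    "hidden_layer_sizes": [(16,), (32,), (32, 16), (64, 32)],
    "alpha": [1e-5, 1e-4, 1e-3, 1e-2],
    "learning_rate_init": [1e-4, 1e-3, 1e-2],
}
search = RandomizedSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_distributions,
    n_iter=20,
    cv=TimeSeriesSplit(n_splits=5),
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```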
The third part deals with the chance-risk classification of state-subsidized pension products in Germany and its relevance for customer consulting. To optimize the use of the chance-risk classes assigned by Produktinformationsstelle Altersvorsorge gGmbH, we develop a procedure for determining the chance-risk class of different portfolios of state-subsidized pension products under the constraint that the portfolio chance-risk class does not exceed the customer's risk preference. We consider one portfolio consisting of two new pension products and a second one combining a product the customer already owns with the offer of a new one. This is of particular interest for customer consulting and can also include other assets of the customer. We examine the properties of various chance and risk parameters as well as their corresponding mappings and show that a diversification effect exists. Based on these properties, we conclude that the average final contract values have to be used to obtain the upper bound of the portfolio chance-risk class. Furthermore, since the chance-risk class is only assigned at the beginning of the accumulation phase, we develop an approach for determining it over the contract term. On the one hand, we follow the current legal situation; on the other hand, we suggest an approach that requires further simulations. Finally, we translate our results into recommendations for customer consultation.
Wreath product groups \(C_\ell \wr \mathfrak{S}_n\) have a rich combinatorial representation theory coming from the symmetric group case and involving partitions, Young tableaux, and Specht modules. To such a wreath product group \(W\), one can associate various algebras and geometric objects: Hecke algebras, quantum groups, Hilbert schemes, Calogero--Moser spaces, and (restricted) rational Cherednik algebras. Over the years, surprising connections have been made between many of these objects, and many of these connections can be traced back to combinatorial constructions and properties of the group \(W\) itself.
In this thesis, we study one of these algebras, namely the restricted rational Cherednik algebra \(\overline{\mathsf{H}}_\mathbf{c}(W)\), in order to find combinatorial models which describe certain representation-theoretic phenomena around \(\overline{\mathsf{H}}_\mathbf{c}(W)\). In particular, we generalize a result by Gordon and describe the graded \(W\)-characters of the simple modules of \(\overline{\mathsf{H}}_\mathbf{c}(W)\) for generic parameter \(\mathbf{c}\) using Haiman's wreath Macdonald polynomials: these graded \(W\)-characters turn out to be specializations of the wreath Macdonald polynomials. In the non-generic parameter case, we use recent results by Maksimau to combinatorially express an inductive rule for \(\overline{\mathsf{H}}_\mathbf{c}(W)\)-modules first described by Bellamy. We use our results in type \(B\) to describe the (ungraded) \(B_n\)-character of simple \(\overline{\mathsf{H}}_\mathbf{c}(B_n)\)-modules associated to bipartitions with one empty part. Afterwards, we relate this combinatorial induction to various other algebras and families of \(W\)-characters found in the literature, such as Lusztig's constructible characters, and detail some connections between generic and non-generic parameters using wreath Macdonald polynomials.
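For orientation, the following are standard facts about \(C_\ell \wr \mathfrak{S}_n\) that the above relies on (background only, not results of the thesis):
\[
  C_\ell \wr \mathfrak{S}_n \;=\; (C_\ell)^n \rtimes \mathfrak{S}_n,
  \qquad
  \lvert C_\ell \wr \mathfrak{S}_n \rvert = \ell^n\, n!,
\]
with \(\mathfrak{S}_n\) acting by permuting the \(n\) factors of \((C_\ell)^n\), and the irreducible complex characters of \(C_\ell \wr \mathfrak{S}_n\) are indexed by \(\ell\)-multipartitions \(\boldsymbol{\lambda} = (\lambda^{(0)}, \dots, \lambda^{(\ell-1)})\) with \(\sum_i \lvert\lambda^{(i)}\rvert = n\). For \(\ell = 2\) this is the Weyl group \(B_n\), whose characters are indexed by the bipartitions appearing above.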
In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
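In symbols (a hedged paraphrase of this standard setup, not a quotation from the thesis): if at the crash time \(\tau\) the stock price falls by the fraction \(k \in [0, k^*]\), the wealth of an investor holding the fraction \(\pi(\tau)\) of her wealth in the stock jumps to \(X^{\pi}(\tau) = \bigl(1 - \pi(\tau)\, k\bigr)\, X^{\pi}(\tau-)\), and the worst-case problem is
\[
  \sup_{\pi}\;\inf_{\,0 \le k \le k^*,\; \tau \in [0,T]}\;
  \mathbb{E}\bigl[\, U\bigl(X^{\pi}(T)\bigr) \bigr],
\]
where \(U\) denotes the investor's utility function and \(T\) the investment horizon.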
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.
In 2002, Korn and Wilmott introduced the worst-case scenario optimal portfolio approach. They extend a Black-Scholes-type security market to include the possibility of a crash. For the modeling of the possible stock price crash they use a Knightian uncertainty approach and thus make no probabilistic assumption on the crash size or the crash time distribution. Based on an indifference argument they determine the optimal portfolio process for an investor who wants to maximize the expected utility from final wealth. In this thesis, the worst-case scenario approach is extended in various directions to enable the consideration of stress scenarios, to include the possibility of asset defaults, and to allow for parameter uncertainty.
Insurance companies and banks regularly have to face stress tests performed by regulatory authorities. In the first part we model their investment decision problem including such stress scenarios. This leads to optimal portfolios that are already prepared for stress tests by construction. The solution to this portfolio problem uses the newly introduced concept of minimum constant portfolio processes.
In the second part we formulate an extended worst-case portfolio approach, where asset defaults can occur in addition to asset crashes. In our model, the strictly risk-averse investor does not know which asset is affected by the worst-case scenario. We solve this problem by introducing the so-called worst-case crash/default loss.
In the third part we set up a continuous-time portfolio optimization problem that includes the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine the worst-case scenario approach with a model ambiguity approach that is also based on Knightian uncertainty. We solve this portfolio problem and consider two concrete examples with box uncertainty and ellipsoidal drift ambiguity.
Distributed systems are omnipresent nowadays, and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to the certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, systems on chip, software-defined networking, data center networking, and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
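As a hedged, self-contained illustration of these two bounds (textbook NC results for the simplest curve shapes, not taken from this thesis; all numbers are illustrative), the closed-form delay and backlog bounds for a token-bucket arrival curve served by a rate-latency server are:

```python
def nc_bounds(b, r, R, T):
    """Classical Network Calculus bounds for a token-bucket arrival curve
    alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*max(0, t - T).
    Stability requires r <= R."""
    if r > R:
        raise ValueError("unstable system: arrival rate exceeds service rate")
    delay_bound = T + b / R       # horizontal deviation between alpha and beta
    backlog_bound = b + r * T     # vertical deviation between alpha and beta
    return delay_bound, backlog_bound

# Illustrative values only: 1 Mbit burst, 5 Mbit/s sustained rate,
# served at 10 Mbit/s with 2 ms latency.
delay, backlog = nc_bounds(b=1e6, r=5e6, R=10e6, T=0.002)
print(f"delay bound: {delay * 1e3:.1f} ms, backlog bound: {backlog / 1e6:.2f} Mbit")
```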
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and, most significantly, algNC is extended with optimization principles. These allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem to be solved is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, but it is also targeted at supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insight into the crowd condition is desired for quick reactions, and historical crowd condition measurements are desired for thorough post-event analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation relying on 2.4 GHz signals and their variation in crowds of people, proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches. Firstly, stationary sensors sensing crowd-centric signal sources. Secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region. Fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors that sensed crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6,400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which corresponds to a mean relative error of below 6% from the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach further, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods viable for detecting even low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a crowd density classification accuracy of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that result from the people's background rather than the crowd condition, using differential features that take into account the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I present the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
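To make the sensing-plus-learning pipeline concrete, the following is a hedged sketch only: a supervised regressor mapping simple per-scan-window 2.4 GHz features to a crowd density label. The feature names, data, and model choice are illustrative assumptions, not the pipeline used in the experiments above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 500

# Illustrative per-scan-window features (placeholder values, not measured data).
features = np.column_stack([
    rng.poisson(40, n_windows),        # number of discovered devices
    rng.normal(-70, 5, n_windows),     # mean RSSI of discovered devices [dBm]
    rng.normal(6, 2, n_windows),       # standard deviation of RSSI
    rng.uniform(0, 1, n_windows),      # fraction of devices re-seen from previous window
])
density = rng.uniform(0.0, 2.5, n_windows)   # placeholder ground truth [people/m^2]

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, density,
                         scoring="neg_mean_absolute_error", cv=5)
print(f"cross-validated MAE: {-scores.mean():.3f} people/m^2")
```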
Reading as a cultural skill is acquired over a long period of training. This thesis supports the idea that reading is based on specific strategies that result from the modification and coordination of earlier developed object recognition strategies. The reading-specific processing strategies are considered to be more analytic compared to object recognition strategies, which are described as holistic. To enable proper reading skills, these strategies have to become automatized. Study 1 (Chapter 4) examined the temporal and visual constraints of letter recognition strategies. In the first experiment, two successively presented stimuli (letters or non-letters) had to be classified as same or different. The second stimulus could either be presented in isolation or surrounded by a shape, which was either similar (congruent) or different (incongruent) in its geometrical properties to the stimulus itself. The non-letter pairs were presented twice as often as the letter pairs. The results demonstrated a preference for the holistic strategy also in letters, even though the non-letter set was presented twice as often as the letter set, showing that the analytic strategy does not replace the holistic one completely, but that the usage of both strategies is task-sensitive. In Experiment 2, we compared the Global Precedence Effect (GPE) for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters. For letters, however, it disappeared: letters showed no overall response time advantage for the global level and symmetric congruence effects (local-to-global as well as global-to-local interference). These results indicate that reading is based on resident analytic visual processing strategies for letters. In Study 2 (Chapter 5), we replicated the latter result with a large group of participants as part of a study in which pairwise associations of non-letters and phonological or non-phonological sounds were systematically trained. We investigated whether training would eliminate the GPE also for non-letters. We observed, however, that the differentiation between letters and non-letter shapes persists after training. This result implies that pairwise association learning is not sufficient to overrule the process differentiation in adults. In addition, subtle effects arising in the letter condition (due to enhanced statistical power) enable us to further specify the differentiation in processing between letters and non-letter shapes. The influence of reading ability on the GPE was examined in Study 3 (Chapter 6). Children with normal reading skills and children with poor reading skills were instructed to detect a target in Latin or Hebrew Navon letters. Children with normal reading skills showed a GPE for Latin letters, but not for Hebrew letters. In contrast, the dyslexia group did not show a GPE for either kind of stimulus. These results suggest that dyslexic children are not able to apply the same automatized letter processing strategy that children with normal reading skills do. The difference between analytic letter processing and holistic non-letter processing was transferred to the context of whole word reading in Study 4 (Chapter 7).
When participants were instructed to detect either a letter or a non-letter in a mixed character string, the reaction times and error rates for letters increased linearly from the left to the right terminal position in the string, whereas for non-letters a symmetrical U-shaped function was observed. These results suggest that the letter-specific processing strategies are triggered automatically also for more word-like material. Thus, this thesis supports and expands prior results on letter-specific processing and provides new evidence for letter-specific processing strategies.
In an overall effort to contribute to the steadily expanding EO literature, this cumulative dissertation aims to help the literature to advance with greater clarity, comprehensive modeling, and more robust research designs. To achieve this, the first paper of this dissertation focuses on the consistency and coherence in variable choices and modeling considerations by conducting a systematic quantitative review of the EO-performance literature. Drawing on the plethora of previous EO studies, the second paper employs a comprehensive meta-analytic structural equation modeling approach (MASEM) to explore the potential for unique component-level relationships among EO’s three core dimensions in antecedent to outcome relationships. The third paper draws on these component-level insights and performs a finer-grained replication of the seminal MASEM of Rosenbusch, Rauch, and Bausch (2013) that proposes EO as a full mediator between the task environment and firm performance. The fourth and final paper of this cumulative dissertation illustrates exigent endogeneity concerns inherent in observational EO-performance research and provides guidance on how researchers can move towards establishing causal relationships.
Wetting of a solid surface with liquids is an important parameter in chemical engineering processes such as distillation, absorption, and desorption. The degree of wetting in packed columns mainly contributes to generating the effective interfacial area and thus to enhancing heat and mass transfer. In this work, the wetting of solid surfaces was studied both experimentally and virtually through three-dimensional CFD simulations using the multiphase Volume of Fluid (VOF) model implemented in the commercial software FLUENT, which can be used to simulate stratified flows [1]. Rivulet flow, a special case of film flow that is mostly found in packed columns, is discussed. Wetting of a solid flat and a wavy metal plate with rivulet liquid flow was simulated and experimentally validated. The local rivulet thickness was measured using an optically assisted mechanical sensor: a needle moved perpendicular to the plate surface by a step motor, and in the other two directions by two micrometers. The measured and simulated rivulet profiles were compared to selected theoretical models found in the literature, such as Duffy & Moffatt [2], Towell & Rothfeld [3], and Al-Khalil et al. [4]. The velocity field in a cross section of a rivulet flow and the non-dimensional maximum and mean velocity values for the vertical flat plate were also compared with models from Al-Khalil et al. [4] and Allen & Biggin [5]. A few CFD simulations for the wavy plate case were compared to the experimental findings and to the Towell model for a flat plate [3]. In the second stage of this work, 3-D CFD simulations and an experimental study were performed for the wetting of a structured packing element and a packing sheet consisting of three elements of the type Rombopak 4M, a product of the company Kuhni, Switzerland. The hydrodynamic parameters of a packed column, i.e. the degree of wetting, the interfacial area, and the liquid hold-up, were extracted from the CFD simulations for different liquid systems and liquid loads. Flow patterns and the degree of wetting were compared to the experiments, where the experimental values for the degree of wetting were estimated from snapshots of the flow on the packing sheet in a test rig. A new model describing the hydrodynamics of packed columns equipped with Rombopak 4M was derived with the help of the CFD simulation results. The model predicts the degree of wetting, the specific or interfacial area, and the liquid hold-up at different flow conditions. This model was compared to Billet & Schultes [6], the SRP model of Rocha et al. [7-9], Shi & Mersmann [10], and others. Since the pressure drop is one of the most important parameters in packed columns, especially for vacuum-operated columns, a few CFD simulations were performed to estimate the dry pressure drop in a structured and a flat packing element and were compared to the experimental results. Good agreement was found, on the one hand, between the experimental and the CFD simulation results and, on the other hand, between the simulations and theoretical models for rivulet flow on an inclined plate. The flow patterns and liquid spreading behaviour on the packing element agree well with the experimental results. The VOF model was found to be very sensitive to different liquid properties and can be used for optimizing packing geometries and revealing critical details of wetting and film flow.
As a further perspective, an extension of this work to CFD simulations of the flow inside a block of the packing is recommended, in order to obtain a detailed picture of the interaction between the liquid and the packing surfaces.
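The degree of wetting reported from the CFD simulations is, in essence, the wetted fraction of the packing surface. A schematic post-processing sketch is shown below; it assumes a per-wall-face liquid volume fraction exported from the VOF solution, and the arrays, threshold, and function name are illustrative assumptions rather than the procedure actually used in this work.

```python
import numpy as np

def degree_of_wetting(face_alpha, face_area, alpha_wet=0.5):
    """Wetted fraction of the packing surface from a VOF solution.

    face_alpha : liquid volume fraction sampled on each wall face (0..1)
    face_area  : area of each wall face [m^2]
    alpha_wet  : threshold above which a face counts as wetted (assumption)
    """
    wetted = face_alpha >= alpha_wet
    return float(np.sum(face_area[wetted]) / np.sum(face_area))

# Illustrative arrays only; in practice these would be exported from FLUENT.
alpha = np.array([0.9, 0.2, 0.7, 0.0, 1.0])
area = np.array([1e-4, 1e-4, 2e-4, 1e-4, 1e-4])
print(f"degree of wetting: {degree_of_wetting(alpha, area):.2f}")
```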
This research explores the development of web-based reference software for the characterisation of surface roughness of two-dimensional surface data. Reference software used for the verification of surface characteristics makes the evaluation methods easier to access for clients. The algorithms used in this software are based on international ISO standards. Most software used in industrial measuring instruments may give variations in the calculated parameters due to numerical differences in the calculation. Such variations can be verified using the proposed reference software. The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering, and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile to characterise surface roughness.
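As a small, hedged illustration of the final step (parameter calculation), the ISO 4287 amplitude parameters Ra and Rq can be computed from a levelled and filtered roughness profile as follows; the profile data here are placeholders, and this sketch is not the reference software itself:

```python
import numpy as np

def amplitude_parameters(z):
    """Basic ISO 4287 amplitude parameters from a roughness profile z that has
    already been levelled and filtered, so its mean line is (close to) zero."""
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                       # enforce a zero mean line
    Ra = np.mean(np.abs(z))                # arithmetical mean deviation
    Rq = np.sqrt(np.mean(z ** 2))          # root mean square deviation
    Rt = z.max() - z.min()                 # total height of the profile
    return Ra, Rq, Rt

# Placeholder profile: sine wave plus noise, heights in micrometres, 4 mm length.
x = np.linspace(0.0, 4.0, 8000)
z = 0.8 * np.sin(2 * np.pi * x / 0.25) + np.random.default_rng(0).normal(0, 0.05, x.size)
Ra, Rq, Rt = amplitude_parameters(z)
print(f"Ra = {Ra:.3f} um, Rq = {Rq:.3f} um, Rt = {Rt:.3f} um")
```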