Destructive lung diseases such as lung cancer and fibrosis are still often lethal. Likewise, in the case of liver fibrosis, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different states of pulmonary and hepatic fibrosis.
During compensatory lung growth, after part of the lung has been resected, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important for improving treatment options for diseases like lung cancer.
In the case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, both the process of fibrosis and compensatory lung growth can be studied by examining the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
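To make the two summary statistics concrete, the following minimal Python sketch (not the thesis's implementation; the structuring-element radii and the use of SciPy are assumptions) estimates a granulometry curve by morphological openings with balls of growing radius and collects spherical contact distances via a distance transform:

```python
import numpy as np
from scipy import ndimage

def granulometry(binary_volume, max_radius=10):
    """Sketch: granulometry (pattern spectrum) of a boolean 3D vessel mask.
    The foreground fraction removed between consecutive opening radii
    estimates the size (diameter) distribution of the structure."""
    def ball(r):
        # spherical structuring element of radius r
        z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
        return x**2 + y**2 + z**2 <= r**2

    total = float(binary_volume.sum())
    remaining = [total]
    for r in range(1, max_radius + 1):
        opened = ndimage.binary_opening(binary_volume, structure=ball(r))
        remaining.append(float(opened.sum()))
    remaining = np.array(remaining) / total
    return -np.diff(remaining)  # mass removed per radius step

def spherical_contact_distances(binary_volume):
    """Sketch: distances from background voxels to the nearest vessel voxel,
    i.e. the empirical basis of the spherical contact distribution."""
    return ndimage.distance_transform_edt(~binary_volume)[~binary_volume]
```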
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth and is hence of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state-of-the-art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to examine 2D features approximating the 3D findings reasonably well.
This thesis addresses several challenges for sustainable logistics operations and investigates (1) the integration of intermediate stops in the route planning of transportation vehicles, which especially becomes relevant when alternative-fuel vehicles with limited driving range or a sparse refueling infrastructure are considered, (2) the combined planning of the battery replacement infrastructure and of the routing for battery electric vehicles, (3) the use of mobile load replenishment or refueling possibilities in environments where the respective infrastructure is not available, and (4) the additional consideration of the flow of goods from the end user in backward direction to the point of origin for the purpose of, e.g., recapturing value or proper disposal. We utilize models and solution methods from the domain of operations research to gain insights into the investigated problems and thus to support managerial decisions with respect to these issues.
Magnetoelastic coupling describes the mutual dependence of the elastic and magnetic fields and can be observed in certain types of materials, among which are the so-called "magnetostrictive materials". They belong to the large class of "smart materials", which change their shape, dimensions or material properties under the influence of an external field. The mechanical strain or deformation a material experiences due to an externally applied magnetic field is referred to as magnetostriction; the reciprocal effect, i.e., the change of the magnetization of a body subjected to mechanical stress, is called inverse magnetostriction. The coupling of mechanical and electromagnetic fields is particularly observed in "giant magnetostrictive materials", alloys of ferromagnetic materials that can exhibit several thousand times greater magnitudes of magnetostriction (measured as the ratio of the change in length of the material to its original length) than common magnetostrictive materials. These materials have wide application areas: they are used as variable-stiffness devices, as sensors and actuators in mechanical systems or as artificial muscles. Possible application fields also include robotics, vibration control, hydraulics and sonar systems.
Although the computational treatment of coupled problems has seen great advances over the last decade, the underlying problem structure is often neither fully understood nor taken into account when using black-box simulation codes. A thorough analysis of the properties of coupled systems is thus an important task.
The thesis focuses on the mathematical modeling and analysis of the coupling effects in magnetostrictive materials. Under the assumption of linear and reversible material behavior with no magnetic hysteresis effects, a coupled magnetoelastic problem is set up using two different approaches: the magnetic scalar potential and vector potential formulations. On the basis of a minimum energy principle, a system of partial differential equations is derived and analyzed for both approaches. While the scalar potential model involves only stationary elastic and magnetic fields, the model using the magnetic vector potential accounts for different settings such as the eddy current approximation or the full Maxwell system in the frequency domain.
The distinctive feature of this work is the analysis of the obtained coupled magnetoelastic problems with regard to their structure, strong and weak formulations, the corresponding function spaces and the existence and uniqueness of the solutions. We show that the model based on the magnetic scalar potential constitutes a coupled saddle point problem with a penalty term. The main focus in proving the unique solvability of this problem lies on the verification of an inf-sup condition in the continuous and discrete cases. Furthermore, we discuss the impact of the reformulation of the coupled constitutive equations on the structure of the coupled problem and show that in contrast to the scalar potential approach, the vector potential formulation yields a symmetric system of PDEs. The dependence of the problem structure on the chosen formulation of the constitutive equations arises from the distinction of the energy and coenergy terms in the Lagrangian of the system. While certain combinations of the elastic and magnetic variables lead to a coupled magnetoelastic energy function yielding a symmetric problem, the use of their dual variables results in a coupled coenergy function for which a mixed problem is obtained.
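The saddle point structure mentioned above can be written schematically (generic bilinear forms; notation assumed, not the thesis's exact system): find \( (u,\psi) \in V \times Q \) such that
\[
a(u,v) + b(v,\psi) = f(v) \quad \forall v \in V, \qquad
b(u,\varphi) - \varepsilon\, c(\psi,\varphi) = g(\varphi) \quad \forall \varphi \in Q,
\]
where \( \varepsilon\, c(\cdot,\cdot) \) plays the role of the penalty term. Unique solvability in the Brezzi framework then hinges on coercivity of \( a \) on the kernel of \( b \) and on the inf-sup condition
\[
\inf_{\varphi \in Q}\, \sup_{v \in V}\, \frac{b(v,\varphi)}{\|v\|_{V}\, \|\varphi\|_{Q}} \;\geq\; \beta \;>\; 0 ,
\]
which has to be verified in both the continuous and the discrete setting.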
The presented models are supplemented with numerical simulations carried out in MATLAB for different examples, including a 1D Euler-Bernoulli beam under magnetic influence and a 2D magnetostrictive plate in a state of plane stress. The simulations are based on material data of Terfenol-D, a giant magnetostrictive material used in many industrial applications.
Graphs and flow networks are important mathematical concepts that enable the modeling and analysis of a large variety of real-world problems in different domains such as engineering, medicine or computer science. The number, sizes and complexities of those problems have continually increased during the last decades. This has led to an increased demand for techniques that help domain experts understand their data and its underlying structure to enable an efficient analysis and decision-making process.
To tackle this challenge, this work presents several new techniques that utilize concepts of visual analysis to provide domain scientists with new visualization methodologies and tools. To this end, it provides novel concepts and approaches for diverse aspects of visual analysis, such as data transformation, visual mapping, parameter refinement and analysis, model building and visualization, as well as user interaction.
The presented techniques form a framework that provides domain scientists with new visual analysis tools and helps them analyze their data and gain insight into the underlying structures. To show the applicability and effectiveness of the presented approaches, this work tackles different applications such as networking, product flow management and vascular systems, while preserving the generality to be applicable to further domains.
In this thesis, we deal with the worst-case portfolio optimization problem occurring in discrete-time markets.
First, we consider the discrete-time market model in the presence of crash threats. We construct the discrete worst-case optimal portfolio strategy by the indifference principle in the case of logarithmic utility. After that, we extend this problem to general utility functions and derive the discrete worst-case optimal portfolio processes, which are characterized by a dynamic programming equation. Furthermore, the convergence of the discrete worst-case optimal portfolio processes is investigated for explicit utility functions.
In order to further study the relation of the worst-case optimal value function in discrete-time models to continuous-time models, we establish a finite-difference approach. By deriving the discrete HJB equation, we verify that the worst-case optimal value function in discrete-time models satisfies a system of dynamic programming inequalities. With increasing fineness of the time discretization, the convergence of the worst-case value function in discrete-time models to that in continuous-time models is proved using a viscosity solution method.
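As an illustration of the max-min structure behind such a dynamic programming equation, a schematic discrete recursion for the worst-case value function (notation assumed here: at most one crash of maximal relative height \( k \), portfolio fraction \( \pi \), one-period return \( R_{n+1} \), post-crash value function \( \widetilde V \)) reads
\[
V_n(x) \;=\; \sup_{\pi}\; \min\Big\{\, \mathbb{E}\big[\, V_{n+1}\big( x\,(1+\pi R_{n+1}) \big) \big],\;\; \mathbb{E}\big[\, \widetilde V_{n+1}\big( x\,(1-\pi k)\,(1+\pi R_{n+1}) \big) \big] \,\Big\},
\]
i.e. the investor optimizes against the worse of the no-crash and worst-crash scenarios at every step.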
Study 1 (Chapter 2) is an empirical case study that concerns the nature of teaching–learning transactions that facilitate self-directed learning in vocational education and training of young adults in England. It addresses in part the concern that fostering the skills necessary for self-directed learning is an important endeavor of vocational education and training in many contexts internationally. However, there is a distinct lack of studies that investigate the extent to which facilitation of self-directed learning is present within vocational education and training in different contexts. An exploratory thematic qualitative analysis of inspectors’ comments within general Further Education college Ofsted inspection reports was conducted to investigate the balance of control of the learning process between teacher and learner within vocational education and training of young adults in England. A clear difference between outstanding and inadequate provision is reported. Inadequate provision was overwhelmingly teacher-directed. Outstanding provision reflected a collaborative relationship between teacher and learner in directing the learning process, despite the Ofsted framework not explicitly identifying the need for learner involvement in directing the learning process. The chapter offers insight into the understanding of how an effective balance of control of learning between teacher and learner may be realized in vocational education and training settings and highlights the need to consider the modulating role of contextual factors.
Following the further research directions outlined in Chapter 2, study 2 (Chapter 3) is a theoretical chapter that addresses the issue that fostering adult learners’ competence to adapt appropriately to our ever-changing world is a primary concern of adult education. The chapter has a novel purpose: it examines whether the consideration of modes of learning (instruction, performance, and inquiry) could assist in the design of adult education that facilitates self-directed learning and enables learners to think and perform adaptively. The concept of modes of learning originated from the typology of Houle (1980). However, to date, no study has reached beyond this typology, especially concerning the potential of using modes of learning in the design of adult education. Specifically, an apparent oversight in adult learning theory is the foremost importance of the consideration of whether inquiry is included in the learning process: its inclusion potentially differentiates the purpose of instruction, the nature of learners’ performance, and the underlying epistemological positioning. To redress this concern, two models of modes of learning are proposed and contrasted. The reinforcing model of modes of learning (instruction, performance, without inquiry) promotes teacher-directed learning. A key consequence of employing this model in adult education is that learners may become accustomed to habitually reinforcing patterns of perceiving, thinking, judging, feeling, and acting—performance that may be rather inflexible and represented by a distinct lack of a perceived need to adapt to social contextual changes: a lack of motivation for self-directed learning. Rather, the adapting model of modes of learning (instruction, performance, with inquiry) may facilitate learners to be adaptive in their performance—by encouraging an enhanced learner sensitivity toward changing social contextual conditions: potentially enhancing learners’ motivation for self-directed learning.
In line with the further research directions highlighted in Chapter 3, concerning the need to consider the nature and treatment of educational experiences that are conducive to learner growth and development, study 3 (Chapter 4) presents a systematic review of experiential learning theory, a theory that perhaps cannot be uncoupled from self-directed learning theory, especially in regard to understanding the cognitive aspect of self-directed learning, which represents an important direction for further research on self-directed learning. D. A. Kolb’s (1984) experiential learning cycle is perhaps the most scholarly influential and cited model regarding experiential learning theory. However, a key issue in interpreting Kolb’s model concerns a lack of clarity regarding what constitutes a concrete experience, exactly. A systematic literature review was conducted in order to examine: what constitutes a concrete experience and what is the nature of treatment of a concrete experience in experiential learning? The analysis revealed five themes: learners are involved, active, participants; knowledge is situated in place and time; learners are exposed to novel experiences, which involves risk; learning demands inquiry to specific real-world problems; and critical reflection acts as a mediator of meaningful learning. Accordingly, a revision to Kolb’s model is proposed: experiential learning consists of contextually rich concrete experience, critical reflective observation, contextual-specific abstract conceptualization, and pragmatic active experimentation. Further empirical studies are required to test the model proposed. Finally, in Chapter 5 key findings of the studies are summarized, including that the models proposed in Chapters 3 and 4 (Figures 2 and 4, respectively) may be important considerations for further research on self-directed learning.
Wearable activity recognition aims to identify and assess human activities with the help
of computer systems by evaluating signals of sensors which can be attached to the human
body. This provides us with valuable information in several areas: in health care, e.g. fluid
and food intake monitoring; in sports, e.g. training support and monitoring; in entertainment,
e.g. human-computer interface using body movements; in industrial scenarios, e.g.
computer support for detected work tasks. Several challenges exist for wearable activity
recognition: a large number of nonrelevant activities (null class), the evaluation of large
numbers of sensor signals (curse of dimensionality), ambiguity of sensor signals with respect
to the activities, and finally the high variability of human activity in general.
This thesis develops a new activity recognition strategy, called invariants classification,
which addresses these challenges, especially the variability in human activities. The
core idea is that often even highly variable actions include short, more or less invariant
sub-actions which are due to hard physical constraints. If someone opens a door, the
movement of the hand to the door handle is not fixed. However, the door handle has to
be pushed to open the door. The invariants classification algorithm is structured in four
phases: segmentation, invariant identification, classification, and spotting. The segmentation
divides the continuous sensor data stream into meaningful parts, which are related
to sub-activities. Our segmentation strategy uses the zero crossings of the central difference
quotient of the sensor signals as segment borders. The invariant identification finds
the invariant sub-activities by means of clustering and a selection strategy dependent on
certain features. The classification identifies the segments of a specific activity class, using
models generated from the invariant sub-activities. The models include the invariant
sub-activity signal and features calculated on sensor signals related to the sub-activity. In
the spotting, the classified segments are used to find the entire activity class instances in
the continuous sensor data stream. For this purpose, we use the position of the invariant
sub-activity in the related activity class instance for the estimation of the borders of the
activity instances.
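As a rough illustration of the segmentation step, the sketch below (assumptions: per-channel 1D processing of a NumPy array; not the exact thesis code) marks segment borders at the zero crossings of the central difference quotient:

```python
import numpy as np

def segment_borders(signal):
    """Sketch: return candidate segment borders of a 1D sensor channel at
    the zero crossings of its central difference quotient, i.e. where the
    estimated slope changes sign."""
    # central difference quotient: (x[i+1] - x[i-1]) / 2
    d = (signal[2:] - signal[:-2]) / 2.0
    sign_change = np.where(np.diff(np.sign(d)) != 0)[0]
    return sign_change + 1  # approximate indices in the original signal

# usage: borders = segment_borders(accelerometer_channel)
# consecutive borders delimit candidate sub-activity segments
```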
In this thesis, we show that our new activity recognition strategy, built on invariant
sub-activities, is beneficial. We tested it on three human activity datasets with wearable
inertial measurement units (IMUs). Compared to previous publications on the same
datasets, we achieved improved activity recognition for several classes, some by a
large margin. Our segmentation is a sensible method to separate the sensor data in
relation to the underlying activities. Relying on sub-activities makes us independent of
imprecise labels on the training data. After the identification of invariant sub-activities,
we calculate a value called cluster precision for each sensor signal and each class activity.
This tells us which classes can be easily classified and which sensor channels support
the classification best. Finally, in the training for each activity class, our algorithm selects
suitable signal channels with invariant sub-activities at different points in time and
with different lengths. This makes our strategy a multi-dimensional asynchronous motif
detection with variable motif length.
Function of two redox sensing kinases from the methanogenic archaeon Methanosarcina acetivorans
(2019)
MsmS is a heme-based redox sensor kinase in Methanosarcina acetivorans consisting of alternating PAS and GAF domains connected to a C-terminal kinase domain. In addition to MsmS, M. acetivorans possesses a second kinase, MA0863, with high sequence similarity. Interestingly, MA0863 possesses an amber codon in its second GAF domain, encoding the amino acid pyrrolysine. Thus far, no function of this residue has been resolved. In order to examine the heme iron coordination in both proteins, an improved method for the production of heme proteins was established using the Escherichia coli strain Nissle 1917. This method enables the complete reconstitution of a recombinant hemoprotein during protein production, thereby resulting in a native heme coordination. Analysis of the full-length MsmS and MA0863 confirmed a covalently bound heme cofactor, which is connected to one conserved cysteine residue in each protein. In order to identify the coordinating amino acid residues of the heme iron, UV/vis spectra of different variants were measured. These studies revealed His702 in MsmS and the corresponding His666 in MA0863 as the proximal heme ligands. MsmS has previously been described as a heme-based redox sensor. In order to examine whether the same is true for MA0863, redox-dependent kinase assays were performed. MA0863 indeed displays redox-dependent autophosphorylation activity, which is independent of heme ligands and only observed under oxidizing conditions. Autophosphorylation was thus shown to be independent of the heme cofactor and to rely instead on thiol oxidation. Therefore, MA0863 was renamed RdmS (redox-dependent methyltransferase-associated sensor). In order to identify the phosphorylation site of RdmS, thin layer chromatography was performed, identifying a tyrosine as the putative phosphorylation site. This observation is in agreement with the lack of the so-called H-box found in typical histidine kinases. Due to their genomic localization, MsmS and RdmS were postulated to form two-component systems (TCS) with the vicinally encoded regulator proteins MsrG and MsrF. Therefore, protein-protein interaction studies using a bacterial adenylate cyclase two-hybrid system were performed, suggesting an interaction of RdmS and MsmS with the three regulators MsrG/F/C. Due to these multiple interactions, these signal transduction pathways should rather be considered multicomponent systems instead of two-component systems.
Ranking lists are an essential methodology to succinctly summarize outstanding items, computed over database tables or crowdsourced on dedicated websites. In this thesis, we propose the usage of automatically generated, entity-centric rankings to discover insights in data. We present PALEO, a framework for data exploration through reverse engineering top-k database queries; that is, given a database and a sample top-k input list, our approach aims at determining an SQL query that returns results similar to the provided input when executed over the database. The core problem consists of finding selection predicates that return the given items, determining the correct ranking criteria, and evaluating the most promising candidate queries first. PALEO operates on subsets of the base data, uses data samples, histograms, and descriptive statistics, and further proposes models that assess the suitability of candidate queries, which helps to limit false positives. Furthermore, this thesis presents COMPETE, a novel approach that models and computes dominance over user-provided input entities, given a database of top-k rankings. The resulting entities are found superior or inferior with a tunable degree of dominance over the input set---a very intuitive, yet insightful way to explore pros and cons of entities of interest. Several notions of dominance are defined which differ in computational complexity and strictness of the dominance concept, yet are interdependent through containment relations. COMPETE is able to pick the most promising approach to satisfy a user request at minimal runtime latency, using a probabilistic model that estimates the result sizes. The individual flavors of dominance are cast into a stack of algorithms over inverted indices and auxiliary structures, enabling pruning techniques to avoid significant data access over large datasets of rankings.
Wine and alcoholic fermentations are complex and fascinating ecosystems. Wine aroma is shaped by the wine’s chemical composition, in which both microbes and grape constituents play crucial roles. Activities of the microbial community impact the sensory properties of the final product; therefore, the characterisation of microbial diversity is essential for understanding and predicting the sensory properties of wine. Characterisation has been challenging with traditional approaches, where microbes are isolated and therefore analyzed outside of their natural environment. This causes a bias in the observed microbial community composition. In addition, true community interactions cannot be studied using isolates. Furthermore, the multiplex ties between wine chemical and sensory compositions remain elusive due to their multivariate and nonlinear nature. Therefore, the sensorial outcome arising from different microbial communities has remained inconclusive.
In this thesis, microbial diversity during Riesling wine fermentations is investigated with the aim of understanding the roles of microbial communities during fermentations and their links to sensory properties. With the advancement of high-throughput ‘omics methods, such as next-generation sequencing (NGS) technologies, it is now possible to study microbial communities and their functions without isolation by culturing. This developing field and its potential for the wine community are reviewed in Chapter 1. The standardisation of methods remains challenging in the field. DNA extraction is a key step in capturing the microbial diversity in samples for generating NGS data; therefore, DNA extraction methods are evaluated in Chapter 2. In Chapter 3, machine learning is utilized to guide the mining of raw data generated by untargeted GC-MS analysis. This step is crucial in order to take full advantage of the large scope of data generated by ‘omics methods. These chapters lay a solid foundation for Chapters 4 and 5, where microbial community structures and their outputs, the chemical and sensory compositions, are studied using approaches and tools based on multiple ‘omics methods.
The results of this thesis show, first, that by using novel statistical approaches it is possible to extract meaningful information from heterogeneous biological, chemical and sensorial data. Secondly, the results suggest that the variation in wine aroma might be related to microbial interactions taking place not only inside a single community, but also between communities, such as vineyard and winery communities. Therefore, the true sensory expression of terroir might be masked by the interaction between two microbial communities, although more work is needed to uncover this potential relationship. Such potential interaction mechanisms were uncovered between non-Saccharomyces yeasts and bacteria in this work, and unexpected novel bacterial growth was observed during alcoholic fermentation. This suggests new layers in the understanding of wine fermentations. In the future, multi-omic approaches could be applied to identify biological pathways leading to specific wine aromas as well as to investigate the effects of specific winemaking conditions. These results are relevant not just for the wine industry, but also for other industries where complex microbial networks are important. As such, the approaches presented in this thesis might find wide use in the food industry.
Economics of Downside Risk
(2019)
Ever since the establishment of portfolio selection theory by Markowitz (1952), the use of the standard deviation as a measure of risk has been heavily criticized. The aim of this thesis is to refine classical portfolio selection and asset pricing theory by using a downside deviation risk measure. It is defined as the below-target semideviation and referred to as downside risk.
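With a fixed target \( \tau \), one common formalization of the below-target semideviation (the exact normalization used in the thesis may differ) is
\[
d_\tau(X) \;=\; \sqrt{\,\mathbb{E}\big[ \big( (\tau - X)^{+} \big)^{2} \big]\,},
\]
and a downside efficient portfolio with payoff \( X_\varphi \) then solves \( \max_\varphi\, \mathbb{E}[X_\varphi] \) subject to \( d_\tau(X_\varphi) \le c \) for a prescribed bound \( c \).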
Downside efficient portfolios maximize expected payoff given a prescribed upper bound for downside risk and are thus analogs of mean-variance efficient portfolios in the sense of Markowitz. The present thesis provides an alternative proof of the existence of downside efficient portfolios and identifies a sufficient criterion for their uniqueness. A specific representation of their form brings the structural similarity to mean-variance efficient portfolios to light. Finally, a separation theorem for the existence and uniqueness of portfolios that maximize the trade-off between downside risk and return is established.
The notion of a downside risk asset market equilibrium (DRAME) in an asset market with finitely many investors is introduced. This thesis addresses the existence and uniqueness problem for such equilibria and specifies a DRAME pricing formula. In contrast to prices obtained from the mean-variance CAPM pricing formula, DRAME prices are arbitrage-free and strictly positive.
The final part of this thesis addresses practical issues. An algorithm that allows for an effective computation of downside efficient portfolios from simulated or historical financial data is outlined. In a simulation study, it is revealed in which scenarios downside efficient portfolios
outperform mean-variance efficient portfolios.
The simulation of physical phenomena involving the dynamic behavior of fluids and gases
has numerous applications in various fields of science and engineering. Of particular interest
is the material transport behavior, the tendency of a flow field to displace parts of the
medium. Therefore, many visualization techniques rely on particle trajectories.
Lagrangian Flow Field Representation. In typical Eulerian settings, trajectories are
computed from the simulation output using numerical integration schemes. Accuracy concerns
arise because, due to limitations of storage space and bandwidth, often only a fraction
of the computed simulation time steps are available. Prior work has shown empirically that
a Lagrangian, trajectory-based representation can improve accuracy [Agr+14]. Determining
the parameters of such a representation in advance is difficult; a relationship between the
temporal and spatial resolution and the accuracy of resulting trajectories needs to be established.
We provide an error measure that yields upper bounds for the error of individual trajectories.
We show how areas at risk for high errors can be identified, thereby making it possible to
prioritize areas in time and space to allocate scarce storage resources.
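For context, trajectories in such an Eulerian setting are typically obtained by numerically integrating the velocity field; a minimal sketch (classical RK4, with an assumed callable velocity(x, t) that interpolates the simulation output in space and time) looks like this:

```python
import numpy as np

def trace_particle(velocity, x0, t0, t1, dt):
    """Sketch: integrate one particle trajectory through a time-dependent
    velocity field with the classical fourth-order Runge-Kutta scheme."""
    x, t = np.asarray(x0, dtype=float), t0
    path = [x.copy()]
    while t < t1:
        h = min(dt, t1 - t)
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        path.append(x.copy())
    return np.array(path)
```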
Comparative Visual Analysis of Flow Field Ensembles. Independent of the representation,
errors of the simulation itself are often caused by inaccurate initial conditions,
limitations of the chosen simulation model, and numerical errors. To gain a better understanding
of the possible outcomes, multiple simulation runs can be calculated, resulting in
sets of simulation output referred to as ensembles. Of particular interest when studying the
material transport behavior of ensembles is the identification of areas where the simulation
runs agree or disagree. We introduce and evaluate an interactive method that enables application
scientists to reliably identify and examine regions of agreement and disagreement,
while taking into account the local transport behavior within individual simulation runs.
Particle-Based Representation and Visualization of Uncertain Flow Data Sets. Unlike
simulation ensembles, where uncertainty of the solution appears in the form of different
simulation runs, moment-based Eulerian multi-phase fluid simulations are probabilistic in
nature. These simulations, used in process engineering to simulate the behavior of bubbles in
liquid media, are aimed toward reducing the need for real-world experiments. The locations
of individual bubbles are not modeled explicitly, but stochastically through the properties of
locally defined bubble populations. Comparisons between simulation results and physical
experiments are difficult. We describe and analyze an approach that generates representative
sets of bubbles for moment-based simulation data. Using our approach, application scientists
can directly, visually compare simulation results and physical experiments.
Novel image processing techniques have been in development for decades, but most
of these techniques are barely used in real world applications. This results in a gap
between image processing research and real-world applications; this thesis aims to
close this gap. In an initial study, the quantification, propagation, and communication
of uncertainty were determined to be key features in gaining acceptance for
new image processing techniques in applications.
This thesis presents a holistic approach based on a novel image processing pipeline,
capable of quantifying, propagating, and communicating image uncertainty. This
work provides an improved image data transformation paradigm, extending image
data using a flexible, high-dimensional uncertainty model. Based on this, a completely
redesigned image processing pipeline is presented. In this pipeline, each
step respects and preserves the underlying image uncertainty, allowing image uncertainty
quantification, image pre-processing, image segmentation, and geometry
extraction. This is communicated by utilizing meaningful visualization methodologies
throughout each computational step.
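One small building block of such an uncertainty-preserving pipeline can be sketched as follows (under the simplifying assumptions of independent per-pixel errors and a purely linear filter; this is not the pipeline's actual uncertainty model): the image and its per-pixel variance are filtered together, using squared weights for the variance.

```python
import numpy as np
from scipy import ndimage

def blur_with_uncertainty(image, variance, kernel):
    """Sketch: propagate a per-pixel variance image through a linear
    smoothing filter alongside the image values themselves.
    Var(sum w_i X_i) = sum w_i^2 Var(X_i) for independent X_i."""
    kernel = np.asarray(kernel, dtype=float)
    smoothed = ndimage.convolve(image, kernel, mode="reflect")
    smoothed_var = ndimage.convolve(variance, kernel**2, mode="reflect")
    return smoothed, smoothed_var

# usage (hypothetical): 3x3 box filter
# img_s, var_s = blur_with_uncertainty(img, var, np.full((3, 3), 1 / 9))
```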
The presented methods are examined qualitatively by comparing them to the state
of the art, in addition to user evaluations in different domains. To show the applicability
of the presented approach to real-world scenarios, this thesis demonstrates
domain-specific problems and the successful implementation of the presented techniques
in these domains.
The focus of this work is to provide and evaluate a novel method for multifield topology-based analysis and visualization. Through this concept, called Pareto sets, one is able to identify critical regions in a multifield with arbitrarily many individual fields. It uses ideas found in graph optimization to find common behavior and areas of divergence between multiple optimization objectives. The connections between the latter areas can be reduced to a graph structure, allowing for an abstract visualization of the multifield to support data exploration and understanding.
The research question answered in this dissertation concerns the general capability and expandability of the Pareto set concept in the context of visualization and application, as well as the study of its relations, drawbacks and advantages with respect to other topology-based approaches. This question is answered in several steps, including consideration of and comparison with related work, a thorough introduction of the Pareto set itself as well as a framework for efficient implementation, and an attached discussion regarding limitations of the concept and their implications for run time, suitable data, and possible improvements.
Furthermore, this work considers possible simplification approaches like integrated single-field simplification methods but also using common structures identified through the Pareto set concept to smooth all individual fields at once. These considerations are especially important for real-world scenarios to visualize highly complex data by removing small local structures without destroying information about larger, global trends.
To further emphasize possible improvements and expandability of the Pareto set concept, the thesis studies a variety of different real world applications. For each scenario, this work shows how the definition and visualization of the Pareto set is used and improved for data exploration and analysis based on the scenarios.
In summary, this dissertation provides a complete and sound summary of the Pareto set concept as ground work for future application of multifield data analysis. The possible scenarios include those presented in the application section, but are found in a wide range of research and industrial areas relying on uncertainty analysis, time-varying data, and ensembles of data sets in general.
The usage of sensors in modern technical systems and consumer products is increasing rapidly. This advancement can be characterized by two major factors, namely the mass introduction of consumer-oriented sensing devices to the market and the sheer amount of sensor data being generated. These characteristics raise challenges regarding both the reliability of consumer sensing devices and the management and utilization of the generated sensor data. This thesis addresses these challenges through two main contributions. It presents a novel framework that leverages sentiment analysis techniques in order to assess the quality of consumer sensing devices. It also couples semantic technologies with big data technologies to present a new optimized approach for the realization and management of semantic sensor data, hence providing a robust means of integration, analysis, and reuse of the generated data. The thesis also presents several applications that show the potential of the contributions in real-life scenarios.
Due to the broad range, growing feature set and fast release pace of new sensor-based products, evaluating these products is very challenging as standard product testing is not practical. As an alternative, an end-to-end aspect-based sentiment summarizer pipeline for the evaluation of consumer sensing devices is presented. The pipeline uses product reviews to extract the sentiment at the aspect level and includes several components, namely a product name extractor, an aspect extractor and a lexicon-based sentiment extractor which handles multiple sentiment analysis challenges such as sentiment shifters, negations, and comparative sentences, among others. The proposed summarizer's components generally outperform the state-of-the-art approaches. As a use case, features of the market-leading fitness trackers are evaluated and a dynamic visual summarizer is presented to display the evaluation results and to provide personalized product recommendations for potential customers.
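The core idea of a lexicon-based sentiment extractor with simple negation handling can be sketched as follows (the toy lexicon, window size and tokenization are illustrative assumptions, not the components used in the thesis):

```python
# Illustrative sketch of lexicon-based, aspect-level sentiment scoring.
NEGATIONS = {"not", "no", "never", "n't", "hardly"}
LEXICON = {"accurate": 1.0, "comfortable": 1.0, "great": 1.0,
           "inaccurate": -1.0, "broken": -1.0, "poor": -1.0}

def aspect_sentiment(tokens, aspect, window=4):
    """Score the sentiment expressed near an aspect term (e.g. 'sensor'):
    sum lexicon scores of nearby words, flipping the sign if a negation
    word occurs directly before the opinion word."""
    score = 0.0
    positions = [i for i, t in enumerate(tokens) if t == aspect]
    for p in positions:
        lo, hi = max(0, p - window), min(len(tokens), p + window + 1)
        for i in range(lo, hi):
            s = LEXICON.get(tokens[i], 0.0)
            if s and i > 0 and tokens[i - 1] in NEGATIONS:
                s = -s  # negation shifter flips polarity
            score += s
    return score

# aspect_sentiment("the heart rate sensor is not accurate".split(), "sensor")
# -> -1.0
```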
The increased usage of sensing devices in the consumer market is accompanied by an increased deployment of sensors in various other fields such as industry, agriculture, and energy production systems. This necessitates efficient and scalable methods for storing and processing sensor data. Coupling big data technologies with semantic techniques not only helps to achieve the desired storage and processing goals, but also facilitates data integration, data analysis, and the utilization of data in unforeseen future applications through preserving the data generation context. This thesis proposes an efficient and scalable solution for the semantification, storage and processing of raw sensor data through ontological modelling of sensor data and a novel encoding scheme that harnesses the split between the statements of the conceptual model of an ontology (TBox) and the individual facts (ABox), along with the in-memory processing capabilities of modern big data systems. A sample use case is further introduced in which a smartphone is deployed in a transportation bus to collect various sensor data, which is then utilized to detect street anomalies.
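The general idea of such a TBox/ABox-aware encoding can be sketched as follows (the ID ranges, class structure and example vocabulary are illustrative assumptions; the thesis's actual scheme is not reproduced here):

```python
# Hypothetical sketch of dictionary encoding with a TBox/ABox split:
# schema terms and instance terms receive IDs from disjoint ranges so that
# the two parts can be stored and processed separately in memory.
TBOX_BASE, ABOX_BASE = 0, 1_000_000

class TermDictionary:
    def __init__(self):
        self.tbox, self.abox = {}, {}

    def encode(self, term, is_schema):
        table, base = (self.tbox, TBOX_BASE) if is_schema else (self.abox, ABOX_BASE)
        if term not in table:
            table[term] = base + len(table)
        return table[term]

d = TermDictionary()
triple = ("sensor42", "rdf:type", "sosa:Sensor")
encoded = (d.encode(triple[0], False),   # ABox individual
           d.encode(triple[1], True),    # vocabulary term
           d.encode(triple[2], True))    # ontology class (TBox)
# the compact integer triples can then be joined in memory, e.g. in Spark
```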
In addition to the aforementioned contributions, and to highlight potential use cases of publicly available sensor data, a recommender system is developed using running route data and proximity-based retrieval to provide personalized suggestions for new routes, considering the runner's performance and preferences regarding the visual character and natural surroundings of routes.
This thesis aims at enhancing the integration of sensing devices in daily life applications through facilitating the public acquisition of consumer sensing devices. It also aims at achieving better integration and processing of sensor data in order to enable new potential usage scenarios of the raw generated data.
Linking protistan community shifts along salinity gradients with cellular haloadaptation strategies
(2019)
Salinity is one of the most structuring environmental factors for microeukaryotic communities. Using eDNA barcoding, I detected significant shifts in microeukaryotic community compositions occurring at distinct salinities between brackish and marine conditions in the Baltic Sea. I furthermore conducted a metadata analysis including my own and other marine and hypersaline community sequence data to confirm the existence of salinity-related transition boundaries and significant changes in alpha diversity patterns along a brackish to hypersaline gradient. One hypothesis for the formation of salinity-dependent transition boundaries between brackish and hypersaline conditions is the use of different cellular haloadaptation strategies. To test this hypothesis, I conducted metatranscriptome analyses of microeukaryotic communities along a pronounced salinity gradient (40 – 380 ‰). Clustering of functional transcripts revealed differences in metabolic properties and metabolic capacities between microeukaryotic communities at specific salinities, corresponding to the transition boundaries already observed in the taxonomic eDNA barcoding approach. Specifically, microeukaryotic communities thriving at mid-hypersaline conditions (≤ 150 ‰) seem to predominantly apply the ‘low-salt – organic-solutes-in’ strategy by accumulating compatible solutes to counteract osmotic stress. Indications were found both for the intracellular synthesis of compatible solutes and for cellular transport systems. In contrast, communities of extreme-hypersaline habitats (≥ 200 ‰) may preferentially use the ‘high-salt-in’ strategy, i.e., the intracellular accumulation of inorganic ions in high concentrations, which is implied by the increased expression of Mg2+, K+ and Cl- transporters and channels.
In order to characterize the ‘low-salt – organic-solutes-in’ strategy applied by protists in more detail, I conducted a time-resolved transcriptome analysis of the heterotrophic ciliate Schmidingerothrix salinarum serving as a model organism. S. salinarum was subjected to a salt-up shock to investigate the intracellular response to osmotic stress via shifts in gene expression. After increasing the external salinity, an increased expression of two-component signal transduction systems and MAPK cascades was observed. In an early reaction, the expression of transport mechanisms for K+, Cl- and Ca2+ increased, which may enhance the capacity of K+, Cl- and Ca2+ in the cytoplasm to compensate for a possibly harmful Na+ influx. Expression of enzymes for the synthesis of possible compatible solutes, starting with glycine betaine, followed by ectoine and later proline, could imply that the inorganic ions K+, Cl- and Ca2+ are gradually replaced by the synthesized compatible solutes. Additionally, expressed transporters for choline (a precursor of glycine betaine) and proline could indicate an intracellular accumulation of compatible solutes to balance the external salinity. During this accumulation, the up-regulated ion export mechanisms may increase the capacity for Na+ expulsion from the cytoplasm, and ion compartmentalization between cell organelles seems to occur.
The results of my PhD project revealed the first evidence at the molecular level for the salinity-dependent use of different haloadaptation strategies in microeukaryotes and significantly extend the existing knowledge about haloadaptation processes in ciliates. The results provide a basis for future research, such as (comparative) transcriptome analyses of ciliates thriving in extreme-hypersaline habitats or experiments like qRT-PCR to validate the transcriptome results.
In this thesis, we consider the problem of processing similarity queries over a dataset of top-k rankings and class-constrained objects. Top-k rankings are the most natural and widely used technique to compress a large amount of information into a concise form. Spearman’s Footrule distance is used to compute the similarity between rankings, considering how well rankings agree on the positions (ranks) of ranked items. This setup allows the application of metric distance-based pruning strategies and, alternatively, enables the use of traditional inverted indices for retrieving rankings that overlap in items. Although both techniques can be applied individually, we hypothesize that blending the two leads to better performance. First, we formulate theoretical bounds over the rankings, based on Spearman's Footrule distance, which are essential for adapting existing inverted-index-based techniques to the setting of top-k rankings. Further, we propose a hybrid indexing strategy, designed for efficiently processing similarity range queries, which incorporates inverted indices and metric space indices, such as M- or BK-trees, resulting in a structure that resembles both indexing methods, with tunable emphasis on one or the other. Moreover, optimizations to the inverted index component are presented for early termination and minimized bookkeeping. As vast amounts of data are being generated on a daily basis, we further present a distributed, highly tunable approach, implemented in Apache Spark, for efficiently processing similarity join queries over top-k rankings. To combine distance-based filtering with inverted indices, the algorithm works in several phases. The partial results are joined for the computation of the final result set. As the last contribution of the thesis, we consider processing k-nearest-neighbor (k-NN) queries over class-constrained objects, with the additional requirement that the result objects are of a specific type. We introduce the MISP index, which first indexes the objects by their (combination of) class belonging, followed by a similarity search sub-index for each subset of objects. The number of such subsets can explode combinatorially; thus, we provide a cost model that analyzes the performance of the MISP index structure under different configurations, with the aim of finding the most efficient one for the dataset being searched.
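For reference, Spearman's Footrule between two top-k rankings can be computed as in the following sketch (assigning the artificial rank k+1 to items missing from one list is one common convention; the thesis may handle non-overlapping items differently):

```python
def footrule_topk(r1, r2):
    """Sketch: Spearman's Footrule between two top-k rankings given as
    lists of item ids ordered by rank (rank 1 first)."""
    k = max(len(r1), len(r2))
    pos1 = {item: i + 1 for i, item in enumerate(r1)}
    pos2 = {item: i + 1 for i, item in enumerate(r2)}
    items = set(pos1) | set(pos2)
    # missing items are placed at the artificial rank k + 1
    return sum(abs(pos1.get(x, k + 1) - pos2.get(x, k + 1)) for x in items)

# footrule_topk(["a", "b", "c"], ["b", "a", "d"])
# -> |1-2| + |2-1| + |3-4| + |4-3| = 4
```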
In this thesis we consider the directional analysis of stationary point processes. We focus on three non-parametric methods based on second-order analysis, which we call the Integral method, the Ellipsoid method, and the Projection method. We present the methods in a general setting and then focus on their application in the 2D and 3D cases of a particular type of anisotropy mechanism called geometric anisotropy. We mainly consider regular point patterns, motivated by our application to real 3D data coming from glaciology. Note that the directional analysis of 3D data is not very prominent in the literature.
We compare the performance of the methods, which depends on their respective parameters, in a simulation study both in 2D and 3D. Based on the results, we give recommendations on how to choose the methods' parameters in practice.
We apply the directional analysis to 3D data from glaciology, which consist of the locations of air bubbles in polar ice cores. The aim of this study is to provide information about the deformation rate in the ice and the corresponding thinning of ice layers at different depths. This information is essential for glaciologists in order to build ice dating models and consequently to give a correct interpretation of the climate information that can be obtained by analyzing ice cores. In this thesis we consider data coming from three different ice cores: the Talos Dome core, the EDML core and the Renland core.
Motivated by the ice application, we study how isotropic and stationary noise influences the directional analysis. In fact, due to the relaxation of the ice after drilling, noise bubbles can form within the ice samples. In this context we take two classification algorithms into consideration, which aim to classify points in a superposition of a regular isotropic and stationary point process with Poisson noise.
We introduce two methods to visualize anisotropy, which are particularly useful in 3D and apply them to the ice data. Finally, we consider the problem of testing anisotropy and the limiting behavior of the geometric anisotropy transform.
While the design step should be free from computation-related constraints and operations due to its artistic aspect, the modeling phase has to prepare the model for the later stages of the pipeline.
This dissertation is concerned with the design and implementation of a framework for local remeshing and optimization. Based on the experience gathered, a full study about mesh quality criteria is also part of this work.
The contributions can be highlighted as: (1) a local meshing technique based on a completely novel approach, constrained to preserving the mesh of areas that are not of interest. With this concept, designers can work on the design details of specific regions of the model without introducing more polygons elsewhere; (2) a tool capable of recovering the shape of a refined area on its decimated version, enabling details on optimized meshes of detailed models; (3) the integration of novel techniques into a single framework for meshing and smoothing which is constrained to the surface structure; (4) the development of a mesh quality criteria priority structure, able to classify and prioritize criteria according to the application of the mesh.
Although efficient meshing techniques have been proposed over the years, most of them lack the ability to remesh smaller regions of the base mesh while preserving the mesh quality and density of the outer areas.
Considering this limitation, this dissertation seeks answers to the following research questions:
1. Given that mesh quality is relative to the application it is intended for, is it possible to design a general mesh evaluation plan?
2. How to prioritize specific mesh criteria over others?
3. Given an optimized mesh and its original design, how to improve the representation of single regions of the first, without degrading the mesh quality elsewhere?
Four main achievements came from the respective answers:
1. The Application Driven Mesh Quality Criteria Structure: Due to high variation in mesh standards because of various computer aided operations performed for different applications, e.g. animation or stress simulation, a structure for better visualization of mesh quality criteria is proposed. The criteria can be used to guide the mesh optimization, making the task consistent and reliable. This dissertation also proposes a methodology to optimize the criteria values, which is adaptable to the needs of a specific application.
2. Curvature Driven Meshing Algorithm: A novel local meshing technique that works on a desired area of the mesh while preserving its boundaries as well as the rest of the topology. It causes only a slow growth in the overall number of polygons by making only small regions denser. The method can also be used to recover the details of a reference mesh on its decimated version while refining it. Moreover, it employs a fast and easy-to-implement geometric approach that represents surface features as simple circles, which are used to guide the meshing. It also generates quad-dominant meshes, with a triangle count directly dependent on the size of the boundary.
3. Curvature-based Method for Anisotropic Mesh Smoothing: A geometric-based method is extended to 3D space to be able to produce anisotropic elements where needed. It is made possible by mapping the original space to another which embeds the surface curvature. This methodology is used to enhance the smoothing algorithm by making the nearly regularized elements follow the surface features, preserving the original design. The mesh optimization method also preserves mesh topology, while resizing elements according to the local mesh resolution, effectively enhancing the design aspects intended.
4. Framework for Local Restructure of Meshed Surfaces: The combination of both methods creates a complete tool for recovering surface details through mesh refinement and curvature aware mesh smoothing.
In computer graphics, realistic rendering of virtual scenes is a computationally complex problem. State-of-the-art rendering technology must become more scalable to
meet the performance requirements for demanding real-time applications.
This dissertation is concerned with core algorithms for rendering, focusing on the
ray tracing method in particular, to support and saturate recent massively parallel computer systems, i.e., to distribute the complex computations very efficiently
among a large number of processing elements. More specifically, the three targeted
main contributions are:
1. Collaboration framework for large-scale distributed memory computers
The purpose of the collaboration framework is to enable scalable rendering
in real-time on a distributed memory computer. As an infrastructure layer it
manages the explicit communication within a network of distributed memory
nodes transparently for the rendering application. The research is focused on
designing a communication protocol resilient against delays and negligible in
overhead, relying exclusively on one-sided and asynchronous data transfers.
The hypothesis is that a loosely coupled system like this is able to scale linearly
with the number of nodes, which is tested by directly measuring all possible
communication-induced delays as well as the overall rendering throughput.
2. Ray tracing algorithms designed for vector processing
Vector processors are to be efficiently utilized for improved ray tracing performance. This requires the basic, scalar traversal algorithm to be reformulated
in order to expose a high degree of fine-grained data parallelism. Two approaches are investigated: traversing multiple rays simultaneously, and performing
multiple traversal steps at once (a small data-parallel sketch follows this list). Efficiently establishing coherence in a group
of rays as well as avoiding sorting of the nodes in a multi-traversal step are the
defining research goals.
3. Multi-threaded schedule and memory management for the ray tracing acceleration structure
Construction times of high-quality acceleration structures are to be reduced by
improvements to multi-threaded scalability and utilization of vector processors. Research is directed at eliminating the following scalability bottlenecks:
dynamic memory growth caused by the primitive splits required for
high-quality structures, and top-level hierarchy construction where simple
task parallelism is not readily available. Additional research addresses how
to expose scatter/gather-free data-parallelism for efficient vector processing.
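To illustrate the data-parallel formulation targeted in the second contribution, the following sketch (NumPy as a stand-in for actual vector instructions; precomputed reciprocal ray directions assumed) performs one traversal step, a slab test against a bounding box, for many rays at once:

```python
import numpy as np

def rays_vs_aabb(origins, inv_dirs, box_min, box_max):
    """Sketch: test N rays against one axis-aligned bounding box in a single
    vectorized step. origins, inv_dirs: arrays of shape (N, 3), where
    inv_dirs holds precomputed reciprocal ray directions."""
    t0 = (box_min - origins) * inv_dirs
    t1 = (box_max - origins) * inv_dirs
    t_near = np.minimum(t0, t1).max(axis=1)
    t_far = np.maximum(t0, t1).min(axis=1)
    return t_far >= np.maximum(t_near, 0.0)  # boolean hit mask per ray

# On SIMD/vector hardware, the same formulation maps one ray (or one
# traversal step) to each vector lane.
```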
Together, these contributions form a scalable, high-performance basis for real-time,
ray tracing-based rendering, and a prototype path tracing application implemented
on top of this basis serves as a demonstration.
The key insight driving this dissertation is that the computational power necessary
for realistic light transport for real-time rendering applications demands massively
parallel computers, which in turn require highly scalable algorithms. Therefore this
dissertation provides important research along the path towards virtual reality.
Carotenoids are organic lipophilic tetraterpenes ubiquitously present in nature and found across the three domains of life (Archaea, Bacteria and Eukaryotes). Their structure is characterized by an extensive conjugated double-bond system, which serves as a light-absorbing chromophore, hence determining their colour, and enables carotenoids to absorb energy from other molecules and to act as antioxidant agents. Humans obtain carotenoids mainly via the consumption of fruits and vegetables, and to a smaller extent from other food sources such as fish and eggs. The concentration of carotenoids in human plasma and tissues has been positively associated with a lower incidence of several chronic diseases including cancer, diabetes, macular degeneration and cardiovascular conditions, likely due to their antioxidant properties. However, an important aspect of some carotenoids, namely β- and α-carotene and β-cryptoxanthin, for human health and development is their potential to be converted by the body into vitamin A.
Yet, the bioavailability of carotenoids is relatively low (< 30%) and depends, among other factors, on dietary factors such as the amount and type of dietary lipids and the presence of dietary fibres. One dietary factor that has been found to negatively impact carotenoid bioaccessibility and cellular uptake in vitro is a high concentration of divalent cations during simulated gastro-intestinal digestion. Nevertheless, the mechanism of action of divalent cations remains unclear. The goal of this thesis was to better understand how divalent cations act during digestion and modulate carotenoid bioavailability. In vitro trials of simulated gastro-intestinal digestion and cellular uptake were run to investigate how varying concentrations of calcium, magnesium and zinc affected the bioaccessibility of both pure carotenoids and carotenoids from food matrices. In order to validate or refute the results obtained in vitro, a randomized, double-blinded, placebo-controlled cross-over postprandial trial (24 male participants) was carried out, testing the effect of three supplementary calcium doses (0 mg, 500 mg and 1000 mg) on the bioavailability of carotenoids from a spinach-based meal. The in vitro trials showed that the addition of divalent cations significantly decreased the bioaccessibility of both pure carotenoids (P < 0.001) and those from food matrices (P < 0.01). This effect was dependent on the type of mineral and its concentration. The strongest effects were seen for increasing concentrations of calcium, followed by magnesium and zinc. The addition of divalent cations also altered the physico-chemical properties, i.e. viscosity and surface tension, of the digesta. However, the extent of this effect varied according to the type of matrix. The effects on bioaccessibility and physico-chemical properties were accompanied by variations of the zeta-potential of the particles in solution. Taken together, results from the in vitro trials strongly suggested that divalent cations were able to bind bile salts and other surfactant agents, affecting their solubility. The observed i) decrease in macroviscosity, ii) increase in surface tension, and iii) reduction of the zeta-potential of the digesta confirmed the removal of surfactant agents from the system, most likely due to precipitation as a result of the lower solubility of the mineral-surfactant complexes. As such, the micellarization of carotenoids was hindered, explaining their reduced bioaccessibility. As for the human trial, results showed no significant influence of supplementation with either 500 or 1000 mg of calcium (in the form of carbonate) on the bioavailability of carotenoids from the spinach-based meal, as measured by the area under the curve of carotenoid concentrations in the plasma triacylglycerol-rich fraction. This suggests that the in vitro results do not carry over to such an in vivo scenario, which may be explained by the initially low bioaccessibility of spinach carotenoids and the dissolution kinetics of the calcium pills. Further investigations are necessary to understand how divalent cations act during in vivo digestion and potentially interact with lipophilic nutrients and food constituents.
Indentation into a metastable austenite may induce the phase transformation to the bcc phase. We study this process using
atomistic simulation. At temperatures low compared to the equilibrium transformation temperature, the indentation triggers the transformation of the entire crystallite: once started, the transformation rapidly proceeds throughout the simulation crystallite.
The microstructure of the transformed sample is characterized by twinned grains. At higher temperatures, around the equilibrium
transformation temperature, the crystal transforms only locally, in the vicinity of the indent pit. In addition, the indenter
produces dislocation plasticity in the remaining austenite. At intermediate temperatures, the crystal continuously transforms
throughout the indentation process.
Visualization is vital to the scientific discovery process.
An interactive high-fidelity rendering provides accelerated insight into complex structures, models and relationships.
However, the efficient mapping of visualization tasks to high performance architectures is often difficult, being subject to a challenging mixture of hardware and software architectural complexities in combination with domain-specific hurdles.
These difficulties are often exacerbated on heterogeneous architectures.
In this thesis, a variety of ray casting-based techniques are developed and investigated with respect to a more efficient usage of heterogeneous HPC systems for distributed visualization, addressing challenges in mesh-free rendering, in-situ compression, task-based workload formulation, and remote visualization at large scale.
A novel direct raytracing scheme for on-the-fly free surface reconstruction of particle-based simulations using an extended anisotropic kernel model is investigated on different state-of-the-art cluster setups.
The versatile system renders up to 170 million particles on 32 distributed compute nodes at close to interactive frame rates at 4K resolution with ambient occlusion.
To address the widening gap between high computational throughput and prohibitively slow I/O subsystems, in situ topological contour tree analysis is combined with a compact image-based data representation to provide an effective and easy-to-control trade-off between storage overhead and visualization fidelity.
Experiments show significant reductions in storage requirements, while preserving flexibility for exploration and analysis.
Driven by an increasingly heterogeneous system landscape, a flexible distributed direct volume rendering and hybrid compositing framework is presented.
Based on a task-based dynamic runtime environment, it enables adaptable performance-oriented deployment on various platform configurations.
Comprehensive benchmarks with respect to task granularity and scaling are conducted to verify the characteristics and potential of the novel task-based system design.
A core challenge of HPC visualization is the physical separation of visualization resources and end-users.
Using more tiles than previously thought reasonable, a distributed, low-latency multi-tile streaming system is demonstrated, being able to sustain a stable 80 Hz when streaming up to 256 synchronized 3840x2160 tiles and achieve 365 Hz at 3840x2160 for sort-first compositing over the internet, thereby enabling lightweight visualization clients and leaving all the heavy lifting to the remote supercomputer.
Topology-Based Characterization and Visual Analysis of Feature Evolution in Large-Scale Simulations
(2019)
This manuscript presents a topology-based analysis and visualization framework that enables the effective exploration of feature evolution in large-scale simulations. Such simulations pose additional challenges to the already complex task of feature tracking and visualization, since the vast number of features and the size of the simulation data make it infeasible to naively identify, track, analyze, render, store, and interact with data. The presented methodology addresses these issues via three core contributions. First, the manuscript defines a novel topological abstraction, called the Nested Tracking Graph (NTG), that records the temporal evolution of features that exhibit a nesting hierarchy, such as superlevel set components for multiple levels, or filtered features across multiple thresholds. In contrast to common tracking graphs that are only capable of describing feature evolution at one hierarchy level, NTGs effectively summarize their evolution across all hierarchy levels in one compact visualization. The second core contribution is a view-approximation oriented image database generation approach (VOIDGA) that stores, at simulation runtime, a reduced set of feature images. Instead of storing the features themselves---which is often infeasible due to bandwidth constraints---the images of these databases can be used to approximate the depicted features from any view angle within an acceptable visual error, which requires far less disk space and only introduces a negligible overhead. The final core contribution combines these approaches into a methodology that stores in situ the least amount of information necessary to support flexible post hoc analysis utilizing NTGs and view approximation techniques.
Many loads acting on a vehicle depend on the condition and quality of the roads traveled as well as on the driving style of the motorist. Thus, during vehicle development, good knowledge of these operating conditions is advantageous.
For that purpose, usage models for different kinds of vehicles are considered. Based
on these mathematical descriptions, representative routes for multiple user
types can be simulated in a predefined geographical region. The obtained individual
driving schedules consist of coordinates of starting and target points and can
thus be routed on the true road network. Additionally, different factors, like the
topography, can be evaluated along the track.
Available statistics from travel surveys are integrated to guarantee reasonable trip lengths. Population figures are used to estimate the number of vehicles in
contained administrative units. The creation of thousands of those geo-referenced
trips then allows the determination of realistic measures of the durability loads.
Private as well as commercial use of vehicles is modeled. For the former, commuters are modeled as the main user group, conducting daily drives to work as well as additional leisure and shopping trips during the workweek. For the latter, taxis are considered as an example of passenger car usage. The model of light-duty commercial vehicles is split into two types of driving patterns, stars and tours, and into the common traffic classes of long-distance, local, and city traffic.
Algorithms to simulate reasonable target points based on geographical and statistical data are presented in detail. Examples of the evaluation of routes based on topographical factors and speed profiles, comparing the influence of the driving style, are included.
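As an illustration of the kind of statistical building block such target-point simulations can rely on, the following C sketch draws a target administrative unit with probability proportional to its population (simple inverse-CDF sampling). The data structure and the weighting are assumptions made for this sketch, not the algorithms developed in this work.

#include <stdlib.h>

typedef struct { double lat, lon; double population; } AdminUnit;

/* Returns the index of the administrative unit chosen with probability
 * proportional to its population; u01 is a uniform random number in [0,1). */
size_t sample_target_unit(const AdminUnit *units, size_t n, double u01)
{
    double total = 0.0;
    for (size_t i = 0; i < n; ++i)
        total += units[i].population;
    double threshold = u01 * total, acc = 0.0;
    for (size_t i = 0; i < n; ++i) {
        acc += units[i].population;
        if (threshold < acc)
            return i;
    }
    return n - 1;   /* guard against rounding at the upper end */
}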
Under the notion of Cyber-Physical Systems an increasingly important research area has
evolved with the aim of improving the connectivity and interoperability of previously
separate system functions. Today, the advanced networking and processing capabilities
of embedded systems make it possible to establish strongly distributed, heterogeneous
systems of systems. In such configurations, the system boundary does not necessarily
end with the hardware, but can also take into account the wider context such as people
and environmental factors. In addition to being open and adaptive to other networked
systems at integration time, such systems need to be able to adapt themselves in accordance
with dynamic changes in their application environments. Considering that many
of the potential application domains are inherently safety-critical, it has to be ensured
that the necessary modifications in the individual system behavior are safe. However,
currently available state-of-the-practice and state-of-the-art approaches for safety assurance
and certification are not applicable to this context.
To provide a feasible solution approach, this thesis introduces a framework that allows
“just-in-time” safety certification for the dynamic adaptation behavior of networked
systems. Dynamic safety contracts (DSCs) are presented as the core solution concept
for monitoring and synthesis of decentralized safety knowledge. Ultimately, this opens up a path towards standardized service provision concepts as a set of safety-related runtime evidence. DSCs enable the modular specification of relevant safety features in networked applications as a series of formalized demand-guarantee dependencies. The specified safety features can be hierarchically integrated and linked to an interpretation level for assessing the scope of possible safe behavioral adaptations. In this way, the networked
adaptation behavior can be conditionally certified with respect to the fulfilled
DSC safety features during operation. As long as the continuous evaluation process
provides safe adaptation behavior for a networked application context, safety can be
guaranteed for a networked system mode at runtime. Significant safety-related changes
in the application context, however, can lead to situations in which no safe adaptation
behavior is available for the current system state. In such cases, the remaining DSC
guarantees can be utilized to determine optimal degradation concepts for the dynamic
applications.
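To make the demand-guarantee idea more concrete, here is a deliberately simplified C sketch of evaluating such a contract at runtime; all types and names are illustrative assumptions and do not correspond to the specification elements introduced in the following paragraph.

#include <stdbool.h>
#include <stddef.h>

typedef bool (*runtime_evidence_fn)(void);      /* a monitored condition */

typedef struct {
    const char *guarantee;                      /* safety feature offered  */
    runtime_evidence_fn *demands;               /* monitored preconditions */
    size_t demand_count;
} DynamicSafetyContract;

/* A guarantee may be relied upon only while all of its demands hold. */
bool dsc_guarantee_holds(const DynamicSafetyContract *c)
{
    for (size_t i = 0; i < c->demand_count; ++i)
        if (!c->demands[i]())
            return false;
    return true;
}

/* The safe adaptation scope is the set of behaviors whose guarantees all
 * hold; if it becomes empty, a degradation mode has to be selected. */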
For the operationalization of the DSCs approach, suitable specification elements and
mechanisms have been defined. Based on a dedicated GUI-engineering framework it is
shown how DSCs can be systematically developed and transformed into appropriate runtime
representations. Furthermore, a safety engineering backbone is outlined to support
the DSC modeling process in concrete application scenarios. The conducted validation
activities show the feasibility and adequacy of the proposed DSCs approach. In parallel,
limitations and areas of future improvement are pointed out.
Topological insulators (TI) are a fascinating new state of matter. Like usual insulators, their band structure possesses a band gap, such that they cannot conduct current in their bulk. However, they are able to conduct current along their edges and surfaces, due to edge states that cross the band gap. What makes TIs so interesting and potentially useful are these robust unidirectional edge currents. They are immune to significant defects and disorder, which means that they provide scattering-free transport.
In photonics, using topological protection has a huge potential for applications, e.g. for robust optical data transfer [1-3] – even on the quantum level [4, 5] – or to make devices more stable and robust [6, 7]. Therefore, the field of topological insulators has spread to optics to create the new and active research field of topological photonics [8-10].
Well-defined and controllable model systems can help to provide deeper insight into the mechanisms of topologically protected transport. These model systems provide a vast control over parameters. For example, arbitrary lattice types without defects can be examined, and single lattice sites can be manipulated. Furthermore, they allow for the observation of effects that usually happen at extremely short time-scales in solids. Model systems based on photonic waveguides are ideal candidates for this.
They consist of optical waveguides arranged on a lattice. Due to evanescent coupling, light that is inserted into one waveguide spreads along the lattice. This coupling of light between waveguides can be seen as an analogue to electrons hopping/tunneling between atomic lattice sites in a solid.
The theoretical basis for this analogy is given by the mathematical equivalence between the Schrödinger and the paraxial Helmholtz equation. This means that in these waveguide systems, the role of time is assigned to a spatial axis. The field evolution along the waveguides' propagation axis z thus models the temporal evolution of an electron's wave function in a solid. Electric and magnetic fields acting on electrons in solids need to be incorporated into the photonic platform by introducing artificial fields. These artificial gauge fields need to act on photons in the same way that their electromagnetic counterparts act on electrons. For example, to create a photonic analogue of a topological insulator, the waveguides are bent helically along their propagation axis to model the effect of a magnetic field [3]. This means that the fabrication of these waveguide arrays needs to be done in 3D.
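For reference, this equivalence can be written out explicitly (standard photonic-lattice notation, given here as an illustration rather than quoted from the thesis): the paraxial equation for the slowly varying envelope \(\psi(x,y,z)\) in a medium with background index \(n_0\) and modulation \(\Delta n(x,y)\),
\[
i\,\partial_z \psi = -\frac{1}{2k_0}\nabla_\perp^2 \psi - \frac{k_0\,\Delta n(x,y)}{n_0}\,\psi, \qquad k_0 = \frac{2\pi n_0}{\lambda},
\]
has the same form as the Schrödinger equation \(i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi\), with the propagation distance \(z\) taking the role of time and \(-\Delta n\) that of the potential \(V\).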
In this thesis, a new method to 3D micro-print waveguides is introduced. The inverse structure is fabricated via direct laser writing, and subsequently infiltrated with a material with higher refractive index contrast. We will use these model systems of evanescently coupled waveguides to look at different effects in topological systems, in particular at Floquet topological systems.
We will start with a topologically trivial system, consisting of two waveguide arrays with different artificial gauge fields. There, we observe that an interface between these trivial gauge fields has a profound impact on the wave vector of the light traveling across it. We deduce an analog to Snell's law and verify it experimentally.
Then we will move on to Floquet topological systems, consisting of helical waveguides. At the interface between two Floquet topological insulators with opposite helicity of the waveguides, we find additional trivial interface modes that trap the light. This allows us to investigate the interaction between trivial and topological modes in the lattice.
Furthermore, we address the question of whether topological edge states are robust under the influence of time-dependent defects. In a one-dimensional topological model (the Su-Schrieffer-Heeger model [11]) we apply periodic temporal modulations to an edge waveguide. We find Floquet copies of the edge state that couple to the bulk in a certain frequency window and thus depopulate the edge state.
In the two-dimensional Floquet topological insulator, we introduce single defects at the edge. When these defects share the temporal periodicity of the helical bulk waveguides, they have no influence on a topological edge mode: the light moves around or through the defect without being scattered into the bulk. Defects with a different periodicity, however, can – like the defects in the SSH model – induce scattering of the edge state into the bulk.
In the end we will briefly highlight a newly emerging method for the fabrication of waveguides with low refractive index contrast. Moreover, we will introduce new ways to create artificial gauge fields by the use of orbital angular momentum states in waveguides.
We report the design, fabrication and experimental investigation of a spectrally wide-band terahertz spatial light modulator (THz-SLM) based on an array of 768 actuatable mirrors, each 220 μm long and 100 μm wide. A mirror length of several hundred micrometers is required to reduce diffraction from individual mirrors at terahertz frequencies and to increase the pixel-to-pixel modulation contrast of the THz-SLM. By means of spatially selective actuation, we used the mirror array as a reconfigurable grating to spatially modulate terahertz waves in a frequency range from 0.97 THz to 2.28 THz. Over the entire frequency band, the modulation contrast was higher than 50% with a peak modulation contrast of 87% at 1.38 THz. For spatial light modulation, almost arbitrary spatial pixel sizes can be realized by grouping mirrors that are collectively switched as a pixel. For fabrication of the actuatable mirrors, we exploited the intrinsic residual stress in chrome-copper-chrome multi-layers that forces the mirrors into an upstanding position at an inclination angle of 35°. By applying a bias voltage of 37 V, the mirrors were pulled down to the substrate. By hysteretic switching, we were able to spatially modulate terahertz radiation at arbitrary pixel modulation patterns.
Planar force or pressure is a fundamental physical aspect during any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, they still take a niche part in the entire discipline, using ad hoc systems and data analysis methods. Mostly these studies were not followed by further elaborative works. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of application recognition, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies have shown that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of body-print; a pressure sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swivel office chairs with TPM sensors under the cover can be used to detect the seater's real-time posture; TPM can be used to detect emotion-related touch interactions for smart objects, toys or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; it can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations have resulted in accuracies well above the chance level of the corresponding number of classes, e.g., the `swivel chair' study has a classification accuracy of 79.5% over 10 posture classes and in the `soccer shoe' study the accuracy is 98.8% among 17 combinations of angle and intensity.
Large-scale distributed systems consist of a number of components, take a number of parameter values as input, and behave differently based on a number of non-deterministic events. All these features—components, parameter values, and events—interact in complicated ways, and unanticipated interactions may lead to bugs. Empirically, many bugs in these systems are caused by interactions of only a small number of features. In certain cases, it may be possible to test all interactions of \(k\) features for a small constant \(k\) by executing a family of tests that is exponentially or even doubly-exponentially smaller than the family of all tests. Thus, in such cases we can effectively uncover all bugs that require up to \(k\)-wise interactions of features.
In this thesis we study two occurrences of this phenomenon. First, many bugs in distributed systems are caused by network partition faults. In most cases these bugs occur due to two or three key nodes, such as leaders or replicas, not being able to communicate, or because the leading node finds itself in a block of the partition without quorum. Second, bugs may occur due to unexpected schedules (interleavings) of concurrent events—concurrent exchange of messages and concurrent access to shared resources. Again, many bugs depend only on the relative ordering of a small number of events. We call the smallest number of events whose ordering causes a bug the depth of the bug. We show that in both testing scenarios we can effectively uncover bugs involving small number of nodes or bugs of small depth by executing small families of tests.
We phrase both testing scenarios in terms of an abstract framework of tests, testing goals, and goal coverage. Sets of tests that cover all testing goals are called covering families. We give a general construction that shows that whenever a random test covers a fixed goal with sufficiently high probability, a small randomly chosen set of tests is a covering family with high probability. We then introduce concrete coverage notions relating to network partition faults and bugs of small depth. In the case of network partition faults, we show that for the introduced coverage notions we can find a lower bound on the probability that a random test covers a given goal. Our general construction then yields a randomized testing procedure that achieves full coverage—and hence finds bugs—quickly.
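The quantitative idea behind this construction can be sketched as follows (a standard union-bound estimate; the exact constants used in the thesis may differ). If a single random test covers a fixed goal with probability at least \(p\), then \(N\) independent random tests miss that goal with probability at most \((1-p)^N \le e^{-pN}\); a union bound over all \(|G|\) goals shows that
\[
N \;\ge\; \frac{\ln|G| + \ln(1/\delta)}{p}
\]
random tests form a covering family with probability at least \(1-\delta\).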
In case of coverage notions related to bugs of small depth, if the events in the program form a non-trivial partial order, our general construction may give a suboptimal bound. Thus, we study other ways of constructing covering families. We show that if the events in a concurrent program are partially ordered as a tree, we can explicitly construct a covering family of small size: for balanced trees, our construction is polylogarithmic in the number of events. For the case when the partial order of events does not have a "nice" structure, and the events and their relation to previous events are revealed while the program is running, we give an online construction of covering families. Based on the construction, we develop a randomized scheduler called PCTCP that uniformly samples schedules from a covering family and has a rigorous guarantee of finding bugs of small depth. We experiment with an implementation of PCTCP on two real-world distributed systems—Zookeeper and Cassandra—and show that it can effectively find bugs.
Hardware Contention-Aware Real-Time Scheduling on Multi-Core Platforms in Safety-Critical Systems
(2019)
While the computing industry has shifted from single-core to multi-core processors for performance gain, safety-critical systems (SCSs) still require solutions that enable their transition while guaranteeing safety, requiring no source-code modifications and substantially reducing re-development and re-certification costs, especially for legacy applications that are typically substantial. This dissertation considers the problem of worst-case execution time (WCET) analysis under contentions when deadline-constrained tasks in an independent partitioned task set execute on a homogeneous multi-core processor with dynamic time-triggered shared memory bandwidth partitioning in SCSs.
Memory bandwidth in multi-core processors is shared across cores and is a significant cause of performance bottlenecks and of temporal variability of multiple orders of magnitude in tasks' execution times due to contentions in the memory sub-system. Further, the circular dependency is not only between WCET and the CPU scheduling of other cores, but also between WCET and the memory bandwidth assignments over time to cores. Thus, there is a need for solutions that allow tailoring memory bandwidth assignments to workloads over time and computing safe WCETs. It is pragmatically infeasible to obtain WCET estimates from static WCET analysis tools for multi-core processors due to the sheer computational complexity involved.
We use synchronized periodic memory servers on all cores that regulate each core's maximum memory bandwidth based on the allocated bandwidth over time. First, we present a workload schedulability test for a known even-memory-bandwidth-assignment-to-active-cores over time, where the number of active cores represents the cores with non-zero memory bandwidth assignment. Its computational complexity is similar to that of merge-sort. Second, we demonstrate, using a real avionics certified safety-critical application, how our method's use can preserve an existing application's single-core CPU schedule under contentions on a multi-core processor. It enables incremental certification using composability and requires no source-code modification.
Next, we provide a general framework to perform WCET analysis under dynamic memory bandwidth partitioning when changes in the memory-bandwidth-to-cores assignment are time-triggered and known. It provides a stall maximization algorithm that has a complexity similar to a concave optimization problem and efficiently implements the WCET analysis. Last, we demonstrate that dynamic memory assignments and WCET analysis using our method significantly improve schedulability compared to the state-of-the-art, using an Integrated Modular Avionics scenario.
Previously in this journal we have reported on fundamental transverse mode selection (TMS#0) of broad area semiconductor lasers (BALs) with integrated twice-retracted 4f set-up and film-waveguide lens as the Fourier-transform element. Now we choose and report on a simpler approach for BAL-TMS#0, i.e., the use of a stable confocal longitudinal BAL resonator of length L with a transverse constriction. The absolute value of the radius R of curvature of both mirror facets, convex in one dimension (1D), is R = L = 2f with focal length f. The round trip length 2L = 4f again makes up for a Fourier-optical 4f set-up, and the constriction, resulting in a resonator-internal beam waist, acts as a Fourier-optical low-pass spatial frequency filter. Good TMS#0 is achieved as long as the constriction is tight enough, but filamentation is not completely suppressed.
1. Introduction
Broad area (semiconductor diode) lasers (BALs) are intended to emit high optical output powers (where “high” is relative and depends on the material system). As compared to conventional narrow stripe lasers, the higher power is distributed over a larger transverse cross-section, thus avoiding catastrophic optical mirror damage (COMD). Typical BALs have emitter widths of around 100 μm.
The drawback is the distribution of the high output power
over a large number of transverse modes (in cases without
countermeasures) limiting the portion of the light power in
the fundamental transverse mode (mode #0), which ought to
be maximized for the sake of good light focusability.
Thus techniques have to be used to support, prefer, or
select the fundamental transverse mode (transverse mode
selection TMS#0) by suppression of higher order modes
already upon build-up of the laser oscillation.
In many cases reported in the literature, either a BAL
facet, the
A measurement technique, i.e. reflectance anisotropy/difference spectroscopy (RAS/RDS), which had originally been developed for in-situ
epitaxial growth control, is employed here for in-situ real-time etch-depth control during reactive ion etching (RIE) of cubic crystalline III/V
semiconductor samples. Temporal optical Fabry-Perot oscillations of the genuine RAS signal (or of the average reflectivity) during etching due
to the ever shrinking layer thicknesses are used to monitor the current etch depth. This way the achievable in-situ etch-depth resolution has
been around 15 nm. To improve etch-depth control even further, i.e. down to below 5 nm, we now use the optical equivalent of a mechanical
vernier scale – by employing Fabry-Perot oscillations at two different wavelengths or photon energies of the RAS measurement light – 5%
apart, which gives a vernier scale resolution of 5%. For the AlGaAs(Sb) material system a 5 nm resolution is an improvement by a factor of 3
and amounts to a precision in in-situ etch-depth control of around 8 lattice constants.
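To illustrate the vernier principle in formulas (our reading, using the standard thin-film interference relation; not taken from the paper): one Fabry-Perot oscillation of the reflected signal corresponds to an etch-depth increment
\[
\Delta d_i = \frac{\lambda_i}{2n},
\]
so two probe wavelengths chosen 5% apart produce oscillation periods that differ by about 5%; tracking the relative phase of the two oscillation traces therefore localizes the current etch depth to roughly 5% of a single period, in analogy to a mechanical vernier scale refining a coarse ruler.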
Shared memory concurrency is the pervasive programming model for multicore architectures
such as x86, Power, and ARM. Depending on the memory organization, each architecture follows
a somewhat different shared memory model. All these models, however, have one common
feature: they allow certain outcomes for concurrent programs that cannot be explained
by interleaving execution. In addition to the complexity due to architectures, compilers like
GCC and LLVM perform various program transformations, which also affect the outcomes of
concurrent programs.
To be able to program these systems correctly and effectively, it is important to define a
formal language-level concurrency model. For efficiency, it is important that the model is
weak enough to allow various compiler optimizations on shared memory accesses as well
as efficient mappings to the architectures. For programmability, the model should be strong
enough to disallow bogus “out-of-thin-air” executions and provide strong guarantees for well-synchronized
programs. Because of these conflicting requirements, defining such a formal
model is very difficult. This is why, despite years of research, major programming languages
such as C/C++ and Java do not yet have completely adequate formal models defining their
concurrency semantics.
In this thesis, we address this challenge and develop a formal concurrency model that is very
good both in terms of compilation efficiency and of programmability. Unlike most previous
approaches, which were defined either operationally or axiomatically on single executions,
our formal model is based on event structures, which represent multiple program executions and thus give us more structure to define the semantics of concurrency.
In more detail, our formalization has two variants: the weaker version, WEAKEST, and the
stronger version, WEAKESTMO. The WEAKEST model simulates the promising semantics proposed
by Kang et al., while WEAKESTMO is incomparable to the promising semantics. Moreover,
WEAKESTMO discards certain questionable behaviors allowed by the promising semantics.
We show that the proposed WEAKESTMO model resolves the out-of-thin-air problem, provides standard data-race-freedom (DRF) guarantees, allows the desirable optimizations, and can be mapped to architectures like x86, PowerPC, and ARMv7. Additionally, our models are
flexible enough to leverage existing results from the literature to establish data-race-freedom
(DRF) guarantees and correctness of compilation.
In addition, in order to ensure the correctness of compilation by a major compiler, we developed
a translation validator targeting LLVM’s “opt” transformations of concurrent C/C++
programs. Using the validator, we identified a few subtle compilation bugs, which were reported
and were fixed. Additionally, we observe that LLVM concurrency semantics differs
from that of C11; there are transformations which are justified in C11 but not in LLVM and
vice versa. Considering the subtle aspects of LLVM concurrency, we formalized a fragment
of LLVM’s concurrency semantics and integrated it into our WEAKESTMO model.
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and
progression of diseases, such as cancer metastasis. For the migration to occur, cellular
structures such as actomyosin cables and cell-substrate adhesion clusters must interact.
As cell trajectories exhibit a random character, so must such interactions. Furthermore,
migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined
piecewise deterministic process. The signaling pathways regulating the contractility and
adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results,
including anomalous diffusion, tactic migration, and contact guidance. The simulations
of colliding cells explain the observed outcomes in terms of contact induced modification
of contractility and adhesion dynamics. These explained outcomes include modulation
of collision response and group behavior in the presence of an external signal, as well as
invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation,
the relationships concerning actomyosin contractility and adhesion clusters are maintained.
Thus, we construct a multiscale description of cell migration, whereby single, collective,
and population scale formulations are deduced from the relationships on the subcellular
level in a mathematically consistent way.
Spatial regression models provide the opportunity to analyse spatial data and spatial processes. Yet, several model specifications can be used, all assuming different types of spatial dependence. This study summarises the most commonly used spatial regression models and offers a comparison of their performance by using Monte Carlo experiments. In contrast to previous simulations, this study evaluates the bias of the impacts rather than the regression coefficients and additionally provides results for situations with a non-spatial omitted variable bias. Results reveal that the most commonly used spatial autoregressive (SAR) and spatial error (SEM) specifications yield severe drawbacks. In contrast, spatial Durbin specifications (SDM and SDEM) as well as the simple SLX provide accurate estimates of direct impacts even in the case of misspecification. Regarding the indirect `spillover' effects, several - quite realistic - situations exist in which the SLX outperforms the more complex SDM and SDEM specifications.
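For orientation, the specifications compared here are commonly written as follows (standard spatial-econometrics notation, added for illustration and not reproduced from the paper), with spatial weight matrix \(W\):
\[
\begin{aligned}
\text{SAR:}\;& y = \rho W y + X\beta + \varepsilon, &\quad
\text{SEM:}\;& y = X\beta + u,\; u = \lambda W u + \varepsilon, \\
\text{SLX:}\;& y = X\beta + W X\theta + \varepsilon, &\quad
\text{SDM:}\;& y = \rho W y + X\beta + W X\theta + \varepsilon, \\
\text{SDEM:}\;& y = X\beta + W X\theta + u,\; u = \lambda W u + \varepsilon. &&
\end{aligned}
\]
Direct and indirect (spillover) impacts are then read off the partial-derivative matrices, e.g. \((I-\rho W)^{-1}(I\beta_k + W\theta_k)\) for the SDM, whose diagonal elements (on average) give the direct impacts and whose off-diagonal elements give the indirect impacts.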
On the Effect of Nanofillers on the Environmental Stress Cracking Resistance of Glassy Polymers
(2019)
It is well known that reinforcing polymers with small amounts of nano-sized fillers is one of the most effective methods for simultaneously improving their mechanical and thermal properties. However, only a small number of studies have focused on environmental stress cracking (ESC), which is a major issue for premature failures of plastic products in service. Therefore, the contribution of this work focused on the influence of nano-SiO2 particles on the morphological, optical, mechanical, thermal, as well as environmental stress cracking properties of amorphous-based nanocomposites.
Polycarbonate (PC), polystyrene (PS) and poly(methyl methacrylate) (PMMA) nanocomposites containing different amounts and sizes of nano-SiO2 particles were prepared using a twin-screw extruder followed by injection molding. Adding a small amount of nano-SiO2 caused a reduction in optical properties but improved the tensile, toughness, and thermal properties of the polymer nanocomposites. The significant enhancement in mechanical and thermal properties was attributed to the adequate level of dispersion and interfacial interaction of the SiO2 nanoparticles in the polymer matrix. This situation possibly increased the efficiency of stress transfer across the nanocomposite components. Moreover, the data revealed a clear dependency on the filler size. The polymer nanocomposites filled with smaller nanofillers exhibited an outstanding enhancement in both mechanical properties and transparency compared with nanocomposites filled with larger particles. The best compromise of strength, toughness, and thermal properties was achieved in PC-based nanocomposites. Therefore, special attention to the influence of nanofiller on the ESC resistance was given to PC.
The ESC resistance of the materials was investigated under static loading with and without the presence of stress-cracking agents. Interestingly, the incorporation of nano-SiO2 greatly enhanced the ESC resistance of PC in all investigated fluids. This result was particularly evident with the smaller quantities and sizes of nano-SiO2. The enhancement in ESC resistance was more effective in mild agents and air, where the quality of the deformation process was vastly altered with the presence of nano-SiO2. This finding confirmed that the new structural arrangements on the molecular scale induced by nanoparticles dominate over the ESC agent absorption effect and result in greatly improving the ESC resistance of the materials. This effect was more pronounced with increasing molecular weight of PC due to an increase in craze stability and fibril density. The most important and new finding is that the ESC behavior of polymer-based nanocomposite/stress-cracking agent combinations can be scaled using the Hansen solubility parameter. This allowed us to predict the risk of ESC as a function of the filler content for different stress-cracking agents without performing extensive tests. For a comparison of different amorphous polymer-based nanocomposites at a given nano-SiO2 particle content, the ESC resistance of the materials improved in the following order: PMMA/SiO2 < PS/SiO2 < low molecular weight PC/SiO2 < high molecular weight PC/SiO2. In most cases, nanocomposites with 1 vol.% of nano-SiO2 particles exhibited the largest improvement in ESC resistance.
However, the remarkable improvement in the ESC resistance—particularly in PC-based nanocomposites—created some challenges related to material characterization because testing times (failure times) significantly increased. Accordingly, the superposition approach has been applied to construct a master curve of the crack propagation model from the available short-term tests at different temperatures. Good agreement of the master curves with the experimental data revealed that the superposition approach is a suitable comparative method for predicting slow crack growth behavior, particularly for long-duration cracking tests as in mild agents. This methodology made it possible to minimize testing time.
Additionally, modeling and simulations using the finite element method revealed that multi-field modeling could provide reasonable predictions for diffusion processes and their impact on fracture behavior in different stress cracking agents. This finding suggests that the implemented model may be a useful tool for quick screening and mitigating the risk of ESC failures in plastic products.
Most modern multiprocessors offer weak memory behavior to improve their performance in terms of throughput. They allow the order of memory operations to be observed differently by each processor. This is in contrast to the concept of sequential consistency (SC), which enforces a unique sequential view on all operations for all processors. Because most software has been and still is developed with SC in mind, we face a gap between the expected behavior and the actual behavior on modern architectures. The issues described only affect multithreaded software and therefore most programmers might never face them. However, multi-threaded bare-metal software like operating systems, embedded software, and real-time software has to consider memory consistency and ensure that the order of memory operations does not yield unexpected results. This software is more critical than general consumer software in terms of consequences, and therefore new methods are needed to ensure its correct behavior.
In general, a memory system is considered weak if it allows behavior that is not possible in a sequential system. For example, in the SPARC processor with total store ordering (TSO) consistency, all writes might be delayed by store buffers before they are eventually processed by the main memory. This allows the issuing process to work with its own written values before other processes observe them (i.e., reading its own value before it leaves the store buffer). Because this behavior is not possible with sequential consistency, TSO is considered to be weaker than SC. Programming in the context of weak memory architectures requires a proper comprehension of how the model deviates from expected sequential behavior. For the verification of these programs, formal representations are required that cover the weak behavior in order to utilize formal verification tools.
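As a concrete illustration of such weak behavior (the classic store-buffering litmus test, given here as a generic example rather than one taken from this thesis), the following C11 program can finish an iteration with r1 == 0 and r2 == 0 on a TSO machine, an outcome that no interleaving of the four memory accesses can explain:

/* Store-buffering (SB) litmus test: under SC at least one of r1, r2 must
 * read 1, but TSO hardware (and the relaxed C11 accesses used here) also
 * permits r1 == 0 && r2 == 0, because each store may still sit in a store
 * buffer while the subsequent load reads main memory. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int x, y;
static int r1, r2;

static void *thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    for (int i = 0; i < 100000; ++i) {
        atomic_store(&x, 0);
        atomic_store(&y, 0);
        pthread_t t0, t1;
        pthread_create(&t0, NULL, thread0, NULL);
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        if (r1 == 0 && r2 == 0)
            printf("weak outcome observed in iteration %d\n", i);
    }
    return 0;
}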
This thesis explores different verification approaches and respectively fitting representations of a multitude of memory models. In a joint effort, we started with the concept of testing memory operation traces with regard to their consistency with different memory consistency models. A memory operation trace is directly derived from a program trace and consists of a sequence of read and write operations for each process. Analyzing the testing problem, we are able to prove that the problem is NP-complete for most memory models. In that process, a satisfiability (SAT) encoding for given problem instances was developed that can be used in reachability and robustness analysis.
In order to cover all program executions instead of just a single program trace, additional representations are introduced and explored throughout this thesis. One of the representations introduced is a novel approach to specify a weak memory system using temporal logics. A set of linear temporal logic (LTL) formulas is developed that describes all properties required to restrict possible traces to those consistent with the given memory model. The resulting LTL specifications can directly be used in model checking, e.g., to check safety conditions. Unfortunately, the derived LTL specifications suffer from the state explosion problem: even small examples, like the Peterson mutual exclusion algorithm, tend to generate huge formulas and require vast amounts of memory for verification. For this reason, it is concluded that, using the proposed verification approach, these specifications are not well suited for the verification of real-world software. Nonetheless, they provide comprehensive and formally correct descriptions that might be used elsewhere, e.g., in programming or teaching.
Another approach to representing these models is operational semantics. In this thesis, operational semantics of weak memory models are provided in the form of reference machines that are both correct and complete regarding the memory model specification. Operational semantics allow systems with weak memory models to be simulated step by step. This provides an elegant way to study the effects that lead to weakly consistent behavior, while still providing a basis for formal verification. The operational models are then incorporated in verification tools for multithreaded software. These state space exploration tools proved suitable for the verification of multithreaded software in a weakly consistent memory environment. However, because not only the memory system but also the processor is expressed as operational semantics, some verification approaches will not be feasible due to the large size of the state space.
Finally, to tackle the aforementioned issue, a state transition system for parallel programs is proposed. The transition system is defined by a set of structural operational semantics (SOS) rules and a suitable memory structure that can cover multiple memory models. This makes it possible to influence the state space through smart representations and approximation approaches in future work.
The systems in industrial automation management (IAM) are information systems. The management parts of such systems are software components that support the manufacturing processes. The operational parts control highly plug-compatible devices, such as controllers, sensors and motors. Process variability and topology variability are the two main characteristics of software families in this domain. Furthermore, three roles of stakeholders -- requirement engineers, hardware-oriented engineers, and software developers -- participate in different derivation stages and have different variability concerns. In current practice, the development and reuse of such systems is costly and time-consuming, due to the complexity of topology and process variability. To overcome these challenges, the goal of this thesis is to develop an approach to improve the software product derivation process for systems in industrial automation management, where different variability types are concerned in different derivation stages. Current state-of-the-art approaches commonly use general-purpose variability modeling languages to represent variability, which is not sufficient for IAM systems. The process and topology variability requires more user-centered modeling and representation. The insufficiency of variability modeling leads to low efficiency during the staged derivation process involving different stakeholders. Up to now, product line approaches for systematic variability modeling and realization have not been well established for such complex domains. The model-based derivation approach presented in this thesis integrates feature modeling with domain-specific models for expressing processes and topology. The multi-variability modeling framework includes the meta-models of the three variability types and their associations. The realization and implementation of the multi-variability involves the mapping and the tracing of variants to their corresponding software product line assets. Based on the foundation of multi-variability modeling and realization, a derivation infrastructure is developed, which enables a semi-automated software derivation approach. It supports the configuration of different variability types to be integrated into the staged derivation process of the involved stakeholders. The derivation approach is evaluated in an industry-grade case study of a complex software system. The feasibility is demonstrated by applying the approach in the case study. By using the approach, both the size of the reusable core assets and the automation level of derivation are significantly improved. Furthermore, semi-structured interviews with engineers in practice have evaluated the usefulness and ease-of-use of the proposed approach. The results show a positive attitude towards applying the approach in practice, and high potential to generalize it to other related domains.
Various physical phenomena with sudden transients that result in structural changes can be modeled via switched nonlinear differential algebraic equations (DAEs) of the type
\[
E_{\sigma}\dot{x}=A_{\sigma}x+f_{\sigma}+g_{\sigma}(x), \tag{DAE}
\]
where \(E_p, A_p \in \mathbb{R}^{n\times n}\), \(x \mapsto g_p(x)\) is a nonlinear mapping and \(f_p: \mathbb{R} \rightarrow \mathbb{R}^n\) for each \(p \in \{1,\ldots,P\}\), \(P \in \mathbb{N}\), and \(\sigma: \mathbb{R} \rightarrow \{1,\ldots,P\}\) is the switching signal.
Two related common tasks are:
Task 1: Investigate whether the above (DAE) has a solution and whether it is unique.
Task 2: Find a connection between a solution of the above (DAE) and solutions of related partial differential equations.
In the linear case \(g(x) \equiv 0\), Task 1 has already been tackled in a distributional solution framework.
A main goal of the dissertation is to contribute to Task 1 for the nonlinear case \(g(x) \not\equiv 0\); contributions to Task 2 are also given for switched nonlinear DAEs arising when modeling sudden transients in water distribution networks. In addition, this thesis contains the following further contributions:
The notion of structured switched nonlinear DAEs is introduced, allowing also non-regular distributions as solutions. This extends a previous framework that allowed only piecewise smooth functions as solutions. Furthermore, six mild conditions are given that ensure existence and uniqueness of the solution within the space of piecewise smooth distributions. The main condition, namely the regularity of the matrix pair \((E,A)\), is interpreted geometrically for those switched nonlinear DAEs arising from water network graphs.
Another contribution is the introduction of these switched nonlinear DAEs as a simplification of the PDE model classically used for modeling water networks. Finally, with the support of numerical simulations of the PDE model, it is illustrated that this switched nonlinear DAE model is a good approximation of the PDE model in the case of a small compressibility coefficient.
The fifth-generation mobile telecommunication network is expected to support multi-access edge computing (MEC), which intends to distribute computation tasks and services from the central cloud to the edge clouds. Toward ultra-responsive, ultra-reliable, and ultra-low-latency MEC services, the current mobile network security architecture should enable a more decentralized approach for authentication and authorization processes. This paper proposes a novel decentralized authentication architecture that supports flexible and low-cost local authentication with the awareness of context information of network elements such as user equipment and virtual network functions. Based on a Markov model for backhaul link quality as well as a random walk mobility model with mixed mobility classes and traffic scenarios, numerical simulations have demonstrated that the proposed approach is able to achieve a flexible balance between the network operating cost and the MEC reliability.
Cyclic indentation is a technique used to characterize materials by indenting repeatedly on the same location. This technique allows information to be obtained on how the plastic material response changes under repeated loading. We explore the processes underlying this technique using a combined experimental and simulative approach. We focus on the loading–unloading hysteresis and the dependence of the hysteresis width \(h_{a,p}\) on the cycle number. In both approaches, we find that \(h_{a,p}\) follows a power law, characterized by the hardening exponent \(e\). A detailed analysis of the atomistic simulation results shows that changes in the dislocation network under repeated indentation are responsible for this behavior.
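Read together with the stated dependence of the hysteresis width on the cycle number \(N\), this presumably corresponds to a relation of the form (our assumed reading, not a formula quoted from the abstract)
\[
h_{a,p}(N) \;\propto\; N^{-e},
\]
i.e. the loading-unloading hysteresis narrows from cycle to cycle as the material cyclically hardens.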
Modern applications in the realms of wireless communication and mobile broadband Internet increase the demand for compact antennas with well defined directivity. Here, we present an approach for the design and implementation of hybrid antennas consisting of a classic feeding antenna that is near-field-coupled to a subwavelength resonator. In such a combined structure, the composite antenna always radiates at the resonance frequency of the subwavelength oscillator as well as at the resonance frequency of the feeding antenna. While the classic antenna serves as an impedance-matched feeding element, the subwavelength resonator induces an additional resonance to the composite antenna. In general, these near-field coupled structures have been known for decades and have lately been published as near-field resonant parasitic antennas. We describe an antenna design consisting of a high-frequency electric dipole antenna at f_d = 25 GHz that couples to a low-frequency subwavelength split-ring resonator, which emits electromagnetic waves at f_SRR = 10.41 GHz. The radiating part of the antenna has a size of approximately 3.2 mm × 8 mm × 1 mm and thus is electrically small at this frequency with a product k⋅a = 0.5. The input return loss of the antenna was moderate at −18 dB and it radiated at a spectral bandwidth of 120 MHz. The measured main lobe of the antenna was observed at 60° with a −3 dB angular width of 65° in the E-plane and at 130° with a −3 dB angular width of 145° in the H-plane.
The coordination of multiple external representations is important for learning, but yet a difficult task for students, requiring instructional support. The subject in this study covers a typical relation in physics between abstract mathematical equations (definitions of divergence and curl) and a visual representation (vector field plot). To support the connection across both representations, two instructions with written explanations, equations, and visual representations (differing only in the presence of visual cues) were designed and their impact on students’ performance was tested. We captured students’ eye movements while they processed the written instruction and solved subsequent coordination tasks. The results show that students instructed with visual cues (VC students) performed better, responded with higher confidence, experienced less mental effort, and rated the instructional quality better than students instructed without cues. Advanced eye-tracking data analysis methods reveal that cognitive integration processes appear in both groups at the same point in time but they are significantly more pronounced for VC students, reflecting a greater attempt to construct a coherent mental representation during the learning process. Furthermore, visual cues increase the fixation count and total fixation duration on relevant information. During problem solving, the saccadic eye movement pattern of VC students is similar to experts in this domain. The outcomes imply that visual cues can be beneficial in coordination tasks, even for students with high domain knowledge. The study strongly confirms an important multimedia design principle in instruction, that is, that highlighting conceptually relevant information shifts attention to relevant information and thus promotes learning and problem solving. Even more, visual cues can positively influence students’ perception of course materials.
3D joint kinematics can provide important information about the quality of movements. Optical motion capture systems (OMC) are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMU) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics were calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization, and a biomechanical model. To evaluate the validity of IMU-based 3D joint kinematics, root mean squared error (RMSE), range of motion error (ROME), Bland-Altman (BA) analysis as well as the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data were compared to OMC data based on marker clusters; second, to OMC data based on skin markers attached to anatomical landmarks. The first evaluation revealed mean RMSE and ROME below 3° for all joints and tasks. The more dynamic task, the CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and all tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
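As a rough illustration of two of the agreement measures used above, the following sketch computes RMSE and ROME between an IMU-based and an OMC-based joint-angle curve; the synthetic data and noise level are placeholders, not the study's recordings.

```python
# Minimal sketch (synthetic data, assumed units of degrees): RMSE and range of
# motion error (ROME) between an IMU-based and an OMC-based joint-angle curve.
import numpy as np

def rmse(imu_angle: np.ndarray, omc_angle: np.ndarray) -> float:
    """Root mean squared error between two joint-angle curves."""
    return float(np.sqrt(np.mean((imu_angle - omc_angle) ** 2)))

def rome(imu_angle: np.ndarray, omc_angle: np.ndarray) -> float:
    """Range of motion error: difference of the ranges covered by each curve."""
    rom_imu = imu_angle.max() - imu_angle.min()
    rom_omc = omc_angle.max() - omc_angle.min()
    return float(abs(rom_imu - rom_omc))

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    omc = 90 * np.sin(np.pi * t)                    # hypothetical knee flexion during a squat
    imu = omc + np.random.normal(0, 2, t.shape)     # IMU curve with ~2 deg noise
    print(f"RMSE: {rmse(imu, omc):.2f} deg, ROME: {rome(imu, omc):.2f} deg")
```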
Exploiting Direct Laser Writing for Hydrogel Integration into Fragile Microelectromechanical Systems
(2019)
The integration of chemo-responsive hydrogels into fragile microelectromechanical systems (MEMS) with reflective surfaces in the micron to submicron range is presented. Direct laser writing (DLW) for 3D microstructuring of chemo-responsive “smart” hydrogels on sensitive microstructures is demonstrated and discussed in detail, by production of thin hydrogel layers and discs with a controllable lateral size of 2 to 5 µm and a thickness of a few hundred nm. Suitable laser settings for precision microstructuring were determined by screening, controlling crosslinking and limiting active chain diffusion during polymerization with macromers. Macromers are linear polymers with a tunable amount of multifunctional crosslinker moieties, giving access to a broad range of different responsive hydrogels. To demonstrate integration into fragile MEMS, the gel was deposited by DLW onto a resonator with a 200 nm thick sensing plate with high precision. To demonstrate the applicability for sensors, proof-of-concept measurements were performed. The polymer composition was optimized to produce thin reproducible layers, and the feasibility of 3D structures with the same approach is demonstrated.
Radar cross section reducing (RCSR) metasurfaces or coding metasurfaces were primarily designed for normally incident radiation in the past. The performance of coding metasurfaces for RCSR can evidently be improved by additionally reducing the backscattering of obliquely incident radiation, which requires a valid analytic design tool. Here, we derive an analytic current density distribution model for the calculation of the backscatter far-field of obliquely incident radiation on a coding metasurface for RCSR. For demonstration, we devise and fabricate a metasurface for a working frequency of 10.66 GHz and obtain good agreement between the measured, simulated, and analytically calculated backscatter far-fields. The metasurface significantly reduces backscattering for incidence angles between −40° and 40° in a spectral working range of approximately 1 GHz.
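The following sketch is not the paper's analytic current-density model but a generic array-factor approximation that illustrates how oblique incidence enters a backscatter calculation for a 1-bit coding metasurface; the element spacing, the coding sequence, and the simple 180° reflection-phase model are assumptions.

```python
# Generic 1D array-factor sketch of a 1-bit coding metasurface under oblique
# incidence (assumed geometry and coding, not the paper's analytic model).
import numpy as np

c = 3e8
f = 10.66e9                          # working frequency from the abstract
lam = c / f
k = 2 * np.pi / lam
d = lam / 2                          # assumed element spacing
coding = np.array([0, 1] * 8)        # assumed alternating 1-bit coding sequence
phases = coding * np.pi              # "1" elements add a 180 deg reflection phase
x = np.arange(len(coding)) * d       # element positions

def backscatter_pattern(theta_inc_deg, theta_obs_deg):
    """Normalized scattered far-field magnitude for given incidence/observation angles."""
    ti = np.deg2rad(theta_inc_deg)
    to = np.deg2rad(np.asarray(theta_obs_deg, dtype=float))
    # phase accumulated on the way in (incidence) plus on the way out (observation)
    path_phase = k * x[None, :] * (np.sin(ti) + np.sin(to)[:, None])
    field = np.sum(np.exp(1j * (path_phase + phases[None, :])), axis=1)
    return np.abs(field) / len(coding)

if __name__ == "__main__":
    obs = np.linspace(-90, 90, 181)
    pattern = backscatter_pattern(30.0, obs)      # 30 deg oblique incidence
    print("peak normalized backscatter:", round(float(pattern.max()), 3))
```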
The main focus of the research lies in the interpretation and application of results and correlations of soil properties from in situ testing and their subsequent use in terramechanical applications. The empirical correlations and current procedures were mainly developed for medium to large depths; therefore, they were re-evaluated and adjusted herein to reflect the current state of knowledge for the assessment of near-surface soil. To test the technologies, a field investigation at a Moon analogue site was carried out, with the focus placed on the assessment of near-surface soil properties. Samples were collected for subsequent analysis under laboratory conditions. Further laboratory experiments on extraterrestrial soil simulants and other terrestrial soils were conducted, and correlations with relative density and shear strength parameters were attempted. The correlations from the small-scale laboratory experiments and the new re-evaluated correlation for relative density were checked against the data from the field investigation. Additionally, single tire-soil tests were carried out, which enable the investigation of the localized soil response in order to advance current wheel designs and subsequently the vehicle’s mobility. Furthermore, numerical simulations were performed to aid the investigation of the tire-soil interaction. Summing up, current relationships for estimating the relative density of near-surface soil were re-evaluated and subsequently correlated to shear strength parameters, which are the main input for modelling soil in numerical analyses. Single tire-soil tests were carried out and used as a reference to calibrate the interaction of the tire and the soil; they were subsequently utilized to model rolling scenarios, which enable the assessment of soil trafficability and the vehicle’s mobility.
In this dissertation we apply financial mathematical modelling to electricity markets. Electricity is different from any other underlying of financial contracts: it is not storable. This means that electrical energy in one time point cannot be transferred to another. As a consequence, power contracts with disjoint delivery time spans basically have a different underlying. The main idea throughout this thesis is exactly this two-dimensionality of time: every electricity contract is not only characterized by its trading time but also by its delivery time.
The basis of this dissertation are four scientific papers corresponding to the Chapters 3 to 6, two of which have already been published in peer-reviewed journals. Throughout this thesis two model classes play a significant role: factor models and structural models. All ideas are applied to or supported by these two model classes. All empirical studies in this dissertation are conducted on electricity price data from the German market and Chapter 4 in particular studies an intraday derivative unique to the German market. Therefore, electricity market design is introduced by the example of Germany in Chapter 1. Subsequently, Chapter 2 introduces the general mathematical theory necessary for modelling electricity prices, such as Lévy processes and the Esscher transform. This chapter is the mathematical basis of the Chapters 3 to 6.
Chapter 3 studies factor models applied to the German day-ahead spot prices. We introduce a qualitative measure for seasonality functions based on three requirements. Furthermore, we introduce a relation of factor models to ARMA processes, which induces a new method to estimate the mean reversion speed.
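As background for the factor-model/ARMA relation mentioned above: a single Ornstein–Uhlenbeck factor sampled on a grid of width \(\Delta\) is an AR(1) process, which suggests the standard estimate of the mean reversion speed from the autoregressive coefficient (this is the textbook link, stated for orientation, not necessarily the thesis's exact estimator):
\[
  dX_t = -\kappa X_t\,dt + \sigma\,dW_t
  \quad\Longrightarrow\quad
  X_{t+\Delta} = e^{-\kappa\Delta} X_t + \varepsilon_{t+\Delta},
  \qquad
  \hat{\kappa} = -\frac{\ln \hat{\varphi}}{\Delta},
\]
where \(\hat{\varphi}\) is the estimated AR(1) coefficient.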
Chapter 4 conducts a theoretical and empirical study of a pricing method for a new electricity derivative: the German intraday cap and floor futures. We introduce the general theory of derivative pricing and propose a method based on the Hull-White model of interest rate modelling, which is a one-factor model. We include week futures prices to generate a price forward curve (PFC), which is then used instead of a fixed deterministic seasonality function. The idea that we can combine all market prices, and in particular futures prices, to improve the model quality also plays the major role in Chapter 5 and Chapter 6.
In Chapter 5 we develop a Heath-Jarrow-Morton (HJM) framework that models intraday, day-ahead, and futures prices. This approach is based on two stochastic processes motivated by economic interpretations and separates the stochastic dynamics in trading and delivery time. Furthermore, this framework allows for the use of classical day-ahead spot price models such as those of Schwartz and Smith (2000) and Lucia and Schwartz (2002), and it includes many model classes such as structural models and factor models.
Chapter 6 unifies the classical theory of storage and the concept of a risk premium through the introduction of an unobservable intrinsic electricity price. Since all tradable electricity contracts are derivatives of this actual intrinsic price, their prices should all be derived as conditional expectation under the risk-neutral measure. Through the intrinsic electricity price we develop a framework, which also includes many existing modelling approaches, such as the HJM framework of Chapter 5.
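The pricing principle referred to here can be summarized in one line: if \(S\) denotes the unobservable intrinsic electricity price, every tradable contract delivering at time \(T\) is valued as
\[
  F(t,T) \;=\; \mathbb{E}_{\mathbb{Q}}\!\left[\, S_T \mid \mathcal{F}_t \,\right], \qquad t \le T,
\]
with \(\mathbb{Q}\) a risk-neutral measure and \(\mathcal{F}_t\) the information available at trading time \(t\) (notation added here only for orientation; the thesis develops the full framework).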
Analyzing Centrality Indices in Complex Networks: an Approach Using Fuzzy Aggregation Operators
(2018)
The identification of entities that play an important role in a system is one of the fundamental analyses being performed in network studies. This topic is mainly related to centrality indices, which quantify node centrality with respect to several properties in the represented network. The nodes identified in such an analysis are called central nodes. Although centrality indices are very useful for these analyses, there exist several challenges regarding which one fits best for a network. In addition, if the usage of only one index for determining central nodes leads to under- or overestimation of the importance of nodes and is insufficient for finding important nodes, then the question is how multiple indices can be used in conjunction in such an evaluation. Thus, in this thesis an approach is proposed that includes multiple indices of nodes, each indicating an aspect of importance, in the respective evaluation and where all the aspects of a node’s centrality are analyzed in an explorative manner. To achieve this aim, the proposed idea uses fuzzy operators, including a parameter for generating different types of aggregations over multiple indices. In addition, several preprocessing methods for normalization of those values are proposed and discussed. We investigate whether the choice of different decisions regarding the aggregation of the values changes the ranking of the nodes or not. It is revealed that (1) there are nodes that remain stable among the top-ranking nodes, which makes them the most central nodes, and there are nodes that remain stable among the bottom-ranking nodes, which makes them the least central nodes; and (2) there are nodes that show high sensitivity to the choice of normalization methods and/or aggregations. We explain both cases and the reasons why the nodes’ rankings are stable or sensitive to the corresponding choices in various networks, such as social networks, communication networks, and air transportation networks.
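A minimal sketch of the general idea, not the thesis's operators: compute several centrality indices per node, normalize them, and combine them with a parameterized aggregation whose parameter interpolates between "and"-like and "or"-like behavior. The choice of indices, the normalization, and the generalized-mean operator below are assumptions made for illustration.

```python
# Toy aggregation of multiple normalized centrality indices with a parameterized
# generalized mean (p -> -inf acts like min/"and", p -> +inf like max/"or").
import networkx as nx
import numpy as np

def normalized_indices(g: nx.Graph) -> dict:
    """Degree, closeness, and betweenness centrality, each rescaled to [0, 1]."""
    indices = {
        "degree": nx.degree_centrality(g),
        "closeness": nx.closeness_centrality(g),
        "betweenness": nx.betweenness_centrality(g),
    }
    out = {}
    for name, values in indices.items():
        arr = np.array([values[n] for n in g.nodes()])
        rng = arr.max() - arr.min()
        out[name] = (arr - arr.min()) / rng if rng > 0 else np.zeros_like(arr)
    return out

def aggregate(g: nx.Graph, p: float = 1.0) -> dict:
    """Generalized-mean aggregation over all normalized indices."""
    idx = normalized_indices(g)
    stacked = np.vstack(list(idx.values())) + 1e-9   # avoid 0 raised to a negative power
    scores = np.power(np.mean(np.power(stacked, p), axis=0), 1.0 / p)
    return dict(zip(g.nodes(), scores))

if __name__ == "__main__":
    g = nx.karate_club_graph()
    ranking = sorted(aggregate(g, p=0.5).items(), key=lambda kv: -kv[1])
    print("top 5 nodes:", ranking[:5])
```

Varying p and the normalization and re-ranking the nodes is exactly the kind of sensitivity analysis described in the abstract.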
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. This leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization, representing the involved filtrations by weight vectors.
The simulation of cutting process challenges established methods due to large deformations and topological changes. In this work a particle finite element method (PFEM) is presented, which combines the benefits of discrete modeling techniques and methods based on continuum mechanics. A crucial part of the PFEM is the detection of the boundary of a set of particles. The impact of this boundary detection method on the structural integrity is examined and a relation of the key parameter of the method to the eigenvalues of strain tensors is elaborated. The influence of important process parameters on the cutting force is studied and a comparison to an empirical relation is presented.
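For orientation, below is a minimal 2D sketch of one common boundary-detection approach for particle clouds (an alpha-shape criterion on a Delaunay triangulation). The PFEM in this work relates its key parameter to the eigenvalues of strain tensors; that link is not reproduced here, and the threshold `alpha` below is a plain user parameter.

```python
# Alpha-shape-style boundary detection for a 2D particle cloud (illustrative only).
import numpy as np
from scipy.spatial import Delaunay

def boundary_edges(points: np.ndarray, alpha: float) -> set:
    """Edges belonging to exactly one kept Delaunay triangle, i.e. the boundary.
    A triangle is kept if its circumradius is below the threshold alpha."""
    tri = Delaunay(points)
    edge_count = {}
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
        area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area2 == 0:
            continue                       # degenerate triangle
        circumradius = la * lb * lc / (2.0 * area2)
        if circumradius > alpha:
            continue                       # triangle too "large": treated as empty space
        for i, j in ((0, 1), (1, 2), (0, 2)):
            edge = tuple(sorted((int(simplex[i]), int(simplex[j]))))
            edge_count[edge] = edge_count.get(edge, 0) + 1
    return {edge for edge, count in edge_count.items() if count == 1}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((400, 2))             # synthetic particle cloud
    print(len(boundary_edges(pts, alpha=0.08)), "boundary edges detected")
```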
Optimal control of partial differential equations is an important task in applied mathematics where it is used in order to optimize, for example, industrial or medical processes. In this thesis we investigate an optimal control problem with tracking type cost functional for the Cattaneo equation with distributed control, that is, \(\tau y_{tt} + y_t - \Delta y = u\). Our focus is on the theoretical and numerical analysis of the limit process \(\tau \to 0\) where we prove the convergence of solutions of the Cattaneo equation to solutions of the heat equation.
We start by deriving both the Cattaneo and the classical heat equation as well as introducing our notation and some functional analytic background. Afterwards, we prove the well-posedness of the Cattaneo equation for homogeneous Dirichlet boundary conditions, that is, we show the existence and uniqueness of a weak solution together with its continuous dependence on the data. We need this in the following, where we investigate the optimal control problem for the Cattaneo equation: We show the existence and uniqueness of a global minimizer for an optimal control problem with tracking type cost functional and the Cattaneo equation as a constraint. Subsequently, we do an asymptotic analysis for \(\tau \to 0\) for both the forward equation and the aforementioned optimal control problem and show that the solutions of these problems for the Cattaneo equation converge strongly to the ones for the heat equation. Finally, we investigate these problems numerically, where we examine the different behaviour of the models and also consider the limit \(\tau \to 0\), suggesting a linear convergence rate.
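Written out for orientation (regularization weight \(\lambda\) and target state \(y_d\) as usual; boundary and initial conditions omitted), the tracking-type problem has the form
\[
  \min_{u}\; J(y,u) \;=\; \frac{1}{2}\,\lVert y - y_d \rVert_{L^2}^{2} \;+\; \frac{\lambda}{2}\,\lVert u \rVert_{L^2}^{2}
  \qquad\text{subject to}\qquad \tau\, y_{tt} + y_t - \Delta y = u,
\]
with the heat-equation problem recovered formally as \(\tau \to 0\).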
The aim of this dissertation is to explain processes in recruitment by gaining a better understanding of how perceptions evolve and how recruitment outcomes and perceptions are influenced. To do so, this dissertation takes a closer look at the formation of fit perceptions, the effects of top employer awards on pre-hire recruitment outcomes, and on how perceptions about external sources are influenced.
Fast Internet content delivery relies on two layers of caches on the request path. Firstly, content delivery networks (CDNs) seek to answer user requests before they traverse slow Internet paths. Secondly, aggregation caches in data centers seek to answer user requests before they traverse slow backend systems. The key challenge in managing these caches is the high variability of object sizes, request patterns, and retrieval latencies. Unfortunately, most existing literature focuses on caching with low (or no) variability in object sizes and ignores the intricacies of data center subsystems.
This thesis seeks to fill this gap with three contributions. First, we design a new caching system, called AdaptSize, that is robust under high object size variability. Second, we derive a method (called Flow-Offline Optimum or FOO) to predict the optimal cache hit ratio under variable object sizes. Third, we design a new caching system, called RobinHood, that exploits variances in retrieval latencies to deliver faster responses to user requests in data centers.
The techniques proposed in this thesis significantly improve the performance of CDN and data center caches. On two production traces from one of the world's largest CDNs, AdaptSize achieves 30-91% higher hit ratios than widely-used production systems, and 33-46% higher hit ratios than state-of-the-art research systems. Further, AdaptSize reduces the latency by more than 30% at the median, 90-percentile, and 99-percentile.
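The following toy sketch conveys the flavor of a size-aware probabilistic admission filter in front of an LRU cache, in the spirit of AdaptSize: small objects are admitted almost surely, large ones only rarely. The exponential admission function and the fixed parameter c are illustrative; AdaptSize itself tunes its admission parameter online using a performance model.

```python
# Toy size-aware admission filter in front of an LRU cache (illustrative only).
import math
import random
from collections import OrderedDict

class SizeAwareLRU:
    def __init__(self, capacity_bytes: int, c: float):
        self.capacity = capacity_bytes
        self.c = c                          # admission parameter (bytes)
        self.cache = OrderedDict()          # object id -> size
        self.used = 0

    def get(self, obj_id) -> bool:
        if obj_id in self.cache:
            self.cache.move_to_end(obj_id)  # refresh LRU position
            return True
        return False

    def admit(self, obj_id, size: int) -> None:
        if random.random() > math.exp(-size / self.c):
            return                          # rejected: too large to be worth caching
        while self.used + size > self.capacity and self.cache:
            _, evicted_size = self.cache.popitem(last=False)   # evict LRU victim
            self.used -= evicted_size
        if size <= self.capacity:
            self.cache[obj_id] = size
            self.used += size

if __name__ == "__main__":
    cache = SizeAwareLRU(capacity_bytes=10_000_000, c=100_000)
    for obj_id, size in [("a", 10_000), ("b", 5_000_000), ("a", 10_000)]:
        if not cache.get(obj_id):
            cache.admit(obj_id, size)
    print("cached objects:", list(cache.cache))
```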
We evaluate the accuracy of our FOO analysis technique on eight different production traces spanning four major Internet companies.
We find that FOO's error is at most 0.3%. Further, FOO reveals that the gap between online policies and OPT is much larger than previously thought: 27% on average, and up to 43% on web application traces.
We evaluate RobinHood with production traces from a major Internet company on a 50-server cluster. We find that RobinHood improves the 99-percentile latency by more than 50% over existing caching systems.
As load imbalances grow, RobinHood's latency improvement can be more than 2x. Further, we show that RobinHood is robust against server failures and adapts to automatic scaling of backend systems.
The results of this thesis demonstrate the power of guiding the design of practical caching policies using mathematical performance models and analysis. These models are general enough to find application in other areas of caching design and future challenges in Internet content delivery.
Increasing costs due to the rising attrition of drug candidates in late developmental phases alongside post-marketing withdrawal of drugs challenge the pharmaceutical industry to further improve their current preclinical safety assessment strategies. One of the most common reasons for the termination of drug candidates is drug induced hepatotoxicity, which more often than not remains undetected in early developmental stages, thus emphasizing the necessity for improved and more predictive preclinical test systems. One reason for the very limited value of currently applied in vitro test systems for the detection of potential hepatotoxic liabilities is the lack of organotypic and tissue-specific physiology of hepatocytes cultured in ordinary monolayer culture formats.
The thesis at hand primarily deals with the evaluation of both two- and three-dimensional cell culture approaches with respect to their relative ability to predict the hepatotoxic potential of drug candidates in early developmental phases. First, different hepatic cell models routinely used in the pharmaceutical industry (primary human hepatocytes as well as the three cell lines HepG2, HepaRG and Upcyte hepatocytes) were investigated in conventional 2D monolayer culture with respect to their ability to detect hepatotoxic effects in simple cytotoxicity studies. Moreover, it could be shown that the global protein expression levels of all cell lines substantially differ from those of primary human hepatocytes, with the least pronounced difference in HepaRG cells.
The introduction of a third dimension through the cultivation of spheroids enables hepatocytes to recapitulate their typical native polarity and furthermore dramatically increases the contact surface of adjacent cells. These differences in cellular architecture have a positive influence on hepatocyte longevity and the expression of drug metabolizing enzymes and transporters, which could be shown via immunofluorescent (IF) staining for at least 14 days in PHH and at least 28 days in HepaRG spheroids, respectively. Additionally, the IF staining of three different phase III transporters (MDR1, MRP2 and BSEP) indicated a bile canalicular network in spheroids of both cell models. A dose-dependent inducibility of important cytochrome P450 isoenzymes in HepaRG spheroids could be shown on the protein level via IF for at least 14 days. CYP inducibility of HepaRG cells cultured in 2D and 3D was compared on the mRNA level for up to 14 days, and inducibility was generally lower in 3D than in 2D under the conditions of this study. In a comparative cytotoxicity study, PHH and HepaRG spheroids as well as HepaRG monolayers were treated with five hepatotoxic drugs for up to 14 days, and viability was measured at three time points (days 3, 7 and 14). A clear time- and dose-dependent onset of the drug-induced hepatotoxic effects was observable under all conditions tested, indicated by a shift of the respective EC50 value towards lower doses with increasing exposure. The observed effects were most pronounced in PHH spheroids, indicating those as the most sensitive cell model in this study. Moreover, HepaRG cells were more sensitive in spheroid culture than in monolayers, which suggests a potential application of spheroids as a long-term test system for the detection of hepatotoxicities with slow onset. Finally, the basal protein expression levels of three antigens (CYP1A2, CYP3A4 and NAT 1/2) were analyzed via Western blotting in HepaRG cells cultured in three different cell culture formats (2D, 3D and QV) in order to estimate the impact of the cell culture conditions on protein expression levels. The QV system enables a pump-driven flow of cell culture media, which introduces both mechanical stimuli through shear and molecular stimuli through dynamic circulation to the monolayer. Those stimuli had a clearly positive effect on the expression levels of the selected antigens, which were increased in comparison to both 2D and 3D. In contrast, HepaRG spheroids showed time-dependent differences with the overall highest levels at day 7.
The studies presented in this thesis delivered valuable information on the increased physiological relevance in dependence on the cell culture format: three-dimensionality as well as the circulation of media lead to a more differentiated phenotype in hepatic cell models. Those cell culture formats are applicable in preclinical drug development in order to obtain more relevant information at early developmental stages and thus help to create a more efficient drug development process. Nonetheless, further studies are necessary to thoroughly characterize, validate and standardize such novel cell culture approaches prior to their routine application in industry.
This thesis consists of five chapters. Chapter one elaborates on the principle of cognitive consistency and provides an overview of what extant research refers to as cognitive consistency theories (e.g., Abelson et al., 1968; Harmon-Jones & Harmon-Jones, 2007; Simon, Stenstrom, & Read, 2015). Moreover, it describes the most prominent theoretical representatives in this context, namely balance theory (Heider, 1946, 1958), congruity theory (Osgood & Tannenbaum, 1955), and cognitive dissonance theory (Festinger, 1957). Chapter one further outlines the role of individuals’ preference for cognitive consistency in the context of financial resource acquisition, the recruitment of employees and the acquisition of customers in the entrepreneurial context.
Chapter two is co-authored by Prof. Dr. Matthias Baum and presents two separate studies in which we empirically investigate the hypothesis that social entrepreneurs face a systematic disadvantage, compared to for-profit entrepreneurs, when seeking to acquire financial resources. Further, our work goes beyond existing research by introducing biased perceptions as a factor that may constrain social enterprise resource acquisition and therefore possibly stall the process of social value creation. On the foundation of role congruity theory (Eagly & Karau, 2002), we focus on the question of whether social entrepreneurs provide signals that are less congruent with the stereotype of successful entrepreneurs and, as such, are perceived as less competent. We further test whether such biased competency perceptions feed forward into a lower probability of receiving funding.
Chapter three is also co-authored by Prof. Dr. Matthias Baum as well as by Eva Henrich. The aim of this chapter is to further our understanding of the early recruitment phase and to contribute to the current debate about how firms should orchestrate their recruitment channels in order to enhance the creation of employer knowledge. We introduce the concept of integrated marketing communication into the recruitment field and examine how the level of consistency regarding job or organization information affects the recall and recognition of that information. We additionally test whether information consistency among multiple recruitment channels influences the information recognition failure quota. Answering this question is important because, by failing to remember the source of recruitment information, job seekers may attribute job information to the wrong firm and thus form incorrect employer knowledge.
Chapter four, which is co-authored by Prof. Dr. Matthias Baum, introduces customer congruity perceptions between a brand and a reward in the context of customer referral programs as an essential driver of the effectiveness of such programs. More precisely, we posit and empirically test a model according to which the decision-making process of the customer recommending a firm involves multiple mental steps and assumes reward perceptions to be an immediate antecedent of brand evaluation, which then ultimately shapes the likelihood of recommendation. The level of congruity/incongruity is set up as an antecedent state and affects the perceived attractiveness of the reward. Our work contributes to the discussion on the optimal level of congruity between a prevailing schema in the mind of the customer and a presented stimulus. In addition, chapter four introduces customer referral programs as a strategic tool for brand managers. Chapter four has been published in Psychology & Marketing.
Chapter five first proposes that marketing strategies specifically designed to induce word-of-mouth (WOM) behavior are particularly relevant for new ventures. Against the background that previous research suggests that customer perceptions of young firm age may influence customer behavior and the degree to which customers support new ventures (e.g., Choi & Shepherd, 2005; Stinchcombe, 1965), we then conduct an experiment to examine the causal mechanisms linking firm age and customer WOM. Chapter five, too, is co-authored by Prof. Dr. Matthias Baum.
The transfer of substrates between two enzymes within a biosynthesis pathway is an effective way to synthesize a specific product and a good way to avoid metabolic interference. This process is called metabolic channeling, and it describes the (in-)direct transfer of an intermediate molecule between the active sites of two enzymes. By forming multi-enzyme cascades, the efficiency of product formation and the flux are elevated, and intermediate products are transferred and converted correctly by the enzymes.
During tetrapyrrole biosynthesis, several substrate transfer events occur and are a prerequisite for optimal pigment synthesis. In this project, the metabolic channeling process during the synthesis of the pink pigment phycoerythrobilin (PEB) was investigated. The ferredoxin-dependent bilin reductases (FDBRs) responsible for PEB formation are PebA and PebB. During pigment synthesis, the intermediate molecule 15,16-dihydrobiliverdin (DHBV) is formed and transferred from PebA to PebB. While earlier studies postulated a metabolic channeling of DHBV, this work revealed new insights into the requirements of this protein-protein interaction. It became clear that the most important requirement for the PebA/PebB interaction is their affinity to the substrate/product DHBV. The already high affinity of both enzymes for each other is enhanced in the presence of DHBV in the binding pocket of PebA, which leads to a rapid transfer to the subsequent enzyme PebB. DHBV is a labile molecule and needs to be rapidly channeled in order to be correctly further reduced to PEB. Fluorescence titration experiments and transfer assays confirmed the enhancing effect of DHBV on its own transfer.
Further insights were gained by creating an active fusion protein of PebA and PebB and comparing its reaction mechanism with that of standard FDBRs. This fusion protein was able to convert biliverdin IXα (BV IXα) to PEB, similar to PebS, which can also convert BV IXα via DHBV to PEB as a single enzyme. The product and intermediate of the reaction were identified via HPLC and UV-Vis spectroscopy.
The results of this work revealed that PebA and PebB interact via a proximity channeling process where the intermediate DHBV plays an important role for the interaction. It also highlights the importance of substrate channeling in the synthesis of PEB to optimize the flux of intermediates through this metabolic pathway.
Computational problems that involve dynamic data, such as physics simulations and program development environments, have been an important subject of study in programming languages. Recent advances in self-adjusting computation made progress towards achieving efficient incremental computation by providing algorithmic language abstractions to express computations that respond automatically to dynamic changes in their inputs. Self-adjusting programs have been shown to be efficient for a broad range of problems via an explicit programming style, where the programmer uses specific primitives to identify, create and operate on data that can change over time.
This dissertation presents implicit self-adjusting computation, a type-directed technique for translating purely functional programs into self-adjusting programs. In this implicit approach, the programmer annotates the (top-level) input types of the programs to be translated. Type inference finds all other types, and a type-directed translation rewrites the source program into an explicitly self-adjusting target program. The type system is related to information-flow type systems and enjoys decidable type inference via constraint solving. We prove that the translation outputs well-typed self-adjusting programs and preserves the source program’s input-output behavior, guaranteeing that translated programs respond correctly to all changes to their data. Using a cost semantics, we also prove that the translation preserves the asymptotic complexity of the source program.
As a second contribution, we present two techniques to facilitate the processing of large and dynamic data in self-adjusting computation. First, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system improves the scalability of self-adjusting computation by eliminating an important assumption of prior work that can lead to recording spurious dependencies. We present a type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. Second, we show a probabilistic-chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation.
We implement implicit self-adjusting computation as an extension to Standard ML with compiler and runtime support. Using the compiler, we are able to incrementalize an interesting set of applications, including standard list and matrix benchmarks, ray tracer, PageRank, sparse graph connectivity, and social circle counts. Our experiments show that our compiler incrementalizes existing code with only trivial amounts of annotation, and the resulting programs bring asymptotic improvements to large datasets from real-world applications, leading to orders of magnitude speedups in practice.
Tables or ranked lists summarize facts about a group of entities in a concise and structured fashion. They are found in all kinds of domains and are easily comprehensible by humans. Some globally prominent examples of such rankings are the tallest buildings in the world, the richest people in Germany, or the most powerful cars. The availability of vast amounts of tables or rankings from the open domain allows different ways to explore data. Computing the similarity between ranked lists, in order to find those lists where entities are presented in a similar order, carries important analytical insights. This thesis presents a novel query-driven Locality Sensitive Hashing (LSH) method to efficiently find similar top-k rankings for a given input ranking. Experiments show that the proposed method provides far better performance than inverted-index-based approaches; in particular, it is able to outperform the popular prefix-filtering method. Additionally, an LSH-based probabilistic pruning approach is proposed that optimizes the space utilization of inverted indices while still maintaining a user-provided recall requirement for the results of the similarity search. Further, this thesis addresses the problem of automatically identifying interesting categorical attributes in order to explore entity-centric data by organizing it into meaningful categories. Our approach proposes novel statistical measures, beyond known concepts like information entropy, to capture the distribution of data and to train a classifier that predicts which categorical attribute will be perceived as suitable by humans for data categorization. We further discuss how the information on useful categories can be applied in PANTHEON and PALEO, two data exploration frameworks developed in our group.
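The following is a generic MinHash/banding sketch that illustrates the LSH idea for finding similar top-k rankings; the query-driven scheme and the ranking distance used in the thesis are not reproduced. Representing a ranking as a set of (item, position-bucket) features is an assumption made here for illustration.

```python
# Generic MinHash + banding LSH over top-k rankings (illustrative only).
import random
from collections import defaultdict

random.seed(42)
NUM_HASHES, BANDS = 20, 5                     # 5 bands of 4 rows each
SALTS = [random.getrandbits(32) for _ in range(NUM_HASHES)]

def features(ranking, bucket=3):
    """Turn a top-k ranking into a set of (item, position-bucket) features."""
    return {(item, pos // bucket) for pos, item in enumerate(ranking)}

def minhash(feats):
    return [min(hash((salt, f)) & 0xFFFFFFFF for f in feats) for salt in SALTS]

def index(rankings):
    """Hash each ranking's signature bands into buckets."""
    buckets = defaultdict(set)
    rows = NUM_HASHES // BANDS
    for rid, ranking in rankings.items():
        sig = minhash(features(ranking))
        for b in range(BANDS):
            buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].add(rid)
    return buckets

def candidates(query, buckets):
    sig = minhash(features(query))
    rows = NUM_HASHES // BANDS
    found = set()
    for b in range(BANDS):
        found |= buckets.get((b, tuple(sig[b * rows:(b + 1) * rows])), set())
    return found

if __name__ == "__main__":
    data = {"r1": ["a", "b", "c", "d", "e"], "r2": ["a", "b", "c", "e", "d"],
            "r3": ["x", "y", "z", "w", "v"]}
    buckets = index(data)
    print("candidates for an r1-like query:", candidates(["a", "b", "c", "d", "f"], buckets))
```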
N-containing heterocycles have received strong attention in organic synthesis because of their importance for the pharmaceutical and material sciences. Nitrogen bridges inorganic salts and biomolecules, and the search for convenient methods to form C-N bonds has become a hot topic in recent decades.
Since the beginning of the 20th century, transition-metal-catalyzed coupling reactions have become well known and widely used in organic research and have achieved significant progress. On the other hand, the less toxic but more challenging transition-metal-free coupling methods retain further potential.
With the evolution of amination reactions and oxidants, ever more effective, simplified, and atom-economic synthesis methods will emerge. These developments motivated the investigation of novel cross-dehydrogenative-coupling (CDC) amination methods as the topic of this PhD research.
Thus, we selected phenothiazine derivatives as the N-nucleophiles and phenols as the C-nucleophiles. To achieve transition-metal-free CDC aminations of phenols with phenothiazines, we screened a series of both common and uncommon oxidants.
First, we studied the reaction in the presence of cumene and O2. The proposed mechanism is initiated by a Hock process, which forms peroxo species in situ that act as initiators of the reaction. Initial infrared analysis also indicated a strong O-H···N interaction.
In the second method, a series of iodine reagents of different oxidation states was tested to achieve C-N bond formation between phenols and phenothiazines. A simplified and more efficient method was developed, which also tolerates a wider scope of phenols. Several control experiments were conducted to probe the plausible pathway, and a large-scale synthesis of the target molecule was performed successfully.
We then focused on the cross-coupling of pre-oxidized (iminated) phenothiazines with ubiquitous phenols and indoles. In this task, we first regioselectively synthesized novel iminated phenothiazine derivatives using the traditional biocide and mild disinfectant Chloramine T. The phenothiazinimine then underwent an ultra-simple condensation with phenol or indole coupling partners under simplified conditions. Parallel reactions were performed to investigate the plausible pathway.
Nowadays, the increasing demand for ever more customizable products has emphasized the need for more flexible and fast-changing manufacturing systems. In this environment, simulation has become a strategic tool for the design, development, and implementation of such systems. Simulation represents a relatively low-cost and risk-free alternative for testing the impact and effectiveness of changes in different aspects of manufacturing systems.
Systems that use this kind of data in decision-making processes are known as Simulation-Based Decision Support Systems (SB-DSS). Although most SB-DSS provide a powerful variety of tools for the automatic and semi-automatic analysis of simulations, visual and interactive alternatives for the manual exploration of the results are still open to further development.
The work in this dissertation is focused on enhancing decision makers’ analysis capabilities by making simulation data more accessible through the incorporation of visualization and analysis techniques. To demonstrate how this goal can be achieved, two systems were developed. The first system, viPhos – standing for visualization of Phos: Greek for light –, is a system that supports lighting design in factory layout planning. viPhos combines simulation, analysis, and visualization tools and techniques to facilitate the global and local (overall factory or single workstations, respectively) interactive exploration and comparison of lighting design alternatives.
The second system, STRAD - standing for Spatio-Temporal Radar -, is a web-based system for the spatio/attribute-temporal analysis of event data. Since decision making processes in manufacturing also involve the monitoring of the systems over time, STRAD enables the multilevel exploration of event data (e.g., simulated or historical registers of the status of machines or results of quality control processes).
A set of four case studies and one proof of concept prepared for both systems demonstrate the suitability of the visualization and analysis strategies adopted for supporting decision making processes in diverse application domains. The results of these case studies indicate that both the systems and the techniques included in them can be generalized and extended to support the analysis of different tasks and scenarios.
The authors explore the intrinsic trade-off in a DRAM between the power consumption (due to refresh) and the reliability. Their unique measurement platform allows tailoring to the design constraints depending on whether power consumption, performance or reliability has the highest design priority. Furthermore, the authors show how this measurement platform can be used for reverse engineering the internal structure of DRAMs and how this knowledge can be used to improve DRAM’s reliability.
Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is well explored for Latin scripts, some of the languages based on the Arabic cursive script are not yet explored. This is due to many factors, most importantly the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language that needs considerable exploration towards OCR. In order to develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and a huge written collection. These written materials present content of simple to complex nature and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic complexities of cursive script and (ii) complexities specific to Pashto. Generic complexities are cursiveness, context dependency, breaker-character anomalies, and space anomalies. Pashto-specific complexities are shape variations of a single character and shape similarity among some of the additional Pashto characters. Existing research in the area of Arabic OCR does not provide an end-to-end solution for the mentioned complexities and therefore cannot be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified the characters responsible for the challenges mentioned above. At the data level, a comprehensive dataset of real images of hand-scribed content is introduced. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, that connects the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by a deep learning paradigm for recognizing Pashto cursive script within the framework of recurrent neural networks (RNNs). The proposed Pashto text recognizer is based on a Long Short-Term Memory (LSTM) network and achieves a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application that provides a flexible and generic end-to-end Pashto OCR system.
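Below is a minimal PyTorch sketch of the kind of LSTM-plus-CTC line recognizer described above; the feature size, hidden size, and the assumed 64-symbol alphabet are placeholders rather than the thesis's actual configuration.

```python
# Minimal bidirectional LSTM + CTC line recognizer (illustrative placeholder sizes).
import torch
import torch.nn as nn

class LineRecognizer(nn.Module):
    def __init__(self, feat_dim=48, hidden=128, num_classes=64):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes + 1)   # +1 for the CTC blank

    def forward(self, x):                  # x: (batch, time, feat_dim) column features
        out, _ = self.blstm(x)
        return self.fc(out).log_softmax(dim=-1)

if __name__ == "__main__":
    model = LineRecognizer()
    ctc = nn.CTCLoss(blank=64, zero_infinity=True)
    x = torch.randn(2, 100, 48)                        # two dummy text-line feature maps
    logp = model(x).permute(1, 0, 2)                   # CTC expects (time, batch, classes)
    targets = torch.randint(0, 64, (2, 12))            # dummy label sequences
    loss = ctc(logp, targets,
               input_lengths=torch.full((2,), 100, dtype=torch.long),
               target_lengths=torch.full((2,), 12, dtype=torch.long))
    print("CTC loss:", float(loss))
```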
The impact of this thesis is not specific to the Pashto language alone; it is also beneficial for other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight into, and proposes solutions for, almost all generic complexities associated with Arabic, Persian, and Urdu. For example, the anomaly caused by breaker characters, which is shared by about 70 languages that mainly use the Arabic script, is analyzed in depth. This thesis presents a solution to this issue that is equally beneficial for almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how society may benefit from it: the main advantages are bringing historical and almost vanished documents back to life and providing the opportunity to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of technical aspects, because this thesis empirically explores the recognition challenges that are solely related to the Pashto language, both regarding the character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
Autonomous driving is disrupting conventional automotive development. In fact, autonomous driving kicks off the consolidation of control units, i.e. the transition from distributed Electronic Control Units (ECUs) to centralized domain controllers. Platforms like Audi’s zFAS demonstrate this very clearly, where GPUs, custom SoCs, microcontrollers, and FPGAs are integrated on a single domain controller in order to perform sensor fusion, processing and decision making on a single Printed Circuit Board (PCB). The communication between these heterogeneous components and the algorithms for Advanced Driving Assistant Systems (ADAS) themselves requires a huge amount of memory bandwidth, which will bring the Memory Wall from High Performance Computing (HPC) and data centers directly into our cars. In this paper we highlight the roles and issues of Dynamic Random Access Memories (DRAMs) in future autonomous driving architectures.
The design of the fifth generation (5G) cellular network should take into account the emerging services with divergent quality of service requirements. For instance, vehicle-to-everything (V2X) communication is required to facilitate local data exchange and therefore improve the automation level in automated driving applications. In this work, we inspect the performance of two different air interfaces (i.e., LTE-Uu and PC5) which are proposed by the third generation partnership project (3GPP) to enable V2X communication. With these two air interfaces, V2X communication can be realized by transmitting data packets either over the network infrastructure or directly among traffic participants. In addition, the ultra-high reliability requirement in some V2X communication scenarios cannot be fulfilled with any single transmission technology (i.e., either LTE-Uu or PC5). Therefore, we discuss how to efficiently apply multiple radio access technologies (multi-RAT) to improve the communication reliability. In order to exploit multi-RAT in an efficient manner, both independent and coordinated transmission schemes are designed and inspected. Subsequently, the conventional uplink is also extended to the case where a base station can receive data packets through both the LTE-Uu and PC5 interfaces. Moreover, different multicast-broadcast single-frequency network (MBSFN) area mapping approaches are proposed to improve the communication reliability in the LTE downlink. Last but not least, a system level simulator is implemented in this work. The simulation results not only provide insights into the performance of the different technologies but also validate the effectiveness of the proposed multi-RAT scheme.
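The basic motivation for duplicating packets over both interfaces can be stated in one line: if transmissions over LTE-Uu and PC5 fail independently with probabilities \(p_{\mathrm{Uu}}\) and \(p_{\mathrm{PC5}}\), the residual failure probability of the combined (independent) scheme is
\[
  p_{\mathrm{fail}} \;=\; p_{\mathrm{Uu}} \cdot p_{\mathrm{PC5}},
  \qquad\text{e.g. } 10^{-2}\cdot 10^{-2} = 10^{-4},
\]
so two moderately reliable links can jointly reach a reliability target that neither meets alone (the independence assumption is, of course, an idealization).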
Asynchronous concurrency is a widespread way of writing programs that deal with many short tasks. It is the programming model behind event-driven concurrency, as exemplified by GUI applications, where the tasks correspond to event handlers, web applications based around JavaScript, the implementation of web browsers, but also server-side software and operating systems.
This model is widely used because it provides the performance benefits of concurrency together with easier programming than multi-threading. While there is ample work on how to implement asynchronous programs, and significant work on testing and model checking, little research has been done on handling asynchronous programs that involve heap manipulation, or on how to automatically optimize code for asynchronous concurrency.
This thesis addresses the question of how we can reason about asynchronous programs while considering the heap, and how to use this reasoning to optimize programs. The work is organized along three main questions: (i) How can we reason about asynchronous programs without ignoring the heap? (ii) How can we use such reasoning techniques to optimize programs involving asynchronous behavior? (iii) How can we transfer these reasoning and optimization techniques to other settings?
The unifying idea behind all the results in the thesis is the use of an appropriate model encompassing global state and a promise-based model of asynchronous concurrency. For the first question, we start from refinement type systems for sequential programs and extend them to perform precise resource-based reasoning in terms of heap contents, known outstanding tasks, and promises. This extended type system is known as Asynchronous Liquid Separation Types, or ALST for short. We implement ALST for OCaml programs using the Lwt library.
For the second question, we consider a family of possible program optimizations, described by a set of rewriting rules, the DWFM rules. The rewriting rules are type-driven: we only guarantee soundness for programs that are well-typed under ALST. We give a soundness proof based on a semantic interpretation of ALST that allows us to show behavior inclusion of pairs of programs.
For the third question, we address an optimization problem from industrial practice: normally, JavaScript files that are referenced in an HTML file are loaded synchronously, i.e., when a script tag is encountered, the browser must suspend parsing, then load and execute the script, and only afterwards continue parsing the HTML. In practice, however, there are numerous JavaScript files for which asynchronous loading would be perfectly sound. First, we sketch a hypothetical optimization using the DWFM rules and a static analysis. To actually implement the analysis, we modify the approach to use a dynamic analysis. This analysis, known as JSDefer, enables us to analyze real-world web pages and provides experimental evidence for the efficiency of this transformation.
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit into the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of possible types of prior knowledge.
Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal, in a computationally efficient way, the system matrix of a Boolean model whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response.
Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
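For orientation, the vector form of Boolean variables used in the semi-tensor product framework encodes \(1 \sim \delta_2^1 = (1,0)^{\top}\) and \(0 \sim \delta_2^2 = (0,1)^{\top}\); a logical connective then becomes a structure matrix acting via the STP \(\ltimes\). For example, conjunction reads
\[
  x \wedge y \;=\; M_c \ltimes x \ltimes y,
  \qquad
  M_c = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 \end{pmatrix},
\]
and an entire Boolean network update collapses into a single system matrix, which is the object the identification procedure recovers (standard STP background, stated here only as context).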
The core muscles play a central role in stabilizing the head during headers in soccer. The objective of this study was to examine the influence of a fatigued core musculature on the acceleration of the head during jump headers and run headers. Acceleration of the head was measured in a pre-post-design in 68 soccer players (age: 21.5 ± 3.8 years, height: 180.0 ± 13.9 cm, weight: 76.9 ± 8.1 kg). Data were recorded by means of a telemetric 3D acceleration sensor and with a pendulum header. The treatment encompassed two exercises each for the ventral, lateral, and dorsal muscle chains. The acceleration of the head between pre- and post-test was reduced by 0.3 G (p = 0.011) in jump headers and by 0.2 G (p = 0.067) in run headers. An additional analysis of all pretests showed an increased acceleration in run headers when compared to stand headers (p < 0.001) and jump headers (p < 0.001). No differences were found in the sub-group comparisons: semi-professional vs. recreational players, offensive vs. defensive players. Based on the results, we conclude that the acceleration of the head after fatiguing the core muscles does not increase, which stands in contrast to postulated expectations. More tests with accelerated soccer balls are required for a conclusive statement.
The complexity of modern real-time systems is increasing day by day. This inevitable rise in complexity predominantly stems from two contradicting requirements, i.e., the ever increasing demand for functionality and the required low cost of the final product. The development of modern multi-processors and a variety of network protocols and architectures have made such a leap in complexity and functionality possible. However, efficient use of these multi-processors and network architectures is still a major problem. Moreover, the software design and its development process need improvements in order to support rapid prototyping for ever changing system designs. Therefore, in this dissertation, we provide solutions for different problems faced in the development and deployment process of real-time systems. The contributions presented in this thesis enable efficient utilization of system resources, rapid design & development and component modularity & portability.
In order to ease the certification process, time-triggered computation model is often used in distributed systems. However, time-triggered scheduling is NP-hard, due to which the process of schedule generation for complex large systems becomes convoluted. Large scheduler run-times and low scalability are two major problems with time-triggered scheduling. To solve these problems, we present a modular real-time scheduler based on a novel search-tree pruning technique, which consumes less time (compared to the state-of-the-art) in order to schedule tasks on large distributed time-triggered systems. In order to provide end-to-end guarantees, we also extend our modular scheduler to quickly generate schedules for time-triggered network traffic in large TTEthernet based networks. We evaluate our schedulers on synthetic but practical task-sets and demonstrate that our pruning technique efficiently reduces scheduler run-times and exhibits adequate scalability for future time-triggered distributed systems.
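To make the notion of search-tree pruning concrete, here is a toy backtracking scheduler for a single time-triggered resource that abandons a branch as soon as some unplaced task can no longer fit inside its window. The task model, the integer time grid, and the simple feasibility check are all illustrative; the thesis's modular scheduler and its pruning criterion are far more elaborate.

```python
# Toy time-triggered scheduling by backtracking with a simple pruning rule.
from typing import List, Tuple, Optional

Task = Tuple[str, int, int, int]   # (name, release, deadline, duration)

def schedule(tasks: List[Task], horizon: int) -> Optional[dict]:
    # place the most constrained (least slack, longest) tasks first
    order = sorted(tasks, key=lambda t: (t[2] - t[1] - t[3], -t[3]))
    busy = [False] * horizon
    placement = {}

    def feasible_later(idx: int) -> bool:
        """Prune: every unplaced task must still have enough free slots in its window."""
        for _, rel, dl, dur in order[idx:]:
            if sum(not busy[t] for t in range(rel, min(dl, horizon))) < dur:
                return False
        return True

    def place(idx: int) -> bool:
        if idx == len(order):
            return True
        name, rel, dl, dur = order[idx]
        for start in range(rel, min(dl, horizon) - dur + 1):
            window = range(start, start + dur)
            if any(busy[t] for t in window):
                continue
            for t in window:
                busy[t] = True
            if feasible_later(idx + 1) and place(idx + 1):   # prune before recursing
                placement[name] = start
                return True
            for t in window:
                busy[t] = False
        return False

    return placement if place(0) else None

if __name__ == "__main__":
    tasks = [("sensor", 0, 4, 2), ("fusion", 2, 8, 3), ("actuate", 5, 10, 2)]
    print(schedule(tasks, horizon=10))
```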
In safety critical systems, the certification process also requires strict isolation between independent components. This isolation is enforced by utilizing a resource partitioning approach, where components of different criticality execute in different partitions (each temporally and spatially isolated from the others). However, existing partitioning approaches use periodic servers or tasks to service aperiodic activities. This approach leads to utilization loss and potentially to large latencies. In contrast to the periodic approaches, state-of-the-art aperiodic task admission algorithms do not suffer from problems like utilization loss. However, these approaches do not support partitioned scheduling or a mixed-criticality execution environment. To solve this problem, we propose an algorithm for online admission of aperiodic tasks which provides job execution flexibility and jitter control and leads to lower latencies of aperiodic tasks.
For safety critical systems, fault-tolerance is one of the most important requirements. In time-triggered systems, modes are often used to ensure survivability against faults, i.e., when a fault is detected, current system configuration (or mode) is changed such that the overall system performance is either unaffected or degrades gracefully. In literature, it has been asserted that a task-set might be schedulable in individual modes but unschedulable during a mode-change. Moreover, conventional mode-change execution strategies might cause significant delays until the next mode is established. In order to address these issues, in this dissertation, we present an approach for schedulability analysis of mode-changes and propose mode-change delay reduction techniques in distributed system architecture defined by the DREAMS project. We evaluate our approach on an avionics use case and demonstrate that our approach can drastically reduce mode-change delays.
In order to manage increasing system complexity, real-time applications also require new design and development technologies. Apart from fulfilling the technical requirements, the main features required from such technologies include modularity and re-usability. AUTOSAR is one of these technologies in the automotive industry, which defines an open standard for the software architecture of a real-time operating system. However, being an industrial standard, the available proprietary tools do not support model extensions and/or new developments by third parties and, therefore, hinder software evolution. To solve this problem, we developed an open-source AUTOSAR toolchain which supports application development and code generation for several modules. In order to exhibit the capabilities of our toolchain, we developed two case studies. These case studies demonstrate that our toolchain generates valid artifacts, avoids dirty workarounds and supports application development.
In order to cope with evolving system designs and hardware platforms, rapid development of scheduling and analysis algorithms is required. To ease the process of algorithm development, a number of scheduling and analysis frameworks have been proposed in the literature. However, these frameworks focus on specific classes of applications and are limited in functionality. In this dissertation, we provide the skeleton of a scheduling and analysis framework for real-time systems. In order to support rapid development, we also highlight different development components which promote code reuse and component modularity.
Ecophysiological characterizations of photoautotrophic communities are not only necessary to identify the response of carbon fixation related to different climatic factors, but also to evaluate risks connected to changing environments. In biological soil crusts (BSCs), the description of ecophysiological features is difficult, due to the high variability in taxonomic composition and variable methodologies applied. Especially for BSCs in early successional stages, the available datasets are rare or focused on individual constituents, although these crusts may represent the only photoautotrophic component in many heavily disturbed ruderal areas, such as parking lots or building areas with increasing surface area worldwide. We analyzed the response of photosynthesis and respiration to changing BSC water contents (WCs), temperature and light in two early successional BSCs. We investigated whether the response of these parameters was different between intact BSC and the isolated dominating components. BSCs dominated by the cyanobacterium Nostoc commune and dominated by the green alga Zygogonium ericetorum were examined. A major divergence between the two BSCs was their absolute carbon fixation rate on a chlorophyll basis, which was significantly higher for the cyanobacterial crust. Nevertheless, independent of species composition, both crust types and their isolated organisms had convergent features such as high light acclimatization and a minor and very late-occurring depression in carbon uptake at water suprasaturation. This particular setup of ecophysiological features may enable these communities to cope with a high variety of climatic stresses and may therefore be a reason for their success in heavily disturbed areas with ongoing human impact. However, the shape of the response was different for intact BSC compared to separated organisms, especially in absolute net photosynthesis (NP) rates. This emphasizes the importance of measuring intact BSCs under natural conditions for collecting reliable data for meaningful analysis of BSC ecosystem services.
Background: Aneuploidy, or abnormal chromosome numbers, severely alters cell physiology and is widespread in
cancers and other pathologies. Using model cell lines engineered to carry one or more extra chromosomes, it has
been demonstrated that aneuploidy per se impairs proliferation, leads to proteotoxic as well as replication stress
and triggers conserved transcriptome and proteome changes.
Results: In this study, we analysed for the first time miRNAs and demonstrate that their expression is altered in
response to chromosome gain. The miRNA deregulation is independent of the identity of the extra chromosome
and specific to individual cell lines. By cross-omics analysis we demonstrate that although the deregulated miRNAs
differ among individual aneuploid cell lines, their known targets are predominantly associated with cell development,
growth and proliferation, pathways known to be inhibited in response to chromosome gain. Indeed, we show that up
to 72% of these targets are downregulated and the associated miRNAs are overexpressed in aneuploid cells, suggesting
that the miRNA changes contribute to the global transcription changes triggered by aneuploidy. We identified
hsa-miR-10a-5p to be overexpressed in the majority of aneuploid cells. Hsa-miR-10a-5p enhances translation of a
subset of mRNAs that contain the so-called 5’TOP motif, and we show that its upregulation in aneuploids provides
resistance to the starvation-induced shutdown of ribosomal protein translation.
Conclusions: Our work suggests that the changes of the microRNAome contribute on one hand to the adverse
effects of aneuploidy on cell physiology, and on the other hand to the adaptation to aneuploidy by supporting
translation under adverse conditions.
Keywords: Aneuploidy, Cancer, miRNA, miR-10a-5p, Trisomy
Areal optical surface topography measurement is an emerging technology for industrial quality control. However, neither calibration procedures nor the utilization of material measures are standardized. State of the art is the calibration of a set of metrological characteristics with multiple calibration samples (material measures). Here, we propose a new calibration sample (artefact) capable of providing the entire set of relevant metrological characteristics within only one single sample. Our calibration artefact features multiple material measures and is manufactured with two-photon laser lithography (direct laser writing, DLW). This enables a holistic calibration of areal topography measuring instruments with only one series of measurements and without changing the sample.
Based on the Lindblad master equation approach we obtain a detailed microscopic model of photons in a dye-filled cavity, which features condensation of light. To this end we generalise a recent non-equilibrium approach of Kirton and Keeling such that the dye-mediated contribution to the photon-photon interaction in the light condensate is accessible due to an interplay of coherent and dissipative dynamics. We describe the steady-state properties of the system by analysing the resulting equations of motion of both photonic and matter degrees of freedom. In particular, we discuss the existence of two limiting cases for steady states: photon Bose-Einstein condensate and laser-like. In the former case, we determine the corresponding dimensionless photon-photon interaction strength by relying on realistic experimental data and find a good agreement with previous theoretical estimates. Furthermore, we investigate how the dimensionless interaction strength depends on the respective system parameters.
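For reference, the general Lindblad (GKSL) form of the master equation underlying this approach is

\[
  \dot{\rho} \;=\; -\frac{i}{\hbar}\,[H,\rho]
  \;+\; \sum_k \gamma_k \Bigl( L_k \rho L_k^\dagger - \tfrac{1}{2}\bigl\{ L_k^\dagger L_k,\, \rho \bigr\} \Bigr),
\]

where \(\rho\) is the density operator, \(H\) the system Hamiltonian, \(L_k\) the jump operators and \(\gamma_k\) the corresponding rates; the specific Hamiltonian and jump operators describing photons, dye molecules and their coupling are those derived in the work itself.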
We studied the development of cognitive abilities related to intelligence and creativity
(N = 48, 6–10 years old), using a longitudinal design (over one school year), in order
to evaluate an Enrichment Program for gifted primary school children initiated by
the government of the German federal state of Rhineland-Palatinate (Entdeckertag
Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school
children (N = 24), identified earlier as intellectually gifted and selected to join the
ET program was compared to a gender-, class- and IQ- matched group of control
children that did not participate in this program. All participants performed the Standard
Progressive Matrices (SPM) test, which measures intelligence in well-defined problem
space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined
problem space; and the test of creative thinking-drawing production (TCT-DP), which
measures creativity, also in ill-defined problem space. Results revealed that problem
space matters: the ET program is effective only for the improvement of intelligence
operating in well-defined problem space. An effect was found for intelligence as
measured by SPM only, but neither for intelligence operating in ill-defined problem space
(CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem
spaces presented, different cognitive abilities are elicited in the same child. Therefore,
enrichment programs for gifted children, but also for children attending traditional schools,
should provide opportunities to develop cognitive abilities related to intelligence,
operating in both well- and ill-defined problem spaces, and to creativity in parallel,
using an interactive approach.
Certain brain tumours are very hard to treat with radiotherapy due to their irregular shape, caused by the infiltrative nature of the tumour cells. To enhance the estimation of the tumour extent, one may use a mathematical model. As the brain structure plays an important role for cell migration, it has to be included in such a model. This is done via diffusion-MRI data. We set up a multiscale model class accounting, among other effects, for the integrin-mediated movement of cancer cells in the brain tissue and for integrin-mediated proliferation. Moreover, we model a novel chemotherapy in combination with standard radiotherapy.
Thereby, we start on the cellular scale in order to describe migration. Then we deduce mean-field equations on the mesoscopic (cell density) scale, on which we also incorporate cell proliferation. To reduce the phase space of the mesoscopic equation, we use parabolic scaling and deduce an effective description in the form of a reaction-convection-diffusion equation on the macroscopic spatio-temporal scale. On this scale we perform three-dimensional numerical simulations for the tumour cell density, thereby incorporating real diffusion tensor imaging data. To this aim, we present programs for the data processing that take the raw medical data and convert it into the form required by the numerical simulation. Thanks to the reduction of the phase space, the numerical simulations are fast enough to enable application in clinical practice.
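In its generic form, the effective macroscopic description obtained by the parabolic scaling is a reaction-convection-diffusion equation for the tumour cell density \(c(x,t)\),

\[
  \partial_t c \;=\; \nabla\cdot\bigl(\mathbb{D}_T(x)\,\nabla c\bigr) \;-\; \nabla\cdot\bigl(c\,\mathbf{a}(x)\bigr) \;+\; f(c,x),
\]

with a tumour diffusion tensor \(\mathbb{D}_T\) derived from the DTI data, a drift velocity \(\mathbf{a}\) and a proliferation term \(f\). This is only a sketch of the typical structure; the concrete coefficients and source terms are those deduced in the thesis.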
Composite materials are used in many modern tools and engineering applications and
consist of two or more materials that are intermixed. Features like inclusions in a matrix
material are often very small compared to the overall structure. Volume elements that
are characteristic for the microstructure can be simulated and their elastic properties are
then used as a homogeneous material on the macroscopic scale.
Moulinec and Suquet [2] solve the so-called Lippmann-Schwinger equation, a reformulation of the equations of elasticity in periodic homogenization, using truncated
trigonometric polynomials on a tensor product grid as ansatz functions.
In this thesis, we generalize their approach to anisotropic lattices and extend it to
anisotropic translation invariant spaces. We discretize the partial differential equation
on these spaces and prove the convergence rate. The speed of convergence depends on
the smoothness of the coefficients and the regularity of the ansatz space. The spaces of
translates unify the ansatz of Moulinec and Suquet with de la Vallée Poussin means and
periodic Box splines, including the constant finite element discretization of Brisard and
Dormieux [1].
For finely resolved images, sampling on a coarser lattice reduces the computational
effort. We introduce mixing rules as the means to transfer fine-grid information to the
smaller lattice.
Finally, we show the effect of the anisotropic pattern, the space of translates, and the
mixing rules, as well as the convergence of the method, on two- and three-dimensional examples.
References
[1] S. Brisard and L. Dormieux. “FFT-based methods for the mechanics of composites:
A general variational framework”. In: Computational Materials Science 49.3 (2010),
pp. 663–671. doi: 10.1016/j.commatsci.2010.06.009.
[2] H. Moulinec and P. Suquet. “A numerical method for computing the overall response
of nonlinear composites with complex microstructure”. In: Computer Methods in
Applied Mechanics and Engineering 157.1-2 (1998), pp. 69–94. doi: 10.1016/s0045-7825(97)00218-1.
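For illustration, the basic fixed-point scheme of Moulinec and Suquet [2] can be sketched in a few lines for a scalar (thermal conductivity) analogue of the elasticity problem. The Python sketch below iterates the periodic Lippmann-Schwinger equation on a regular tensor product grid using the FFT; it is a simplified illustration only and does not reproduce the anisotropic lattices, translation invariant ansatz spaces or mixing rules studied in this thesis. The conductivity image k and the prescribed mean gradient E are assumed inputs.

    import numpy as np

    def moulinec_suquet_scalar(k, E=(1.0, 0.0), k0=None, tol=1e-8, maxit=500):
        """Basic FFT scheme for periodic homogenization (scalar conductivity analogue)."""
        k = np.asarray(k, dtype=float)
        N1, N2 = k.shape
        k0 = 0.5 * (k.min() + k.max()) if k0 is None else k0  # reference medium
        xi1 = np.fft.fftfreq(N1)[:, None] * np.ones((1, N2))
        xi2 = np.fft.fftfreq(N2)[None, :] * np.ones((N1, 1))
        xi_sq = xi1**2 + xi2**2
        xi_sq[0, 0] = 1.0                      # avoid division by zero at zero frequency
        e = np.stack([np.full_like(k, E[0]), np.full_like(k, E[1])])  # initial gradient field
        for _ in range(maxit):
            tau = (k - k0) * e                 # polarization field
            tau_hat = np.fft.fft2(tau, axes=(1, 2))
            # Green operator in Fourier space: Gamma0(xi) tau = xi (xi . tau_hat) / (k0 |xi|^2)
            proj = (xi1 * tau_hat[0] + xi2 * tau_hat[1]) / (k0 * xi_sq)
            g1, g2 = xi1 * proj, xi2 * proj
            g1[0, 0] = 0.0
            g2[0, 0] = 0.0                     # enforce the prescribed mean gradient
            e_new = np.stack([E[0] - np.real(np.fft.ifft2(g1)),
                              E[1] - np.real(np.fft.ifft2(g2))])
            err = np.linalg.norm(e_new - e) / max(np.linalg.norm(e_new), 1e-30)
            e = e_new
            if err < tol:
                break
        flux = k * e
        return e, flux.mean(axis=(1, 2))       # local gradient field and effective flux

    # usage on a synthetic two-phase microstructure
    k = np.where(np.random.rand(64, 64) < 0.3, 10.0, 1.0)
    _, effective_flux = moulinec_suquet_scalar(k)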
The Power and Energy Student Summit (PESS) is designed for students, young professionals and PhD-students in the field of power engineering. PESS offers the possibility to gain first experience in presentation, publication and discussion with a renowned audience of specialists. Therefore, the conference is accompanied and supervised by established scientists and experts. The venue changes every year. In 2018, the University of Kaiserslautern held the eighth PESS conference. This document presents the submissions of this conference.
In the present master’s thesis we investigate the connection between derivations and homogeneities of complete analytic algebras. We prove a theorem which describes a specific set of generators for the module of those derivations of an analytic algebra \(R\) which map the maximal ideal of \(R\) into itself. It turns out that this set has a structure similar to a Cartan subalgebra and contains information regarding multi-homogeneity. In order to prove this theorem, we extend the notion of grading by Scheja and Wiebe to projective systems and state the connection between multi-gradings and pairwise commuting diagonalizable derivations. We prove a theorem similar to Cartan’s Conjugacy Theorem in the setting of infinite-dimensional Lie algebras which arise as projective limits of finite-dimensional Lie algebras. Using this result, we can show that the structure of the aforementioned set of generators is an intrinsic property of the analytic algebra. At the end we state an algorithm which is, in principle, able to compute the maximal multi-homogeneity of a complete analytic algebra.
Due to the steadily growing flood of data, the appropriate use of visualizations for efficient data analysis is more important today than ever before. In many application domains, the data flood stems from processes that can be represented by node-link diagrams. Within such a diagram, nodes may represent intermediate results (or products), system states (or snapshots), milestones or real (and possibly georeferenced) objects, while links (edges) can embody transition conditions, transformation processes or real physical connections. Inspired by the engineering sciences application domain and the research project “SinOptiKom: Cross-sectoral optimization of transformation processes in municipal infrastructures in rural areas”, a platform for the analysis of transformation processes has been researched and developed based on a geographic information system (GIS). Owing to the increased amount of available and relevant data, a particular challenge is the simultaneous visualization of several attributes within a single diagram instead of spreading them across multiple diagrams. Therefore, two approaches have been developed which utilize the available space between nodes in a diagram to display additional information.
Motivated by the necessity of communicating results appropriately to various stakeholders, a concept for a universal, dashboard-based analysis platform has been developed. This web-based approach is conceptually capable of displaying data from various data sources and has been supplemented by collaboration features such as sharing, annotating and presenting.
In order to demonstrate the applicability and usability of newly developed applications, visualizations or user interfaces, extensive evaluations with human users are often inevitable. To reduce the complexity and effort of conducting an evaluation, the browser-based evaluation framework (BREF) has been designed and implemented. Owing to its universal and flexible character, virtually any visualization or interaction running in the browser can be evaluated with BREF without any additional application (except for a modern web browser) on the target device. BREF has already proved itself in a wide range of application areas during its development and has since grown into a comprehensive evaluation tool.
Collaboration aims to increase the efficiency of problem solving and decision making by bringing diverse areas of expertise together, i.e., teams of experts from various disciplines, all necessary to come up with acceptable concepts. This dissertation is concerned with the design of highly efficient computer-supported collaborative work involving active participation of user groups with diverse expertise. Three main contributions can be highlighted: (1) the definition and design of a framework facilitating collaborative decision making; (2) the deployment and evaluation of more natural and intuitive interaction and visualization techniques in order to support multiple decision makers in virtual reality environments; and (3) the integration of novel techniques into a single proof-of-concept system.
Decision making processes are time-consuming, typically involving several iterations of different options before a generally acceptable solution is obtained. Although collaboration is an often-applied method, the execution of collaborative sessions is often inefficient, does not involve all participants, and decisions are often finalized without the agreement of all participants. An increasing number of computer-supported cooperative work (CSCW) systems facilitate collaborative work by providing shared viewpoints and tools to solve joint tasks. However, most of these software systems are designed from a feature-oriented perspective rather than a human-centered perspective, and without considering user groups with diverse experience and joint goals instead of joint tasks. The aim of this dissertation is to bring insights to the following research question: How can computer-supported cooperative work be designed to be more efficient? This question opens up more specific questions like: How can collaborative work be designed to be more efficient? How can all participants be involved in the collaboration process? And how can interaction interfaces that support collaborative work be designed to be more efficient? As such, this dissertation makes contributions in:
1. Definition and design of a framework facilitating decision making and collaborative work. Based on examinations of collaborative work and decision making processes, the requirements of a collaboration framework are collected and formulated. Subsequently, an approach to define and rate software/frameworks is introduced. This approach is used to translate the collected requirements into a software architecture design. Next, an approach to evaluate alternatives based on Multi Criteria Decision Making (MCDM) and Multi Attribute Utility Theory (MAUT) is presented. Two case studies demonstrate the usability of this approach for (1) benchmarking between systems and evaluating the value of the desired collaboration framework, and (2) ranking a set of alternatives resulting from a decision-making process incorporating the points of view of multiple stakeholders.
2. Deployment and evaluation of natural and intuitive interaction and visualization techniques in order to support multiple diverse decision makers. A user taxonomy of industrial corporations serves to create a Petri net of users in order to identify dependencies and information flows between them. An explicit characterization and design of task models was developed to define interfaces and further components of the collaboration framework. In order to involve and support user groups with diverse experiences, smart devices and virtual reality are used within the presented collaboration framework. Natural and intuitive interaction techniques as well as advanced visualizations of user-centered views of the collaboratively processed data are developed in order to support and increase the efficiency of decision making processes. The smartwatch, as one of the latest smart device technologies, offers new possibilities for interaction techniques. A multi-modal interaction interface is provided, realized with smartwatch and smartphone in fully immersive environments, including touch input, in-air gestures, and speech.
3. Integration of novel techniques into a single proof-of-concept system. Finally, all findings and designed components are combined into the new collaboration framework called IN2CO, which allows distributed or co-located participants to collaborate efficiently using diverse mobile devices. In a prototypical implementation, all described components are integrated and evaluated. Examples where next-generation network-enabled collaborative environments, connected by visual and mobile interaction devices, can have significant impact are: the design and simulation of automobiles and aircraft; urban planning and simulation of urban infrastructure; or the design of complex and large buildings, including efficiency- and cost-optimized manufacturing buildings as a task in factory planning. To demonstrate the functionality and usability of the framework, case studies referring to factory planning are presented. Considering that factory planning is a process that involves the interaction of multiple aspects as well as the participation of experts from different domains (i.e., mechanical engineering, electrical engineering, computer engineering, ergonomics, material science, and even more), this application is suitable to demonstrate the utilization and usability of the collaboration framework. The various software modules and the integrated system resulting from the research will all be subjected to evaluations. Thus, collaborative decision making for co-located and distributed participants is enhanced by the use of natural and intuitive multi-modal interaction interfaces and techniques.
Benzene is a natural constituent of crude oil and a product of incomplete combustion of petrol
and has been classified as “carcinogenic to humans” by IARC in 1982 (IARC 1982). (E,E)-
Muconaldehyde has been postulated to be a microsomal metabolite of benzene in vitro
(Latriano et al. 1986). (E,E)-Muconaldehyde is hematotoxic in vivo and its role in the
hematotoxicity of benzene is unclear (Witz et al. 1985).
We intended to ascertain the presence of (E,E)-muconaldehyde in vivo by detection of a
protein conjugate deriving from (E,E)-muconaldehyde.
Therefore we improved the current synthetic access to (E,E)-muconaldehyde. (E,E)-
muconaldehyde was synthesized in three steps starting from (E,E)-muconic acid in an
overall yield of 60 %.
Reaction of (E,E)-muconaldehyde with bovine serum albumin resulted in formation of a
conjugate which was converted upon addition of NaBH4 to a new species whose HPLC-
retention time, UV spectra, Q1 mass and MS2 spectra matched those of the crude reaction
product from one pot conversion of Ac-Lys-OMe with (E,E)-muconaldehyde in the presence
of NaBH4 and subsequent cleavage of protection groups.
Synthetic access to the presumed structure (S)-2-ammonio-6-(((E,E)-6-oxohexa-2,4-dien-1-
yl)amino)hexanoate (Lys(MUC-CHO)) was provided in eleven steps starting from (E,E)-
muconic acid and Lys(Z)-OtBu*HCl in 2 % overall yield. Additionally synthetic access to
(S)-2-ammonio-6-(((E,E)-6-hydroxyhexa-2,4-dien-1-yl)amino)hexanoate (Lys(MUC-OH))
and (S)-2-ammonio-6-((6-hydroxyhexyl)amino)hexanoate (IS) was provided.
With synthetic reference material at hand, the presumed structure Lys(MUC-OH) could be
identified from incubations of (E,E)-muconaldehyde with bovine serum albumin via HPLC-ESI+-
MS/MS.
Cytotoxicity analysis of (E,E)-muconaldehyde and Lys(MUC-CHO) in human promyelocytic
NB4 cells resulted in EC50 ≈ 1 μM for (E,E)-muconaldehyde. Lys(MUC-CHO) did not show
any additional cytotoxicity up to 10 μM.
B6C3F1 mice were exposed to 0, 400 and 800 mg/kg b.w. benzene to examine the formation
of Lys(MUC-OH) in vivo. After 24 h mice were sacrificed and serum albumin was isolated.
Analysis for Lys(MUC-OH) has not been performed in this work.
Neuronal inhibition is mediated by glycine and/or GABA. Inferior colliculus (IC) neurons receive glycinergic and GABAergic
inputs, whereas inhibition in hippocampus (HC) predominantly relies on GABA. Astrocytes heterogeneously
express neurotransmitter transporters and are expected to adapt to the local requirements regarding neurotransmitter
homeostasis. Here we analyzed the expression of inhibitory neurotransmitter transporters in IC and HC astrocytes using
whole-cell patch-clamp and single-cell reverse transcription-PCR. We show that most astrocytes in both regions expressed
functional glycine transporters (GlyTs). Activation of these transporters resulted in an inward current (IGly) that
was sensitive to the competitive GlyT1 agonist sarcosine. Astrocytes exhibited transcripts for GlyT1 but not for
GlyT2. Glycine did not alter the membrane resistance (RM) arguing for the absence of functional glycine receptors (GlyRs).
Thus, IGly was mainly mediated by GlyT1. Similarly, we found expression of functional GABA transporters (GATs) in all IC
astrocytes and about half of the HC astrocytes. These transporters mediated an inward current (IGABA) that was sensitive to
the competitive GAT-1 and GAT-3 antagonists NO711 and SNAP5114, respectively. Accordingly, transcripts for GAT-1 and
GAT-3 were found but not for GAT-2 and BGT-1. Only in hippocampal astrocytes, GABA transiently reduced
RM demonstrating the presence of GABAA receptors (GABAARs). However, IGABA was mainly not contaminated
by GABAAR-mediated currents as RM changes vanished shortly after GABA application. In both regions, IGABA
was stronger than IGly. Furthermore, in HC the IGABA/IGly ratio was larger compared to IC. Taken together, our
results demonstrate that astrocytes are heterogeneous across and within distinct brain areas. Furthermore, we
could show that the capacity for glycine and GABA uptake varies between both brain regions.
This research explores the development of web-based reference software for the characterisation of surface roughness from two-dimensional surface data. The reference software, used for the verification of surface characteristics, makes it easier for clients to check their evaluation methods. The algorithms used in this software are based on international ISO standards. Most software used in industrial measuring instruments may give variations in the calculated parameters due to numerical differences in the calculation. Such variations can be verified using the proposed reference software.
The evaluation of surface roughness is carried out in four major steps: data capture, data alignment, data filtering and parameter calculation. This work walks through each of these steps, explaining how surface profiles are evaluated by the pre-processing steps called fitting and filtering. The analysis process is then followed by parameter evaluation according to the DIN EN ISO 4287 and DIN EN ISO 13565-2 standards to extract important information from the profile and characterise surface roughness.
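As an illustration of the parameter calculation step, the following Python sketch computes a few common ISO 4287 profile parameters (Ra, Rq, Rsk, Rku) after separating the roughness profile from the waviness with a Gaussian profile filter. The profile array z, the point spacing dx and the cutoff wavelength are assumed inputs; Rz is simplified here to a global peak-to-valley value, whereas the standard averages over five sampling lengths.

    import numpy as np

    def gaussian_mean_line(z, dx, cutoff):
        """Discrete Gaussian profile filter (ISO 16610-21 style weighting function)."""
        alpha = np.sqrt(np.log(2.0) / np.pi)
        half = int(cutoff / dx)
        x = np.arange(-half, half + 1) * dx
        s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
        s /= s.sum()                            # normalize the discrete weights
        return np.convolve(z, s, mode="same")   # waviness / mean line

    def roughness_parameters(z, dx, cutoff=0.8e-3):
        r = z - gaussian_mean_line(z, dx, cutoff)   # roughness profile
        Rq = np.sqrt(np.mean(r ** 2))
        return {
            "Ra": np.mean(np.abs(r)),           # arithmetic mean deviation
            "Rq": Rq,                           # root mean square deviation
            "Rsk": np.mean(r ** 3) / Rq ** 3,   # skewness
            "Rku": np.mean(r ** 4) / Rq ** 4,   # kurtosis
            "Rz": r.max() - r.min(),            # simplified peak-to-valley value
        }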
If gradient based derivative algorithms are used to improve industrial products by reducing their target functions, the derivatives need to be exact.
The last percent of possible improvement, like the efficiency of a turbine, can only be gained if the derivatives are consistent with the solution process that is used in the simulation software.
It is problematic that the development of the simulation software is an ongoing process which leads to the use of approximated derivatives.
If a derivative computation is implemented manually, it will be inconsistent after some time if it is not updated.
This thesis presents a generalized approach which differentiates the whole simulation software with Algorithmic Differentiation (AD), and guarantees a correct and consistent derivative computation after each change to the software.
For this purpose, the variable tagging technique is developed.
The technique checks at run-time if all dependencies, which are used by the derivative algorithms, are correct.
Since it is also necessary to check the correctness of the implementation, a theorem is developed which describes how AD derivatives can be compared.
This theorem is used to develop further methods that can detect and correct errors.
All methods are designed such that they can be applied in real world applications and are used within industrial configurations.
The process described above yields consistent and correct derivatives but the efficiency can still be improved.
This is done by deriving new derivative algorithms.
A fixed-point iterator approach, with a consistent derivation, yields all state-of-the-art algorithms and produces two new algorithms.
These two new algorithms include all implementation details and therefore they produce consistent derivative results.
For detecting hot spots in the application, state-of-the-art techniques are presented and extended.
The data management is changed such that the performance of the software is affected only marginally when quantities, like the number of input and output variables or the memory consumption, are computed for the detection.
The hot spots can be treated with techniques like checkpointing or preaccumulation.
How these techniques change the time and memory consumption is analyzed and it is shown how they need to be used in selected AD tools.
As a last step, the used AD tools are analyzed in more detail.
The major implementation strategies for operator overloading AD tools are presented and implementation improvements for existing AD tools are discussed.
The discussion focuses on a minimal memory consumption and makes it possible to compare AD tools on a theoretical level.
The new AD tool CoDiPack is based on these findings and its design and concepts are presented.
The improvements and findings in this thesis make it possible to generate automatic, consistent and correct derivatives in an efficient way for industrial applications.
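To illustrate the operator overloading approach to AD discussed above, the following Python sketch implements a minimal forward-mode AD type based on dual numbers. It is a toy illustration of the principle only and is unrelated to the tape-based reverse-mode implementation strategies of CoDiPack.

    class Dual:
        """Minimal forward-mode AD value via operator overloading."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot       # primal value and tangent
        def _wrap(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            o = self._wrap(other)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, other):
            o = self._wrap(other)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)  # product rule
        __rmul__ = __mul__

    # derivative of f(x) = x * (x + 3) at x = 2, i.e. 2*x + 3 = 7
    x = Dual(2.0, 1.0)       # seed the tangent direction
    y = x * (x + 3.0)
    print(y.val, y.dot)      # 10.0 7.0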
Mobility has become an integral feature of many wireless networks. Along with this mobility comes the need for location awareness. A prime example for this development are today’s and future transportation systems. They increasingly rely on wireless communications to exchange location and velocity information for a multitude of functions and applications. At the same time, the technological progress facilitates the widespread availability of sophisticated radio technology such as software-defined radios. The result is a variety of new attack vectors threatening the integrity of location information in mobile networks.
Although such attacks can have severe consequences in safety-critical environments such as transportation, the combination of mobility and integrity of spatial information has not received much attention in security research in the past. In this thesis we aim to fill this gap by providing adequate methods to protect the integrity of location and velocity information in the presence of mobility. Based on physical effects of mobility on wireless communications, we develop new methods to securely verify locations, sequences of locations, and velocity information provided by untrusted nodes. The results of our analyses show that mobility can in fact be exploited to provide robust security at low cost.
To further investigate the applicability of our schemes to real-world transportation systems, we have built the OpenSky Network, a sensor network which collects air traffic control communication data for scientific applications. The network uses crowdsourcing and has already achieved coverage in most parts of the world with more than 1000 sensors.
Based on the data provided by the network and measurements with commercial off-the-shelf hardware, we demonstrate the technical feasibility and security of our schemes in the air traffic scenario. Moreover, the experience and data provided by the OpenSky Network allow us to investigate the challenges for our schemes in the real-world air traffic communication environment. We show that our verification methods meet all
requirements to help secure the next-generation air traffic system.
Road accidents remain one of the major causes of death and injury globally; more than a million people die every year due to road accidents all over the world. Although the number of accidents in the European region has decreased in the past years, road safety still remains a major challenge. Especially in the case of commercial trucks, due to the size and load of the vehicle, even minor collisions with other road users can lead to serious injuries or death. In order to reduce the number of accidents, the automotive industry is rapidly developing advanced driver assistance systems (ADAS) and automated driving technologies. Efficient and reliable solutions are required for these systems to sense, perceive and react to different environmental conditions. For vehicle safety applications such as collision avoidance with vulnerable road users (VRUs), the system must not only detect and track the objects in the vicinity of the vehicle efficiently but also function robustly.
An environment perception solution for application in commercial truck safety systems and for future automated driving is developed in this work. Thereby, a method for integrated tracking and classification of road users in the near vicinity of the vehicle is formulated. The drawbacks of conventional multi-object tracking algorithms with respect to state, measurement and data association uncertainties have been addressed with the recent advancements in the field of unified multi-object tracking solutions based on random finite sets (RFS). A Gaussian mixture implementation of the recently developed labeled multi-Bernoulli (LMB) filter [RSD15] is used as the basis for multi-object tracking in this work. Measurements from a high-resolution radar sensor are used as the main input for detecting and tracking objects.
On the one side, the focus of this work is on tracking VRUs in the near vicinity of the truck. As it is beneficial for most vehicle safety systems to also know the category that the object belongs to, the focus on the other side is also on classifying the road users. All radar detections believed to originate from a single object are clustered together with the help of the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each cluster of detections has different properties based on the respective object characteristics. Sixteen distinct features based on radar detections that are suitable for separating the pedestrian, bicyclist and passenger car categories are selected and extracted for each cluster. A machine-learning-based classifier is constructed, trained and parameterised for distinguishing the road users based on the extracted features.
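A minimal Python sketch of this clustering and feature extraction step is shown below. The detection layout and the handful of example features are assumptions made for illustration; they do not correspond to the sixteen features actually used in this work.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # detections: N x 4 array of [x, y, doppler_velocity, rcs] (hypothetical layout)
    def cluster_and_extract_features(detections, eps=1.5, min_samples=2):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(detections[:, :2])
        clusters = {}
        for lab in set(labels) - {-1}:                    # label -1 marks noise detections
            pts = detections[labels == lab]
            clusters[lab] = {
                "n_detections": len(pts),                 # cluster size
                "extent_x": np.ptp(pts[:, 0]),            # spatial spread
                "extent_y": np.ptp(pts[:, 1]),
                "mean_doppler": pts[:, 2].mean(),         # velocity statistics
                "doppler_spread": pts[:, 2].std(),
                "mean_rcs": pts[:, 3].mean(),             # reflectivity statistics
            }
        return labels, clusters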
The class information derived from the radar detections can further be used by the tracking algorithm to adapt the model parameters used for precisely predicting the object motion according to the category of the object. A multiple-model labeled multi-Bernoulli (MMLMB) filter is used for modelling different object motions. Apart from the detection level, the estimated state of an object on the tracking level also provides information about the object class. Both pieces of information are fused using the Dempster-Shafer theory (DST) of evidence, based on the respective class probabilities. Thereby, the output of the integrated tracking and classification with the MMLMB filter is classified tracks that can be used by truck safety applications with better reliability.
The developed environment perception method is further implemented as a real-time prototypical system on a commercial truck. The performance of the tracking and classification approaches is evaluated with the help of simulations and multiple test scenarios. A comparison of the developed approaches to a conventional converted-measurements Kalman filter with global nearest neighbour association (CMKF-GNN) shows significant advantages in overall accuracy and performance.
The phase field approach is a powerful tool that can handle even complicated fracture phenomena within an apparently simple framework. Nonetheless, a profound understanding of the model is required in order to be able to interpret the obtained results correctly. Furthermore, in the dynamic case the phase field model needs to be verified in comparison with experimental data and analytical results in order to increase the trust in this new approach. In this thesis, a phase field model for dynamic brittle fracture is investigated with regard to these aspects by analytical and numerical methods.
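For orientation, a typical quasi-static phase field regularization of brittle fracture (of Ambrosio-Tortorelli type) is based on an energy functional of the form

\[
  E_\varepsilon(u, s) \;=\; \int_\Omega \bigl(s^2 + \eta\bigr)\,\psi\bigl(\epsilon(u)\bigr)\,\mathrm{d}x
  \;+\; G_c \int_\Omega \Bigl(\frac{(1 - s)^2}{4\varepsilon} + \varepsilon\,\lvert\nabla s\rvert^2\Bigr)\,\mathrm{d}x,
\]

where \(u\) is the displacement, \(s\) the phase field (\(s = 1\) intact, \(s = 0\) broken material), \(\psi\) the elastic energy density, \(G_c\) the fracture toughness, \(\varepsilon\) the regularization length and \(\eta\) a small residual stiffness. This is a generic sketch only; the dynamic model investigated in the thesis additionally accounts for inertia.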
Using valuation theory we associate to a one-dimensional equidimensional semilocal Cohen-Macaulay ring \(R\) its semigroup of values, and to a fractional ideal of \(R\) we associate its value semigroup ideal. For a class of curve singularities (here called admissible rings) including algebroid curves the semigroups of values, respectively the value semigroup ideals, satisfy combinatorial properties defining good semigroups, respectively good semigroup ideals. Notably, the class of good semigroups strictly contains the class of value semigroups of admissible rings. On good semigroups we establish combinatorial versions of algebraic concepts on admissible rings which are compatible with their prototypes under taking values. Primarily we examine duality and quasihomogeneity.
We give a definition for canonical semigroup ideals of good semigroups which characterizes canonical fractional ideals of an admissible ring in terms of their value semigroup ideals. Moreover, a canonical semigroup ideal induces a duality on the set of good semigroup ideals of a good semigroup. This duality is compatible with the Cohen-Macaulay duality on fractional ideals under taking values.
The properties of the semigroup of values of a quasihomogeneous curve singularity lead to a notion of quasihomogeneity on good semigroups which is compatible with its algebraic prototype. We give a combinatorial criterion which allows to construct from a quasihomogeneous semigroup \(S\) a quasihomogeneous curve singularity having \(S\) as semigroup of values.
As an application we use the semigroup of values to compute endomorphism rings of maximal ideals of algebroid curves. This yields an explicit description of the intermediate rings in an algorithmic normalization of plane central arrangements of smooth curves based on a criterion by Grauert and Remmert. Applying this result to hyperplane arrangements we determine the number of steps needed to compute the normalization of the arrangement in terms of its Möbius function.
Fucoidan is a class of biopolymers mainly found in brown seaweeds. Due to its diverse medical importance, a homogeneous supply as well as a GMP-compliant product are of special interest. Therefore, in addition to the optimization of its extraction and purification from classical resources, other techniques were tried (e.g., marine tissue culture and heterologous expression of enzymes involved in its biosynthesis). Results showed that 17.5% (w/w) crude fucoidan was obtained after pre-treatment and extraction from the brown macroalga F. vesiculosus. Purification by affinity chromatography improved purity relative to the commercially purified product. Furthermore, biological investigations revealed improved anti-coagulant and anti-viral activities compared with crude fucoidan. Furthermore, callus-like and protoplast cultures as well as bioreactor cultivation were developed from F. vesiculosus, representing a new horizon for producing fucoidan biotechnologically. Moreover, heterologous expression of several enzymes involved in its biosynthesis (e.g., FucTs and STs) in E. coli demonstrated the possibility of obtaining active enzymes that could be utilized in enzymatic in vitro synthesis of fucoidan. All these competitive techniques could help to meet the global demand for fucoidan.
Multiphase materials combine the properties of several materials, which makes them interesting for high-performance components. This thesis considers a certain class of multiphase materials, namely silicon carbide (SiC) particle-reinforced aluminium (Al) metal matrix composites, and their modelling based on stochastic geometry models.
Stochastic modelling can be used for the generation of virtual material samples: Once we have fitted a model to the material statistics, we can obtain independent three-dimensional “samples” of the material under investigation without the need of any actual imaging. Additionally, by changing the model parameters, we can easily simulate a new material composition.
The materials under investigation have a rather complicated microstructure, as the system of SiC particles has many degrees of freedom: size, shape, orientation and spatial distribution. Based on FIB-SEM images, which yield three-dimensional image data, we extract the SiC particle structure using methods of image analysis. Then we model the SiC particles by anisotropically rescaled cells of a random Laguerre tessellation that was fitted to the shapes of isotropically rescaled particles. We fit a log-normal distribution to the volume distribution of the SiC particles. Additionally, we propose models for the Al grain structure and the aluminium-copper (\({Al}_2{Cu}\)) precipitations occurring on the grain boundaries and on the SiC-Al phase boundaries.
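A minimal Python sketch of the log-normal fit for the particle volume distribution, assuming the particle volumes have already been extracted from the segmented three-dimensional image data:

    import numpy as np
    from scipy.stats import lognorm

    def fit_lognormal_volumes(volumes):
        """Fit a log-normal distribution to SiC particle volumes (location fixed at zero)."""
        shape, loc, scale = lognorm.fit(volumes, floc=0.0)
        mu, sigma = np.log(scale), shape        # parameters of log(V) ~ N(mu, sigma^2)
        return mu, sigma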
Finally, we show how we can estimate the parameters of the volume-distribution based on two-dimensional SEM images. This estimation is applied to two samples with different mean SiC particle diameters and to a random section through the model. The stereological estimations are within acceptable agreement with the parameters estimated from three-dimensional image data
as well as with the parameters of the model.
The growing computational power enables the establishment of the Population Balance Equation (PBE) to model the steady state and dynamic behavior of multiphase flow unit operations. Accordingly, the two-phase flow behavior inside liquid-liquid extraction equipment is characterized by different factors. These factors include: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer process. As a result of this, the general PBE has no well-known analytical solution and therefore robust numerical solution methods with low computational cost are highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C., Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account continuous flow systems in the spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady state and dynamic simulation algorithms for the hydrodynamics and mass transfer behavior of liquid-liquid extraction equipment are developed and efficiently implemented. At the steady state modeling level, a Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional physical spatial domain. The integral spatial numerical flux is closed using the mean mass droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM, which is the simplest case of the SQMOM). On the other hand, the hydrodynamics integral source terms are closed using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and resolved using the finite volume method. To close the mathematical model, the required quadrature nodes and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature (TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary differential equation system is obtained and solved.
Both steady state and dynamic algorithms are extensively validated at the analytical, numerical, and experimental levels. At the numerical level, the predictions of both algorithms are validated using the extended fixed pivot technique as implemented in the PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new multivariate population balance environment for particulate system modeling and simulation. Procedia Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used to model the steady state hydrodynamics and the physical and reactive mass transfer behavior of agitated liquid-liquid extraction columns under different operating conditions. In this regard, both models are found to be efficient and able to follow the liquid extraction column behavior during column scale-up, where three column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and implemented in the FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this, the proposed coupled OPOSPM-CFD framework is extended to include mass transfer. The proposed framework is numerically tested and the results are compared with published experimental data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated using the PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. Very good agreement is obtained at both the experimental and the numerical validation levels.
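In generic form, the spatially distributed population balance equation referred to above has the structure

\[
  \frac{\partial n(v, z, t)}{\partial t}
  \;+\; \frac{\partial}{\partial z}\bigl[u_d(v, z, t)\, n(v, z, t)\bigr]
  \;=\; \Gamma_{\mathrm{breakage}}\{n\} \;+\; \Gamma_{\mathrm{coalescence}}\{n\},
\]

where \(n(v, z, t)\) is the number density of droplets of volume \(v\) at column height \(z\), \(u_d\) the droplet rise velocity and the right-hand side collects the birth and death source terms due to breakage and coalescence. This is only a sketch of the typical structure; the bivariate formulation, the closures and the mass transfer coupling are those developed in the work.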
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
Poor posture in childhood and adolescence is held responsible for the occurrence
of associated disorders in adult age. This study aimed to verify whether body
posture in adolescence can be enhanced through the improvement of neuromuscular
performance, attained by means of targeted strength, stretch, and body perception
training, and whether any such improvement might also transition into adulthood. From
a total of 84 volunteers, the posture development of 67 adolescents was checked
annually between the ages of 14 and 20 based on index values in three posture
situations. Twenty-eight adolescents exercised twice a week for about 2 h up to the age of 18, and 24
adolescents exercised continually up to the age of 20. Both groups practiced other
additional sports for about 1.8 h/week. Fifteen persons served as a non-exercising
control group, practicing optional sports of about 1.8 h/week until the age of 18,
after that for 0.9 h/week. Group allocation was not random, but depended on the
participants’ choice. A linear mixed model was used to analyze the development
of posture indexes among the groups and over time and the possible influence of
anthropometric parameters (weight, size), of optional athletic activity and of sedentary
behavior. The post hoc pairwise comparison was performed applying the Scheffé test.
The significance level was set at 0.05. The group that exercised continually (TR20)
exhibited a significant posture parameter improvement in all posture situations from
the 2nd year of exercising on. The group that terminated their training when reaching
adulthood (TR18) retained some improvements, such as conscious straightening of the
body posture. In other posture situations (habitual, closed eyes), their posture results
declined again from age 18. The effect sizes determined were between Eta² = 0.12 and
Eta² = 0.19 and represent moderate to strong effects. The control group did not exhibit
any differences. Anthropometric parameters, additional athletic activities and sedentary
behavior did not influence the posture parameters significantly. An additional athletic
training of 2 h per week including elements for improved body perception seems to
have the potential to improve body posture in symptom-free male adolescents and
young adults.
To investigate whether participants can activate only one spatially oriented number line at a time or
multiple number lines simultaneously, they were asked to solve a unit magnitude comparison task
(unit smaller/larger than 5) and a parity judgment task (even/odd) on two-digit numbers. In both these
primary tasks, decades were irrelevant. After some of the primary task trials (randomly), participants
were asked to additionally solve a secondary task based on the previously presented number. In
Experiment 1, they had to decide whether the two-digit number presented for the primary task was
larger or smaller than 50. Thus, for the secondary task decades were relevant. In contrast, in Experiment
2, the secondary task was a color judgment task, which means decades were irrelevant. In Experiment
1, decades’ and units’ magnitudes influenced the spatial association of numbers separately. In contrast,
in Experiment 2, only the units were spatially associated with magnitude. It was concluded that
multiple number lines (one for units and one for decades) can be activated if attention is focused on
multiple, separate magnitude attributes.
For modeling approaches in systems biology, knowledge of the absolute abundances of cellular proteins is essential. One way to gain this knowledge is the use of quantification concatamers (QconCATs), which are synthetic proteins consisting of proteotypic peptides derived from the target proteins to be quantified. The QconCAT protein is labeled with a heavy isotope upon expression in E. coli and known amounts of the purified protein are spiked into a whole cell protein extract. Upon tryptic digestion, labeled and unlabeled peptides are released from the QconCAT and the native proteins, respectively, and both are quantified by LC-MS/MS. The labeled Q-peptides then serve as standards for determining the absolute quantity of the native peptides/proteins. Here we have applied the QconCAT approach to Chlamydomonas reinhardtii for the absolute quantification of the major proteins and protein complexes driving photosynthetic light reactions in the thylakoid membranes and carbon fixation in the pyrenoid. We found that with 25.2 attomol/cell the Rubisco large subunit makes up 6.6% of all proteins in a Chlamydomonas cell and with this exceeds the amount of the small subunit by a factor of 1.56. EPYC1, which links Rubisco to form the pyrenoid, is eight times less abundant than RBCS, and Rubisco activase is 32-times less abundant than RBCS. With 5.2 attomol/cell, photosystem II is the most abundant complex involved in the photosynthetic light reactions, followed by plastocyanin, photosystem I and the cytochrome b6/f complex, which range between 2.9 and 3.5 attomol/cell. The least abundant complex is the ATP synthase with 2 attomol/cell. While applying the QconCAT approach, we have been able to identify many potential pitfalls associated with this technique. We analyze and discuss these pitfalls in detail and provide an optimized workflow for future applications of this technique.
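The quantification principle described here reduces to a simple isotope ratio calculation once the heavy (QconCAT-derived) and light (native) peptide signals have been integrated. The following minimal Python sketch illustrates that principle; the spiked amount and intensities are hypothetical values for illustration only.

    def native_peptide_amount(spiked_heavy_amol, light_intensity, heavy_intensity):
        """Absolute quantification by isotope ratio: native = spiked * (light / heavy)."""
        return spiked_heavy_amol * (light_intensity / heavy_intensity)

    # hypothetical example: 10 amol heavy standard per cell equivalent and a
    # light/heavy intensity ratio of 2.5 give 25 amol native peptide per cell
    print(native_peptide_amount(10.0, 2.5e6, 1.0e6))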
In this thesis, we deal with the finite group of Lie type \(F_4(2^n)\). The aim is to find information on the \(l\)-decomposition numbers of \(F_4(2^n)\) on unipotent blocks for \(l\neq2\) and \(n\in \mathbb{N}\) arbitrary and on the irreducible characters of the Sylow \(2\)-subgroup of \(F_4(2^n)\).
S. M. Goodwin, T. Le, K. Magaard and A. Paolini have found a parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), \(q=p^n\), \(p\) a prime, which is a Sylow \(p\)-subgroup of \(F_4(q)\), for the case \(p\neq2\).
We managed to adapt their methods for the parametrization of the irreducible characters of the Sylow \(2\)-subgroup for the case \(p=2\) for the group \(F_4(q)\), \(q=p^n\). This gives a nearly complete parametrization of the irreducible characters of the unipotent subgroup \(U\) of \(F_4(q)\), namely of all irreducible characters of \(U\) arising from so-called abelian cores.
The general strategy we have applied to obtain information about the \(l\)-decomposition numbers on unipotent blocks is to induce characters of the unipotent subgroup \(U\) of \(F_4(q)\) and Harish-Chandra induce projective characters of proper Levi subgroups of \(F_4(q)\) to obtain projective characters of \(F_4(q)\). Via Brauer reciprocity, the multiplicities of the ordinary irreducible unipotent characters in these projective characters give us information on the \(l\)-decomposition numbers of the unipotent characters of \(F_4(q)\).
Unfortunately, the projective characters of \(F_4(q)\) we obtained were not sufficient to determine the shape of the entire decomposition matrix.
Arctic, Antarctic and alpine biological soil crusts (BSCs) are formed by adhesion of soil particles to exopolysaccharides (EPSs) excreted by cyanobacterial and green algal communities, the pioneers and main primary producers in these habitats. These BSCs provide and influence many ecosystem services such as soil erodibility, soil formation and nitrogen (N) and carbon (C) cycles. In cold environments degradation rates are low and BSCs continuously increase soil organic C; therefore, these soils are considered to be CO2 sinks. This work provides a novel, nondestructive and highly comparable method to investigate intact BSCs with a focus on cyanobacteria and green algae and their contribution to soil organic C. A new terminology arose, based on confocal laser scanning microscopy (CLSM) 2-D biomaps, dividing BSCs into a photosynthetic active layer (PAL) made of active photoautotrophic organisms and a photosynthetic inactive layer (PIL) harbouring remnants of cyanobacteria and green algae glued together by their remaining EPSs. By the application of CLSM image analysis (CLSM–IA) to 3-D biomaps, C coming from photosynthetic active organisms could be visualized as depth profiles with C peaks at 0.5 to 2 mm depth. Additionally, the CO2 sink character of these cold soil habitats dominated by BSCs could be highlighted, demonstrating that the first cubic centimetre of soil consists of between 7 and 17% total organic carbon, identified by loss on ignition.
In this thesis we integrate discrete dividends into the stock model, estimate
future outstanding dividend payments and solve different portfolio optimization
problems. To this end, we discuss three well-known stock models including
discrete dividend payments and develop a model which also takes early
announcements into account.
In order to estimate the future outstanding dividend payments, we develop a
general estimation framework. First, we investigate a model-free, no-arbitrage
methodology, which is based on the put-call parity for European options. Our
approach integrates all available option market data and simultaneously calculates
the market-implied discount curve. We illustrate our method using stocks
of European blue-chip companies and show within a statistical assessment that
the estimate performs well in practice.
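Since the put-call parity for European options, \(C_K - P_K = (S_0 - \mathrm{PV}(D)) - K\,\mathrm{DF}\), is linear in the strike \(K\), the present value of the outstanding dividends and the discount factor can in principle be read off a cross-section of option prices by a least-squares fit. The following Python sketch illustrates this idea for a single expiry with synthetic numbers; it is a simplified illustration of the approach, not the exact estimation framework developed in the thesis.

    import numpy as np

    def implied_dividends_and_discount(strikes, calls, puts, spot):
        """Least-squares fit of C_K - P_K = (S0 - PV(D)) - K * DF across strikes."""
        K = np.asarray(strikes, dtype=float)
        y = np.asarray(calls, dtype=float) - np.asarray(puts, dtype=float)
        A = np.column_stack([np.ones_like(K), -K])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        intercept, discount_factor = coeffs        # intercept = S0 - PV(D)
        return spot - intercept, discount_factor

    # synthetic example for one expiry: recovers PV(D) = 3.3 and DF = 0.95
    pv_d, df = implied_dividends_and_discount(
        strikes=[90.0, 100.0, 110.0],
        calls=[13.0, 6.5, 2.6],
        puts=[1.8, 4.8, 10.4],
        spot=100.0,
    )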
As American options are more common, we additionally develop a methodology,
which is based on market prices of American at-the-money options.
This method relies on a linear combination of no-arbitrage bounds of the dividends,
where the corresponding optimal weight is determined via a historical
least squares estimation using realized dividends. We demonstrate our method
using all Dow Jones Industrial Average constituents and provide a robustness
check with respect to the used discount factor. Furthermore, we backtest our
results against the method using European options and against a so-called
simple estimate.
In the last part of the thesis we solve the terminal wealth portfolio optimization
problem for a dividend paying stock. In the case of the logarithmic utility
function, we show that the optimal strategy is no longer constant but
related to the Merton strategy. Additionally, we solve a special optimal
consumption problem, where the investor is only allowed to consume dividends.
We show that this problem can be reduced to the previously solved terminal wealth
problem.