### Refine

#### Year of publication

- 2017 (51)

#### Document Type

- Doctoral Thesis (42)
- Preprint (3)
- Periodical Part (2)
- Diploma Thesis (1)
- Lecture (1)
- Other (1)
- Working Paper (1)

#### Language

- English (51)

#### Keywords

- A/D conversion (1)
- ADAS (1)
- AFDX (1)
- Ableitungsfreie Optimierung (1)
- Accountability (1)
- Achslage (1)
- Anion recognition (1)
- Automation (1)
- Backlog (1)
- Beschränkte Krümmung (1)
- Bildsegmentierung (1)
- Buffer (1)
- CFRP (1)
- Censoring (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Click chemistry (1)
- Control Engineering (1)
- Data Modeling (1)
- Differenzierbare Mannigfaltigkeit (1)
- Effizienter Algorithmus (1)
- Finanzmathematik (1)
- Firmware (1)
- Formal Verification (1)
- Formale Beschreibungstechnik (1)
- Formale Methode (1)
- Gamma-Konvergenz (1)
- Governance (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hardware/Software co-verification (1)
- Hardwareverifikation (1)
- Hazard Functions (1)
- Hyperspektraler Sensor (1)
- Incremental recomputation (1)
- Infrarotspektroskopie (1)
- Konjugierte Dualität (1)
- Konvergenz (1)
- Kullback-Leibler divergence (1)
- LIDAR (1)
- Lokalisierung (1)
- Macaulay’s inverse system (1)
- Magnetfeldbasierter Lokalisierung (1)
- Magnetfelder (1)
- Manufacturing Control (1)
- MapReduce (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mikrodrall (1)
- Model checking (1)
- Model-driven Engineering (1)
- Moreau-Yosida regularization (1)
- Mosco convergence (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariates Verfahren (1)
- Network (1)
- Neural ADC (1)
- Nichtglatte Optimierung (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Optimierung (1)
- Nichtpositive Krümmung (1)
- Nonprofit Organizations (1)
- Participatory Sensing (1)
- Peptide synthesis (1)
- Portfoliooptimierung (1)
- Programmverifikation (1)
- Prox-Regularisierung (1)
- Real-Time (1)
- Receptor design (1)
- Reflexionsspektroskopie (1)
- Räumliche Statistik (1)
- Scheduler (1)
- Schwache Konvergenz (1)
- Self-X (1)
- Sequenzieller Algorithmus (1)
- Service-oriented Architecture (1)
- Smart City (1)
- Spatial Statistics (1)
- Spectral theory (1)
- Spektralanalyse <Stochastik> (1)
- Spiking Neural ADC (1)
- Stochastic Processes (1)
- Stochastischer Prozess (1)
- Supramolecular chemistry (1)
- Survival Analysis (1)
- Symbolic execution (1)
- Systemarchitektur (1)
- Systems Engineering (1)
- TTEthernet (1)
- Temporal data processing (1)
- Texturrichtung (1)
- Tichonov-Regularisierung (1)
- Time-Series (1)
- Time-Triggered (1)
- ToF (1)
- Upper bound (1)
- Variationsrechnung (1)
- Wide-column stores (1)
- Zufälliges Feld (1)
- alternating minimization (1)
- axis orientation (1)
- calving (1)
- canonical module (1)
- convex constraints (1)
- crashworthiness (1)
- curvature (1)
- curve singularity (1)
- damage tolerance (1)
- depth sensing (1)
- driver status and intention prediction (1)
- drowsiness detection (1)
- duality (1)
- electrical conductivity (1)
- financial mathematics (1)
- finite element method (1)
- hybrid material (1)
- hyperspectal unmixing (1)
- ice shelves (1)
- impedance spectroscopy (1)
- infinite-dimensional manifold (1)
- integer programming (1)
- level K-algebras (1)
- magnetic field based localization (1)
- material characterisation (1)
- metal fibre (1)
- micro lead (1)
- multifunctionality (1)
- nonconvex optimization (1)
- nonnegative matrix factorization (1)
- oscillating magnetic fields (1)
- partial information (1)
- personnel scheduling (1)
- physicians (1)
- portfolio optimization (1)
- primal-dual algorithm (1)
- proximation (1)
- rostering (1)
- sensor fusion (1)
- sparsity (1)
- steel fibre (1)
- surrogate algorithm (1)
- system architecture (1)
- texture orientation (1)
- total variation spatial regularization (1)
- value semigroup (1)
- variational model (1)
- viscoelastic modeling (1)

The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. Potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is defined as the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with regard to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and is parametrised for the application. Combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
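The projection-based orientation idea can be sketched in a few lines. The toy below is an illustrative assumption, not the thesis's calibrated, arc-minute-accurate scheme; the function names and the synthetic test image are hypothetical. It estimates the dominant texture direction of a striped image by maximizing the variance of Radon-style projections:

```python
import math

def radon_projection_variance(img, theta):
    """Project image intensities onto the axis t = x*cos(theta) + y*sin(theta)
    and return the variance of the resulting 1-D profile.  For a striped
    texture the variance peaks when the projection axis is perpendicular to
    the stripes, i.e. when theta matches the texture normal."""
    h, w = len(img), len(img[0])
    bins = {}
    c, s = math.cos(theta), math.sin(theta)
    for y in range(h):
        for x in range(w):
            t = round(x * c + y * s)
            bins[t] = bins.get(t, 0.0) + img[y][x]
    vals = list(bins.values())
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def texture_orientation(img, n_angles=180):
    """Return the angle (degrees, in [0, 180)) of the projection axis with
    maximal variance; this axis is perpendicular to the stripe direction."""
    best_theta, best_var = 0.0, -1.0
    for k in range(n_angles):
        theta = math.pi * k / n_angles
        v = radon_projection_variance(img, theta)
        if v > best_var:
            best_theta, best_var = theta, v
    return math.degrees(best_theta)

# synthetic texture: vertical stripes (intensity varies along x only)
img = [[1.0 if (x // 2) % 2 == 0 else 0.0 for x in range(32)] for y in range(32)]
ang = texture_orientation(img)  # expected near 0 degrees for vertical stripes
```

In a real micro-lead assessment the estimated angle would then be compared against the calibrated circumferential direction rather than an image axis.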

The present situation of control engineering in the context of automated production can be described as a tension between its desired outcome and its actual consideration. On the one hand, the share of control engineering compared to the other engineering domains has increased significantly within the last decades due to the rising automation degree of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. Another limiting factor is the lack of methods and tools to reduce software engineering effort and to permit the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation by developing a new control engineering methodology. The foundation is built on concepts from computer science that promote structuring and abstraction mechanisms for software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is to develop an engineering methodology that improves the efficiency of control engineering through a higher adaptability of control software and reduced programming effort by reuse.

A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)

Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the driver's fatigue/drowsiness state without compromising the driving experience still remains challenging. In general, the choice of input sensorial information is limited in the state-of-the-art work. On the other hand, smart and safe driving, as a representative future trend in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated behavioral and bioinformatical data perception of the driver. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for robustness/reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system can provide rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well.
Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels, and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation, which can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system's computational power are required to address this issue. Possible new features and development considerations for DeCaDrive, aiming at future market acceptance, are discussed in the thesis as well.

In this paper a modified version of dynamic network flows is discussed. Whereas dynamic network flows are already widely analyzed, we consider a dynamic flow problem with aggregate arc capacities called the Bridge Problem, which was introduced by Melkonian [Mel07]. We extend his research to integer flows and show that this problem is strongly NP-hard. For practical relevance we also introduce and analyze the hybrid bridge problem, i.e. with underlying networks whose arc capacities can limit either aggregate flow (bridge problem) or the flow entering an arc at each time (general dynamic flow). For this kind of problem we present efficient procedures for special cases that run in polynomial time. Moreover, we present a heuristic for general hybrid graphs with a restriction on the number of bridge arcs. Computational experiments show that the heuristic works well, both on random graphs and on graphs modeling realistic scenarios.

The cytosolic Fe65 adaptor protein family, consisting of Fe65, Fe65L1 and Fe65L2, is involved in many intracellular signaling pathways, linking a continuously growing list of proteins via its three interaction domains and thereby facilitating functional interactions. One of the most important binding partners of Fe65 family proteins is the amyloid precursor protein (APP), which plays an important role in Alzheimer's disease.
To gain deeper insights into the function of the ubiquitously expressed Fe65 and the brain-enriched Fe65L1, the goal of my study was I) to analyze their putative synaptic function in vivo, II) to perform a structural analysis focusing on a putative dimeric complex of Fe65, and III) to examine the involvement of Fe65 in mediating LRP1 and APP intracellular trafficking in murine hippocampal neurons. By utilizing several behavioral analyses of Fe65 KO, Fe65L1 KO and Fe65/Fe65L1 DKO mice I could demonstrate that the Fe65 protein family is essential for learning and memory as well as grip strength and locomotor activity. Furthermore, immunohistological as well as protein biochemical analyses revealed that the Fe65 protein family is important for neuromuscular junction formation in the peripheral nervous system, which involves binding of APP and acting downstream of the APP signaling pathway. Via co-immunoprecipitation analysis I could verify that Fe65 is capable of forming dimers ex vivo, which occur exclusively in the cytosol and, upon APP expression, are shifted to membrane compartments forming trimeric complexes. An influence of the loss of Fe65 and/or Fe65L1 on APP and/or LRP1 transport characteristics in axons could not be verified, possibly owing to a compensatory effect of Fe65L2. However, I could demonstrate that LRP1 affects APP transport independently of Fe65 by shifting APP into slower types of vesicles, leading to changed processing and endocytosis of APP.
The outcome of my thesis advanced our understanding of the Fe65 protein family, especially its interplay with APP physiological function in synapse formation and synaptic plasticity.

The development of autonomous mobile robots is a major topic of current research. As those robots must be able to react to changing environments and avoid collisions, also with moving obstacles, the fulfilment of safety requirements is an important aspect. Behaviour-based systems (BBS) have proven to meet several of the properties required for these kinds of robots, such as reactivity, extensibility and re-usability of individual components. BBS consist of a number of behavioural components that individually realise simple tasks. Their interconnection makes it possible to achieve complex robot behaviour, which implies that correct connections are crucial. The resulting networks can get very large, making them difficult to verify. This dissertation presents a novel concept for the analysis and verification of complex autonomous robot systems controlled by behaviour-based software architectures, with special focus on the integration of environmental aspects into the processes.
Several analysis techniques have been investigated and adapted to the special requirements of BBS. These include a structural analysis, which is used to find constraint violations and faults in the network layout. Fault tree analysis is applied to identify root causes of hazards and the relationship of system events. For this, a technique to map the behaviour-based control network to the structure of a fault tree has been developed. Testing and data analysis are used for the detection of failures and their root causes. Here, a new concept that identifies patterns in data recorded during test runs has been introduced.
None of these methods can guarantee failure-free and safe robot behaviour, and they can never prove the absence of failures. Therefore, model checking, a formal verification technique that proves a property correct for the given system, has been chosen to complement the set of analysis techniques. A novel concept for the integration of environmental influences into the model checking process is proposed. Environmental situations and the sensor processing chain are represented as synchronised automata, similar to the modelling of the behavioural network. Tools supporting the whole verification process, including the creation of formal queries about the environment, have been developed.
During the verification of large behavioural networks, the scalability of the model checking approach emerges as a big problem. Several approaches that deal with this problem have been investigated, and the selection of slicing and abstraction methods is justified. A concept for the application of these methods is provided that reduces the behavioural network to the relevant parts before the actual verification process.
All techniques have been applied to the behaviour-based control system of the autonomous outdoor robot RAVON. Its complex network with more than 400 components allows for demonstrating the soundness of the presented concepts. The set of different techniques provides a fundamental basis for a comprehensive analysis and verification of BBS acting in changing environments.

This thesis is concerned with different null-models that are used in network analysis. Whenever it is of interest whether a real-world graph is exceptional regarding a particular measure, graphs drawn from a null-model can be used for comparison. By analyzing an appropriate null-model, a researcher may find out whether the result of the measure on the real-world graph is exceptional or not.
Deciding which null-model to use is hard, and sometimes the difference between the null-models is not even considered. This thesis presents several results: First, undirected graphs are analyzed based on simple global measures. The results for these measures indicate that it is not important which null-model is used; thus, the null-model with the fastest algorithm may be used. Next, local measures are investigated. The fastest algorithm proves to be the most complicated to analyze. The model includes multigraphs, which do not meet the conditions of all the measures; thus, the measures themselves have to be altered to handle multigraphs as well. After careful consideration, the conditions are met and the analysis shows that the fastest is not always the best.
The same applies for directed graphs, as is shown in the last part. There, another more complex measure on graphs is introduced. I continue testing the applicability of several null-models; in the end, a set of equations proves to be fast and good enough as long as conditions regarding the degree sequence are met.
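One widely used null-model that preserves the degree sequence can be sketched with degree-preserving edge swaps. This is an illustrative sketch of the general technique, not a reproduction of the specific null-models compared in the thesis; the function names and the small example graph are invented:

```python
import random

def degree_preserving_null(edges, n_swaps, seed=0):
    """Generate a null-model graph with the same degree sequence as the
    input by repeatedly swapping edge pairs: (a,b),(c,d) -> (a,d),(c,b).
    Swaps that would create self-loops or multi-edges are rejected, so
    the result stays a simple graph."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        if i == j:
            continue
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # would create a self-loop or shared endpoint
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

real = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (2, 5)]
null = degree_preserving_null(real, n_swaps=100)
assert degrees(null) == degrees(real)  # degree sequence is preserved
```

A measure of interest would then be evaluated on many such randomized graphs to see whether its value on the real-world graph is exceptional.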

Annual Report 2016
(2017)

Annual Report 2017
(2017)

The proliferation of sensors in everyday devices – especially in smartphones – has led to crowd sensing becoming an important technique in many urban applications, ranging from noise pollution mapping and road condition monitoring to tracking the spread of diseases. However, in order to establish integrated crowd sensing environments on a large scale, some open issues need to be tackled first. On a high level, this thesis concentrates on dealing with two of those key issues: (1) efficiently collecting and processing large amounts of sensor data from smartphones in a scalable manner and (2) extracting abstract data models from those collected data sets, thereby enabling the development of complex smart city services based on the extracted knowledge.
Going more into detail, the first main contribution of this thesis is the development of methods and architectures to facilitate simple and efficient deployments, scalability and adaptability of crowd sensing applications in a broad range of scenarios, while at the same time enabling the integration of incentive mechanisms for the participating general public. An evaluation within a complex, large-scale environment shows that real-world deployments of the proposed data recording architecture are in fact feasible. The second major contribution of this thesis is the development of a novel methodology for using the recorded data to extract abstract data models which correctly represent the inherent core characteristics of the source data. Finally – and in order to bring together the results of the thesis – it is demonstrated how the proposed architecture and the modeling method can be used to implement a complex smart city service by employing a data-driven development approach.

In change-point analysis the point of interest is to decide whether the observations follow one model or whether there is at least one time-point where the model has changed. This results in two sub-fields, the testing of a change and the estimation of the time of change. This thesis considers both parts, but with the restriction of testing and estimating for at most one change-point. A well-known example is based on independent observations having one change in the mean. Based on the likelihood ratio test, a test statistic with an asymptotic Gumbel distribution was derived for this model. As it is a well-known fact that the corresponding convergence rate is very slow, modifications of the test using a weight function were considered. Those tests have a better performance. We focus on this class of test statistics.

The first part gives a detailed introduction to the techniques for analysing test statistics and estimators. For this we consider the multivariate mean-change model and focus on the effects of the weight function. In the case of change-point estimators we can distinguish between the assumption of a fixed size of change (fixed alternative) and the assumption that the size of the change converges to 0 (local alternative). In particular, the fixed case is rarely analysed in the literature. We show how to pass from the proof for the fixed alternative to the proof for the local alternative. Finally, we give a simulation study for heavy-tailed multivariate observations.

The main part of this thesis focuses on two points: first, analysing test statistics and, secondly, analysing the corresponding change-point estimators. In both cases, we first consider a change in the mean for independent observations, but relaxing the moment condition. Based on a robust estimator for the mean, we derive a new type of change-point test having a randomized weight function. Secondly, we analyse non-linear autoregressive models with unknown regression function. Based on neural networks, test statistics and estimators are derived for correctly specified as well as for misspecified situations. This part extends the literature, as we analyse test statistics and estimators not only based on the sample residuals. In both sections, the one on tests and the one on the change-point estimator, we end by giving regularity conditions on the model as well as on the parameter estimator.

Finally, a simulation study for the neural-network-based test and estimator is given. We discuss the behaviour under correct specification and misspecification and apply the neural-network-based test and estimator to two data sets.
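The weighted test statistics discussed above can be illustrated with a small sketch. The following is a generic weighted CUSUM for at most one change in the mean, not the robust or neural-network variants developed in the thesis; the weight exponent, variance estimate and data are illustrative assumptions:

```python
import math

def weighted_cusum(x, gamma=0.5):
    """Weighted CUSUM statistic for at most one change in the mean.
    For each candidate split k, the standardized partial-sum deviation
    |S_k - (k/n) S_n| / (sqrt(n) * sd) is reweighted by
    (k/n * (1 - k/n))^(-gamma); gamma > 0 boosts power against changes
    near the boundaries.  Returns the test statistic (max over k) and
    the change-point estimator (argmax).  Assumes constant variance,
    estimated crudely from the pooled sample."""
    n = len(x)
    total = sum(x)
    mean = total / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    best_k, best_t = 1, -1.0
    csum = 0.0
    for k in range(1, n):
        csum += x[k - 1]
        frac = k / n
        t = abs(csum - frac * total) / (math.sqrt(n) * sd)
        t /= (frac * (1 - frac)) ** gamma
        if t > best_t:
            best_k, best_t = k, t
    return best_t, best_k

# mean changes from 0 to 3 after observation 30
x = [0.0] * 30 + [3.0] * 20
stat, k_hat = weighted_cusum(x, gamma=0.25)
print(k_hat)  # -> 30
```

In practice the statistic would be compared against the (Gumbel-type) asymptotic critical values mentioned above; the sketch only shows the mechanics of the weighting and the argmax estimator.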

For many years, most distributed real-time systems employed data communication systems specially tailored to address the specific requirements of individual domains: for instance, Controller Area Network (CAN) and FlexRay in the automotive domain, ARINC 429 [FW10] and TTP [Kop95] in the aerospace domain. Some of these solutions were expensive and not always well understood.
Mostly driven by ever decreasing costs, the application of such distributed real-time systems has drastically increased in the last years in different domains. Consequently, cross-domain communication systems are advantageous. Not only has the number of distributed real-time systems been increasing, but also the number of nodes per system, which in turn increases their network bandwidth requirements. Further, the system architectures have been changing, allowing applications to spread computations among different computer nodes. For example, modern avionics systems moved from federated to integrated modular architectures, also increasing the network bandwidth requirements.
Ethernet (IEEE 802.3) [iee12] is a well-established network standard. Further, it is fast, easy to install, and the interface ICs are cheap [Dec05]. However, Ethernet does not offer any temporal guarantees. Research groups from academia and industry have presented a number of protocols merging the benefits of Ethernet with the temporal guarantees required by distributed real-time systems. Two of these protocols are Avionics Full-Duplex Switched Ethernet (AFDX) [AFD09] and Time-Triggered Ethernet (TTEthernet) [tim16]. In this dissertation, we propose solutions for two problems faced during the design of AFDX and TTEthernet networks: avoiding data loss due to buffer overflow in AFDX networks with multiple priority traffic, and scheduling of TTEthernet networks.
AFDX guarantees bandwidth separation and bounded transmission latency for each communication channel. Communication channels in AFDX networks are not synchronized, and therefore frames might compete for the same output port, requiring buffering to avoid data loss. To avoid buffer overflow and the resulting data loss, the network designer must reserve a safe, but not too pessimistic, amount of memory for each buffer. The current AFDX standard allows for the classification of network traffic with two priorities. Nevertheless, some commercial solutions provide multiple priorities, increasing the complexity of the buffer backlog analysis. The state-of-the-art AFDX buffer backlog analysis does not provide a method to compute deterministic upper bounds for the buffer backlog of AFDX networks with multiple priority traffic. Therefore, in this dissertation we propose a method to address this open problem. Our method is based on the analysis of the largest busy period encountered by frames stored in a buffer. We identify the ingress (and respective egress) order of frames in the largest busy period that leads to the largest buffer backlog, and then compute the respective buffer backlog upper bound. We present experiments to measure the computational costs of our method.
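The flavour of a busy-period-based backlog bound can be shown with a deliberately crude, network-calculus-style sketch. All parameters below are invented, and the bound simply sums worst-case arrivals over the busy period; it is far more pessimistic than the frame-ordering analysis proposed in the dissertation:

```python
import math

def busy_period_and_backlog(vls, link_rate):
    """Crude upper bound on output-port buffer backlog.  Iterate the
    busy-period fixed point  L = sum_i ceil(L / BAG_i) * s_i / C,
    then bound the backlog by the total bytes that can arrive within
    that busy period.  vls is a list of (BAG_seconds, frame_bytes);
    link_rate is in bytes/second."""
    # start with one frame per virtual link in the busy period
    L = sum(s for _, s in vls) / link_rate
    for _ in range(100):
        new_L = sum(math.ceil(L / bag) * s for bag, s in vls) / link_rate
        if new_L == L:
            break
        L = new_L
    backlog_bytes = sum(math.ceil(L / bag) * s for bag, s in vls)
    return L, backlog_bytes

# three virtual links: (BAG in seconds, max frame size in bytes)
vls = [(0.002, 1518), (0.004, 1518), (0.008, 640)]
L, backlog = busy_period_and_backlog(vls, link_rate=12_500_000)  # 100 Mbit/s
print(backlog)  # bytes of buffer to reserve under this crude bound
```

With these (low-utilization) parameters the busy period is shorter than every BAG, so the bound degenerates to one frame per virtual link; the thesis's analysis tightens exactly this kind of bound for multi-priority traffic.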
In TTEthernet, nodes are synchronized, allowing for message transmission at well-defined points in time, computed off-line and stored in a conflict-free scheduling table. The computation of such scheduling tables is an NP-complete problem [Kor92], which should be solved in reasonable time for industrial-size networks. We propose an approach to efficiently compute a schedule for the TT communication channels in TTEthernet networks, in which we model the scheduling problem as a search tree. As the scheduler traverses the search tree, it schedules the communication channels on a physical link. We present two approaches to traverse the search tree while progressively creating its vertices. A valid schedule is found once the scheduler reaches a valid leaf. If, on the contrary, it reaches an invalid leaf, the scheduler backtracks, searching for a path to a valid leaf. We present a set of experiments to demonstrate the impact of the input parameters on the time taken to compute a feasible schedule or to deem the set of virtual links infeasible.
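The search-tree idea can be sketched in miniature: a toy single-link scheduler over discrete time slots, with invented frame parameters, ignoring TTEthernet specifics such as multi-hop paths and synchronization precision. Depth in the tree is the frame index, branches are candidate offsets, and a conflict prunes the branch and triggers backtracking:

```python
def schedule_tt(frames, hyperperiod, slot=1):
    """Backtracking search for conflict-free offsets of time-triggered
    frames on one physical link.  Each frame is (period, duration) in
    slots; an offset is valid if no two transmissions overlap in any
    period instance within the hyperperiod.  Returns a list of offsets,
    or None if the frame set is infeasible."""
    busy = [False] * hyperperiod  # occupancy of each time slot

    def occupies(period, duration, offset):
        for start in range(offset, hyperperiod, period):
            for t in range(start, start + duration):
                yield t % hyperperiod

    def place(i):
        if i == len(frames):
            return []                          # valid leaf reached
        period, duration = frames[i]
        for offset in range(0, period, slot):  # candidate branches
            slots = list(occupies(period, duration, offset))
            if any(busy[t] for t in slots):
                continue                       # conflict: prune branch
            for t in slots:
                busy[t] = True
            rest = place(i + 1)
            if rest is not None:
                return [offset] + rest
            for t in slots:                    # backtrack
                busy[t] = False
        return None                            # invalid leaf: no offset fits

    return place(0)

# periods and durations in slots; hyperperiod = lcm of the periods
print(schedule_tt([(4, 1), (4, 1), (8, 2), (8, 2)], hyperperiod=8))  # -> [0, 1, 2, 6]
```

An over-subscribed frame set (more demanded slots than the hyperperiod provides) makes the search exhaust every branch and return None, mirroring the "deem infeasible" outcome mentioned above.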

Bulk-boundary correspondence in non-equilibrium dynamics of one-dimensional topological insulators
(2017)

Dynamical phase transitions (DPT) are receiving rising interest. They are known to behave analogously to equilibrium phase transitions (EPT) to a large extent. However, it is easy to see that DPT can occur in finite systems, while EPT are only possible in the thermodynamic limit. So far it is not clear how far the analogy between DPT and EPT goes. It was suggested that there is a relation between topological phase transitions (TPT) and DPT, but many open questions remain.
Typically, to study DPT, the Loschmidt echo (LE) after a quench is investigated, where DPT are visible as singularities. For one-dimensional systems, each singularity is connected to a certain critical time scale, which is given by the dispersion in the chain.
In topological free-fermion models with winding numbers 0 or 1, only the LE under periodic boundary conditions (PBC) has been investigated. Under open boundary conditions (OBC), these models are characterized by symmetry-protected edge modes in the topologically non-trivial phase. It is completely unclear how these modes affect DPT. We investigate systems with PBC governed by multiple time scales with a Z topological invariant. For OBC, we provide numerical evidence for the presence of bulk-boundary correspondence in DPT in quenches across a TPT.
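For reference, the standard definitions used in this line of work (not specific to this thesis): the Loschmidt echo after a quench from the ground state \(|\psi_0\rangle\) of the initial Hamiltonian, evolved under the final Hamiltonian \(H\), and its rate function, whose nonanalyticities at critical times \(t^*\) mark the DPT:

```latex
\mathcal{L}(t) = \bigl|\langle \psi_0 \,|\, e^{-iHt} \,|\, \psi_0 \rangle\bigr|^2 ,
\qquad
g(t) = -\lim_{N \to \infty} \frac{1}{N} \ln \mathcal{L}(t)
```

The limit explains why the singularities of \(g(t)\) sharpen with system size \(N\), even though \(\mathcal{L}(t)\) itself is analytic for any finite system.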

This thesis comprises several independent research studies on transition metal complexes as trapped ions in isolation. Electrospray Ionization (ESI) serves to transfer ions from solution into the gas phase for mass spectrometric investigations. Subsequently, a variety of experimental and theoretical methods provide fundamental insights into molecular properties of the isolated complexes: InfraRed (Multiple) Photon Dissociation (IR-(M)PD) spectroscopy provides information on binding motifs and molecular structures at cryogenic temperatures as well as at room temperature. Collision Induced Dissociation (CID) serves to elucidate molecular fragmentation pathways as well as relative stabilities of the complexes at room temperature. Quantum chemical calculations via Density Functional Theory (DFT) substantiate the experimental results and deepen the fundamental insights into the molecular properties of the complexes. Magnetic couplings between metal centers in oligonuclear complexes are investigated by broken-symmetry DFT modelling and X-ray Magnetic Circular Dichroism (XMCD) spectroscopy.

"In contemporary electronics 80% of a chip may perform digital functions but the 20% of analog functions may take 80% of the development time." [1]. Aggravating this, the demands on analog design are increasing with rapid technology scaling. Where possible, most designs have moved away from analog to digital domains; however, interacting with the environment will always require analog-to-digital data conversion. Adding to this problem, the number of sensors used in consumer and industry related products is rapidly increasing. Designers of ADCs are dealing with this problem in several ways, the most important being the migration towards digital designs and time-domain techniques. Time-to-Digital Converters (TDC) are becoming increasingly popular for robust signal processing. Biological neurons make use of spikes, which carry spike timing information and are not affected by the problems related to technology scaling. Neuromorphic ADCs still remain exotic, with few implementations in sub-micron technologies (see Table 2.7). Even among these few designs, the strengths of biological neurons are rarely exploited. A previous work [2], LUCOS, a high-dynamic-range image sensor, validated the efficiency of spike processing. The ideas from this work can be generalized to build a highly effective sensor signal conditioning system, which carries the promise of being robust to technology scaling.
The goal of this work is to create a novel spiking neural ADC as a new form of Multi-Sensor Signal Conditioning and Conversion system, which

- will be able to interface with, or be a part of, a System on Chip with traditional analog or advanced digital components;
- will degrade gracefully;
- will be robust to noise- and jitter-related problems;
- will be able to learn and adapt to static and dynamic errors;
- will be capable of self-repair, self-monitoring and self-calibration.
Sensory systems in humans and other animals analyze the environment using several techniques. These techniques have evolved and been perfected to help the animal survive. Different animals specialize in different sense organs; however, the peripheral neural network architectures remain similar among various animal species, with few exceptions. While many biological sensing techniques exist, the most popularly used engineering techniques are based on intensity detection, frequency detection, and edge detection. These techniques are used with traditional analog processing (e.g., color sensors using filters) and with biological techniques (e.g., the LUCOS chip [2]). The localization capability of animals has never been fully utilized.
One of the most important capabilities of animals, vertebrates or invertebrates, is localization. The object of localization can be a predator, prey, or a source of water or food. Since these are basic necessities for survival, such capabilities evolve much faster under the pressure of natural selection. In fact, localization capabilities, even where the sensors differ, have convergently evolved towards the same processing method (coincidence detection) in the peripheral neurons (e.g., the forked tongue of a snake, the antennae of a cockroach, acoustic localization in fishes and mammals). This convergent evolution increases the validity of the technique. In this work, localization concepts based on acoustic localization and tropotaxis are investigated and employed for the creation of novel ADCs.
Unlike intensity and frequency detection, which are not linear (e.g., eyes saturate in
bright light and lose color perception in low light), localization is inherently linear.
This is mainly because the accurate localization of predator or prey can be the
difference between life and death for an animal.
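The linearity claim can be illustrated with a minimal one-dimensional sketch (hypothetical values, not a model of the actual hardware): for a source between two sensors, the arrival-time difference is exactly proportional to position, with no saturation.

```python
# Hypothetical one-dimensional setup: sensors s1 and s2 sit at -D and +D,
# and a source at position x emits a spike that propagates at speed V.
V = 343.0   # propagation speed (e.g. sound in air, m/s)
D = 1.0     # sensor half-distance (m)

def arrival_times(x):
    """Arrival times at s1 (at -D) and s2 (at +D) for a source at x."""
    return (D + x) / V, (D - x) / V

def localize(t1, t2):
    """Position recovered from the time difference alone: linear in (t1 - t2)."""
    return V * (t1 - t2) / 2.0

# The chain x -> (t1, t2) -> x is exactly linear, with no saturation:
for x in (-0.5, 0.0, 0.25):
    assert abs(localize(*arrival_times(x)) - x) < 1e-12
```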
Figure 1 visually explains the ADC concept proposed in this work. It has two parts:
(1) Sensor-to-Spike(time) Conversion (SSC) and (2) Spike(time)-to-Digital Conversion
(SDC). Both structures have been designed with models of biological neurons. The
combination of these two structures is called SSDC.
To efficiently implement the proposed concept, several biological neural models are
compared and two are shortlisted. Various synapse structures are also studied. From
this study, the Leaky Integrate-and-Fire (LIF) neuron is chosen, since it fulfills all
the requirements of the proposed structure. The analog neuron and synapse designs from
Indiveri et al. [3], [4] were taken, simulations were conducted in Cadence, and the
behavioral equivalence with the biological counterpart was checked. The LIF neuron had
features that were not required for the proposed approach; a simple LIF neuron,
stripped of these features, was designed to be as fast as the technology allows.
The SDC was designed with the neural building blocks, and the delays were implemented
using buffer chains. This SDC converts an incoming Time Interval Code (TIC) into sparse
place coding using coincidence detection. Coincidence detection is a property of spiking
neurons that is the time-domain equivalent of a Gaussian kernel. The SDC is designed to
have an online-reconfigurable Gaussian kernel width, weight, threshold, and refractory
period. The advantage of sparse place codes, which contain rank order coding, was
described in our work [5]. A time-based winner-take-all circuit with memory was created,
based on previous work [6], for reading out sparse place codes asynchronously.
Figure 1: ADC as a localization problem (right), Jeffress model of sound localization
visualized (left). The values t1 and t2 indicate the time taken from the source to s1
and s2, respectively.
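The SDC principle of mapping a time difference to a sparse place code through a bank of coincidence detectors with Gaussian time-domain kernels can be sketched as follows (an illustrative software model with hypothetical parameters, not the chip circuit):

```python
import math

# A bank of coincidence detectors on two anti-parallel delay lines turns a
# time-interval code into a place code. Each detector responds with a
# time-domain Gaussian kernel centred on the delay difference it is tuned to;
# the index of the winning detector is the (sparse place) code.
N_DETECTORS = 9
DELAY_STEP = 1.0      # hypothetical per-stage delay (arbitrary time units)
KERNEL_WIDTH = 0.6    # reconfigurable Gaussian kernel width

def detector_response(delta_t, i):
    """Response of detector i to an input time difference delta_t."""
    preferred = (i - N_DETECTORS // 2) * DELAY_STEP
    return math.exp(-((delta_t - preferred) ** 2) / (2 * KERNEL_WIDTH ** 2))

def place_code(delta_t):
    """Winner-take-all readout: index of the maximally responding detector."""
    responses = [detector_response(delta_t, i) for i in range(N_DETECTORS)]
    return max(range(N_DETECTORS), key=responses.__getitem__)

assert place_code(0.0) == N_DETECTORS // 2       # zero difference -> centre
assert place_code(2.0) == N_DETECTORS // 2 + 2   # shifts linearly with delta_t
```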
The SSC was also initially designed with the same building blocks. Additionally, a
differential synapse was designed for a better SSC. The sensor element considered was
a Wheatstone full-bridge AMR sensor, the AFF755 from Sensitec GmbH. A reconfigurable
version of the synapse was also designed for a more generic sensor interface.
The first prototype chip, SSDCα, was designed with 257 coincidence-detector modules
realizing the SDC and the SSC. Since the spike times carry the most important
information, the spikes can be treated as digital pulses. This enables digital
communication between the analog modules and creates considerable freedom to use
digital processing between them, an advantage that is fully exploited in the design of
SSDCα. Three SSC modules are multiplexed to the SDC; these SSC modules simultaneously
also provide outputs from the chip. A rising-edge-detecting, fixed-pulse-width
generation circuit creates pulses best suited for efficient operation of the SDC. The
delay lines are reconfigurable, to increase robustness and to modify the span of the
SDC. The readout technique used in the first prototype is a relatively slow but safe
shift register; it is used to analyze the characteristics of the core work and will be
replaced by the faster alternatives discussed in this work. The chip has an area of
8.5 mm², a sampling rate from DC to 150 kHz, a resolution from 8 bit to 13 bit, and
28,200 transistors, and was designed in 350 nm CMOS technology from ams. The chip has
been manufactured and tested with a sampling rate of 10 kHz and a theoretical
resolution of 8 bits. However, due to limitations of our Time-Interval Generator, we
were able to confirm only 4 bits of resolution.
The key novel contributions of this work are:
• Neuromorphic implementation of AD conversion as a localization problem, based
on sound localization and tropotaxis concepts found in nature.
• Coincidence detection with sparse place coding to enhance resolution.
• Graceful degradation without redundant elements and inherent robustness to noise,
which aid technology scaling.
• Amenability to local adaptation and self-x features.
All conceptual goals have been fulfilled, with the exception of adaptation; the
feasibility of local adaptation has been shown with promising results, and further
investigation remains for future work. This thesis acts as a baseline, paving the way
for R&D in a new direction. The chip design used the 350 nm ams hitkit as a vehicle to
prove the functionality of the core concept, which can easily be ported to present
aggressively scaled technologies and to future technologies.

In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst's opinion, as proposed in [CLMZ06]. The best estimate given these observations is provided by the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be better solvable than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy: they distinguish between active stocks, which are invested in, and passive stocks, which are not.
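For the special case of logarithmic utility with constant coefficients, the structure of the duality approach can be sketched explicitly (a simplified illustration, not the full partial-information setting of the thesis): with constraint set \(K\) and support function \(\delta_K(\nu)=\sup_{\pi\in K}(-\pi^\top\nu)\), the dual problem minimizes
\[
\delta_K(\nu) + \tfrac{1}{2}\bigl\|\sigma^{-1}(\mu - r\mathbf{1} + \nu)\bigr\|^2
\]
over all \(\nu\) with \(\delta_K(\nu)<\infty\), and the optimal constrained strategy in the auxiliary market is
\[
\pi^* = (\sigma\sigma^\top)^{-1}(\mu - r\mathbf{1} + \nu^*) \in K.
\]
Under no-short-selling constraints, complementary slackness gives \(\pi^*_i = 0\) for stocks with an active shift \(\nu^*_i > 0\) (passive stocks), while stocks with \(\nu^*_i = 0\) are invested in (active stocks).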
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we challenge our theoretical results in a historical and an artificial simulation that are even closer to the real-world market than the setting we used to derive our theoretical results. We nevertheless obtain compelling results, indicating that our optimal strategies can generally outperform benchmark strategies in a real market.

This thesis brings together convex analysis and hyperspectral image processing.
Convex analysis is the study of convex functions and their properties.
Convex functions are important because they can be minimized by efficient algorithms,
and many optimization problems, extending well beyond
the classical image restoration tasks of denoising, deblurring and inpainting,
can be formulated as the minimization of a convex objective function.
At the heart of convex analysis is the duality mapping induced within the
class of convex functions by the Fenchel transform.
In the last decades efficient optimization algorithms have been developed based
on the Fenchel transform and the concept of infimal convolution.
The infimal convolution is of similar importance in convex analysis as the
convolution in classical analysis. In particular, the infimal convolution with
scaled parabolas gives rise to the one parameter family of Moreau-Yosida envelopes,
which approximate a given function from below while preserving its minimum
value and minimizers.
The closely related proximal mapping replaces the gradient step
in a recently developed class of efficient first-order iterative minimization algorithms
for non-differentiable functions. For a finite convex function,
the proximal mapping coincides with a gradient step of its Moreau-Yosida envelope.
Efficient algorithms are needed in hyperspectral image processing,
where several hundred intensity values measured in each spatial point
give rise to large data volumes.
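The relationship between the proximal mapping and the Moreau-Yosida envelope can be made concrete in one dimension for \(f(x)=|x|\) (a standard textbook example, not specific to this thesis): the proximal mapping is soft-thresholding, the envelope is the Huber function, and the envelope's gradient equals \((x-\mathrm{prox}_{\lambda f}(x))/\lambda\).

```python
# Minimal one-dimensional sketch: for f(x) = |x|, the proximal mapping is
# soft-thresholding and the Moreau-Yosida envelope is the Huber function.
# The envelope lower-bounds f while preserving its minimum value and minimizer.

def prox_abs(x, lam):
    """prox_{lam f}(x) for f = |.| : soft-thresholding."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_envelope_abs(x, lam):
    """e_lam(x) = min_y |y| + (x - y)^2 / (2 lam)  (the Huber function)."""
    if abs(x) <= lam:
        return x * x / (2 * lam)
    return abs(x) - lam / 2

lam = 0.5
for x in (-2.0, -0.3, 0.0, 0.7, 3.0):
    # the envelope approximates f from below and preserves the minimum value 0
    assert moreau_envelope_abs(x, lam) <= abs(x) + 1e-12
    # gradient of the envelope: (x - prox(x)) / lam  (check against Huber')
    grad = (x - prox_abs(x, lam)) / lam
    expected = x / lam if abs(x) <= lam else (1.0 if x > 0 else -1.0)
    assert abs(grad - expected) < 1e-12
```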
In the \(\textbf{first part}\) of this thesis, we are concerned with
models and algorithms for hyperspectral unmixing.
As part of this thesis, a hyperspectral imaging system was put into operation
at the Fraunhofer ITWM Kaiserslautern to evaluate the developed algorithms on real data.
Motivated by missing-pixel defects common in current hyperspectral imaging systems,
we propose a
total variation regularized unmixing model for incomplete and noisy data
for the case when pure spectra are given.
We minimize the proposed model by a primal-dual algorithm based on the
proximal mapping and the Fenchel transform.
To solve the unmixing problem when only a library of pure spectra is provided,
we study a modification which incorporates a sparsity regularizer into the model.
We end the first part with the convergence analysis for a multiplicative
algorithm derived by optimization transfer.
The proposed algorithm extends well-known multiplicative update rules
for minimizing the Kullback-Leibler divergence
to solve a hyperspectral unmixing model in the case
when no prior knowledge of the pure spectra is given.
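The flavour of such multiplicative update rules can be illustrated by the classical Lee-Seung-style update for a Kullback-Leibler nonnegative factorization \(V \approx WH\) (a well-known baseline of the kind the thesis algorithm extends; this sketch is not the thesis algorithm itself):

```python
import math

# V (m x n) is approximated by W (m x r) @ H (r x n); all entries positive.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def kl_div(V, WH):
    """Generalized Kullback-Leibler divergence D(V || WH) for positive entries."""
    return sum(v * math.log(v / wh) - v + wh
               for rv, rwh in zip(V, WH) for v, wh in zip(rv, rwh))

def update_H(V, W, H):
    """Multiplicative KL update: H <- H * (W^T (V / WH)) / (W^T 1)."""
    WH = matmul(W, H)
    m = len(V)
    return [[H[k][j] * sum(W[i][k] * V[i][j] / WH[i][j] for i in range(m))
                     / sum(W[i][k] for i in range(m))
             for j in range(len(H[0]))] for k in range(len(H))]

V = [[1.0, 2.0], [3.0, 4.0]]
W = [[1.0, 0.5], [0.5, 1.0]]
H = [[1.0, 1.0], [1.0, 1.0]]
d0 = kl_div(V, matmul(W, H))
for _ in range(50):
    H = update_H(V, W, H)   # stays entrywise positive by construction
d1 = kl_div(V, matmul(W, H))
assert d1 < d0  # the multiplicative update monotonically decreases the divergence
```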
In the \(\textbf{second part}\) of this thesis, we study the properties of Moreau-Yosida envelopes,
first for functions defined on Hadamard manifolds, which are (possibly) infinite-dimensional
Riemannian manifolds of nonpositive sectional curvature,
and then for functions defined on Hadamard spaces.
In particular we extend to infinite-dimensional Riemannian manifolds an expression
for the gradient of the Moreau-Yosida envelope in terms of the proximal mapping.
With the help of this expression we show that a sequence of functions
converges to a given limit function in the sense of Mosco
if the corresponding Moreau-Yosida envelopes converge pointwise at all scales.
Finally we extend this result to the more general setting of Hadamard spaces.
As the reverse implication is already known, this unites two definitions of Mosco
convergence on Hadamard spaces, both of which have been used in the literature
and whose equivalence had not previously been established.

Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification.
Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete. To verify the correctness of environment constraints, Assume-Guarantee reasoning rules can be employed.
However, state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified using an industrial standard property language such as SystemVerilog Assertions (SVA).
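For orientation, the shape of a classical (non-circular) assume-guarantee rule, as opposed to the new rule proposed in this thesis, is
\[
\frac{\langle A \rangle\, M \,\langle G \rangle \qquad \langle \mathit{true} \rangle\, E \,\langle A \rangle}{\langle \mathit{true} \rangle\, M \parallel E \,\langle G \rangle}
\]
read as: if module \(M\) guarantees \(G\) under assumption \(A\), and the environment \(E\) unconditionally guarantees \(A\), then the composition \(M \parallel E\) satisfies \(G\).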
This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment.
Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.
At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating system, are built on it. Therefore, there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems.
The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces.
This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS.
Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.

This thesis presents research studies on the fundamental interplay of diatomic molecules with transition metal compounds under cryogenic conditions. The utilized setup offers a multitude of opportunities to study isolated ions: the ions can be generated either by an ElectroSpray Ionization (ESI) source or by a Laser VAPorization (LVAP) cluster ion source. The setup facilitates kinetic investigations of the ions with different reaction gases under well-defined isothermal conditions. Moreover, it enables cryo InfraRed (Multiple) Photon Dissociation (IR-(M)PD) spectroscopy in combination with tunable OPO/OPA laser systems. In conjunction with density functional theory (DFT) modelling, the IR-(M)PD spectra allow for an assignment of geometric minimum structures. Furthermore, DFT modelling helps to identify possible reaction pathways. Altogether, the presented methods make it possible to gain fundamental insights into the molecular structures and reactivity of the investigated systems.
The first part of this thesis focuses on the interplay of N2 with different transition metal clusters (Con+, Nin+, and Fen+) by cryo IR spectroscopy and cryo kinetics. In conjunction with DFT modelling, the N2 coordination was elucidated (Con+), structures were assigned (Nin+), the concept of structure-related surface adsorption behavior was introduced (Nin+), and a first explanation for the inertness of Fe17+ was given (Fen+). Furthermore, this thesis provides a case study on the coadsorption of H2 and N2 on Ru8+ that elucidates H migration on the Ru cluster. The last part of the thesis addresses the IR spectra of in vacuo generated [Hemin]+ complexes with N2, O2, and CO. Structures and spin states were assigned with the help of DFT modelling.

In the present work, the interaction of diatomic molecules with charged transition metal clusters and complexes was investigated. Temperature controlled isothermal kinetic studies served to elucidate the adsorption behavior of transition metal clusters. Infrared multiple photon dissociation (IR-MPD) experiments in conjunction with density functional theory (DFT) computations enabled the analysis of adsorbate induced changes on the structure and spin multiplicity of transition metal cores. A tandem cryo trap setup was used for the kinetic and spectroscopic investigations of the given compounds as isolated species in the gas phase. The presented investigations enabled insight into the metal-adsorbate bonding and provided cluster size and adsorbate coverage dependent information on cluster surface morphologies.

Computational simulations run on large supercomputers balance their outputs against the needs of the scientist and the capability of the machine. Persistent storage is typically expensive and slow, and its performance grows at a slower rate than the processing power of the machine. This forces scientists to be pragmatic about the size and frequency of the simulation outputs that can later be analyzed to understand the simulation states. Flexible trade-offs between the size and accessibility of simulation outputs are critical to the success of scientists using supercomputers to understand their science. In situ transformation of the simulation state into a form to be persistently stored is the focus of this dissertation.
The extreme size and parallelism of simulations pose challenges for visualization and data analysis. This is coupled with the need to accept pre-partitioned data into the analysis algorithms, which is not always well supported by existing software infrastructures. The work in this dissertation focuses on improving current workflows and software to accept data as it is and to efficiently produce smaller, more information-rich data, for persistent storage, that is easily consumed by end-user scientists. I attack this problem on both a theoretical and a practical basis, transforming completely raw data into information-dense visualizations, and study methods for managing both the creation and the persistence of data products from large-scale simulations.

This paper presents a case study of duty rostering for physicians at a department of orthopedics and trauma surgery. We provide a detailed description of the rostering problem faced and present an integer programming model that has been used in practice for creating duty rosters at the department for more than a year. Using real world data, we compare the model output to a manually generated roster as used previously by the department and analyze the quality of the rosters generated by the model over a longer time span. Moreover, we demonstrate how unforeseen events such as absences of scheduled physicians are handled.
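The actual model in the paper is considerably richer (duty types, qualifications, fairness and absence handling), but the skeleton of such a duty-rostering integer program, in hypothetical notation, can be sketched as follows: binary variables \(x_{p,d}\) indicate whether physician \(p\) is on duty on day \(d\), and one solves
\[
\min \sum_{p,d} c_{p,d}\, x_{p,d}
\quad \text{s.t.} \quad
\sum_{p} x_{p,d} = 1 \ \ \forall d, \qquad
x_{p,d} + x_{p,d+1} \le 1 \ \ \forall p,d, \qquad
L_p \le \sum_{d} x_{p,d} \le U_p \ \ \forall p,
\]
where the first constraint covers every day with exactly one duty, the second enforces a rest day after each duty, the bounds \(L_p, U_p\) balance workloads, and the costs \(c_{p,d}\) encode preferences and penalties.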

In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.

We continue in this paper the study of k-adaptable robust solutions for combinatorial optimization problems with bounded uncertainty sets. In this concept, not a single solution needs to be chosen to hedge against the uncertainty. Instead, one is allowed to choose a set of k different solutions, from which one can be chosen after the uncertain scenario has been revealed. We first show how the problem can be decomposed into polynomially many subproblems if k is fixed. In the remaining part of the paper we consider the special case k=2, i.e., one is allowed to choose two different solutions to hedge against the uncertainty. We decompose this problem into so-called coordination problems. The study of these coordination problems turns out to be interesting in its own right. We prove positive results for the unconstrained combinatorial optimization problem, the matroid maximization problem, the selection problem, and the shortest path problem on series-parallel graphs. The shortest path problem on general graphs turns out to be NP-complete. Further, we show for minimization problems how to transform approximation algorithms for the coordination problem into approximation algorithms for the original problem. We study the knapsack problem to show that this relation does not hold for maximization problems in general. We present a PTAS for the corresponding coordination problem and prove that the 2-adaptable knapsack problem is not approximable at all.

We extend the standard concept of robust optimization by the introduction of an alternative solution. In contrast to the classic concept, one is allowed to choose two solutions, from which the best can be picked after the uncertain scenario has been revealed. We focus in this paper on the resulting robust problem for combinatorial problems with bounded uncertainty sets. We present a reformulation of the robust problem which decomposes it into polynomially many subproblems. In each subproblem one needs to find two solutions which are connected by a cost function that penalizes elements belonging to both solutions. Using this reformulation, we show how the robust problem can be solved efficiently for the unconstrained combinatorial problem, the selection problem, and the minimum spanning tree problem. The robust problem corresponding to the shortest path problem turns out to be NP-complete on general graphs. However, for series-parallel graphs, the robust shortest path problem can be solved efficiently. Further, we show how approximation algorithms for the subproblem can be used to compute approximate solutions for the original problem.
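The benefit of preparing two solutions instead of one can be seen in a toy brute-force computation for the selection problem (illustrative numbers, not from the paper): with unit costs and an adversary who may raise the cost of at most one item, two disjoint solutions strictly beat the best single robust solution.

```python
from itertools import combinations

# Hypothetical instance: choose p = 2 of 4 items with unit costs; after our
# solutions are fixed, the adversary may raise the cost of at most one item.
costs = [1.0, 1.0, 1.0, 1.0]
p, delta = 2, 2.0

def scenario_cost(sol, raised):
    return sum(costs[i] + (delta if i == raised else 0.0) for i in sol)

solutions = list(combinations(range(len(costs)), p))
scenarios = [None] + list(range(len(costs)))   # None = nominal scenario

def worst_case(sols):
    """Adversary moves last; we then pick the better of our prepared solutions."""
    return max(min(scenario_cost(s, r) for s in sols) for r in scenarios)

single = min(worst_case([s]) for s in solutions)
adaptable = min(worst_case([s1, s2]) for s1, s2 in combinations(solutions, 2))
assert adaptable < single   # two prepared (disjoint) solutions strictly help here
```

A disjoint pair such as {0, 1} and {2, 3} guarantees that any single cost increase hits at most one of the two prepared solutions, which is exactly the hedging effect the alternative-solution concept exploits.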

Chlorogenic acids (CGA) are phenolic compounds that form during the esterification of certain trans-cinnamic acids with (-)-quinic acid. According to several human intervention studies, they may have potential health benefits. Coffee is the main source of CGA in human nutrition, and is consumed either alone or in combination with a variety of foods. For this reason, the presented study aimed to clarify whether the simultaneous consumption of food, for example, a breakfast rich in carbohydrates, with instant coffee affects the absorption and bioavailability of CGA. The research specifically focused on how various food matrices, which are consumed at the same time as a coffee beverage, will influence kinetic parameters such as area under the curve (AUC), maximum plasma concentration (cmax), and time needed to reach maximum plasma concentration (tmax).
In a randomized crossover study, fourteen healthy participants consumed either pure instant coffee or coffee with a carbohydrate- or fat-rich meal. All of the subjects consumed the same quantity of CGA (3.1 mg CGA/kg body weight). Blood samples, collected at various time points up to 15 h after instant coffee consumption, were quantitatively analysed. Additionally, three urine collection intervals were chosen over a time period of 24h. High performance liquid chromatography electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS) was used to determine the CGA present, along with the concentrations of respective metabolites.
During a blind data review meeting, 20 of the 56 analysed plasma metabolites were chosen for further statistical analysis. A total of 36 metabolites were monitored in the urine samples. As in the plasma samples, between-treatment differences in various CGA-derived metabolites, measured through AUC, Cmax, and tmax, were estimated. Each treatment was also analysed in terms of the correlation between the plasma AUC and the urinary excretion of seven metabolites.
It is already known that inter-individual variation in CGA absorption depends on gut microbial degradation and affects the efficacy of these compounds. Microorganisms present in the gastrointestinal tract metabolise CGA to form dihydroferulic acid (DHFA) and dihydrocaffeic acid (DHCA) derivatives, which precede the subsequent formation of a wide range of metabolites. Therefore, stool samples were collected from the participants within 12 h before the second study day. Subsequently, an ex vivo incubation of faecal samples with 5-O-caffeoylquinic acid (5-CQA), the main chlorogenic acid found in coffee, was performed. An HPLC system connected to a CoulArray® detector was used to measure the concentrations of 5-CQA and its metabolites. Reduced concentrations of 5-CQA, as well as the appearance of DHCA and caffeic acid (CA) in the gut microbiota medium, were monitored to calculate the inter-individual kinetics for each compound. In addition, these samples were analysed for microbiota content by an external laboratory (L&S, Bad Bocklet, Germany). These results were used to distinguish whether a decreased or increased content of a specific microorganism was related to an individual's decreased or increased metabolic efficiency. Finally, we used the aforementioned results to evaluate whether any correlation could be drawn between plasma appearance, urinary excretion and the ability of microorganisms to degrade 5-CQA.
Strong inter-individual variation was observed for AUC, Cmax and tmax. The AUC measured the quantity of CGA in the plasma samples. We noted that pure instant coffee consumption resulted in a slightly higher CGA bioavailability than instant coffee with the additional consumption of a meal; however, these differences were not statistically significant. Additionally, the metabolites were divided into groups according to similarity and chemical properties. They were further classified into three groups according to their structure and presumed site of appearance: directly from coffee (quinics), after first degradation and metabolism (phenolics: all trans-cinnamic acids and their sulfates and glucuronides), and after colonic degradation and metabolism (colonics: all dihydro compounds). These metabolic classes showed significant differences in AUC values between certain classes, yet no significant between-treatment differences. Our results corroborated earlier studies in that the three caffeoylquinic acid (CQA) isomers were absorbed to a lower extent, whereas all feruloylquinic acids (FQA) were detected in comparably high amounts in the plasma samples of the volunteers. However, the quinic acid conjugates in the plasma samples accounted for only 0.5% of the total amount of identified metabolites. In contrast, at least 8.7% of the investigated compounds were identified as phenolics. Dihydro compounds, the so-called colonics, were identified as the most common metabolites (90.8%). Additionally, dihydroferulic acid (DHFA), meta-dihydrocoumaric acid (mDHCoA), dihydrocaffeic acid-3-sulfate (DHCA3S) and dihydroisoferulic acid (DHiFA) were found to account for 78% of the studied metabolites, and thus represent the most abundant compounds circulating in the plasma after coffee consumption.
Irrespective of treatment, the tmax value for early metabolites (quinic and phenolic compounds) was observed between 0 and 2 h after the ingestion of coffee and tmax value for late metabolites (colonic metabolites) was observed between 7 and 10 h. The amount of colonic metabolites had not returned to the baseline level 15 h after the ingestion of coffee. The co-ingestion of breakfast and coffee, when compared to the ingestion of coffee alone, significantly increased the Cmax values for all quinic and phenolic compounds, as well as two colonic metabolites (DHCA and DHiFA). These differences also revealed that the three treatments differed in terms of the kinetics of release. Thus, future studies should use an extended plasma collection time with shorter intervals (e.g. 2 h) to provide a full pharmacokinetic profile.
There were no statistically significant between-treatment differences in the urine samples collected 24 h after coffee ingestion. However, urine samples collected within six hours of the consumption of coffee alone, or in combination with a fat-rich meal, showed significantly higher CGA quantities than samples collected at the same time point after coffee ingested with a carbohydrate-rich meal. Strong inter-individual variability, and the fact that only 14 healthy subjects participated in the study, hindered the identification of any clear trend between the plasma concentrations of metabolites and their excretion in urine.
Four hours after the ex vivo incubation of 5-CQA with individual faecal samples, the sum of 5-CQA, CA, and DHCA varied strongly between participants. These findings could result from binding effects of the phenolic compounds with faecal constituents, further degradation or metabolism, and/or the release of bound phenolic substances before the experiment started. We hypothesized that for participants with high plasma AUCs of dihydro compounds, the incubation samples would also show high concentrations of CA and DHCA in the incubation medium after four hours. However, no significant correlation could be found.
This study and all of the outcomes were exploratory. Due to the limited number of participants, we could only investigate tendencies for how the co-ingestion of food affects the bioavailability of CGAs and their respective metabolites following coffee consumption. Therefore, the achieved results are only indicative. Despite this limitation, the data highlight that even though all three treatments had strong similarities in the total bioavailability of CGAs and metabolites from instant coffee, there were between-treatment differences in the kinetics of release. The co-ingestion of breakfast and coffee favoured a slow and continuous release of colonic metabolites while non-metabolized coffee components were observed in plasma within the first hour when coffee was ingested alone.
In conclusion, both a shift in gastrointestinal transit time and the plasma metabolite composition were observed when the ingestion of coffee alone or in combination with breakfast were compared. These results showed that breakfast consumption induces the retarded release of chlorogenic acid metabolites in humans. The data from our human intervention study suggest that the bioavailability of chlorogenic acids from coffee and their derivatives does not only depend on chemical structure, molecular size and active or passive transport ability, but is also influenced by inter-individual differences. Therefore, we strongly recommend that future studies include metabolism experiments that focus on microbiota genotypes and/or the genotyping of individual subjects. This type of research could be pivotal to elucidating whether, and how, genotype affects the metabolic profile after chlorogenic acid intake.

Following the ideas presented in Dahlhaus (2000) and Dahlhaus and Sahm (2000) for time series, we build a Whittle-type approximation of the Gaussian likelihood for locally stationary random fields. To achieve this goal, we first extend a Szegö-type formula to the multidimensional, locally stationary case, and second, we derive a set of matrix approximations using elements of the spectral theory of stochastic processes. The minimization of the Whittle likelihood leads to the so-called Whittle estimator \(\widehat{\theta}_{T}\). For the sake of simplicity we assume a known mean (without loss of generality, zero mean); hence \(\widehat{\theta}_{T}\) estimates the parameter vector of the covariance matrix \(\Sigma_{\theta}\).
We investigate the asymptotic properties of the Whittle estimator, in particular uniform convergence of the likelihoods as well as consistency and asymptotic normality of the estimator. A main point is a detailed analysis of the asymptotic bias, which is considerably more difficult for random fields than for time series. Furthermore, we prove that, in the case of model misspecification, the minimum of our Whittle likelihood still converges, the limit being the minimizer of the Kullback-Leibler information divergence.
Finally, we evaluate the performance of the Whittle estimator through simulation studies, the estimation of conditional autoregressive models, and a real-data application.
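For orientation, the general shape of such a Whittle-type contrast can be sketched as follows; the notation here is assumed for this summary and not taken verbatim from the thesis (\(I_{T}(u,\lambda)\) denotes a localized periodogram and \(f_{\theta}(u,\lambda)\) the model's time-varying spectral density at location \(u\) and frequency \(\lambda\)):

```latex
\[
\widehat{\theta}_{T}
  = \operatorname*{arg\,min}_{\theta}\, \mathcal{L}_{T}(\theta),
\qquad
\mathcal{L}_{T}(\theta)
  = \frac{1}{(2\pi)^{d}} \int_{[0,1]^{d}} \int_{\Pi^{d}}
    \left( \log f_{\theta}(u,\lambda)
           + \frac{I_{T}(u,\lambda)}{f_{\theta}(u,\lambda)} \right)
    \, d\lambda \, du .
\]
```

The ratio term matches the model spectral density to the data, and under misspecification the minimizer shifts to the Kullback-Leibler-optimal parameter, consistent with the convergence result stated above.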

Magnetic and Structural Characterization of Isolated Gaseous Ions by XMCD and IRMPD Spectroscopy
(2017)

This thesis comprises four independent research studies on the magnetic and structural characterization of isolated ions in the gas phase. The electrospray ionization (ESI) technique is used for the transfer of (multi-)metallic complexes and organic molecules from solution into the gas phase. The subsequent storage of molecular ions in ion traps enables a variety of spectroscopic methods for investigating the intrinsic properties of the isolated species, free from solvent, crystal lattice, bulk, or supporting-surface effects. The magnetic properties of metal complexes are elucidated by gas phase X-ray magnetic circular dichroism (XMCD) spectroscopy. This element-selective technique, in combination with sum rule analysis, allows for a separate determination of spin and orbital magnetic moments at different metal centers. Structural investigations of isolated molecular ions in terms of coordination sphere, binding motifs and hydrogen bonds are conducted using infrared multiple photon dissociation (IRMPD) spectroscopy. A resonant two-color IRMPD technique serves to increase fragmentation yields, overcome dissociation bottlenecks and reveal otherwise dark bands. Comparison of experimental IRMPD spectra with harmonic absorption spectra calculated by density functional theory (DFT) provides structural assignments for a profound understanding of intra- and intermolecular interactions.

This dissertation describes an indoor localization system based on oscillating magnetic fields and the underlying processing architecture. The system consists of several fixed anchor points generating the magnetic fields (transmitters) and wearable magnetic field measurement units whose positions are to be determined (receivers). The system is evaluated in different environments and application areas. Additionally, various fields of application in ubiquitous and pervasive computing and Ambient Assisted Living are discussed and assessed. The fusion of magnetic field-based distance information and positions derived from LIDAR distance measurements is described and evaluated.
The system architecture consists of three layers: a physical layer; a layer for estimating the position of and distance to a receiver relative to a magnetic field transmitter; and a layer that combines several measurements to different transmitters to estimate the overall position of a wearable measurement unit.
Each layer addresses different aspects that must be taken care of when magnetic field information is processed. In particular, the properties of the generated magnetic fields are taken into account in the processing algorithms.
The physical layer covers the magnetic field generation and magnetic field-based information transfer, the synchronization of a transmitter and the receivers, and the description of the locally measured magnetic fields on the receiver side. After this information is transferred to a central processing unit, the hardware-specific signal levels are transformed to the levels of the theoretical magnetic field models. These values are then used to estimate candidate positions and distances. Due to the symmetry of the magnetic fields, the receiver position can only be narrowed down to 8 points around the transmitter (one in each octant of the coordinate system). The determined positions have a mean error of 108 cm; the mean distance error is 40 cm.
On top of this, the distance and position information with respect to different transmitters is fused; this covers clock synchronization of the transmitters, triggering and scheduling sequences, and distance- and position-based localization and tracking algorithms. The magnetic-field-based indoor localization system has been evaluated in different applications and environments; the mean position error is 60 cm to 70 cm, depending on the environment. A comparison with an RF-based indoor localization system shows the robustness of magnetic fields against RF shadows caused by large metal objects.
We additionally present algorithms for region-of-interest detection, working on raw magnetic field information as well as on transformed position and distance information. Setups in larger areas can distinguish regions that are more than 50 cm apart; small-scale coil setups (3 transmitters in 2 m³) allow regions below 20 cm to be resolved.
Finally, we describe a fusion algorithm for a wearable localization system based on 4 LIDAR distance measurement units and magnetic field-based distance estimation. The magnetic field indoor localization system provides distance proximity information, which is used to resolve ambiguous position estimates of the LIDAR system. In a room (8 m × 10 m), we achieve a mean error of 8 cm.
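The kind of processing described above can be illustrated with a minimal sketch; the calibration constant, anchor coordinates, and brute-force grid search are hypothetical stand-ins for the dissertation's actual estimation algorithms. A dipole-like field decays with 1/r^3, so a distance follows from the measured amplitude, and distances to several fixed transmitters yield a least-squares position fix:

```python
import math

def distance_from_field(b_measured, k_cal):
    """Estimate the transmitter-receiver distance from the measured
    oscillating-field amplitude, assuming a dipole-like 1/r^3 decay.
    k_cal is a hypothetical per-transmitter calibration constant."""
    return (k_cal / b_measured) ** (1.0 / 3.0)

def localize(anchors, distances, step=0.1, extent=10.0):
    """Brute-force least-squares 2D position fix from distances to
    fixed anchor points (illustrative; not the thesis's algorithm)."""
    n = int(extent / step) + 1
    best, best_err = None, float("inf")
    for ix in range(n):
        for iy in range(n):
            x, y = ix * step, iy * step
            # Sum of squared residuals between grid point and measured distances
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in zip(anchors, distances))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With three or more well-placed transmitters the grid search narrows the position to one candidate per symmetry region; the octant ambiguity mentioned above would be resolved at a higher layer of the architecture.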

Manifolds
(2017)

Non-woven materials consist of many thousands of fibres laid down on a conveyor belt under the influence of a turbulent air stream. To improve industrial processes for the production of non-woven materials, we develop and explore novel mathematical fibre and material models.
In Part I of this thesis we improve existing mathematical models describing the fibres on the belt in the meltspinning process. In contrast to existing models, we include the fibre-fibre interaction caused by the fibres' thickness, which prevents the intersection of the fibres and hence results in a more accurate mathematical description. We start from a microscopic characterisation, where each fibre is described by a stochastic functional differential equation, and include the interaction along the whole fibre path, which is described by a delay term. As many fibres are required for the production of a non-woven material, we consider the corresponding mean-field equation, which describes the evolution of the fibre distribution with respect to fibre position and orientation. To analyse the particular case of large turbulences in the air stream, we develop a diffusion approximation which yields a distribution describing the fibre position. Considering the convergence to equilibrium on an analytical level, as well as performing numerical experiments, gives insight into the influence of the novel interaction term in the equations.
In Part II of this thesis we model the industrial airlay process, a production method in which many short fibres build a three-dimensional non-woven material. We focus on the development of a material model based on original fibre properties, machine data and micro computer tomography. A possible linking of these models to other simulation tools, for example virtual tensile tests, is discussed.
The models and methods presented in this thesis promise to further the field of mathematical modelling and computational simulation of non-woven materials.

As there is rising interest in accountability issues and governance in nonprofit organizations, this work aims to provide some notions on the context of these two topics. Hence, within this work, a theoretical framework is developed by which the correlation of accountability and governance in nonprofit organizations shall be measured. This framework suggests that, in nonprofit organizations, nonprofit governance, represented by board members and professionals, has an influence on compliance as a component of accountability. With respect to the board members, it is assumed that board competence, transparency, stakeholder relationships and (public) trust are positively related to compliance. Furthermore, with respect to professionals, it is assumed that the variables performance, training or development and satisfaction are positively correlated with compliance, while empowerment is negatively correlated with it. These assumptions are based on thorough theoretical literature research. Furthermore, a questionnaire is designed to measure the correlations. This questionnaire is elaborated in a discussion following the explanation of the research model. Concluding, some limitations of the research model are given, which should be taken into account when administering the questionnaire.

Due to their superior weight-specific mechanical properties, carbon fibre epoxy composites (CFRP) are commonly used in the aviation industry. However, their brittle failure behaviour limits the structural integrity and damage tolerance in the case of impact (e.g. tool drop, bird strike, hail strike, ramp collision) or crash events. To ensure sufficient robustness, a minimum skin thickness is therefore prescribed for the fuselage, partially exceeding typical service load requirements from ground or flight manoeuvre load cases. A minimum skin thickness is also required for lightning strike protection purposes and to enable state-of-the-art bolted repair technology. Furthermore, the electrical conductivity of CFRP aircraft structures is insufficient for certain applications; additional metal components are necessary to provide electrical functionality (e.g. metal meshes on the outer skin for lightning strike protection, wires for electrical bonding and grounding, overbraiding of cables to provide electromagnetic shielding). The corresponding penalty weights compromise the lightweight potential that is actually offered by the structural performance of CFRP over aluminium alloys.
Former research attempts tried to overcome these deficits by modifying the resin system (e.g. by adding conductive particles or toughening agents) but could not demonstrate sufficient enhancements. A novel holistic approach is the incorporation of highly conductive and ductile continuous metal fibres into CFRP. The basic idea of this hybrid material concept is to take advantage of both the electrical and mechanical capabilities of the integrated metal fibres in order to simultaneously improve the electrical conductivity and the damage tolerance of the composite. The increased density of the hybrid material is overcompensated by eliminating the need for additional electrical system installation items and by the enhanced structural performance, enabling a reduction of the prescribed minimum skin thickness. Advantages over state-of-the-art fibre metal laminates mainly arise from design and processing technology aspects.
In this context, the present work focuses on analysing and optimising the structural and electrical performance of such hybrid composites with shares of metal fibres up to 20 vol.%. Bundles of soft-annealed austenitic steel or copper-clad low carbon steel fibres with filament diameters of 60 or 63 µm are considered. The fibre bundles are distinguished by high elongation at break (32 %) and ultimate tensile strength (900 MPa) or high electrical conductivity (2.4 × 10^7 S/m). Comprehensive investigations are carried out on the fibre bundles as well as on unidirectional and multiaxial laminates. Hybrid composites with both homogeneous and accumulated steel fibre arrangements are taken into account. Electrical in-plane conductivity, plain tensile behaviour, suitability for bolted joints as well as impact and perforation performance of the composite are analysed. Additionally, a novel non-destructive testing method based on measurement of the deformation-induced phase transformation of the metastable austenitic steel fibres is discussed.
The outcome of the conductivity measurements verifies a correlation of the volume conductivity of the composite with the volume share and the specific electrical resistance of the incorporated metal fibres. Compared to conventional CFRP, the electrical conductivity in parallel to the fibre orientation can be increased by one to two orders of magnitude even for minor percentages of steel fibres. The analysis, however, also discloses the challenge of establishing a sufficient connection to the hybrid composite in order to entirely exploit its electrical conductivity.
In case of plain tensile load, the performance of the hybrid composite is essentially affected by the steel fibre-resin adhesion as well as the laminate structure. Uniaxial hybrid laminates show brittle, singular failure behaviour. Exhaustive yielding of the embedded steel fibres is confined to the arising fracture gap. The high transverse stiffness of the isotropic metal fibres additionally intensifies strain magnification within the resin under transverse tensile load. This promotes (intralaminar) inter-fibre failure at minor composite deformation. By contrast, multiaxial hybrid laminates exhibit a distinctive damage evolution. After failure initiation, the steel fibres yield extensively and sustain the load-carrying capacity of angularly (e.g. ±45°) aligned CFRP plies. The overall material response is thus not a simple superposition but a complex interaction of the mechanical behaviour of the composite's constituents. As a result of this post-damage performance, an ultimate elongation of over 11 % can be demonstrated for the hybrid laminates analysed in this work. In this context, the influence of the steel fibre-resin adhesion on the failure behaviour of the hybrid composite is explicated by means of an analytical model. Long-term exposure to corrosive media has no detrimental effect on the mechanical performance of stainless steel fibre reinforced composites. By trend, water uptake increases the maximum elongation at break of the hybrid laminate.
Moreover, the suitability of CFRP for bolted joints can partially be improved by the integration of steel fibres. While the bearing strength remains nearly unaffected, the bypass failure behaviour (ε_{max}: +363 %) as well as the head pull-through resistance (E_{a,BPT}: +81 %) can be enhanced. The improvements primarily concern the load-carrying capacity after failure initiation. Additionally, the integrated ductile steel fibres significantly increase the energy absorption capacity of the laminate in case of progressive bearing failure, by up to 63 %.
However, the hybrid composite exhibits a sensitive low-velocity/low-mass impact behaviour. Compared to conventional CFRP, the damage threshold load of very thin hybrid laminates is lower, making them prone to delamination at minor, non-critical impact energies. At higher energy levels, however, the impact-induced delamination spreads less, since most of the impact energy is absorbed by yielding of the ductile metal fibres instead of crack propagation. This structural advantage over CFRP gains in importance with increasing impact energy. The plastic deformation of the metastable austenitic steel fibres is accompanied by a phase transformation from paramagnetic γ-austenite to ferromagnetic α'-martensite. This change of magnetic behaviour can be used to detect and evaluate impacts on the surface of the hybrid composite, which provides a simple non-destructive testing method. In case of low-velocity/high-mass impact, the integration of ductile metal fibres into CFRP enables large areas of the laminate to be recruited for energy absorption. As a consequence, the perforation resistance of the hybrid composite is significantly enhanced; by addition of approximately 20 vol.% of stainless steel fibres, the perforation strength can be increased by 61 %, while the maximum energy absorption capacity rises by 194 %.

The thesis studies change points in absolute time for censored survival data, with some contributions to the more common analysis of change points with respect to survival time. We first introduce the notions and estimates of survival analysis, in particular the hazard function and censoring mechanisms. Then, we discuss change point models for survival data. In the literature, usually change points with respect to survival time are studied; typical examples are piecewise constant and piecewise linear hazard functions. For this kind of model, we propose a new algorithm for the numerical calculation of maximum likelihood estimates based on a cross-entropy approach, which in our simulations outperforms the common Nelder-Mead algorithm.
Our original motivation was the study of censored survival data (e.g., after diagnosis of breast cancer) over several decades. We wanted to investigate whether the hazard functions differ between various time periods due, e.g., to progress in cancer treatment. This is a change point problem in the spirit of classical change point analysis. Horváth (1998) proposed a suitable change point test based on estimates of the cumulative hazard function. As an alternative, we propose similar tests based on nonparametric estimates of the hazard function. For one class of tests, related to kernel probability density estimates, we fully develop the asymptotic theory for the change point tests. For the other class of estimates, which are versions of the Watson-Leadbetter estimate with censoring taken into account and which are related to the Nelson-Aalen estimate, we discuss some steps towards developing the full asymptotic theory. We close by applying the change point tests to simulated and real data, in particular to the breast cancer survival data from the SEER study.
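To make the piecewise-constant change point model concrete, the following minimal sketch (illustrative code, not taken from the thesis) evaluates the censored log-likelihood of a hazard with a single change point; it is this type of objective that a cross-entropy or Nelder-Mead procedure would maximize over the two hazard levels and the change point location:

```python
import math

def cum_hazard(t, lam1, lam2, tau):
    """Cumulative hazard of a piecewise-constant hazard:
    rate lam1 before the change point tau, rate lam2 after it."""
    return lam1 * min(t, tau) + lam2 * max(0.0, t - tau)

def log_lik(times, events, lam1, lam2, tau):
    """Censored log-likelihood: events[i] = 1 for an observed death,
    0 for a right-censored observation."""
    ll = 0.0
    for t, d in zip(times, events):
        lam = lam1 if t < tau else lam2        # hazard at the event time
        ll += d * math.log(lam) - cum_hazard(t, lam1, lam2, tau)
    return ll
```

Censored observations contribute only through the survival term (the cumulative hazard), while observed deaths additionally contribute the log of the hazard at the event time.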

Novel Pseudocyclopeptides Containing 1,4-Disubstituted 1,2,3-Triazole Subunits for Anion Recognition
(2017)

Anion recognition is one of the most rapidly growing areas in the field of Supramolecular Chemistry due to the vital role of anions in the environment, in biology and in industry. The development of new anion binding motifs that can also be combined with known ones in a novel receptor is a timely topic. In this context, we have synthesized three cyclic pseudopeptides 16, 17 and 18, containing conventional H-bond donors (amides) in combination with triazole C–H or triazole C–I functions.
All three receptors were synthesized by using a combination of peptide and click chemistry. Structural studies show that all three pseudopeptides adopt conformations with the triazole C–H or C–I groups pointing into the cavity center, allowing them to contribute to binding. Quantitative binding studies showed that the cyclic pseudohexapeptide 16 coordinates to oxoanions (sulfate, dihydrogenphosphate, and hydrogenpyrophosphate) with different binding strengths and complex stoichiometries in 2.5 vol% water/DMSO.
The anion selectivity of 16 changes significantly when the cavity size of this pseudopeptide is increased to obtain the larger analog 17. This pseudooctapeptide forms well-defined complexes with protonated phosphate anions. The complexation involves the sandwiching of a cyclic tetramer of dihydrogenphosphate or a dimer of dihydrogenpyrophosphate anions between two pseudopeptide rings. Both complexes were characterized structurally in the solid state. They are stable in solution (2.5 vol% water/DMSO) as a result of the interactions between the hydrogen bond donors of 17 and the oxygen atoms of the anionic aggregates. The complexes can also be transferred to the gas phase without decomposition.
The anion selectivity of 16 was further altered by introducing an iodine atom at the C5 position of the 1,4-disubstituted 1,2,3-triazole units. The corresponding cyclic pseudohexapeptide 18 features a smaller cavity diameter than 17 as a result of the iodine atoms and was therefore found to coordinate only to smaller spherical anions such as chloride. It forms 1:1 complexes with chloride, bromide and iodide in 2.5 vol% water/DMSO. Among the halides, 18 has the highest affinity for chloride, followed by bromide and iodide. The same stability trend was also observed in the gas phase by ESI/MS.
Concluding, I prepared three new macrocyclic pseudopeptides during my PhD and characterized their complexes with anions in terms of structure and affinity. All of these pseudopeptides were shown to interact with phosphate-derived anions, which renders them unique among the anion receptors developed in the Kubik group before.

Redox-neutral decarboxylative coupling reactions have emerged as a powerful strategy for C-C bond formation. However, the existing reaction conditions have limitations: the coupling of aryl halides was restricted to ortho-substituted benzoic acids, and alkenyl halides were not applicable in decarboxylative coupling reactions. Within this thesis, the development of Pd/Cu bimetallic catalyst systems to overcome these limitations is presented.
In the first part of the PhD work, a customized Pd(II)/Cu(I) bimetallic catalyst system was successfully developed to facilitate the decarboxylative cross-coupling of non-ortho-substituted aromatic carboxylates with aryl chlorides. The restriction of decarboxylative cross-coupling reactions to ortho-substituted or heterocyclic carboxylate substrates was overcome by holistic optimization of this bimetallic Cu/Pd catalyst system. All benzoic acids, regardless of their substitution pattern, can now be applied in decarboxylative cross-coupling reactions. This confirms the prediction by DFT studies that the previously observed limitation to certain activated carboxylates is not intrinsic. The catalyst system also performs better in the coupling of ortho-substituted benzoates, giving much higher yields than those previously reported. ortho-Methyl benzoate and ortho-phenyl benzoate, which had never before been converted in decarboxylative coupling reactions, gave reasonable yields. Together, these results further confirm the superiority of the new protocol.
In the second part of the PhD work, arylalkenes syntheses via two different Pd/Cu bimetallic-catalyzed decarboxylative couplings have been developed. This part consists of two projects: 2a) decarboxylative coupling of alkenyl halides; 2b) decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids.
In project 2a, widely available, inexpensive, bench-stable aromatic carboxylic acids are used as nucleophile precursors instead of the expensive and sensitive organometallic reagents commonly used in previously reported transition-metal-catalyzed cross-couplings of alkenyl halides. With this protocol, alkenyl halides are used for the first time in decarboxylative coupling reactions, allowing the regiospecific synthesis of a broad range of (hetero)arylalkenes in high yields. Unwanted double bond isomerization, a common side reaction in the alternative Heck reactions, especially in the coupling of cycloalkenes or aliphatic alkenes, did not take place in this decarboxylative coupling reaction. Polysubstituted alkenes that are hard to access with the Heck reaction are also produced in good yields. The reaction can easily be scaled up to gram scale. The synthetic utility of this reaction was also demonstrated by synthesizing an important intermediate of a fungicidal compound in high yield within two steps.
In project 2b, a Cu/Pd bimetallic-catalyzed decarboxylative Mizoroki-Heck coupling of aryl halides with α,β-unsaturated carboxylic acids was successfully developed, in which the carboxylate group directs the arylation into its β-position before being tracelessly removed via protodecarboxylation. It opens up a convenient synthesis of unsymmetrical 1,1-disubstituted alkenes from widely available precursors. This reaction features good regioselectivity, complementary to that of traditional Heck reactions, and also presents excellent functional group tolerance. Moreover, a one-pot, three-step 1,1-diarylethylene synthesis from methyl acrylate was achieved, in which solvent changes or isolation of intermediates are not required. This subproject presents an example of the utility of carboxylic acids in synthesizing valuable compounds that are hard to access via conventional methodologies.

We discuss the portfolio selection problem of an investor/portfolio manager in an arbitrage-free financial market where a money market account, coupon bonds and a stock are traded continuously. We allow for stochastic interest rates and in particular consider one- and two-factor Vasicek models for the instantaneous short rates. In both cases we consider a complete and an incomplete market setting by adding a suitable number of bonds.
The goal of the investor is to find a portfolio which maximizes expected utility from terminal wealth under budget and present expected shortfall (PESF) risk constraints. We analyze this portfolio optimization problem in both complete and incomplete financial markets in three different cases: (a) when the PESF risk is minimal, (b) when the PESF risk is between minimum and maximum, and (c) without risk constraints. Case (a) corresponds to the portfolio insurer problem, in case (b) the risk constraint is binding, i.e., it is satisfied with equality, and case (c) corresponds to the unconstrained Merton investment.
In all cases we find the optimal terminal wealth and portfolio process using the martingale method and Malliavin calculus, respectively. In particular, we solve the dual problem explicitly in the incomplete market settings. We compare the optimal terminal wealth in the cases mentioned using numerical examples. Without risk constraints, we further compare the investment strategies for the complete and incomplete markets numerically.
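Schematically, and with notation assumed for this summary rather than taken from the thesis (\(\tilde{Z}_{T}\) the state-price density, \(x_{0}\) the initial capital, \(q\) a shortfall benchmark, \(\varepsilon\) the risk bound), the constrained problem in the martingale formulation reads:

```latex
\[
\max_{X_{T}} \; \mathbb{E}\left[ U(X_{T}) \right]
\quad \text{s.t.} \quad
\mathbb{E}\left[ \tilde{Z}_{T} X_{T} \right] \le x_{0},
\qquad
\mathbb{E}\left[ \tilde{Z}_{T} \left( q - X_{T} \right)^{+} \right] \le \varepsilon .
\]
```

Case (a) then corresponds to the smallest feasible \(\varepsilon\), case (b) to an \(\varepsilon\) at which the shortfall constraint binds, and case (c) to dropping that constraint entirely.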

We introduce and investigate a product pricing model in social networks where the value a potential buyer assigns to a product is influenced by the previous buyers. The selling proceeds in discrete, synchronous rounds for some set price, and the individual values are additively altered. Whereas computing the revenue for a given price can be done in polynomial time, we show that the basic problem PPAI, i.e., whether there is a price generating a requested revenue, is weakly NP-complete. With the algorithm Frag we provide a pseudo-polynomial-time algorithm that checks the range of prices in intervals of common buying behavior, which we call fragments. In some special cases, e.g., solely positive influences, graphs with bounded in-degree, or graphs with bounded path length, the number of fragments is polynomial. Since the run-time of Frag is polynomial in the number of fragments, the algorithm itself is polynomial for these special cases. For graphs with positive influences we show that every buyer also buys at lower prices, a property that does not hold for arbitrary graphs. The algorithm FixHighest improves the run-time on these graphs by exploiting this property.
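The round-based selling dynamics of the basic model can be sketched as follows; the encoding of influences as a dictionary and the termination rule are illustrative assumptions for this summary, not the thesis's exact formalization:

```python
def revenue(values, influence, price):
    """Revenue generated by one fixed price in a round-based model.

    values    -- initial product value per node (list of floats)
    influence -- dict mapping (buyer, neighbor) -> additive value change
    """
    vals = list(values)
    bought = set()
    while True:
        # All nodes whose current value reaches the price buy this round.
        round_buyers = [i for i in range(len(vals))
                        if i not in bought and vals[i] >= price]
        if not round_buyers:
            break
        for b in round_buyers:
            bought.add(b)
            # A purchase additively alters the values of the neighbors.
            for (src, dst), w in influence.items():
                if src == b and dst not in bought:
                    vals[dst] += w
    return price * len(bought)
```

Checking each candidate price with such a routine is exactly the polynomial-time revenue computation mentioned above; grouping prices into fragments of identical buying behavior is what makes the search over prices tractable.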
Furthermore, we introduce variations on this basic model. Delayed propagation of influences and delayed awareness of the product can be implemented in our basic model by substituting nodes and arcs with simple gadgets. In the chapter on Dynamic Product Pricing we allow price changes, which raises the complexity even for graphs with solely positive or negative influences. Concerning Perishable Product Pricing, i.e., the selling of products that are usable for some time and can be rebought afterwards, the principal problem is computing the revenue that a given price can generate over some time horizon. In general, this problem is #P-hard, and the algorithm Break runs in pseudo-polynomial time. For polynomially computable revenue, we investigate once more the complexity of finding the best price.
We conclude the thesis with short results in topics of Cooperative Pricing, Initial Value as Parameter, Two Product Pricing, and Bounded Additive Influence.

In this thesis, we consider a problem from modular representation theory of finite groups. Lluís Puig asked the question whether the order of the defect groups of a block \( B \) of the group algebra of a given finite group \( G \) can always be bounded in terms of the order of the vertices of an arbitrary simple module lying in \( B \).
In characteristic \( 2 \), there are examples showing that this is not possible in general, whereas in odd characteristic, no such examples are known. For instance, it is known that the answer to Puig's question is positive in case that \( G \) is a symmetric group, by work of Danz, Külshammer, and Puig.
Motivated by this, we study the cases where \( G \) is a finite classical group in non-defining characteristic or one of the finite groups \( G_2(q) \) or \( ³D_4(q) \) of Lie type, again in non-defining characteristic. Here, we generalize Puig's original question by replacing the vertices occurring in his question by arbitrary self-centralizing subgroups of the defect groups. We derive positive and negative answers to this generalized question.
In addition to that, we determine the vertices of the unipotent simple \( GL_2(q) \)-module labeled by the partition \( (1,1) \) in characteristic \( 2 \). This is done using a method known as the Brauer construction.

Nonwoven materials are used as filter media which are the key component of automotive filters such as air filters, oil filters, and fuel filters. Today, the advanced engine technologies require innovative filter media with higher performances. A virtual microstructure of the nonwoven filter medium, which has similar filter properties as the existing material, can be used to design new filter media from existing media. Nonwoven materials considered in this thesis prominently feature non-overlapping fibers, curved fibers, fibers with circular cross section, fibers of apparently infinite length, and fiber bundles. To this end, as part of this thesis, we extend the Altendorf-Jeulin individual fiber model to incorporate all the above mentioned features. The resulting novel stochastic 3D fiber model can generate geometries with good visual resemblance of real filter media. Furthermore, pressure drop, which is one of the important physical properties of the filter, simulated numerically on the computed tomography (CT) data of the real nonwoven material agrees well (with a relative error of 8%) with the pressure drop simulated in the generated microstructure realizations from our model.
Generally, filter properties for the CT data and the generated microstructure realizations are computed using numerical simulations. Since numerical simulations require extensive system memory and computation time, it is important to find the representative domain size of the generated microstructure for a required filter property. As part of this thesis, simulations and a statistical approach are used to estimate the representative domain size of our microstructure model. Precisely, the representative domain sizes with respect to the packing density, the pore size distribution, and the pressure drop are considered. It turns out that the statistical approach can estimate the representative domain size for a given property more precisely, and using fewer generated microstructures, than the purely simulation-based approach.
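One way to caricature such a statistical approach is sketched below; the `measure` callback, the candidate sizes, and the tolerance threshold are hypothetical stand-ins for the thesis's simulation pipeline and acceptance criterion:

```python
import statistics

def representative_size(measure, sizes, n_real=20, tol=0.05):
    """Return the smallest candidate domain size at which the relative
    spread of a property across independent realizations drops below tol.

    measure(size) -- returns the property of interest (e.g. packing
                     density) for one freshly generated microstructure
                     realization of the given domain size.
    """
    for s in sizes:
        vals = [measure(s) for _ in range(n_real)]
        mean = statistics.mean(vals)
        # Accept the size once realization-to-realization variation is small.
        if mean != 0 and statistics.stdev(vals) / abs(mean) < tol:
            return s
    return None
```

The returned size depends on the property measured, which is why the thesis considers the packing density, the pore size distribution, and the pressure drop separately.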
Among the various properties of fibrous filter media, fiber thickness and orientation are important characteristics which should be considered in the design and quality assurance of filter media. Automatic analysis of images from scanning electron microscopy (SEM) is a suitable tool in that context. Yet, the accuracy of such image analysis tools cannot be judged based on images of real filter media, since their true fiber thickness and orientation can never be known accurately. A solution is to employ synthetically generated models for evaluation. By combining our 3D fiber system model with a simulation of the SEM imaging process, quantitative evaluation of the fiber thickness and orientation measurements becomes feasible. We evaluate the state-of-the-art automatic thickness and orientation estimation method in this way.

In the present work the concept of decarboxylative couplings and the strategy to use carboxylates as directing groups for C-H functionalizations have been decisively improved in three ways. These concepts emphasize the multifaceted nature of aromatic carboxylic acids as expedient starting materials in homogeneous catalysis to construct highly desirable molecular scaffolds in a straightforward fashion.
In the first project, the restriction of decarboxylative biaryl synthesis to couplings of aryl halides exclusively with ortho-substituted benzoic acids has been overcome by a holistic optimization of a Cu/Pd bimetallic catalyst system. This provides the long-postulated proof that decarboxylative cross-couplings are neither intrinsically limited by the different decarboxylation propensities of benzoic acids nor hampered by excess halides, giving access for the first time to the entire spectrum of aromatic carboxylic acids as starting materials for decarboxylative biaryl synthesis. The second project uses the carboxyl moiety as a directing group for the ortho-arylation with aryl bromides and chlorides catalyzed by comparatively inexpensive ruthenium. The carboxylic acid group remains untouched after the ortho-functionalization, opening the way to a wealth of further diversifications via decarboxylative ipso-substitutions. Within the same project, a Cu/Ru bimetallic catalyst system was found that switches the decarboxylative biaryl coupling from the ipso- to the ortho-position, complementing the Cu/Pd system developed in the first project. In a third project, a redox-neutral C-C bond formation revealed the full synthetic potential of the carboxyl group. The COOH moiety acts as a classical directing group for the C-H hydroarylation of internal alkynes to form highly desirable 2-vinyl benzoic acids. With propargylic alcohols, the hydroarylation is followed by an in situ esterification, showing that after easing the C-H cleavage the directing group can be transformed into another functional group, thus acting as a transformable directing group. Most importantly, a fascinating new reaction mode is activated by embedding the decarboxylation within the C-H functionalization event. This mode of action is capable of solving the regioselectivity issues that inherently occur when dealing with carboxylates as directing groups.
A so-called deciduous directing group is cast off simultaneously within the C-H functionalization event, resulting in an inherently monoselective pathway.
These methods were developed with the permanent goal of ensuring high sustainability. They require neither pre-functionalized starting materials nor additional oxidants, and they provide access to a number of chemically relevant molecules from abundant, inexpensive and toxicologically innocuous starting materials.

Temporal Data Management and Incremental Data Recomputation with Wide-column Stores and MapReduce
(2017)

In recent years, "Big Data" has become an important topic in academia and industry. To handle the challenges and problems posed by Big Data, new types of data storage systems called "NoSQL stores" (for "Not only SQL") have emerged.
"Wide-column stores" are one kind of NoSQL store. Compared to relational database systems, wide-column stores introduce a new data model and new IRUD (Insert, Retrieve, Update and Delete) semantics with support for schema flexibility, single-row transactions and data expiration constraints. Moreover, each column stores multiple data versions with associated timestamps. Well-known examples are Google's "Bigtable" and its open-source counterpart "HBase". Recently, such systems have been increasingly used in business intelligence and data warehouse environments to provide decision support, controlling and revision capabilities.
Besides managing current values, data warehouses also require the management and processing of historical, time-related data. Data warehouses frequently employ techniques for processing changes in various data sources and incrementally applying those changes to the warehouse to keep it up to date. Although both incremental data warehouse maintenance and temporal data management have been the subject of intensive research in the relational database world, and commercial database products have finally picked up the ability for temporal data processing and management, such capabilities have not been explored systematically for today's wide-column stores.
This thesis helps to address the shortcomings mentioned above. It carefully analyzes the properties of wide-column stores and the applicability of mechanisms for temporal data management and incremental data warehouse maintenance known from relational databases, extends well-known approaches, and develops new capabilities to provide equivalent support in wide-column stores.
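The versioned-cell semantics described above can be sketched with a toy in-memory store. This is an illustrative sketch only, with hypothetical class and method names; it is not HBase's API, but it mirrors the core ideas of timestamped versions per cell, a bounded version count, and TTL-based data expiration.

```python
import time
from collections import defaultdict

class ToyWideColumnStore:
    """Minimal sketch of wide-column semantics: each (row, column) cell
    keeps multiple timestamped versions, and a TTL can expire old data."""

    def __init__(self, max_versions=3, ttl_seconds=None):
        self.max_versions = max_versions
        self.ttl = ttl_seconds
        # row key -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, value, ts=None):
        versions = self.rows[row][column]
        versions.insert(0, (ts if ts is not None else time.time(), value))
        versions.sort(key=lambda tv: tv[0], reverse=True)
        del versions[self.max_versions:]          # keep newest versions only

    def get(self, row, column, ts=None, now=None):
        """Return the newest value at or before ts (default: latest)."""
        now = now if now is not None else time.time()
        for t, v in self.rows[row][column]:
            if self.ttl is not None and now - t > self.ttl:
                continue                          # expired by TTL
            if ts is None or t <= ts:
                return v
        return None

store = ToyWideColumnStore(max_versions=3)
store.put("order:1", "status", "open",    ts=100)
store.put("order:1", "status", "shipped", ts=200)
print(store.get("order:1", "status"))           # → shipped
print(store.get("order:1", "status", ts=150))   # → open
```

The time-travel read in `get(..., ts=...)` is exactly the primitive that temporal data management and incremental warehouse maintenance build on: a change can be detected by comparing the cell's value "as of" two timestamps.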

How proteins can fold correctly within a few milliseconds is one of the fundamental questions in biochemistry. One intermediate passed through during the folding process is the molten globule (MG) state, which can be stabilized and studied under certain conditions. In this state the secondary structure resembles the native state, while the tertiary structure rather corresponds to the completely unfolded state. In this work, the MG state was investigated using the maltose binding protein (MBP) as an example. For this purpose, MBP was stabilized in the MG state at pH 3.2, which was confirmed by fluorescence spectroscopy. The distances between defined amino acids in the MG state were measured by electron paramagnetic resonance (EPR) using spin labels attached to specifically mutated cysteine pairs, and compared with the distances between the same amino acids in the native state. Using seven different double mutants, the peripheral structure was analyzed by pulsed EPR; two further double mutants served to investigate the structure of the molecular binding pocket of MBP by CW EPR. In the MG state, the presence of maltose led to a clear change in the distances of certain spin labels in the peripheral structure. This indicates that MBP can bind maltose even in the MG state. This assumption was confirmed by isothermal titration calorimetry (ITC); the results show, however, that the binding process between MBP and maltose in the MG state proceeds with an 11-fold lower binding enthalpy than in the native state. The distances of the spin label pairs next to the binding pocket of MBP did not differ between the MG state and the native state, either with or without maltose. These results indicate that, in the MG state, MBP already possesses a clearly formed tertiary structure around the binding pocket.
To confirm these findings, further investigations should be carried out with additional double mutants and more sensitive measurements such as DQC.

In this dissertation the convergence of binomial trees for option pricing is investigated. The focus is on American and European put and call options. For this purpose, variations of the binomial tree model are reviewed.
In the first part of the thesis we investigate the convergence behavior of trees already known from the literature (CRR, RB, Tian and CP) for European options. The CRR and RB trees suffer from irregular convergence, so our first aim is to find a way to achieve smooth convergence. We first show what causes these oscillations, which also helps us to improve the rate of convergence. We then introduce the Tian and CP trees and prove that their order of convergence is \(O \left(\frac{1}{n} \right)\).
Afterwards we introduce the Split tree, explain its properties, prove its convergence, and derive an explicit first-order error formula. In our setting, the splitting time \(t_{k} = k\Delta t\) is not fixed, i.e. it can be any time between 0 and the maturity \(T\). This is the main difference compared to the model from the literature. In particular, we show that the good properties of the CRR tree for \(S_{0} = K\) can be preserved even when this condition does not hold (which is usually the case). We achieve convergence of order \(O \left(n^{-\frac{3}{2}} \right)\) and typically obtain better results when the tree is split later.
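The CRR tree referred to above is the standard textbook construction; a short sketch for a European put makes the convergence experiment concrete (parameter values are illustrative, not taken from the thesis):

```python
import math

def crr_european_put(S0, K, r, sigma, T, n):
    """Price a European put on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, node j = number of up moves
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # backward induction through the tree
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# doubling n repeatedly illustrates the convergence behavior toward
# the Black-Scholes value (about 5.573 for these parameters)
for n in (50, 100, 200, 400):
    print(n, crr_european_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=n))
```

Plotting the error against n for strikes away from \(S_{0}\) exhibits the oscillations discussed in the first part; smoothing them is precisely what the Tian, CP and Split constructions address.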

This thesis investigates the absorption properties and ultrafast electronic dynamics of organic dye molecules and supramolecular photocatalysts in the gas phase. For the first time, a relatively little-known experimental method was employed very intensively, namely time-resolved pump-probe photofragmentation spectroscopy. The combination of a commercial quadrupole ion trap mass spectrometer with a femtosecond laser system makes it possible to map the intrinsic electronic properties of molecular ionic systems. In addition to excited-state population dynamics, vibrational and rotational wave-packet dynamics were observed and documented with this method for the first time.
The first part of the thesis presents the results of investigations of selected fluorescein derivatives and a carbocyanine dye. Although these model systems initially served only to evaluate the capabilities of the experimental setup, the investigations also yielded profound insights into the electronic structure of isolated organic dyes that had not been documented in the literature before.
The second part deals with the investigation of three supramolecular ionic systems for photocatalytic hydrogen generation. Two of these systems again served to evaluate the experimental setup. In addition to the electronic population dynamics, polarization-dependent measurements provided further insight into the electron transfer process, a key aspect of the mode of action of supramolecular catalysts. The insights gained were finally used to study a novel catalyst. It turned out that the lability of the ligand sphere at the catalytic metal center severely impedes investigations of the intact system in solution, so that meaningful results can only be obtained with a gas-phase method such as the one used here.
The experimental results are supported by quantum chemical calculations of energetic minimum structures and transition-state structures, as well as by computed vibrational and UV/Vis absorption spectra using (time-dependent) density functional theory (DFT & TD-DFT).

In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n that are closed under minima, have a conductor, and fulfill special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
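A standard one-dimensional example (not taken from the thesis, where the interesting phenomena occur for n > 1) illustrates the canonical ideal and the symmetry it detects. For a numerical semigroup, the simplest case of a good semigroup, take

```latex
\[
  S = \langle 3,5 \rangle = \{0,3,5,6,8,9,10,\dots\} \subseteq \mathbb{N},
  \qquad \text{gaps } \{1,2,4,7\},\quad F = 7,
\]
\[
  K = \{\, z \in \mathbb{Z} : F - z \notin S \,\}.
\]
```

Since \(7-1=6\), \(7-2=5\), \(7-4=3\) and \(7-7=0\) all lie in \(S\), one checks that \(K = S\): the semigroup is symmetric, matching the fact that the semigroup ring \(k[[t^3,t^5]]\) is Gorenstein.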
In the second part, we treat Macaulay's inverse system, a one-to-one correspondence which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with a chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive-dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive-dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.

The main goal of this work was to study the applicability of a polymer film heat exchanger concept to applications in the chemical industry, such as the condensation of organic solvents. The polymer film heat exchanger investigated is a plate heat exchanger with very thin (0.025 - 0.1 mm) plates or films, which separate the fluids and enable the heat transfer. After a successful application of this concept to seawater desalination in a previous work, the next step is chemical engineering, where the chemical resistance of polymers to aggressive fluids is the key challenge.
Two approaches were pursued in this work. The first one was experimental and included the study of the chemical and mechanical resistance of preselected films made of polymer materials such as polyimide (PI), polyethylene terephthalate (PET) and polytetrafluoroethylene (PTFE). To simulate realistic operating conditions in a heat exchanger, the films were exposed to combined thermal (up to 90 °C) and mechanical pressure loads (4-6 bar) in permanent contact with relevant organic solvents, such as toluene, hexane, heptane and tetrahydrofuran (THF). Furthermore, a lab-scale apparatus and a full-scale demonstrator were manufactured in cooperation with two industrial partners. These were used to investigate the heat transfer performance in operating modes with and without phase change.
In addition to the experimental work, a coupled finite element - computational fluid dynamics (FEM-CFD) model was developed, based on fluid-structure interaction (FSI). Two major tasks had to be solved here. The first one was the modelling of the condensation process, based on available mathematical models and energy balances. The second one was the consideration of the partially reversible deformation of the film during operation: since this deformation changes the geometry of the fluid channels, it also influences the overall performance of the apparatus, which made the coupled FEM-CFD treatment necessary.
During the experimental study of the chemical resistance of the films, the PTFE film showed the best performance and hence can be used with all four tested solvents. For the polyimide film, failure on exposure to THF was observed, and the PET film can only be used with water and hexane. With the lab-scale heat exchanger and the full-scale demonstrator, competitive overall heat transfer coefficients between 270 W/m²K and 700 W/m²K could be reached in the liquid-liquid (water-water, water-hexane) operation mode without phase change. For the condensation process, overall heat transfer coefficients of up to 1700 W/m²K could be obtained.
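Why such thin polymer films remain competitive despite the low thermal conductivity of polymers can be made plausible with a resistances-in-series estimate. The convective coefficients and material conductivities below are illustrative textbook assumptions, not measurements from this work:

```python
def overall_u(h_hot, h_cold, wall_thickness, wall_conductivity):
    """Overall heat transfer coefficient for a plane wall between two fluids:
    1/U = 1/h_hot + t/k + 1/h_cold (SI units, result in W/m^2K)."""
    return 1.0 / (1.0 / h_hot + wall_thickness / wall_conductivity
                  + 1.0 / h_cold)

h = 1500.0        # W/m^2K convective coefficient per side (assumption)
k_polymer = 0.25  # W/mK, typical order for PET/PI films (assumption)
k_steel = 15.0    # W/mK, stainless steel for comparison

u_film = overall_u(h, h, 0.05e-3, k_polymer)  # 0.05 mm polymer film
u_steel = overall_u(h, h, 1.0e-3, k_steel)    # 1 mm steel plate
print(round(u_film), round(u_steel))          # → 652 714
```

With a 0.05 mm film the wall contributes only a small fraction of the total thermal resistance, so the polymer apparatus lands close to a conventional metal plate, consistent with the measured 270-700 W/m²K range.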
The numerical approach led to a well-functioning coupled model at a very small scale (1 cm²). An upscale, however, failed due to the enormous hardware resources required to simulate the entire full-scale demonstrator. The main reason for this is the very low thickness of the films, which requires tiny mesh element sizes (<0.05 mm) to model the deformation of the film. The modelling of the liquid-liquid heat transfer provided acceptable accuracy (approx. 10%), but at very low rates the deviations were higher (over 30%). The results of the condensation modelling were ambivalent. On the one hand, a physically plausible model was developed which could map the entire condensation process. On the other hand, the corresponding energy balance revealed major inaccuracies and hence could not be used to determine the overall heat transfer, showing the current limits of the FEM-CFD approach.