### Refine

#### Year of publication

- 2014 (77)

#### Document Type

- Doctoral Thesis (47)
- Preprint (27)
- Article (1)
- Periodical Part (1)
- Report (1)

#### Language

- English (77)

#### Faculty / Organisational entity

- Fachbereich Mathematik (40)
- Fachbereich Informatik (15)
- Fachbereich Maschinenbau und Verfahrenstechnik (6)
- Fachbereich Sozialwissenschaften (5)
- Fachbereich Chemie (4)
- Fachbereich Elektrotechnik und Informationstechnik (3)
- Fachbereich ARUBI (1)
- Fachbereich Biologie (1)
- Fachbereich Physik (1)
- Fraunhofer (ITWM) (1)

In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions for them are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
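To make the tiling concrete, the following sketch shows how a quadtree of terrain tiles can be addressed through HEALPix NEST indices, where each pixel has exactly four children at the next resolution level. It uses the healpy library as a stand-in and only illustrates the parametrization, not the rendering system developed in the thesis.

```python
# Sketch: addressing quadtree terrain tiles on a sphere via HEALPix NEST indices.
# Uses the healpy library as a stand-in; the thesis' own tiling scheme may differ.
import healpy as hp

def tile_index(lon_deg, lat_deg, level, base_nside=1):
    """Return the HEALPix NEST pixel containing (lon, lat) at the given quadtree level."""
    nside = base_nside * (2 ** level)          # each level quadruples the pixel count
    return hp.ang2pix(nside, lon_deg, lat_deg, nest=True, lonlat=True)

def children(pixel):
    """In NEST ordering, the four children of a pixel at the next level."""
    return [4 * pixel + i for i in range(4)]

def parent(pixel):
    return pixel // 4

# Example: locate the tile containing Olympus Mons (approx. 226.2 E, 18.65 N) at level 8.
pix = tile_index(226.2, 18.65, level=8)
print("level-8 tile:", pix, "children:", children(pix), "parent:", parent(pix))
```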
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
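A minimal sketch of the profile measurement idea follows: the two picked surface points are connected and heights are queried from the terrain database at its finest level. The accessor height_at is a hypothetical placeholder for the actual HEALPix tile lookup.

```python
# Sketch: sampling a height profile between two picked points.
# `height_at(lon, lat, level)` is a hypothetical accessor into the quadtree terrain
# database at its finest resolution level; the real tool works on HEALPix tiles.
# For simplicity the path is interpolated linearly in lon/lat; a great-circle path
# would be more accurate over long distances.
import numpy as np

def height_profile(p0, p1, height_at, n_samples=256, level=12):
    """Interpolate between two (lon, lat) picks and query heights along the path."""
    lons = np.linspace(p0[0], p1[0], n_samples)
    lats = np.linspace(p0[1], p1[1], n_samples)
    return np.array([height_at(lon, lat, level) for lon, lat in zip(lons, lats)])
```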
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
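The core geometric idea can be illustrated in a few lines: vertices on one side of the traced fault are rigidly translated by the hypothesized offset. The sketch below is a 2D toy version with a straight fault trace; the actual algorithm operates on the spherical terrain model on the GPU and supports arbitrary traced fault lines.

```python
# Sketch: shifting terrain vertices on one side of a (straight) fault trace.
# Toy 2D version of the core idea only; the thesis' algorithm runs on the GPU
# and handles arbitrary fault traces on the spherical terrain model.
import numpy as np

def deform(vertices, fault_a, fault_b, offset):
    """Translate every vertex lying left of the line a->b by `offset`."""
    a, b = np.asarray(fault_a, float), np.asarray(fault_b, float)
    d = b - a
    # sign of the 2D cross product tells on which side of the fault a vertex lies
    side = np.sign(d[0] * (vertices[:, 1] - a[1]) - d[1] * (vertices[:, 0] - a[0]))
    out = vertices.copy()
    out[side > 0] += np.asarray(offset, float)
    return out

verts = np.array([[0.0, 1.0], [2.0, -1.0], [3.0, 2.0]])
print(deform(verts, fault_a=(0, 0), fault_b=(4, 0), offset=(0.5, 0.0)))
```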
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data presented using a billboard rendering technique while the time-variant flow field is indicated using Line Integral Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection are supported.
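The particle advection can be sketched as a per-frame Runge-Kutta step through the sampled flow field; velocity(positions, t) stands in for the interpolated simulation output, and the swirl field below is only a toy example.

```python
# Sketch: advecting injected particles through a time-dependent flow field.
# `velocity(positions, t)` is a hypothetical sampler of the simulated ocean flow;
# the VR tool performs such a step per frame to keep the particle traces interactive.
import numpy as np

def advect(positions, velocity, t, dt):
    """One 4th-order Runge-Kutta step for all particles at once."""
    k1 = velocity(positions, t)
    k2 = velocity(positions + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(positions + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(positions + dt * k3, t + dt)
    return positions + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# toy rotating flow as a stand-in for the simulated field
swirl = lambda p, t: np.stack([-p[:, 1], p[:, 0]], axis=1)
pts = np.array([[1.0, 0.0], [0.5, 0.5]])
for step in range(3):
    pts = advect(pts, swirl, t=step * 0.1, dt=0.1)
print(pts)
```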

Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate. However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only in the case of Gaussian and some heavy-tailed changes in risk factors.
Hence, our main goal is to keep the main advantage of the Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the closest one to the optimal importance sampling density (the zero-variance estimator) out of some parametric family of densities with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter whether light- or heavy-tailed. The numerical study exhibits a greater variance reduction than the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as shown in the numerical study.
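As a rough illustration of the cross-entropy idea (not the algorithms developed here), the following sketch estimates a tail probability for a toy loss function: a pilot run fits a mean shift for a Gaussian importance sampling density, which for this parametric family is the cross-entropy optimal choice, and the final estimate applies the corresponding likelihood ratio.

```python
# Sketch: cross-entropy importance sampling for P(L(X) > x) with X ~ N(0, I).
# The IS density is chosen from the mean-shifted Gaussian family; for this family
# the CE program reduces to an indicator-weighted mean of pilot samples. The loss
# function here is a toy stand-in for a (possibly approximated) portfolio loss.
import numpy as np

rng = np.random.default_rng(0)
dim, threshold = 5, 8.0
loss = lambda X: X.sum(axis=1)             # toy portfolio loss (stand-in)

# --- CE step: fit the shift parameter from a pilot run with a relaxed level -----
pilot = rng.standard_normal((200_000, dim))
hit = loss(pilot) > 0.6 * threshold        # relaxed level so some pilot samples qualify
theta = pilot[hit].mean(axis=0)            # CE-optimal mean shift for this family

# --- IS estimate with likelihood-ratio correction --------------------------------
Y = rng.standard_normal((200_000, dim)) + theta
lr = np.exp(-Y @ theta + 0.5 * theta @ theta)   # N(0,I) density / N(theta,I) density
p_hat = np.mean((loss(Y) > threshold) * lr)
print("estimated tail probability:", p_hat)
```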
The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, describing the corresponding transition process only briefly. Hence, we give a step-by-step discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given.

This work presents a framework for the computation of complex geometries containing intersections of multiple patches with Reissner-Mindlin shell elements. The main objective is to provide an isogeometric finite element implementation which requires neither drilling rotation stabilization nor user interaction to quantify the number of rotational degrees of freedom for every node. For this purpose, the following set of methods is presented. Control points with corresponding physical location are assigned to one common node for the finite element solution. A nodal basis system in every control point is defined, which ensures an exact interpolation of the director vector throughout the whole domain. A distinction criterion for the automatic quantification of rotational degrees of freedom for every node is presented. An isogeometric Reissner-Mindlin shell formulation is enhanced to handle geometries with kinks and to allow for arbitrary intersections of patches. The parametrization of adjacent patches along the interface has to be conforming. The shell formulation is derived from the continuum theory and uses a rotational update scheme for the current director vector. The nonlinear kinematics allow the computation of large deformations and large rotations. Two concepts for the description of rotations are presented. The first one uses an interpolation which is commonly used in standard Lagrange-based shell element formulations. The second scheme uses a more elaborate concept proposed by the authors in prior work, which increases the accuracy for arbitrary curved geometries. Numerical examples show the high accuracy and robustness of both concepts. The applicability of the proposed framework is demonstrated.

Due to the increasing number of natural and man-made disasters, the application of operations research methods in evacuation planning has seen a rising interest in the research community. From the beginning, evacuation planning has been highly focused on car-based evacuation. Recently, the evacuation of transit-dependent evacuees with the help of buses has also been considered.
In this case study, we apply two such models and solution algorithms to evacuate a core part of Kathmandu, the metropolitan capital of Nepal, as a hypothetical endangered region where a large part of the population is transit dependent. We discuss the computational results for evacuation time under a broad range of possible scenarios, and derive planning suggestions for practitioners.

As the complexity of embedded systems continuously rises, their development becomes more and more challenging. One technique to cope with this complexity is the employment of virtual prototypes. The virtual prototypes are intended to represent the embedded system’s properties on different levels of detail like register transfer level or transaction level. Virtual prototypes can be used for different tasks throughout the development process. They can act as executable specification, can be used for architecture exploration, can ease system integration, and allow for pre- and post-silicon software development and verification. The optimization objectives for virtual prototypes and their creation process are manifold. Finding an appropriate trade-off between the simulation accuracy, the simulation performance, and the implementation effort is a major challenge, as these requirements are contradictory.
In this work, two new and complementary techniques for the efficient creation of accurate and high-performance SystemC based virtual prototypes are proposed: Advanced Temporal Decoupling (ATD) and Transparent Transaction Level Modeling (TTLM). The suitability for industrial environments is assured by the employment of common standards like SystemC TLM-2.0 and IP-XACT.
Advanced Temporal Decoupling enhances the simulation accuracy while retaining high simulation performance by allowing for cycle-accurate simulation in the context of SystemC TLM-2.0 temporal decoupling. This is achieved by exploiting the local time warp arising in SystemC TLM-2.0 temporally decoupled models to support the computation of resource contention effects. In ATD, accesses to shared resources are managed by Temporal Decoupled Semaphores (TDSems), which are integrated into the modeled shared resources. The set of TDSems assures the correct execution order of shared resource accesses and incorporates timing effects resulting from shared resource access execution and resource conflicts. This is done by dynamically varying the data granularity of resource accesses based on information gathered from the local time warp. ATD facilitates modeling of a wide range of resource and resource access properties such as preemptable and non-preemptable accesses, synchronous and asynchronous accesses, multiport resources, dynamic access priorities, interacting and cascaded resources, and user-specified schedulers prioritizing simultaneous resource accesses.
Transparent Transaction Level Modeling focuses on the efficient creation of virtual prototypes by reducing the implementation effort and consists of a library and a code generator. The TTLM library adds a layer of convenience functions to ATD, comprising various application programming interfaces for inter-module communication, virtual prototype configuration and run-time information extraction. The TTLM generator is used to automatically generate the structural code of the virtual prototype from the formal hardware specification language IP-XACT.
The applicability and benefits of the presented techniques are demonstrated using an image processing centric automotive application. Compared to an existing cycle accurate SystemC model, the implementation effort can be reduced by approximately 50% using TTLM. Applying ATD, the simulation performance can be increased by a factor of up to five while retaining cycle accuracy.

Mechanical ventilation of patients with severe lung injury is an important clinical treatment to ensure proper lung oxygenation and to mitigate the extent of collapsed lung regions. While current imaging technologies such as Computed Tomography (CT) and chest X-ray allow for a thorough inspection of the thorax, they are limited to static pictures and exhibit several disadvantages, including exposure to ionizing radiation and high cost. Electrical Impedance Tomography (EIT) is a novel method to determine functional processes inside the thorax such as lung ventilation and cardiac activity. EIT reconstructs the internal electrical conductivity distribution within the thorax from voltage measurements on the body surface. Conductivity changes correlate with important clinical parameters such as lung volume and perfusion. Current EIT systems and algorithms use simplified or generalized thorax models to solve the reconstruction problem, which reduce image quality and anatomical significance. In this thesis, the development of a clinically relevant workflow to compute sophisticated three-dimensional thorax models from patient-specific CT data is described. The method allows medical experts to generate a multi-material segmentation in an interactive and fast way, while a volumetric mesh is computed automatically from the segmentation. The significantly improved image quality and anatomical precision of EIT images reconstructed with these 3D models is reported, and the impact on clinical applicability is discussed. In addition, three projects concerning quantitative CT (qCT) measurements and multi-modal 3D visualization are presented, which demonstrate the importance and productivity of interdisciplinary research groups including computer scientists and medical experts. The results presented in this thesis contribute significantly to clinical research efforts to pave the way towards improved patient-specific treatments of lung injury using EIT and qCT.

The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Power and thermal management/balancing is of increasing concern, poses a technological challenge for MCP development, and will be a main performance bottleneck in the development of MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior for both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged using the 2D technique, which means that there is only one layer of dies in the chip. The chip thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D PDE heat conduction equation, is proposed to describe the thermal behavior of each core. The thermal behavior of the MCP is therefore described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced packaging technology in which at least two layers of dies are stacked in one chip. Different from the 2D package, a cooling system should be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered, and the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. Under each velocity, the system can be described by a linear ODE model, and the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
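A toy version of such a switched thermal model is sketched below: a two-block R-C network whose system matrix depends on the coolant velocity, integrated with a simple forward-Euler loop. All parameter values are illustrative and not taken from the thesis.

```python
# Sketch: a tiny switched linear R-C thermal model of two stacked blocks cooled by a
# micro-channel. dT/dt = A(v) T + B p, where the coolant velocity v selects the
# system matrix. All values are illustrative, not taken from the thesis.
import numpy as np

def A_of(v):
    """Thermal coupling matrix; higher coolant velocity means stronger heat removal."""
    g_cool = 0.5 + 2.0 * v          # block-to-coolant conductance grows with velocity
    g_link = 1.0                    # conductance between the two stacked blocks
    return np.array([[-(g_link + g_cool), g_link],
                     [g_link, -(g_link + g_cool)]])

B = np.eye(2)                       # each block is heated by its own power input
T = np.array([45.0, 50.0])          # initial temperatures above ambient
p = np.array([3.0, 1.0])            # power dissipation per block (illustrative)

dt = 0.01
for k in range(2000):
    v = 1.0 if T.max() > 48.0 else 0.2   # crude velocity switching rule
    T = T + dt * (A_of(v) @ T + B @ p)   # forward-Euler step of the active submodel
print("steady temperatures above ambient:", T.round(2))
```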
The models and controllers developed in this thesis are verified by simulation experiments in MATLAB. The IBM Cell 8-core processor and the water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ are used as the experimental test cases.

According to domain-specific models of speech perception, speech is supposed to be processed distinctively compared to non-speech. This assumption is supported by many studies dealing with the processing of speech and non-speech stimuli. However, the complexity of both stimulus classes is not matched in most studies, which might be a confounding factor according to the cue-specific models of speech perception. One solution is spectrally rotated speech, which has already been used in a range of fMRI and PET studies. In order to be able to investigate the role of stimulus complexity, vowels, spectrally rotated vowels, and a second non-speech condition with two bands of sinusoidal waves, representing the first two formants of the vowels, were used in the present thesis. A detailed description of the creation and the properties of the whole stimulus set is given in Chapter 2 (Experiment 1) of this work. These stimuli were used to investigate the auditory processing of speech and non-speech sounds in a group of dyslexic adults and age-matched controls (Experiment 2). The results support the assumption of a general auditory deficit in dyslexia. In order to compare the sensory processing of speech and non-speech in healthy adults on the electrophysiological level, the stimuli were also presented within a multifeature oddball paradigm (Experiment 3). Vowels evoked a larger mismatch negativity (MMN) compared to both non-speech stimulus types. The MMNs evoked by tones and by spectrally rotated tones were compared in Experiment 4 to investigate the role of harmony. No difference in the area of the MMN was found, indicating that the results found in Experiment 3 were not moderated by the harmonic structure of the vowels. All results are discussed in the context of the domain- and cue-specific models of speech perception.

We consider two major topics in this thesis: spatial domain partitioning, and its use as a framework to simulate creep flows in representative volume elements.
First, we introduce a novel multi-dimensional space partitioning method. A new type of tree combines the advantages of the Octree and the KD-tree without having their disadvantages. We present a new data structure allowing local refinement, parallelization and proper restriction of transition ratios between nodes. Our technique has no dimensional restrictions at all. The tree's data structure is defined by a topological algebra based on the symbols \( A = \{ L, I, R \} \) that encode the partitioning steps. The set of successors is restricted such that each node has the partition of unity property, so that domains are partitioned without overlap. With our method it is possible to construct a wide range of spline spaces to compress or reconstruct scientific data such as pressure and velocity fields and multidimensional images. We present a generator function to build a tree that represents a voxel geometry. The space partitioning system is used as a framework for numerical computations. This work is triggered by the problem of representing, in a numerically appropriate way, huge three-dimensional voxel geometries that may contain up to billions of voxels. Such large datasets occur in situations where large representative volume elements (REVs) have to be handled.
Second, we introduce a novel approach of variable arrangement for pressure and velocity to solve the Stokes equations. The basic idea of our method is to arrange variables in such a way that each cell is able to satisfy a given physical law independently of its neighbor cells. This is done by splitting velocity values into a left and a right converging component. For each cell we can set up a small linear system that describes the momentum and mass conservation equations. This formulation allows the use of the Gauß-Seidel algorithm to solve the global linear system. Our tree structure is used for spatial partitioning of the geometry and provides a proper initial guess. In addition, we introduce a method that uses the current velocity field to refine the tree and improve the numerical accuracy where it is needed. We developed a novel approach rather than using existing approaches such as the SIMPLE algorithm, Lattice-Boltzmann methods or explicit jump methods, since these are suited to regular grid structures. Other standard CFD approaches extract surfaces and create tetrahedral meshes to solve on unstructured grids and thus cannot be applied to our data structure. The discretization converges to the analytical solution with respect to grid refinement. We observe substantial advantages in computational time and memory for high-porosity geometries, and in memory requirements for low-porosity geometries.
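For reference, a plain Gauss-Seidel sweep on a small explicit system is sketched below; the thesis applies the same iteration cell by cell on the tree structure rather than on an assembled matrix.

```python
# Sketch: a plain Gauss-Seidel sweep, the iteration used to relax the global system
# assembled from the per-cell momentum and mass conservation equations. This generic
# version works on an explicit matrix, unlike the cell-wise variant in the thesis.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated values of x[:i] within the same sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])  # diagonally dominant
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```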

In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data. The testing procedure should in general be less expensive and at the same time require less testing time. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important aspect; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome the issues discussed in the previous paragraph, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that the damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
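The effect of block loads can be illustrated with the standard Palmgren-Miner/Basquin damage model, under which the total damage becomes a smooth function of the block amplitudes and cycle counts. The constants below are illustrative, and the thesis' damage model may differ in detail.

```python
# Sketch: total damage of a block-load program via Miner's rule with a Basquin S-N
# curve, N(S) = C * S**(-k). With block loads, each block contributes n_i / N(S_i),
# so the damage is a smooth, differentiable function of the amplitudes and no
# rainflow counting is needed. Constants are illustrative, not the thesis' values.
import numpy as np

def block_load_damage(amplitudes, cycles, C=1e15, k=5.0):
    """Palmgren-Miner damage sum for constant-amplitude blocks (damage > 1: failure)."""
    amplitudes = np.asarray(amplitudes, float)
    cycles = np.asarray(cycles, float)
    endurable = C * amplitudes ** (-k)       # Basquin: cycles to failure at amplitude S
    return float(np.sum(cycles / endurable))

print(block_load_damage(amplitudes=[120.0, 80.0], cycles=[5e4, 2e5]))
```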
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the interval of plane angles to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled using Gaussian functions. The parameters for the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is then similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in a GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the ones given for the unimodality of GMMs by Aprausheva et al. in [1].
Using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained through clustering is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate location of each cluster's maximum is used as the starting point of a fixed-point iteration on the original damage function to obtain the actual maximum total damage at each iteration.
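A minimal sketch of this maximization step, for an illustrative sum of Gaussian functions, is given below; the fixed-point update follows from setting the derivative of the sum to zero, and the cluster's approximate maximum serves as the starting point.

```python
# Sketch: locating a maximum of a sum of (unnormalized) Gaussian functions via the
# fixed-point iteration obtained from setting the derivative to zero:
#   x = sum_i(g_i(x) m_i / s_i^2) / sum_i(g_i(x) / s_i^2).
# Weights, centres and widths below are only illustrative.
import numpy as np

w = np.array([1.0, 0.8, 0.5])      # weights of the Gaussian summands
m = np.array([0.0, 0.4, 3.0])      # their centres (e.g. plane angles)
s = np.array([0.5, 0.6, 0.4])      # their widths

def gaussians(x):
    return w * np.exp(-0.5 * ((x - m) / s) ** 2)

def find_maximum(x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        g = gaussians(x)
        x_new = np.sum(g * m / s**2) / np.sum(g / s**2)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# start from the approximate maximum of the first cluster (the two nearby Gaussians)
print(find_maximum(x0=0.2))
```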
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage requires fewer iterations and is more reliable than using discretization.

Test rig optimization
(2014)

Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete and continuous aspects. At the same time we manage to avoid the necessity to deal with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and hence makes optimization of the test rig intractable.

The block structure of the load signals is assumed from the beginning. This greatly reduces the complexity of the problem without decreasing the feasible set. Also, we optimize with respect to the actuators' positions, which makes it possible to take torques into account and thus extend the feasible set. As a result, the new model gives significantly better results compared with the other approaches in test rig optimization.

Under certain conditions, the non-convex test rig problem is a union of convex problems on cones. Numerical methods for optimization usually need constraints and a starting point. We describe an algorithm that detects each cone and its interior point in polynomial time.

The test rig problem belongs to the class of bilevel programs. For every instance of the state vector, a sum of functions has to be maximized. We propose a new branch and bound technique that uses local maxima of every summand.

When stimulus and response overlap in a choice-reaction task, enhanced performance can be observed. This so-called Stimulus-Response Compatibility (SRC) effect has been shown to appear for a variety of different stimulus features such as numerical or physical size, luminance, or pitch height. While many of these SRC effects have been investigated in an isolated manner, fewer studies focus on possible interferences when more than one stimulus dimension is varied. The present thesis investigated how the SRC effect of pitch heights, the so-called SPARC effect (Spatial Pitch Associations of Response Codes), is influenced by additionally varied stimulus information. In Study 1, the pitch heights of presented tones were varied along with timbre categories under two different task and pitch range conditions and with two different response alignments. Similarly, in Study 2, pitch heights as well as numerical values were varied within sung numbers under two different task conditions. The results showed simultaneous SRC effects appearing independently of each other in both studies: In Study 1, an expected SRC effect of pitch heights with horizontal responses (i.e., a horizontal SPARC effect) was observed. More interestingly, an additional and unexpected SRC effect of timbre with response sides presented itself independently of this SPARC effect. Similar results were obtained in Study 2: Here, an SRC effect for pitch heights (SPARC) and an SRC effect for numbers (i.e., the SNARC effect, or Spatial Numerical Association of Response Codes) were observed, and again the effects did not interfere with each other. Thus, results indicate that SPARC with horizontal responses does not interfere with SRC effects of other, simultaneously varied stimulus dimensions. These findings are discussed within the principle of polarity correspondence and the dimensional overlap model as theoretical accounts for SRC effects. In sum, it appears that the different types of information corresponding to the varied stimulus dimensions enter the decision stage of stimulus processing from separate channels.

ABSTRACT
"Spin and orbital contribution to the magnetic moment of transition metal clusters and complexes"
The spin and orbital contributions to the magnetic moments of isolated iron \(Fe_n^+\) \((7 ≤ n ≤ 18)\), cobalt \(Co_n^+\) \((8 ≤ n ≤ 22)\) and nickel \(Ni_n^+\) \((7 ≤ n ≤ 17)\) clusters were investigated. Experimental access to both contributions is possible through the application of x-ray magnetic circular dichroism (XMCD) spectroscopy. XMCD spectroscopy is based on x-ray absorption spectroscopy (XAS). It exploits the fact that for a magnetic sample the resonant absorption cross sections for negatively and positively circularly polarized x-rays differ for the transition from a spin-orbit split ground state to the valence level. The resulting dichroic effects contain the information about the magnetism of the investigated sample, which can be extracted from the experimental spectrum via application of the so-called sum rules. However, only the projections of the magnetic moments onto the quantization axis are experimentally accessible, which corresponds to the magnetization of the sample.
We developed a method to apply XMCD spectroscopy to isolated clusters in the gas phase. A modified Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometer was used to record the XA spectra in Total Ion Yield (TIY) mode, i.e. by recording the fragmentation intensity of the clusters as a function of x-ray energy. The clusters can be considered a superparamagnetic ensemble; thus, the magnetization follows a Langevin curve. The intrinsic magnetic moments can be calculated by Langevin correction of the experimental magnetic moments, since the cluster temperature and the magnetic field are known.
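A minimal sketch of the Langevin correction is given below: the measured (projected) moment equals \( \mu \, L(\mu B / k_B T) \) with \( L(x) = \coth x - 1/x \), and the intrinsic moment \( \mu \) is obtained by numerical inversion. The numbers are purely illustrative.

```python
# Sketch: Langevin correction of a measured (projected) magnetic moment. For a
# superparamagnetic ensemble the measured moment is mu * L(mu*B/(kB*T)) with the
# Langevin function L(x) = coth(x) - 1/x; the intrinsic moment mu is recovered by
# numerically inverting this relation. Values below are purely illustrative.
import numpy as np
from scipy.optimize import brentq

MU_B = 9.274009994e-24   # Bohr magneton (J/T)
K_B = 1.380649e-23       # Boltzmann constant (J/K)

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

def intrinsic_moment(measured_mu_b, B, T):
    """Solve measured = mu * L(mu*B/(kB*T)) for mu (moments in Bohr magnetons)."""
    f = lambda mu: mu * langevin(mu * MU_B * B / (K_B * T)) - measured_mu_b
    return brentq(f, measured_mu_b + 1e-9, 1e4)   # intrinsic moment exceeds measured one

print(intrinsic_moment(measured_mu_b=8.0, B=5.0, T=20.0))
```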
The spin and the orbital magnetic moments are enhanced compared to the respective bulk values for all three investigated elements. The enhancement of the orbital contribution is more pronounced, by about a factor of 3-4 compared to the bulk, than for the spin magnetic moment. However, if compared to the atomic value, both contributions are quenched. The orbital magnetic moment only amounts to about 10-15 % of the atomic value, while the spin retains about 80 % of its atomic value. If the magnetic moments found for the clusters are put into perspective with respect to the atomic and bulk values by means of scaling laws, it becomes evident that both contributions follow different interpolations between the atomic and bulk value. The spin follows the well-known \(n^{-1/3} \propto 1/(\text{cluster radius})\) trend (n = number of atoms per cluster, assuming a spherical particle). This trend relates to the ratio of surface to inner atoms in a spherical particle. Hence, our interpretation is that the spin magnetic moment seems to follow the surface area of the cluster. On the other hand, the orbital magnetic moment follows \(1/n \propto 1/(\text{cluster volume})\).
First XA spectra recorded with circularly polarized x-rays of a Single Molecule Magnet (SMM) \([Fe_4Ln_2(N_3)_4(Htea)_4(piv_6)]\) (Ln = Gd, Tb; \(H_3tea\) = triethanolamine, Hpiv = pivalic acid) are presented.

In the present work, the phase transitions in different Fe/FeC systems were studied using molecular dynamics simulations with the Meyer-Entel interaction potential (and the Johnson potential for the Fe-C interaction). Fe-bicrystal, thin film, Fe-C bulk and Fe-C nanowire systems were investigated to study the behaviour of the phase transition, and the energetics, dynamics and transformation pathways were analysed.

The sink location problem is a combination of network flow and location problems: from a given set of nodes in a flow network, a minimum cost subset \(W\) has to be selected such that given supplies can be transported to the nodes in \(W\). In contrast to its counterpart, the source location problem, which has already been studied in the literature, sinks in general have a limited capacity. Sink location has a decisive application in evacuation planning, where the supplies correspond to the number of evacuees and the sinks to emergency shelters.
We classify sink location problems according to capacities on shelter nodes, simultaneous or non-simultaneous flows, and single or multiple assignments of evacuee groups to shelters. Resulting combinations are interpreted in the evacuation context and analyzed with respect to their worst-case complexity status.
There are several approaches to tackle these problems: generic solution methods for uncapacitated problems are based on source location and modifications of the network. In the capacitated case, for which source location cannot be applied, we suggest alternative approaches which work in the original network. It turns out that the latter class of algorithms is superior to the former. This is established in numerical tests including random data as well as real-world data from the city of Kaiserslautern, Germany.
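To make the problem statement concrete, the following toy sketch solves a tiny uncapacitated instance by brute force, using max-flow feasibility checks; the network, supplies and shelter costs are invented for illustration, and the algorithms of the thesis are considerably more refined.

```python
# Sketch: a brute-force version of (uncapacitated) sink location on a toy network.
# Among candidate shelter nodes, find a cheapest subset W such that all supplies can
# be routed to W (checked with a max-flow computation to a super-sink).
from itertools import combinations
import networkx as nx

G = nx.DiGraph()
G.add_edge("a", "b", capacity=6)
G.add_edge("a", "c", capacity=4)
G.add_edge("b", "d", capacity=3)
G.add_edge("c", "d", capacity=3)
supply = {"a": 8}                       # evacuees starting at node a
candidates = {"b": 3, "c": 2, "d": 1}   # candidate shelters with opening costs

def feasible(W):
    H = G.copy()
    for v, s in supply.items():
        H.add_edge("super_source", v, capacity=s)
    for w in W:                          # uncapacitated shelters absorb any amount
        H.add_edge(w, "super_sink")      # no capacity attribute = infinite capacity
    value, _ = nx.maximum_flow(H, "super_source", "super_sink")
    return value >= sum(supply.values())

best = min((W for r in range(1, len(candidates) + 1)
            for W in combinations(candidates, r) if feasible(W)),
           key=lambda W: sum(candidates[w] for w in W))
print("cheapest feasible shelter set:", best)
```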

This dissertation focuses on the evaluation of the technical and environmental sustainability of water distribution systems based on scenario analysis. A decision support system is created to assist in the decision-making process and to visualize the results of the sustainability assessment for current and future populations and scenarios. First, a methodology is developed to assess the technical and environmental sustainability of current and future water distribution system scenarios. Then, scenarios are produced to evaluate alternative solutions for the current water distribution system as well as future populations and water demand variations. Finally, a decision support system is proposed using a combination of several visualization approaches to increase the data readability and robustness of the sustainability evaluations of the water distribution system.
The technical sustainability of a water distribution system is measured using the sustainability index methodology which is based on the reliability, resiliency and vulnerability performance criteria. Hydraulic efficiency and water quality requirements are represented using the nodal pressure and water age parameters, respectively. The U.S. Environmental Protection Agency EPANET software is used to simulate hydraulic (i.e. nodal pressure) and water quality (i.e. water age) analysis in a case study. In addition, the environmental sustainability of a water network is evaluated using the “total fresh water use” and “total energy intensity” indicators. For each scenario, multi-criteria decision analysis is used to combine technical and environmental sustainability criteria for the study area.
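One common formulation of such a sustainability index, computed here from a nodal pressure series against a service threshold, is sketched below; the exact criteria definitions and the multi-criteria aggregation with the environmental indicators used in this dissertation may differ.

```python
# Sketch: reliability / resiliency / vulnerability and a sustainability index computed
# from a nodal pressure time series against a service threshold. This follows one
# common formulation (SI = Rel * Res * (1 - Vul)); the dissertation's exact
# definitions may differ.
import numpy as np

def sustainability_index(pressure, threshold):
    pressure = np.asarray(pressure, float)
    ok = pressure >= threshold
    reliability = ok.mean()                                    # share of satisfactory time steps
    failures = ~ok
    recoveries = np.sum(failures[:-1] & ok[1:])                # failure -> recovery transitions
    resiliency = recoveries / failures.sum() if failures.any() else 1.0
    deficits = threshold - pressure[failures]
    vulnerability = deficits.mean() / threshold if failures.any() else 0.0
    return reliability * resiliency * (1.0 - vulnerability)

# hourly pressures (m) at one node over a day, illustrative values
pressures = [32, 31, 30, 28, 25, 24, 27, 30, 33, 34, 33, 32,
             31, 29, 26, 25, 28, 31, 33, 34, 35, 34, 33, 32]
print(round(sustainability_index(pressures, threshold=28.0), 3))
```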
The technical and environmental sustainability assessment methodology is first applied to the baseline scenario (i.e. the current water distribution system). Critical locations where hydraulic efficiency and water quality problems occur in the current system are identified. Two major scenario options are considered to increase the sustainability at these critical locations. These scenarios focus on creating alternative systems in order to test and verify the technical and environmental sustainability methodology rather than obtaining the best solution for the current and future water distribution systems. The first scenario is a traditional approach to increasing the hydraulic efficiency and water quality. This scenario includes using additional network components such as booster pumps, valves, etc. The second scenario is based on using a reclaimed water supply to meet the non-potable water demand and fire flow. The fire flow simulation is specifically included in the sustainability assessment since regulations have a significant impact on urban water infrastructure design. Eliminating the fire flow requirement from potable water distribution systems would assist in saving fresh water resources as well as in reducing detention times.
The decision support system is created to visualize the results of each scenario and to effectively compare these results with each other. The EPANET software is a powerful tool used to conduct hydraulic and water quality analysis, but its visualization capabilities are limited for decision support purposes. Therefore, in this dissertation, the hydraulic and water quality simulations are completed using EPANET software and the results for each scenario are visualized by combining several visualization techniques in order to provide better data readability. The first technique introduced here is using small multiple maps instead of the animation technique to visualize the nodal pressure and water age parameters. This technique eliminates change blindness and provides easy comparison of time steps. In addition, a procedure is proposed to aggregate the nodes along the edges in order to simplify the water network. A circle view technique is used to visualize two values of a single parameter (i.e. the nodal pressure or water age). The third approach is based on fitting the water network into a grid representation, which assists in eliminating the irregular geographic distribution of the nodes and improves the visibility of each circle view. Finally, a prototype for an interactive decision support tool is proposed for the current population and water demand scenarios. Interactive tools enable analysis of the aggregated nodes and provide information about the results of each of the current water distribution scenarios.

Geometric Programming is a useful tool with a wide range of applications in engineering. As in real-world problems input data is likely to be affected by uncertainty, Hsiung, Kim, and Boyd introduced robust geometric programming to include the uncertainty in the optimization process. They also developed a tractable approximation method to tackle this problem. Further, they pose the question whether there exists a tractable reformulation of their robust geometric programming model instead of only an approximation method. We give a negative answer to this question by showing that robust geometric programming is co-NP hard in its natural posynomial form.
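For reference, a geometric program in its natural posynomial form, and the robust variant in which the positive coefficients are only known to lie in an uncertainty set, can be written as follows (notation follows the usual GP literature rather than a specific formulation from this work):

```latex
% A geometric program in posynomial form: each f_i is a posynomial,
%   f_i(x) = \sum_k c_{ik} x_1^{a_{ik1}} \cdots x_n^{a_{ikn}},  with c_{ik} > 0.
% In the robust version the coefficient vector c is only known to lie in an
% uncertainty set U, and every constraint must hold for all realizations in U.
\begin{align*}
\text{nominal GP:}\quad & \min_{x > 0} \; f_0(x)
  \quad \text{s.t.} \quad f_i(x) \le 1, \quad i = 1, \dots, m, \\
\text{robust GP:}\quad & \min_{x > 0} \; f_0(x)
  \quad \text{s.t.} \quad \sup_{c \in U} f_i(x; c) \le 1, \quad i = 1, \dots, m.
\end{align*}
```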

We consider a network flow problem, where the outgoing flow is reduced by a certain percentage in each node. Given a maximum amount of flow that can leave the source node, the aim is to find a solution that maximizes the amount of flow which arrives at the sink.
Starting from this basic model, we include two new, additional aspects: On the one hand, we are able to reduce the loss at some of the nodes; on the other hand, the exact loss values are not known, but may come from a discrete uncertainty set of exponential size.
Applications for problems of this type can be found in evacuation planning, where one would like to improve the safety of nodes such that the number of evacuees reaching safety is maximized.
We formulate the resulting robust flow problem with losses and improvability as a mixed-integer program for finitely many scenarios, and present an iterative scenario-generation procedure that avoids the inclusion of all scenarios from the beginning. In a computational study using both randomly generated instances and realistic data based on the city of Nice, France, we compare our solution algorithms.
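The nominal (single-scenario) flow problem with losses can be written as a small linear program, as sketched below on an invented four-arc network; the robust, improvable version with scenario generation is not reproduced here.

```python
# Sketch: the nominal flow-with-losses problem as a small LP. Each intermediate node v
# forwards only (1 - loss[v]) of its inflow; at most `budget` units may leave the
# source; the objective is to maximize the flow arriving at the sink.
import numpy as np
from scipy.optimize import linprog

arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
cap = [6.0, 6.0, 10.0, 10.0]
loss = {"a": 0.3, "b": 0.1}           # fraction of flow lost at each intermediate node
budget = 8.0                          # maximum flow allowed to leave the source

idx = {arc: i for i, arc in enumerate(arcs)}
n = len(arcs)

# conservation with losses: outflow(v) - (1 - loss[v]) * inflow(v) = 0
A_eq, b_eq = [], []
for v, lam in loss.items():
    row = np.zeros(n)
    for (u, w), i in idx.items():
        if u == v:
            row[i] += 1.0
        if w == v:
            row[i] -= (1.0 - lam)
    A_eq.append(row)
    b_eq.append(0.0)

# budget on total flow leaving the source
A_ub = [[1.0 if u == "s" else 0.0 for (u, w) in arcs]]
b_ub = [budget]

# maximize flow into the sink  <=>  minimize its negative
c = [-1.0 if w == "t" else 0.0 for (u, w) in arcs]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, ci) for ci in cap], method="highs")
print("flow reaching the sink:", -res.fun)
```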

For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while keeping as many as possible of the useful properties of context-free grammars (most notably cubic parsing time), has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e. the subword of the final word that eventually is derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.

We show that, unless the sets of allowed positions and lengths are very complex, the languages in our classes can be parsed in the same time as context-free grammars, using slight adaptations of well-known parsing algorithms. We also show that they form a proper hierarchy above the context-free languages and examine their relation to language classes defined by other types of regulated rewriting.

We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear and left-linear base grammars, respectively, and the corresponding class of automata, finite automata with position counter.

Finally, as an application of our idea, we introduce length-dependent stochastic context-free grammars and show how they can be employed to improve the quality of predictions for RNA secondary structures.
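The following toy recognizer illustrates the basic idea of position-and-length-dependent rule application: a CYK-style parser in which every rule carries a predicate on the position and length of its yield. Grammar and predicates are invented for illustration and do not reflect the exact formalism or the refined algorithms of the thesis.

```python
# Sketch: a CYK-style recognizer in which every rule carries a predicate on the
# (position, length) of its yield, i.e. the rule may only be applied if its yield
# starts at an allowed position and has an allowed length. The toy grammar below
# accepts nonempty words over {a, b} of length at most 4 in which the letter 'a'
# occurs only among the first two positions.

# rules: (lhs, rhs, predicate(position, length)); positions are 0-based
terminal_rules = [("S", "a", lambda p, l: p < 2),
                  ("C", "a", lambda p, l: p < 2),
                  ("S", "b", lambda p, l: True),
                  ("C", "b", lambda p, l: True)]
binary_rules = [("S", ("C", "S"), lambda p, l: l <= 4)]

def recognize(word):
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]   # table[i][l]: spans word[i:i+l]
    for i, ch in enumerate(word):
        for lhs, t, pred in terminal_rules:
            if t == ch and pred(i, 1):
                table[i][1].add(lhs)
    for l in range(2, n + 1):                 # span length
        for i in range(n - l + 1):            # span start
            for split in range(1, l):
                for lhs, (left, right), pred in binary_rules:
                    if left in table[i][split] and right in table[i + split][l - split] \
                            and pred(i, l):
                        table[i][l].add(lhs)
    return n > 0 and "S" in table[0][n]

print(recognize("abbb"), recognize("bba"), recognize("abbbb"))   # True False False
```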

Regular physical activity is essential to maintain or even improve an individual's health. There exist various guidelines on how much activity individuals should perform. Therefore, it is important to monitor performed physical activities during people's daily routine in order to tell how far they meet professional recommendations. This thesis pursues the goal of developing a mobile, personalized physical activity monitoring system applicable in everyday life scenarios. From the mentioned recommendations, this thesis concentrates on monitoring aerobic physical activity. Two main objectives are defined in this context. On the one hand, the goal is to estimate the intensity of performed activities: to distinguish activities of light, moderate or vigorous effort. On the other hand, to give a more detailed description of an individual's daily routine, the goal is to recognize basic aerobic activities (such as walk, run or cycle) and basic postures (lie, sit and stand).
With recent progress in wearable sensing and computing, the technological tools largely exist nowadays to create the envisioned physical activity monitoring system. Therefore, the focus of this thesis is on the development of new approaches for physical activity recognition and intensity estimation, which extend the applicability of such systems. In order to make physical activity monitoring feasible in everyday life scenarios, the thesis deals with questions such as 1) how to handle a wide range of e.g. everyday, household or sport activities and 2) how to handle various potential users. Moreover, this thesis deals with the realistic scenario where either the currently performed activity or the current user is unknown during the development and training phase of activity monitoring applications. To answer these questions, this thesis proposes and develops novel algorithms, models and evaluation techniques, and performs thorough experiments to prove their validity.
The contributions of this thesis are both of theoretical and of practical value. Addressing the challenge of creating robust activity monitoring systems for everyday life, the concept of other activities is introduced, and various models are proposed and validated. Another key challenge is that complex activity recognition tasks exceed the potential of existing classification algorithms. Therefore, this thesis introduces a confidence-based extension of the well-known AdaBoost.M1 algorithm, called ConfAdaBoost.M1. Thorough experiments show its significant performance improvement compared to commonly used boosting methods. A further major theoretical contribution is the introduction and validation of a new general concept for the personalization of physical activity recognition applications, and the development of a novel algorithm (called Dependent Experts) based on this concept. A major contribution of practical value is the introduction of a new evaluation technique (called leave-one-activity-out) to simulate the scenario in which previously unknown activities are performed in a physical activity monitoring system. Furthermore, the creation and benchmarking of publicly available physical activity monitoring datasets within this thesis directly benefit the research community. Finally, the thesis deals with issues related to the implementation of the proposed methods, in order to realize the envisioned mobile system and integrate it into a full healthcare application for aerobic activity monitoring and support in daily life.
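A minimal sketch of the leave-one-activity-out protocol is given below: in each fold, all samples of one activity are withheld from training and used only at test time, simulating a previously unknown activity. Data and classifier are placeholders, not the models developed in the thesis.

```python
# Sketch: leave-one-activity-out (LOAO) evaluation. In each fold, all samples of one
# activity are withheld from training and used only for testing, simulating the case
# of a previously unknown activity. Feature matrix X, activity labels y and the
# intensity targets z are hypothetical placeholders for a real monitoring dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))              # sensor features (placeholder)
y = rng.integers(0, 4, size=600)           # activity id of each sample (placeholder)
z = y % 3                                  # coarse intensity class (light/moderate/vigorous)

for held_out in np.unique(y):
    train, test = y != held_out, y == held_out
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train], z[train])            # intensity model never sees the held-out activity
    acc = clf.score(X[test], z[test])
    print(f"activity {held_out} held out: intensity accuracy = {acc:.2f}")
```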