### Refine

#### Year of publication

- 2015 (82)

#### Document Type

- Doctoral Thesis (61)
- Preprint (13)
- Article (2)
- Bachelor Thesis (2)
- Conference Proceeding (1)
- Diploma Thesis (1)
- Other (1)
- Periodical Part (1)

#### Language

- English (82)

#### Keywords

- NURBS (2)
- finite element method (2)
- isogeometric analysis (2)
- tractor (2)
- verification (2)
- ADAU 1761 (1)
- AMC225xe (1)
- Adaptive time step (1)
- Asymptotic Expansion (1)
- Audio (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (33)
- Fachbereich Informatik (22)
- Fachbereich Elektrotechnik und Informationstechnik (9)
- Fachbereich Maschinenbau und Verfahrenstechnik (9)
- Fachbereich Chemie (4)
- Fraunhofer (ITWM) (3)
- Fachbereich Sozialwissenschaften (2)
- Fachbereich Physik (1)
- Fachbereich Raum- und Umweltplanung (1)

The element nitrogen is abundant in nature. Found in its simplest form as a diatomic gas in the air, as well as in elaborate molecules such as the DNA double helix, this element is indisputably essential for life. Indeed, nitrogen is omnipresent in all metabolic pathways.
With the advent of green chemistry, researchers attempt to functionalize arenes without pre-functionalization of the latter for C-C bond formation. Why not C-N bond formation?
We investigated new oxidative amination reactions via cross-dehydrogenative coupling. Guided by atom economy and green processes, our objectives were: 1) to obviate pre-activation or pre-oxidation of both the C-H coupling partner and the N-aminating agent; 2) to avoid the use of chelating directing groups.
We achieved C-N bond formation for several classes of amines. We first describe the reactivity of a cyclic secondary amine, carbazole, in the presence of catalytic amounts of ruthenium(II) and copper(II) to build the challenging C-N bond between two carbazoles. The initial mechanistic experiments are presented and discussed.
We then describe the more challenging hetero-coupling between diarylamines and carbazoles. The new ruthenium(II)/copper(II) catalytic system enables the ortho-N-carbazolation of diarylamines. This reaction, performed under mild conditions (O2 as terminal oxidant), displays an unusual intramolecular N-H••N interaction in this novel class of compounds.
Finally, we present a surprising metal-free C-N bond formation between the ubiquitous phenols and the phenothiazines. Initially conducted in the presence of transition metals (Ru(II)/Cu(II)), this reaction proved to be efficient with only cumene and O2. These components suggest a mechanism initiated by a Hock process. An initial infrared analysis points to a strong intramolecular O-H••N interaction in the resulting products.
These first elements of reactivity, developed within the laboratory for "modern dehydrogenative amination reactions", are presented and discussed.

Optimal Multilevel Monte Carlo Algorithms for Parametric Integration and Initial Value Problems
(2015)

We intend to find optimal deterministic and randomized algorithms for three related problems: multivariate integration, parametric multivariate integration, and parametric initial value problems. The main interest is concentrated on the question to what extent randomization affects the precision of an approximation. We want to understand when and to what extent randomized algorithms are superior to deterministic ones.
All problems are studied for Banach space valued input functions. The analysis of Banach space valued problems is motivated by the investigation of scalar parametric problems; these can be understood as particular cases of Banach space valued problems. The gain achieved by randomization depends on the underlying Banach space.
For each problem, we introduce deterministic and randomized algorithms and provide the corresponding convergence analysis.
Moreover, we also provide lower bounds for the general Banach space valued settings, and thus determine the complexity of the problems. It turns out that the obtained algorithms are order optimal in the deterministic setting. In the randomized setting, they are order optimal for certain classes of Banach spaces, which include the L_p spaces and all finite-dimensional Banach spaces. For general Banach spaces, they are optimal up to an arbitrarily small gap in the order of convergence.
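The multilevel idea behind such algorithms can be sketched on a scalar parametric initial value problem (an illustrative, generic MLMC estimator with assumed parameters, not the Banach-space-valued algorithms of the thesis): for y' = λy, y(0) = 1 with λ ~ U(0,1), the expectation E[y(1)] = e − 1 is estimated by a telescoping sum over Euler discretizations of increasing resolution, with most samples drawn on the cheap coarse levels.

```python
import numpy as np

def euler(lam, n):
    """Explicit Euler for y' = lam * y, y(0) = 1, with n steps on [0, 1]."""
    y, h = 1.0, 1.0 / n
    for _ in range(n):
        y += h * lam * y
    return y

def mlmc(rng, L=8, N0=20000):
    """Multilevel Monte Carlo estimate of E[y(1)] for lam ~ U(0, 1).

    Telescoping sum E[Y_L] = E[Y_0] + sum_l E[Y_l - Y_{l-1}],
    with sample sizes halved on each finer level.
    """
    est = 0.0
    for level in range(L + 1):
        n_samples = max(N0 // 2**level, 100)
        lams = rng.uniform(0.0, 1.0, n_samples)
        fine = np.array([euler(lam, 2**level) for lam in lams])
        if level == 0:
            diff = fine
        else:
            # Same random parameter on the fine and the coarse grid,
            # so the level differences have small variance.
            coarse = np.array([euler(lam, 2**(level - 1)) for lam in lams])
            diff = fine - coarse
        est += diff.mean()
    return est

result = mlmc(np.random.default_rng(0))
print(result)   # ≈ 1.718 (= e - 1, up to discretization bias and sampling error)
```

The coupling of fine and coarse solves through the same random parameter is what makes the correction terms cheap to estimate; this is the mechanism that randomized multilevel algorithms exploit.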

In this thesis we extend the worst-case modeling approach as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time) in various directions.
In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst-possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario.
In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes and show that the value function is the unique viscosity solution of a dynamic programming equation (DPE) and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as the unique viscosity solution of this DPE.
In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as states in which a financial bubble has formed which may lead to a crash. In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies.

In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. This thesis arose in the framework of the project "Robust Risk Estimation". Starting in July 2011, this project received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in financial mathematics (operational and liquidity risk), medicine (length of stay and cost), and hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics.
Within the project, issues arise in each of these applications that can be dealt with by means of extreme value theory once extra information is added in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. To this end, this thesis makes a contribution.
This work consists of three main parts. The first part focuses on the basic notions and gives an overview of existing results in robust statistics and extreme value theory. We also provide some diagnostics, an important achievement of our project work. The second part of the thesis presents a deeper analysis of the basic models and tools used to achieve the main results of the research.
The second part is the most important part of the thesis and contains our personal contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The mentioned applications use time structure (e.g. hydrology), so we equip the extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies. In the first, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models.
More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers and illustrate the performance of the new procedures in a GPS application and a stylized outlier situation.
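The idea behind such a robustification can be sketched on a scalar state-space model (a minimal illustration of clipping the filter correction, with hypothetical noise levels and clipping threshold; the procedures developed in the thesis are more refined):

```python
import numpy as np

def kalman_1d(y, q=0.01, r=0.1, clip=None):
    """Scalar random-walk Kalman filter: x_t = x_{t-1} + w_t, y_t = x_t + v_t.

    If `clip` is set, the correction K * innovation is truncated at +-clip,
    which bounds the influence of a single outlying observation.
    """
    x, p = 0.0, 1.0
    out = []
    for obs in y:
        p += q                       # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain
        corr = k * (obs - x)         # correction = gain * innovation
        if clip is not None:
            corr = np.clip(corr, -clip, clip)
        x += corr
        p *= (1.0 - k)               # update: posterior variance
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
y = rng.normal(0.0, np.sqrt(0.1), 50)   # true state is identically zero
y[25] = 50.0                             # gross observation outlier
classical = kalman_1d(y)
robust = kalman_1d(y, clip=1.0)
print(abs(classical[25]), abs(robust[25]))  # robust estimate stays near 0
```

The classical filter follows the outlier almost proportionally to its gain, while the clipped filter's error at the outlier is bounded by the threshold.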
To apply the approach of shrinking neighborhoods we need some smoothness, so for the second strategy we derive smoothness of the generalized linear model in terms of L2 differentiability and establish sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence in these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher-dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions.
Finally, we create an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and retain the points of particular importance. In the third part of the thesis we discuss three applications, operational risk, hospitalization times and hydrological river discharge data, apply our code to a real data set from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions.

This diploma thesis sets out to analyse the applicability of the instrument ’European Grouping of Territorial Cooperation (EGTC)’ in transnational and interregional non-contiguous cooperation. EGTCs that are applied in spatially non-contiguous cooperations are called ’network-EGTCs’. As no scientific research on network-EGTCs has been conducted so far, this diploma thesis fills this research gap.
As a basis for the analysis, a literature review on the EGTC instrument in general and its historic background was conducted. In addition, the scientific literature was searched for characteristics of non-contiguous cooperations, and different stakeholders were interviewed for their assessments of network-EGTCs. The existing and planned network-EGTCs were surveyed. Of these network-EGTCs, two case studies, the E.G.T.C. Amphictyony and the planned CETC-EGTC, were examined in depth. Their characteristics were then compared with the information about EGTCs and non-contiguous cooperations in general.
It was found that network-EGTCs show advantages over ordinary non-contiguous cooperations. Additionally, it was discovered that network-EGTCs do not differ in character from EGTCs established in cross-border cooperation as much as had been expected. This applies to the establishment process as well as to the fulfilment of the instrument’s potentials. In general, all EGTCs show discrepancies between planning and practice. Only a few differences were discovered. Contrary to expectation, network-EGTCs show not only certain disadvantages but also advantages compared to EGTCs in cross-border cooperation.
This thesis delivers evidence that EGTCs are applicable in transnational and interregional cooperation when certain preconditions are fulfilled. They can then contribute to successful transnational and interregional cooperation.
Recommendations are given to territorial non-contiguous cooperations that are considering establishing an EGTC.
It is expected that more network-EGTCs will be established in the future due to the growing experience of and knowledge about network-EGTCs.

We consider the multiscale model for glioma growth introduced in a previous work and extend it to account for therapy effects. Three treatment strategies, involving surgical resection, radio-, and chemotherapy, are compared for their efficiency. The chemotherapy relies on inhibiting the binding of cell surface receptors to the surrounding tissue, which impairs both migration and proliferation.

Component fault trees (CFTs) that contain safety basic events as well as security basic events cannot be analyzed like normal CFTs. Safety basic events are rated with probabilities in the interval [0,1]; for security basic events, simpler scales such as {low, medium, high} make more sense. In this paper, an approach is described for handling a quantitative safety analysis with different rating schemes for safety and security basic events. In this way, it is possible to take security causes of safety failures into account and to rate their effect on system safety.

In this contribution a mortar-type method for the coupling of non-conforming NURBS surface patches is proposed. The connection of non-conforming patches with shared degrees of freedom requires mutual refinement, which propagates throughout the whole patch due to the tensor-product structure of NURBS surfaces. Thus, methods to handle non-conforming meshes are essential in NURBS-based isogeometric analysis. The main objective of this work is to provide a simple and efficient way to couple the individual patches of complex geometrical models without altering the variational formulation. The deformations of the interface control points of adjacent patches are interrelated with a master-slave relation. This relation is established numerically using the weak form of the equality of mutual deformations along the interface. With the help of this relation the interface degrees of freedom of the slave patch can be condensed out of the system. A natural connection of the patches is attained without additional terms in the weak form. The proposed method is also applicable to nonlinear computations without further measures. Linear and geometrically nonlinear examples show the high accuracy and robustness of the new method. A comparison to reference results and to computations with the Lagrange multiplier method is given.
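The master-slave condensation can be sketched on a small linear system (a generic numpy illustration of the elimination step, not the paper's NURBS discretization): with the constraint written as u = A u_red, the condensed system Aᵀ K A u_red = Aᵀ f is solved, and the result coincides with a Lagrange-multiplier treatment of the same constraint.

```python
import numpy as np

# Toy stiffness matrix and load vector for four degrees of freedom.
K = np.array([[4., -1., 0., 0.],
              [-1., 4., -1., 0.],
              [0., -1., 4., -1.],
              [0., 0., -1., 4.]])
f = np.array([1., 2., 3., 4.])

# Master-slave relation: slave DOF u3 equals master DOF u2, i.e. u = A @ u_red.
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])

u_red = np.linalg.solve(A.T @ K @ A, A.T @ f)   # condensed system
u_ms = A @ u_red                                 # recover full vector

# Same constraint enforced with a Lagrange multiplier: C u = 0, C = [0 0 -1 1].
C = np.array([[0., 0., -1., 1.]])
kkt = np.block([[K, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([f, [0.]])
u_lm = np.linalg.solve(kkt, rhs)[:4]

print(np.allclose(u_ms, u_lm))   # True: both enforce u2 == u3
```

The condensed system stays symmetric positive definite and smaller, which is the practical advantage over the saddle-point system of the Lagrange multiplier method.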

This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems, and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska transaction costs.
For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula instead of the more restrictive, traditional approach via quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively. In addition to the analytical results, we study a novel application of the menu cost problem in management science: we consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much to invest into marketing.
For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach, leading-order optimal strategies are constructed and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established.

Sequential Consistency (SC) is the memory model traditionally applied by programmers and verification tools for the analysis of multithreaded programs.
SC guarantees that instructions of each thread are executed atomically and in program order.
Modern CPUs implement memory models that relax the SC guarantees: threads can execute instructions out of order, and stores to memory can be observed by different threads in different orders.
As a result of these relaxations, multithreaded programs can show unexpected, potentially undesired behaviors, when run on real hardware.
The robustness problem asks if a program has the same behaviors under SC and under a relaxed memory model.
Behaviors are formalized in terms of happens-before relations — dataflow and control-flow relations between executed instructions.
Programs that are robust against a memory model produce the same results under this memory model and under SC.
This means, they only need to be verified under SC, and the verification results will carry over to the relaxed setting.
Interestingly, robustness is a suitable correctness criterion not only for multithreaded programs, but also for parallel programs running on computer clusters.
Parallel programs written in the Partitioned Global Address Space (PGAS) programming model, when executed on a cluster, consist of multiple processes, each running on its own cluster node.
These processes can directly access each other's memories over the network, without the need for explicit synchronization.
Reorderings and delays introduced at the network level, just like the reorderings done by the CPUs, may result in unexpected behaviors that are hard to reproduce and fix.
Our first contribution is a generic approach for solving robustness against relaxed memory models.
The approach involves two steps: combinatorial analysis, followed by an algorithmic development.
The aim of combinatorial analysis is to show that among program computations violating robustness there is always a computation in a certain normal form, where reorderings are applied in a restricted way.
In the algorithmic development we work out a decision procedure for checking whether a program has violating normal-form computations.
Our second contribution is an application of the generic approach to widely implemented memory models, including Total Store Order used in Intel x86 and Sun SPARC architectures, the memory model of Power architecture, and the PGAS memory model.
We reduce robustness against TSO to SC state reachability for a modified input program.
Robustness against Power and PGAS is reduced to language emptiness for a novel class of automata — multiheaded automata.
The reductions lead to new decidability results.
In particular, robustness is PSPACE-complete for all the considered memory models.
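The classic store-buffering litmus test illustrates the robustness question: under SC the outcome r1 = r2 = 0 is unreachable, while TSO's store buffers permit it. Enumerating all SC interleavings makes this concrete (a self-contained illustration, not the decision procedure developed in the thesis):

```python
from itertools import combinations

# Store-buffering litmus test: initially x = y = 0.
# Thread A: x := 1; r1 := y      Thread B: y := 1; r2 := x
THREAD_A = [("store", "x", 1), ("load", "y", "r1")]
THREAD_B = [("store", "y", 1), ("load", "x", "r2")]

def run(schedule):
    """Execute one interleaving with a single shared memory (SC semantics)."""
    mem, regs = {"x": 0, "y": 0}, {}
    for op in schedule:
        if op[0] == "store":
            mem[op[1]] = op[2]
        else:
            regs[op[2]] = mem[op[1]]
    return (regs["r1"], regs["r2"])

def sc_outcomes(a, b):
    """All results reachable under SC: interleavings preserving program order."""
    n = len(a) + len(b)
    outcomes = set()
    for slots in combinations(range(n), len(a)):
        schedule, ai, bi = [], iter(a), iter(b)
        for i in range(n):
            schedule.append(next(ai) if i in slots else next(bi))
        outcomes.add(run(schedule))
    return outcomes

outs = sc_outcomes(THREAD_A, THREAD_B)
print(outs)            # {(0, 1), (1, 0), (1, 1)}
print((0, 0) in outs)  # False under SC; TSO's store buffers allow (0, 0)
```

A program exhibiting the additional outcome (0, 0) on TSO hardware is exactly a robustness violation: its happens-before behavior differs from every SC execution.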

This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention on Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-Splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretization. Using NURBS as basis functions, high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step; mesh refinement is easily performed and eliminates the need for communication with the geometry representation in a Computer-Aided Design (CAD) tool.
Quite often the domain consists of several patches, each parametrized by means of NURBS, and these patches are then glued together by means of continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved, as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. The starting point of our work is the bending strip method, a penalty approach that adds extra stiffness to the interface between adjacent patches and is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the dependence on the penalty parameter. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling some regularity requirements.
The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction.

In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.
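The core of the harmonic balance method can be illustrated on the Duffing oscillator x'' + x + εx³ = F cos(ωt) (a textbook single-harmonic sketch, far from the large-scale isogeometric setting of the thesis): inserting the ansatz x(t) = a cos(ωt) and balancing the cos(ωt) terms yields the amplitude equation (1 − ω²)a + (3ε/4)a³ = F.

```python
import numpy as np

def hb_amplitude(omega, eps, F):
    """Single-harmonic balance for the Duffing oscillator.

    Solves (3*eps/4) a^3 + (1 - omega^2) a - F = 0 for the response
    amplitude a of the ansatz x(t) = a*cos(omega*t).
    """
    if eps == 0.0:
        return F / (1.0 - omega**2)          # linear forced response
    roots = np.roots([0.75 * eps, 0.0, 1.0 - omega**2, -F])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[np.argmin(np.abs(real))]     # pick one branch (smallest |a|)

# Linear limit (eps = 0) recovers the exact forced-response amplitude.
print(hb_amplitude(0.5, 0.0, 1.0))   # 4/3

# Hardening spring: the cubic term reduces the amplitude below the linear one,
# and the returned amplitude satisfies the balance equation up to roundoff.
a = hb_amplitude(0.5, 0.5, 1.0)
print(a, 0.75 * 0.5 * a**3 + (1 - 0.25) * a - 1.0)
```

Truncating the Fourier series after more harmonics replaces this scalar equation by a nonlinear algebraic system in the harmonic coefficients, which is the system the method solves in the frequency domain.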

Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life - they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects - heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as to observe the individual heat sources and assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches - small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material dependent on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress from known physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities where we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method. The nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
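For a scalar model problem, the described time discretization and linearization reduce to the following pattern (a minimal Python sketch of implicit Backward Euler with an inner Newton loop; the thesis's implementation is a C++ framework for PDE systems): each step of y' = −y³ solves g(y) = y + h y³ − y_n = 0.

```python
def backward_euler(y0, h, steps):
    """Implicit (Backward) Euler for y' = -y**3 with a Newton solve per step."""
    y = y0
    for _ in range(steps):
        y_old, z = y, y                  # Newton iterate z for the new value
        for _ in range(50):
            g = z + h * z**3 - y_old     # residual of the implicit equation
            dg = 1.0 + 3.0 * h * z**2    # Jacobian (here a scalar derivative)
            step = g / dg
            z -= step
            if abs(step) < 1e-14:        # Newton has converged
                break
        y = z
    return y

# Exact solution of y' = -y**3, y(0) = 1 is y(t) = 1/sqrt(1 + 2t).
y_num = backward_euler(1.0, 1e-3, 1000)  # integrate to t = 1
print(y_num)                             # ≈ 1/sqrt(3) ≈ 0.5774
```

For PDE systems the scalar residual and derivative become the discrete residual vector and its Jacobian matrix, but the predict-solve-update structure per time step is the same.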

Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, digital cameras, etc. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight, high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and cheaper. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode - as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases - solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which we can distinguish the complex porous structure of the electrodes the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second method can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions.
We rigorously determine the asymptotic order of the interface exchange current densities and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of random electrode microstructure we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and we show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and we show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
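The effect of such an upscaling can already be seen in one space dimension, where the homogenized diffusivity of a periodically layered medium is the harmonic, not the arithmetic, mean of the layer values (an elementary analogue with assumed layer values, not the battery model itself):

```python
import numpy as np

# Steady diffusion -(D(x) u')' = 0 on [0, 1], u(0) = 0, u(1) = 1,
# with D alternating between two layer values cell by cell.
n = 100
h = 1.0 / n
D = np.where(np.arange(n) % 2 == 0, 1.0, 0.1)

# Finite-volume conductances with harmonic-mean interface diffusivities.
g_face = 2.0 * D[:-1] * D[1:] / (D[:-1] + D[1:]) / h   # interior faces
g_left = 2.0 * D[0] / h                                 # Dirichlet faces
g_right = 2.0 * D[-1] / h

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n - 1):                   # assemble interior face couplings
    A[i, i] += g_face[i]; A[i + 1, i + 1] += g_face[i]
    A[i, i + 1] -= g_face[i]; A[i + 1, i] -= g_face[i]
A[0, 0] += g_left                        # u(0) = 0 contributes nothing to b
A[-1, -1] += g_right
b[-1] += g_right * 1.0                   # u(1) = 1
u = np.linalg.solve(A, b)

flux = g_left * (u[0] - 0.0)             # flux through the left boundary
d_eff = 2.0 / (1.0 / 1.0 + 1.0 / 0.1)    # harmonic mean of the two layers
print(flux, d_eff)                       # both ≈ 0.1818
```

Resolving the layered fine scale and solving the single-coefficient homogenized problem give the same macroscopic flux; the homogenization and MsFEM machinery generalizes this to the coupled, nonlinear battery equations.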

The advances in sensor technology have introduced smart electronic products with high integration of multi-sensor elements, sensor electronics and sophisticated signal processing algorithms, resulting in intelligent sensor systems with a significant level of complexity. This complexity leads to higher vulnerability in performing their respective functions in a dynamic environment. The system dependability can be improved via the implementation of self-x features in reconfigurable systems. The reconfiguration capability requires capable switching elements, typically in the form of a CMOS switch or a miniaturized electromagnetic relay. The emerging DC-MEMS switch has the potential to complement the CMOS switch in System-in-Package as well as integrated circuit solutions. The aim of this thesis is to study the feasibility of using DC-MEMS switches to enable self-x functionality at the system level.
The self-x implementation is also extended to the component level, in which the ISE-DC-MEMS switch is equipped with self-monitoring and self-repairing features. The MEMS electrical behavioural model generated by the design tool is inadequate, so additional electrical models have been proposed, simulated and validated. The simplification of the mechanical MEMS model has produced inaccurate simulation results that led to the occurrence of stiction in the actual device. A stiction conformity test has been proposed, implemented, and successfully validated to compensate for the inaccurate mechanical model. Four different system simulations of representative applications were carried out using the improved behavioural MEMS model to show the aptness and the performance of the ISE-DC-MEMS switch in sensitive reconfiguration tasks and to compare it with transmission gates. The current design of the ISE-DC-MEMS switch needs further optimization in terms of size, driving voltage, and robustness to guarantee high output yield in order to match the performance of commercial DC-MEMS switches.

Industrial design has a long history. With the introduction of Computer-Aided Engineering, industrial design was revolutionised. With this new support, the design workflow changed, and the introduction of virtual prototyping brought new challenges. These new engineering problems have triggered
new basic research questions in computer science.
In this dissertation, I present a range of methods which support different components of the virtual design cycle, from modifications of a virtual prototype and optimisation of said prototype, to analysis of simulation results.
Starting with a virtual prototype, I support engineers by supplying intuitive discrete normal vectors which can be used to interactively deform the control mesh of a surface. I provide and compare a variety of different normal definitions which have different strengths and weaknesses. The best choice depends on
the specific model and on an engineer’s priorities. Some methods have higher accuracy, whereas other methods are faster.
I further provide an automatic means of surface optimisation in the form of minimising total curvature. This minimisation reduces surface bending and therefore material expenses. The best results are obtained for analytic surfaces; however, the technique can also be applied to real-world examples.
Moreover, I provide engineers with a curvature-aware technique to optimise mesh quality. This helps to avoid degenerated triangles which can cause numerical issues. It can be applied to any component of the virtual design cycle: as a direct modification of the virtual prototype (depending on the surface defini-
tion), during optimisation, or dynamically during simulation.
Finally, I have developed two different particle relaxation techniques that both support two components of the virtual design cycle. The first component for which they can be used is discretisation. To run computer simulations on a model, it has to be discretised. Particle relaxation starts from an initial sampling and improves it with the goal of uniform distances or curvature awareness. The second component for which they can be used is the analysis of simulation results. Flow visualisation is a powerful tool in supporting the analysis of flow fields through the insertion of particles into the flow, and through tracing their movements. The particle seeding is usually uniform, e.g. for an integral surface, one could seed on a square. Integral surfaces undergo strong deformations, and they can have highly varying curvature. Particle relaxation redistributes the seeds on the surface depending on surface properties like local deformation or curvature.
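The uniform-spacing goal can be pictured with a minimal one-dimensional sketch (my own illustration, not the thesis's surface algorithm): each interior particle repeatedly moves toward the midpoint of its neighbours, which converges to equidistant samples.

```python
def relax_1d(points, iters=200):
    """Jacobi-style relaxation: every interior particle moves to the
    midpoint of its two neighbours; the endpoints stay fixed.
    The fixed point of this iteration is a perfectly uniform spacing."""
    pts = sorted(points)
    for _ in range(iters):
        pts = ([pts[0]]
               + [0.5 * (pts[i - 1] + pts[i + 1]) for i in range(1, len(pts) - 1)]
               + [pts[-1]])
    return pts
```

A curvature-aware variant would replace the plain midpoint by a weighted average reflecting a local target density.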

In this paper we consider the problem of decomposing a given integer matrix A into
a positive integer linear combination of consecutive-ones matrices with a bound on the
number of columns per matrix. This problem is of relevance in the realization stage
of intensity modulated radiation therapy (IMRT) using linear accelerators and multileaf
collimators with limited width. Constrained and unconstrained versions of the problem
with the objectives of minimizing beam-on time and decomposition cardinality are considered.
We introduce a new approach which can be used to find the minimum beam-on
time for both constrained and unconstrained versions of the problem. The decomposition
cardinality problem is shown to be NP-hard and an approach is proposed to solve the
lexicographic decomposition problem of minimizing the decomposition cardinality subject
to optimal beam-on time.
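For the unconstrained beam-on time objective, a classical closed-form bound sums the positive increments of each row and takes the row maximum; the constrained, limited-width variant treated in this paper is harder. A sketch, assuming nonnegative integer intensity rows:

```python
def row_beam_on_time(row):
    """Minimum beam-on time contributed by one row: the sum of positive
    jumps when scanning the row left to right (leading zero implied)."""
    total, prev = 0, 0
    for value in row:
        total += max(0, value - prev)
        prev = value
    return total

def min_beam_on_time(matrix):
    """Unconstrained minimum beam-on time of an intensity matrix:
    the largest row-wise bound."""
    return max(row_beam_on_time(row) for row in matrix)
```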

The Event Segmentation Theory (Kurby & Zacks, 2008; Zacks, Speer, Swallow, Braver, & Reynolds, 2007) explains the perceptual organization of an ongoing activity into meaningful events. The classical event segmentation task (Newtson, 1973) involves watching an online video and indicating with key presses the event boundaries, i.e., when one event ends and the next one begins. The resulting hierarchical organization of object-based coarse events and action-based fine events gives insight into various cognitive processes. I used the Event Segmentation Theory to develop assistance and training systems for assembly workers in industrial settings at various levels - experts, new hires, and intellectually disabled people. Therefore, the first scientific question I asked was whether online and offline event segmentation result in the same event boundaries. This is important because assembly work requires not only watching activities online but processing the information offline, e.g., while performing the assembly task. By developing a special software tool that enables assessment of offline event boundaries, I established that online perception and offline elaboration lead to similar event boundaries. This study supports prior work suggesting that instructions should be structured around event boundaries.
Secondly, I investigated the importance of fine versus coarse event boundaries when learning the sequence of steps in virtual training, both for novices and experts in car door assembly. I found memory, tested by ability to predict the next frame, to be enhanced for object-based coarse events from the nearest fine event boundary. However, virtual training did not improve memory for action-based fine events from the nearest coarse event boundary. I conjecture that trainees primarily acquire the sequence of object-based coarse events in an initial training. Based on differences found in memory performance between experts and novices, I conclude that memory for action-based fine events is dependent on expertise.
Thirdly, I used the Event Segmentation Theory to investigate whether the simple and repetitive assembly tasks offered at workshops for intellectually disabled persons utilize their full cognitive potential. I analyzed event segmentation performance of 32 intellectually disabled persons compared to 30 controls using a variety of event segmentation measures. I found specific deficits in event boundary detection and hierarchical organization of events for the intellectually disabled group. However, results suggest that hierarchical organization is task-dependent. Because the event segmentation task accounted for differences in general cognitive ability, I propose the event segmentation task as a diagnostic method for the need for support in executing assembly tasks.
Based on these three studies, I argue that the Event Segmentation Theory offers a framework for assessment and assistance of important attentional, perceptual, and memory processes related to assembly tasks. I demonstrate how practical applications can make use of this framework for the development of new computer-based assistance and training systems that are tailored to the users’ need for support and improve their quality of life.

In this thesis, we investigate several upcoming issues occurring in the context of conceiving and building a decision support system. We elaborate new algorithms for computing representative systems with special quality guarantees, provide concepts for supporting the decision makers after a representative system was computed, and consider a methodology of combining two optimization problems.
We review the original Box-Algorithm for two objectives by Hamacher et al. (2007) and discuss several extensions regarding coverage, uniformity, the enumeration of the whole nondominated set, and necessary modifications if the underlying scalarization problem cannot be solved to optimality. In a next step, the original Box-Algorithm is extended to the case of three objective functions to compute a representative system with desired coverage error. Besides the investigation of several theoretical properties, we prove the correctness of the algorithm, derive a bound on the number of iterations needed by the algorithm to meet the desired coverage error, and propose some ideas for possible extensions.
Furthermore, we investigate the problem of selecting a subset with desired cardinality from the computed representative system, the Hypervolume Subset Selection Problem (HSSP). We provide two new formulations for the bicriteria HSSP, a linear programming formulation and a \(k\)-link shortest path formulation. For the latter formulation, we propose an algorithm for which we obtain the currently best known complexity bound for solving the bicriteria HSSP. For the tricriteria HSSP, we propose an integer programming formulation with a corresponding branch-and-bound scheme.
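For intuition, the bicriteria hypervolume (for minimization, relative to a reference point) reduces to a sum of staircase rectangles, and the HSSP can be solved by exponential subset enumeration; the LP and \(k\)-link shortest path formulations in the thesis replace exactly this brute force. A sketch under those assumptions:

```python
from itertools import combinations

def hypervolume_2d(points, ref):
    """Dominated area of a nondominated point set (minimization)
    with respect to the reference point ref."""
    pts = sorted(points)  # ascending f1 implies descending f2 here
    next_f1 = [p[0] for p in pts[1:]] + [ref[0]]
    hv = 0.0
    for (f1, f2), nf1 in zip(pts, next_f1):
        hv += (nf1 - f1) * (ref[1] - f2)  # one staircase rectangle
    return hv

def hssp_brute_force(points, k, ref):
    """Cardinality-k subset maximizing hypervolume (exponential time)."""
    return max(combinations(points, k), key=lambda s: hypervolume_2d(s, ref))
```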
Moreover, we address the issue of how to present the whole set of computed representative points to the decision makers. Based on common illustration methods, we elaborate an algorithm guiding the decision makers in choosing their preferred solution.
Finally, we step back and look from a meta-level on the issue of how to combine two given optimization problems and how the resulting combinations can be related to each other. We come up with several different combined formulations and give some ideas for the practical approach.

The central topic of this thesis is Alperin's weight conjecture, a problem concerning the representation theory of finite groups.
This conjecture, which was first proposed by J. L. Alperin in 1986, asserts that for any finite group the number of its irreducible Brauer characters coincides with the number of conjugacy classes of its weights. The blockwise version of Alperin's conjecture partitions this problem into a question concerning the number of irreducible Brauer characters and weights belonging to the blocks of finite groups.
A proof for this conjecture has not (yet) been found. However, the problem has been reduced to a question on non-abelian finite (quasi-) simple groups in the sense that there is a set of conditions, the so-called inductive blockwise Alperin weight condition, whose verification for all non-abelian finite simple groups implies the blockwise Alperin weight conjecture. Now the objective is to prove this condition for all non-abelian finite simple groups, all of which are known via the classification of finite simple groups.
In this thesis we establish the inductive blockwise Alperin weight condition for three infinite series of finite groups of Lie type: the special linear groups \(SL_3(q)\) in the case \(q>2\) and \(q \not\equiv 1 \bmod 3\), the Chevalley groups \(G_2(q)\) for \(q \geqslant 5\), and Steinberg's triality groups \(^3D_4(q)\).

Attention-awareness is a key topic for the upcoming generation of computer-human interaction. A human moves his or her eyes to visually attend to a particular region in a scene. Consequently, he or she can process visual information rapidly and efficiently without being overwhelmed by vast amounts of information from the environment. This physiological function, called visual attention, provides a computer system with valuable information about the user to infer his or her activity and the surrounding environment. For example, a computer can infer whether the user is reading text or not by analyzing his or her eye movements. Furthermore, it can infer with which object he or she is interacting by recognizing the object the user is looking at. Recent developments of mobile eye tracking technologies enable us
to capture human visual attention in ubiquitous everyday environments. There are various types of applications where attention-aware systems may be effectively incorporated. Typical examples are augmented reality (AR) applications such as Wikitude which overlay virtual information onto physical objects. This type of AR application presents augmentative information of recognized objects to the user. However, if it presents information of all recognized objects at once, the overflow of information could be obtrusive to the user. As a solution to such a problem, attention-awareness can be integrated into a system. If a
system knows to which object the user is attending, it can present only the information of
relevant objects to the user.
Towards attention-aware systems in everyday environments, this thesis presents approaches
for analysis of user attention to visual content. Using a state-of-the-art wearable eye tracking device, one can measure the user's eye movements in a mobile scenario. By capturing the user's eye gaze position in a scene and analyzing the image where the eyes focus, a computer can recognize the visual content the user is currently attending to. I propose several image analysis methods to recognize the user-attended visual content in a scene image. For example, I present an application called Museum Guide 2.0. In Museum Guide 2.0, image-based object recognition and eye gaze analysis are combined together to recognize user-attended objects in a museum scenario. Similarly, optical character recognition
(OCR), face recognition, and document image retrieval are also combined with eye gaze analysis to identify the user-attended visual content in respective scenarios. In addition to Museum Guide 2.0, I present other applications in which these combined frameworks are effectively used. The proposed applications show that the user can benefit from active information presentation which augments the attended content in a virtual environment with
a see-through head-mounted display (HMD).
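The coupling of gaze with object recognition can be pictured with a toy routine (hypothetical names and interface; the actual Museum Guide 2.0 pipeline additionally uses image-based object recognition and temporal gaze analysis): given the gaze position in the scene image and the bounding boxes of recognized objects, report the object being looked at.

```python
def attended_object(gaze, boxes):
    """gaze: (x, y) in scene-image coordinates.
    boxes: {name: (x0, y0, x1, y1)}. Returns the name of the smallest
    box containing the gaze point, or None if the gaze hits no object."""
    gx, gy = gaze
    hits = [((x1 - x0) * (y1 - y0), name)
            for name, (x0, y0, x1, y1) in boxes.items()
            if x0 <= gx <= x1 and y0 <= gy <= y1]
    return min(hits)[1] if hits else None  # smallest area wins
```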
In addition to the individual attention-aware applications mentioned above, this thesis
presents a comprehensive framework that combines all recognition modules to recognize the user-attended visual content when various types of visual information resources such as text, objects, and human faces are present in one scene. In particular, two processing strategies are proposed. The first one selects an appropriate image analysis module according to the user's current cognitive state. The second one runs all image analysis modules simultaneously and merges the analytic results later. I compare these two processing strategies in terms of user-attended visual content recognition when multiple visual information resources are present in the same scene.
Furthermore, I present novel interaction methodologies for a see-through HMD using eye gaze input. A see-through HMD is a suitable device for a wearable attention-aware system for everyday environments because the user can also view his or her physical environment
through the display. I propose methods for the user's attention engagement estimation with the display, eye gaze-driven proactive user assistance functions, and a method for interacting
with a multi-focal see-through display.
Contributions of this thesis include:
• An overview of the state-of-the-art in attention-aware computer-human interaction
and attention-integrated image analysis.
• Methods for the analysis of user-attended visual content in various scenarios.
• Demonstration of the feasibilities and the benefits of the proposed user-attended visual content analysis methods with practical user-supportive applications.
• Methods for interaction with a see-through HMD using eye gaze.
• A comprehensive framework for recognition of user-attended visual content in a complex
scene where multiple visual information resources are present.
This thesis opens a novel field of wearable computer systems where computers can understand the user's attention in everyday environments and provide what the user wants. I will show the potential of such wearable attention-aware systems for everyday
environments for the next generation of pervasive computer-human interaction.

We discuss the problem of evaluating a robust solution.
To this end, we first give a short primer on how to apply robustification approaches to uncertain optimization problems using the assignment problem and the knapsack problem as illustrative examples.
As it is not immediately clear in practice which such robustness approach is suitable for the problem at hand,
we present current approaches for evaluating and comparing robustness from the literature, and introduce the new concept of a scenario curve. Using the methods presented in this paper, the decision maker is given an accessible guide to finding, solving, and comparing robust optimization approaches for his or her purposes.
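A minimal illustration of such a comparison (my own toy setup, not the paper's exact definitions): fix a few feasible solutions of an uncertain knapsack, evaluate each across all profit scenarios, and let a "scenario curve" be the sorted profile of those objective values; the min-max robust choice is then the solution with the best worst case.

```python
def profit(solution, scenario):
    """Total profit of a 0/1 selection under one profit scenario."""
    return sum(p for chosen, p in zip(solution, scenario) if chosen)

def scenario_curve(solution, scenarios):
    """Sorted per-scenario objective values of a fixed solution."""
    return sorted(profit(solution, s) for s in scenarios)

def robust_minmax(solutions, scenarios):
    """For a maximization problem: pick the solution whose worst-case
    (first entry of its scenario curve) is largest."""
    return max(solutions, key=lambda sol: scenario_curve(sol, scenarios)[0])
```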

For the prediction of digging forces from a granular material simulation, the
Nonsmooth Contact Dynamics Method is examined. First, the equations of motion
for nonsmooth mechanical systems are laid out. They form a differential
variational inequality with the same structure as classical differential-algebraic equations. Using a Galerkin projection in time, it becomes possible to derive
nonsmooth versions of the classical SHAKE and RATTLE integrators.
A matrix-free Interior Point Method is used for the complementarity
problems that need to be solved in every time step. It is shown that this method
outperforms the Projected Gauss-Jacobi method by several orders of magnitude
and produces the same digging force result as the Discrete Element Method in comparable computing time.
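The complementarity problems in question can be stated as an LCP: find x ≥ 0 with Ax + b ≥ 0 and x⊤(Ax + b) = 0. A minimal sketch of the Projected Gauss-Jacobi baseline that the interior point method is compared against (the matrix and iteration count here are illustrative, not from the thesis):

```python
import numpy as np

def projected_gauss_jacobi(A, b, iters=500):
    """Solve the LCP(A, b): x >= 0, A x + b >= 0, x . (A x + b) = 0.
    Each sweep is a diagonal-scaled Jacobi step followed by a
    projection onto the nonnegative orthant."""
    x = np.zeros(len(b))
    d = np.diag(A)  # assumes a strictly positive diagonal
    for _ in range(iters):
        x = np.maximum(0.0, x - (A @ x + b) / d)
    return x
```

For large frictional-contact problems this first-order scheme converges slowly, which is the motivation for replacing it with a matrix-free interior point method.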

The main goal of this thesis is twofold. First, the thesis aims at bridging the gap between existing Pattern Recognition (PR) methods of automatic signature verification and the requirements for their application in forensic science. This gap, attributable to various factors ranging from system definition to evaluation, prevents automatic methods from being used by Forensic Handwriting Examiners (FHEs). Second, the thesis presents novel signature verification methods developed particularly considering the implications of forensic casework, and outperforming the state-of-the-art PR methods.
The first goal of the thesis is shaped by four important factors, i.e., data, terminology, output reporting, and how evaluation of automatic systems is carried out today. It is argued that traditionally the signature data used in PR are not close representatives of real-world data (especially that available in forensic cases). The systems trained on such data are, therefore, not suitable for forensic environments. This situation can be tackled by providing more realistic data to PR researchers. To this end, various signature and handwriting datasets are gathered in collaboration with FHEs and are made publicly available through the course of this thesis. Special attention is given to disguised signatures--where authentic authors purposefully make their signatures look like a forgery. This genre was largely neglected in PR research previously.
The terminology used, in the two communities - PR and FHEs, differ greatly. In fact, even in PR, there is no standard terminology and people often differ in the usage of various terms particularly related to various types of forged signatures/handwriting. The thesis presents a new terminology that is equally useful for both forensic scientists and PR researchers. The proposed terminology is hoped to increase the general acceptability of automatic signature analysis systems in forensic science.
The outputs reported by general signature verification systems are not acceptable for FHEs and courts as they are either binary (yes/no) or score (raw evidence) based on similarity/difference. The thesis describes that automatic systems should rather report the probability of observing the evidence (e.g., a certain similarity/difference score) given the signature belongs to the acclaimed identity, and the probability of observing the same evidence given the signature does not belong to the acclaimed identity. This will take automatic systems from hard decisions to soft decisions, thereby enabling them to report likelihood ratios that actually represent the evidential value of the score rather than the raw score (evidence).
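The shift from raw scores to evidential value can be sketched as follows (a toy calibration under a Gaussian score model, which is my simplifying assumption; the thesis itself uses Bayesian-inference-based calibration of continuous scores): fit the score distributions of genuine and non-genuine comparisons, then report their density ratio at the observed score.

```python
import math
from statistics import mean, stdev

def gaussian_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(score, genuine_scores, impostor_scores):
    """LR = p(score | same writer) / p(score | different writer),
    with both conditionals modelled as univariate Gaussians."""
    num = gaussian_pdf(score, mean(genuine_scores), stdev(genuine_scores))
    den = gaussian_pdf(score, mean(impostor_scores), stdev(impostor_scores))
    return num / den
```

An LR above 1 supports the same-writer hypothesis, below 1 the different-writer hypothesis; the court, not the system, combines it with prior odds.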
When automatic systems report soft decisions (as in the form of likelihood ratios), the thesis argues that there must be some methods to evaluate such systems. This thesis presents one such adaptation. The thesis argues that the state-of-the-art evaluation methods, like equal error rate and area under curve, do not address the needs of forensic science. These needs require an assessment of the evidential value of signature verification, rather than a hard/pure classification (accept/reject binary decision). The thesis demonstrates and validates a relatively simple adaptation of the current verification methods based on the Bayesian inference dependent calibration of continuous scores rather than hard classifications (binary and/or score based classification).
The second goal of this thesis is to introduce various local features based techniques which are capable of performing signature verification in forensic cases and reporting results as anticipated by FHEs and courts. This is an important contribution of the thesis for the following two reasons. First, to the best of the author's knowledge, local feature descriptors are for the first time used for the development of signature verification systems for forensic environments (particularly considering disguised signatures). Previously, such methods have been heavily used for recognition tasks, such as character and digit recognition, rather than verification of writing behaviors. Second, the proposed methods not only report the more traditional decisions (like scores, as usually reported in PR) but also the Bayesian inference based likelihood ratios (suitable for courts and forensic cases).
Furthermore, the thesis also provides a detailed man vs. machine comparison for signature verification tasks. The men, in this comparison, are forensic scientists serving as forensic handwriting examiners, with varying numbers of years of experience. The machines are the local features based methods proposed in this thesis, along with various other state-of-the-art signature verification systems. The proposed methods clearly outperform the state-of-the-art systems, and sometimes the human experts.
Finally, the thesis details various tasks that have been performed in areas closely related to signature verification and its application in forensic casework. These include developing novel local feature based methods for extraction of signatures/handwritten text from document images, hyper-spectral image analysis for extraction of signatures from forensic documents, and analysis of on-line signatures acquired through specialized pens equipped with an accelerometer and gyroscope. These tasks are important as they take PR systems one step closer to direct application in forensic cases.

In this thesis, collision-induced dissociation (CID) studies serve to elucidate relative stabilities and to determine bond strengths within a given structure type of transition metal complexes. Infrared multi-photon dissociation (IRMPD) spectroscopy combined with density functional theory (DFT) allows for structural analysis and provides insights into the coordination sphere of transition metal centers. The combination of CID and IRMPD experiments used here is a powerful tool to obtain a detailed and comprehensive characterization and understanding of interactions between transition metals and organic ligands. The range of compounds comprises mono- or oligonuclear transition metal complexes containing iron, palladium, and ruthenium as well as lanthanide-containing single-molecule magnets (SMM). The presented investigations on the different transition metal complexes reveal manifold effects for each species, leading to valuable results. A fundamental understanding of metal-to-ligand interactions is mandatory for the development of new and better organometallic complexes with catalytic, optical or magnetic properties.

In this thesis we develop a shape optimization framework for isogeometric analysis in the optimize first–discretize then setting. For the discretization we use
isogeometric analysis (iga) to solve the state equation, and search optimal designs in a space of admissible b-spline or nurbs combinations. Thus a quite
general class of functions for representing optimal shapes is available. For the
gradient-descent method, the shape derivatives indicate both stopping criteria and search directions and are determined isogeometrically. The numerical treatment requires solvers for partial differential equations and optimization methods, which introduces numerical errors. The tight connection between iga and geometry representation offers new ways of refining the geometry and analysis discretization by the same means. Therefore, our main concern is to develop the optimize first framework for isogeometric shape optimization as ground work for both implementation and an error analysis. Numerical examples show that this ansatz is practical and case studies indicate that it allows local refinement.

The overall goal of the work is to simulate rarefied flows inside geometries with moving boundaries. The behavior of a rarefied flow is characterized through the Knudsen number \(Kn\), which can be very small (\(Kn < 0.01\) continuum flow) or larger (\(Kn > 1\) molecular flow). The transition region (\(0.01 < Kn < 1\)) is referred to as the transition flow regime.
Continuum flows are mainly simulated using commercial CFD methods, which solve the Euler equations. In the case of molecular flows one uses statistical methods, such as the Direct Simulation Monte Carlo (DSMC) method. In the transition region the Euler equations are not adequate to model gas flows. Because of the rapid increase in particle collisions, the DSMC method tends to fail as well.
Therefore, we develop a deterministic method, which is suitable to simulate problems of rarefied gases for any Knudsen number and is appropriate to simulate flows inside geometries with moving boundaries. Thus, the method we use is the Finite Pointset Method (FPM), which is a mesh-free numerical method developed at the ITWM Kaiserslautern and is mainly used to solve fluid dynamical problems.
More precisely, we develop a method in the FPM framework to solve the BGK model equation, which is a simplification of the Boltzmann equation. This equation is mainly used to describe rarefied flows.
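For reference, the BGK model replaces Boltzmann's collision integral by a relaxation toward a local Maxwellian (standard textbook form; the notation below is generic rather than the thesis's own):

```latex
\partial_t f + v \cdot \nabla_x f = \frac{1}{\tau}\,\bigl(f_M[f] - f\bigr),
\qquad
f_M[f](x,v,t) = \frac{\rho}{(2\pi R T)^{d/2}}
\exp\!\Bigl(-\frac{|v-u|^2}{2 R T}\Bigr),
```

where the density \(\rho\), mean velocity \(u\), and temperature \(T\) are computed from the moments of \(f\), and \(\tau\) is the relaxation time.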
The FPM based method is implemented for one- and two-dimensional physical and velocity space and different ranges of the Knudsen number. Numerical examples are shown for problems with moving boundaries. It is seen that our method is superior to regular grid methods with respect to the implementation of boundary conditions. Furthermore, our results are comparable to reference solutions obtained through CFD and DSMC methods, respectively.

Today’s pervasive availability of computing devices enabled with wireless communication and location- or inertial sensing capabilities is unprecedented. The number of smartphones sold worldwide is still growing, and increasing numbers of sensor-enabled accessories are available which a user can wear in the shoe or at the wrist for fitness tracking, or just temporarily put on to measure vital signs. Despite this availability of computing and sensing hardware, the merit of applications seems rather limited regarding the full potential of information inherent to such sensor deployments. Most applications build upon a vertical design which encloses a narrowly defined sensor setup and algorithms specifically tailored to suit the application’s purpose. Successful technologies, however, such as the OSI model, which serves as the base for internet communication, have used a horizontal design that allows high-level communication protocols to be run independently from the actual lower-level protocols and physical medium access. This thesis contributes to a more horizontal design of human activity recognition systems at two stages. First, it introduces an integrated toolchain to facilitate the entire process of building activity recognition systems and to foster sharing and reusing of individual components. At a second stage, a novel method for automatic integration of new sensors to increase a system’s performance is presented and discussed in detail.
The integrated toolchain is built around an efficient toolbox of parametrizable components for interfacing sensor hardware, synchronization and arrangement of data streams, filtering and extraction of features, classification of feature vectors, and interfacing output devices and applications. The toolbox emerged as open-source project through several research projects and is actively used by research groups. Furthermore, the toolchain supports recording, monitoring, annotation, and sharing of large multi-modal data sets for activity recognition through a set of integrated software tools and a web-enabled database.
The method for automatically integrating a new sensor into an existing system is, at its core, a variation of well-established principles of semi-supervised learning: (1) unsupervised clustering to discover structure in data, (2) the assumption that cluster membership is correlated with class membership, and (3) obtaining a small number of labeled data points for each cluster, from which the cluster labels are inferred. In most semi-supervised approaches, however, the labels are the ground truth provided by the user. By contrast, the approach presented in this thesis uses a classifier trained on an N-dimensional feature space (old classifier) to provide labels for a few points in an (N+1)-dimensional feature space which are used to generate a new, (N+1)-dimensional classifier. The different factors that make a distribution difficult to handle are discussed, a detailed description of heuristics designed to mitigate the influences of such factors is provided, and a detailed evaluation on a set of over 3000 sensor combinations from 3 multi-user experiments that have been used by a variety of previous studies of different activity recognition methods is presented.
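The core idea of the N-dimensional "old" classifier pseudo-labelling data so that an (N+1)-dimensional classifier can be trained can be sketched with a simple nearest-centroid classifier (my stand-in; the thesis evaluates the approach with established classifiers plus the heuristics described above):

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid 'classifier': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(cents, X):
    labels = list(cents)
    dists = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in labels])
    return np.array([labels[i] for i in dists.argmin(axis=0)])

def integrate_new_sensor(old_cents, X_plus):
    """X_plus has N+1 columns (old features plus the new sensor's feature).
    The old N-dimensional classifier pseudo-labels the unlabeled batch,
    and those labels train the new (N+1)-dimensional classifier."""
    pseudo_labels = predict(old_cents, X_plus[:, :-1])
    return fit_centroids(X_plus, pseudo_labels)
```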

In DS-CDMA, spreading sequences are allocated to users to separate different links, namely base station to user in the downlink and user to base station in the uplink. These sequences are designed for optimum periodic correlation properties. Sequences with good periodic auto-correlation properties help in frame synchronisation at the receiver, while sequences with good periodic cross-correlation properties reduce cross-talk among users and hence reduce the interference among them. In addition, they are designed to have low implementation complexity so that they are easy to generate. In current systems, spreading sequences are allocated to users irrespective of their channel condition. In this thesis,
the method of allocating spreading sequences based on users’ channel condition
is investigated in order to improve the performance of the downlink. Different
methods of dynamically allocating the sequences are investigated, including optimum allocation through a simulation model, fast sub-optimum allocation through
a mathematical model, and a proof-of-concept model using real-world channel
measurements. Each model is evaluated to validate the improvement in gain achieved per link, the computational complexity of the allocation scheme, and its impact on the capacity of the network.
In cryptography, secret keys are used to ensure confidentiality of communication between the legitimate nodes of a network. In a wireless ad-hoc network, the
broadcast nature of the channel necessitates robust key management systems for
secure functioning of the network. Physical layer security is a novel method of
profitably utilising the random and reciprocal variations of the wireless channel to
extract secret key. By measuring the characteristics of the wireless channel within
its coherence time, reciprocal variations of the channel can be observed between
a pair of nodes. Using these reciprocal characteristics of the channel, a
common shared secret key is extracted between the pair of nodes. The process
of key extraction consists of four steps, namely channel measurement, quantisation, information reconciliation, and privacy amplification. The reciprocal channel
variations are measured and quantised to obtain a preliminary key vector of bits (0, 1). Due to errors in measurement, quantisation, and additive Gaussian noise,
disagreement in the bits of preliminary keys exists. These errors are corrected
by using, error detection and correction methods to obtain a synchronised key at
both the nodes. Further, by the method of secure hashing, the entropy of the key
is enhanced in the privacy amplification stage. The efficiency of the key generation process depends on the method of channel measurement and quantisation.
Instead of quantising the channel measurements directly, the key generation process can be made efficient and fast by first enhancing their reciprocity and then quantising them appropriately. In this thesis, four methods of enhancing reciprocity are presented, namely l1-norm minimisation, hierarchical clustering, Kalman filtering, and polynomial regression. The enhanced measurements are quantised by binary and adaptive quantisation. Then, the entire process of key generation, from measuring the channel profile to obtaining a secure key, is validated by using real-world channel measurements. The methods are evaluated and compared in terms of bit disagreement rate, key generation rate, test of randomness,
robustness test, and eavesdropper test. An architecture, KeyBunch, for effectively
deploying the physical layer security in mobile and vehicular ad-hoc networks is
also proposed. Finally, as a use case, KeyBunch is deployed in a secure vehicular communication architecture to highlight the advantages offered by physical layer security.
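The measurement-and-quantisation steps above can be sketched in a few lines. This is a minimal illustration, not the thesis's exact scheme: two nodes observe the same reciprocal channel profile with independent noise, threshold it at the median to obtain preliminary key bits, and compute the resulting bit disagreement rate that reconciliation must then correct:

```python
import numpy as np

rng = np.random.default_rng(7)

# Reciprocal channel profile seen by both nodes, each with independent
# measurement noise (noise level is an illustrative assumption).
channel = rng.standard_normal(256)
alice = channel + 0.1 * rng.standard_normal(256)
bob = channel + 0.1 * rng.standard_normal(256)

def binary_quantise(x):
    """Median-threshold binary quantisation of a channel profile."""
    return (x > np.median(x)).astype(int)

key_a = binary_quantise(alice)
key_b = binary_quantise(bob)

# Bit disagreement rate: fraction of preliminary key bits that differ.
# Information reconciliation removes these errors before privacy amplification.
bdr = np.mean(key_a != key_b)
print(f"bit disagreement rate: {bdr:.3f}")
```

Disagreements concentrate around measurements close to the threshold, which is why enhancing reciprocity before quantisation (as the thesis proposes) lowers the bit disagreement rate and speeds up key generation.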

We propose and study a strongly coupled PDE-ODE system with tissue-dependent degenerate diffusion and haptotaxis that can serve as a model prototype for cancer cell invasion through the
extracellular matrix. We prove the global existence of weak solutions and illustrate the model behaviour by numerical simulations for a two-dimensional setting.

This thesis treats the application of configurational forces for the evaluation of fracture processes in Antarctic ice shelves. FE simulations are used to analyze the influence of geometric scales, material parameters and boundary conditions on single surface cracks. A break-up event at the Wilkins Ice Shelf that coincided with a major temperature drop motivates the consideration of frost wedging as a mechanism for ice shelf disintegration. An algorithm for the evaluation of the crack propagation direction is used to analyze the horizontal growth of rifts. Using equilibrium considerations for a viscoelastic fluid, a method is introduced to compute viscous volume forces from measured velocity fields as loads for a linear elastic fracture mechanical analysis.

This thesis deals with the development of a tractor front loader scale which measures payload continuously, independently of the center of gravity of the payload, and unaffected by the position and movements of the loader. To achieve this, a mathematical model of a common front loader is simplified, which makes it possible to identify its parameters by a repeatable and automatic procedure. By measuring accelerations as well as cylinder forces, the payload is determined continuously during the working process. Finally, a prototype was built and the scale was tested on a tractor.

On the Extended Finite Element Method for the Elasto-Plastic Deformation of Heterogeneous Materials
(2015)

This thesis is concerned with the extended finite element method (XFEM) for deformation analysis of three-dimensional heterogeneous materials. Using the "enhanced abs enrichment", the XFEM is able to reproduce kinks in the displacements and thus jumps in the strains within elements of the underlying tetrahedral finite element mesh. A complex model for the microstructure reconstruction of the aluminum matrix composite AMC225xe and the modeling of its macroscopic thermo-mechanical plastic deformation behavior is presented, using the XFEM. Additionally, a novel stabilization algorithm is introduced for the XFEM. This algorithm requires preprocessing only.

In some processes for spinning synthetic fibers the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there is a gap between the elongation measured in experiments and the elongation obtained by numerical simulations available in the literature.
The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow and the application in turbulent spinning processes.
In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying a resulting stochastic aerodynamic drag force on the fibers then allows the simulation of the fiber dynamics modeled by a random partial differential algebraic equation system, as well as a quantification of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we reach an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.

The aim of this dissertation is the development and implementation of an algorithm for computing tropical varieties over general valued fields. The computation of tropical varieties over fields with trivial valuation is a sufficiently solved problem; for this, the authors Bogart, Jensen, Speyer, Sturmfels and Thomas impressively combine classical techniques of computer algebra with constructive methods of convex geometry.
If, however, the ground field carries a non-trivial valuation, such as the field of \(p\)-adic numbers \(\mathbb{Q}_p\), conventional Gröbner basis theory seemingly reaches its limits. The underlying monomial orderings are not suited to studying problems that depend on a non-trivial valuation on the coefficients. This has led to a series of works that modify standard Gröbner basis theory in order to incorporate the valuation of the ground field.
In this thesis we present an alternative approach and show how the valuation can be emulated by a specially introduced variable, so that no modification of the classical tools is necessary.
In the course of this, the theory of standard bases is generalised to power series over a coefficient ring. Particular emphasis is placed on ensuring that, for polynomial input data, all algorithms coincide with their classical counterparts, so that established software systems can be relied upon for practical purposes. Moreover, the construction of the Gröbner fan and the technique of the Gröbner walk are introduced for slightly inhomogeneous ideals. This is necessary because introducing the new variable breaks the homogeneity of the original ideal.
All algorithms have been implemented in Singular and are available as part of the official distribution. It is the first implementation capable of computing tropical varieties with \(p\)-adic valuation. As part of this work, a Singular package for convex geometry and an interface to Polymake were also created.

This dissertation focuses on the visualization of urban microclimate data sets,
which describe the atmospheric impact of individual urban features. The application
and adaptation of visualization and analysis concepts to enhance the
insight into observational data sets used in this specialized area are explored, motivated
through application problems encountered during active involvement
in urban microclimate research at the Arizona State University in Tempe, Arizona.
Besides two smaller projects dealing with the analysis of thermographs
recorded with a hand-held device and visualization techniques used for building
performance simulation results, the main focus of the work described in
this document is the development of a prototypic tool for the visualization
and analysis of mobile transect measurements. This observation technique involves
a sensor platform mounted to a vehicle, which is then used to traverse
a heterogeneous neighborhood to investigate the relationships between urban
form and microclimate. The resulting data sets are among the most complex
modes of in-situ observations due to their spatio-temporal dependence, their
multivariate nature, but also due to the various error sources associated with
moving platform observations.
The prototype enables urban climate researchers to preprocess their data,
to explore a single transect in detail, and to aggregate observations from multiple
traverses conducted over diverse routes for a visual delineation of climatic
microenvironments. Extending traditional analysis methods, the suggested visualization
tool provides techniques to relate the measured attributes to each
other and to the surrounding land cover structure. In addition to that, an
improved method for sensor lag correction is described, which shows the potential
to increase the spatial resolution of measurements conducted with slow
air temperature sensors.
In summary, the interdisciplinary approach followed in this thesis triggers
contributions to geospatial visualization and visual analytics, as well as to urban
climatology. The solutions developed in the course of this dissertation are
meant to support domain experts in their research tasks, providing means to
gain a qualitative overview over their specific data sets and to detect patterns,
which can then be further analyzed using domain-specific tools and methods.

In this dissertation, we discuss how to price American-style options. Our aim is to study and improve the regression-based Monte Carlo methods. In order to have good benchmarks to compare them against, we also study the tree methods.
In the second chapter, we investigate the tree methods specifically, first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options and collect them all in the appendix.
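The kind of tree benchmark described above can be sketched with a minimal Cox-Ross-Rubinstein binomial tree for a one-dimensional American put; the parameter values below are illustrative and not taken from the thesis:

```python
import math

def american_put_crr(S0, K, r, sigma, T, N):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree
    via backward induction with an early-exercise check at each node."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Payoffs at maturity: node j has had j up-moves out of N steps.
    values = [max(K - S0 * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    # Roll back through the tree, comparing continuation with exercise.
    for i in range(N - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                K - S0 * u**j * d**(i - j))
            for j in range(i + 1)
        ]
    return values[0]

price = american_put_crr(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0, N=500)
print(round(price, 3))
```

With these classic test parameters the tree value converges to roughly 4.48, which is the sort of reference number against which regression-based Monte Carlo prices can be checked.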
In the third chapter, we focus on the regression-based Monte Carlo methods theoretically and numerically. Firstly, we introduce two variations, the so-called Tsitsiklis-Roy method and the Longstaff-Schwartz method. Secondly, we illustrate the approximation of an American option by its Bermudan counterpart. Thirdly, we explain the sources of low bias and high bias. Fourthly, we compare these two methods using in-the-money paths and all paths. Fifthly, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the lower and upper bounds.
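The Longstaff-Schwartz method mentioned above can be sketched compactly. The minimal version below prices a Bermudan approximation of an American put, regressing continuation values on in-the-money paths with a simple polynomial basis; parameters and basis choice are illustrative, not the thesis's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 36.0, 40.0, 0.06, 0.2, 1.0
n_paths, n_steps = 50_000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths under the risk-neutral measure.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

# Backward induction: at each exercise date, regress the discounted future
# cashflow on the stock price over in-the-money paths only.
cashflow = np.maximum(K - S[:, -1], 0.0)
for t in range(n_steps - 2, -1, -1):
    cashflow *= disc
    itm = K - S[:, t] > 0
    if itm.any():
        x = S[itm, t]
        coeffs = np.polyfit(x, cashflow[itm], 2)   # basis: 1, x, x^2
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        ex_now = exercise > continuation
        cashflow[np.where(itm)[0][ex_now]] = exercise[ex_now]

price = disc * cashflow.mean()
print(round(price, 2))
```

Restricting the regression to in-the-money paths, as done here, is one of the design choices the chapter compares against regressing over all paths.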
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum, and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulty. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy method and the Longstaff-Schwartz method. The support vector machine further improves on the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
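A Gaussian kernel regression of the kind discussed here can be sketched as Nadaraya-Watson smoothing; this is a minimal stand-in, and the thesis's exact estimator and bandwidth selection differ. Note how the inputs are assumed scaled to [0, 1], echoing the numerical point made above:

```python
import numpy as np

def gaussian_kernel_regression(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel; inputs are
    assumed scaled to [0, 1] to avoid numerical difficulty."""
    diff = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * diff**2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(3)
# Noisy payoff-like target on [0, 1], standing in for continuation values.
x = rng.uniform(0, 1, 200)
y = np.maximum(1.0 - x, 0.0) + 0.05 * rng.standard_normal(200)

x_new = np.array([0.1, 0.5, 0.9])
est = gaussian_kernel_regression(x, y, x_new, bandwidth=0.1)
print(np.round(est, 2))
```

The bandwidth plays the role of the smoothing parameter whose choice (fixed, globally optimal, or suboptimal from a finite set) the chapter investigates.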
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. The test of this formula within the Black-Scholes model shows that the storage of data is reduced, along with the corresponding energy consumption.

Computational Homogenization of Piezoelectric Materials using FE² Methods and Configurational Forces
(2015)

Piezoelectric materials are electro-mechanically coupled materials. In these materials it is possible to produce an electric field by applying a mechanical load. This phenomenon is known as the piezoelectric effect. These materials also exhibit a mechanical deformation in response to an external electric loading, which is known as the inverse piezoelectric effect. Owing to these smart properties, piezoelectric materials find applications in sensors and actuators. Ferroelectric or piezoelectric materials show switching behavior of the polarization in the material under an external loading. Due to this property, these materials are used to produce random access memory (RAM) for the non-volatile storage of data in computing devices. It is essential to understand the material responses of piezoelectric materials properly in order to use them in engineering applications in innovative ways. Due to the growing interest in determining the material responses of smart materials (e.g., piezoelectric materials), computational methods are becoming increasingly important.
Many engineering materials possess inhomogeneities on the micro level. These inhomogeneities cause some difficulties in the determination of the material responses, computationally as well as experimentally. On the other hand, these inhomogeneities sometimes give the materials favorable physical properties; e.g., glass or carbon fiber reinforced composites are lightweight but show higher strength. Piezoelectric materials also exhibit pronounced inhomogeneities on the micro level, originating from the presence of domains, domain walls, grains, grain boundaries, micro cracks, etc. in the material. In order to capture the effects of the underlying microstructures on the macro quantities, it is essential to homogenize the material parameters and the physical responses. There are several approaches to perform the homogenization. A two-scale classical (first-order) homogenization of electro-mechanically coupled materials using a FE²-approach is discussed in this work. The main objective of this work is to investigate the influences of the underlying microstructures on the macro Eshelby stress tensor and on the macro configurational forces. The configurational forces are determined in certain defect situations, including the crack tip of a sharp crack in the macro specimen.
A literature review shows that the macro strain tensor is used to determine the micro boundary condition for the FE²-based homogenization in a small strain setting. This approach is capable of determining the consistent homogenized physical quantities (e.g., stress, strain) and the homogenized material quantities (e.g., stiffness tensor). But the application of this type of micro boundary conditions for the homogenization does not generate a physically consistent macro Eshelby stress tensor or macro configurational forces. Even in the absence of micro volume configurational forces, this approach to the homogenization of piezoelectric materials produces unphysical volume configurational forces on the macro level. After a thorough investigation of the boundary conditions on the representative volume elements (RVEs), it is found that displacement-gradient-driven micro boundary conditions remedy this issue. The use of displacement-gradient-driven micro boundary conditions also satisfies the Hill-Mandel condition. The macro Eshelby stress tensor of a purely mechanical problem in a small deformation setting can be determined in two possible ways: by using the homogenized mechanical quantities (displacement gradient and stress tensor), or by homogenizing the Eshelby stress tensor on the micro level by volume averaging. The first approach does not satisfy the Hill-Mandel condition incorporating the Eshelby stress tensor in the energy term; the second approach, on the other hand, does satisfy it. In the case of the homogenized Eshelby stress tensor determined from the homogenized physical quantities, the Hill-Mandel condition gives an additional energy term. A body in a small deformation setting is deformed according to the displacement gradient.
If the homogenization is done using strain-driven micro boundary conditions, the micro domain is deformed according to the macro strain, but the tiny vicinity around the corresponding Gauß point is deformed according to the macro displacement gradient. This implies that some restrictions are imposed at every Gauß point on the macro level. This situation leads the macro system to produce nonphysical volume configurational forces.
A FE²-based computational homogenization technique is also considered for the homogenization of piezoelectric materials. In this technique a representative volume element, which comprises the microstructural features of the material, is assigned to every Gauß point of the macro domain. The macro displacement gradient and the macro electric field, or the macro stress tensor and the macro electric displacement, are passed to the RVEs at every macro Gauß point. After determining the boundary conditions on the RVEs, the homogenization process is performed. The homogenized physical quantities and the homogenized material parameters are passed back to the macro Gauß points. In this work numerical investigations are carried out for two distinct situations of the microstructures of the piezoelectric materials regarding the evolution on the micro level: a) homogenization using stationary microstructures, and b) homogenization using evolving microstructures.
For the first case, the domain walls remain at fixed positions throughout the simulations for the homogenization of piezoelectric materials. For a considerably large external loading, the real situation is different. But to understand the effects of the underlying microstructures on the macro configurational forces, it is to some extent sufficient to do the homogenization with fixed or stationary microstructures. The homogenization process is carried out for different microstructures and for different loading conditions. If the mechanical load is applied in the direction of the polarization, a smaller crack tip configurational force is observed in comparison to the configurational force determined for a mechanical loading perpendicular to the polarization. If the polarizations in the microstructures are parallel or perpendicular to the applied electric field and the applied displacement, only configurational forces parallel to the crack ligament of the macro crack are observed. In the case of inclined polarizations in the microstructures, configurational forces inclined to the crack ligament are obtained. The simulation results also reveal that an application of an external electric field to the material reduces the value of the nodal configurational forces at the crack tip.
In the second case, the interfaces of the microstructures are allowed to move from their initial positions at every step of the applied incremental external loading. Thus, at every step of the application of the external loading, the microstructures change when the external loading is larger than the coercive field. The movement of the interfaces is realized through the nodal configurational forces on the micro level. At every step of the application of the external loading, the nodal configurational forces per unit length on the domain walls are determined in the post-processing of the FE-simulation on the micro domain. With the help of the domain wall kinetics, the new positions of the domain walls are determined. Numerical results show that the crack tip region is the most affected area in the macro domain. For that reason a very different distribution of the macro electric displacement is observed compared to that produced by using fixed microstructures. Due to the movement of the domain walls, energy is dissipated in the system. As a result, a smaller configurational force appears at the crack tip on the macro level in the case of the homogenization using evolving microstructures. By using the homogenization technique involving the evolution of the microstructures, it is possible to produce the electric displacement vs. electric field hysteresis loop on the macro level. The shape of the hysteresis loop depends on the rate of application of the external electric loading. A faster application of the external electric field widens the hysteresis loop.

The current procedures for achieving industrial process surveillance, waste reduction, and prognosis of critical process states are still insufficient in some parts of the manufacturing industry. Increasing competitive pressure, falling margins, increasing cost, just-in-time production, environmental protection requirements, and guidelines concerning energy savings pose new challenges to manufacturing companies, from the semiconductor to the pharmaceutical industry.
New, more intelligent technologies adapted to the current technical standards provide companies with improved options to tackle these situations. Here, knowledge-based approaches open up pathways that have not yet been exploited to their full extent. The Knowledge-Discovery-Process for knowledge generation describes such a concept. Based on an understanding of the problems arising during production, it derives conclusions from real data, processes these data, transfers them into evaluated models and, by this open-loop approach, reiteratively reflects the results in order to resolve the production problems. Here, the generation of data through control units, their transfer via field bus for storage in database systems, their formatting, and the immediate querying of these data, their analysis and their subsequent presentation with its ensuing benefits play a decisive role.
The aims of this work result from the lack of systematic approaches to the above-mentioned issues, such as process visualization, the generation of recommendations, the prediction of unknown sensor and production states, and statements on energy cost.
Both science and commerce offer mature statistical tools for data preprocessing, analysis and modeling, and for the final reporting step. Since their creation, the insurance business, the world of banking, market analysis, and marketing have been the application fields of these software types; they are now expanding to the production environment.
Appropriate modeling can be achieved via specific machine learning procedures, which have been established in various industrial areas, e.g., in process surveillance by optical control systems. Here, state-of-the-art classification methods are used, with multiple applications comprising sensor technology, process areas, and production site data. Manufacturing companies now intend to establish a more holistic surveillance of process data, such as, e.g., sensor failures or process deviations, to identify dependencies. The causes of quality problems must be recognized and selected in real time from about 500 attributes of a highly complex production machine. Based on these identified causes, recommendations for improvement must then be generated for the operator at the machine, in order to enable timely measures to avoid these quality deviations.
Unfortunately, the ability to meet the required increases in efficiency – with simultaneous consumption and waste minimization – still depends on data that are, for the most part, not available. There is an overrepresentation of positive examples whereas the number of definite negative examples is too low.
The acquired information can be influenced by sensor drift effects and the occurrence of quality degradation may not be adequately recognized. Sensorless diagnostic procedures with dual use of actuators can be of help here.
Moreover, in the course of a process, critical states with sometimes unexplained behavior can occur. Also in these cases, deviations could be reduced by early countermeasures.
The generation of data models using appropriate statistical methods is of advantage here.
Conventional classification methods sometimes reach their limits. Supervised learning methods are mostly used in areas of high information density with sufficient data available for the classes under examination. However, there is a growing trend (e.g., spam filtering) to apply supervised learning methods to underrepresented classes, the datasets of which are, at best, outliers or not at all existent.
The application field of One-Class Classification (OCC) deals with this issue. Standard classification procedures (e.g., k-nearest-neighbor classifiers, support vector machines) can be modified to suit such problems. Thereby, a control system is able to classify statements on changing process states or sensor deviations. The above-described knowledge discovery process was employed, by way of example, in a case study from the polymer film industry at Mondi Gronau GmbH, comprising a real-data survey at the production site and subsequent data preprocessing, modeling, evaluation, and deployment as a system for the generation of recommendations. To this end, questions regarding the following topics had to be clarified: data sources, datasets and their formatting, transfer pathways, storage media, query sequences, the employed methods of classification, their adjustment to the problems at hand, evaluation of the results, construction of a dynamic cycle, and the final implementation in the production process, along with its surplus value for the company.
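A one-class variant of the k-nearest-neighbor classifier mentioned above can be sketched as follows. This minimal numpy version is illustrative and not the thesis's exact modification: it trains on normal process states only and flags a sample as abnormal when its distance to the k-th nearest training sample exceeds a threshold learned from the training data itself:

```python
import numpy as np

class OneClassKNN:
    """One-class k-NN: accept a sample if its distance to the k-th nearest
    (normal-only) training sample is below a quantile-based threshold."""

    def __init__(self, k=3, quantile=0.95):
        self.k = k
        self.quantile = quantile

    def _kth_dist(self, points):
        d = np.linalg.norm(points[:, None, :] - self.X_[None, :, :], axis=2)
        return np.sort(d, axis=1)[:, self.k - 1]

    def fit(self, X):
        self.X_ = X
        # Leave-one-out k-th neighbour distances set the acceptance threshold.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        d[np.arange(len(X)), np.arange(len(X))] = np.inf
        kth = np.sort(d, axis=1)[:, self.k - 1]
        self.threshold_ = np.quantile(kth, self.quantile)
        return self

    def predict(self, points):
        """+1 for normal, -1 for abnormal."""
        return np.where(self._kth_dist(points) <= self.threshold_, 1, -1)

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(300, 6))   # normal process states only
clf = OneClassKNN(k=3, quantile=1.0).fit(normal)
far_away = np.full((1, 6), 8.0)                # a clearly abnormal state
print(clf.predict(normal[:5]), clf.predict(far_away))
```

Training on the overrepresented normal class alone is what makes this approach fit the situation described above, where definite negative examples are too scarce for supervised learning.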
Pivotal options for optimization with respect to ecological and economical aspects can be found here. Capacity for improvement is given in the reduction of energy consumption, CO\(_2\) emissions, and waste at all machines. At this one site, savings of several million euros per month can be achieved.
One major difficulty so far has been hardly accessible process data which, distributed on various data sources and unconnected, in some areas led to an increased analysis effort and a lack of holistic real-time quality surveillance. Monitoring of specifications and the thus obtained support for the operator at the installation resulted in a clear disadvantage with regard to cost minimization.
The data of the case study, captured according to their purposes and in coordination with process experts, amounted to 21,900 process datasets from cast film extrusion collected over a period of two years, including sensor data from dosing facilities, and 300 site-specific energy datasets from the years 2002–2014.
In the following, the investigation sequence is displayed:
1. In the first step, industrial approaches according to Industrie 4.0 and related to Big Data were investigated. The applied statistical software suites and their functions were compared with a focus on real-time data acquisition from database systems, different data formats, their sensor locations at the machines, and the data processing part. The linkage of datasets from various data sources for, e.g., labeling and downstream exploration according to the knowledge discovery process is of high importance for polymer manufacturing applications.
2. In the second step, the aims were defined according to the industrial requirements, i.e. the critical production problem called “cut-off” as the main selection, and with regard to their investigation with machine learning methods. Therefore, a system architecture corresponding to the polymer industry was developed, containing the following processing steps: data acquisition, monitoring and recommendation, and self-configuration.
3. The novel sensor datasets, with 160–2,500 real and synthetic attributes, were acquired within 1-min intervals via PLC and field bus from an Oracle database. The 160 features were reduced to 6 dimensions with feature reduction methods. Due to underrepresentation of the critical class, the learning approaches had to be modified and optimized for one-class classification, which achieved 99% accuracy after training, testing and evaluation with real datasets.
4. In the next step, the 6-dimensional dataset was scaled into lower 1-, 2-, or 3-dimensional space with classical and non-classical mapping approaches for downstream visualization. The mapped view was separated into zones of normal and abnormal process conditions by threshold setting.
5. Afterwards, the boundary zone was investigated and an approach for trajectory extraction consisting of condition points in sequence was developed, to optimize the prediction behavior of the model. The extracted trajectories were trained, tested and evaluated by state-of-the-art classification methods, achieving a 99% recognition ratio.
6. In the last step, the best methods and processing parts were converted into a specifically developed domain-specific graphical user interface for real-time visualization of process condition changes. The requirements of such an interface were discussed with the operators with regard to intuitive handling, interactive visualization and recommendations (as e.g., messaging and traffic lights), and implemented.
The software prototype was tested at a laboratory machine. Correct recognition of abnormal process problems was achieved at a 90% ratio. The software was afterwards transferred to a group of on-line production machines.
As demonstrated, the monthly amount of waste arising at machine M150 could be decreased from 20.96% to 12.44% during the application time. The frequency of occurrence of the specific problem was reduced by 30% related to monthly savings of 50,000 EUR.
In the approach pertaining to the energy prognosis of load profiles, monthly energy data from 2002 to 2014 (about 36 trajectories with three to eight real parameters each) were used as the basis, analyzed and modeled systematically. The prognosis quality increased with approaching target date. Thereby, the site-specific load profile for 2014 could be predicted with an accuracy of 99%.
The achievement of sustained cost reductions of several 100,000 euros, combined with additional savings of EUR 2.8 million, could be demonstrated.
The process improvements achieved while pursuing scientific targets could be successfully and permanently integrated at the case study plant. The increase in methodical and experimental knowledge was reflected in first economic results and could be verified numerically. The expectations of the company were more than fulfilled, and further developments based on the new findings were initiated. Among the new findings are the transfer of the scientific results onto more machines and even the initiation of further studies expanding into the diagnostics area.
Considering the size of the enterprise, further success should also be possible at other locations in the future. In the course of the grid charge exemption according to the EEG, the energy savings at further German locations can amount to 4–11% on a monetary basis and at least 5% based on energy. Up to 10% of materials and cost can be saved with regard to waste reduction related to specific problems. According to projections, material savings of 5–10 t per month and time savings of up to 50 person-hours are achievable. Important synergy effects can be created by the knowledge transfer.

Since their invention in the 1980s, behaviour-based systems have become very popular among roboticists. Their component-based nature facilitates the distributed implementation of systems, fosters reuse, and allows for early testing and integration. However, the distributed approach necessitates the interconnection of many components into a network in order to realise complex functionalities. This network is crucial to the correct operation of the robotic system. There are few sound design techniques for behaviour networks, especially if the systems shall realise task sequences. Therefore, the quality of the resulting behaviour-based systems is often highly dependent on the experience of their developers.
This dissertation presents a novel integrated concept for the design and verification of behaviour-based systems that realise task sequences. Part of this concept is a technique for encoding task sequences in behaviour networks. Furthermore, the concept provides guidance to developers of such networks. Based on a thorough analysis of methods for defining sequences, Moore machines have been selected for representing complex tasks. With the help of the structured workflow proposed in this work and the developed accompanying tool support, Moore machines defining task sequences can be transferred automatically into corresponding behaviour networks, resulting in less work for the developer and a lower risk of failure.
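As a hedged illustration of the kind of task-sequence definition described above (not the dissertation's actual tooling), a Moore machine attaches an output to each state and switches states on input events; all state and event names below are hypothetical:

```python
# Minimal Moore machine: each state carries an output (the task to activate),
# and transitions are driven by input events. Illustrative names only.
class MooreMachine:
    def __init__(self, initial, outputs, transitions):
        self.state = initial
        self.outputs = outputs          # state -> output attached to the state
        self.transitions = transitions  # (state, event) -> next state

    @property
    def output(self):
        return self.outputs[self.state]

    def step(self, event):
        self.state = self.transitions[(self.state, event)]
        return self.output

# Hypothetical excavator-style task sequence: approach -> dig -> dump.
m = MooreMachine(
    initial="approach",
    outputs={"approach": "drive_to_pile", "dig": "fill_bucket",
             "dump": "empty_bucket"},
    transitions={("approach", "at_pile"): "dig",
                 ("dig", "bucket_full"): "dump",
                 ("dump", "bucket_empty"): "approach"},
)
print(m.output)               # drive_to_pile
print(m.step("at_pile"))      # fill_bucket
print(m.step("bucket_full"))  # empty_bucket
```

Each state of such a machine would correspond to a part of the behaviour network being activated, which is what makes an automatic transfer into network structures conceivable.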
Due to the common integration of automatically and manually created behaviour-based components, a formal analysis of the final behaviour network is reasonable. For this purpose, the dissertation at hand presents two verification techniques and justifies the selection of model checking. A novel concept for applying model checking to behaviour-based systems is proposed according to which behaviour networks are modelled as synchronised automata. Based on such automata, properties of behaviour networks that realise task sequences can be verified or falsified. Extensive graphical tool support has been developed in order to assist the developer during the verification process.
Several examples are provided in order to illustrate the soundness of the presented design and verification techniques. The applicability of the integrated overall concept to real-world tasks is demonstrated using the control system of an autonomous bucket excavator. It can be shown that the proposed design concept is suitable for developing complex sophisticated behaviour networks and that the presented verification technique allows for verifying real-world behaviour-based systems.

The last couple of years have marked the entire field of information technology with the introduction of a new global resource, called data. Certainly, one can argue that large amounts of information and highly interconnected, complex datasets have been available since the dawn of the computer, and even centuries before. However, it has been only a few years since digital data has exponentially expanded, diversified, and interconnected into an overwhelming range of domains, generating an entire universe of zeros and ones. This universe represents a source of information with the potential of advancing a multitude of fields and sparking valuable insights. In order to obtain this information, the data needs to be explored, analyzed, and interpreted.
While a large set of problems can be addressed through automatic techniques from fields like artificial intelligence, machine learning, or computer vision, there are various datasets and domains that still rely on human intuition and experience to parse and discover hidden information. In such instances, the data is usually structured and represented in the form of an interactive visual representation that allows users to efficiently explore the data space and reach valuable insights. However, the experience, knowledge, and intuition of a single person also have their limits. To address this, collaborative visualizations allow multiple users to communicate, interact, and explore a visual representation by building on the different views and knowledge blocks contributed by each person.
In this dissertation, we explore the potential of subjective measurements and user emotional awareness in collaborative scenarios as well as support flexible and user-centered collaboration in information visualization systems running on tabletop displays. We commence by introducing the concept of user-centered collaborative visualization (UCCV) and highlighting the context in which it applies. We continue with a thorough overview of the state-of-the-art in the areas of collaborative information visualization, subjectivity measurement and emotion visualization, combinable tabletop tangibles, and browsing history visualizations. Based on a new web browser history visualization for exploring user parallel browsing behavior, we introduce two novel user-centered techniques for supporting collaboration in co-located visualization systems. To begin with, we inspect the particularities of detecting user subjectivity through brain-computer interfaces, and present two emotion visualization techniques for touch and desktop interfaces. These visualizations offer real-time or post-task feedback about the users’ affective states, both in single-user and collaborative settings, thus increasing the emotional self-awareness and the awareness of other users’ emotions. For supporting collaborative interaction, a novel design for tabletop tangibles is described together with a set of specifically developed interactions for supporting tabletop collaboration. These ring-shaped tangibles minimize occlusion, support touch interaction, can act as interaction lenses, and describe logical operations through nesting. The visualization and the two UCCV techniques are each evaluated individually, capturing a set of advantages and limitations of each approach.
Additionally, the collaborative visualization supported by the two UCCV techniques is also collectively evaluated in three user studies that offer insight into the specifics of interpersonal interaction and task transition in collaborative visualization. The results show that the proposed collaboration support techniques not only improve the efficiency of the visualization, but also help maintain the collaboration process and aid a balanced social interaction.

The aim of this work was to synthesize and characterize new bidentate N,N,P-ligands and their corresponding heterobimetallic complexes. These bidentate pyridylpyrimidine aminophosphine ligands were synthesized by ring closure of two different enaminones (3-(dimethylamino)-1-(pyridine-2-yl)-prop-2-en-1-one or 3-(dimethylamino)-1-(pyridine-2-yl)-but-2-en-1-one) with an excess of guanidinium salts in the presence of base. The novel phosphine-functionalized guanidinium salts were prepared from 2-(diphenylphosphinyl)ethylamine or 3-(diphenylphosphinyl)propylamine. These bidentate N,N,P-ligands contain hard and soft donor sites, which allows the coordination of two different metal centers and thus the formation of bimetallic complexes. Such bimetallic complexes can exhibit unique behavior as a result of cooperation between the two metal atoms. First, the gold(I) complexes of all four ligands were synthesized. The gold center coordinates only to the phosphorus atom, as proved by X-ray crystallography and 31P NMR spectroscopy. In addition to the monometallic gold(I) complexes, the trans-coordinated rhodium complex of the (2-amino)pyridylpyrimidine aminophosphine ligand was successfully prepared and characterized by NMR and IR spectroscopy. Reacting the mono gold(I) complexes with different metal salts such as Pd(PhCN)2Cl2, ZnCl2, and the [Ru(p-cymene)Cl2] dimer gave the target heterobimetallic complexes. The second metal center coordinates to the N,N donor site, as proved by NMR spectroscopy and ESI-MS measurements. The Au(I) and Au-Zn complexes of the N,N,P-ligands were examined as catalysts for the hydroamidation of cyclohexene with p-toluenesulfonamide, but showed no activity under the tested conditions. Further studies are necessary to understand the catalytic activities and the cooperativity between the two metal atoms.
In addition, bi- and trimetallic complexes with the rhodium compound could be synthesized and tested in different organic transformations. Furthermore, the synthesis of the chiral hydroxyl[2.2]paracyclophane substituted with five different aminopyrimidines was accomplished. These aminopyrimidine ligands were synthesized by a cyclization reaction of the hydroxyl[2.2]paracyclophane-substituted enaminone with an excess of the corresponding guanidinium salts under basic conditions. In the last part of this work, kinetic studies of the cyclopalladation reaction of the 2-(arylaminopyrimidin-4-yl)pyridine ligands with Pd(PhCN)2Cl2 were performed. These measurements were carried out using UV-Vis spectroscopy. The spectral studies of the cyclometallation step showed that the reaction follows second-order kinetics. In addition, a full kinetic investigation was performed at different temperatures and the activation parameters of complex formation were calculated.

A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows
(2015)

Lattice Boltzmann Methods have been shown to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, among them the simple handling of complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods based on discrete particle dynamics, in contrast to conventional Computational Fluid Dynamics methods, which are based on the solution of the continuum equations. Calculations of turbulent flows in engineering generally depend on modeling, since resolving all turbulent scales is, and will remain for the foreseeable future, far beyond computational possibilities. One of the most promising modeling approaches is large eddy simulation, in which the large, inhomogeneous turbulence structures are computed directly and the smaller, more homogeneous structures are modeled.
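To make the mesoscopic update concrete, here is a hedged sketch of the simplest Lattice Boltzmann scheme (a D1Q3 diffusion lattice with BGK relaxation); it is purely illustrative and not the solver developed in this work:

```python
import numpy as np

# Minimal D1Q3 Lattice Boltzmann sketch for pure diffusion (illustrative only):
# populations move along discrete velocities {0, +1, -1} and relax toward a
# local equilibrium via the BGK collision operator.
n, tau = 64, 0.8                      # lattice size, relaxation time
w = np.array([2/3, 1/6, 1/6])         # weights for velocities 0, +1, -1
rho = np.ones(n); rho[n // 2] = 2.0   # initial density with a small peak
f = w[:, None] * rho                  # initialize populations at equilibrium

for _ in range(100):
    rho = f.sum(axis=0)               # macroscopic density (zeroth moment)
    feq = w[:, None] * rho            # local equilibrium populations
    f += (feq - f) / tau              # BGK collision: relax toward equilibrium
    f[1] = np.roll(f[1], +1)          # streaming along velocity +1
    f[2] = np.roll(f[2], -1)          # streaming along velocity -1

# total mass is conserved while the peak spreads diffusively
print(round(f.sum(), 6))
```

The same collide-and-stream pattern, with richer velocity sets and equilibria, underlies the three-dimensional flow solvers discussed here.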
In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid scale model, appropriate boundary conditions for wall resolved and wall modeled calculations. It also provides conditions for turbulent domain inlets. For the case of wall modeled simulations, a two layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method.
The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance.
Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM, even though this numerical method is very well suited to this kind of vortical flow in complicated domains. In general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach is able to represent the very complex flows in the intakes both qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM in hydrodynamic engineering problems.

The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice.
By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application reach from turbulent simulations through thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular, grid-based, non-body-conformal discretization make the LBM ideally suitable for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. Those fluid flows are often laminar and non-Newtonian.
This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited by stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating such fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken in practical non-Newtonian Lattice Boltzmann simulations in order to keep them stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily. Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is chosen freely at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around this is to perform precursor simulations on a fixed grid to determine a possible time step size and simulation Mach number, respectively. These precursor simulations can be time consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for use in practical simulations of non-Newtonian fluids.
The essential novelty of the method, developed in the course of this thesis, is that the numerically modeled viscosity range is consistently adapted to the actual physically exhibited viscosity range through change of the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable for stationary flows as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)" and the combination with LBM leads to the "viscosity adaptive LBM (VALBM)".
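The core idea of mapping the physically exhibited viscosity onto a target collision frequency can be sketched as follows, using the standard relation \(\nu_{\mathrm{lattice}} = c_s^2(\tau - 1/2)\). This is a hedged illustration of the principle only, with hypothetical parameter values, and not the thesis implementation:

```python
# Illustrative sketch (not SAM-Lattice code) of the viscosity adaption idea:
# choose the time step so that the currently exhibited extreme physical
# viscosity maps exactly onto a target collision frequency omega_target.
CS2 = 1.0 / 3.0  # lattice speed of sound squared, c_s^2

def adapted_time_step(nu_phys, dx, omega_target):
    """Time step [s] that maps physical viscosity nu_phys [m^2/s] onto
    omega_target, via nu_lattice = c_s^2 * (1/omega - 1/2)."""
    tau_target = 1.0 / omega_target
    nu_lattice = CS2 * (tau_target - 0.5)
    return nu_lattice * dx**2 / nu_phys

def lattice_viscosity(nu_phys, dx, dt):
    """Dimensionless lattice viscosity for a given discretization."""
    return nu_phys * dt / dx**2

# Shear-thinning fluid: as the extreme viscosity in the system changes,
# the time step is re-adapted so that omega stays at its target.
dx = 1e-3
for nu_min in (1e-4, 1e-5, 1e-6):
    dt = adapted_time_step(nu_min, dx, omega_target=1.8)
    omega = 1.0 / (lattice_viscosity(nu_min, dx, dt) / CS2 + 0.5)
    print(f"nu={nu_min:.0e}  dt={dt:.2e}  omega={omega:.3f}")
```

In the actual method, changing the time step also requires rescaling the distribution functions, which is what the collision-step treatment described above accomplishes.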
Besides the introduction of the VALBM, a goal of this thesis is to offer assistance in the spirit of a theory guide to students and assistant researchers concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail.
The derivation is restricted to isothermal flows. Limitations of the method, such as the restriction to low Mach number flows, are highlighted, and the accuracy of the method is discussed.
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, our extension of the open source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all components can be controlled through a user-friendly GUI (SamLattice) implemented by the author using Qt, including features to visually track output data.
In Chapter 3, some fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details on the implementation are given in the comprehensive programmers' guides to SamGenerator and SamSolver.
In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's First Problem, the suddenly accelerated plate, and for Stokes's Second Problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald-de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement, as implemented in SamSolver, is verified. Finally, the verification of higher order boundary conditions is done for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids.
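The Ostwald-de Waele power law mentioned above relates the apparent kinematic viscosity to the shear rate; a minimal sketch, with illustrative parameter values that are not taken from the thesis:

```python
# Ostwald-de Waele power-law model: apparent kinematic viscosity
#   nu(gamma_dot) = K * gamma_dot**(n - 1)
# n < 1: shear thinning, n = 1: Newtonian, n > 1: shear thickening.
def power_law_viscosity(shear_rate, K, n, nu_min=1e-6, nu_max=1e-1):
    nu = K * shear_rate ** (n - 1.0)
    # In practice the modeled range is truncated for numerical stability;
    # the viscosity adaption method makes these bounds fit the physics.
    return min(max(nu, nu_min), nu_max)

# Shear-thinning example (hypothetical K and n):
K, n = 0.01, 0.5
for gamma_dot in (1.0, 100.0, 10000.0):
    print(gamma_dot, power_law_viscosity(gamma_dot, K, n))
```

The wide viscosity spread produced by even moderate shear-rate ranges is exactly what makes such simulations prone to instability without adaption.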
In Chapter 5, the theory of the viscosity adaption method is introduced. For the adaption process, a target collision frequency or target simulation Mach number must be chosen and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions for the simulation Mach number. The adaption is done in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is conducted in the back mapping, because we use the transformation matrix on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. For that reason it is clear that the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Also in Chapter 5, the multi-level treatment is discussed.
Depending on the target collision frequency and the target Mach number, the VAM can be used to make optimal use of the viscosity range that can be modeled within the stability bounds, or to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model. In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and in every operating point. For comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, which partly show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter.
Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced by the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimate of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error as opposed to the compressibility error.
The result of this dissertation is a verified and validated software bundle based on the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, testing the VALBM with the minimal error estimate and investigating the VALBM in the case of turbulent flows are suggested.

It is well known that the structure at the microscopic level strongly influences the macroscopic properties of materials. Moreover, the advancement of imaging technologies makes it possible to capture the complexity of structures at ever decreasing scales. Therefore, more sophisticated image analysis techniques are needed.
This thesis provides tools to geometrically characterize different types of three-dimensional structures, with applications to industrial production and to materials science. Our goal is to enhance methods that allow the extraction of geometric features from images and the automatic processing of this information.
In particular, we investigate which characteristics are sufficient and necessary to infer the desired information, such as particle classification for technical cleanliness and the fitting of stochastic models in materials science.
In the production lines of the automotive industry, dirt particles collect on the surfaces of mechanical components. Residual dirt might reduce the performance and durability of assembled products. Geometric characterization of these particles makes it possible to identify their potential danger. While the current standards are based on 2D microscopic images, we extend the characterization to 3D.
In particular, we provide a collection of parameters that exhaustively describe the size and shape of three-dimensional objects and can be efficiently estimated from binary images. Furthermore, we show that only a few features are sufficient to classify particles according to the standards of technical cleanliness.
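As a hedged sketch of what such voxel-based size and shape descriptors look like (the functions and thresholds below are hypothetical illustrations, not the estimators developed in the thesis):

```python
import numpy as np

# Hedged sketch: two elementary size/shape descriptors estimated from a
# binary 3D image (voxel spacing assumed to be 1). Illustrative only.
def bounding_box_lengths(img):
    """Side lengths of the axis-aligned bounding box of the foreground."""
    idx = np.argwhere(img)
    return (idx.max(axis=0) - idx.min(axis=0) + 1).astype(float)

def elongation(img):
    """Ratio of the longest to the shortest bounding-box side: a crude
    shape feature separating fiber-like from compact particles."""
    lengths = np.sort(bounding_box_lengths(img))
    return lengths[-1] / lengths[0]

# Compact cube vs. fiber-like rod (synthetic example objects):
cube = np.zeros((20, 20, 20), bool); cube[5:15, 5:15, 5:15] = True
rod  = np.zeros((20, 20, 20), bool); rod[2:18, 9:11, 9:11] = True
print(elongation(cube), elongation(rod))  # 1.0 8.0
```

Classification for technical cleanliness would then threshold a small set of such features, which is the spirit of the reduction shown in the thesis.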
In the context of materials science, we consider two types of microstructures: fiber systems and foams.
Stochastic geometry provides the foundations for versatile models able to encompass the geometry observed in the samples. To allow automatic model fitting, we need rules stating which parameters of the model yield the best-fitting characteristics. However, the validity of such rules strongly depends on the properties of the structures and on the choice of the model. For instance, an isotropic orientation distribution yields the best theoretical results for Boolean models and Poisson processes of cylinders with circular cross sections. Nevertheless, fiber systems in composites are often anisotropic.
Starting from analytical results in the literature, we derive formulae for anisotropic Poisson processes of cylinders with polygonal cross sections that can be used directly in applications. We apply this procedure to a sample of medium density fiberboard. Even though the image resolution does not allow reliable estimation of the characteristics of single fibers, we can fit Boolean models and Poisson cylinder processes. In particular, we show the complete model fitting and validation procedure for cylinders with circular and square cross sections.
Different problems arise when modeling cellular materials. Motivated by the physics of foams, random Laguerre tessellations are a good choice for modeling the pore system of foams. Considering tessellations generated by systems of non-overlapping spheres allows control of the cell size distribution, but comes at the loss of an analytical description of the model. Nevertheless, automatic model fitting can still be achieved by approximating the characteristics of the tessellation as functions of the model parameters. We investigate how to improve the choice of the model parameters. Angles between facets and between edges have not been considered so far. We show that the distributions of angles in Laguerre tessellations depend on the model parameters; thus, including the moments of the angles still allows automatic model fitting. Moreover, we propose an algorithm to estimate angles from images of real foams. We observe that angles are matched well in random Laguerre tessellations even when they are not employed to choose the model parameters. Then, we concentrate on the edge length distribution. Laguerre tessellations contain many more short edges than real foams. To deal with this problem, we consider relaxed models. Relaxation refers to topological and structural modifications of a tessellation in order to make it comply with Plateau's laws of mechanical equilibrium. We inspect samples of different types of foams: closed and open cell, polymeric and metallic. By comparing the geometric characteristics of the model and of the relaxed tessellations, we conclude that whether the relaxation improves the edge length distribution strongly depends on the type of foam.

In this paper we propose a phenomenological model for the formation of an interstitial gap between the tumor and the stroma. The gap is mainly filled with acid produced by the progressing edge of the tumor front. Our setting extends existing models of acid-induced tumor invasion to incorporate several features of local invasion, such as the formation of gaps, spikes, buds, islands, and cavities. These behaviors are obtained mainly due to the random dynamics at the intracellular level and the go-or-grow-or-recede dynamics on the population scale, together with the nonlinear coupling between the microscopic (intracellular) and macroscopic (population) levels. The well-posedness of the model is proved using the semigroup technique, and 1D and 2D numerical simulations are performed to illustrate the model predictions and draw conclusions based on the observed behavior.

There are a number of designs for an online advertising system that allow for behavioral targeting without revealing user online behavior or user interest profiles to the ad network. Although these designs purport to be practical solutions, none of them adequately consider the role of ad auctions, which today are central to the operation of online advertising systems. Moreover, none of the proposed designs have been deployed in real-life settings. In this thesis, we present an effort to fill this gap. First, we address the challenge of running ad auctions that leverage user profiles while keeping the profile information private. We define the problem, broadly explore the solution space, and discuss the pros and cons of these solutions. We analyze the performance of our solutions using data from Microsoft Bing advertising auctions. We conclude that, while none of our auctions are ideal in all respects, they are adequate and practical solutions. Second, we build and evaluate a fully functional prototype of a practical privacy-preserving ad system at a reasonably large scale. With more than 13K opted-in users, our system was in operation for over two months serving an average of 4800 active users daily. During the last month alone, we registered 790K ad views, 417 clicks, and even a small number of product purchases. Our system obtained click-through rates comparable with those for Google display ads. In addition, our prototype is equipped with a differentially private analytics mechanism, which we used as the primary means for gathering experimental data. In this thesis, we describe our first-hand experience and lessons learned in running the world's first fully operational “private-by-design” behavioral advertising and analytics system.

An efficient multiscale approach is established in order to compute the macroscopic response of nonlinear composites. The micro problem is rewritten in an integral form of the Lippmann-Schwinger type and solved efficiently by Fast Fourier Transforms. Using realistic microstructure models, complex nonlinear effects are reproduced and validated against measured data of fiber reinforced plastics. The micro problem is integrated into a Finite Element framework which is used to solve the macroscale. The scale coupling technique and a consistent numerical algorithm are established. The method provides an efficient way to determine the macroscopic response considering arbitrary microstructures, constitutive behaviors, and loading conditions.
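As a hedged illustration of the Lippmann-Schwinger/FFT idea (in the spirit of the classical Moulinec-Suquet basic scheme, not this work's actual solver), consider its simplest 1D scalar analogue: a periodic coefficient \(k(x)\), a gradient field with prescribed mean, and a fixed-point iteration applied in Fourier space through a reference medium \(k_0\). For a 1D layered medium the effective coefficient is the harmonic mean, which the iteration recovers:

```python
import numpy as np

# 1D scalar Lippmann-Schwinger fixed point (basic scheme), illustrative only.
n = 64
k = np.where(np.arange(n) < n // 2, 1.0, 4.0)  # two-phase layered medium
k0 = 0.5 * (k.min() + k.max())                 # reference medium
E = 1.0                                        # prescribed mean gradient
e = np.full(n, E)                              # initial gradient field

for _ in range(200):
    flux_hat = np.fft.fft(k * e)     # flux in Fourier space
    e_hat = np.fft.fft(e)
    e_hat[1:] -= flux_hat[1:] / k0   # Green operator: Gamma0_hat = 1/k0, xi != 0
    e_hat[0] = n * E                 # enforce the prescribed mean gradient
    e = np.fft.ifft(e_hat).real

k_eff = np.mean(k * e)               # effective coefficient = mean flux
print(round(k_eff, 6))               # harmonic mean 2/(1/1 + 1/4) = 1.6
```

The real method works on tensorial strain/stress fields in 3D with nonlinear constitutive laws, but the structure (FFT, reference-medium Green operator, fixed mean) is the same.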

This bachelor thesis is concerned with arrangements of hyperplanes, that
is, finite collections of hyperplanes in a finite-dimensional vector
space. Such arrangements can be studied using methods from
combinatorics, topology or algebraic geometry. Our focus lies on an
algebraic object associated to an arrangement \(\mathcal{A}\), the module \(\mathcal{D(A)}\) of
logarithmic derivations along \(\mathcal{A}\). It was introduced by K. Saito in the
context of singularity theory, and intensively studied by Terao and
others. If \(\mathcal{D(A)}\) admits a basis, the arrangement \(\mathcal{A}\) is called free.
Ziegler generalized the concept of freeness to so-called
multiarrangements, where each hyperplane carries a multiplicity. Terao
conjectured that freeness of arrangements can be decided based on the
combinatorics. We pursue the analogous question for multiarrangements in
special cases. Firstly, we give a new proof of a result of Ziegler
stating that generic multiarrangements are totally non-free, that is,
non-free for any multiplicity. Our proof relies on the new concept of
unbalanced multiplicities. Secondly, we consider freeness asymptotically
for increasing multiplicity of a fixed hyperplane. We give an explicit
bound for the multiplicity where the freeness property has stabilized.
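For orientation, the central object can be recalled as follows (a standard definition from the literature, stated under the usual conventions, where \(S\) denotes the coordinate ring of the ambient vector space and \(\alpha_H\) a defining linear form of the hyperplane \(H\)):
\[
  D(\mathcal{A}) \;=\; \bigl\{\, \theta \in \operatorname{Der}(S) \;:\; \theta(\alpha_H) \in \alpha_H\, S \ \text{ for all } H \in \mathcal{A} \,\bigr\},
\]
and \(\mathcal{A}\) is free precisely when \(D(\mathcal{A})\) is a free \(S\)-module. For a multiarrangement \((\mathcal{A}, m)\) in Ziegler's sense, the condition becomes \(\theta(\alpha_H) \in \alpha_H^{m(H)} S\), with the multiplicity \(m(H)\) controlling the order of tangency required along each hyperplane.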

Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*}
&{}(P_{1,\tau}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\} \\ &{}(P_{2,\lambda}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \{ \Phi(x) + \lambda \Psi(x) \}, \; \lambda > 0 \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: \begin{align*} {}& \text{1. Study the solver sets \({\rm SOL}(P_{1,\tau})\) and
\({\rm SOL}(P_{2,\lambda})\) of the minimization problems.} \\ {}& \text{2. Ensure that the minimization problems have solutions.} \end{align*} This thesis provides contributions to both tasks: Regarding the first task for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the setvalued curves \begin{align*}
\tau \mapsto {}& {\rm SOL}(P_{1,\tau}), \; \tau \in (0,c) \\ {} \lambda \mapsto {}& {\rm SOL}(P_{2,\lambda}), \; \lambda \in (0,d) \end{align*} coincide, up to an order-reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover, we show that the solver sets keep changing as \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\).
In the presence of lower semicontinuity, the second task is accomplished if we additionally have coercivity. We examine lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity together with coercivity.
Dropping any lower semicontinuity assumption we also prove a theorem on the coercivity of a sum of functions.
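As a toy illustration of the relationship between \((P_{1,\tau})\) and \((P_{2,\lambda})\), consider the scalar case \(\Phi(x) = \tfrac{1}{2}(x-a)^2\) and \(\Psi(x) = |x|\) (an assumption for illustration only; the thesis treats a far more general setting). The constrained problem clips \(a\), the penalized problem soft-thresholds it, and the two solution curves match under the order-reversing parameter change \(g(\tau) = a - \tau\):

```python
# Toy 1-D illustration (not from the thesis): Phi(x) = 0.5*(x-a)^2, Psi(x) = |x|.
# P_{1,tau}:    argmin Phi(x) s.t. |x| <= tau  ->  clip a to [-tau, tau]
# P_{2,lambda}: argmin Phi(x) + lambda*|x|     ->  soft-threshold a by lambda

def solve_constrained(a, tau):
    return max(-tau, min(tau, a))

def solve_penalized(a, lam):
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0

a = 2.0
for tau in [0.5, 1.0, 1.5]:
    lam = a - tau  # order-reversing change g(tau) = a - tau, valid for 0 < tau < a
    print(tau, lam, solve_constrained(a, tau), solve_penalized(a, lam))
    # same minimizer in both problems: tau increases while lambda decreases
```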

A nonlocal stochastic model for intra- and extracellular proton dynamics in a tumor is proposed.
The intracellular dynamics is governed by an SDE coupled to a reaction-diffusion
equation for the extracellular proton concentration on the macroscale. In a more general context
the existence and uniqueness of solutions for local and nonlocal
SDE-PDE systems are established, allowing, in particular, an analysis of the proton dynamics model both in its local version and in the case with nonlocal path dependence.
Numerical simulations are performed
to illustrate the behavior of solutions, providing some insights into the effects of randomness on tumor acidity.
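A minimal Euler-Maruyama/finite-difference sketch of a coupled SDE-PDE system of this flavor (all coefficients, the 1-D domain, and the specific coupling are invented for illustration and are not taken from the proposed model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coupled SDE-PDE system (illustrative assumptions only):
# intracellular proton level h(t):      dh = (alpha*v_mid - beta*h) dt + sigma*h dW
# extracellular concentration v(t, x):  v_t = D*v_xx + gamma*h - delta*v  (periodic b.c.)
nx, dx, dt = 50, 0.1, 1e-4           # explicit scheme: D*dt/dx**2 = 0.01 < 0.5 (stable)
D, alpha, beta, gamma, delta, sigma = 1.0, 1.0, 0.5, 0.8, 0.3, 0.2
h = 1.0
v = np.ones(nx)
for _ in range(1000):
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    v = v + dt * (D * lap + gamma * h - delta * v)      # forward-Euler PDE step
    dW = rng.normal(0.0, np.sqrt(dt))                   # Brownian increment
    h = h + dt * (alpha * v[nx // 2] - beta * h) + sigma * h * dW  # Euler-Maruyama
print(h, v.mean())
```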

To write about the history of a subject is a challenge that grows with the number of pages as the original goal of completeness is turning more and more into an impossibility. With this in mind, the present article takes a very narrow approach and uses personal side trips and memories on conferences,
workshops, and summer schools as the stage for some of the most important protagonists and their contributions to the field of Differential-Algebraic Equations (DAEs).

In a networked system, the communication system is indispensable but often the weakest link with respect to performance and reliability. This holds particularly for wireless communication systems, where the error- and interference-prone medium and the character of network topologies pose special challenges. However, there are many scenarios of wireless networks in which a certain quality of service has to be provided despite these conditions. In this regard, distributed real-time systems, whose realization by wireless multi-hop networks is becoming increasingly popular, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and come with the required amount of efficiency and predictability, while additionally considering the scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, places demands not only on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints.
The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol is referred to as the Arbitrating and Cooperative Transfer Protocol (ACTP), and is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by the incorporation of black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and time behavior of ACTP. In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems such as leader election and data dissemination. Furthermore, results of experimental evaluations with customary wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits.
In the second part of this thesis, the focus is shifted from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well-established in the domain of telecommunication and distributed systems, the predictability of its implementations is often insufficient as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, an approved concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution orders on design as well as on implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions reduce worst-case execution times substantially, and improve the predictability of SDL implementations and the language's applicability to real-time systems.
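The arbitration logic of a binary countdown protocol can be sketched as follows (a simplified model, not the ACTP implementation: the bits here abstract from the black-burst encoding, and all contending nodes are assumed to lie within one arbitration radius):

```python
# Binary countdown arbitration sketch: each node transmits its priority
# MSB-first; a "1" bit is dominant (in ACTP it would be sent as a black burst
# detectable by all nodes in the arbitration radius). A node that sends "0"
# but observes a dominant "1" withdraws, so the highest priority wins
# deterministically after exactly `bits` slots.

def binary_countdown(priorities, bits):
    active = set(priorities)
    for i in reversed(range(bits)):              # one slot per bit, MSB first
        sent = {(p >> i) & 1 for p in active}
        if 1 in sent:                            # dominant bit observed this slot
            active = {p for p in active if (p >> i) & 1 == 1}
    assert len(active) == 1                      # distinct priorities -> unique winner
    return active.pop()

print(binary_countdown([5, 9, 12, 3], bits=4))   # -> 12, the highest priority
```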

We study an online flow shop scheduling problem where each job consists of several tasks that have to be completed in t different stages and the goal is to maximize the total weight of accepted jobs.
The set of tasks of a job contains one task for each stage and each stage has a dedicated set of identical parallel machines corresponding to it that can only process tasks of this stage. In order to gain the weight (profit) associated with a job j, each of its tasks has to be executed between a task-specific release date and deadline subject to the constraint that all tasks of job j from stages 1, …, i-1 have to be completed before the task of the ith stage can be started. In the online version, jobs arrive over time and all information about the tasks of a job becomes available at the release date of its first task. This model can be used to describe production processes in supply chains when customer orders arrive online.
We show that even the basic version of the offline problem with a single machine in each stage, unit weights, unit processing times, and fixed execution times for all tasks (i.e., deadline minus release date equals processing time) is APX-hard. Moreover, we show that the approximation ratio of any polynomial-time approximation algorithm for this basic version of the problem must depend on the number t of stages.
For the online version of the basic problem, we provide a (2t-1)-competitive deterministic online algorithm and a matching lower bound. Moreover, we provide several (sometimes tight) upper and lower bounds on the competitive ratio of online algorithms for several generalizations of the basic problem involving different weights, arbitrary release dates and deadlines, different processing times of tasks, and several identical machines per stage.
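The structure of the basic model can be illustrated with a small greedy admission sketch (hypothetical code: this is not the (2t-1)-competitive algorithm from the paper, merely a demonstration of the stage/slot structure with unit processing times and fixed execution times):

```python
# One machine per stage; a job is a tuple of unit-length slots, one per stage,
# strictly increasing across stages (the stage-i task must finish before the
# stage-(i+1) task starts). Greedily accept a job iff all its slots are free.

def greedy_accept(jobs, t):
    busy = [set() for _ in range(t)]          # occupied unit slots per stage
    accepted = []
    for name, slots in jobs:                  # jobs processed in release order
        assert len(slots) == t and all(a < b for a, b in zip(slots, slots[1:]))
        if all(s not in busy[i] for i, s in enumerate(slots)):
            for i, s in enumerate(slots):
                busy[i].add(s)
            accepted.append(name)
    return accepted

jobs = [("j1", (0, 1, 2)), ("j2", (0, 2, 3)), ("j3", (1, 2, 4))]
print(greedy_accept(jobs, t=3))  # j1 and j3 fit; j2 is blocked in slot 0 of the first stage
```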

In the digital era we live in, users can access an abundance of digital resources in their daily life. These digital resources can be located on the user's devices, in traditional repositories such as intranets or digital libraries, but also in open environments such as the World Wide Web.
To be able to efficiently work with this abundance of information, users need support to get access to the resources that are relevant to them. Access to digital resources can be supported in various ways. Whether we talk about technologies for browsing, searching, filtering, ranking, or recommending resources: what they all have in common is that they depend on the available information (i.e., resources and metadata). The accessibility of digital resources that meet a user's information need, and the existence and quality of metadata is crucial for the success of any information system.
This work focuses on how social media technologies can support the access to digital resources. In contrast to closed and controlled environments where only selected users have the rights to contribute digital resources and metadata, and where this contribution involves a social process of formal agreement of the relevant stakeholders, potentially any user can easily create and provide information in social media environments. This usually leads to a larger variety of resources and metadata, and allows for dynamics that would otherwise hardly be possible.
Most information systems still mainly rely on traditional top-down approaches where only selected stakeholders can contribute information. The main idea of this thesis is an approach that allows for introducing the characteristics of social media environments in such traditional contexts. The requirements for such an approach are being examined, as well as the benefits and potentials it can provide.
The ALOE infrastructure was developed according to the identified requirements and realises a Social Resource and Metadata Hub. Case studies and evaluation results are provided to show the impact of the approach on users' behaviour and the creation of digital resources and metadata, and to justify the presented approach.

Maintaining complex software systems tends to be a costly activity where software engineers spend a significant amount of time trying to understand the system's structure and behavior. As early as the 1980s, operation and maintenance costs were already twice as expensive as the initial development costs incurred. Since then these costs have steadily increased. The focus of this thesis is to reduce these costs through novel interactive exploratory visualization concepts and to apply these modern techniques in the context of services offered by software quality analysis.
Costs associated with the understanding of software are governed by specific features of the system in terms of different domains, including re-engineering, maintenance, and evolution. These features are reflected in software measurements or inner qualities such as extensibility, reusability, modifiability, testability, compatibility, or adaptability. The presence or absence of these qualities determines how easily a software system can conform to or be customized to meet new requirements. Consequently, the need arises to monitor and evaluate the qualitative state of a software system in terms of these qualities. Using metrics-based analysis, production costs and quality defects of the software can be recorded objectively and analyzed.
In practice, there exist a number of free and commercial tools that analyze the inner quality of a software system through the use of software metrics. However, most of these tools focus on software data mining and metrics (computational analysis) and only a few support visual analytical reasoning. Typically, computational analysis tools generate data and software visualization tools facilitate the exploration and explanation of this data through static or interactive visual representations. Tools that combine these two approaches focus only on well-known metrics and lack the ability to examine user defined metrics. Further, they are often confined to simple visualization methods and metaphors, including charts, histograms, scatter plots, and node-link diagrams.
The goal of this thesis is to develop methodologies that combine computational analysis methods together with sophisticated visualization methods and metaphors through an interactive visual analysis approach. This approach promotes an iterative knowledge discovery process through multiple views of the data where analysts select features of interest in one of the views and inspect data items of the selected subset in all of the views. On the one hand, we introduce a novel approach for the visual analysis of software measurement data that captures complete facts of the system, employs a flow-based visual paradigm for the specification of software measurement queries, and presents measurement results through integrated software visualizations. This approach facilitates the on-demand computation of desired features and supports interactive knowledge discovery - the analyst can gain more insight into the data through activities that involve: building a mental model of the system; exploring expected and unexpected features and relations; and generating, verifying, or rejecting hypotheses with visual tools. On the other hand, we have also extended existing tools with additional views of the data for the presentation and interactive exploration of system artifacts and their inter-relations.
Contributions of this thesis have been integrated into two different prototype tools. First evaluations of these tools show that they can indeed improve the understanding of large and complex software systems.

A new solution approach for solving the 2-facility location problem in the plane with block norms
(2015)

Motivated by the time-dependent location problem over T time-periods introduced in
Maier and Hamacher (2015) we consider the special case of two time-steps, which was shown
to be equivalent to the static 2-facility location problem in the plane. Geometric optimality
conditions are stated for the median objective. When using block norms, these conditions
are used to derive a polygon grid inducing a subdivision of the plane based on normal cones,
yielding a new approach to solve the 2-facility location problem in polynomial time. Combinatorial algorithms for the 2-facility location problem based on geometric properties are
deduced and their complexities are analyzed. These methods differ from others in that they
work entirely on geometric objects to derive the optimal solution set.
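For illustration, the median objective of the planar 2-facility problem under the l1 block norm can be evaluated by brute force over a candidate grid (a hypothetical baseline for comparison; the paper's normal-cone subdivision achieves polynomial time, whereas this exhaustive search only demonstrates the objective):

```python
# Each demand point is served by its nearer facility; by a classical
# optimality argument for the l1 median, candidate coordinates can be taken
# from the demand points' own x- and y-coordinates.
from itertools import combinations

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def two_median(points):
    cand = [(x, y) for x, _ in points for _, y in points]   # candidate grid
    return min(
        (sum(min(l1(p, f1), l1(p, f2)) for p in points), (f1, f2))
        for f1, f2 in combinations(cand, 2)
    )

points = [(0, 0), (0, 1), (1, 0), (5, 5), (6, 5)]
cost, (f1, f2) = two_median(points)
print(cost, f1, f2)   # optimal cost 3: one facility per point cluster
```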

Scheduling-Location (ScheLoc) Problems integrate the separate fields of scheduling and location problems. In ScheLoc Problems the objective is to find locations for the machines and a schedule for each machine, subject to some production and location constraints, such that some scheduling objective is minimized. In this paper we consider the Discrete Parallel Machine Makespan (DPMM) ScheLoc Problem, where the set of possible machine locations is discrete and a set of n jobs has to be taken to the machines and processed such that the makespan is minimized. Since the separate location and scheduling problems are both NP-hard, so is the corresponding ScheLoc Problem. Therefore, we propose an integer programming formulation and different versions of clustering heuristics, where jobs are split into clusters and each cluster is assigned to one of the possible machine locations. Since the IP formulation can only be solved for small-scale instances, we propose several lower bounds to measure the quality of the clustering heuristics. Extensive computational tests show the efficiency of the heuristics.
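One possible clustering-style assignment heuristic can be sketched as follows (the 1-D job positions, the travel-time model, and the LPT ordering are all assumptions for illustration; the paper's heuristics and transport model may differ):

```python
# Greedy DPMM-style sketch: each job has a position and a processing time;
# a job placed at a location can start only after it has travelled there
# (travel time = distance) and the machine there is free. Jobs are taken in
# LPT (longest processing time first) order and assigned to the location
# that currently yields the smallest completion time.

def schel_makespan(jobs, locations):
    load = {loc: 0.0 for loc in locations}            # machine-free times
    for pos, p in sorted(jobs, key=lambda j: -j[1]):  # LPT order
        best = min(load, key=lambda loc: max(load[loc], abs(loc - pos)) + p)
        load[best] = max(load[best], abs(best - pos)) + p
    return max(load.values())

jobs = [(1.0, 3.0), (2.0, 2.0), (9.0, 4.0), (10.0, 1.0)]
print(schel_makespan(jobs, locations=[0.0, 10.0]))  # -> 6.0
```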

The Wilkie model is a stochastic asset model developed by A.D. Wilkie in 1984 to explore the behaviour of investment factors of insurers within the United Kingdom. To date, however, the Wilkie model has not been studied in a portfolio optimization framework. The original Wilkie model considers a discrete-time horizon, and we apply its concept to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtain the estimated parameters for each sub-model within the Wilkie model that suit the case of Malaysia, which permits us to analyse the results from both statistical and economic viewpoints. We then review the continuous-time case, which was initially introduced by Terence Chan in 1998. This continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique, consisting of the maximization problem itself, the Hamilton-Jacobi-Bellman (HJB) equation, the solution to the HJB equation, and finally the verification theorem. In finding the optimal portfolio, we obtain the specific solution of the HJB equation and prove its optimality via the verification theorem. For a simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem.
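For the buy-and-hold case, a standard one-period mean-variance computation for a bond/stock mix looks like this (a generic textbook sketch with invented numbers, not calibrated to the Wilkie model or the Malaysian data):

```python
# Maximize  w*(mu - r) - 0.5*gamma*w**2*sigma**2  over the stock weight w,
# where mu is the stock's expected return, r the bond return, sigma the
# stock volatility, and gamma the risk-aversion parameter. Setting the
# derivative to zero gives the closed-form optimal weight below.

def mv_weight(mu, r, sigma, gamma):
    return (mu - r) / (gamma * sigma**2)

w = mv_weight(mu=0.08, r=0.03, sigma=0.2, gamma=2.5)
print(w)  # 0.5 -> hold 50% stock and 50% bond
```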

Large displays are becoming more and more popular due to dropping prices. Their size and high resolution foster collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big data applications increases. The increased screen size and other properties of large displays pose new challenges to Human-Computer Interaction with these screens. These include issues such as limited scalability with the number of users and the diversity of input devices in general, leading to increased learning effort for users, and more.
Using smart phones and tablets as interaction devices for large displays can solve many of these issues. Since they are almost ubiquitous today, users can bring their own device. This approach scales well with the number of users. These mobile devices are easy and intuitive to use and allow for new interaction metaphors, as they feature a wide array of input and output capabilities, such as touch screens, cameras, accelerometers, microphones, speakers, Near-Field Communication, WiFi, etc.
This thesis presents a concept to solve the issues posed by large displays. We show proofs of concept, with specialized approaches demonstrating the viability of the concept. A generalized, eyes-free technique using smartphones or tablets to interact with any kind of large display, regardless of hardware or software, then overcomes the limitations of the specialized approaches. This is implemented in a large display application that is designed to run under a multitude of environments, including both 2D and 3D display setups. A special visualization method is used to combine 2D and 3D data in a single visualization.
Additionally, the thesis presents several approaches to solve common issues with large display interaction, such as target sizes on large displays becoming too small, expensive tracking hardware, and eyes-free interaction through virtual buttons. These methods provide alternatives and context for the main contribution.

Open distributed systems are a class of distributed systems where (i) only partial information about the environment, in which they are running, is present, (ii) new resources may become available at runtime, and (iii) a subsystem may become aware of other subsystems after some interaction. Modeling and implementing such systems correctly is a complex task due to the openness and the dynamicity aspects. One way to ensure that the resulting systems behave correctly is to utilize formal verification.
Formal verification requires an adequate semantic model of the implementation, a specification of the desired behavior, and a reasoning technique. The actor model is a semantic model that captures the challenging aspects of open distributed systems by utilizing actors as universal primitives to represent system entities, allowing them to create new actors and to communicate by sending directed messages in reply to received messages. To enable compositional reasoning, where the reasoning task is reduced to independent verification of the system parts, semantic entities at a higher level of abstraction than actors are needed.
This thesis proposes an automaton model and combines sound reasoning techniques to compositionally verify implementations of open actor systems. Based on I/O automata, the model allows automata to be created dynamically and captures dynamic changes in communication patterns. Each automaton represents either an actor or a group of actors. The specification of the desired behavior is given constructively as an automaton. As the basis for compositionality, we formalize a component notion based on the static structure of the implementation instead of the dynamic entities (the actors) occurring in the system execution. The reasoning proceeds in two stages. The first stage establishes the connection between the automata representing single actors and their implementation description by means of weakest liberal preconditions. The second stage employs this result as the basis for verifying whether a component specification is satisfied. The verification is done by building a simulation relation from the automaton representing the implementation to the component's automaton. Finally, we validate the compositional verification approach through a number of examples by proving correctness of their actor implementations with respect to system specifications.
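The actor primitives mentioned above (mailboxes, directed messages, replies, dynamic actor creation) can be sketched in a few lines (an illustrative toy runtime, far simpler than the thesis's automaton-based semantics and reasoning machinery):

```python
from collections import deque

class Actor:
    def __init__(self, system):
        self.system = system
    def receive(self, sender, msg):
        raise NotImplementedError

class System:
    def __init__(self):
        self.queue = deque()              # in-flight (sender, target, msg) triples
    def spawn(self, cls, *args):          # dynamic actor creation
        return cls(self, *args)
    def send(self, sender, target, msg):  # directed, asynchronous message
        self.queue.append((sender, target, msg))
    def run(self):
        while self.queue:                 # deliver messages until quiescence
            sender, target, msg = self.queue.popleft()
            target.receive(sender, msg)

class Counter(Actor):
    def __init__(self, system):
        super().__init__(system)
        self.n = 0
    def receive(self, sender, msg):
        if msg == "inc":
            self.n += 1
        elif msg == "get":                # reply to the received message
            self.system.send(self, sender, ("count", self.n))

class Client(Actor):
    def __init__(self, system, counter):
        super().__init__(system)
        self.result = None
        for _ in range(3):
            system.send(self, counter, "inc")
        system.send(self, counter, "get")
    def receive(self, sender, msg):
        self.result = msg

sys_ = System()
counter = sys_.spawn(Counter)
client = sys_.spawn(Client, counter)
sys_.run()
print(client.result)  # ('count', 3)
```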

The work consists of two parts.
In the first part an optimization problem of structures of linear elastic material with contact modeled by Robin-type boundary conditions is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for formulations of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model extended with jump conditions at contact interfaces is used. The stress optimization problem is of a PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided.
In the second part a non-linear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. Friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement.
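The Capstan (Euler-Eytelwein) equation used for the friction model relates the holding force to the load via the wrap angle, T_hold = T_load * exp(-mu * theta). A quick numeric check (the numbers are invented for illustration):

```python
import math

# Capstan equation: mu is the friction coefficient and theta the total wrap
# angle in radians around the (yarn) contact.
def capstan_hold_force(t_load, mu, theta):
    return t_load * math.exp(-mu * theta)

# holding a 100 N load with mu = 0.3 over one full wrap (2*pi radians):
print(capstan_hold_force(100.0, 0.3, 2 * math.pi))  # ~ 15.2 N
```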

Context-Enabled Optimization of Energy-Autarkic Networks for Carrier-Grade Wireless Backhauling
(2015)

This work establishes the novel category of coordinated Wireless Backhaul Networks (WBNs) for energy-autarkic point-to-point radio backhauling. The networking concept is based on three major building blocks: cost-efficient radio transceiver hardware, a self-organizing network operations framework, and power supply from renewable energy sources. The aim of this novel backhauling approach is to combine carrier-grade network performance with reduced maintenance effort as well as independent and self-sufficient power supply. In order to facilitate the success prospects of this concept, the thesis comprises the following major contributions.
First, adapted from the theory of cyber-physical systems, the author devises a multi-domain evaluation methodology and a system-level simulation framework for energy-autarkic coordinated WBNs, including a novel balanced scorecard concept. Second, the thesis specifically addresses the topic of Topology Control (TC) in point-to-point radio networks and how it can be exploited for network management purposes. Given a set of network nodes equipped with multiple radio transceivers and known locations, TC continuously optimizes the setup and configuration of radio links between network nodes, thus supporting initial network deployment, network operation, as well as topology re-configuration. In particular, the author shows that TC in WBNs belongs to the class of NP-hard quadratic assignment problems and that it has significant impact in operational practice, e.g., on routing efficiency, network redundancy levels, service reliability, and energy consumption. Two novel algorithms focusing on maximizing edge connectivity of network graphs are developed.
Finally, this work carries out an analytical benchmarking and a numerical performance analysis of the introduced concepts and algorithms. The author analytically derives minimum performance levels of the developed TC algorithms. For the analyzed scenarios of remote Alpine communities and rural Tanzania, the evaluation shows that the algorithms improve energy efficiency and more evenly balance energy consumption across backhaul nodes, thus significantly increasing the number of available backhaul nodes compared to state-of-the-art TC algorithms.
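A greedy topology-control baseline under a per-node transceiver limit might look as follows (a hypothetical sketch: it only guarantees plain connectivity with short links, whereas the thesis's algorithms maximize edge connectivity of the network graph):

```python
# Kruskal-style TC sketch: candidate links are sorted by length (a proxy for
# link cost); a link is established only if both endpoints still have a free
# transceiver and the link connects two previously separate components.
import math
from itertools import combinations

def greedy_topology(nodes, max_links_per_node):
    parent = list(range(len(nodes)))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    dist = lambda a, b: math.dist(nodes[a], nodes[b])
    degree = [0] * len(nodes)
    links = []
    for a, b in sorted(combinations(range(len(nodes)), 2), key=lambda e: dist(*e)):
        if (degree[a] < max_links_per_node and degree[b] < max_links_per_node
                and find(a) != find(b)):
            parent[find(a)] = find(b)
            degree[a] += 1
            degree[b] += 1
            links.append((a, b))
    return links

nodes = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]
print(greedy_topology(nodes, max_links_per_node=3))  # spanning set of short links
```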

The heterogeneity of today's access possibilities to wireless networks imposes challenges for efficient mobility support and resource management across different Radio Access Technologies (RATs). The current situation is characterized by the coexistence of various wireless communication systems, such as GSM, HSPA, LTE, WiMAX, and WLAN. These RATs greatly differ with respect to coverage, spectrum, data rates, Quality of Service (QoS), and mobility support.
In real systems, mobility-related events, such as Handover (HO) procedures, directly affect resource efficiency and End-To-End (E2E) performance, in particular with respect to signaling efforts and users' QoS. In order to lay a basis for realistic multi-radio network evaluation, a novel evaluation methodology is introduced in this thesis.
A central hypothesis of this thesis is that the consideration and exploitation of additional information characterizing user, network, and environment context, is beneficial for enhancing Heterogeneous Access Management (HAM) and Self-Optimizing Networks (SONs). Further, Mobile Network Operator (MNO) revenues are maximized by tightly integrating bandwidth adaptation and admission control mechanisms as well as simultaneously accounting for user profiles and service characteristics. In addition, mobility robustness is optimized by enabling network nodes to tune HO parameters according to locally observed conditions.
For establishing all these facets of context awareness, various schemes and algorithms are developed and evaluated in this thesis. System-level simulation results demonstrate the potential of context information exploitation for enhancing resource utilization, mobility support, self-tuning network operations, and users' E2E performance.
In essence, the conducted research activities and presented results motivate and substantiate the consideration of context awareness as key enabler for cognitive and autonomous network management. Further, the performed investigations and aspects evaluated in the scope of this thesis are highly relevant for future 5G wireless systems and current discussions in the 5G infrastructure Public Private Partnership (PPP).

The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNNs), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adapting and predicting abilities, these methods are more robust and easier to train than traditional methods, i.e., word-list and rule-based models. They improve the output of recognition systems and make it more accessible to users for browsing and reading. These techniques are especially required for historical books, which might take years of effort and huge costs to transcribe manually.
The contributions of this thesis are several new methods that achieve high performance and accuracy. First, an error model for improving recognition results is designed. As a second contribution, a hyphenation model for difficult transcriptions for alignment purposes is suggested. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is using LSTM networks for normalizing historical orthography. A size-normalization alignment is implemented to equalize the lengths of strings before the training phase. Using LSTM networks as a language model to improve the recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below.
Context-dependent confusion rules are a new technique to build an error model for Optical Character Recognition (OCR) correction. The rules are extracted from the OCR confusions which appear in the recognition outputs and are translated into edit operations, e.g., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted in the form of rules with respect to the context of the incorrect string to build an error model using WFSTs. The context-dependent rules assist the language model in finding the best candidate corrections. They avoid the calculations that occur in searching the language model and also enable the language model to correct incorrect words by using context-dependent confusion rules. The context-dependent error model is applied to the University of Washington (UWIII) dataset and the Urdu Nastaleeq dataset. It improves the OCR results from an error rate of 1.14% to an error rate of 0.68%, performing better than the state-of-the-art single rule-based approach, which returns an error rate of 1.0%.
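The extraction of edit operations via the Levenshtein algorithm can be sketched as follows (only the alignment step is shown; the rule contexts and the WFST construction of the thesis are omitted):

```python
# Align an (OCR output, ground truth) pair with the Levenshtein DP table and
# backtrace the table to recover the edit operations; such operations are the
# raw material for confusion rules.

def edit_ops(ocr, truth):
    n, m = len(ocr), len(truth)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ocr[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    ops, i, j = [], n, m
    while i > 0 or j > 0:            # backtrace, preferring (mis)matches
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ocr[i - 1] != truth[j - 1]):
            if ocr[i - 1] != truth[j - 1]:
                ops.append(("sub", ocr[i - 1], truth[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ocr[i - 1], ""))
            i -= 1
        else:
            ops.append(("ins", "", truth[j - 1]))
            j -= 1
    return list(reversed(ops))

print(edit_ops("tbe", "the"))   # [('sub', 'b', 'h')]
print(edit_ops("cl", "d"))      # e.g. the OCR split a 'd' into 'cl'
```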
This thesis describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment poses many challenges: first, the transcriptions might have different modifications and layout variations compared to the original book. Second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult; in particular, line breaks and pages will not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows the alignment to account for the varieties of hyphenated words at the line breaks of the OCR documents. In this work, several approaches are implemented for the alignment, such as text-segment, page-wise, and book-wise approaches. The approaches are evaluated on a dataset of German calligraphic (Fraktur) script historical documents from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach returns an error rate of 2.33% without a hyphenation model and an error rate of 2.0% with a hyphenation model. Dehyphenation methods are presented to remove the hyphens from the transcription. They provide the transcription in a readable and reflowable format to be used for alignment purposes. We consider the task as a classification problem and classify the hyphens from the given patterns as hyphens for line breaks, combined words, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Tree classifier performs best, returning an accuracy of 98% on the UWIII dataset and 97% on the Fraktur script.
A new method for normalizing historical OCRed text using LSTM is implemented for different texts, ranging from Early New High German (14th-16th centuries) to modern forms in New High German, applied to the Luther Bible. It performs better than the rule-based word-list approaches. It provides a transcription for various purposes such as part-of-speech tagging and n-grams. Two new techniques are also presented for aligning the OCR results and normalizing string lengths by adding character epsilons or appending epsilons. They allow deletion and insertion at the appropriate position in the string. In normalizing historical wordforms to modern wordforms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%. On unseen data, the LSTM returns 88% and the combined rule-based method returns 76%. In normalizing modern wordforms to historical wordforms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unseen data.
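The appending-epsilons variant of the size normalization can be sketched as follows (the epsilon symbol and function naming are assumptions; the character-epsilons variant, which places epsilons next to the differing characters instead of at the end, is omitted):

```python
# Pad the shorter string with an epsilon symbol so that source and target
# wordforms have equal length and can be paired character by character for
# sequence training.
EPS = "\x00"   # stand-in for the epsilon symbol

def append_epsilons(src, tgt):
    width = max(len(src), len(tgt))
    return src.ljust(width, EPS), tgt.ljust(width, EPS)

# historical -> modern wordform, e.g. "vnnd" -> "und":
s, t = append_epsilons("vnnd", "und")
print(list(zip(s, t)))  # pairwise training targets for the LSTM
```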
In this thesis, a deep investigation has been carried out on constructing high-performance language
models for improving recognition systems. A new method to construct a language model
using LSTM is designed to correct OCR results. The method is applied to the UWIII dataset and
to Urdu script. The LSTM approach outperforms the state-of-the-art, especially for tokens unseen
during training. On the UWIII dataset, the LSTM reduces the OCR error rate from
1.14% to 0.48%. On the Urdu Nastaleeq dataset, the LSTM reduces the error rate
from 6.9% to 1.58%.
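The principle of ranking OCR hypotheses with a language model can be illustrated with a character bigram model as a toy stand-in for the LSTM (the corpus, smoothing constant, and candidate set are illustrative assumptions):

```python
# Toy character language model scoring OCR candidates.
# A bigram model with add-one smoothing stands in for the LSTM of the thesis.
from collections import Counter
import math

def train_bigram_lm(corpus):
    pairs, singles = Counter(), Counter()
    for word in corpus:
        w = f"^{word}$"                    # word boundary markers
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
            singles[a] += 1
    return pairs, singles

def score(word, pairs, singles, vocab_size=27):
    """Smoothed log-probability of a candidate string."""
    w = f"^{word}$"
    return sum(math.log((pairs[(a, b)] + 1) / (singles[a] + vocab_size))
               for a, b in zip(w, w[1:]))

pairs, singles = train_bigram_lm(["the", "then", "them", "there"])
candidates = ["tbe", "the"]               # OCR often confuses 'h' and 'b'
best = max(candidates, key=lambda c: score(c, pairs, singles))
print(best)  # the
```

The LSTM replaces the bigram statistics with a learned sequence model, which generalizes far better to tokens unseen during training.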
Finally, the integration of multiple recognition outputs can give higher performance than a
single recognition system. Therefore, a new method for combining the results of OCR systems
using WFSTs and LSTM is explored. It takes multiple OCR outputs and votes for the best output
to improve the OCR results, performing better than the ISRI tool and Pairwise of Multiple
Sequence alignment. The purpose is to provide correct transcriptions so that they can be used
for digitizing books, linguistic purposes, n-grams, and part-of-speech tagging. The method
consists of two alignment steps. First, two recognition systems are aligned using WFSTs. The
transducers are designed to be flexible and compatible with the different symbols for line and
page breaks, in order to avoid segmentation and misrecognition errors. The LSTM model is
then used to vote for the best candidate correction of the two systems and to improve the
incorrect tokens produced during the first alignment. The approaches are evaluated on OCR
outputs from the English UWIII and the historical German Fraktur datasets, obtained from
state-of-the-art OCR systems. The experiments show that the error rate of ISRI-Voting is
1.45%, the error rate of Pairwise of Multiple Sequence alignment is 1.32%, the error rate of
the Line-to-Page alignment is 1.26%, and the LSTM approach has the best performance with
an error rate of 0.40%.
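The first alignment step can be sketched with classic edit-distance dynamic programming, inserting epsilon symbols where one system's output has no counterpart (a simplified stand-in for the WFST composition; the gap symbol is an assumption):

```python
# Align two OCR output strings with edit-distance dynamic programming,
# padding unmatched positions with an epsilon symbol.
def align(a, b, gap="ε"):
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1): d[i][0] = i
    for j in range(m + 1): d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i-1][j-1] + (a[i-1] != b[j-1]),  # match/substitute
                          d[i-1][j] + 1,                     # delete from a
                          d[i][j-1] + 1)                     # insert from b
    # Backtrace to recover the two padded, equal-length strings.
    out_a, out_b, i, j = [], [], n, m
    while i or j:
        if i and j and d[i][j] == d[i-1][j-1] + (a[i-1] != b[j-1]):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i and d[i][j] == d[i-1][j] + 1:
            out_a.append(a[i-1]); out_b.append(gap); i -= 1
        else:
            out_a.append(gap); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

print(align("word", "w0rd"))  # ('word', 'w0rd')
```

On the aligned columns, a voter (the LSTM in the thesis) then picks the better candidate character or token at each disagreement.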
The purpose of this thesis is to contribute methods that provide correct transcriptions corresponding to the original book. This is considered a first step towards an accurate and
more effective use of the documents in digital libraries.

Today's ubiquity of visual content, driven by the availability of broadband Internet, low-priced storage, and the omnipresence of camera-equipped mobile devices, conveys much of our thinking and feeling as individuals and as a society. As a result, video repositories are growing at enormous rates, with content now being embedded and shared through social media. To make use of this new form of social multimedia, concept detection, the automatic mapping between semantic concepts and video content, has to be extended such that concept vocabularies are synchronized with current real-world events, systems can perform scalable concept learning with thousands of concepts, and high-level information such as sentiment can be extracted from visual content. To meet these demands, the following three contributions are made in this thesis: (i) concept detection is linked to trending topics, (ii) visual learning from web videos is presented, including the proper treatment of tags as concept labels, and (iii) the extension of concept detection with adjective noun pairs for sentiment analysis is proposed.
In order for concept detection to satisfy users' current information needs, the notion of fixed concept vocabularies has to be reconsidered. This thesis presents a novel concept learning approach built upon dynamic vocabularies, which are automatically augmented with trending topics mined from social media. Once discovered, trending topics are evaluated by forecasting their future progression to predict high impact topics, which are then either mapped to an available static concept vocabulary or trained as individual concept detectors on demand. It is demonstrated in experiments on YouTube video clips that by a visual learning of trending topics, improvements of over 100% in concept detection accuracy can be achieved over static vocabularies (n=78,000).
To remove manual efforts related to training data retrieval from YouTube and noise caused by tags being coarse, subjective and context-dependent, this thesis suggests an automatic concept-to-query mapping for the retrieval of relevant training video material, and active relevance filtering to generate reliable annotations from web video tags. Here, the relevance of web tags is modeled as a latent variable, which is combined with an active learning label refinement. In experiments on YouTube, active relevance filtering is found to outperform both automatic filtering and active learning approaches, leading to a reduction of required label inspections by 75% as compared to an expert annotated training dataset (n=100,000).
Finally, it is demonstrated that concept detection can serve as a key component to infer the sentiment reflected in visual content. To extend concept detection for sentiment analysis, adjective noun pairs (ANP) as novel entities for concept learning are proposed in this thesis. First, a large-scale visual sentiment ontology consisting of 3,000 ANPs is automatically constructed by mining the web. From this ontology a mid-level representation of visual content – SentiBank – is trained to encode the visual presence of 1,200 ANPs. This novel approach of visual learning is validated in three independent experiments on sentiment prediction (n=2,000), emotion detection (n=807) and pornographic filtering (n=40,000). SentiBank is shown to outperform known low-level feature representations (sentiment prediction, pornography detection) or perform comparably to state-of-the-art methods (emotion detection).
Altogether, these contributions extend state-of-the-art concept detection approaches such that concept learning can be done autonomously from web videos on a large-scale, and can cope with novel semantic structures such as trending topics or adjective noun pairs, adding a new dimension to the understanding of video content.

In this thesis, an approach is presented that turns the currently unstructured process of automotive hazard analysis and risk assessment (HRA), which relies on creativity techniques, into a structured, model-based approach that makes the HRA results less dependent on experts' experience, more consistent, and of higher quality. The challenge can be subdivided into two steps. The first step is to improve the HRA as it is performed in current practice. The second step is to go beyond the current practice and consider not only single service failures as relevant hazards, but also multiple service failures. For the first step, the most important aspect is to formalize the operational situation of the system and to determine its likelihood. Current approaches use natural-language textual descriptions, which makes it hard to ensure consistency and to increase efficiency through reuse. Furthermore, due to ambiguity in natural language, it is difficult to ensure consistent likelihood estimates for situations.
The main aspect of the second step is that considering multiple service failures as hazards implies that one needs to analyze an exponential number of hazards. Due to the fact that hazard assessments are currently done purely manually, considering multiple service failures is not possible. The only way to approach this challenge is to formalize the HRA and make extensive use of automation support.
In SAHARA we handle these challenges by first introducing a model-based representation of an HRA with GOBI. Based on this, we formalized the representation of operational situations and their likelihood assessment in OASIS and HEAT, respectively. We show that more consistent situation assessments are possible and that situations (including their likelihood) can be efficiently reused. The second aspect, coping with multiple service failures, is addressed in ARID. We show that using our tool-supported HRA approach, 100% coverage of all possible hazards (including multiple service failures) can be achieved by relying on very limited manual effort. We furthermore show that not considering multiple service failures results in insufficient safety goals.

In embedded systems, there is a trend of integrating several different functionalities on a common platform. This has been enabled by increasing processing power and the rise of integrated systems-on-chip.
The composition of safety-critical and non-safety-critical applications results in mixed-criticality systems. Certification Authorities (CAs) demand the certification of safety-critical applications with strong confidence in the execution time bounds. As a consequence, CAs use conservative assumptions in the worst-case execution time (WCET) analysis which result in more pessimistic WCETs than the ones used by designers. The existence of certified safety-critical and non-safety-critical applications can be represented by dual-criticality systems, i.e., systems with two criticality levels.
In this thesis, we focus on the scheduling of mixed-criticality systems which are subject to certification. Scheduling policies cognizant of the mixed-criticality nature of the systems and the certification requirements are needed for efficient and effective scheduling. Furthermore, we aim at reducing the certification costs to allow faster modification and upgrading, and less error-prone certification. Besides certification aspects, requirements of different operational modes result in challenging problems for the scheduling process. Despite the mentioned problems, schedulers require a low runtime overhead for an efficient execution at runtime.
The presented solutions are centered around time-triggered systems, which feature a low runtime overhead. We present a transformation to include event-triggered activities, represented by sporadic tasks, already in the offline scheduling process. This transformation can also be applied to periodic tasks to shorten the schedule tables, which reduces certification costs. Building on these results, our schedule-table construction method creates two schedule tables to fulfill the requirements of dual-criticality systems using mode changes at runtime. Finally, we present a scheduler based on the slot-shifting algorithm for mixed-criticality systems. In a first version, the method schedules dual-criticality jobs without the need for mode changes. An already certified schedule table can be used, and at runtime the scheduler reacts to the actual behavior of the jobs and thus makes effective use of the available resources. We then extend this method to schedule mixed-criticality job sets with different operational modes. As a result, we can schedule jobs with varying parameters in different modes.

Information Visualization (InfoVis) and Human-Computer Interaction (HCI) have strong ties with each other. Visualization supports the human cognitive system by providing interactive and meaningful images of the underlying data. On the other hand, the HCI domain is concerned with the usability of the designed visualization from the human perspective. Thus, designing a visualization system requires considering many factors in order to achieve the desired functionality and system usability. Achieving these goals helps users understand the inner behavior of complex data sets in less time.
Graphs are widely used data structures for representing the relations between data elements in complex applications. Due to the diversity of this data type, graphs have been applied in numerous information visualization applications (e.g., state transition diagrams, social networks, etc.). Therefore, many graph layout algorithms have been proposed in the literature to help in visualizing this rich data type. Some of these algorithms are used to visualize large graphs, while others handle medium-sized graphs. Regardless of the graph size, the resulting layout should be understandable from the users’ perspective and at the same time fulfill a list of aesthetic criteria to increase the readability of the representation. Respecting these two principles leads to a graph visualization that helps users understand and explore the complex behavior of critical systems.
In this thesis, we utilize the graph visualization techniques in modeling the structural and behavioral aspects of embedded systems. Furthermore, we focus on evaluating the resulting representations from the users’ perspectives.
The core contribution of this thesis is a framework called ESSAVis (Embedded Systems Safety Aspect Visualizer). This framework not only visualizes some of the safety aspects (e.g. CFT models) of embedded systems, but also helps engineers and experts analyze safety-critical situations of the system. For this, the framework provides a 2D-plus-3D environment in which the 2D part shows a graph representation of the abstract data about the safety aspects of the underlying embedded system, while the 3D part shows the 3D model of the system itself. Both views are integrated smoothly in the 3D world. In order to check the effectiveness and feasibility of the framework and its sub-components, we conducted several studies with real end users as well as with general users. The results of the main study, which targeted the overall ESSAVis framework, show a high acceptance ratio and higher accuracy with better performance when the visual support of the framework is used.
The ESSAVis framework has been designed to be compatible with different 3D technologies. This enabled us to use the 3D stereoscopic depth of such technologies to encode node attributes in node-link diagrams. In this regard, we conducted an evaluation study to measure the usability of the stereoscopic depth cue approach, called the stereoscopic highlighting technique, against other selected visual cues (i.e., color, shape, and size). Based on the results, the thesis proposes the Reflection Layer extension to the stereoscopic highlighting technique, which was also evaluated from the users’ perspective. Additionally, we present a new technique, called ExpanD (Expand in Depth), that utilizes the depth cue to show the structural relations between different levels of detail in node-link diagrams. The results of this part open a promising research direction in which visualization designers can benefit from the richness of 3D technologies for visualizing abstract data in the information visualization domain.
Finally, this thesis proposes the application of the ESSAVis framework as a visual tool in the educational training of engineers for understanding complex concepts. In this regard, we conducted an evaluation study with computer engineering students in which we used the visual representations produced by ESSAVis to teach the principles of fault detection and failure scenarios in embedded systems. Our work opens directions for investigating many challenges in the design of visualization for educational purposes.

Spin and orbital magnetic moments of isolated single molecule magnets and transition metal clusters
(2015)

In the present work, magnetic moments of isolated Single Molecule Magnets (SMMs) and transition
metal clusters were investigated. Gas phase X‐ray Magnetic Circular Dichroism (XMCD) in
combination with sum rule analysis served to separate the total magnetic moments of the
investigated species into their spin and orbital contributions. Two different mass spectrometry based
setups were used for the presented investigations on transition metal clusters (GAMBIT‐setup) and
on single molecule magnets (NanoClusterTrap). Both experiments were coupled to the UE52‐PGM
beamline at the BESSY II synchrotron facility (Helmholtz Zentrum Berlin) which provided the
necessary polarized X‐ray photons. The investigation of the given compounds as isolated molecules
in the gas phase enabled a determination of their intrinsic magnetic properties void of any influences
of e.g. a surrounding bulk or supporting surface

Motivated by the results of infinite dimensional Gaussian analysis and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type, which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called a generalized Appell system, is applicable to the Mittag-Leffler measures instead of Wick ordered polynomials. With the help of an Appell system, we introduce a test function and a distribution space. Furthermore, we give characterizations of the distribution space and characterize the weakly integrable functions and the convergent sequences within the distribution space. As an application, we construct Donsker's delta in a non-Gaussian setting.
In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values. In this way, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature.

We consider storage loading problems where items with uncertain weights have
to be loaded into a storage area, taking into account stacking and
payload constraints. Following the robust optimization paradigm, we propose
strict and adjustable optimization models for finite and interval-based
uncertainties. To solve these problems, exact decomposition and heuristic
solution algorithms are developed.
For strict robustness, we also present a compact formulation based
on a characterization of worst-case scenarios.
Computational results show that computation times and algorithm
gaps are reasonable for practical applications.
Furthermore, we find that the robustness concepts show different
potential depending on the type of data being used.
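For strict robustness under interval uncertainty, a payload check only needs each item's upper weight bound, which characterizes the worst case. A minimal sketch (the data and the per-item payload constraint form are illustrative assumptions):

```python
# Strictly robust feasibility check for one stack of items with
# interval-uncertain weights; constraint form is illustrative.
def robust_stack_feasible(stack, payload_capacity):
    """stack: list of (weight_lo, weight_hi), bottom to top.
    payload_capacity[i]: max total weight item i may carry on top.
    Strictly robust-feasible iff the worst-case (upper-bound) weight
    resting on every item stays within its payload capacity."""
    for i in range(len(stack)):
        worst_above = sum(hi for _, hi in stack[i + 1:])
        if worst_above > payload_capacity[i]:
            return False
    return True

# three items, bottom to top, with uncertain weights
stack = [(8, 10), (4, 6), (2, 3)]
caps = [10, 4, 0]   # max weight each item can carry on top
print(robust_stack_feasible(stack, caps))  # True
```

This is exactly the worst-case characterization that makes a compact strict formulation possible: the adversary always picks the upper weight bounds.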

In 2003, a dictionary data structure called the jumplist was introduced by Brönnimann, Cazals and Durand. It is based on a circularly closed (singly) linked list, but additional jump-pointers are added to provide shortcuts to parts further ahead in the list.
The original jump-and-walk data structure by Brönnimann, Cazals and Durand introduces only one jump-pointer per node. In this thesis, I add one more jump-pointer to each node and present algorithms for generation, insertion and search for the resulting data structure.
Furthermore, I evaluate the effects on the expected search costs and on the complexity of generation and insertion.
It turns out that the two-jump-pointer variant of the jumplist has a slightly better prefactor (1.2 vs. 2) in the leading term of the expected internal path length than the original version and despite the more complex structure of the two-jump-pointer variant compared to the regular jumplist, the complexity of generation and insertion remains linearithmic.
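The jump-and-walk idea can be sketched as follows, with deterministic jump targets standing in for the randomized jump-pointers of a real jumplist (a simplified one-pointer, non-circular sketch, not the thesis's two-pointer structure):

```python
# Simplified jump-and-walk search in a sorted singly linked list.
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None
        self.jump = None   # shortcut further ahead in the list

def build(keys, stride=3):
    """Build a sorted linked list; each node's jump-pointer points
    `stride` nodes ahead (a deterministic stand-in for the randomized
    jump targets of a real jumplist)."""
    nodes = [Node(k) for k in sorted(keys)]
    for i, n in enumerate(nodes):
        n.next = nodes[i + 1] if i + 1 < len(nodes) else None
        n.jump = nodes[i + stride] if i + stride < len(nodes) else None
    return nodes[0]

def search(head, key):
    """Jump-and-walk: take the jump whenever it does not overshoot,
    otherwise walk one step."""
    n = head
    while n is not None and n.key < key:
        if n.jump is not None and n.jump.key <= key:
            n = n.jump
        else:
            n = n.next
    return n is not None and n.key == key

head = build([5, 1, 9, 3, 7, 11, 13])
print(search(head, 7), search(head, 8))  # True False
```

The two-pointer variant of the thesis adds a second shortcut per node, giving the search more choices and the better prefactor in the expected path length.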

Specification of asynchronous circuit behaviour becomes more complex as the
complexity of today’s System-On-a-Chip (SOC) design increases. This also causes
the Signal Transition Graphs (STGs) – interpreted Petri nets for the specification
of asynchronous circuit behaviour – to become bigger and more complex, which
makes it more difficult, sometimes even impossible, to synthesize an asynchronous
circuit from an STG with a tool like petrify [CKK+96] or CASCADE [BEW00].
It has, therefore, been suggested to decompose the STG as a first step; this
leads to a modular implementation [KWVB03] [KVWB05], which can reduce synthesis
effort by possibly avoiding state explosion or by allowing the use of library
elements. A decomposition approach for STGs was presented in [VW02] [KKT93]
[Chu87a]. The decomposition algorithm by Vogler and Wollowski [VW02] is based
on that of Chu [Chu87a] but is much more generally applicable than the ones in
[KKT93] [Chu87a], and its correctness has been proved formally in [VW02].
This dissertation begins with Petri net background, described in chapter 2.
It starts with a class of Petri nets called place/transition (P/T) nets. Then
STGs, a subclass of P/T nets, are considered. Background on net decomposition
is presented in chapter 3. It begins with the structural decomposition of P/T
nets for analysis purposes – liveness and boundedness of the net. Then STG
decomposition for synthesis from [VW02] is described.
The decomposition method from [VW02] can still be improved to deal with
STGs from real applications and to give better decomposition results. Some
improvements of [VW02] that improve the decomposition results and increase the
algorithm's efficiency are discussed in chapter 4. These improvement ideas were
suggested in [KVWB04] and some of them have been proved formally in [VK04].
The decomposition method from [VW02] is based on net reduction to find
an output block component. A large amount of work has to be done to reduce
an initial specification until the final component is found. This reduction is not
always possible, which causes input initially classified as irrelevant to become
relevant input for the component. But under certain conditions (e.g. if structural
auto-conflicts turn out to be non-dynamic), some of it can be reclassified as
irrelevant. If this is not done, the specifications become unnecessarily large, which
in turn leads to unnecessarily large implemented circuits. Instead of reduction, a
new approach, presented in chapter 5, decomposes the original net into structural
components first. An initial output block component is found by composing the
structural components. Then, a final output block component is obtained by net
reduction.
As we cope with the structure of a net most of the time, it would be useful
to have a structural abstraction of the net. A structural abstraction algorithm
[Kan03] is presented in chapter 6. It can improve the performance of finding an
output block component in most cases [War05] [Taw04]. Also, the structure
graph is in most cases smaller than the net itself. This increases the efficiency of the
decomposition algorithm because it allows the transitions contained in a node of
the structure graph to be contracted at the same time if the structure graph is
used as the internal representation of the net.
Chapter 7 discusses the application of STG decomposition in asynchronous
circuit design. Application to speed-independent circuits is discussed first. After
that, 3D circuits synthesized from extended burst mode (XBM) specifications
are discussed. An algorithm for translating STG specifications to XBM specifications
was first suggested in [BEW99]. This algorithm first derives the state
machine from the STG specification, then translates the state machine to an XBM
specification. An XBM specification, though it is a state machine, allows some
concurrency. This concurrency can be translated directly, without deriving
all of the possible states. An algorithm which directly translates STG specifications
to XBM specifications is presented in chapter 7.3.1. Finally, DESI, a tool to decompose
STGs, and its decomposition results are presented.

Self-adaptation allows software systems to autonomously adjust their behavior during run-time by handling all possible
operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation
requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation
response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems.
However, these methods fail to address (1) run-time uncertainty that hinders the adaptation process and (2) the performance
impact resulting from the complexity and the large size of the adaptation space. This paper presents CRATER, a framework
that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on Case-based
Reasoning, handles both of these challenges together. The paper is supported by an experiment illustrating the benefits of
this framework. The experimental results show the potential of CRATER in handling run-time uncertainty and in adaptation
remembrance, which enhances performance for large adaptation spaces.

The present research combines different paradigms in the area of visual perception of letters and words. The experiments aimed to understand the deficit underlying the faulty visual processing of letters and words. The present work summarizes findings from two different populations: (1) dyslexics (reading-disabled children) and (2) illiterates (adults who cannot read). Comparisons were made between the literate and illiterate groups, and between dyslexics and a control group (normally reading children). Differences in event-related potentials (ERPs) between dyslexic and control children were examined using a mental rotation task for letters. In the ERPs, the mental rotation of letters appeared as a delayed positive component, which becomes less positive as the task becomes more difficult (rotation-related negativity – RRN). This component was absent for dyslexics and present for controls. Dyslexics also showed some late effects in comparison to control children, which can be interpreted as problems at the decision stage, where they are confused as to whether a letter is normal or mirrored. Dyslexics also have problems in responding to letters with visual or phonological similarities (e.g. b vs. d, p vs. q). Visually similar letters were used to compare dyslexics and controls on a symmetry generalization task in two contrast conditions (low and high). Dyslexics showed a similar pattern of responses but were overall slower in responding to the task than controls. The results were interpreted within the framework of the Functional Coordination Deficit (Lachmann, 2002). Dyslexics also showed delayed responses in a word recognition task during motion.
A red background decreases the activity of the magnocellular pathway (M-pathway), making it more difficult to identify letters; this effect was worse for dyslexics because their M-pathway is weaker, which further increased the difficulty of the lexical task in motion. Accordingly, responses with a red background were worse than with a green background, and reaction times with red were longer than those with green. Further, illiterates showed an analytic approach in responding to letters as well as to shapes. This analytic approach does not result from an individual's ability to read, but is a primary basis of visual organization or perception.

Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints.
Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems.
With the advent of multicore platforms, new challenges for the efficient exploitation of real-time systems have arisen:
First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems.
The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees.
Additionally, many systems require their scheduling algorithm to flexibly react to unforeseen events.
Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis.
At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system.
As a result, there is a need for an adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task.
The main goal of resource management is to guarantee a minimum resource availability to real-time applications.
A further goal is to fulfill global optimization objectives, e.g., maximization of the global system performance, or the user perceived quality of service.
In this thesis, we derive methods based on the slot shifting algorithm.
Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems.
For this reason, we aim at designing slot shifting based algorithms targeted for multicore systems to tackle the aforementioned challenges.
The main contribution of this thesis is to present two global slot shifting algorithms targeted for multicore systems.
Additionally, we extend slot shifting algorithms to improve their runtime behavior, or to handle non-preemptive firm aperiodic tasks.
In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed.
Finally, the thesis presents an implementation of a slot-shifting-based logic into a resource management framework for multicore systems.
Thus, the thesis closes the circle and successfully bridges the gap between real-time scheduling theory and real-world implementations.
We prove applicability of the slot shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.
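The core mechanism of slot shifting can be illustrated by its acceptance test: the offline phase partitions the timeline into intervals with spare capacities, and an aperiodic arrival is accepted only if enough spare capacity is available before its deadline. A minimal sketch (the interval data is illustrative, and partial intervals at the deadline are ignored for simplicity):

```python
# Simplified slot-shifting acceptance test for a firm aperiodic task.
# Intervals and spare capacities would be computed offline from the
# time-triggered schedule; the values below are illustrative.
def accept_aperiodic(intervals, now, wcet, deadline):
    """intervals: list of (end_slot, spare_capacity), sorted by end.
    Accept iff the spare capacities of all intervals that end after
    `now` and no later than `deadline` cover the task's WCET."""
    available = 0
    for end, sc in intervals:
        if end <= now:
            continue
        if end > deadline:
            break
        available += max(sc, 0)
        if available >= wcet:
            return True
    return available >= wcet

# intervals ending at slots 10, 20, 30 with spare capacities 2, 3, 1
intervals = [(10, 2), (20, 3), (30, 1)]
print(accept_aperiodic(intervals, now=0, wcet=4, deadline=20))  # True
print(accept_aperiodic(intervals, now=0, wcet=6, deadline=20))  # False
```

On acceptance, the runtime part of the algorithm shifts guaranteed slots and decrements the affected spare capacities, which is how the scheduler reacts to the actual behavior of the jobs.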

We present a numerical scheme to simulate a moving rigid body with arbitrary shape suspended in rarefied gas micro flows, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in the surroundings. This means that a two-way coupling has been modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains. We present a 1-dimensional actuator problem, a 2-dimensional cavity-driven flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. We compare the numerical results obtained from the numerical simulations with the existing theories in each test example.
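The two-way coupling hinges on computing the force on the body from the momentum exchanged with reflected gas particles and then advancing the body with Newton's equations. A minimal translational sketch (particle data and time step are illustrative; the rotational Euler part and the DSMC collision step are omitted):

```python
# One coupling step: gas particles reflected at the body transfer
# momentum to it; the body's velocity is then advanced explicitly.
def body_step(v_body, mass_body, collisions, dt):
    """collisions: list of (m_particle, dv) velocity changes of gas
    particles reflected at the body during one time step.
    By momentum conservation the body receives the opposite impulse."""
    impulse = -sum(m * dv for m, dv in collisions)  # reaction on the body
    force = impulse / dt
    return v_body + force / mass_body * dt          # explicit Euler update

# two particles bounce back, each gaining 2 units of velocity
v_new = body_step(v_body=0.0, mass_body=10.0,
                  collisions=[(0.5, 2.0), (0.5, 2.0)], dt=1.0)
print(v_new)  # -0.2
```

The updated body velocity then changes the boundary condition seen by the gas in the next DSMC step, which closes the two-way coupling loop.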

Annual Report 2014
(2015)

Annual Report, Jahrbuch AG Magnetismus

Accurate path tracking control of tractors became a key technology for automation in agriculture. Increasingly sophisticated solutions, however, revealed that accurate path tracking control of implements is at least equally important. Therefore, this work focuses on accurate path tracking control of both tractors and implements. The latter, as a prerequisite for improved control, are equipped with steering actuators like steerable wheels or a steerable drawbar, i.e. the implements are actively steered. This work contributes both new plant models and new control approaches for these kinds of tractor-implement combinations. The plant models comprise dynamic vehicle models, accounting for the forces and moments causing the vehicle motion, as well as simplified kinematic descriptions. All models have been derived in a systematic and automated manner to allow for variants of implements and actuator combinations. Path tracking controller design begins with a comprehensive overview and discussion of existing approaches in related domains. Two new approaches are proposed, combining the systematic setup and tuning of a Linear-Quadratic Regulator with the simplicity of a static output feedback approximation. The first approach ensures accurate path tracking on slopes and curves by including integral control for a selection of controlled variables. The second approach instead ensures this by adding disturbance feedforward control based on side-slip estimation using a non-linear kinematic plant model and an Extended Kalman Filter. For both approaches, a feedforward control approach for curved path tracking has been newly derived. In addition, a straightforward extension of the control accounting for the implement orientation has been developed. All control approaches have been validated in simulations and in experiments carried out with a mid-size tractor and a custom-built demonstrator implement.