### Refine

#### Year of publication

- 2014 (77)

#### Document Type

- Doctoral Thesis (47)
- Preprint (27)
- Article (1)
- Periodical Part (1)
- Report (1)

#### Language

- English (77)

#### Keywords

#### Faculty / Organisational entity

- Fachbereich Mathematik (40)
- Fachbereich Informatik (15)
- Fachbereich Maschinenbau und Verfahrenstechnik (6)
- Fachbereich Sozialwissenschaften (5)
- Fachbereich Chemie (4)
- Fachbereich Elektrotechnik und Informationstechnik (3)
- Fachbereich ARUBI (1)
- Fachbereich Biologie (1)
- Fachbereich Physik (1)
- Fraunhofer (ITWM) (1)

Mechanical ventilation of patients with severe lung injury is an important clinical treatment to ensure proper lung oxygenation and to mitigate the extent of collapsed lung regions. While current imaging technologies such as Computed Tomography (CT) and chest X-ray allow for a thorough inspection of the thorax, they are limited to static pictures and exhibit several disadvantages, including exposure to ionizing radiation and high cost. Electrical Impedance Tomography (EIT) is a novel method to determine functional processes inside the thorax such as lung ventilation and cardiac activity. EIT reconstructs the internal electrical conductivity distribution within the thorax from voltage measurements on the body surface. Conductivity changes correlate with important clinical parameters such as lung volume and perfusion. Current EIT systems and algorithms use simplified or generalized thorax models to solve the reconstruction problem, which reduce image quality and anatomical significance. In this thesis, the development of a clinically relevant workflow to compute sophisticated three-dimensional thorax models from patient-specific CT data is described. The method allows medical experts to generate a multi-material segmentation in an interactive and fast way, while a volumetric mesh is computed automatically from the segmentation. The significantly improved image quality and anatomical precision of EIT images reconstructed with these 3D models is reported, and the impact on clinical applicability is discussed. In addition, three projects concerning quantitative CT (qCT) measurements and multi-modal 3D visualization are presented, which demonstrate the importance and productivity of interdisciplinary research groups including computer scientists and medical experts. The results presented in this thesis contribute significantly to clinical research efforts to pave the way towards improved patient-specific treatments of lung injury using EIT and qCT.

Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form

    D(Ku) + α R(u) → min_u ,
where the functional D is a data fidelity term, depending on some input data f and measuring the deviation of Ku from those data, and R is a regularization functional. Moreover, K is an (often linear) forward operator modeling the dependence of the data on an underlying image, and α is a positive regularization parameter. While D is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth convex functionals like the total variation and generalizations thereof, cf. [28, 31, 40], or l_1-norms of coefficients arising from scalar products with some frame system, cf. [73] and references therein.
The efficient solution of such variational problems in imaging demands appropriate algorithms.
Taking into account the specific structure as a sum of two very different terms
to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field
has revived the interest in techniques like operator splittings or augmented Lagrangians. In
this chapter we shall provide an overview of methods currently developed and recent results
as well as some computational studies providing a comparison of different methods and also
illustrating their success in applications.
We start with a very general viewpoint in the first sections, discussing basic notations, properties
of proximal maps, firmly non-expansive and averaging operators, which form the basis
of further convergence arguments. Then we proceed to a discussion of several state-of-the-art algorithms and their (theoretical) convergence properties. After a section discussing issues
related to the use of analogous iterative schemes for ill-posed problems, we present some practical convergence studies in numerical examples related to PET and spectral CT reconstruction.
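As a minimal illustration of the splitting idea for the model min_u D(Ku) + α R(u), the proximal gradient (ISTA) iteration alternates a gradient step on the smooth data term with the proximal map of the regularizer; for R = ||·||_1 the proximal map is plain soft-thresholding. This is a generic sketch, not one of the chapter's specific algorithms:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, f, alpha, step, n_iter=200):
    """Proximal gradient (ISTA) for min_u 0.5*||Ku - f||^2 + alpha*||u||_1."""
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)                            # gradient of the smooth data term
        u = soft_threshold(u - step * grad, step * alpha)   # proximal step on the regularizer
    return u
```

With K the identity this reduces to pure soft-thresholding of the data, the simplest instance of the sum-of-two-terms structure discussed above.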

The study addresses the effect of multiple jet passes and other parameters, namely feedrate, water pressure, and standoff distance, in waterjet peening of metallic surfaces. An analysis of surface integrity was used to evaluate the influence of the different parameters on the process. An increase in the number of jet passes and in the pressure leads to higher roughness, more erosion, and higher hardness. In contrast, the feedrate has the reverse effect on these surface characteristics. There exists a specific value of the standoff distance that results in maximum surface roughness, erosion, and hardness. Analysis of the surface microstructure gave good insight into the mechanism of the material removal process, involving initial and evolved damage. The waterjet peening process was also optimized based on a design-of-experiments approach. The developed empirical models showed reasonable correlations between the measured and predicted responses. A proper selection of waterjet peening parameters can thus be formulated for use in practical work.

This PhD thesis deals with the calculation and application of a new class of invariants that can be used to recognize patterns in tensor fields (i.e., scalar fields, vector fields, and matrix fields) and, by composing scalar fields with delta functions, also in point clouds.
The first chapter gives an overview of already existing invariants.
The second chapter gives the general definition of the new invariants:
starting from a tensor field, a set of moment tensors is created by convolution, in a tensor-product manner, with different orders of tensor powers of the position vector. From these, rotationally invariant values are calculated via contraction of tensor products. An algorithm to obtain a complete and independent set of invariants from a given set of moment tensors is described. Furthermore, methods are presented to make these sets of invariants invariant under translation, rotation, scaling, and affine transformations.
In the third chapter, a method to optimize the calculation of these sets of invariants is described: every invariant can be modeled as an undirected graph comprising multiple sub-graphs that represent partially contracted tensor products of the moment tensors.
The computation of the sets of invariants is optimized by a clever choice of the decomposition into sub-graphs; all paths form a hyper-graph of sub-graphs in which each node describes a computation step. Finally, C++ source code is generated, which is optimized by exploiting the symmetries of the different tensors and tensor products, and the computational effort is compared with that of other methods for calculating invariants.
The fourth chapter describes the application of the invariants to object recognition in point clouds from 3D scans. To this end, the invariants of subsets of the point clouds are stored for every known object. Afterwards, invariants are calculated from an unknown point cloud and looked up in this database to assign the point cloud to one of the known objects. Benchmarks on three 3D object databases measure runtime and recognition rate.
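The core construction can be illustrated in its simplest special case: the second-order moment tensor of a centred point cloud, whose trace contractions are invariant under rotations. This is an illustrative sketch only, not the thesis's general higher-order construction:

```python
import numpy as np

def moment_tensor_2(points):
    """Second-order moment tensor of a point cloud, centred for
    translation invariance (a simple special case for illustration)."""
    p = points - points.mean(axis=0)   # subtract centroid: translation invariance
    return p.T @ p / len(p)            # average of outer products x x^T

def rotation_invariants(M):
    # Full contractions of tensor powers: tr(M), tr(M^2), tr(M^3)
    # are unchanged under M -> R M R^T for any rotation R.
    return np.trace(M), np.trace(M @ M), np.trace(M @ M @ M)
```

Rotating the input point cloud leaves these three values unchanged, which is exactly the property exploited when matching an unknown point cloud against the database.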

Three dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high precision structured light scanners. Lower precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two dimensional (2d) camera images.
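For a rectified two-camera setup, the triangulation principle reduces to the textbook relation Z = f·B/d between depth, focal length, baseline, and disparity; a minimal sketch (the parameter values below are made up for illustration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: depth Z = f * B / d.
    focal_px: focal length in pixels, baseline_m: camera baseline in metres,
    disparity_px: horizontal pixel offset of a point between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

The relation also explains why allowing large disparities improves depth resolution: for nearby points, a one-pixel disparity error changes Z far less when d is large.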
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU based stereo matching method is presented to achieve a high quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi level approach on the GPU allows a fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for the scanning of complex shaped objects. The operator of the scanner can scan complex regions slower, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows the reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation using a region growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures the boundary is closed and avoids self-intersections.
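The implicit quadric fit mentioned above can be sketched as an algebraic least-squares problem: each point contributes one row of a design matrix, and the smallest right singular vector gives the quadric coefficients. This is an illustrative sketch, not the dissertation's exact implementation:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of an implicit quadric
    a1*x^2 + a2*y^2 + a3*z^2 + a4*xy + a5*xz + a6*yz + a7*x + a8*y + a9*z + a10 = 0.
    Returns the unit-norm coefficient vector minimising the algebraic error."""
    x, y, z = points.T
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                         x, y, z, np.ones_like(x)])
    # The smallest right singular vector minimises ||D q|| subject to ||q|| = 1.
    _, _, vt = np.linalg.svd(D)
    return vt[-1]
```

For a segment lying on a sphere, for instance, the recovered coefficients are proportional to those of x² + y² + z² − r², whose canonical form directly yields the primitive's parameters.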

In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model.
However, very often these values cannot be calculated explicitly and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees.
The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases, the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required.
We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows one to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties.
In binomial models we usually deal with triangular arrays of lattice random vectors. In this case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions that are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third-order expansions for general binomial and trinomial trees in the 1D setting, and construct advanced models for digital, vanilla, and barrier options. Second-order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for the multidimensional setting.
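As a concrete instance of the lattice approach, the classical Cox-Ross-Rubinstein binomial tree prices a European call by backward induction of risk-neutral expectations. This is a minimal textbook sketch, not one of the advanced models constructed in the thesis:

```python
import math

def crr_binomial_call(S0, K, r, sigma, T, n):
    """European call on a Cox-Ross-Rubinstein binomial tree with n steps.
    The discrete model converges in distribution to Black-Scholes as n grows."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction of discounted expectations.
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]
```

For S0 = K = 100, r = 0.05, σ = 0.2, T = 1 and a fine tree, the price approaches the Black-Scholes value of roughly 10.45, though with the slow, oscillatory convergence in n discussed above.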

Embedded systems, ranging from very simple systems up to complex controllers, may
nowadays have quite challenging real-time requirements. Many embedded systems are reactive
systems that have to respond to environmental events and have to guarantee certain real-time
constraints. Their execution is usually divided into reaction steps, where in each step, the
system reads inputs from the environment and reacts to these by computing corresponding
outputs.
The synchronous Model of Computation (MoC) has proven to be well-suited for the development of reactive real-time embedded systems, as its paradigm directly reflects the reactive nature of the systems it describes. Another advantage is the availability of formal verification by model checking, a consequence of the deterministic execution based on a formal semantics. Nevertheless, the increasing complexity of embedded systems requires compensating for the natural disadvantages of model checking, which suffers from the well-known state-space explosion problem. It is therefore natural to try to integrate other verification methods with the already established techniques. Hence, improvements are required to counter these problems, e.g., appropriate decomposition techniques, which naturally counter the disadvantages of the model checking approach. But defining decomposition techniques for synchronous languages is a difficult task, as a result of the inherent parallelism emerging from the synchronous broadcast communication.
Inspired by progress in the field of desynchronization of synchronous systems by representing them in other MoCs, this work investigates the possibility of adapting and using methods and tools designed for other MoCs for the verification of systems represented in the synchronous MoC. To this end, this work introduces the interactive verification of synchronous systems based on the basic foundation of formal verification for sequential programs – the Hoare calculus. Due to the different models of computation, several problems have to be solved. In particular, due to the large amount of concurrency, several parts of the program are active at the same point in time. In contrast to sequential programs, a decomposition in the Hoare-logic style – in some sense a symbolic execution from one control-flow location to another – requires the consideration of several flows here. Therefore, different approaches for the interactive verification of synchronous systems are presented.
Additionally, the representation of synchronous systems by other MoCs and the influence of this representation on the verification task, by embedding synchronous systems differently in a single verification tool, are elaborated.
Feasibility is shown by integrating the presented approach with established model checking methods, implementing the AIFProver on top of the Averest system.
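For reference, the Hoare calculus mentioned above specifies sequential programs via triples {P} S {Q}; its assignment axiom and sequencing rule read as follows (standard textbook rules, not the thesis's synchronous extension):

```latex
% Assignment axiom and sequential composition rule of the Hoare calculus
\[
  \{P[e/x]\}\; x := e \;\{P\}
  \qquad
  \frac{\{P\}\,S_1\,\{Q\} \quad \{Q\}\,S_2\,\{R\}}{\{P\}\,S_1;S_2\,\{R\}}
\]
```

The sequencing rule is the symbolic execution from one control-flow location to the next; it is precisely this step that has no direct analogue when several synchronous flows are active at once.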

We consider the problem of finding efficient locations of surveillance cameras, where we distinguish
between two different problems. In the first, the whole area must be monitored and the number of cameras
should be as small as possible. In the second, the goal is to maximize the monitored area for a fixed number of
cameras. In both of these problems, restrictions on the abilities of the cameras, like limited depth of view or range
of vision, are taken into account. We present solution approaches for these problems and report on results of
their implementations applied to a real-world problem. We also consider a bicriteria problem with two objectives:
maximizing the monitored area and minimizing the number of cameras, and solve it for our study case.
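The first problem is essentially a set-cover instance, for which the classical greedy heuristic is a natural baseline; a minimal sketch with hypothetical candidate positions and precomputed visibility sets (not the solution approach of the study above):

```python
def greedy_camera_cover(area, visibility):
    """Greedy set-cover heuristic: repeatedly pick the candidate camera
    position whose visibility set covers the most still-uncovered points.
    `area` is a set of points to monitor; `visibility` maps each candidate
    position to the set of points it can see (hypothetical example data)."""
    uncovered = set(area)
    chosen = []
    while uncovered:
        best = max(visibility, key=lambda c: len(visibility[c] & uncovered))
        if not visibility[best] & uncovered:
            break  # remaining points are not visible from any candidate
        chosen.append(best)
        uncovered -= visibility[best]
    return chosen, uncovered
```

Limited depth of view or range of vision enters only through the visibility sets, so the same skeleton accommodates the camera restrictions mentioned above.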

This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles on the submicron scale. The process belongs to the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics, and environmental analysis.
Mathematical modeling is crucial for this process since, due to its very nature, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance (up to 10^6 times) of the flow over the diffusion, and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms that take into account the challenges posed by the application and provide reliable approximations for the quantities of interest.
We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived.
We optimize the flow control at the focusing stage under given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach.
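The multilevel idea can be sketched generically: the telescoping sum E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}] is estimated level by level with coupled samples. The toy level function below (dyadic discretization of a uniform variable) is purely illustrative and not the thesis's estimator for distribution functions:

```python
import numpy as np

def mlmc(sample_level, L, n_samples, seed=42):
    """Multilevel Monte Carlo telescoping estimator.
    `sample_level(l, n, rng)` returns n samples of P_0 for l = 0 and n
    coupled samples of the correction P_l - P_{l-1} for l > 0."""
    rng = np.random.default_rng(seed)
    return sum(sample_level(l, n_samples[l], rng).mean() for l in range(L + 1))

def dyadic_level(l, n, rng):
    """Toy level: P_l(X) = floor(2^l X) / 2^l discretizes X ~ U(0, 1),
    so E[P_l] -> E[X] = 1/2 as l grows (illustrative example only)."""
    x = rng.random(n)
    fine = np.floor(x * 2**l) / 2**l
    if l == 0:
        return fine
    coarse = np.floor(x * 2**(l - 1)) / 2**(l - 1)
    return fine - coarse
```

Because the corrections P_l - P_{l-1} shrink with l, coarse levels absorb most of the variance cheaply while fine levels remove the bias, which is the source of the cost savings analyzed in the thesis.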

For many decades, the search for language classes that extend the
context-free languages enough to include various languages that arise in
practice, while still keeping as many of the useful properties that
context-free grammars have - most notably cubic parsing time - has been
one of the major areas of research in formal language theory. In this thesis
we add a new family of classes to this field, namely
position-and-length-dependent context-free grammars. Our classes use the
approach of regulated rewriting, where derivations in a context-free base
grammar are allowed or forbidden based on, e.g., the sequence of rules used
in a derivation or the sentential forms each rule is applied to. For our
new classes we look at the yield of each rule application, i.e. the
subword of the final word that eventually is derived from the symbols
introduced by the rule application. The position and length of the yield
in the final word define the position and length of the rule application, and
each rule is associated with a set of positions and lengths at which it is
allowed to be applied.
We show that - unless the sets of allowed positions and lengths are really
complex - the languages in our classes can be parsed in the same time as
context-free grammars, using slight adaptations of well-known parsing
algorithms. We also show that they form a proper hierarchy above the
context-free languages and examine their relation to language classes
defined by other types of regulated rewriting.
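The parsing claim can be illustrated by a small adaptation of the CYK algorithm: each rule carries a predicate over the position and length of its yield, checked whenever the rule is applied. This is an illustrative sketch under simplifying assumptions (grammars in Chomsky normal form, predicates given as functions), not the thesis's algorithms:

```python
def cyk_pld(word, rules, start="S"):
    """CYK recognition for a position-and-length-dependent grammar in CNF.
    `rules` is a list of (lhs, rhs, allowed): rhs is a terminal or a pair of
    nonterminals, and allowed(pos, length) decides whether the rule may be
    applied to the yield word[pos:pos+length]."""
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, a in enumerate(word):
        for lhs, rhs, allowed in rules:
            if rhs == a and allowed(i, 1):
                table[i][1].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rhs, allowed in rules:
                    if (isinstance(rhs, tuple)
                            and rhs[0] in table[i][split]
                            and rhs[1] in table[i + split][length - split]
                            and allowed(i, length)):
                        table[i][length].add(lhs)
    return start in table[0][n]
```

With all predicates constantly true this is ordinary cubic-time CYK; restricting, say, a terminal rule to position 0 prunes every derivation whose corresponding yield starts elsewhere, which is exactly the regulation mechanism described above.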
We complete the treatment of the language classes by introducing pushdown
automata with position counter, an extension of traditional pushdown
automata that recognizes the languages generated by
position-and-length-dependent context-free grammars, and we examine various
closure and decidability properties of our classes. Additionally, we gather
the corresponding results for the subclasses that use right-linear resp.
left-linear base grammars and the corresponding class of automata, finite
automata with position counter.
Finally, as an application of our idea, we introduce length-dependent
stochastic context-free grammars and show how they can be employed to
improve the quality of predictions for RNA secondary structures.