## Dissertation

### Filter

#### Year of publication

- 2014 (46)

#### Document type

- Dissertation (46)

#### Language

- English (46)

#### Keywords

- Activity Recognition, Wearable Computing, Minimal Training, Unobtrusive Instrumentations (1)
- Activity recognition (1)
- Adaptive Data Structure (1)
- AhRR (1)
- Algorithm (1)
- Boosting (1)
- CYP1A1 (1)
- Classification (1)
- Closure (1)
- Code Generation (1)


In recent years the field of polymer tribology experienced a tremendous development
leading to an increased demand for highly sophisticated in-situ measurement methods.
Therefore, advanced measurement techniques were developed and established
in this study. Innovative approaches based on dynamic thermocouple, resistive electrical conductivity, and confocal distance measurement methods were developed to characterize in situ the temperature at sliding interfaces, the real contact area, and the thickness of transfer films. Although dynamic thermocouple
and real contact area measurement techniques were already used in similar
applications for metallic sliding pairs, comprehensive modifications were necessary to
meet the specific demands and characteristics of polymers and composites since
they have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method allows the determination of hot spot temperatures for PEEK compounds, and it was found that they can reach up to 1000 °C when short carbon fibers are present in the polymer. With regard to the anisotropic characteristics of the polymer compound,
the contact situation between short carbon fibers and steel counterbody could be
successfully monitored by applying a resistive measurement method for the real contact
area determination. Temperature compensation approaches were investigated
for the transfer film layer thickness determination, resulting in in-situ measurements
with a resolution of ~0.1 μm. In addition to the successful implementation of the measurement systems, the failure mechanisms of the PEEK compound used were clarified. For the first time in polymer tribology, the behavior of the most relevant system parameters could be monitored simultaneously under increasing load conditions. The measurements showed an increasing friction coefficient, wear rate, transfer film layer
thickness, and specimen overall temperature when frictional energy exceeded the
thermal transport capabilities of the specimen. In contrast, the real contact area between
short carbon fibers and steel decreased due to the separation effect caused by
the transfer film layer. Since the sliding contact was more and more matrix dominated,
the hot spot temperatures on the fibers dropped, too. The results of this failure
mechanism investigation already demonstrate the opportunities which the new
measurement techniques provide for a deeper understanding of tribological processes,
enabling improvements in material composition and application design.

On automotive test rigs, we apply load time series to components such that the outcome is as close as possible to given reference data. The testing procedure should in general be less expensive and at the same time take less testing time. In my thesis, I propose a test rig damage optimization problem (WSDP). This approach improves upon the test rig stress optimization problem (TSOP) used as the state of the art by industry experts.
In both the (TSOP) and the (WSDP), we optimize the load time series for a given test rig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important aspect; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable.
To overcome these issues, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series; additionally, in some special cases it is shown that damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP).
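The differentiability gain is easiest to see on a toy model. The following sketch (illustrative only; the Basquin-type S-N curve and the constants `C` and `k` are assumptions, not the damage model of the thesis) computes Palmgren-Miner damage directly from block-load amplitudes, so damage is a smooth function of the loads and no cycle counting is involved:

```python
import numpy as np

def block_load_damage(amplitudes, cycles, C=1e12, k=5.0):
    """Palmgren-Miner damage of a block-load signal.

    Block i applies cycles[i] load cycles at stress amplitude amplitudes[i].
    With a Basquin-type S-N curve N(s) = C * s**(-k), the damage
    D = sum_i cycles[i] / N(amplitudes[i]) is a smooth (differentiable)
    function of the amplitudes -- no rainflow counting is involved.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    cycles = np.asarray(cycles, dtype=float)
    allowable = C * amplitudes ** (-k)   # permissible cycles N(s_i)
    return float(np.sum(cycles / allowable))

# toy example: 1e4 cycles at 100 MPa plus 5e3 cycles at 150 MPa
print(block_load_damage([100.0, 150.0], [1e4, 5e3]))
```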
During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the plane-angle interval to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters for the new model are derived.
When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is then similar to finding the modes of a Gaussian Mixture Model (GMM), the difference being that the Gaussian functions used in GMMs are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of a sum of Gaussian functions, similar to the ones given for the unimodality of GMMs by Aprausheva et al. in [1].
Using the conditions for a single maximum, we give a clustering algorithm that merges the Gaussian functions in the sum into clusters. Each cluster obtained is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate location of each cluster's maximum is used as the starting point of a fixed-point iteration on the original damage function to obtain the actual maximum total damage at each iteration, as sketched below.
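As an illustration of this last step, the stationary points of a one-dimensional sum of Gaussian functions satisfy a simple fixed-point equation; the sketch below (with made-up weights and centers, not the damage model of the thesis) iterates it from a cluster's approximate maximum:

```python
import numpy as np

def gaussian_sum_mode(weights, centers, sigmas, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration for a local maximum of
    f(x) = sum_i w_i * exp(-(x - m_i)**2 / (2 * s_i**2)).

    Setting f'(x) = 0 and solving for x gives the update
        x <- sum_i r_i(x) * m_i / s_i**2  /  sum_i r_i(x) / s_i**2,
    with r_i(x) = w_i * exp(-(x - m_i)**2 / (2 * s_i**2)).
    """
    w = np.asarray(weights, dtype=float)
    m = np.asarray(centers, dtype=float)
    s2 = np.square(np.asarray(sigmas, dtype=float))
    x = float(x0)
    for _ in range(max_iter):
        r = w * np.exp(-((x - m) ** 2) / (2.0 * s2))
        x_new = float(np.sum(r * m / s2) / np.sum(r / s2))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# toy sum of two Gaussians, started near one cluster's approximate maximum
print(gaussian_sum_mode([1.0, 0.6], [0.0, 0.5], [0.3, 0.4], x0=0.1))
```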
We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage requires fewer iterations and is more reliable than using discretization.

Embedded systems, ranging from very simple systems up to complex controllers, may
nowadays have quite challenging real-time requirements. Many embedded systems are reactive
systems that have to respond to environmental events and have to guarantee certain real-time
constraints. Their execution is usually divided into reaction steps, where in each step the
system reads inputs from the environment and reacts to these by computing corresponding
outputs.
The synchronous Model of Computation (MoC) has proven to be well-suited for the
development of reactive real-time embedded systems whose paradigm directly reflects the
reactive nature of the systems it describes. Another advantage is the availability of formal
verification by model checking as a result of the deterministic execution based on a formal
semantics. Nevertheless, the increasing complexity of embedded systems requires compensating for the natural disadvantage of model checking, namely the well-known state-space explosion problem. It is therefore natural to try to integrate other verification methods with the already established techniques, e.g., appropriate decomposition techniques that counter the disadvantages of the model checking approach. Defining decomposition techniques for synchronous languages is a difficult task, however, as a result of the inherent parallelism emerging from the synchronous broadcast communication.
Inspired by the progress in the field of desynchronization of synchronous systems by
representing them in other MoCs, this work investigates the possibility of adapting and using methods and tools designed for other MoCs for the verification of systems represented in the
synchronous MoC. Therefore, this work introduces the interactive verification of synchronous
systems based on the basic foundation of formal verification for sequential programs – the
Hoare calculus. Due to the different models of computation, several problems have to be solved. In particular, due to the large amount of concurrency, several parts of the program are active at the same point in time. In contrast to sequential programs, a decomposition
in the Hoare-logic style that is in some sense a symbolic execution from one control flow
location to another one requires the consideration of several flows here. Therefore, different
approaches for the interactive verification of synchronous systems are presented.
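To recall the style of reasoning being transferred, here is the classic Hoare assignment axiom together with a one-line instance (a textbook example, not one of the synchronous proof rules developed in this work):

```latex
% Assignment axiom:   {P[e/x]}  x := e  {P}
\{\, P[e/x] \,\}\; x := e \;\{\, P \,\}
\qquad \text{e.g.} \qquad
\{\, y + 1 > 0 \,\}\; x := y + 1 \;\{\, x > 0 \,\}
```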
Additionally, the representation of synchronous systems by other MoCs, and the influence of this representation on the verification task when synchronous systems are embedded in different ways in a single verification tool, are elaborated.
The feasibility is shown by integrating the presented approach with the established model checking methods, implementing the AIFProver on top of the Averest system.

Test rig optimization
(2014)

Designing good test rigs for fatigue life tests is a common task in the automotive industry. The problem of finding an optimal test rig configuration and actuator load signals can be formulated as a mathematical program. We introduce a new optimization model that includes multi-criteria, discrete and continuous aspects. At the same time, we manage to avoid the necessity of dealing with the rainflow-counting (RFC) method. RFC is an algorithm which extracts load cycles from an irregular time signal. As a mathematical function it is non-convex and non-differentiable and, hence, makes optimization of the test rig intractable.
The block structure of the load signals is assumed from the beginning.
It greatly reduces the complexity of the problem without shrinking the feasible set. Also, we optimize with respect to the actuators’ positions, which makes it possible to take torques into account and thus extend the feasible set. As a result, the new model gives significantly better results compared with the other approaches to test rig optimization.
Under certain conditions, the non-convex test rig problem is a union of
convex problems on cones. Numerical methods for optimization usually need
constraints and a starting point. We describe an algorithm that detects each
cone and its interior point in polynomial time.
The test rig problem belongs to the class of bilevel programs. For every
instance of the state vector, a sum of functions has to be maximized. We propose a new branch-and-bound technique that uses the local maxima of every summand, as sketched below.
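The pruning idea behind such a branch-and-bound can be illustrated on a toy one-dimensional problem. The sketch below is only an illustration under assumed unimodal Gaussian summands with made-up peak data, not the algorithm of the thesis: each box is bounded from above by the sum of the summands' maxima over the box, and boxes that cannot beat the incumbent are discarded.

```python
import heapq
import numpy as np

# Toy problem:  max_x  sum_i f_i(x)  on [0, 1], each f_i a unimodal bump
# with a known peak location (hypothetical data for illustration).
peaks = np.array([0.2, 0.5, 0.9])
heights = np.array([1.0, 0.8, 0.6])
widths = np.array([0.05, 0.10, 0.08])

def f_i(i, x):
    return heights[i] * np.exp(-((x - peaks[i]) ** 2) / (2 * widths[i] ** 2))

def total(x):
    return sum(f_i(i, x) for i in range(len(peaks)))

def upper_bound(a, b):
    # Max of a unimodal bump over [a, b] is attained at its peak if the peak
    # lies inside, otherwise at the nearer endpoint; summing these per-summand
    # maxima bounds the maximum of the sum (max of a sum <= sum of maxima).
    return sum(f_i(i, min(max(peaks[i], a), b)) for i in range(len(peaks)))

def branch_and_bound(lo=0.0, hi=1.0, tol=1e-6):
    best_x, best_val = lo, total(lo)
    boxes = [(-upper_bound(lo, hi), lo, hi)]        # max-heap via negated bound
    while boxes:
        neg_ub, a, b = heapq.heappop(boxes)
        if -neg_ub <= best_val + tol:
            continue                                 # prune: cannot beat incumbent
        mid = 0.5 * (a + b)
        val = total(mid)
        if val > best_val:
            best_x, best_val = mid, val
        if b - a > tol:
            heapq.heappush(boxes, (-upper_bound(a, mid), a, mid))
            heapq.heappush(boxes, (-upper_bound(mid, b), mid, b))
    return best_x, best_val

print(branch_and_bound())
```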

In this thesis, we combine Groebner bases with SAT solvers in different ways.
Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses.
Combining them can compensate for these weaknesses.
The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin.
However, in our experiments, about 80 percent of the Groebner basis computations yield no new binary clauses.
By selecting smaller and more compact inputs for the Groebner basis computations, we can significantly
reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition,
the new strategy reduces the solving time of a SAT solver in general, especially for large and hard problems.
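As a toy illustration of this first combination (using SymPy rather than the solver integration of the thesis; the example system is made up), one can add the field equations v² + v, compute a Groebner basis over GF(2), and read learned facts off the low-degree basis elements:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# Toy Boolean system over GF(2); the field equations v**2 + v restrict
# every variable to {0, 1}.
polys = [
    x*y + z,                       # z = x AND y
    y + z + 1,                     # z = NOT y
    x**2 + x, y**2 + y, z**2 + z,  # field equations
]

gb = groebner(polys, x, y, z, modulus=2, order='lex')

# Degree-1 elements fix a literal; elements of the form a*b + a encode the
# binary clause (NOT a OR b), which can be handed back to the SAT solver.
for g in gb.exprs:
    print(g)
```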
The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing a Boolean Groebner basis of the given ideal is inefficient when we want to eliminate most of the variables from a large system of Boolean polynomials.
Therefore, we propose a more efficient approach for such cases.
In this approach, the given ideal is translated to a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection.
We also optimize the Buchberger-Moeller algorithm for lexicographic orderings and compare it with Brickenstein's interpolation algorithm.
Finally, we apply Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths.
For a given design, we construct an abstract model.
Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\).
The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographic monomial ordering.
Finally, the normal form is employed to prove the desired properties.
To evaluate our approach, we verify global properties of a multiplier and an FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks.

‘Dioxin-like’ (DL) compounds occur ubiquitously in the environment. Toxic responses associated with specific dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs) include dermal toxicity, immunotoxicity, liver toxicity, carcinogenicity, as well as adverse effects on reproduction, development, and endocrine functions. Most, if not all of these effects are believed to be due to interaction of these compounds with the aryl hydrocarbon receptor (AhR).
With tetrachlorodibenzo-p-dioxin (TCDD) as the representative and most potent congener, a toxic equivalency factor (TEF) concept was employed, in which each congener is assigned a TEF value reflecting the compound’s toxicity relative to that of TCDD.
The EU-project ‘SYSTEQ’ aimed to develop, validate, and implement human systemic TEFs as indicators of toxicity for DL-congeners. Hence, the identification of novel quantifiable biomarkers of exposure was a major objective of the SYSTEQ project.
In order to approach this objective, a mouse whole genome microarray analysis was applied using a set of seven individual congeners, termed the ‘core congeners’. These core congeners (TCDD, 1-PeCDD, 4-PeCDF, PCB 126, PCB 118, PCB 156, and the non-dioxin-like PCB 153), which contribute approximately 90% of the toxic equivalents (TEQs) in the human food chain, were further tested in vivo as well as in vitro. The mouse whole genome microarray revealed a conserved list of differentially regulated genes and pathways associated with ‘dioxin-like’ effects.
A definitive data set of in vitro studies was intended to serve as the foundation for a possible establishment of novel TEFs. Thus, CYP1A induction measured by EROD activity, which represents a sensitive and well-established marker for dioxin-like effects, was used to estimate the potency and efficacy of selected congeners. For this study, primary rat hepatocytes and the rat hepatoma cell line H4IIE were used, as well as the core congeners and an additional group of compounds of comparable environmental relevance: 1,6-HxCDD, 1,4,6-HpCDD, TCDF, 1,4-HxCDF, 1,4,6-HpCDF, PCB 77, and PCB 105.
In addition, a human whole genome microarray experiment was performed in order to gain knowledge of TCDD’s impact on cells of the immune system. Hence, human peripheral blood mononuclear cells (PBMCs) were isolated from individuals and exposed to TCDD, or to TCDD in combination with a stimulus (lipopolysaccharide (LPS) or phytohemagglutinin (PHA)). A few members of the AhR gene battery were found to be regulated, and only limited evidence of potential TCDD-mediated immunomodulatory effects was obtained, due in part to large inter-individual differences.

In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified, and solutions for them are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
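As a small illustration of how the HEALPix parametrization supports such a tiled quadtree (using the healpy library, which is not part of the thesis; the point coordinates and depth are arbitrary), the nested pixel numbering directly provides the parent/child structure of the tile hierarchy:

```python
import numpy as np
import healpy as hp

def healpix_tile_path(lon_deg, lat_deg, max_level):
    """Chain of nested-scheme HEALPix pixel indices covering one point,
    from the 12 base tiles (nside = 1) down to max_level subdivisions.

    In the nested numbering, the four children of tile p at the next level
    are 4*p .. 4*p + 3, which is exactly the quadtree structure used for
    tiled terrain hierarchies.
    """
    theta = np.radians(90.0 - lat_deg)   # colatitude
    phi = np.radians(lon_deg)
    return [hp.ang2pix(2 ** level, theta, phi, nest=True)
            for level in range(max_level + 1)]

# tile chain for an arbitrary surface point, down to 6 subdivision levels
print(healpix_tile_path(lon_deg=137.4, lat_deg=-4.6, max_level=6))
```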
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment with coring data being presented using a billboard rendering technique while the
time-variant flow field is indicated using Line-Integral-Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection is supported.

We consider two major topics in this thesis: spatial domain partitioning, and its use as a framework to simulate creeping flows in representative volume elements.
First, we introduce a novel multi-dimensional space partitioning method. A new type of tree combines the advantages of the octree and the kd-tree without having their disadvantages. We present a new data structure allowing local refinement, parallelization and proper restriction of transition ratios between nodes. Our technique has no dimensional restrictions at all. The tree's data structure is defined by a topological algebra based on the symbols \( A = \{ L, I, R \} \) that encode the partitioning steps. The set of successors is restricted such that each node has the partition-of-unity property, allowing domains to be partitioned without overlap. With our method it is possible to construct a wide choice of spline spaces to compress or reconstruct scientific data such as pressure and velocity fields and multidimensional images. We present a generator function to build a tree that represents a voxel geometry. The space partitioning system is used as a framework for numerical computations. This work is triggered by the problem of representing, in a numerically appropriate way, huge three-dimensional voxel geometries that can have up to billions of voxels. Such large datasets occur in situations where one needs to deal with large representative volume elements (REVs).
Second, we introduce a novel arrangement of the pressure and velocity variables to solve the Stokes equations. The basic idea of our method is to arrange the variables in such a way that each cell can satisfy a given physical law independently of its neighbor cells. This is done by splitting the velocity values into left- and right-converging components. For each cell we can set up a small linear system that describes the momentum and mass conservation equations. This formulation allows the use of the Gauß-Seidel algorithm to solve the global linear system (see the sketch below). Our tree structure is used for spatial partitioning of the geometry and provides a proper initial guess. In addition, we introduce a method that uses the current velocity field to refine the tree and improve the numerical accuracy where it is needed. We developed a novel approach rather than using existing approaches such as the SIMPLE algorithm, Lattice-Boltzmann methods or Explicit Jump methods, since these are suited to regular grid structures. Other standard CFD approaches extract surfaces and create tetrahedral meshes to solve on unstructured grids and thus cannot be applied to our data structure. The discretization converges to the analytical solution with respect to grid refinement. We observe clear advantages in computation time and memory for high-porosity geometries, and in memory requirements for low-porosity geometries.
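For reference, the plain Gauss-Seidel iteration used as the global solver looks as follows (a generic sketch for an arbitrary linear system; the cell-wise momentum/mass formulation of the thesis is not reproduced here):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel iteration for A x = b: sweep over the unknowns and
    update each one using the most recent values of the others."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# small diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
```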

Multilevel Constructions
(2014)

The thesis consists of two chapters.
The first chapter is devoted to a detailed investigation of the multilevel Monte Carlo (MLMC) method. In particular, we take an optimisation view of the estimator. Rather than fixing the number of discretisation points \(n_i\) to be a geometric sequence, we try to find an optimal setup for \(n_i\) such that, for a fixed error, the estimate can be computed in minimal time.
In the second chapter we propose to enhance the MLMC estimator with the weak extrapolation technique. This technique improves the order of weak convergence of a scheme and, as a result, reduces the computational complexity of the estimator. In particular, we study the high-order weak extrapolation approach, which is known to be inefficient in the standard setting. However, a combination of MLMC and weak extrapolation yields an improvement of the MLMC estimator.
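For context, a minimal sketch of the standard MLMC telescoping estimator is given below (a geometric Brownian motion under Euler-Maruyama with a hand-picked sample allocation; the parameters are illustrative and not the optimised setup studied in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(n_samples, n_fine, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine/coarse Euler-Maruyama samples of X_T for
    dX = mu*X dt + sigma*X dW; the coarse path uses half as many steps
    and the same Brownian increments (summed pairwise)."""
    dt = T / n_fine
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    xf = np.full(n_samples, x0)
    for k in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if n_fine == 1:                      # coarsest level has no coarse path
        return xf, np.zeros(n_samples)
    xc = np.full(n_samples, x0)
    dw_c = dw[:, 0::2] + dw[:, 1::2]
    for k in range(n_fine // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dw_c[:, k]
    return xf, xc

def mlmc_estimate(n_per_level):
    """Telescoping sum  E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level, n_l in enumerate(n_per_level):
        fine, coarse = euler_pair(n_l, n_fine=2 ** level)
        est += np.mean(fine) if level == 0 else np.mean(fine - coarse)
    return est

# four levels with a hand-picked number of samples per level
print(mlmc_estimate([4000, 2000, 1000, 500]))
```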

Optical character recognition (OCR) of machine-printed text is widely considered a solved problem. However, error-free OCR of degraded (broken and merged) and noisy text is still challenging for modern OCR systems. OCR of degraded text with high accuracy is very important due to many applications in business, industry and large-scale document digitization projects. This thesis presents a new OCR method for degraded
text recognition by introducing a combined ANN/HMM OCR approach. The approach
provides significantly better performance in comparison with state-of-the-art HMM-based OCR methods and existing open-source OCR systems. In addition, the thesis introduces novel applications of ANNs and HMMs for document image preprocessing and recognition of low-resolution text. Furthermore, the thesis presents psychophysical experiments to determine the effect of letter permutation in visual word recognition of Latin- and cursive-script languages.
HMMs and ANNs are widely employed pattern recognition paradigms and have been
used in numerous pattern classification problems. This work presents a simple and novel method for combining HMMs and ANNs for segmentation-free OCR of degraded text. HMMs and ANNs are powerful pattern recognition strategies, and their combination is interesting for improving the current state of the art in OCR. Most previous attempts at combining HMMs and ANNs focused on applying ANNs as an approximation of the probability density function or as a neural vector quantizer for HMMs. These methods either require combined NN/HMM training criteria [ECBG-MZM11] or use complex neural network architectures like time-delay or space-displacement neural networks [BLNB95]. In this work, however, neural networks are used as a discriminative feature extractor, in combination with a novel text line scanning mechanism, to extract discriminative features from unsegmented text lines. The features are processed by HMMs to provide segmentation-free text line recognition. The ANN/HMM modules are trained separately on a common dataset using standard machine learning procedures. The proposed ANN/HMM OCR system also realizes, to some extent, several cognitive-reading-based strategies during OCR. On a dataset of 1,060 degraded text lines extracted from the widely used UNLV-ISRI benchmark database [TNBC99], the presented system achieves a 30% reduction in error rate compared to Google’s Tesseract OCR system [Smi13] and a 43% reduction in error compared to the OCRopus OCR system [Bre08], which are the best open-source OCR systems available today.
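To make the text line scanning mechanism concrete, the following sketch shows a sliding-window scan over a text-line image (the window width and step are assumptions; in the pipeline described above, each frame would be passed through the trained ANN feature extractor and the resulting frame sequence decoded by HMMs without prior character segmentation):

```python
import numpy as np

def scan_text_line(line_image, win_width=8, step=2):
    """Slide a fixed-width window across a (height x width) text-line image
    and return one flattened frame per window position.  Downstream, each
    frame would be mapped to a discriminative feature vector by a neural
    network and the frame sequence decoded by HMMs."""
    _, w = line_image.shape
    frames = []
    for x in range(0, max(w - win_width, 0) + 1, step):
        window = line_image[:, x:x + win_width]
        frames.append(window.astype(np.float32).ravel())
    return np.stack(frames)

# toy 32x200 "text line": yields frames of shape (n_windows, 32 * 8)
dummy_line = np.random.default_rng(0).integers(0, 256, size=(32, 200))
print(scan_text_line(dummy_line).shape)
```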
In addition, this thesis introduces new applications of HMMs and ANNs in OCR and document image preprocessing. First, an HMM-based segmentation-free OCR approach is presented for recognition of low-resolution text. OCR of low-resolution text is quite important due to the presence of such text in screenshots, web images and video captions. It is challenging because of antialiased rendering and the use of very small font sizes; the characters in low-resolution text are usually joined to each other and may appear differently at different locations on a computer screen. This work presents the use of HMMs in optical recognition of low-resolution isolated characters and text lines. The evaluation of the proposed method shows that HMM-based OCR techniques work quite well and reach the performance of specialized approaches for OCR of low-resolution text.
Then, this thesis presents novel applications of ANNs for automatic script recognition and orientation detection. Script recognition determines the script on the page so that an appropriate character recognition algorithm can be applied. Orientation detection detects and corrects the deviation of the document’s orientation angle from the horizontal direction. Both script recognition and orientation detection are important preprocessing steps in developing robust OCR systems. In this work, instead of extracting handcrafted features, convolutional neural networks are used to extract relevant discriminative features for each classification task. The proposed method resulted in more than 95% script recognition accuracy on various multi-script documents at the connected-component level and 100% page orientation detection accuracy for Urdu documents.
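A minimal convolutional classifier of the kind used for such preprocessing decisions might look as follows (an illustrative PyTorch sketch; the 32×32 patch size, layer sizes and four script classes are assumptions, not the architecture of the thesis):

```python
import torch
import torch.nn as nn

class ScriptCNN(nn.Module):
    """Tiny CNN that classifies connected-component patches into script
    classes; the same structure could serve orientation detection by
    swapping the output classes."""

    def __init__(self, n_scripts=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_scripts)

    def forward(self, x):                 # x: (batch, 1, 32, 32) grayscale patches
        z = self.features(x)
        return self.classifier(z.flatten(1))

# one forward pass on a dummy batch of component patches
logits = ScriptCNN()(torch.randn(8, 1, 32, 32))
print(logits.shape)   # torch.Size([8, 4])
```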
Human reading is a cognitive process closely analogous to OCR that involves decoding printed symbols into meanings. Studying cognitive reading behavior may help in building a robust machine reading strategy. This thesis presents a behavioral study of how the cognitive system works in the visual recognition of words and permuted non-words. The objective of this study is to determine the impact of overall word shape in the visual word recognition process. Permutation is considered a source of shape degradation: the visual appearance of actual words can be distorted by changing the positions of the constituent letters inside the words. The study proposes the hypothesis that reading words and reading permuted non-words are two distinct mental-level processes, and that people use different strategies in handling permuted non-words as compared to normal words. The hypothesis is tested by conducting psychophysical experiments in the visual recognition of words from orthographically different languages, i.e. Urdu, German and English. Experimental data are analyzed using analysis of variance (ANOVA) and distribution-free rank tests to determine significant differences in response-time latencies for the two classes of data. The results support the presented hypothesis, and the findings are consistent with the dual-route theories of reading.