If an automated system is tasked with providing services such as search or clustering on an information repository, the quality of the output depends heavily on the information that is available to the system in machine-readable form. Simple text, for example, is machine-readable only in a very limited sense. Advanced services typically need to derive other representations of the text (e.g., sets of keywords) as input for their core algorithms. Some services might need information that cannot be derived from the resource in question alone, but is available only as separate metadata, such as usage information. Annotations can be used to carry this information.
This thesis focuses on so-called ontology-based annotations. In contrast to other forms of annotations such as tags (arbitrary strings that users can assign to resources), ontology-based annotations conform to a predefined data structure and class hierarchy. An advantage of this approach is that rich information can be stored in a well-structured way in the annotations; a drawback is that users need to be familiar with the hierarchy and other design decisions of the underlying ontology used for annotations.
Two scenarios are considered in this thesis:
First, a document-based scenario in which text annotations are used to represent both information about the text content and usage and user context information in a multi-user setting with mostly objective annotation criteria; second, a resource-based scenario whose annotation model focuses on multi-user settings with subjective annotation criteria, using (dis-)similarities in user annotations to derive user similarity metrics, and building personalized views from this information.
Finally, the prototypical systems that have been developed throughout this thesis are evaluated, validating the concepts presented here.
This thesis discusses several applications of computational topology to the visualization of scalar fields. Scalar field data come from various measurements and simulations. The intrinsic properties of this kind of data that make its visualization a complicated task are its large size and the presence of noise. Computational topology is a powerful tool for automatic feature extraction, which allows the user to interpret the information contained in the dataset more efficiently. Using it, the main purpose of scientific visualization, namely extracting knowledge from data, becomes a more convenient task.
Volume rendering is a class of methods designed for realistic visual representation of 3D scalar fields. It is used in a wide range of applications with different data sizes, noise rates, and requirements on interactivity and flexibility. At the moment, no known technique can meet the needs of every application domain; therefore, methods solving specific problems must be developed. One such algorithm, designed for rendering noisy data with high frequencies, is presented in the first part of this thesis. The method works with multidimensional transfer functions and is especially suited for functions exhibiting sharp features. Compared with known methods, the presented algorithm achieves better visual quality and faster performance in the presence of such features. An improvement on the method is also presented in this part of the thesis, utilizing a topological theory, Morse theory, and a topological construct, the Morse-Smale complex. The improvement allows for a performance speedup at a small precomputation and memory cost.
The usage of topological methods for feature extraction on a real-world dataset often results in a very large feature space, which easily leads to information overflow. Topology simplification is designed to reduce the number of features and allow a domain expert to concentrate on the most important ones. In terms of Morse theory, features are represented by critical points. An importance measure usually used for removing critical points is called homological persistence. Critical points are cancelled pairwise according to their homological persistence value. In the presence of outlier-like noise, homological persistence has a clear drawback: the outliers are assigned a high importance value and are therefore not removed. In the second part of this thesis, a new importance measure is presented that is especially suited for data with outliers. This importance measure is called scale space persistence. The algorithm for the computation of this measure is based on the scale space theory known from the area of computer vision. The development of a critical point in scale space gives information about its spatial extent, so outliers can be distinguished from other critical points. The usage of the presented importance measure is demonstrated on a real-world application, crater identification on the surface of Mars.
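As a minimal illustration of the standard pairwise cancellation (not the thesis implementation), 0-dimensional homological persistence of a 1D signal can be computed with a union-find sweep over sublevel sets: each local minimum is born at its value and dies when its component merges into an older one at a saddle.

```python
def persistence_pairs(signal):
    """0-dimensional persistence of the sublevel sets of a 1D signal.

    Returns (birth, death) pairs; the component of the global minimum
    never dies and is therefore not reported.
    """
    n = len(signal)
    comp = [None] * n      # union-find parent; None = not yet active
    birth = {}             # representative index -> birth value

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]
            i = comp[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: signal[k]):
        comp[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and comp[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # the younger component (higher birth value) dies at signal[i]
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < signal[i]:   # skip zero-persistence pairs
                    pairs.append((birth[young], signal[i]))
                comp[young] = old
    return pairs
```

The difference between death and birth is the persistence value; a single deep outlier sample produces a pair with a large difference, which is exactly the behavior criticized above and addressed by scale space persistence.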
The third part of this work presents a system for general interactive topology analysis and exploration. The development of such a system is motivated by the fact that topological methods are often considered complicated and hard to understand, because the application of topology to visualization requires a deep understanding of the mathematical background behind it. A domain expert exploring data using topology for feature extraction needs an intuitive way to steer the exploration process. The presented system is based on the intuitive notion of a scene graph, in which the user can choose and place component blocks to achieve an individual result. This way, the domain expert can extract more knowledge from given data, independently of the application domain. The tool enables the calculation and simplification of the underlying topological structure, the Morse-Smale complex, as well as the visualization of parts of it. The system also includes a simple generic query language to retrieve different substructures of the topological structure at different levels of the hierarchy.
The fourth part of this dissertation concentrates on an application of computational geometry to quality assessment of a triangulated surface. Quality assessment of a triangulation is called surface interrogation and aims at revealing intrinsic irregularities of a surface. Curvature and continuity are the properties required to design a visually pleasing geometric object. For example, the surface of a manufactured body usually should be convex, without bumps or wiggles. Conventional rendering methods hide the regions of interest because of smoothing or interpolation. Two new methods are presented here: curvature estimation using local fitting with Bézier patches, and computation of reflection lines for visual representation of continuity; both are specially designed for assessment problems. The examples and comparisons presented in this part of the thesis demonstrate the benefits of the introduced algorithms. The methods are also well suited for concurrent visualization of the results from simulation and surface interrogation, to reveal possible intrinsic relationships between them.
Three-dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high-precision structured light scanners. Lower-precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two-dimensional (2d) camera images.
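The triangulation principle can be sketched in 2D (a simplified, hypothetical setup with two pinhole cameras whose centers and viewing-ray directions are assumed known; real scanners solve the calibrated 3D version of this problem):

```python
# Intersect the viewing rays of two cameras to recover a 2D point.
# c1, c2 are camera centers, d1, d2 the ray directions towards the
# observed feature (assumed known from calibration and image matching).
def triangulate(c1, d1, c2, d2):
    # Solve c1 + t1*d1 = c2 + t2*d2 for t1 (a 2x2 linear system).
    a, b = d1[0], -d2[0]
    c, d = d1[1], -d2[1]
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no triangulation possible")
    t1 = (rx * d - b * ry) / det
    return (c1[0] + t1 * d1[0], c1[1] + t1 * d1[1])
```

In practice the two rays rarely intersect exactly because of noise, which is why measured devices minimize a reprojection or midpoint error instead of solving the system directly.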
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU-based stereo matching method is presented to achieve a high-quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi-level approach on the GPU allows fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for scanning complex-shaped objects. The operator of the scanner can scan complex regions more slowly, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer-aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree-based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation with a region-growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
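The region-growing step can be sketched as follows (a minimal illustration under simplifying assumptions: brute-force neighbor search, comparison against the seed normal only; the parameters `radius` and `angle_thresh` are our own illustrative choices, not the thesis values):

```python
import math

# Group points into segments: a point joins a segment if it lies within
# `radius` of an already-labeled point and its unit normal deviates from
# the segment's seed normal by less than `angle_thresh` radians.
def region_growing(points, normals, radius=1.5, angle_thresh=0.3):
    n = len(points)
    labels = [-1] * n
    next_label = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        queue = [seed]
        while queue:
            i = queue.pop()
            for j in range(n):
                if labels[j] != -1:
                    continue
                dot = sum(a * b for a, b in zip(normals[seed], normals[j]))
                angle = math.acos(max(-1.0, min(1.0, dot)))
                if math.dist(points[i], points[j]) <= radius and angle < angle_thresh:
                    labels[j] = next_label
                    queue.append(j)
        next_label += 1
    return labels
```

A real implementation would use the octree for the neighbor queries and curvature values in addition to normals, but the growth pattern is the same.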
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. The estimated shape of the segment determines which local areas are added to it. An accumulated score estimates the probability of a segment belonging to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures that the boundary is closed and avoids self-intersections.
This PhD thesis deals with the calculation and application of a new class of invariants that can be used to recognize patterns in tensor fields (i.e., scalar fields, vector fields, and matrix fields) and, by composing scalar fields with delta functions, also in point clouds.
In the first chapter, an overview of existing invariants is given.
In the second chapter, the general definition of the new invariants is given: starting with a tensor field, a set of moment tensors is created by convolving the field, in a tensor-product manner, with different orders of the tensor product of the position vector. From these, rotationally invariant values are calculated via contraction of tensor products. An algorithm to obtain a complete and independent set of invariants from a given moment tensor set is described. Furthermore, methods are presented to make these sets of invariants invariant under translation, rotation, scaling, and affine transformations.
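The construction can be illustrated in its simplest instance: the second-order moment tensor of a point cloud, whose full contractions (traces of tensor powers) are rotation invariants; centering the points makes them translation invariant. This is a minimal sketch only, not the thesis algorithm, which builds complete independent sets from higher-order moment tensors:

```python
# Two rotation invariants of a 3D point cloud from its second-order
# moment tensor M_ij = sum_p x_i x_j (after centering the points).
def moment_invariants(points):
    n = len(points)
    # translation invariance: move the centroid to the origin
    c = [sum(p[k] for p in points) / n for k in range(3)]
    q = [[p[k] - c[k] for k in range(3)] for p in points]
    # second-order moment tensor
    M = [[sum(p[i] * p[j] for p in q) for j in range(3)] for i in range(3)]
    # full contractions of tensor products are rotation invariant
    tr = sum(M[i][i] for i in range(3))                                # tr(M)
    tr2 = sum(M[i][j] * M[j][i] for i in range(3) for j in range(3))   # tr(M*M)
    return tr, tr2
```

Under a rotation R the tensor transforms as R M Rᵀ, so both traces are unchanged; higher-order moment tensors yield further independent invariants in the same way.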
In the third chapter, a method to optimize the calculation of these sets of invariants is described: every invariant can be modeled as an undirected graph comprising multiple sub-graphs that represent partially contracted tensor products of the moment tensors. The composition of the sets of invariants is optimized by a clever choice of the decomposition into sub-graphs, with all paths forming a hyper-graph of sub-graphs in which each node describes a composition step. Finally, C++ source code is generated, which is optimized using the symmetries of the different tensors and tensor products, and the computational effort is compared to other methods for calculating invariants.
The fourth chapter describes the application of the invariants to object recognition in point clouds from 3D scans. To this end, the invariants of sub-sets of point clouds are stored for every known object. Afterwards, invariants are calculated from an unknown point cloud and looked up in the database to assign the point cloud to one of the known objects. Benchmarks using three 3D object databases measure run time and recognition rate.
Mechanical ventilation of patients with severe lung injury is an important clinical treatment to ensure proper lung oxygenation and to mitigate the extent of collapsed lung regions. While current imaging technologies such as Computed Tomography (CT) and chest X-ray allow for a thorough inspection of the thorax, they are limited to static pictures and exhibit several disadvantages, including exposure to ionizing radiation and high cost. Electrical Impedance Tomography (EIT) is a novel method to determine functional processes inside the thorax such as lung ventilation and cardiac activity. EIT reconstructs the internal electrical conductivity distribution within the thorax from voltage measurements on the body surface. Conductivity changes correlate with important clinical parameters such as lung volume and perfusion. Current EIT systems and algorithms use simplified or generalized thorax models to solve the reconstruction problem, which reduces image quality and anatomical significance. In this thesis, the development of a clinically relevant workflow to compute sophisticated three-dimensional thorax models from patient-specific CT data is described. The method allows medical experts to generate a multi-material segmentation in an interactive and fast way, while a volumetric mesh is computed automatically from the segmentation. The significantly improved image quality and anatomical precision of EIT images reconstructed with these 3D models is reported, and the impact on clinical applicability is discussed. In addition, three projects concerning quantitative CT (qCT) measurements and multi-modal 3D visualization are presented, which demonstrate the importance and productivity of interdisciplinary research groups including computer scientists and medical experts. The results presented in this thesis contribute significantly to clinical research efforts to pave the way towards improved patient-specific treatments of lung injury using EIT and qCT.
Researchers and analysts in modern industrial and academic environments are faced with a daunting amount of multivariate data. While there has been significant development in the areas of data mining and knowledge
discovery, there is still the need for improved visualizations and generic solutions. The state-of-the-art in visual analytics and exploratory data visualization is to incorporate more profound analysis methods while focusing on improving interactive abilities, in order to support data analysts in gaining new insights through visual exploration and hypothesis building.
In the research field of exploratory data visualization, this thesis contributes new approaches in dimension reduction that tackle a number of shortcomings in state-of-the-art methods, such as interpretability and ambiguity. By combining methods from several disciplines, we describe how ambiguity can be countered effectively by visualizing coordinate values within a lower-dimensional embedding, thereby focusing on the display of the structural composition of high-dimensional data and on an intuitive depiction of inherent global relationships. We also describe how properties and alignment of high-dimensional manifolds can be analyzed in different levels of detail by means of a self-embedding hierarchy of local projections, each using full degree of freedom, while keeping the global context.
To the application field of air quality research, the thesis provides novel means for the study of aerosol source contributions. Triggered by this particularly challenging application problem, we instigate a new research direction in the area of visual analytics by describing a methodology for model-based visual analysis that (i) allows the scientist to be “in the loop” of computations and (ii) enables them to verify and control the analysis process, in order to steer computations towards physical meaning. Careful reflection on our work in this application has led us to derive key design choices that underlie and transcend application-specific solutions. As a result, we describe a general design methodology for computing parameters of a pre-defined analytical model that map to multivariate data. Core application areas that can benefit from our approach are within engineering disciplines, such as civil, chemical, electrical, and mechanical engineering, as well as in geology, physics, and biology.
This thesis provides a fully automatic translation from synchronous programs to parallel software for different architectures, in particular, shared memory processing (SMP) and distributed memory systems. Thereby, we exploit characteristics of the synchronous model of computation (MoC) to reduce communication and to improve available parallelism and load-balancing by out-of-order (OOO) execution and data speculation.
Manual programming of parallel software requires developers to partition a system into tasks and to add synchronization and communication. The model-based approach to development abstracts from details of the target architecture and allows decisions about the target architecture to be made as late as possible. The synchronous MoC supports this approach by abstracting from time and providing implicit parallelism and synchronization. Existing compilation techniques translate synchronous programs into synchronous guarded actions (SGAs), an intermediate format that abstracts from semantic problems in synchronous languages. Compilers for SGAs analyze causality problems and ensure logical correctness and the absence of schizophrenia problems. Hence, SGAs are a simplified and general starting point that keeps the synchronous MoC at the same time. The instantaneous feedback in the synchronous MoC makes the mapping of these systems to parallel software a non-trivial task. In contrast, other MoCs such as data-flow processing networks (DPNs) directly match parallel architectures. We translate the SGAs into DPNs, which represent a commonly used model for creating parallel software. DPNs have been proposed as a programming model for distributed parallel systems that have communication paths with unpredictable latencies. The purely data-driven execution of DPNs does not require global coordination, and therefore DPNs can easily be mapped to parallel software for architectures with distributed memory. The generation of efficient parallel code from DPNs challenges compiler design with two issues: to fully utilize a parallel system, communication and synchronization have to be kept low, and the utilization of the computational units has to be balanced. The variety of hardware architectures and dynamic execution techniques in the processing units of these systems makes a statically balanced distributed execution impossible.
The synchronous MoC is still reflected in our generated DPNs, which exhibit characteristics that allow optimizations concerning the previously mentioned issues. In particular, we apply a general communication reduction and OOO execution to achieve a dynamically balanced execution, inspired by hardware design.
For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many of the useful properties that context-free grammars have - most notably cubic parsing time - has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes, we look at the yield of each rule application, i.e., the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.
We show that - unless the sets of allowed positions and lengths are very complex - the languages in our classes can be parsed in the same time as context-free grammars, using slight adaptations of well-known parsing algorithms. We also show that they form a proper hierarchy above the context-free languages, and we examine their relation to language classes defined by other types of regulated rewriting.
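For grammars in Chomsky normal form, such an adaptation is natural in a CYK-style parser, since the table is already indexed by start position and span length; a rule simply fires only when its associated position-and-length predicate admits the span. The following is a minimal sketch with a hypothetical rule encoding, not the algorithm of the thesis:

```python
# CYK recognition with position-and-length-dependent rules.
# unary:  list of (A, terminal, ok);  binary: list of (A, B, C, ok);
# ok(pos, length) decides whether the rule may derive the subword
# starting at 0-based position `pos` with that length.
def parse(word, start, unary, binary):
    n = len(word)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, t in enumerate(word):
        for A, term, ok in unary:
            if term == t and ok(i, 1):
                table[i][1].add(A)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for A, B, C, ok in binary:
                    if (B in table[i][split]
                            and C in table[i + split][length - split]
                            and ok(i, length)):
                        table[i][length].add(A)
    return start in table[0][n]
```

With all predicates constantly true this is plain cubic-time CYK; restricting a rule's predicate carves out the position-and-length-dependent sublanguage without changing the asymptotic running time.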
We complete the treatment of the language classes by introducing pushdown
automata with position counter, an extension of traditional pushdown
automata that recognizes the languages generated by
position-and-length-dependent context-free grammars, and we examine various
closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear and left-linear base grammars, respectively, and for the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent
stochastic context-free grammars and show how they can be employed to
improve the quality of predictions for RNA secondary structures.
The recognition of day-to-day activities is still a very challenging and important research topic. During recent years, a lot of research has gone into designing and realizing smart environments in different application areas such as health care, maintenance, sports, or smart homes. As a result, a large number of sensor modalities were developed, different types of activity and context recognition services were implemented, and the resulting systems were benchmarked using state-of-the-art evaluation techniques. However, so far hardly any of these approaches have found their way into the market and consequently into the homes of real end-users on a large scale. The reason for this is that almost all systems have one or more of the following characteristics in common: expensive high-end or prototype sensors are used which are not affordable or reliable enough for mainstream applications; many systems are deployed in highly instrumented environments or so-called "living labs", which are far from real-life scenarios and are often evaluated only in research labs; almost all systems are based on complex system configurations and/or extensive training data sets, which means that a large amount of data must be collected in order to install the system. Furthermore, many systems rely on user- and/or environment-dependent training, which makes it even more difficult to install them on a large scale. Besides, a standardized integration procedure for the deployment of services in existing environments and smart homes has still not been defined. As a matter of fact, service providers use their own closed systems, which are not compatible with other systems, services, or sensors. It is clear that these points make it nearly impossible to deploy activity recognition systems in a real daily-life environment, to make them affordable for real users, and to deploy them in hundreds or thousands of different homes.
This thesis works towards the solution of the above-mentioned problems. Activity and context recognition systems designed for large-scale deployment and real-life scenarios are introduced. The systems are based on low-cost, reliable sensors and can be set up, configured, and trained with little effort, even by technical laymen. It is because of these characteristics that we call our approach "minimally invasive". As a consequence, the large amounts of training data usually required by many state-of-the-art approaches are not necessary. Furthermore, all systems were integrated unobtrusively in real-world or close-to-real-world environments and were evaluated under real-life as well as close-to-real-life conditions. The thesis addresses the following topics: First, a sub-room-level indoor positioning system is introduced. The system is based on low-cost ceiling cameras and a simple computer vision tracking approach. The problem of user identification is solved by correlating modes of locomotion patterns derived from the trajectories of unidentified objects and from on-body motion sensors. Afterwards, the issue of recognizing how and for what mainstream household devices have been used is considered. Based on a low-cost microphone, the water consumption of water taps can be approximated by analyzing plumbing noise. Besides that, operating modes of mainstream electronic devices were recognized using rule-based classifiers, electric current features, and power measurement sensors. As a next step, the difficulty of spotting subtle, barely distinguishable hand activities and the resulting object interactions within a data set containing a large amount of background data is addressed. The problem is solved by introducing an on-body core system which is configured by simple, one-time physical measurements and minimal data collections.
The lack of large training sets is compensated by fusing the system with activity and context recognition systems that are able to reduce the observed search space. Amongst other systems, previously introduced approaches and ideas are revisited in this section. An in-depth evaluation shows the impact of each fusion procedure on the performance and run time of the system. The approaches introduced are able to provide significantly better results than a state-of-the-art inertial system using large amounts of training data. The idea of using unobtrusive sensors has also been applied to the field of behavior analysis. Integrated smartphone sensors are used to detect behavioral changes of individuals due to medium-term stress periods. Behavioral parameters related to location traces, social interactions, and phone usage were analyzed to detect significant behavioral changes of individuals during stress-free and stressful time periods. Finally, as a closing part of the thesis, a standardization approach related to the integration of ambient intelligence systems (as introduced in this thesis) in real-life and large-scale scenarios is shown.
A huge number of computational models and programming languages have been proposed for the description of embedded systems. In contrast to traditional sequential programming languages, they cope directly with the requirements of embedded systems: direct support for concurrent computations and periodic interaction with the environment are only some of the features they offer. Synchronous languages are one class of languages for the development of embedded systems, and they follow the fundamental principle that the execution is divided into a sequence of logical steps. Thereby, each step follows the simplification that the computation of the outputs is finished as soon as the inputs are available. This rigorous abstraction leads to well-defined deterministic parallel composition in general, and to deterministic abortion and suspension in imperative synchronous languages in particular. These key features also allow programs to be translated to hardware and software, and formal verification techniques such as model checking can be easily applied.
Besides the advantages of imperative synchronous languages, some drawbacks can also be listed. Over-synchronization is an effect caused by parallel threads that have to synchronize at each execution step, even if they do not communicate, since the synchronization is implicitly forced by the control flow. This thesis considers the idea of clock refinement to introduce several abstraction layers for communication and synchronization in addition to the existing single-clock abstraction. Thereby, clocks can be refined by several independent clocks, so that a controlled amount of asynchrony between subsequent synchronization points can be exploited by compilers. The declarations of clocks form a tree, and clocks can be defined within the threads of a parallel statement, which allows independent computations based on these clocks without synchronizing the threads. However, the synchronous abstraction is kept at each level of abstraction.
Clock refinement is introduced in this thesis as an extension to the imperative synchronous language Quartz. To this end, new program statements are introduced which allow the definition of a new clock as a refinement of an existing one, and the completion of a step based on a certain clock. Examples are considered to show the impact of the new statements on the behavior of the already existing statements, before the semantics of this extension is formally defined. Furthermore, the thesis presents a compilation algorithm to translate programs to an intermediate format, and to translate the intermediate format to a hardware description. The advantages obtained by the new modeling feature are finally evaluated based on examples.