There is a growing trend toward ever larger wireless sensor networks (WSNs) consisting of thousands or tens of thousands of sensor nodes (e.g., [91, 79]). We believe this trend will continue, and thus scalability plays a crucial role in all protocols and mechanisms for WSNs. Another trend in many modern WSN applications is the time sensitivity of the information delivered from sensors to sinks. In particular, WSNs are a central part of the vision of cyber-physical systems, and since these are essentially closed-loop systems, many WSN applications will have to operate under stringent timing requirements. Hence, it is crucial to develop algorithms that minimize the worst-case delay in WSNs. In addition, almost all WSNs consist of battery-powered nodes, so energy efficiency clearly remains another premier goal in order to keep network lifetime high. This dissertation presents and evaluates designs for WSNs that use multiple sinks to achieve high lifetime and low delay. First, we investigate random and deterministic node placement strategies for large-scale and time-sensitive WSNs. In particular, we focus on tiling-based deterministic node placement strategies and analyze their effects on coverage, lifetime, and delay under both exact placement and stochastically disturbed placement. Next, we present sink placement strategies, which constitute the main contributions of this dissertation. Static sinks are placed, and mobile sinks are given a trajectory. A proper sink placement strategy can improve the performance of a WSN significantly. In general, optimal sink placement with lifetime maximization is an NP-hard problem, and the problem becomes even harder if delay is taken into account. In order to achieve both lifetime and delay goals, we focus on the problem of placing multiple (static) sinks such that the maximum worst-case delay is minimized while keeping the energy consumption as low as possible. Different target networks may require a corresponding sink placement strategy under differing levels of a priori assumptions. Therefore, we first develop an algorithm based on the Genetic Algorithm (GA) paradigm for the case where the sensor nodes' locations are known. For networks where global information is not feasible, we introduce a self-organized sink placement (SOSP) strategy. While GA-based sink placement achieves a near-optimal solution, SOSP provides a good sink placement with lower communication overhead. How to plan the trajectories of many mobile sinks in very large WSNs in order to simultaneously achieve lifetime and delay goals had not yet been treated in the literature. Therefore, we delve into this difficult problem and propose a heuristic framework that uses multiple orbits for the sinks' trajectories. The framework is designed based on geometric arguments to achieve both high lifetime and low delay. In simulations, we compare two instances of our framework, one conceived from a load-balancing argument and one from a distance-minimization argument, with a set of competitors ranging from statically placed sinks to battery-state-aware strategies. We find that our heuristics outperform the competitors in both lifetime and delay. Furthermore, and probably even more importantly, the heuristics, while keeping their good delay and lifetime performance, scale well with an increasing number of sinks.
In brief, the goal of this dissertation is to show that carefully placing nodes and sinks in conventional WSNs, as well as carefully planning trajectories in mobility-enabled WSNs, really pays off for large-scale and time-sensitive WSNs.
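The abstract does not detail the encoding or operators of the GA-based sink placement. A minimal sketch of one plausible formulation, in which a candidate solution is a list of k sink coordinates and the fitness is the maximum node-to-nearest-sink distance (a simple proxy for worst-case delay), might look as follows; the encoding, operators, and all parameters are illustrative assumptions rather than the dissertation's actual algorithm.

    # Illustrative GA sketch for placing k sinks so that the maximum
    # node-to-nearest-sink distance (a proxy for worst-case delay) is minimized.
    # Encoding, operators, and parameters are assumptions, not the thesis' algorithm.
    import random

    def fitness(sinks, nodes):
        """Lower is better: worst-case distance from any node to its nearest sink."""
        return max(min((nx - sx) ** 2 + (ny - sy) ** 2 for sx, sy in sinks) ** 0.5
                   for nx, ny in nodes)

    def ga_sink_placement(nodes, k=3, area=100.0, pop_size=30, generations=100):
        rnd = lambda: (random.uniform(0, area), random.uniform(0, area))
        population = [[rnd() for _ in range(k)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda s: fitness(s, nodes))
            survivors = population[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
                if random.random() < 0.3:                            # mutation
                    child[random.randrange(k)] = rnd()
                children.append(child)
            population = survivors + children
        return min(population, key=lambda s: fitness(s, nodes))

    nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
    best = ga_sink_placement(nodes)
    print(best, fitness(best, nodes))

A GA-based placement of this kind trades computation time for solution quality and requires global knowledge of node locations, which is exactly the assumption that the SOSP strategy relaxes.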
Recent progress and advances in the field of consumer electronics, driven by display technologies and by the sector of mobile, hand-held devices, enable new ways of presenting information to users as well as new ways of user interaction, thereby providing a basis for user-centered applications and work environments.
My thesis focuses on how arbitrary display environments can be utilized both to improve the user experience with regard to the perception of information and to provide intuitive interaction possibilities. On the one hand, advances in display technologies provide the basis for new ways of visualizing content and for collaborative work; on the other hand, fast-moving developments in the consumer market, especially the smartphone market, offer the potential to enhance usability in terms of interaction and can therefore provide additional benefit for users.
Tiled display setups, combining large screen real estate with high resolution, provide new possibilities for visualizing large datasets and for facilitating collaboration in front of a large screen area. Furthermore, these setups present several advantages over traditional single-user workspace environments: in contrast to single-user workspaces, multiple users are able to explore a dataset displayed on a tiled display system at the same time, thus allowing new forms of collaborative work. This also enables face-to-face discussions, which adds further value. Large displays additionally allow the utilization of the user's spatial memory, permitting physical navigation without the need to switch between different windows to explore information.
With Tiled++, I contributed a versatile approach to the bezel problem, one of the top ten research challenges in the field of LCD-based tiled wall setups. Applying the Tiled++ approach creates a large high-resolution Focus & Context screen, combining high-resolution focus areas with low-resolution context information projected onto the bezel areas.
Additionally, the field of user interaction poses an important challenge, especially with regard to large tiled displays, since traditional keyboard-and-mouse interaction devices have reached their limits. My focus in this thesis is on Mobile HCI: devices like mobile phones are utilized to interact with large displays, since they feature various interaction modalities and preserve user mobility.
Large public displays, as a modernized form of traditional bulletin boards, also enable new ways of handling information, displaying content, and interacting with users. Deployed at hot spots, Digital Interactive Public Pinboards can provide an adequate answer to pressing issues such as disaster and crisis management, for responders as well as citizens, and offer new ways of handling information flow (contribution, distribution, and access). My contribution to the field of public display environments was the conception and implementation of an easy-to-use and easy-to-set-up architecture that overcomes shortcomings of current approaches and covers the needs of aid personnel.
Although a niche, Virtual Reality (VR) environments can provide additional value for visualizing specific content. Disciplines such as earth sciences and geology, mechanical engineering, design, and architecture can benefit from VR environments. In order to account for the variety of users, I introduce a more intuitive and user-friendly interaction metaphor, the ARC metaphor.
Visualization challenges stem from the need to cope with ever more complex datasets and to bridge the gap between comprehensibility and loss of information. Furthermore, the visualization approach has to be reasonable, which is a crucial factor when working in interdisciplinary teams, where the level of prior knowledge is diverse. Users have to be able to grasp the visualized content in a fast and reliable way. My contributions are visualization approaches in the field of supportive
Finally, my work illuminates how the synthesis of visualization, interaction, and display technologies enhances the user experience. I promote a holistic view: the user is brought back into the focus of attention and provided with a tool set that offers support without overextending the abilities of, for example, non-expert users, which is a crucial factor in the increasingly interdisciplinary field of computer science.
Wirelessly networked sensor systems are ubiquitous today.
They are used in smoke detectors, room temperature monitoring, and security systems.
Such a sensor system is expected to perform its task reliably and for many years without a battery change.
Due to the networking of sensor systems and their increasingly complex tasks, programming them in a low-level language becomes more and more laborious.
Model-driven development increases maintainability and reduces development time, which generally improves product quality.
As a consequence of the higher complexity, the abstraction from the concrete hardware platform, and ever shorter product development cycles, there is often no time left for energy optimization, so the battery lifetime is shorter than it could be.
This thesis presents several approaches that make it possible to take power consumption into account and to optimize it already during modeling.
Using the example of the inverted pendulum, a highly unstable control system, a wirelessly networked, distributed controller is specified by means of model-driven development.
The code generated from the specification is executed directly on the sensor nodes and must therefore be performant and reliable in order to meet the real-time requirements of the control system, while at the same time consuming as little energy as possible.
To guarantee the reliability of the distributed control loop, deterministic, collision-free data transmission over the wireless communication medium is required.
Synchronization is a further prerequisite for determining a consistent system state.
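The abstract does not name the medium access scheme; a TDMA-style slot schedule is one common way to obtain deterministic, collision-free transmissions on top of synchronized clocks. The following minimal sketch is an illustration under that assumption; the node names, frame length, and slot duration are made up.

    # Minimal sketch of a TDMA-style schedule for deterministic, collision-free
    # transmission (illustrative; the thesis' actual protocol may differ).

    FRAME_SLOTS = 10          # slots per frame (assumption)
    SLOT_LENGTH_MS = 10       # slot duration in milliseconds (assumption)

    def build_schedule(node_ids):
        """Assign each node exactly one slot per frame, so no two nodes share a slot."""
        return {node: slot for slot, node in enumerate(sorted(node_ids))}

    def may_transmit(node, schedule, synced_time_ms):
        """A node may transmit only during its own slot of the current frame.
        synced_time_ms is the network-wide synchronized time."""
        current_slot = (synced_time_ms // SLOT_LENGTH_MS) % FRAME_SLOTS
        return schedule[node] == current_slot

    schedule = build_schedule(["pendulum_sensor", "motor_controller", "gateway"])
    print(schedule)
    print(may_transmit("gateway", schedule, synced_time_ms=125))

Because every node transmits only in its own slot of the synchronized frame, collisions are avoided by construction, which is why tight time synchronization is listed as a prerequisite.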
Most of the evolution in ambient assisted living is due to embedded systems that dynamically adapt themselves in reaction to environmental changes or component/subsystem failures in order to maintain a certain level of safety. Following this evolution, fault tree analysis techniques have been extended with concepts for dynamic adaptation, but the resulting techniques, such as dynamic fault trees or state event fault trees, are not as widely used as expected.
In this report we describe a controlled experiment that analyzes these two techniques with regard to their applicability and efficiency in modeling the dynamic behavior of ambient assisted living systems.
Results of the experiment show that Dynamic Fault Trees are easier and more effective to use, although the participants produced better results (models) with State Event Fault Trees.
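As a small illustration of what "dynamic" adds over classic fault trees, the sketch below contrasts a static AND gate with a priority-AND (PAND) gate, whose output depends on the order in which its inputs fail; the event names and failure times are hypothetical.

    # Illustrative sketch: static AND vs. dynamic priority-AND (PAND) evaluation.
    # Failure times are given in seconds; float('inf') means "did not fail".

    def and_gate(*failure_times):
        """Static AND: fails once all inputs have failed (order irrelevant)."""
        t = max(failure_times)
        return t if t != float('inf') else float('inf')

    def pand_gate(first, second):
        """Dynamic PAND: fails only if 'first' fails before 'second'."""
        if first <= second and second != float('inf'):
            return second
        return float('inf')

    fall_sensor_failure = 12.0     # hypothetical events of an AAL system
    alarm_unit_failure = 30.0

    print(and_gate(fall_sensor_failure, alarm_unit_failure))   # 30.0
    print(pand_gate(alarm_unit_failure, fall_sensor_failure))  # inf (wrong order)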
Most innovation in the automotive industry is driven by embedded systems. They make use of dynamic adaptation to environmental changes or component/subsystem failures in order to remain safe. Following this evolution, fault tree analysis techniques have been extended with concepts for dynamic adaptation, but the resulting techniques, such as state event fault tree analysis, are not widely used in practice.
In this report we present the results of a controlled experiment that analyzes two of these techniques (State Event Fault Trees and fault trees combined with Markov chains) with regard to their applicability and efficiency in modeling the dynamic behavior of dynamic embedded systems.
The experiment was conducted with students of the TU Kaiserslautern, who modeled different safety aspects of an ambient assisted living system.
The main result of the experiment is that SEFTs were easier and more effective to use.
Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down, putting the expected advantages more and more at risk. To investigate this variability erosion and keep productivity at a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy optimizes variable code within the scope of CC, the other transitions CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be carried out automatically, and the result of the CC optimization is provided. Related issues such as applicability and cost of the improvement are also discussed.
In recent years, recommender systems have been widely used for a variety of different kinds of items such as books, movies, and music. However, current recommendation approaches have often been criticized for suffering from overspecialization and thus not sufficiently considering a user's diverse topics of interest. In this thesis we present a novel approach to extracting contextualized user profiles which enable recommendations that take into account a user's full range of interests. The method applies algorithms from the domain of topic detection and tracking to automatically identify diverse user interests and to represent them with descriptive labels. That way, manual annotations of interest topics by the users, e.g., from a predefined domain taxonomy, are no longer required. The approach has been tested in two scenarios: First, we implemented a content-based recommender system for an Enterprise 2.0 resource sharing platform where the contextualized user interest profiles have been used to generate recommendations with a high degree of inter-topic diversity. In an effort to harness the collective intelligence of the users, the resources in the system were described by means of user-generated metadata. The evaluation experiments show that our approach is likely to capture a multitude of diverse interest topics per user. The labels extracted are specific to these topics and can be used to retrieve relevant on-topic resources. Second, a slightly adapted variant of the algorithm has been used to provide music recommendations based on the user's current mood. In this scenario music artists are described using freely available Semantic Web data from the Linked Open Data cloud, thus not requiring expensive metadata annotations by experts. The evaluation experiments conducted show that many users have a multitude of different preferred music styles; however, a correlation between these music styles and music mood categories could not be observed. An integration of our proposed user profiles with existing user model ontologies seems promising for enabling context-sensitive recommendations.
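The abstract does not disclose the concrete topic detection and tracking algorithm; the following sketch only illustrates the general idea of grouping a user's items into separate interest topics and then recommending across those topics to obtain inter-topic diversity. The similarity measure, threshold, and data are illustrative assumptions, not the thesis' method.

    # Illustrative sketch: split a user's items into interest topics by keyword
    # overlap, then pick recommendations from every topic for inter-topic diversity.

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def detect_topics(item_keywords, threshold=0.2):
        """Greedy single-pass clustering of items into topics."""
        topics = []  # each topic: {"items": [...], "keywords": set(...)}
        for item, kw in item_keywords.items():
            best = max(topics, key=lambda t: jaccard(kw, t["keywords"]), default=None)
            if best and jaccard(kw, best["keywords"]) >= threshold:
                best["items"].append(item)
                best["keywords"] |= kw
            else:
                topics.append({"items": [item], "keywords": set(kw)})
        return topics

    def diverse_recommendations(topics, candidates, per_topic=1):
        """Pick the best-matching candidates for each topic separately."""
        recs = []
        for t in topics:
            ranked = sorted(candidates.items(),
                            key=lambda c: jaccard(c[1], t["keywords"]), reverse=True)
            recs += [name for name, _ in ranked[:per_topic]]
        return recs

    profile = detect_topics({
        "post_1": {"python", "testing"},
        "post_2": {"unit", "testing", "python"},
        "post_3": {"hiking", "alps"},
    })
    print(diverse_recommendations(profile,
          {"doc_a": {"python", "ci"}, "doc_b": {"alps", "maps"}}))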
As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework that recovers complex feature correlations from existing product configurations. These correlations are then pruned automatically and validated by domain experts. In the implementation, we use association mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded system domain, where we identify a large number of complex feature correlations.
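To illustrate the kind of association mining referred to above, the sketch below extracts single-antecedent rules of the form "feature A selected implies feature B selected" from a set of product configurations and keeps only rules above chosen support and confidence thresholds; the feature names and thresholds are made up for illustration and are not taken from the evaluated SPL.

    # Illustrative sketch: mine single-antecedent association rules
    # (feature A => feature B) from existing product configurations.
    from itertools import permutations

    configs = [
        {"heated_seats", "seat_memory", "parking_assist"},
        {"heated_seats", "seat_memory"},
        {"parking_assist", "rear_camera"},
        {"heated_seats", "seat_memory", "rear_camera"},
    ]

    def mine_rules(configs, min_support=0.5, min_confidence=0.9):
        n = len(configs)
        features = set().union(*configs)
        rules = []
        for a, b in permutations(features, 2):
            support_a = sum(a in c for c in configs) / n
            support_ab = sum(a in c and b in c for c in configs) / n
            confidence = support_ab / support_a if support_a else 0.0
            if support_ab >= min_support and confidence >= min_confidence:
                rules.append((a, b, support_ab, confidence))
        return rules

    for a, b, sup, conf in mine_rules(configs):
        print(f"{a} => {b}  (support={sup:.2f}, confidence={conf:.2f})")

Rules that survive the thresholds would then be handed to domain experts for validation, as described in the abstract.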
Data integration aims at providing uniform access to heterogeneous data managed by distributed source systems. Data sources can range from legacy systems, databases, and enterprise applications to web-scale data management systems. The materialized approach to data integration extracts data from the sources, transforms and consolidates the data, and loads it into an integration system, where it is persistently stored and can be queried and analyzed.
To support materialized data integration, so-called Extract-Transform-Load (ETL) systems have been built and are widely used to populate data warehouses today. While ETL is considered state of the art in enterprise data warehousing, a new paradigm known as MapReduce has recently gained popularity for web-scale data transformations, such as web indexing or PageRank computation.
The input data of both ETL and MapReduce programs keeps changing over time, for instance while business transactions are processed or the web is crawled. Hence, the results of ETL and MapReduce programs become stale and need to be recomputed from time to time. Recurrent computations over changing input data can be performed in two ways: the result may either be recomputed from scratch or recomputed in an incremental fashion. The idea behind the latter approach is to update the existing result in response to incremental changes in the input data. This is typically more efficient than the full recomputation approach, because reprocessing unchanged portions of the input data can often be avoided.
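A minimal sketch of the incremental idea, using a simple grouped count (of the kind a MapReduce word count or an ETL aggregation produces) that is updated from inserted and deleted input records instead of being recomputed from scratch; the data and helper functions are illustrative, not the thesis' framework.

    # Illustrative sketch: maintain a grouped count incrementally instead of
    # recomputing it from scratch when the input changes.
    from collections import Counter

    def full_recompute(records):
        """Recompute the aggregate over the complete input."""
        return Counter(r["category"] for r in records)

    def incremental_update(result, inserted, deleted):
        """Apply only the delta (inserted/deleted records) to the existing result."""
        result = Counter(result)
        result.update(r["category"] for r in inserted)
        result.subtract(r["category"] for r in deleted)
        return +result  # drop categories whose count dropped to zero

    base = [{"category": "books"}, {"category": "books"}, {"category": "music"}]
    result = full_recompute(base)

    delta_ins = [{"category": "music"}]
    delta_del = [{"category": "books"}]
    print(incremental_update(result, delta_ins, delta_del))
    # Counter({'music': 2, 'books': 1}) -- same as full_recompute over the new input

The efficiency gain comes from touching only the delta; capturing that delta reliably (change data capture) is one of the challenges discussed in the thesis.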
Incremental recomputation techniques have been studied by the database research community mainly in the context of the maintenance of materialized views and have been adopted by all major commercial database systems today. However, neither today's ETL tools nor MapReduce support incremental recomputation techniques. The situation of ETL and MapReduce programmers nowadays is thus quite comparable to the situation of database programmers in the early 1990s. This thesis makes an effort to transfer incremental recomputation techniques into the ETL and MapReduce environments. This poses interesting research challenges, because these environments differ fundamentally from the relational world with regard to query and programming models, change data capture, transactional guarantees, and consistency models. However, as this thesis will show, incremental recomputations are feasible in ETL and MapReduce and may lead to considerable efficiency improvements.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as the standard sorting method for Oracle's Java 7 runtime library. The decision for the change was based on empirical studies showing that, on average, the new algorithm is faster than the formerly used classic Quicksort. Surprisingly, the improvement was achieved by using a dual-pivot approach, an idea that several theoretical studies in the past had deemed unpromising. In this thesis, I try to find the reason for this unexpected success.
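For readers unfamiliar with the approach, the following is a simplified sketch of a dual-pivot partitioning scheme in the spirit of Yaroslavskiy's algorithm, written here in Python; it is not the exact implementation analyzed in the thesis or shipped with Java 7.

    # Simplified dual-pivot Quicksort sketch (not the exact Java 7 implementation):
    # two pivots p <= q split the array into three parts: < p, between p and q, > q.

    def dual_pivot_quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return a
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]                 # the two pivots, p <= q
        lt, i, gt = lo + 1, lo + 1, hi - 1  # three-way partitioning pointers
        while i <= gt:
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
                i += 1
            elif a[i] > q:
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
            else:
                i += 1
        lt -= 1
        gt += 1
        a[lo], a[lt] = a[lt], a[lo]         # move pivots to their final positions
        a[hi], a[gt] = a[gt], a[hi]
        dual_pivot_quicksort(a, lo, lt - 1)
        dual_pivot_quicksort(a, lt + 1, gt - 1)
        dual_pivot_quicksort(a, gt + 1, hi)
        return a

    print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))

Each partitioning step thus produces three subproblems instead of two, which is the structural difference whose practical consequences the thesis analyzes.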
My focus is on the precise and detailed average-case analysis, aiming at the flavor of Knuth's series "The Art of Computer Programming". In particular, I go beyond abstract measures like counting key comparisons and try to understand the efficiency of the algorithms at different levels of abstraction. Whenever possible, precise expected values are preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods discussed here are actually usable in practice and (b) the analysis results contribute to a sound comparison of the Quicksort variants.