Kaiserslautern - Fachbereich Informatik
Typically, software engineers implement their software according to the design of the software structure. Relations between classes and interfaces, such as method-call and inheritance relations, are essential parts of a software structure. Accordingly, analyzing several types of relations benefits the static analysis of the software structure. The tasks of this analysis include, but are not limited to, understanding (legacy) software, checking guidelines, improving product lines, finding structure, and re-engineering existing software. Graphs with multiple edge types are a possible representation of these relations, with the relations as edges and the classes and interfaces of the software as nodes. Such a multi-edge-type graph can then be mapped to visualizations. However, the visualizations have to cope with the multiplicity of relation types and with scalability, while still enabling software engineers to recognize visual patterns.
To advance the use of visualizations for analyzing the static structure of software systems, I tracked different development phases of the interactive multi-matrix visualization (IMMV) and conclude with an extended user study. In this study, the visual structures found with IMMV and PNLV were determined and classified systematically into four categories: high degree, within-package edges, cross-package edges, and no edges. Beyond the structures found with these handy tools, other structures that are interesting for software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended graph-layout approach was presented that improves the quality of the decomposition and the drawing of directed graphs according to their topology, based on rigorous definitions. The extension includes a detailed description and analysis of the decomposition and drawing algorithms, including their polynomial time and space complexity. Finally, I addressed the visualization of graphs with multiple edge types using small multiples, where each tile is dedicated to one edge type and uses the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software. I applied this approach to four software systems to show its usefulness.
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment the information in documents so that, in automatic document processing workflows, each kind of segmented information is processed only when required.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) lead to high performance, i.e., better than previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
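As a hedged illustration of what a part-based pipeline can look like in practice (the descriptor choice, classifier, clustering step, and all names below are assumptions made for this sketch, not the thesis's implementation): each local keypoint is described and classified independently, and the positively classified parts are grouped into segments.

```python
# Hypothetical sketch of a part-based segmentation pipeline (not the author's exact method).
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import DBSCAN

def extract_parts(gray_image):
    """Detect local keypoints and compute descriptors (the "parts")."""
    sift = cv2.SIFT_create()                      # assumed descriptor; any local feature works
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    points = np.array([kp.pt for kp in keypoints], dtype=np.float32)
    return points, descriptors

def train_part_classifier(train_images, train_masks):
    """Label each training keypoint by the pixel label under it (e.g., signature = 1)."""
    X, y = [], []
    for img, mask in zip(train_images, train_masks):
        pts, desc = extract_parts(img)
        if desc is None:
            continue
        X.append(desc)
        y.append([mask[int(p[1]), int(p[0])] for p in pts])
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(np.vstack(X), np.concatenate(y))
    return clf

def segment(image, clf, eps=40.0):
    """Classify keypoints and group positive ones into spatial clusters (segments)."""
    pts, desc = extract_parts(image)
    labels = clf.predict(desc)
    positive = pts[labels == 1]
    if len(positive) == 0:
        return []
    clusters = DBSCAN(eps=eps, min_samples=3).fit_predict(positive)
    return [positive[clusters == c] for c in set(clusters) if c != -1]
```

In such a scheme a handful of labeled pages already yields many thousands of labeled keypoints, which is one intuition for why a few training samples can suffice.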
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
The system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the detected signature candidates are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on a document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
Furthermore, to segment colored stamps, this thesis presents a novel feature set which is based on intensity gradients, is able to extract unseen, colored, arbitrarily shaped, textual as well as graphical stamps, and outperforms the state-of-the-art methods.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings text often overlaps with graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method, based on our generic information segmentation framework, that is able to detect text touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have many channels, ranging from the ultraviolet to the infrared region and including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 and 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is substantial when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera-captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera-captured document images is the lack of labeled datasets of camera-captured document images. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera-captured/scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera-captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets, in English and Russian, shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera-captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with word- and character-level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open-source OCR systems on camera-captured document images. The evaluation results show that the existing OCR systems, which already achieve very high accuracies on scanned documents, fail on camera-captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed that is based on a variant of recurrent neural networks, i.e., Long Short-Term Memory (LSTM), and that outperforms all of the existing OCR engines on camera-captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in areas closely related to information segmentation. These include automatic analysis and sketch-based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
Towards A Non-tracking Web
(2016)
Today, many publishers (e.g., websites, mobile application developers) commonly use third-party analytics services and social widgets. Unfortunately, this scheme allows these third parties to track individual users across the web, creating privacy concerns and leading to reactions to prevent tracking via blocking, legislation and standards. While improving user privacy, these efforts do not consider the functionality third-party tracking enables publishers to use: to obtain aggregate statistics about their users and increase their exposure to other users via online social networks. Simply preventing third-party tracking without replacing the functionality it provides cannot be a viable solution; leaving publishers without essential services will hurt the sustainability of the entire ecosystem.
In this thesis, we present alternative approaches to bridge this gap between privacy for users and functionality for publishers and other entities. We first propose a general and interaction-based third-party cookie policy that prevents third-party tracking via cookies, yet enables social networking features for users when wanted, and does not interfere with non-tracking services for analytics and advertisements. We then present a system that enables publishers to obtain rich web analytics information (e.g., user demographics, other sites visited) without tracking the users across the web. While this system requires no new organizational players and is practical to deploy, it necessitates the publishers to pre-define answer values for the queries, which may not be feasible for many analytics scenarios (e.g., search phrases used, free-text photo labels). Our second system complements the first system by enabling publishers to discover previously unknown string values to be used as potential answers in a privacy-preserving fashion and with low computation overhead for clients as well as servers. These systems suggest that it is possible to provide non-tracking services with (at least) the same functionality as today’s tracking services.
The Context and Its Importance: In safety and reliability analysis, the amount of information generated by Minimal Cut Set (MCS) analysis is large.
The Top Level Event (TLE), i.e., the root of the fault tree (FT), represents a hazardous state of the system being analyzed.
MCS analysis helps in analyzing the fault tree qualitatively, and also quantitatively when accompanied by quantitative measures.
The information reveals bottlenecks in the fault tree design, leading to the identification of weaknesses of the system being examined.
Safety analysis (which contains the MCS analysis) is especially important for critical systems, where harm can be done to the environment or to humans, causing injuries or even death, during system usage.
Minimal Cut Set (MCS) analysis is performed using computers and generates a lot of information.
This phase is called MCS analysis I in this thesis.
The information is then analyzed by the analysts to determine possible issues and to improve the design of the system regarding its safety as early as possible.
This phase is called MCS analysis II in this thesis.
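To make concrete what MCS analysis I produces (the fault tree below is invented for illustration, and this naive expansion is a textbook approach, not the tooling discussed in this thesis): a minimal cut set is a smallest set of basic events whose joint occurrence causes the top level event.

```python
# Illustrative only: a tiny fault tree and a naive minimal cut set computation (MCS analysis I).
from itertools import product

# Gates are ("AND", [children]) or ("OR", [children]); leaves are basic-event names (strings).
fault_tree = ("OR", [
    ("AND", ["PumpFails", "BackupPumpFails"]),   # loss of both pumps
    ("AND", ["SensorFails", "OperatorMisses"]),  # undetected failure
    "PowerSupplyFails",                          # single point of failure
])

def cut_sets(node):
    """Return a list of cut sets (frozensets of basic events) that cause this node."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                             # any child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    # AND: one cut set from every child must occur together
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(cut_sets_list):
    """Keep only minimal cut sets (those with no proper subset that is also a cut set)."""
    unique = set(cut_sets_list)
    return [s for s in unique if not any(t < s for t in unique)]

for mcs in sorted(minimize(cut_sets(fault_tree)), key=len):
    print(sorted(mcs))   # e.g. ['PowerSupplyFails'] is an order-1 MCS, the biggest bottleneck
```

MCS analysis II then means interpreting such output, e.g., recognizing that an order-1 cut set like the single power-supply failure is a design bottleneck.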
The goal of my thesis was developing interactive visualizations to support MCS analysis II of one fault tree (FT).
The Methodology: As safety visualization (in this thesis, Minimal Cut Set analysis II visualization) is an emerging field, and as no complete checklist of Minimal Cut Set analysis II requirements and gaps was available from the perspective of visualization and interaction capabilities,
I conducted multiple studies using different methods and data sources (i.e., triangulation of methods and data) to determine these requirements and gaps before developing and evaluating visualizations and interactions supporting Minimal Cut Set analysis II.
Thus, the following approach was taken in my thesis:
1- First, a triangulation of mixed methods and data sources was conducted.
2- Then, four novel interactive visualizations and one novel interaction widget were developed.
3- Finally, these interactive visualizations were evaluated both objectively and subjectively (compared to multiple safety tools),
from the point of view of users and developers of the safety tools that perform MCS analysis I with respect to their degree of support for MCS analysis II, and from the point of view of non-domain people using empirical strategies.
The Spiral tool supports analysts with different color vision, i.e., full vision as well as the color deficiencies protanopia, deuteranopia, and tritanopia. It supports 100 out of 103 (97%) of the requirements obtained from the triangulation and fills 37 out of 39 (95%) of the gaps. Its usability was rated high (better than their best currently used tools) by users of the safety and reliability tools RiskSpectrum, ESSaRel, FaultTree+, and a self-developed tool, and at least similar to the best currently used tools from the point of view of the CAFTA tool developers. Its quality regarding the degree of support for MCS analysis II was higher than that of the FaultTree+ tool. The time spent for discovering the critical MCSs in a problem of 540 MCSs (with a worst case of all MCSs having equal order) was less than a minute, while achieving 99.5% accuracy. The Spiral visualization scaled beyond 4000 MCSs for a comparison task. The Dynamic Slider reduces the interaction movements by up to 85.71% compared to previous sliders and solves their overlapping-thumb issue. In addition, the tool provides a 3D model view of the system being analyzed; the ability to change the coloring of MCSs according to the color vision of the user; selection of a BE (i.e., multi-selection of MCSs), so that the BEs' NoO and quality can be observed; two interaction speeds for panning and zooming in the MCS, BE, and model views; and an MCS, a BE, and a physical tab for supporting analyses starting from the MCSs, the BEs, or the physical parts. It combines MCS analysis results with the model of an embedded system, enabling analysts to directly relate safety information to the corresponding parts of the system being analyzed, and it provides an interactive mapping between the textual information of the BEs and MCSs and the parts related to the BEs.
Verifications and Assessments: I have evaluated all visualizations and the interaction widget both objectively and subjectively, and finally evaluated the final Spiral visualization tool also both objectively and subjectively regarding its perceived quality and regarding its degree of supporting MCS analysis II.
When designing autonomous mobile robotic systems, there usually is a trade-off between the three opposing goals of safety, low cost, and performance.
Improving on one of these design goals usually comes at the expense of one or even both of the other goals.
If for example the performance of a mobile robot is increased by making use of higher vehicle speeds, then the safety of the system is usually decreased, as, under the same circumstances, faster robots are often also more dangerous robots.
This decrease of safety can be mitigated by installing better sensors on the robot, which ensure the safety of the system, even at high speeds.
However, this solution is accompanied by an increase of system cost.
In parallel to mobile robotics, there is a growing amount of ambient and aware technology installations in today's environments - no matter whether in private homes, offices or factory environments.
Part of this technology are sensors that are suitable for assessing the state of an environment.
For example, motion detectors that are used to automate lighting can be used to detect the presence of people.
This work constitutes a meeting point between the two fields of robotics and aware environment research.
It shows how data from aware environments can be used to approach the abovementioned goal of establishing safe, performant and additionally low-cost robotic systems.
Sensor data from aware technology, which is often unreliable due to its low-cost nature, is fed to probabilistic methods for estimating the environment's state.
Together with models, these methods cope with the uncertainty and unreliability associated with the sensor data, gathered from an aware environment.
The estimated state includes positions of people in the environment and is used as an input to the local and global path planners of a mobile robot, enabling safe, cost-efficient and performant mobile robot navigation during local obstacle avoidance as well as on a global scale, when planning paths between different locations.
The probabilistic algorithms enable graceful degradation of the whole system.
Even if, in the extreme case, all aware technology fails, the robots will continue to operate, by sacrificing performance while maintaining safety.
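As a hedged sketch of how such probabilistic methods can digest unreliable aware-environment data (all sensor and transition probabilities below are invented for illustration, not taken from this work), a simple recursive Bayes filter can maintain the probability that a person is present in a room monitored by a noisy motion detector:

```python
# Illustrative recursive Bayes filter for "person present?" from a noisy motion detector.
# All probabilities below are assumed values, not taken from the thesis.

P_DETECT = 0.7        # P(motion reported | person present)  - low-cost sensor, misses often
P_FALSE_ALARM = 0.05  # P(motion reported | nobody present)
P_ENTER = 0.02        # P(person enters between two time steps)
P_LEAVE = 0.10        # P(person leaves between two time steps)

def predict(p_present):
    """Propagate the belief one time step using a simple enter/leave model."""
    return p_present * (1.0 - P_LEAVE) + (1.0 - p_present) * P_ENTER

def update(p_present, motion_reported):
    """Bayes update with the (unreliable) sensor reading."""
    if motion_reported:
        likelihood_present, likelihood_absent = P_DETECT, P_FALSE_ALARM
    else:
        likelihood_present, likelihood_absent = 1.0 - P_DETECT, 1.0 - P_FALSE_ALARM
    numerator = likelihood_present * p_present
    return numerator / (numerator + likelihood_absent * (1.0 - p_present))

belief = 0.5  # uninformed prior
for reading in [False, True, True, False, False]:
    belief = update(predict(belief), reading)
    print(f"P(person present) = {belief:.2f}")
```

A path planner could, for instance, prefer routes through rooms with low presence probability or reduce speed where it is high; if sensor updates stop, repeated predict steps let the belief drift back to the motion model's stationary value, which is one way graceful degradation can be realized.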
All the presented methods of this work have been validated using simulation experiments as well as using experiments with real hardware.
An Adaptive and Dynamic Simulation Framework for Incremental, Collaborative Classifier Fusion
(2016)
Abstract. To investigate incremental collaborative classifier fusion techniques, we have developed a comprehensive simulation framework. It is highly flexible and customizable, and can be adapted to various settings and scenarios. The toolbox is realized as an extension to the NetLogo multi-agent-based simulation environment using its comprehensive Java API. The toolbox has been integrated into two different environments: one for demonstration purposes and another modeled on persons, based on realistic motion data from Zurich, who communicate in an ad hoc fashion using mobile devices.
In this thesis, we developed a desynchronization design flow with the goal of easing the development effort of distributed embedded systems. The starting point of this design flow is a network of synchronous components. By transforming this synchronous network into a dataflow process network (DPN), we ensure that important properties that are difficult or theoretically impossible to analyze directly on DPNs are preserved by construction. In particular, both deadlock-freeness and buffer boundedness can be preserved after desynchronization. For the correctness of desynchronization, we developed a criterion consisting of two properties: a global property that demands the correctness of the synchronous network, and a local property that requires the latency-insensitivity of each local synchronous component. As the global property is also a correctness requirement of synchronous systems in general, we take this property as an assumption of our desynchronization. However, the local property is in general not satisfied by all synchronous components and therefore needs to be verified before desynchronization. In this thesis, we developed a novel technique for the verification of the local property that can be carried out very efficiently. Finally, we developed a model transformation method that translates a set of synchronous guarded actions (an intermediate format for synchronous systems) to an asynchronous actor description language (CAL). Our theorem ensures that, once the correctness verification is passed, the generated DPN of asynchronous processes (or actors) preserves the functional behavior of the original synchronous network. Moreover, by the correctness of the synchronous network, our theorem guarantees that the derived DPN is deadlock-free and can be implemented with only finitely bounded buffers.
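The following sketch is not the thesis's CAL-based transformation but a minimal, hedged illustration of the target execution model: a dataflow process network in which two actors communicate over a finitely bounded FIFO and fire only when their input tokens and output space are available, so no global clock is needed.

```python
# Minimal illustration of a dataflow process network (DPN) with a finitely bounded buffer.
from collections import deque

class BoundedFifo:
    def __init__(self, capacity):
        self.capacity, self.items = capacity, deque()
    def can_put(self):  return len(self.items) < self.capacity
    def can_get(self):  return len(self.items) > 0
    def put(self, x):   self.items.append(x)
    def get(self):      return self.items.popleft()

def producer(out_fifo, values):
    """Actor: emits one token per firing, but only while the output buffer has room."""
    pending = deque(values)
    def fire():
        if pending and out_fifo.can_put():
            out_fifo.put(pending.popleft())
            return True
        return False
    return fire

def doubler(in_fifo, results):
    """Actor: consumes one token per firing and appends the doubled value."""
    def fire():
        if in_fifo.can_get():
            results.append(2 * in_fifo.get())
            return True
        return False
    return fire

channel, results = BoundedFifo(capacity=2), []
actors = [producer(channel, [1, 2, 3, 4]), doubler(channel, results)]

# Untimed scheduler: keep firing actors until none can make progress.
progress = True
while progress:
    progress = False
    for fire in actors:
        if fire():
            progress = True
print(results)  # [2, 4, 6, 8] -- all tokens processed although the channel holds at most 2
```

With such firing rules, the producer is throttled whenever the channel is full, which mirrors the buffer-boundedness property discussed above.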
Stochastic Network Calculus (SNC) emerged from two branches in the late 90s: the theory of effective bandwidths and its predecessor, the Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and support their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving the objective of computing exact distributions behind, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a maximal queue length of a system that is exceeded only with a known, small probability.
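As a hedged illustration of the typical shape of such bounds (a textbook-style, discrete-time MGF bound for a constant-rate server, given here for orientation and not quoted from the thesis): if the arrivals are $(\sigma(\theta),\rho(\theta))$-bounded, i.e., $\frac{1}{\theta}\log \mathbb{E}\big[e^{\theta A(s,t)}\big] \le \rho(\theta)(t-s) + \sigma(\theta)$, and the server works at a constant rate $C > \rho(\theta)$, then the backlog $B$ satisfies

\[
\mathbb{P}(B > b) \;\le\; \frac{e^{\theta\sigma(\theta)}}{1 - e^{-\theta\,(C-\rho(\theta))}}\; e^{-\theta b},
\]

so the violation probability decays exponentially in the threshold $b$, and the free parameter $\theta > 0$ can be optimized to tighten the bound.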
This work includes several contributions towards the theory of SNC. They are organized as follows:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow deriving the performance bounds and simplifying complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows users to implement their own functions, such as traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development of a statistical network calculus. In many applications, the properties of the included elements are mostly unknown. To that end, assumptions about the underlying processes are made and backed by measurement-based statistical methods. This thesis presents a way to integrate possible modeling errors into the bounds of SNC. As a byproduct, a dynamic view on the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although still valid for every time t, only hold for one time instance at a time. Sample path bounds are only achieved by using Boole's inequality. This thesis presents an alternative method by adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago, but remained an open problem for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
Distributed systems are omnipresent nowadays and networking them is fundamental for the continuous dissemination and thus availability of data. Provision of data in real-time is one of the most important non-functional aspects that safety-critical networks must guarantee. Formal verification of data communication against worst-case deadline requirements is key to certification of emerging x-by-wire systems. Verification allows aircraft to take off, cars to steer by wire, and safety-critical industrial facilities to operate. Therefore, different methodologies for worst-case modeling and analysis of real-time systems have been established. Among them is deterministic Network Calculus (NC), a versatile technique that is applicable across multiple domains such as packet switching, task scheduling, system on chip, software-defined networking, data center networking and network virtualization. NC is a methodology to derive deterministic bounds on two crucial performance metrics of communication systems:
(a) the end-to-end delay data flows experience and
(b) the buffer space required by a server to queue all incoming data.
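As a hedged reminder of the standard deterministic results behind these two metrics (textbook NC bounds, not formulas specific to this thesis): if a flow is constrained by an arrival curve $\alpha$ and traverses a server offering a service curve $\beta$, then

\[
\text{delay} \;\le\; h(\alpha,\beta) = \sup_{t\ge 0}\,\inf\{\, d\ge 0 : \alpha(t) \le \beta(t+d) \,\}, \qquad
\text{backlog} \;\le\; v(\alpha,\beta) = \sup_{t\ge 0}\,\big(\alpha(t)-\beta(t)\big).
\]

For example, with a token-bucket arrival curve $\alpha(t)=b+rt$, a rate-latency service curve $\beta(t)=R\,[t-T]^{+}$, and $r\le R$, these evaluate to $\text{delay}\le T + b/R$ and $\text{backlog}\le b + rT$.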
NC has already seen application in the industry, for instance, basic results have been used to certify the backbone network of the Airbus A380 aircraft.
The NC methodology for worst-case performance analysis of distributed real-time systems consists of two branches. Both share the NC network model but diverge regarding their respective derivation of performance bounds, i.e., their analysis principle. NC was created as a deterministic system theory for queueing analysis and its operations were later cast in a (min,+)-algebraic framework. This branch is known as algebraic Network Calculus (algNC). While algNC can efficiently compute bounds on delay and backlog, the algebraic manipulations do not allow NC to attain the most accurate bounds achievable for the given network model. These tight performance bounds can only be attained with the other, newly established branch of NC, the optimization-based analysis (optNC). However, the only optNC analysis that can currently derive tight bounds was proven to be computationally infeasible even for the analysis of moderately sized networks other than simple sequences of servers.
This thesis makes various contributions in the area of algNC: accuracy within the existing framework is improved, distributivity of the sensor network calculus analysis is established, and most significantly the algNC is extended with optimization principles. They allow algNC to derive performance bounds that are competitive with optNC. Moreover, the computational efficiency of the new NC approach is improved such that this thesis presents the first NC analysis that is both accurate and computationally feasible at the same time. It allows NC to scale to larger, more complex systems that require formal verification of their real-time capabilities.
In current technological developments, distributed embedded real-time systems play an increasingly central role and are becoming carriers of innovation. The resulting growth in functionality of distributed real-time systems and their increasing use in safety-relevant application areas make the development of such systems an ever greater challenge. On the one hand, there are challenges concerning communication with respect to real-time capability and efficient bandwidth usage; on the other hand, suitable methods are needed to support and accompany the development process of such complex systems through tests and evaluations. The work presented here addresses both aspects and is accordingly divided into two parts.
The first part of this work deals with the development of new communication solutions to meet the increased communication requirements. The use of distributed real-time systems in the context of safety-relevant tasks requires time-triggered communication systems that are able to give deterministic guarantees regarding real-time capability. However, these classical approaches based on exclusive reservations are very inefficient with respect to bandwidth usage, especially for (rare) sporadic messages.
Mode-Based Scheduling with Fast Mode-Signaling, as used in this work, is a technique for improving the bandwidth usage of time-triggered communication while at the same time guaranteeing real-time capability. To make this possible, Mode-Based Scheduling allows a controlled, slot-based contention, which is resolved by fast mode signaling. In the course of this work, several robust, reliable, and above all deterministic realizations of Mode-Based Scheduling with Fast Mode-Signaling based on existing wired communication protocols (TTCAN and FlexRay) are presented, as well as concepts that enable a simple integration into further communication technologies (such as wireless ad hoc networks).
The second part of this work does not focus on communication aspects alone, but presents an approach to support and accompany the development process of distributed embedded real-time systems through continuous tests and evaluations in all development phases. FERAL (a framework for coupling specialized simulators), co-developed and extended in the context of the Innovation Center for Applied Systems Modeling, offers an ideal basis for virtual prototyping of complex distributed embedded real-time systems and enables tests and evaluations of the systems in a realistically simulated environment. The developed simulators for current communication technologies enable realistic simulations of the interactions within the distributed system. By supporting simulation systems with components at different abstraction levels, FERAL can be used in all development phases. A case study shows how FERAL can be used to incrementally refine a simulation system together with the components to be realized. In this way, an executable simulation system is available for testing during every development phase. The concepts and simulators developed for FERAL make it possible to evaluate design alternatives and to support the choice of a communication technology with simulation results.
Computer Vision (CV) problems, such as image classification and segmentation, have traditionally been solved by manual construction of feature hierarchies or incorporation of other prior knowledge. However, noisy images, varying viewpoints and lighting conditions, and clutter in real-world images make these problems challenging. Such tasks cannot be efficiently solved without learning from data. Therefore, many Deep Learning (DL) approaches have recently been successful for various CV tasks, for instance, image classification, object recognition and detection, action recognition, video classification, and scene labeling. The main focus of this thesis is to investigate a purely learning-based approach, in particular Multi-Dimensional LSTM (MD-LSTM) recurrent neural networks, to tackle challenging CV tasks, namely classification and segmentation of 2D and 3D image data. Due to the structural nature of MD-LSTM, the network learns directly from raw pixel values and takes the complex spatial dependencies of each pixel into account. This thesis provides several key contributions in the fields of CV and DL.
Several MD-LSTM network architectural options are suggested based on the type of input and output, as well as the required tasks. In addition to the main layers, which are an input layer, a hidden layer, and an output layer, several additional layers can be added, such as a collapse layer and a fully connected layer. First, a single Two-Dimensional LSTM (2D-LSTM) is directly applied to texture images for segmentation and shows improvement over other texture segmentation methods. Besides, a 2D-LSTM layer with a collapse layer is applied to image classification on texture and scene images and provides accurate classification results. In addition, a deeper model with a fully connected layer is introduced to deal with more complex images for scene labeling and outperforms other state-of-the-art methods, including deep Convolutional Neural Networks (CNN). Here, several input and output representation techniques are introduced to achieve robust classification. Randomly sampled windows used as input are transformed in scale and rotation, and their predictions are integrated to obtain the final classification. To achieve multi-class image classification on scene images, several pruning techniques are introduced. This framework provides good results in automatic web-image tagging. The next contribution is an investigation of 3D data with MD-LSTM. The traditional cuboid order of computations in MD-LSTM is re-arranged in a pyramidal fashion. The resulting Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) is easy to parallelize, especially for 3D data such as stacks of brain slice images. PyraMiD-LSTM was tested on 3D biomedical volumetric images and achieved the best known pixel-wise brain image segmentation results as well as competitive results on Electron Microscopy (EM) data for membrane segmentation.
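To make the recurrence structure concrete, here is a hedged, minimal NumPy sketch of a single 2D-LSTM forward pass (a simplified textbook-style formulation with one forget gate per scanning direction; layer sizes, random initialization, and the absence of training are illustrative assumptions, not the thesis's implementation):

```python
# Minimal 2D-LSTM forward pass over an image, scanning top-left to bottom-right.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm2d_forward(image, hidden_size=8, seed=0):
    """image: (H, W, C) array. Returns per-pixel hidden states of shape (H, W, hidden_size)."""
    rng = np.random.default_rng(seed)
    H, W, C = image.shape
    # One weight matrix per gate: input, output, cell candidate, and two forget gates
    # (one for the left neighbor, one for the top neighbor).
    in_dim = C + 2 * hidden_size                      # pixel + left hidden + top hidden
    Wi, Wo, Wg, Wfl, Wft = (0.1 * rng.standard_normal((in_dim, hidden_size)) for _ in range(5))

    h = np.zeros((H, W, hidden_size))
    c = np.zeros((H, W, hidden_size))
    zero = np.zeros(hidden_size)
    for y in range(H):
        for x in range(W):
            h_left = h[y, x - 1] if x > 0 else zero   # recurrence along the x dimension
            h_top  = h[y - 1, x] if y > 0 else zero   # recurrence along the y dimension
            c_left = c[y, x - 1] if x > 0 else zero
            c_top  = c[y - 1, x] if y > 0 else zero
            z = np.concatenate([image[y, x], h_left, h_top])
            i  = sigmoid(z @ Wi)                      # input gate
            o  = sigmoid(z @ Wo)                      # output gate
            fl = sigmoid(z @ Wfl)                     # forget gate for the left neighbor
            ft = sigmoid(z @ Wft)                     # forget gate for the top neighbor
            g  = np.tanh(z @ Wg)                      # cell candidate
            c[y, x] = fl * c_left + ft * c_top + i * g
            h[y, x] = o * np.tanh(c[y, x])
    return h  # feed into a per-pixel softmax for segmentation, or collapse for classification

features = lstm2d_forward(np.random.rand(16, 16, 3))
print(features.shape)  # (16, 16, 8)
```

Training through the two-dimensional recurrence and the collapse or fully connected output layers mentioned above are omitted here.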
To validate the framework, several challenging databases for classification and segmentation are proposed to overcome the limitations of current databases. First, scene images are randomly collected from the web and used for scene understanding, i.e., a web-scene image dataset for multi-class image classification. To achieve multi-class image classification, the training and testing images are generated in different settings. For training, images belong to a single pre-defined category and are trained as in regular single-class image classification. For testing, however, images containing multiple classes are randomly collected via a web-image search engine by querying the categories. All scene images include noise, background clutter, and unrelated content, and are also diverse in quality and resolution. This setting makes the database suitable for evaluation with respect to real-world applications. Secondly, an automated blob-mosaics texture dataset generator is introduced for segmentation. Random 2D Gaussian blobs are generated and filled with random material textures. These textures contain diverse changes in illumination, scale, rotation, and viewpoint. The generated images are very challenging, since the related regions are hard to separate even visually.
Overall, the contributions in this thesis are major advancements in the direction of solving image analysis problems with Long Short-Term Memory (LSTM) without the need for any extra processing or manually designed steps. We aim at improving the presented framework to achieve the ultimate goal of accurate fine-grained image analysis and human-like understanding of images by machines.
Knowing the extent to which we rely on technology, one may think that correct programs are nowadays the norm. Unfortunately, this is far from the truth. Luckily, possible reasons why program correctness is difficult often come hand in hand with some solutions. Consider concurrent program correctness under Sequential Consistency (SC). Under SC, instructions of each program's concurrent component are executed atomically and in order. By using logic to represent correctness specifications, model checking provides a successful solution to concurrent program verification under SC. Alas, SC’s atomicity assumptions do not reflect the reality of hardware architectures. Total Store Order (TSO) is a weaker memory model implemented in SPARC and in Intel x86 multiprocessors that relaxes the SC constraints. While the architecturally de-atomized execution of stores under TSO speeds up program execution, it also complicates program verification. To be precise, due to TSO’s unbounded store buffers, a program’s semantics under TSO might be infinite. This, for example, turns reachability under SC (a PSPACE-complete task) into a non-primitive-recursive-complete problem under TSO. This thesis develops verification techniques targeting TSO-relaxed programs. More precisely, we present under- and over-approximating heuristics for checking reachability in TSO-relaxed programs as well as state-reducing methods for speeding up such heuristics. In a first contribution, we propose an algorithm to check reachability of TSO-relaxed programs lazily. The under-approximating refinement algorithm uses auxiliary variables to simulate TSO’s buffers along instruction sequences suggested by an oracle. The oracle’s deciding characteristic is that if it returns the empty sequence then the program’s SC- and TSO-reachable states are the same. Secondly, we propose several approaches to over-approximate TSO buffers. Combined in a refinement algorithm, these approaches can be used to determine safety with respect to TSO reachability for a large class of TSO-relaxed programs. On the more technical side, we prove that checking reachability is decidable when TSO buffers are approximated by multisets with tracked per-address last-added values. Finally, we analyze how the explored state space can be reduced when checking TSO and SC reachability. Intuitively, through the viewpoint of Shasha-and-Snir-like traces, we exploit the structure of program instructions to explain several state-space reducing methods including dynamic and Cartesian partial order reduction.
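To illustrate the SC/TSO difference that makes verification harder (a hedged, self-contained toy simulation, not the thesis's algorithm): under TSO each thread's stores first go into a per-thread FIFO store buffer and become visible to other threads only when flushed, so the classic store-buffering litmus test can end with both loads reading 0, an outcome impossible under SC. The sketch below enumerates interleavings of a two-thread program while modeling the buffers explicitly.

```python
# Toy exploration of the store-buffering litmus test under TSO semantics.
# Thread 0: x := 1; r0 := y        Thread 1: y := 1; r1 := x
# Under SC at least one of the two loads must read 1; under TSO both may read 0.
from itertools import permutations

EVENTS = [(0, "store"), (0, "flush"), (0, "load"),
          (1, "store"), (1, "flush"), (1, "load")]

def run(schedule):
    memory = {"x": 0, "y": 0}
    buffers = {0: [], 1: []}                  # per-thread FIFO store buffers
    regs = {0: None, 1: None}
    own_var = {0: "x", 1: "y"}                # variable each thread writes
    other_var = {0: "y", 1: "x"}              # variable each thread reads
    for thread, action in schedule:
        if action == "store":                 # a store goes into the local buffer first
            buffers[thread].append((own_var[thread], 1))
        elif action == "flush":               # the buffered store becomes globally visible
            var, val = buffers[thread].pop(0)
            memory[var] = val
        else:                                 # load: snoop own buffer, then read memory
            var = other_var[thread]
            pending = [v for (w, v) in buffers[thread] if w == var]
            regs[thread] = pending[-1] if pending else memory[var]
    return regs[0], regs[1]

def valid(schedule):
    """Program order keeps each store before its load; a store can only flush after it is issued."""
    pos = {e: i for i, e in enumerate(schedule)}
    return all(pos[(t, "store")] < pos[(t, "load")] and
               pos[(t, "store")] < pos[(t, "flush")] for t in (0, 1))

outcomes = {run(s) for s in permutations(EVENTS) if valid(s)}
print(sorted(outcomes))  # includes (0, 0): both loads read 0, which is impossible under SC
```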
Entwicklung von TDMA-basierten QoS-Routing-Protokollen und Simulationskomponenten für Ad-Hoc-Netze
(2016)
Ad hoc networks are self-organizing networks without central infrastructure that are used in many areas today. They consist of wireless nodes that communicate with each other to fulfill their tasks. However, not all nodes are necessarily within range of each other. Routing methods are needed so that distant nodes can reach each other. Establishing an arbitrary route is often not sufficient, though, because many applications place special quality-of-service (QoS) requirements on the connection, for example the guarantee of a minimum bandwidth. In order to fulfill these QoS requirements, they are already taken into account when determining a route, and the required resources are reserved along the route. QoS routing and reservation protocols serve this purpose.
This work first considers the aspect of deterministic reservation of bandwidth in the form of concrete time slots of a TDMA-based MAC layer. Since the transmissions of different nodes in wireless networks can interfere with each other, an interference model was developed. It identifies conditions under which time slots within a network can be used for more than one transmission. Furthermore, by aggregating the information of other nodes, it defines ways to obtain the information required to decide which time slots can be used for an interference-free transmission.
Furthermore, existing QoS routing and reservation protocols are examined for inherent problems, with a focus on protocols that perform deterministic reservations of time slots. The protocol RBBQR developed in this work also falls into this category; its main goal is to eliminate the identified problems. In addition, the protocol QMRP, which likewise belongs to this category, is described; it determines multicast routes, including the associated reservations, in a centralized way in partially stationary networks.
A further part of this work deals with the development of simulation components that can be used, for example, to evaluate QoS routing and reservation protocols. The existing simulation framework FERAL was extended by a component that enables the use of communication technologies of the network simulator ns-3. Furthermore, a module for simulating a CC2420 transceiver was developed, which can be used in stand-alone ns-3 simulations and in simulations with FERAL.
A wide range of methods and techniques have been developed over the years to manage the increasing complexity of automotive Electrical/Electronic systems. Standardization is an example of such complexity-managing techniques; it aims to minimize costs, avoid compatibility problems, and improve the efficiency of development processes.
A well-known and widely practiced standard in the automotive industry is AUTOSAR (Automotive Open System Architecture). AUTOSAR is a common standard among OEMs (Original Equipment Manufacturers), suppliers, and other involved companies. It was originally developed with the goal of simplifying the overall development and integration process of Electrical/Electronic artifacts from different functional domains, such as hardware, software, and vehicle communication.
However, the AUTOSAR standard, in its current state, is not able to manage the problems in some areas of system development. The validation and optimization process of system configurations handled in this thesis is an example of such an area, in which the AUTOSAR standard so far offers no mature solutions.
Generally, systems developed on the basis of AUTOSAR must be configured in a way that all defined requirements are met. In most cases, the number of configuration parameters and their possible settings in AUTOSAR systems is large, especially if the developed system is complex, with modules from various knowledge domains. The verification process can consume a lot of resources to test all possible combinations of configuration settings, and ideally to find the optimal configuration variant, since the number of test cases can be very high. This problem is referred to in the literature as the combinatorial explosion problem.
Combinatorial testing is an active and promising area of functional testing that offers ideas for solving the combinatorial explosion problem. Its focus is to cover interaction errors by selecting a sample of the system's input parameters or configuration settings for test case generation. However, the industrial acceptance of combinatorial testing is still weak because of the scarcity of real industrial examples.
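As a hedged illustration of the core idea (a naive greedy pairwise covering generator; the parameter names and values below are invented and unrelated to the AUTOSAR gateway studied in this thesis): pairwise (2-way) combinatorial testing selects far fewer configurations than the full cross product while still exercising every pair of parameter values at least once.

```python
# Naive greedy generator of a pairwise (2-way) covering set of configurations.
from itertools import combinations, product

parameters = {                      # invented example parameters
    "bus":      ["CAN", "FlexRay", "Ethernet"],
    "buffer":   ["small", "large"],
    "routing":  ["serial", "parallel"],
    "checksum": ["on", "off"],
}

names = list(parameters)
# All value pairs (for every pair of parameters) that must be covered at least once.
required = {((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va, vb in product(parameters[a], parameters[b])}

def covered_pairs(config):
    return {((a, config[a]), (b, config[b])) for a, b in combinations(names, 2)}

tests, uncovered = [], set(required)
while uncovered:
    # Greedily pick the full configuration that covers the most still-uncovered pairs.
    best = max((dict(zip(names, values)) for values in product(*parameters.values())),
               key=lambda cfg: len(covered_pairs(cfg) & uncovered))
    tests.append(best)
    uncovered -= covered_pairs(best)

print(len(list(product(*parameters.values()))), "exhaustive configurations")  # 24
print(len(tests), "pairwise-covering configurations")                         # typically 6-8
```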
This thesis attempts to fill this gap between industry and academia in the area of combinatorial testing and to emphasize the effectiveness of combinatorial testing in verifying complex configurable systems.
The particular intention of the thesis is to provide a new, applicable approach to combinatorial testing for fighting the combinatorial explosion problem that emerges during the verification and performance measurement of transport protocol parallel routing in an AUTOSAR gateway. The proposed approach has been validated and evaluated by means of two real industrial examples of AUTOSAR gateways with multiple communication buses and two different degrees of complexity, to illustrate its applicability.
Synapses play a central role in the information propagation in the nervous system. A better understanding of synaptic structures and processes is vital for advancing nervous disease research. This work is part of an interdisciplinary project that aims at the quantitative examination of components of the neuromuscular junction, a synaptic connection between a neuron and a muscle cell.
The research project is based on image stacks picturing neuromuscular junctions captured by modern electron microscopes, which permit the rapid acquisition of huge amounts of image data at a high level of detail. The large amount and sheer size of such microscopic data makes a direct visual examination infeasible, though.
This thesis presents novel problem-oriented interactive visualization techniques that support the segmentation and examination of neuromuscular junctions.
First, I introduce a structured data model for segmented surfaces of neuromuscular junctions to enable the computational analysis of their properties. However, surface segmentation of neuromuscular junctions is a very challenging task due to the extremely intricate character of the objects of interest. Hence, such problematic segmentations are often performed manually by non-experts and thus require further inspection.
With NeuroMap, I develop a novel framework to support proofreading and correction of three-dimensional surface segmentations. To provide a clear overview and to ease navigation within the data, I propose the surface map, an abstracted two-dimensional representation using key features of the surface as landmarks. These visualizations are augmented with information about automated segmentation error estimates. The framework provides intuitive and interactive data correction mechanisms, which in turn permit the expeditious creation of high-quality segmentations.
While analyzing such segmented synapse data, the formulation of specific research questions is often impossible due to missing insight into the data. I address this problem by designing a generic parameter space for segmented structures from biological image data. Furthermore, I introduce a graphical interface to aid its exploration, combining parameter selection with data representation.
Mixed-signal systems combine analog circuits with digital hardware and software systems. A particular challenge is the sensitivity of analog parts to even small deviations in parameters or inputs. Parameters of circuits and systems such as process, voltage, and temperature are never accurate; we hence model them as uncertain values (‘uncertainties’). Uncertain parameters and inputs can modify the dynamic behavior and lead to properties of the system that are not in specified ranges. For verification of mixed-signal systems, the analysis of the impact of uncertainties on the dynamical behavior plays a central role.
Verification of mixed-signal systems is usually done by numerical simulation. A single numerical simulation run allows designers to verify single parameter values out of what are often ranges of uncertain values. Multi-run simulation techniques such as Monte Carlo simulation, corner-case simulation, and enhanced techniques such as Importance Sampling or Design-of-Experiments allow ranges to be verified, at the cost of a high number of simulation runs and with the risk of not finding potential errors. Formal and symbolic approaches are an interesting alternative. Such methods allow a comprehensive verification. However, formal methods do not scale well with heterogeneity and complexity. Also, formal methods do not support existing and established modeling languages. This fact complicates their integration into industrial design flows.
In previous work on verification of mixed-signal systems, Affine Arithmetic is used for symbolic simulation. This allows combining the high coverage of formal methods with the ease of use and applicability of simulation. Affine Arithmetic computes the propagation of uncertainties through mostly linear analog circuits and DSP methods in an accurate way. However, Affine Arithmetic is currently only able to compute with contiguous regions and does not permit the representation of and computation with discrete behavior, e.g., introduced by software. This is a serious limitation: in mixed-signal systems, uncertainties in the analog part are often compensated by embedded software; hence, verification of system properties must consider both analog circuits and embedded software.
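For readers unfamiliar with the technique, here is a hedged, minimal sketch of ordinary affine arithmetic (not the XAAF extension introduced below): a quantity is represented as an affine form $x_0 + \sum_i x_i \varepsilon_i$ with noise symbols $\varepsilon_i \in [-1, 1]$, and linear operations propagate the deviation terms exactly, which keeps correlations between uncertainties visible.

```python
# Minimal affine arithmetic sketch: values are affine forms x0 + sum_i(x_i * eps_i), eps_i in [-1, 1].
class AffineForm:
    def __init__(self, center, deviations=None):
        self.center = float(center)
        self.deviations = dict(deviations or {})      # noise-symbol name -> partial deviation

    @classmethod
    def from_interval(cls, lo, hi, symbol):
        """Represent an uncertain parameter in [lo, hi] with its own noise symbol."""
        return cls((lo + hi) / 2.0, {symbol: (hi - lo) / 2.0})

    def __add__(self, other):
        devs = dict(self.deviations)
        for sym, d in other.deviations.items():
            devs[sym] = devs.get(sym, 0.0) + d        # shared symbols stay correlated
        return AffineForm(self.center + other.center, devs)

    def scale(self, k):
        return AffineForm(k * self.center, {s: k * d for s, d in self.deviations.items()})

    def interval(self):
        radius = sum(abs(d) for d in self.deviations.values())
        return self.center - radius, self.center + radius

# An uncertain gain u in [0.9, 1.1]: the linear combination 2*u - u keeps the correlation,
# so the result stays [0.9, 1.1] instead of the interval-arithmetic blow-up to [0.7, 1.3].
u = AffineForm.from_interval(0.9, 1.1, "eps_u")
print((u.scale(2.0) + u.scale(-1.0)).interval())      # ~(0.9, 1.1), up to float rounding
```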
The objective of this work is to provide an extension to Affine Arithmetic that allows symbolic computation also for digital hardware and software systems, and to demonstrate its applicability and scalability. Compared with related work and state of the art, this thesis provides the following achievements:
1. The thesis introduces extended Affine Arithmetic Forms (XAAF) for the representation of branch and merge operations.
2. The thesis describes arithmetic and relational operations on XAAF, and reduces over-approximation by using an LP solver.
3. The thesis shows and discusses ways to integrate this XAAF into existing modeling languages, in particular SystemC. This way, breaks in the design flow can be avoided.
The applicability and scalability of the approach is demonstrated by symbolic simulation of a Delta-Sigma Modulator and a PLL circuit of an IEEE 802.15.4 transceiver system.
Requirements-Aware, Template-Based Protocol Graphs for Service-Oriented Network Architectures
(2016)
The rigidity of the Internet stems from architectural design issues such as interdependencies among the layers, the lack of cross-layer information exchange, and the dependency of applications on the implementation of the underlying protocols.
G-Lab (i.e., http://www.german-lab.de/) is a research project on Future Internet Architecture (FIA) that focuses on problems of the Internet such as rigidity, mobility, and addressing, while the focus of ICSY (i.e., www.icsy) was on providing flexibility in future network architectures. An approach called Service-Oriented Network Architecture (SONATE) is proposed to compose protocols dynamically. SONATE is based on principles of service-oriented architecture (SOA): protocols are decomposed into software modules, which are later put together on demand to provide the desired service.
This composition of functionalities can be performed at various time epochs (e.g., run time, design time, deployment time). However, these epochs involve a trade-off between time complexity (i.e., required setup time) and the provided flexibility. Design time is the least time-critical of these phases, which makes it possible to utilize human analytical capability. However, design time lacks real-time knowledge of requirements and network conditions, which results in inflexible protocol graphs that cannot be changed at later stages when requirements change. Contrary to design time, run time is the most time-critical phase, since an application is waiting for a connection to be established, but at the same time it has maximum information to generate a protocol graph suited to the given requirements.
Considering the above limitations of the different time phases, this thesis presents a novel intermediate functional composition approach (Template-Based Composition) to generate requirements-aware protocol graphs. Template-based composition splits the composition process across the different time phases in order to exploit the less time-critical nature and the availability of human analysis at design time, the ability to instantaneously deploy new functionalities at deployment time, and the maximum information availability at run time. The approach has been successfully implemented, demonstrated, and evaluated with respect to its performance to understand the implications for practical use.
Integrating Security Concerns into Safety Analysis of Embedded Systems Using Component Fault Trees
(2016)
Nowadays, almost every newly developed system contains embedded systems for controlling system functions. An embedded system perceives its environment via sensors and interacts with it using actuators such as motors. For systems that might damage their environment through faulty behavior, a safety analysis is usually performed. Security properties of embedded systems are usually not analyzed at all. New developments in the areas of Industry 4.0 and the Internet of Things lead to more and more networking of embedded systems. Thereby, new causes of system failures emerge: vulnerabilities in software and communication components might be exploited by attackers to obtain control over a system. By targeted actions, a system may also be brought into a critical state in which it might harm itself or its environment. Examples of such vulnerabilities, and also of successful attacks, have become known over the last few years.
For this reason, in embedded systems security has to be analyzed alongside safety, at least as far as it may cause safety-critical failures of system components.
The goal of this thesis is to describe in one model how vulnerabilities from the security point of view might influence the safety of a system. The focus lies on the safety analysis of systems, so the safety analysis is extended to encompass security problems that may have an effect on the safety of a system. Component Fault Trees are very well suited to examining the causes of a failure and to finding failure scenarios composed of combinations of faults. The Component Fault Tree of an analyzed system is extended by additional Basic Events that may be caused by targeted attacks. Qualitative and quantitative analyses are extended to take the additional security events into account. Thereby, causes of failures that are based on safety as well as security problems may be found. Quantitative, or at least semi-quantitative, analyses allow security measures to be evaluated in more detail and their necessity to be justified.
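A hedged miniature of this idea (the events and the marking of which basic events are attack-induced are invented for illustration): once a component fault tree also contains attack-induced basic events, its minimal cut sets can be classified by whether they need an attack at all, which surfaces the purely attack-driven failure scenarios a classical safety analysis would miss.

```python
# Illustrative classification of minimal cut sets from a security-extended component fault tree.
# The cut sets and the set of attack-induced basic events are invented example data.
minimal_cut_sets = [
    {"ValveStuck", "SensorDrift"},            # classical safety scenario
    {"ValveStuck", "SpoofedSensorMessage"},   # mixed: a component fault plus an attack
    {"FirmwareTampering"},                    # caused solely by an attack
]
attack_events = {"SpoofedSensorMessage", "FirmwareTampering"}

for cut_set in minimal_cut_sets:
    attacks = cut_set & attack_events
    if not attacks:
        kind = "safety-only"
    elif attacks == cut_set:
        kind = "attack-only (invisible to a classical safety analysis)"
    else:
        kind = "mixed safety/security"
    print(sorted(cut_set), "->", kind)
```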
The approach was applied to several example systems: the safety chain of the off-road robot RAVON, an adaptive cruise control, a smart farming scenario, and a model of a generic infusion pump were analyzed. The result of all example analyses was that additional failure causes were found which would not have been detected with traditional Component Fault Trees. In the analyses, failure scenarios were also found that are caused solely by attacks and that do not depend on failures of system components. These are especially critical scenarios which should not occur in this way, as they are not found in a classical safety analysis. Thus, the approach shows the additional benefit it brings to a safety analysis, which is achieved by applying established techniques with only little additional effort.
Interconnected, autonomously driving cars shall realize the vision of zero-accident, low-energy mobility in spite of rapidly increasing traffic volume. Tightly interconnected medical devices and health-care systems shall ensure the health of an aging society. And interconnected virtual power plants based on renewable energy sources shall ensure a clean energy supply in a society that consumes more energy than ever before. Such open systems of systems will play an essential role for economy and society.
Open systems of systems dynamically connect to each other in order to collectively provide a superordinate functionality that could not be provided by any single system alone. The structure as well as the behavior of an open system of systems emerge dynamically at runtime, leading to very flexible solutions that work under a variety of environmental conditions. This flexibility and adaptivity of systems of systems are key to realizing the scenarios mentioned above.
On the other hand, however, this leads to uncertainties, since the emerging structure and behavior of a system of systems can hardly be anticipated at design time. This impedes the indispensable safety assessment of such systems in safety-critical application domains. Existing safety assurance approaches presume that a system is completely specified and configured prior to its safety assessment. Therefore, they cannot be applied to open systems of systems. In consequence, safety assurance could easily become a bottleneck impeding, or even preventing, the success of this promising new generation of embedded systems.
For this reason, this thesis introduces an approach for the safety assurance of open systems of systems. To this end, we shift parts of the safety assurance lifecycle to runtime in order to dynamically assess the safety of the emerging system of systems. We use so-called safety models at runtime to enable the systems themselves to assess the safety of an emerging system of systems. This leads to a very flexible runtime safety assurance framework.
This thesis therefore first describes the fundamental knowledge on safety assurance and model-driven development, which is the indispensable prerequisite for defining safety models at runtime. Based on these fundamentals, we illustrate how we modularized and formalized conventional safety assurance techniques using model-based representations and analyses. Finally, we explain how we advanced these design-time safety models into safety models at runtime that can be used by the systems themselves, and how we use them to create an efficient and flexible runtime safety assurance framework for open systems of systems.
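The abstract does not spell out the concrete form of these safety models at runtime; purely as an illustration, the following Python sketch assumes a simple demand/guarantee matching scheme in which a composition is only admitted if every safety demand of a joining system is covered by a guarantee of another. All names and properties are hypothetical.

    # Simplified illustration only; the thesis' actual runtime safety models are not shown here.
    # Assumption: each system carries machine-readable safety guarantees and demands.
    from dataclasses import dataclass, field

    @dataclass
    class SafetyModel:
        system: str
        guarantees: set = field(default_factory=set)   # properties this system can assure
        demands: set = field(default_factory=set)      # properties it needs from its peers

    def safe_to_compose(models):
        offered = set().union(*(m.guarantees for m in models))
        missing = {(m.system, d) for m in models for d in m.demands if d not in offered}
        return not missing, missing

    # Hypothetical example: two systems forming a system of systems at runtime.
    tractor   = SafetyModel("tractor",   guarantees={"speed_limited"},  demands={"emergency_stop"})
    implement = SafetyModel("implement", guarantees={"emergency_stop"}, demands={"speed_limited"})
    ok, missing = safe_to_compose([tractor, implement])
    print("composition admitted" if ok else f"rejected, unmet demands: {missing}")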
The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much probing. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to answer some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these documents have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts without using sophisticated features or post-processing techniques. The set of modern scripts includes modern English, Urdu Nastaleeq, and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on the Old German Fraktur script, the medieval Latin script of the 15th century, and the Polytonic Greek script. LSTM-based systems outperform contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology that combines segmentation-based and segmentation-free OCR approaches in order to OCR scripts for which no transcribed training data is available.
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human ability to read multilingual documents without explicitly identifying the script.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify the individual scripts. The first step in building this framework is the realization of a language-independent OCR system that recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields a low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
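As an illustration of this kind of model (not the exact networks evaluated in the thesis), the following is a minimal PyTorch-style sketch of a bidirectional LSTM text-line recognizer trained with CTC over a combined alphabet, so that a single model can emit characters from several scripts without script identification. The image height, alphabet size, and tensor shapes are placeholders.

    # Minimal sketch: a 1D bidirectional LSTM over image columns with CTC training,
    # using one combined alphabet for several scripts (all sizes are assumptions).
    import torch
    import torch.nn as nn

    class LineRecognizer(nn.Module):
        def __init__(self, input_height=48, hidden=100, alphabet_size=200):
            super().__init__()
            # each time step is one pixel column of a normalized text-line image
            self.lstm = nn.LSTM(input_height, hidden, bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hidden, alphabet_size + 1)   # +1 for the CTC blank

        def forward(self, columns):            # columns: (batch, width, input_height)
            features, _ = self.lstm(columns)
            return self.out(features).log_softmax(dim=-1)

    model = LineRecognizer()
    ctc = nn.CTCLoss(blank=200, zero_infinity=True)
    images = torch.randn(4, 300, 48)                        # 4 dummy text lines, 300 columns each
    log_probs = model(images).permute(1, 0, 2)              # CTC expects (T, N, C)
    targets = torch.randint(0, 200, (4, 30))                # dummy transcriptions over the combined alphabet
    loss = ctc(log_probs, targets,
               input_lengths=torch.full((4,), 300, dtype=torch.long),
               target_lengths=torch.full((4,), 30, dtype=torch.long))
    loss.backward()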
In summary, this thesis aims to extend the research in document recognition from modern Latin scripts to Old Latin, to Greek, and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq.
It also attempts to add a different perspective in dealing with multilingual documents.
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach to the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is asymptotically optimal for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
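As an illustration of the counter-abstraction idea underlying such verifiers (this sketch is not PROVKEEPER or KUAI), the following Python snippet abstracts the unordered task buffer of a small, hypothetical asynchronous program to a multiset of pending task names and explores the reachable abstract states; the handler table is invented for the example.

    # Minimal sketch of counter abstraction: the task buffer is tracked only by
    # how many instances of each pending task it contains, not by their order.
    from collections import Counter

    # hypothetical asynchronous program: each handler may post further tasks
    handlers = {
        "init":   ["worker", "worker"],   # init posts two worker tasks
        "worker": ["log"],                # each worker posts a log task
        "log":    [],
    }

    def explore(initial_tasks, max_states=10_000):
        start = frozenset(Counter(initial_tasks).items())
        seen, frontier = {start}, [start]
        while frontier and len(seen) < max_states:
            state = frontier.pop()
            buffer = Counter(dict(state))
            for task in list(buffer):
                # dispatch one pending instance of `task` and post its successors
                nxt = buffer.copy()
                nxt[task] -= 1
                nxt += Counter(handlers[task])
                succ = frozenset((+nxt).items())    # unary + drops zero counts
                if succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
        return seen

    for state in sorted(explore(["init"]), key=len):
        print(dict(state))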
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7's runtime library adopted a new dual-pivot method that far outperforms the previous Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
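For illustration, here is a compact Python sketch of dual-pivot (three-way) partitioning, the kind of method covered as a special case of the parametric template analyzed in the thesis; it is not the tuned Java 7 implementation and omits pivot sampling.

    # Compact sketch of dual-pivot partitioning; not the tuned Java 7 implementation.
    def dual_pivot_quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]                     # two pivots, p <= q
        lt, i, gt = lo + 1, lo + 1, hi - 1
        while i <= gt:
            if a[i] < p:                        # element belongs to the left segment
                a[i], a[lt] = a[lt], a[i]
                lt += 1
                i += 1
            elif a[i] > q:                      # element belongs to the right segment
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
            else:                               # p <= a[i] <= q: middle segment
                i += 1
        lt -= 1
        gt += 1
        a[lo], a[lt] = a[lt], a[lo]             # move pivots to their final positions
        a[hi], a[gt] = a[gt], a[hi]
        dual_pivot_quicksort(a, lo, lt - 1)
        dual_pivot_quicksort(a, lt + 1, gt - 1)
        dual_pivot_quicksort(a, gt + 1, hi)

    data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
    dual_pivot_quicksort(data)
    print(data)   # [0, 1, 2, ..., 9]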
Most of today's wireless communication devices operate in unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions impair the overall performance of wireless networks. In the classical design of network protocols, both packets involved in a collision are considered lost, so channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach increasingly limits the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference, when generated in a controlled manner, can be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for the reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effects of signal propagation and channel fading, receiver design and implementation, and higher-layer interactions on reception performance are complex and hard to reproduce in simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate system architectures that leverage interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance.
Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire's working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in their proximity, demodulating them iteratively. This enables the guardians to access a packet's content before the packet has fully arrived at the receiver. Using this knowledge, the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective of introducing bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. This paradigm of "over-the-air" protection without requiring any prior modification of client devices enables novel security services, such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
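As a schematic illustration of the guardian decision loop described above (the actual system is a real-time software-defined radio implementation, not Python), the following sketch uses a placeholder radio interface and an invented policy rule; only the overall monitor–classify–jam structure is taken from the text.

    # Schematic sketch of the guardian loop: demodulate the header while the packet
    # is still on the air, classify it against a rule-based policy, and, if it is
    # deemed malicious, jam only the tail so the frame check sequence fails.
    from dataclasses import dataclass

    @dataclass
    class PolicyRule:                 # rule-based policy, similar in spirit to Netfilter rules
        field: str
        allowed: set

    def classify(header, rules):
        """Return True if the partially demodulated header violates any rule."""
        return any(header.get(r.field) not in r.allowed for r in rules)

    def guardian_step(radio, rules):
        header = radio.demodulate_header()            # e.g., addresses and frame type
        if classify(header, rules):
            radio.schedule_interference(target="end_of_frame", duration_us=32)

    class DummyRadio:
        """Stand-in for the SDR front end; returns a canned header and logs actions."""
        def demodulate_header(self):
            return {"src_addr": 0xBEEF, "frame_type": "data"}
        def schedule_interference(self, target, duration_us):
            print(f"jamming {target} for {duration_us} us")

    # hypothetical policy: only two known short addresses are admitted
    rules = [PolicyRule(field="src_addr", allowed={0x0001, 0x0002})]
    guardian_step(DummyRadio(), rules)   # jams, since 0xBEEF is not an admitted address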
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Automata theory has given rise to a variety of automata models that consist of a finite-state control and an infinite-state storage mechanism. The aim of this work is to provide insights into how the structure of the storage mechanism influences the expressiveness and the analyzability of the resulting model. To this end, it presents generalizations of results about individual storage mechanisms to larger classes. These generalizations characterize those storage mechanisms for which a given result remains true and those for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching framework that accommodates each of the concrete storage mechanisms we wish to address. Such a framework is provided by the model of valence automata, in which the storage mechanism is represented by a monoid. Since the monoid serves as a parameter specifying the storage mechanism, our aim translates into the question: for which monoids does a given (automata-theoretic) result hold?
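As a small executable illustration of this model, the following Python sketch runs a valence automaton over the monoid (Z, +, 0): transitions carry monoid elements, and a word is accepted if some run ends in a final state with the accumulated monoid element equal to the identity. Over this particular monoid the model is a blind one-counter automaton; the example automaton accepts the language a^n b^n.

    # Illustration of a valence automaton over (Z, +, 0); the example automaton is invented.
    def valence_accepts(word, transitions, initial, finals,
                        identity=0, op=lambda x, y: x + y):
        # breadth-first exploration of configurations (state, accumulated monoid value)
        frontier = {(initial, identity)}
        for symbol in word:
            frontier = {(q2, op(v, m))
                        for (q1, v) in frontier
                        for (q, a, m, q2) in transitions
                        if q == q1 and a == symbol}
        return any(q in finals and v == identity for (q, v) in frontier)

    # Blind one-counter automaton for { a^n b^n : n >= 1 }.
    transitions = [
        ("p", "a", +1, "p"),   # reading 'a' contributes +1
        ("p", "b", -1, "q"),   # switch to reading 'b', contribute -1
        ("q", "b", -1, "q"),
    ]
    print(valence_accepts("aaabbb", transitions, "p", {"q"}))   # True
    print(valence_accepts("aabbb",  transitions, "p", {"q"}))   # False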
As a first result, we present an algebraic characterization of those monoids over which valence automata accept only regular languages. In addition, it turns out that, for each monoid, this is the case if and only if valence grammars, an analogous grammar model, can generate only context-free languages.
Furthermore, we are concerned with closure properties: we study which monoids result in a Boolean-closed language class. For every language class that is closed under rational transductions (in particular, those induced by valence automata), we show: if the class is Boolean closed and contains any non-regular language, then it already includes the whole arithmetical hierarchy.
This work also introduces the class of graph monoids, which are defined by finite graphs. By choosing appropriate graphs, one can realize a number of prominent storage mechanisms, but also combinations and variants thereof. Examples are pushdowns, counters, and Turing tapes. We can therefore relate the structure of the graphs to computational properties of the resulting storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness problem, (ii) which storage mechanisms guarantee semilinear Parikh images, (iii) when silent transitions (i.e., those that read no input) can be avoided, and (iv) which storage mechanisms permit the computation of downward closures.