As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework that recovers complex feature correlations from existing product configurations. These correlations are further pruned automatically and validated by domain experts. In our implementation, we use association rule mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded systems domain, where we identify a large number of complex feature correlations.
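To make the mining step concrete, the following is a minimal sketch of extracting strong association rules from Boolean product configurations using support and confidence thresholds; the feature names, configurations, and thresholds are invented for illustration, and a production pipeline would use an Apriori- or FP-growth-style miner on much larger data.

```python
from itertools import combinations

# Each product configuration is the set of features it selects (illustrative data).
configs = [
    {"heating", "remote_control", "eco_mode"},
    {"heating", "remote_control"},
    {"heating", "eco_mode"},
    {"cooling", "remote_control"},
]

MIN_SUPPORT = 0.5      # fraction of configurations containing the itemset
MIN_CONFIDENCE = 0.9   # estimated P(consequent | antecedent)

def support(itemset):
    return sum(itemset <= c for c in configs) / len(configs)

features = set().union(*configs)
rules = []
for size in (2, 3):
    for itemset in combinations(sorted(features), size):
        itemset = frozenset(itemset)
        if support(itemset) < MIN_SUPPORT:
            continue
        # Split the frequent itemset into antecedent -> consequent candidates.
        for k in range(1, len(itemset)):
            for antecedent in combinations(sorted(itemset), k):
                antecedent = frozenset(antecedent)
                confidence = support(itemset) / support(antecedent)
                if confidence >= MIN_CONFIDENCE:
                    rules.append((set(antecedent), set(itemset - antecedent), confidence))

for lhs, rhs, confidence in rules:
    print(f"{lhs} => {rhs}  (confidence {confidence:.2f})")
```

Rules that pass the thresholds are the "strong" candidates that would then be pruned and handed to domain experts for validation.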
Data integration aims at providing uniform access to heterogeneous data managed by distributed source systems. Data sources can range from legacy systems, databases, and enterprise applications to web-scale data management systems. The materialized approach to data integration extracts data from the sources, transforms and consolidates the data, and loads it into an integration system, where it is persistently stored and can be queried and analyzed.
To support materialized data integration, so-called Extract-Transform-Load (ETL) systems have been built and are widely used to populate data warehouses today. While ETL is considered state of the art in enterprise data warehousing, a new paradigm known as MapReduce has recently gained popularity for web-scale data transformations, such as web indexing or PageRank computation.
The input data of both ETL and MapReduce programs keeps changing over time, for instance as business transactions are processed or the web is crawled. Hence, the results of ETL and MapReduce programs become stale and need to be recomputed from time to time. Recurrent computations over changing input data can be performed in two ways: the result may either be recomputed from scratch or updated in an incremental fashion. The idea behind the latter approach is to update the existing result in response to incremental changes in the input data. This is typically more efficient than full recomputation, because reprocessing unchanged portions of the input data can often be avoided.
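As a minimal illustration of the difference between the two strategies, the sketch below maintains a per-customer revenue aggregate either by full recomputation or by folding captured insertions and deletions into the existing result; the data and the way changes are captured are invented for illustration.

```python
from collections import defaultdict

# Base data: (customer, amount) facts; the "result" is revenue per customer.
facts = [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)]

def recompute_from_scratch(facts):
    result = defaultdict(float)
    for customer, amount in facts:
        result[customer] += amount
    return result

def apply_delta(result, inserted, deleted):
    # Incremental maintenance: fold only the changed facts into the old result.
    for customer, amount in inserted:
        result[customer] += amount
    for customer, amount in deleted:
        result[customer] -= amount
    return result

result = recompute_from_scratch(facts)          # initial load
delta_insert = [("bob", 3.0)]                   # changes captured since the last run
delta_delete = [("alice", 10.0)]
result = apply_delta(result, delta_insert, delta_delete)

# For this toy run, the incrementally maintained result agrees with a full recomputation.
assert result == recompute_from_scratch(facts[1:] + delta_insert)
```

Only the delta is touched during maintenance, which is the source of the efficiency gain when the unchanged portion of the input dominates.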
Incremental recomputation techniques have been studied by the database research community mainly in the context of materialized view maintenance and have been adopted by all major commercial database systems today. However, neither today's ETL tools nor MapReduce support incremental recomputation techniques. The situation of ETL and MapReduce programmers today is thus quite comparable to the situation of database programmers in the early 1990s. This thesis makes an effort to transfer incremental recomputation techniques into the ETL and MapReduce environments. This poses interesting research challenges, because these environments differ fundamentally from the relational world with regard to query and programming models, change data capture, transactional guarantees, and consistency models. However, as this thesis will show, incremental recomputations are feasible in ETL and MapReduce and may lead to considerable efficiency improvements.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting
method for Oracle's Java 7 runtime library. The decision for the change was based on
empirical studies showing that on average, the new algorithm is faster than the formerly
used classic Quicksort. Surprisingly, the improvement was achieved by using a dual-pivot
approach, an idea that several earlier theoretical studies had deemed unpromising.
In this thesis, I try to find the reason for this unexpected success.
My focus is on precise and detailed average-case analysis, in the flavor of
Knuth's series “The Art of Computer Programming”. In particular, I go beyond abstract
measures like counting key comparisons and try to understand the efficiency of the
algorithms at different levels of abstraction. Whenever possible, precise expected values are
preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods
discussed here are actually usable in practice and (b) the analysis results contribute to
a sound comparison of the Quicksort variants.
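For readers unfamiliar with the scheme, the following is a minimal sketch of Yaroslavskiy-style dual-pivot partitioning, instrumented to count key comparisons; it is an illustrative simplification and omits the insertion-sort cutoff and other engineering refinements of the Java 7 library implementation.

```python
def dual_pivot_quicksort(a, lo=0, hi=None, counter=None):
    """Sort a[lo..hi] in place with two pivots and count key comparisons."""
    if hi is None:
        hi = len(a) - 1
    if counter is None:
        counter = [0]
    if lo >= hi:
        return counter[0]
    counter[0] += 1
    if a[lo] > a[hi]:                      # ensure pivot p <= pivot q
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    l, g, k = lo + 1, hi - 1, lo + 1
    while k <= g:
        counter[0] += 1
        if a[k] < p:                       # a[k] belongs left of p
            a[k], a[l] = a[l], a[k]
            l += 1
        else:
            counter[0] += 1
            if a[k] > q:                   # a[k] belongs right of q
                while True:
                    counter[0] += 1
                    if a[g] > q and k < g:
                        g -= 1
                    else:
                        break
                a[k], a[g] = a[g], a[k]
                g -= 1
                counter[0] += 1
                if a[k] < p:               # re-check the element swapped in
                    a[k], a[l] = a[l], a[k]
                    l += 1
        k += 1
    l, g = l - 1, g + 1
    a[lo], a[l] = a[l], a[lo]              # move the pivots to their final places
    a[hi], a[g] = a[g], a[hi]
    dual_pivot_quicksort(a, lo, l - 1, counter)     # elements smaller than p
    dual_pivot_quicksort(a, l + 1, g - 1, counter)  # elements between p and q
    dual_pivot_quicksort(a, g + 1, hi, counter)     # elements larger than q
    return counter[0]

data = [5, 9, 1, 8, 2, 7, 3]
comparisons = dual_pivot_quicksort(data)
print(data, comparisons)                   # data is now sorted
```

Counting comparisons this way corresponds to the classic abstract cost measure; the thesis goes further and also accounts for lower-level effects that such counts do not capture.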
Fluid extraction is a typical chemical process in which two types of fluid are mixed together. The high complexity of this process, which involves droplet coalescence, breakup, mass transfer, and counter-current flow, often makes design difficult. The industrial design of these processes is still based on expensive mini-plant and pilot-plant experiments. Therefore, there is a strong need for research into the simulation of fluid-fluid interaction processes using computational fluid dynamics (CFD).
Previous multi-phase fluid simulations have focused on the development of models that couple mass and momentum using the Navier-Stokes equations. Recent population balance models (PBM) have proved to be important methods for analyzing droplet breakage and collisions. A combination of CFD and PBM facilitates the simulation of flow properties by solving the coupled equations, as well as the calculation of droplet sizes and numbers. In our study, we successfully coupled an Euler-Euler CFD model with the breakup and coalescence models proposed by Luo and Svendsen (59).
The simulation output of extraction columns provides a mathematical understanding of how fluids are mixed inside a mixing device. This mixing process shows that the dispersed phase of a flow generates large blobs and bubbles. Current mathematical simulation results often fail to provide an intuitive representation of how well two different types of fluid interact, so intuitive and physically plausible visualization techniques are in high demand to help chemical engineers explore and analyze bubble column simulation data. In chapter 3, we present the visualization tools we developed for extraction column data.
Fluid interfaces and free surfaces are topics of growing interest in the field of multi-phase computational fluid dynamics. However, the analysis of the flow field relative to the material interface shape and topology is a challenging task. In chapter 5, we present a technique that facilitates the visualization and analysis of complex material interface behaviors over time. To achieve this, we track the surface parameterization of time-varying material interfaces and identify locations where there are interactions between the material interfaces and fluid particles. Splatting and surface visualization techniques produce an intuitive representation of the derived interface stability. Our results demonstrate that the interaction of a flow field with a material interface can be understood using appropriate extraction and visualization techniques, and that our techniques can help the analysis of mixing and material interface consistency.
In addition to texture-based methods for surface analysis, the interface of a two-phase fluid can be considered an implicit function of the density or volume fraction values. High-level visualization techniques such as topology-based methods can reveal the hidden structure underlying simple simulation data, which will enhance and advance our understanding of multi-fluid simulation data. Recent feature-based visualization approaches have explored the possibility of using Reeb graphs to analyze scalar field topologies (19, 107). In chapter 6, we present a novel interpolation scheme for interpolating point-based volume fraction data and we further explore the implicit fluid interface using a topology-based method.
Data usage control is a concept that extends access control to also protect data after it
has been released. Usage control enforcement relies on available information about the
distribution of data in the monitored system. In this thesis we introduce an information
engine V8 of the Chromium browser to evaluate the feasibility of the chosen approach.
The automatic analysis and retrieval of technical line drawings is hindered by many challenges, such as the large amount of contextual clutter around the symbols within the drawings, degradation, transformations of the symbols, large databases of drawings,
and large alphabets of symbols. The core tasks required for the analysis of technical line
drawings are symbol recognition, spotting, and retrieval. Current systems for these tasks perform poorly in the face of these challenges. This dissertation
presents a number of methods that address these challenges. These methods achieve both
accurate and efficient symbol spotting and retrieval in technical line drawings, and perform
significantly better than state-of-the-art methods on the same problems. An overview of
the key contributions of this dissertation is given in the following.
First, this dissertation presents a geometric matching-based method for symbol recognition
and spotting. The method performs recognition in the presence of large amounts of contextual clutter, and provides precise localization of the recognized symbols. On standard
databases such as GREC-2005 and GREC-2011, the method achieves up to 10% higher
recall and up to 28% higher precision than state-of-the-art methods on the spotting task,
and achieves up to 7% higher recognition accuracy on the isolated recognition task. The
method is based on a geometric matching approach, which is flexible enough to incorporate
improvements to the matching strategy, the feature types, and the information attached to the features. The
method also includes an adaptive preprocessing algorithm that deals with a wide variety
of noise types.
In order to improve the performance of the spotting method when dealing with degraded
drawings, two novel methods are presented in this dissertation. Both methods are based on
combining geometric matching with machine learning techniques. The geometric matching
is used to automatically generate training data that contain information on how well the
features of the queries are matched in both the true and the false matches found by the
spotting method. The first method learns the feature weights of the different query symbols
by linear discriminant analysis (LDA). The weighted query features are used in the spotting
method and result in 27% higher average precision than the original method, with a speedup
factor of 2. The second method uses SVM classification as a post-spotting step to distinguish
the true from the false matches in the spotting method. The use of the classification step
further improves the average precision of the spotting method by 20.6%.
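A minimal sketch of the two learning steps, assuming per-feature match scores produced by the geometric matching are already available; scikit-learn's LDA and SVM implementations stand in for the learning components, and the feature dimensionality and data are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Each row: how well the individual query features were matched at one candidate
# location; label 1 = true match, 0 = false match. In the real pipeline these
# examples are generated automatically by the geometric matching itself.
rng = np.random.default_rng(0)
true_matches = rng.normal(0.8, 0.1, size=(50, 6))
false_matches = rng.normal(0.4, 0.2, size=(50, 6))
X = np.vstack([true_matches, false_matches])
y = np.array([1] * 50 + [0] * 50)

# First method: learn per-feature weights for the query symbol with LDA; the
# discriminant direction indicates which query features best separate true from
# false matches and can be fed back into the spotting score.
lda = LinearDiscriminantAnalysis().fit(X, y)
feature_weights = lda.coef_.ravel()
print("learned feature weights:", np.round(feature_weights, 2))

# Second method: use an SVM as a post-spotting classifier that accepts or
# rejects each candidate match returned by the spotting method.
svm = SVC(kernel="rbf").fit(X, y)
candidate = rng.normal(0.75, 0.1, size=(1, 6))
print("accept candidate match:", bool(svm.predict(candidate)[0]))
```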
This dissertation also presents methods for content analysis of line drawings. First, a
method for accurate and consistent detection (95.8%) of regions of interest (ROIs) is presented. The method is based on statistical feature grouping. The ROI-finding method is
identified as an important part of a symbol retrieval system: the better the detected ROIs, the higher the performance of a retrieval system. The ROI-finding method is also used to
improve the performance of the geometric-based spotting system.
Second, a symbol clustering method for building a compact and accurate representation of
a large database of technical drawings is presented. This method uses the output from the
ROI-finding method as input, and uses geometric matching as a similarity measure. The
method achieves high accuracy (90.1% recall, 94.3% precision) in forming clusters of symbols. The representatives of the clusters (34 symbols) are used as key entries to a symbol
index, which is identified as the outcome of an off-line stage of a symbol retrieval system.
Finally, an efficient, high-performing, large-scale symbol retrieval system is presented
in this dissertation. The system follows the bag-of-visual-words (BoVW) model, but
uses methods that are suited to line drawings. The system uses the symbol index to
represent a database of drawings. During the on-line query retrieval stage, the query is
analyzed by the ROI-finding method, matched with the key entries of the symbol index via
geometric matching, and finally, a spatial verification step is performed on the retrieved
matches. The system achieves a query lookup time that is independent of the size of the
database, and is instead dependent on the size of the symbol index. The system achieves up
to 10% higher recall and up to 28% higher precision than state-of-the-art spotting systems
on similar databases.
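The following is a minimal sketch of the off-line/on-line split of such a retrieval system, using an inverted index over symbol-index entries; the "visual words", drawings, and matching details are simplified placeholders for the geometric matching and spatial verification steps described above.

```python
from collections import defaultdict, Counter

# Off-line stage: each drawing is described by the symbol-index entries
# (cluster representatives) it contains; an inverted index maps each entry
# to the drawings containing it.
drawings = {
    "drawing_01": ["valve", "pump", "valve"],
    "drawing_02": ["pump", "sensor"],
    "drawing_03": ["valve", "sensor", "sensor"],
}
inverted_index = defaultdict(set)
for drawing_id, words in drawings.items():
    for word in words:
        inverted_index[word].add(drawing_id)

def retrieve(query_words):
    """On-line stage: look up only the query's words and rank drawings by how
    many query words they share; lookup cost depends on the size of the symbol
    index, not on the size of the drawing database."""
    votes = Counter()
    for word in set(query_words):
        for drawing_id in inverted_index[word]:
            votes[drawing_id] += 1
    # A real system would now run spatial verification (geometric matching)
    # on the top-ranked candidates before returning them.
    return votes.most_common()

print(retrieve(["valve", "sensor"]))
```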
Overall, these contributions are major advancements in graphics recognition research.
The hope is that such contributions provide the basis for the development of reliable and
accurate applications for browsing, querying, or classifying line drawings
for the benefit of end users.
Generic layout analysis, the process of decomposing a document image into homogeneous regions for a diverse collection of document images, has many important applications in document image analysis and understanding, such as preprocessing of degraded, warped, camera-captured document images, high-performance layout analysis of document images containing complex cursive scripts, and page-level word spotting in historical document images. Several problems in this field, such as generic text line extraction, have so far remained elusive goals beyond the reach of state-of-the-art methods [NJ07, LSZT07, KB06]. This thesis addresses this problem by presenting generic, domain-independent text line extraction and text and non-text segmentation methods, and then describes some important applications that were developed based on these methods. An overview of the key contributions of this thesis is as follows.
The first part of this thesis presents a generic text line extraction method using a combination of matched filtering and ridge detection techniques, which are commonly used in computer vision. Unlike the state-of-the-art text line extraction methods in the literature, the generic text line extraction method can be equally and robustly applied to a large variety of document image classes, including scanned and camera-captured documents, binary and grayscale documents, typed-text and handwritten documents, historical and contemporary documents, and documents containing different scripts. Standard datasets belonging to different categories of document images are selected for performance evaluation, such as the UW-III [GHHP97] dataset of scanned documents, the ICDAR 2007 [GAS07] and the UMD [LZDJ08] datasets of handwritten documents, the DFKI-I [SB07] dataset of camera-captured documents, an Arabic/Urdu script documents dataset, and a German calligraphic (Fraktur) script historical documents dataset. The generic text line extraction method achieves 86% (n = 23,763 text lines in 650 documents) text line detection accuracy, which is better than the aggregate accuracy of 73% of the best-performing domain-specific state-of-the-art methods. To the best of the author's knowledge, it is the first general-purpose text line extraction method that can be equally used for a diverse collection of documents.
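To illustrate the underlying idea of matched filtering followed by ridge detection (not the exact algorithm of the thesis), the sketch below smooths a page image with an anisotropic Gaussian so that the characters of a line merge into an elongated blob and then marks ridge points via derivative analysis; the parameters are illustrative and would need tuning to the image resolution and script.

```python
import numpy as np
from scipy import ndimage

def text_line_ridges(image, sigma_x=20.0, sigma_y=3.0):
    """Return a boolean ridge map that roughly follows text line centre lines.

    image: 2-D grayscale array with dark text on a light background.
    sigma_x / sigma_y: anisotropic smoothing scales (wide along the line,
    narrow across it) acting as a matched filter for horizontal text lines.
    """
    # Matched filtering: smear the characters of a line into one elongated blob.
    ink = 255.0 - np.asarray(image, dtype=float)
    response = ndimage.gaussian_filter(ink, sigma=(sigma_y, sigma_x))

    # Derivatives across the line direction (axis 0 = image rows).
    dy = ndimage.gaussian_filter(response, sigma=1.0, order=(1, 0))
    dyy = ndimage.gaussian_filter(response, sigma=1.0, order=(2, 0))

    # Crude ridge criterion: near-zero slope and negative curvature across the
    # line, i.e. a local maximum of the filter response along each column.
    ridge = (np.abs(dy) < 0.05 * np.abs(dyy).max()) & (dyy < 0)
    return ridge
```

The ridge map can then be traced and labelled to obtain one curve per text line; the thesis additionally handles multiple scales, non-horizontal lines, and varied scripts.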
This thesis also presents an active contour (snake) based curled text line extraction method for warped, camera-captured document images. The presented approach is applied to the DFKI-I [SB07] dataset of camera-captured, Latin script document images for curled text line extraction. It achieves above 95% (n = 3,091 text lines in 102 documents) text line detection accuracy, which is significantly better than the competing state-of-the-art curled text line extraction methods. The presented text line extraction method can also be applied, after minor modifications, to document images containing different scripts such as Chinese, Devanagari, and Arabic.
The second part of this thesis presents an improved version of the state-of-the-art multiresolution morphology (Leptonica) based text and non-text segmentation method [Blo91], which is a domain-independent page segmentation approach and can be equally applied to a diverse collection of binarized document images. It is demonstrated that the presented improvements result in an increase in segmentation accuracy from 93% to 99% (n = 113 documents).
This thesis also introduces a discriminative learning based approach for page segmentation, where a self-tunable multi-layer perceptron (MLP) classifier [BS10] is trained to distinguish between text and non-text connected components. Unlike other classification-based page segmentation approaches in the literature, the connected-component-based discriminative learning approach is faster than pixel-based classification methods and does not require a prior block segmentation step. A segmentation accuracy of 96% (n = 113 documents) is achieved, compared to the state-of-the-art multiresolution morphology (Leptonica) based page segmentation method [Blo91], which achieves a segmentation accuracy of 93%. In addition to text and non-text segmentation of Latin script documents, the presented approach can also be adapted to document images containing other scripts as well as to other specialized layout analysis tasks such as digit and non-digit segmentation [HBSB12], orientation detection [RBSB09], and body-text and side-note segmentation [BAESB12].
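A minimal sketch of the classification step, assuming simple shape features have already been extracted per connected component; scikit-learn's MLPClassifier stands in for the self-tunable MLP of [BS10], and the features and toy data are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training data: one row per connected component with simple shape features
# (height, width, aspect ratio, ink density); label 1 = text, 0 = non-text.
X_train = np.array([
    [12, 9, 0.75, 0.35],     # character-sized, dense components -> text
    [14, 10, 0.71, 0.40],
    [11, 8, 0.73, 0.33],
    [180, 240, 1.33, 0.10],  # large, sparse components (figures, rules) -> non-text
    [220, 300, 1.36, 0.08],
    [150, 200, 1.33, 0.12],
])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)

# Each new connected component is labelled on its own; no prior block
# segmentation is needed.
X_new = np.array([[13, 9, 0.69, 0.37], [200, 260, 1.30, 0.09]])
print(clf.predict(X_new))   # should label the first text, the second non-text
```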
Finally, this thesis presents important applications of the two generic layout analysis techniques discussed above, the ridge-based text line extraction method and the multiresolution morphology based text and non-text segmentation method. First, a complete preprocessing pipeline is described for removing different types of degradations from grayscale warped, camera-captured document images. It includes the removal of grayscale degradations such as non-uniform shadows and blurring through binarization, noise cleanup by applying page frame detection, and document rectification using monocular dewarping. Each of these preprocessing steps shows significant improvement in comparison to the analyzed state-of-the-art methods in the literature. Second, a high-performance layout analysis method is described for complex Arabic script document images written in different languages, such as Arabic, Urdu, and Persian, and in different styles, for example Naskh and Nastaliq. The presented layout analysis system is robust against different types of document image degradations and shows better performance for text and non-text segmentation, text line extraction, and reading order determination on a variety of Arabic and Urdu document images than the state-of-the-art methods. It can be used for large-scale digitization of Arabic and Urdu documents. These applications demonstrate that the layout analysis methods, ridge-based text line extraction and multiresolution morphology based text and non-text segmentation, are generic and can be applied easily to a large collection of diverse document images.
The research for this thesis was conducted to develop a framework that supports the automatic configuration of project-specific software development processes by selecting and combining different technologies: the Process Configuration Framework. The research draws attention to the problem that, while the research community develops new technologies, industrial companies continue to use only the ones they already know well. Because of this, technology transfer takes decades. In addition, no single solution solves all problems of a software development project, which leads to a number of technologies that need to be combined for one project.
The framework developed and explained in this research addresses those problems mainly by building a bridge between research and industry as well as by supporting software companies in selecting the most appropriate technologies and combining them into a software process. The technology transfer gap is filled by a repository of (new) technologies, which is used as the foundation of the Process Configuration Framework. The process is configured by providing a SPEM process pattern for each technology, so that companies can build their process by plugging these patterns into each other.
The technologies in the repository are specified in a schema comprising a technology model, a context model, and an impact model. With context and impact it is possible to provide information about a technology, for example its benefits for quality, cost, or schedule. The process patterns offered as output of the Process Configuration Framework are derived in several stages (a simplified sketch of the first ranking stage follows the list below):
I Technology Ranking:
1 Ranking based on Application Domain, Project & Impact
2 Ranking based on Environment
3 Ranking based on Static Context
II Technology Combination:
4 Creation of all possible Technology Chains
5 Restriction of the Technology Chains
6 Ranking based on Static and Dynamic Context
7 Extension of the Chains by Quality Assurance
III Process Configuration:
8 Process Component Diagram
9 Extension of the Process Component Diagram
10 Instantiation of the Components by Technologies of the Technology Chain
11 Providing process patterns
12 Creation of the process based on Patterns
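As a hypothetical sketch of the first ranking stage, the snippet below scores each repository technology by how well its declared context matches the project and how strongly its impact model supports the project's primary goal; the attribute names, technologies, and scoring are invented for illustration and stand in for the richer technology, context, and impact models of the framework.

```python
# Toy project description and toy technology repository entries.
project = {"domain": "automotive", "team_size": "small", "goal": "quality"}

technologies = [
    {"name": "ModelBasedTesting",
     "context": {"domain": "automotive", "team_size": "small"},
     "impact": {"quality": 2, "cost": -1}},
    {"name": "CodeReview",
     "context": {"domain": "any", "team_size": "small"},
     "impact": {"quality": 1, "schedule": 0}},
]

def rank(technologies, project):
    """Order technologies by context fit plus impact on the project's goal."""
    def score(tech):
        context_fit = sum(value == "any" or project.get(key) == value
                          for key, value in tech["context"].items())
        impact_fit = tech["impact"].get(project["goal"], 0)
        return context_fit + impact_fit
    return sorted(technologies, key=score, reverse=True)

for tech in rank(technologies, project):
    print(tech["name"])
```

The later stages would then combine the top-ranked technologies into Technology Chains and instantiate the corresponding SPEM process patterns.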
The effectiveness and quality of the Process Configuration Framework have additionally been evaluated in a case study. Here, the Technology Chains manually created by experts were compared to the chains automatically created by the framework after it had been configured by those experts. This comparison showed that the framework's results are similar to the experts' chains and can therefore be used as recommendations.
We conclude from our research that support during the configuration of a process for software projects is important, especially for non-experts. This support is provided by the Process Configuration Framework developed in this research. In addition, our research has shown that this framework offers a possibility to narrow the technology transfer gap between the research community and industrial companies.
The safety of embedded systems is becoming more and more important nowadays. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of a system. The Component Fault Tree (CFT) provides an advanced modeling concept for adapting traditional FTs to the hierarchical architecture model used in system design. Minimal Cut Set (MCS) analysis is a method for qualitative analysis based on FTs. Each MCS represents a minimal combination of component failures of a system, called basic events, that may together cause the top-level system failure. The ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically represented in summary views, e.g., data tables and histograms, with little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved using an iterative process, called the safety improvement process, based on FTs and taking relevant constraints, e.g., cost, into account. Typically, relevant data regarding the safety improvement process are presented across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively facilitate these analyses.
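To make the notion of minimal cut sets concrete, here is a toy sketch that expands a small AND/OR fault tree into its cut sets and then minimizes them; the tree, event names, and representation are invented for illustration and are unrelated to the CFT tooling discussed in this thesis.

```python
from itertools import product

# Toy fault tree: each gate maps to ("AND"/"OR", [children]); leaves are basic events.
fault_tree = {
    "TOP": ("OR",  ["G1", "G2"]),
    "G1":  ("AND", ["pump_fails", "valve_stuck"]),
    "G2":  ("AND", ["pump_fails", "sensor_fails"]),
}

def cut_sets(node):
    """Return the cut sets of a node as a list of frozensets of basic events."""
    if node not in fault_tree:                        # basic event
        return [frozenset([node])]
    gate, children = fault_tree[node]
    child_sets = [cut_sets(child) for child in children]
    if gate == "OR":                                  # any child failing suffices
        return [cs for sets in child_sets for cs in sets]
    # AND gate: every child must fail, so combine one cut set per child.
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(all_cut_sets):
    """Drop every cut set that strictly contains another cut set."""
    return [cs for cs in all_cut_sets
            if not any(other < cs for other in all_cut_sets)]

for mcs in minimal(cut_sets("TOP")):
    print(sorted(mcs))   # each printed set is one MCS of the toy tree
```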
We propose a set of visualization approaches that address the issues mentioned above and facilitate those analyses in terms of their representations.
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data of the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs for effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach helps to identify vulnerable components while taking the hierarchy of the system architecture into account, and to investigate the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the process, and it supports constructing and analyzing solutions for improving the safety of a system.
Our visualization approaches thus facilitate the MCS analysis, the importance analysis, and the safety improvement process based on the CFT.
Predicting the secondary structures of RNA molecules is one of the fundamental, and thus challenging, problems of computational structural biology. Existing prediction methods basically use the dynamic programming principle and are either based on a general thermodynamic model or on a specific probabilistic model, traditionally realized by a stochastic context-free grammar. To date, the applied grammars have been rather simple and small, and although statistical approaches have become increasingly appreciated over the past years, a corresponding sampling algorithm based on a stochastic RNA structure model has not yet been devised. In addition, basically all popular state-of-the-art tools for computational structure prediction have the same worst-case time and space requirements of O(n^3) and O(n^2) for sequence length n, limiting their applicability for practical purposes due to the often quite large sizes of native RNA molecules. Accordingly, the prime demand imposed by biologists on computational prediction procedures is a reduced waiting time for results that are not significantly less accurate.
We here deal with all of these issues by describing algorithms and performing comprehensive studies that are based on sophisticated stochastic context-free grammars of similar complexity to those underlying thermodynamic prediction approaches, where all of our methods indeed make use of the concept of sampling. We also employ approximation techniques known from theoretical computer science in order to reach a heuristic worst-case speedup for RNA folding.
In particular, we start by describing a way to derive a sequence-independent random sampler for an arbitrary class of RNAs by means of (weighted) unranking. The resulting algorithm can generate any secondary structure of a given fixed size n in only O(n·log(n)) time, and the results are observed to be accurate, validating its practical applicability.
With respect to RNA folding, we present a novel probabilistic sampling algorithm that generates statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method actually samples the possible foldings from a distribution implied by a suitable (traditional or length-dependent) grammar. Notably, we also propose several (new) ways for obtaining predictions from generated samples. Both variants have the same worst-case time and space complexities of O(n^3) and O(n^2) for sequence length n. Nevertheless, evaluations of our sampling methods show that they are actually capable of producing accurate (prediction) results.
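To give a flavor of statistical sampling with stochastic traceback, the toy sketch below draws a secondary structure for a given sequence uniformly at random from a simple counting DP; it is only a stand-in for the actual methods, which sample from distributions defined by sophisticated (traditional or length-dependent) SCFGs rather than the uniform model used here.

```python
import random
from functools import lru_cache

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
THETA = 3   # minimal number of unpaired bases enclosed by a pair

def sample_structure(seq, rng=None):
    """Draw one secondary structure of seq uniformly at random (dot-bracket string)."""
    rng = rng or random.Random(0)
    n = len(seq)

    @lru_cache(maxsize=None)
    def count(i, j):
        # Number of structures of seq[i..j]; empty or very short regions have exactly one.
        if j - i < THETA:
            return 1
        total = count(i + 1, j)                         # base i left unpaired
        for k in range(i + THETA + 1, j + 1):           # base i paired with base k
            if (seq[i], seq[k]) in PAIRS:
                total += count(i + 1, k - 1) * count(k + 1, j)
        return total

    structure = ["."] * n

    def traceback(i, j):
        if j - i < THETA:
            return
        r = rng.randrange(count(i, j))                  # choose a case by its weight
        if r < count(i + 1, j):
            traceback(i + 1, j)
            return
        r -= count(i + 1, j)
        for k in range(i + THETA + 1, j + 1):
            if (seq[i], seq[k]) in PAIRS:
                weight = count(i + 1, k - 1) * count(k + 1, j)
                if r < weight:
                    structure[i], structure[k] = "(", ")"
                    traceback(i + 1, k - 1)
                    traceback(k + 1, j)
                    return
                r -= weight

    traceback(0, n - 1)
    return "".join(structure)

print(sample_structure("GGGAAAUCCACGUUGAAACG"))
```

Replacing the uniform counts by inside probabilities of a trained grammar turns this uniform sampler into the kind of grammar-based statistical sampler developed in the thesis.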
In an attempt to resolve the long-standing problem of reducing the time complexity of RNA folding algorithms without sacrificing much of the accuracy of the results, we devised a heuristic statistical sampling method that can be implemented to require only O(n^2) time for generating a fixed-size sample of candidate structures for a given sequence of length n. Since a reasonable prediction can still efficiently be obtained from the generated sample set, this approach reduces the worst-case time complexity by a linear factor compared to all existing precise methods. Notably, we also propose a novel (heuristic) sampling strategy as opposed to the common one typically applied for statistical sampling, which may produce more accurate results for particular settings. A validation of our heuristic sampling approach by comparison to several leading RNA secondary structure prediction tools indicates that it is capable of producing competitive predictions, but may require the consideration of large sample sizes.