## 00-XX GENERAL

- Multi-Edge Graph Visualizations for Fostering Software Comprehension (2016)
- Software engineers typically implement their software according to the design of its structure. Relations between classes and interfaces, such as method-call relations and inheritance relations, are essential parts of a software structure. Accordingly, analyzing several types of relations benefits the static analysis of that structure. The tasks of this analysis include, but are not limited to, understanding (legacy) software, checking guidelines, improving product lines, finding structure, and re-engineering existing software. Graphs with multi-type edges are a possible representation of these relations, treating the relations as edges while the nodes represent the classes and interfaces of the software. Such a multi-type edge graph can then be mapped to visualizations. However, the visualizations have to cope with the multiplicity of relation types and with scalability, while at the same time enabling software engineers to recognize visual patterns. To advance the use of visualizations for analyzing the static structure of software systems, I tracked different development phases of the interactive multi-matrix visualization (IMMV), concluding with an extended user study. In this study, visual structures identified with IMMV, compared to PNLV, were determined and classified systematically into four categories: high degree, within-package edges, cross-package edges, and no edges. Beyond the structures found with these handy tools, other structures of interest to software engineers, such as cycles and hierarchical structures, need additional visualizations to display and investigate them. Therefore, an extended approach for graph layout was presented that improves the quality of the decomposition and the drawing of directed graphs according to their topology, based on rigorous definitions. The extension describes and analyzes the decomposition and drawing algorithms in detail, giving polynomial time and space complexity. Finally, I visualized graphs with multi-type edges using small multiples, where each tile is dedicated to one edge type and uses the topological graph layout to highlight non-trivial cycles, trees, and DAGs for showing and analyzing the static structure of software; I applied this approach to four software systems to demonstrate its usefulness.
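The core idea of drawing a multi-type edge graph as small multiples, one tile per edge type, can be sketched as follows. This is an illustrative sketch only, not the IMMV implementation, and the class and relation names are invented:

```python
# Hypothetical sketch: split a graph with multi-type edges into
# per-type subgraphs, one per "small multiple" tile.
from collections import defaultdict

# Each edge is (source, target, edge_type); nodes are classes/interfaces.
edges = [
    ("ClassA", "ClassB", "method-call"),
    ("ClassA", "IFoo",   "inheritance"),
    ("ClassB", "IFoo",   "inheritance"),
    ("ClassB", "ClassA", "method-call"),
]

def split_by_edge_type(edges):
    """Group edges by type so each type can be drawn on its own tile."""
    tiles = defaultdict(list)
    for src, dst, etype in edges:
        tiles[etype].append((src, dst))
    return dict(tiles)

tiles = split_by_edge_type(edges)
# Each tile's subgraph can then be laid out independently, e.g. with a
# topological layout that exposes cycles, trees, and DAGs per edge type.
```

Separating the edge types first keeps each tile small and readable, which is the point of the small-multiples design described above.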

- Statistical Language Modeling for Historical Documents using Weighted Finite-State Transducers and Long Short-Term Memory (2015)
- The goal of this work is to develop statistical natural language models and processing techniques based on Recurrent Neural Networks (RNN), especially the recently introduced Long Short-Term Memory (LSTM). Due to their adaptive and predictive abilities, these methods are more robust and easier to train than traditional methods such as word lists and rule-based models. They improve the output of recognition systems and make the documents more accessible to users for browsing and reading. Such techniques are needed especially for historical books, which might otherwise take years of effort and enormous cost to transcribe manually. The contributions of this thesis are several new methods that combine high-performance computing with high accuracy. First, an error model for improving recognition results is designed. Second, a hyphenation model for difficult transcriptions is suggested for alignment purposes. Third, a dehyphenation model is used to classify the hyphens in noisy transcriptions. The fourth contribution is the use of LSTM networks for normalizing historical orthography; a size-normalization alignment is implemented to equalize string lengths before the training phase. Using LSTM networks as a language model to improve recognition results is the fifth contribution. Finally, the sixth contribution is a combination of Weighted Finite-State Transducers (WFSTs) and LSTM applied to multiple recognition systems. These contributions are elaborated in more detail below. Context-dependent confusion rules are a new technique for building an error model for Optical Character Recognition (OCR) correction. The rules are extracted from the OCR confusions that appear in the recognition outputs and are translated into edit operations, e.g., insertions, deletions, and substitutions, using the Levenshtein edit distance algorithm. The edit operations are extracted as rules with respect to the context of the incorrect string to build an error model using WFSTs.
The context-dependent rules assist the language model in finding the best candidate corrections. They avoid the computations incurred when searching the language model and also enable it to correct incorrect words using context-dependent confusion rules. The context-dependent error model is applied to the University of Washington (UWIII) dataset and to an Urdu dataset in Nastaleeq script. It improves the OCR results from an error rate of 1.14% to 0.68%, performing better than the state-of-the-art single-rule-based approach, which yields an error rate of 1.0%. This thesis also describes a new, simple, fast, and accurate system for generating correspondences between real scanned historical books and their transcriptions. The alignment faces several challenges: first, the transcriptions may contain modifications and layout variations relative to the original book; second, the recognition of historical books suffers from misrecognition and segmentation errors, which make the alignment more difficult, so that line breaks and pages do not have the same correspondences. Adapted WFSTs are designed to represent the transcription. The WFSTs process Fraktur ligatures and adapt the transcription with a hyphenation model that allows alignment despite the varieties of hyphenated words at the line breaks of the OCR documents. Several approaches are implemented for the alignment: text-segment, page-wise, and book-wise approaches. They are evaluated on a dataset of German calligraphic (Fraktur) historical documents from the “Wanderungen durch die Mark Brandenburg” volumes (1862-1889). The text-segmentation approach yields an error rate of 2.33% without a hyphenation model and 2.0% with one. Dehyphenation methods are presented to remove the hyphens from the transcription.
They provide the transcription in a readable and reflowable format suitable for alignment purposes. We treat the task as a classification problem and classify the hyphens in the given patterns as line-break hyphens, compound-word hyphens, or noise. The methods are applied to clean and noisy transcriptions in different languages. The Decision Tree classifier performs best, with an accuracy of 98% on the UWIII dataset and 97% on Fraktur script. A new LSTM-based method for normalizing historical OCRed text is implemented for texts ranging from Early New High German of the 14th-16th centuries to modern forms in New High German, applied to the Luther Bible. It performs better than the rule-based and word-list approaches and provides transcriptions for various purposes such as part-of-speech tagging and n-grams. Two new techniques are also presented for aligning the OCR results and normalizing string lengths by adding character epsilons or appending epsilons; they allow deletions and insertions at the appropriate positions in the string. In normalizing historical word forms to modern word forms, the accuracy of the LSTM on seen data is around 94%, while the state-of-the-art combined rule-based method returns 93%; on unseen data, the LSTM returns 88% and the combined rule-based method 76%. In normalizing modern word forms to historical word forms, the LSTM delivers the best performance, returning 93.4% on seen data and 89.17% on unseen data. This thesis also investigates in depth the construction of high-performance language models for improving recognition systems. A new method for constructing a language model using LSTM is designed to correct OCR results and is applied to UWIII and Urdu script. The LSTM approach outperforms the state of the art, especially for tokens unseen during training. On the UWIII dataset, the LSTM reduces the OCR error rate from 1.14% to 0.48%.
On the Urdu Nastaleeq dataset, the LSTM reduces the error rate from 6.9% to 1.58%. Finally, since integrating multiple recognition outputs can yield higher performance than a single recognition system, a new method for combining the results of OCR systems using WFSTs and LSTM is explored. It takes multiple OCR outputs and votes for the best output to improve the OCR results, performing better than the ISRI tool and Pairwise of Multiple Sequence alignment. The purpose is to provide correct transcriptions that can be used for digitizing books, linguistic purposes, n-grams, and part-of-speech tagging. The method consists of two alignment steps. First, two recognition systems are aligned using WFSTs; the transducers are designed to be flexible and tolerant of the differing symbols at line and page breaks in order to avoid segmentation and misrecognition errors. The LSTM model is then used to vote for the best candidate correction from the two systems and to improve the incorrect tokens produced during the first alignment. The approaches are evaluated on OCR outputs for the English UWIII and historical German Fraktur datasets, obtained from state-of-the-art OCR systems. The experiments show that the error rate of ISRI voting is 1.45%, that of Pairwise of Multiple Sequence alignment is 1.32%, that of line-to-page alignment is 1.26%, and that the LSTM approach performs best with 0.40%. The purpose of this thesis is to contribute methods that provide correct transcriptions corresponding to the original book. This is considered a first step towards an accurate and more effective use of the documents in digital libraries.
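The Levenshtein-based extraction of edit operations that underlies the context-dependent confusion rules can be illustrated with a minimal sketch. The example strings are invented, and this is not the thesis's WFST implementation:

```python
# Minimal sketch: align an OCR string with its ground truth via the
# Levenshtein dynamic-programming table and read back the edit
# operations (insertions, deletions, substitutions). Rules like these
# would then be compiled, with context, into a WFST error model.

def edit_operations(ocr, truth):
    """Return (op, ocr_char, truth_char) tuples turning `ocr` into `truth`."""
    m, n = len(ocr), len(truth)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ocr[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    # Trace back from the bottom-right corner to recover the operations.
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] and ocr[i - 1] == truth[j - 1]:
            i, j = i - 1, j - 1                        # characters match
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            ops.append(("sub", ocr[i - 1], truth[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            ops.append(("del", ocr[i - 1], ""))
            i -= 1
        else:
            ops.append(("ins", "", truth[j - 1]))
            j -= 1
    return list(reversed(ops))

# A classic OCR-style confusion: "rn" misread where "m" was printed.
ops = edit_operations("modern", "modem")
```

The number of returned operations equals the Levenshtein distance; each operation, together with its surrounding context, is the raw material for a confusion rule.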

- Invariant input loads for full vehicle multibody system simulation (2011)
- Input loads are essential for the numerical simulation of vehicle multibody system (MBS) models. Such load data is called invariant if it is independent of the specific system under consideration. A digital road profile, for example, can be used to excite MBS models of different vehicle variants. However, quantities that are efficiently obtained by measurement, such as wheel forces, are typically not invariant in this sense. This leads to the general task of deriving invariant loads on the basis of measurable but system-dependent quantities. We present an approach to derive input data for full-vehicle simulation that can be used to simulate different variants of a vehicle MBS model. An important ingredient of this input data is a virtual road profile computed by optimal control methods.

- Geometric Ergodicity of Binary Autoregressive Models with Exogenous Variables (2013)
- In this paper we introduce a binary autoregressive model. In contrast to the typical autoregression framework, we allow the conditional distribution of the observed process to depend on past values of the time series and on some exogenous variables. Such processes have potential applications in econometrics, medicine, and environmental sciences. We establish stationarity and geometric ergodicity of these processes under suitable conditions on the parameters of the model. These properties are important for understanding the stability of the model as well as for deriving the asymptotic behavior of the parameter estimators.
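A minimal simulation sketch of such a process is given below, assuming a logistic link between the success probability and a linear combination of the previous observation and an exogenous covariate. The link function and all parameter values are invented for illustration; the paper's exact model specification may differ:

```python
# Illustrative sketch (not the paper's exact specification): a binary
# autoregression where P(Y_t = 1) depends on the previous observation
# Y_{t-1} and an exogenous variable X_t through a logistic link.
import math
import random

def simulate_binary_ar(n, alpha=-0.5, beta=1.0, gamma=0.8, seed=0):
    """Simulate n steps of Y_t ~ Bernoulli(logistic(alpha + beta*Y_{t-1} + gamma*X_t))."""
    rng = random.Random(seed)
    y_prev, ys = 0, []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)                  # exogenous covariate
        eta = alpha + beta * y_prev + gamma * x  # linear predictor
        p = 1.0 / (1.0 + math.exp(-eta))         # logistic link
        y = 1 if rng.random() < p else 0
        ys.append(y)
        y_prev = y
    return ys

path = simulate_binary_ar(1000)
```

Because the success probability stays bounded away from 0 and 1, a long simulated path visits both states; the geometric ergodicity established in the paper concerns exactly this kind of stability.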

- Structure and pressure drop of real and virtual metal wire meshes (2009)
- An efficient mathematical model to virtually generate woven metal wire meshes is presented. The accuracy of this model is verified by comparing virtual structures with three-dimensional images of real meshes produced via computed tomography. Virtual structures are generated for three types of metal wire meshes using only easy-to-measure parameters. For these geometries the velocity-dependent pressure drop is simulated and compared with measurements performed by GKD - Gebr. Kufferath AG. The simulation results lie within the tolerances of the measurements. The generation of the structures and the numerical simulations were carried out at GKD using the Fraunhofer GeoDict software.

- Maximum Likelihood Estimators for Multivariate Hidden Markov Mixture Models (2013)
- In this paper we consider a multivariate switching model with constant state means and covariances. In this model, the switching mechanism between the basic states of the observed time series is controlled by a hidden Markov chain. As an illustration, we prove, under a Gaussian assumption on the innovations and some rather simple conditions, the consistency and asymptotic normality of the maximum likelihood estimates of the model parameters.
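The likelihood that the maximum likelihood estimator maximizes can be evaluated with the standard forward recursion. The sketch below uses a two-state hidden Markov model with Gaussian state-dependent distributions, univariate for brevity (the paper treats the multivariate case), and all parameter values are invented:

```python
# Hedged sketch: log-likelihood of a Gaussian hidden Markov model via
# the forward algorithm with per-step scaling for numerical stability.
# An MLE routine would maximize this function over pi, A, mus, sigmas.
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hmm_loglik(obs, pi, A, mus, sigmas):
    """pi: initial distribution, A: transition matrix, mus/sigmas: state params."""
    k = len(pi)
    loglik = 0.0
    alpha = []
    for t, x in enumerate(obs):
        if t == 0:
            alpha = [pi[i] * gauss_pdf(x, mus[i], sigmas[i]) for i in range(k)]
        else:
            alpha = [sum(alpha[j] * A[j][i] for j in range(k))
                     * gauss_pdf(x, mus[i], sigmas[i]) for i in range(k)]
        c = sum(alpha)            # scaling constant = P(x_t | x_1..x_{t-1})
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]      # invented transition probabilities
ll = hmm_loglik([0.1, -0.2, 3.1, 2.9], pi, A, mus=[0.0, 3.0], sigmas=[1.0, 1.0])
```

The scaling at each step avoids the underflow that the unscaled forward probabilities would suffer on long series, while leaving the log-likelihood exact.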

- Geometric characterization of particles in 3d with an application to technical cleanliness (2011)
- Continuously improving imaging technologies make it possible to capture the complex spatial geometry of particles. Consequently, methods to characterize their three-dimensional shapes must become more sophisticated, too. Our contribution to the geometric analysis of particles based on 3d image data is to unambiguously generalize the size and shape descriptors used in 2d particle analysis to the spatial setting. While defined and meaningful for arbitrary particles, the characteristics were selected with the application to technical cleanliness in mind. Residual dirt particles can seriously harm mechanical components in vehicles, machines, or medical instruments. 3d geometric characterization based on micro-computed tomography makes it possible to detect dangerous particles reliably and with high throughput, and thus enables intervention within the production line. Analogously to the commonly agreed standards for the two-dimensional case, we show how to classify 3d particles as granules, chips, and fibers on the basis of the chosen characteristics. The application to 3d image data of dirt particles is demonstrated.
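As a rough illustration of such a granule/chip/fiber classification, one can threshold simple axis-ratio descriptors computed from a particle's three principal axis lengths. The descriptors and thresholds below are invented for the example and are not the paper's criteria:

```python
# Illustrative sketch only: classify a 3d particle as granule, chip,
# or fiber from the lengths a >= b >= c of its three principal axes.
# Thresholds are invented; the paper derives its classification from
# rigorously generalized 2d standards.

def classify_particle(a, b, c):
    """a >= b >= c: principal-axis lengths of the particle's bounding shape."""
    elongation = b / a   # close to 1: not elongated
    flatness = c / b     # close to 1: not flat
    if elongation < 0.33:
        return "fiber"    # one dominant axis
    if flatness < 0.33:
        return "chip"     # two dominant axes, thin in the third
    return "granule"      # roughly isometric

labels = [classify_particle(*dims) for dims in [(10, 9, 8), (10, 2, 2), (10, 9, 1)]]
```

A roughly isometric particle (10, 9, 8) lands in the granule class, a long thin one (10, 2, 2) in the fiber class, and a flat one (10, 9, 1) in the chip class.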