## Fachbereich Informatik

### Refine

#### Year of publication

- 1999 (267)
- 1996 (50)
- 1994 (49)
- 1995 (48)
- 1998 (38)
- 1997 (35)
- 2016 (25)
- 1993 (22)
- 2015 (22)
- 2001 (21)
- 2007 (19)
- 2013 (18)
- 2002 (17)
- 2018 (17)
- 2003 (16)
- 2014 (15)
- 2012 (14)
- 1992 (13)
- 2000 (13)
- 2004 (12)
- 2006 (11)
- 2009 (11)
- 2008 (9)
- 2005 (8)
- 2017 (8)
- 1991 (7)
- 2010 (7)
- 2019 (7)
- 2011 (5)
- 1979 (2)
- 1980 (1)
- 1983 (1)
- 1990 (1)

#### Document Type

- Preprint (346)
- Doctoral Thesis (144)
- Report (139)
- Article (109)
- Master's Thesis (45)
- Study Thesis (13)
- Conference Proceeding (8)
- Bachelor Thesis (2)
- Habilitation (2)
- Part of a Book (1)

#### Keywords

- AG-RESY (64)
- PARO (31)
- Case-Based Reasoning (20)
- Visualisierung (17)
- SKALP (16)
- CoMo-Kit (15)
- Fallbasiertes Schliessen (12)
- RODEO (12)
- Robotik (12)
- HANDFLEX (11)

In this work we propose a set of term-rewriting techniques for modelling object-oriented computation. Based on symbolic variants of explicit substitution calculi, we show how to deal with imperative statements such as assignment and sequencing in specifications in a purely declarative style. Under our model, computation with classes and objects becomes simply normal-form calculation, exactly as is the case in term-rewriting-based languages (for instance, functional languages). We believe this kind of unification between functions and objects is important because it provides plausible alternatives for using term-rewriting theory as an engine for supporting formal and mechanical reasoning about object-oriented specifications.
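To illustrate the flavor of "computation as normal-form calculation", here is a small, self-contained rewriting engine; the term encoding (nested tuples, variables as `$`-prefixed strings) and the Peano-addition rules are illustrative choices of ours, not the calculus from the paper:

```python
# Terms are nested tuples; rewrite rules are (lhs, rhs) pairs with
# pattern variables written as strings starting with "$".

def match(pattern, term, env):
    """Try to bind pattern variables to subterms; return env or None."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in env:
            return env if env[pattern] == term else None
        env = dict(env)
        env[pattern] = term
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

def subst(pattern, env):
    if isinstance(pattern, str) and pattern in env:
        return env[pattern]
    if isinstance(pattern, tuple):
        return tuple(subst(p, env) for p in pattern)
    return pattern

def step(term, rules):
    """One rewrite step: try rules at the root, then in subterms."""
    for lhs, rhs in rules:
        env = match(lhs, term, {})
        if env is not None:
            return True, subst(rhs, env)
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            changed, new = step(sub, rules)
            if changed:
                return True, term[:i] + (new,) + term[i + 1:]
    return False, term

def rewrite(term, rules):
    """Computation = applying rules until a normal form is reached."""
    changed = True
    while changed:
        changed, term = step(term, rules)
    return term

# Peano addition: add(0, y) -> y ;  add(s(x), y) -> s(add(x, y))
rules = [(("add", "0", "$y"), "$y"),
         (("add", ("s", "$x"), "$y"), ("s", ("add", "$x", "$y")))]

two_plus_one = ("add", ("s", ("s", "0")), ("s", "0"))
print(rewrite(two_plus_one, rules))   # ('s', ('s', ('s', '0')))
```

In a functional language the same evaluation happens implicitly; the point of the paper is that object and class computation can be cast in exactly this normal-form style.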

Concept mapping is a simple and intuitive visual form of knowledge representation. Concept maps can be categorized as informal or formal, where the latter are characterized by implementing a semantic model that constrains their components. Software engineering is a domain that has successfully adopted formal concept maps to visualize and specify complex systems. Automated tools have been implemented to support these models, although their semantic constraints are hardcoded within the systems and hidden from users. This paper presents the Constraint Graphs and jKSImapper systems. Constraint Graphs is a flexible and powerful graphical system interface for specifying concept mapping notations. In addition, jKSImapper is a multi-user concept mapping editor for the Internet and the World Wide Web. Together, these systems aim to support user-definable formal concept mapping notations and distributed collaboration on the Internet and the World Wide Web.

The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. Thus, we have accumulated some experience about how adaptation processes can be expressed and formalized. The literature on the subject is quite extensive but demonstrates a lack of formalism; at most, there exist some classifications of different types of adaptation.

We present a way to describe Reason Maintenance Systems using the same formalism for justification-based as well as for assumption-based approaches. This formalism uses labelled formulae and thus is a special case of Gabbay's labelled deductive systems. Since our approach is logic based, we are able to get a semantics-oriented description of the systems in question. Instead of restricting ourselves to e.g. propositional Horn formulae, as was done in the past, we admit arbitrary logics. This enables us to characterize systems as a whole, including both the reason maintenance component and the problem solver, nevertheless maintaining a separation between the basic logic and the part that describes the label propagation. The possibility to freely vary the basic logic enables us not only to describe various existing systems, but can help in the design of completely new ones. We also show that it is possible to implement systems based directly on our labelled logic and plead for "incremental calculi" crafted to attack undecidable logics. Furthermore it is shown that the same approach can be used to handle default reasoning, if the propositional labels are upgraded to first order.

The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of their curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data; thereby the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.

Caching has long been used to reduce average access latency, from registers and memory pages cached by hardware, to the application level such as a web browser retaining retrieved documents. We focus here on the high-level caching of potentially shared networked documents and define two terms in relation to this type of caching: Zero latency refers to the condition where access to a document produces a cache hit on the local machine, that is, there is little or no latency due to the network (we assume that latency due to local disk and memory access is insignificant in comparison to network latency). A document with zero latency usually has been placed in the cache after a previous access, or has been pulled there through some prefetching mechanism. Negative latency refers to automatic presentation, or push, of a document to a user based on a prediction that the user will want that document. With an ideal system, a user would be presented with documents either that she was about to request, or that she would not know to request but that would be immediately useful to her.
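The two latency notions above can be sketched with a minimal document cache: a hit on the local machine gives "zero latency", and pushing a predicted document ahead of time gives "negative latency". All names below, and the LRU eviction policy, are illustrative assumptions of ours, not details from the text:

```python
from collections import OrderedDict

class DocumentCache:
    """Toy cache of networked documents with LRU eviction."""

    def __init__(self, capacity=3):
        self.docs = OrderedDict()          # document id -> content
        self.capacity = capacity

    def access(self, doc_id, fetch):
        """Return (content, hit). On a miss, fetch over the 'network'."""
        if doc_id in self.docs:
            self.docs.move_to_end(doc_id)  # zero latency: local hit
            return self.docs[doc_id], True
        content = fetch(doc_id)            # network latency incurred here
        self._store(doc_id, content)
        return content, False

    def _store(self, doc_id, content):
        self.docs[doc_id] = content
        self.docs.move_to_end(doc_id)
        if len(self.docs) > self.capacity:
            self.docs.popitem(last=False)  # evict least recently used

    def push(self, doc_id, fetch):
        """Prefetch a predicted document so a later access is a hit."""
        if doc_id not in self.docs:
            self._store(doc_id, fetch(doc_id))

cache = DocumentCache()
fetch = lambda d: f"<contents of {d}>"
_, hit = cache.access("index.html", fetch)   # miss: network fetch
cache.push("news.html", fetch)               # push a predicted document
_, hit2 = cache.access("news.html", fetch)   # zero-latency hit
print(hit, hit2)                             # False True
```

The `push` call is the "negative latency" case: the document is already local before the user asks for it.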

We will answer a question posed in [DJK91], and will show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may arise not only if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, in particular the option to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

Computer-based simulation and visualization of the acoustics of a virtual scene can aid during the design process of concert halls, lecture rooms, theaters, or living rooms, since not only the visual aspect of a room is important but also its acoustics. On factory floors, noise reduction is important because noise is hazardous to health. Despite the obvious dissimilarity between our aural and visual senses, many techniques required for the visualization of photo-realistic images and for the auralization of acoustic environments are quite similar. Both applications can be served by geometric methods such as particle and ray tracing if we neglect a number of less important effects. By means of the simulation of room acoustics we want to predict the acoustic properties of a virtual model. For auralization, a pulse response filter needs to be assembled for each pair of source and listener positions. The convolution of this filter with an anechoic source signal provides the signal received at the listener position. Hence, the pulse response filter must contain all reverberations (echoes) of a unit pulse, including their frequency decompositions due to absorption at different surface materials. For the room-acoustic simulation, a method named phonon tracing is developed, so called because it is based on particles. The approach computes the energy or pressure decomposition for each particle (phonon) sent out from a sound source and uses this in a second pass (phonon collection) to construct the response filters for different listeners. This step can be performed at different precision levels. During the tracing step, particle paths and additional information are stored in a so-called phonon map. Using this map, several sound visualization approaches were developed. From the visualization, the effect of different materials on the spectral energy/pressure distribution can be observed. The first few reflections already show whether certain frequency bands are rapidly absorbed. The absorbing materials can be identified and replaced in the virtual model, improving the overall acoustic quality of the simulated room. Furthermore, an insight into the pressure/energy received at the listener position is possible. The phonon tracing algorithm as well as several sound visualization approaches are integrated into a common system utilizing Virtual Reality technologies in order to facilitate immersion into the virtual scene. The system is a prototype developed within a project at the University of Kaiserslautern and is still subject to further improvements. It consists of a stereoscopic back-projection system for visual rendering as well as professional audio equipment for auralization purposes.
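The two-pass idea (phonon tracing, then phonon collection) can be sketched in miniature; the materials, absorption coefficients, frequency bands, and random geometry below are illustrative stand-ins of ours, not values from the system described above:

```python
import random

BANDS = ("low", "mid", "high")
# Per-band absorption per bounce (illustrative): carpet eats high frequencies.
ABSORPTION = {"carpet":   {"low": 0.10, "mid": 0.30, "high": 0.60},
              "concrete": {"low": 0.02, "mid": 0.03, "high": 0.05}}

def trace_phonons(n, bounces, materials, seed=0):
    """Pass 1: trace n phonons, recording path length and remaining energy."""
    rng = random.Random(seed)
    phonon_map = []
    for _ in range(n):
        energy = {b: 1.0 / n for b in BANDS}      # unit pulse, split over phonons
        path_len = 0.0
        for _ in range(bounces):
            m = rng.choice(materials)             # surface hit (geometry stubbed)
            path_len += rng.uniform(1.0, 5.0)     # metres to next surface
            for b in BANDS:                       # frequency-dependent loss
                energy[b] *= 1.0 - ABSORPTION[m][b]
        phonon_map.append((path_len, energy))
    return phonon_map

def collect(phonon_map, speed_of_sound=343.0):
    """Pass 2: bin phonon arrivals into a coarse energy response over time."""
    response = {}
    for path_len, energy in phonon_map:
        t = round(path_len / speed_of_sound, 2)   # arrival-time bin (seconds)
        for b in BANDS:
            response[(t, b)] = response.get((t, b), 0.0) + energy[b]
    return response

pm = trace_phonons(n=1000, bounces=3, materials=["carpet", "concrete"])
resp = collect(pm)
# High frequencies are absorbed faster than low ones after a few bounces:
print(sum(e["high"] for _, e in pm) < sum(e["low"] for _, e in pm))  # True
```

A real implementation would trace actual reflection geometry and assemble a pulse response filter per listener; the sketch only shows how per-band energies decay in pass 1 and are aggregated in pass 2.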

Three dimensional (3d) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high precision structured light scanners. Lower precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3d coordinates from two dimensional (2d) camera images.
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU based stereo matching method is presented to achieve a high quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images. A multi level approach on the GPU allows a fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for the scanning of complex shaped objects. The operator of the scanner can scan complex regions slower, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer aided design (CAD) geometry is reconstructed in a separate step after the scanning. Errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows the reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line. The operator can detect the areas in the scan where the reconstruction needs additional data.
First, the point data is thinned out using an octree based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation using a region growing approach. Implicit quadrics are fitted to these segments. The canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to get a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures that the boundary is closed and avoids self-intersections.
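As a rough illustration of the thinning step, the sketch below keeps one representative point per occupied cell of a uniform grid; this is a simplified stand-in of ours for the octree-based data structure described above, not the thesis's actual algorithm:

```python
def thin_points(points, cell_size):
    """Reduce a point set by keeping one point per occupied grid cell.

    points: iterable of (x, y, z) tuples; returns the reduced list.
    """
    cells = {}
    for p in points:
        # Integer cell index per axis; nearby points share a cell.
        key = tuple(int(c // cell_size) for c in p)
        cells.setdefault(key, p)           # first point in a cell wins
    return list(cells.values())

cloud = [(0.10, 0.10, 0.0), (0.12, 0.11, 0.0),   # near-duplicates
         (0.90, 0.90, 0.0), (2.00, 2.00, 2.0)]
print(len(thin_points(cloud, cell_size=0.5)))    # 3
```

An octree refines this idea hierarchically, so dense regions (scanned slowly or repeatedly) can be thinned adaptively rather than with one global cell size.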

The reasoning power of human-oriented plan-based reasoning systems is primarily derived from their domain-specific problem solving knowledge. Such knowledge is, however, intrinsically incomplete. In order to model the human ability of adapting existing methods to new situations we present in this work a declarative approach for representing methods, which can be adapted by so-called meta-methods. Since apparently the success of this approach relies on the existence of general and strong meta-methods, we describe several meta-methods of general interest in detail by presenting the problem solving process of two familiar classes of mathematical problems. These examples should illustrate our philosophy of proof planning as well: besides planning with the current repertoire of methods, the repertoire of methods evolves with experience in that new ones are created by meta-methods which modify existing ones.

INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.

In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.

Extending the plan-based paradigm for automated theorem proving, we developed in previous work a declarative approach towards representing methods in a proof planning framework to support their mechanical modification. This paper presents a detailed study of a class of particular methods, embodying variations of a mathematical technique called diagonalization. The purpose of this paper is mainly twofold. First we demonstrate that typical mathematical methods can be represented in our framework in a natural way. Second we illustrate our philosophy of proof planning: besides planning with a fixed repertoire of methods, meta-methods create new methods by modifying existing ones. With the help of three different diagonalization problems we present an example trace protocol of the evolution of methods: an initial method is extracted from a particular successful proof. This initial method is then reformulated for the subsequent problems, and more general methods can be obtained by abstracting existing methods. Finally we come up with a fairly abstract method capable of dealing with all three problems, since it captures the very key idea of diagonalization.

Adaptive Extraction and Representation of Geometric Structures from Unorganized 3D Point Sets
(2009)

The primary emphasis of this thesis concerns the extraction and representation of intrinsic properties of three-dimensional (3D) unorganized point clouds. The points constituting a point cloud, as it typically emerges from LiDAR (Light Detection and Ranging) scan devices or from reconstruction of two-dimensional (2D) image series, represent discrete samples of real-world objects. Depending on the type of scenery the data is generated from, the resulting point cloud may exhibit a variety of different structures. Especially in the case of environmental LiDAR scans, the complexity of the corresponding point clouds is relatively high. Hence, finding new techniques allowing the efficient extraction and representation of the underlying structural entities becomes an important research issue of recent interest. This thesis introduces new methods regarding the extraction and visualization of structural features like surfaces and curves (e.g. ridge lines, creases) from 3D (environmental) point clouds. One main part concerns the extraction of curve-like features from environmental point data sets. It provides a new method supporting a stable feature extraction by incorporating a probability-based point classification scheme that characterizes individual points regarding their affiliation to surface-, curve-, and volume-like structures. Another part is concerned with the surface reconstruction from (environmental) point clouds exhibiting objects of varying complexity. A new method providing multi-resolutional surface representations from regular point clouds is discussed. Following the principles of this approach, a volumetric surface reconstruction method based on the proposed classification scheme is introduced. It allows the reconstruction of surfaces from highly unstructured and noisy point data sets. Furthermore, contributions in the field of reconstructing 3D point clouds from 2D image series are provided.
In addition, a discussion concerning the most important properties of (environmental) point clouds with respect to feature extraction is presented.

This document introduces the extension of Katja to support position structures and explains the subtleties of their application as well as the design decisions made and problems solved with respect to their implementation. The Katja system was first introduced by Jan Schäfer in the context of his project work and is based on the MAX system developed by Arnd Poetzsch-Heffter.

If gradient-based optimization algorithms are used to improve industrial products by minimizing their target functions, the derivatives need to be exact. The last percent of possible improvement, such as the efficiency of a turbine, can only be gained if the derivatives are consistent with the solution process that is used in the simulation software. It is problematic that the development of the simulation software is an ongoing process, which leads to the use of approximated derivatives: if a derivative computation is implemented manually, it will become inconsistent after some time if it is not updated. This thesis presents a generalized approach which differentiates the whole simulation software with Algorithmic Differentiation (AD) and guarantees a correct and consistent derivative computation after each change to the software. For this purpose, the variable tagging technique is developed. The technique checks at run time whether all dependencies used by the derivative algorithms are correct. Since it is also necessary to check the correctness of the implementation, a theorem is developed which describes how AD derivatives can be compared. This theorem is used to develop further methods that can detect and correct errors. All methods are designed such that they can be applied in real-world applications and are used within industrial configurations.

The process described above yields consistent and correct derivatives, but the efficiency can still be improved. This is done by deriving new derivative algorithms. A fixed-point iterator approach with a consistent derivation yields all state-of-the-art algorithms and produces two new ones. These two new algorithms include all implementation details and therefore produce consistent derivative results. For detecting hot spots in the application, the state-of-the-art techniques are presented and extended. The data management is changed such that the performance of the software is affected only marginally when quantities like the number of input and output variables or the memory consumption are computed for the detection. The hot spots can be treated with techniques like checkpointing or preaccumulation. How these techniques change the time and memory consumption is analyzed, and it is shown how they need to be used in selected AD tools.

As a last step, the AD tools used are analyzed in more detail. The major implementation strategies for operator-overloading AD tools are presented, and implementation improvements for existing AD tools are discussed. The discussion focuses on a minimal memory consumption and makes it possible to compare AD tools on a theoretical level. The new AD tool CoDiPack is based on these findings, and its design and concepts are presented. The improvements and findings in this thesis make it possible to generate automatic, consistent, and correct derivatives in an efficient way for industrial applications.
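For a flavor of operator-overloading AD, and of checking an AD derivative against an independent computation, here is a deliberately tiny forward-mode sketch. CoDiPack itself is far more sophisticated (reverse mode, tapes, expression templates); the function `f` below merely stands in for "the simulation code":

```python
import math

class Dual:
    """A value paired with its derivative; overloaded operators propagate both."""

    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    """Elemental function with its derivative rule."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    """Stand-in for simulation code: f(x) = x^2 + sin(x)."""
    return x * x + sin(x)

x0 = 1.3
ad = f(Dual(x0, 1.0)).dot    # AD derivative: 2*x0 + cos(x0), exact by construction
h = 1e-7                     # finite differences as an independent comparison
fd = (f(Dual(x0 + h)).val - f(Dual(x0)).val) / h
print(abs(ad - fd) < 1e-5)   # True: the two derivative computations agree
```

Because every arithmetic operation carries its derivative along, changing `f` automatically changes the derivative with it; there is no hand-written derivative that could drift out of sync with the solver.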

Automated theorem proving is a search problem and, by its undecidability, a very difficult one. The challenge in the development of a practically successful prover is the mapping of the extensively developed theory into a program that runs efficiently on a computer. Starting from a level-based system model for automated theorem provers, in this work we present different techniques that are important for the development of powerful equational theorem provers. The contributions can be divided into three areas: Architecture. We present a novel prover architecture that is based on a set-based compression scheme. With moderate additional computational costs we achieve a substantial reduction of the memory requirements. Further wins are architectural clarity, the easy provision of proof objects, and a new way to parallelize a prover which shows respectable speed-ups in practice. The compact representation paves the way to new applications of automated equational provers in the area of verification systems. Algorithms. To improve the speed of a prover we need efficient solutions for the most time-consuming sub-tasks. We demonstrate improvements of several orders of magnitude for two of the most widely used term orderings, LPO and KBO. Other important contributions are a novel generic unsatisfiability test for ordering constraints and, based on that, a sufficient ground reducibility criterion with an excellent cost-benefit ratio. Redundancy avoidance. The notion of redundancy is of central importance to justify simplifying inferences which are used to prune the search space. In our experience with unfailing completion, the usual notion of redundancy is not strong enough. In the presence of associativity and commutativity, the provers often get stuck enumerating equations that are permutations of each other. By extending and refining the proof ordering, many more equations can be shown redundant. 
Furthermore, our refinement of the unfailing completion approach allows us to use redundant equations for simplification without the need to consider them for generating inferences. We describe the efficient implementation of several redundancy criteria and experimentally investigate their influence on the proof search. The combination of these techniques results in a considerable improvement of the practical performance of a prover, which we demonstrate with extensive experiments for the automated theorem prover Waldmeister. The progress achieved allows the prover to solve problems that were previously out of reach. This considerably enhances the potential of the prover and opens up the way for new applications.
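For context on the term orderings mentioned above: LPO and KBO decide whether one term is "larger" than another, which is what orients equations into rewrite rules. A textbook (deliberately unoptimized) lexicographic path ordering can be sketched as follows; the precedence and example terms are illustrative choices of ours, unrelated to Waldmeister's optimized implementation:

```python
def occurs(x, t):
    """Does variable x occur in term t? Variables are strings, compounds tuples."""
    return x == t if isinstance(t, str) else any(occurs(x, a) for a in t[1:])

def lpo_gt(s, t, prec):
    """s >_lpo t, with precedence given as a list from smallest to largest."""
    if s == t:
        return False
    if isinstance(t, str):                       # LPO1: t is a variable in s
        return occurs(t, s)
    if isinstance(s, str):
        return False                             # a variable dominates nothing else
    f, s_args = s[0], s[1:]
    g, t_args = t[0], t[1:]
    # LPO2: some argument of s already covers t.
    if any(a == t or lpo_gt(a, t, prec) for a in s_args):
        return True
    # For LPO3/LPO4, s must dominate every argument of t.
    if all(lpo_gt(s, b, prec) for b in t_args):
        if prec.index(f) > prec.index(g):        # LPO3: larger head symbol
            return True
        if f == g:                               # LPO4: lexicographic arguments
            for a, b in zip(s_args, t_args):
                if a != b:
                    return lpo_gt(a, b, prec)
    return False

prec = ["0", "s", "add", "mul"]                  # precedence: mul > add > s > 0
dist_l = ("mul", "x", ("add", "y", "z"))         # x * (y + z)
dist_r = ("add", ("mul", "x", "y"), ("mul", "x", "z"))
print(lpo_gt(dist_l, dist_r, prec))              # True: orientable left-to-right
```

The naive recursion above recomputes subcomparisons exponentially often; the orders-of-magnitude speedups claimed in the abstract come precisely from avoiding such recomputation.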

Stochastic Network Calculus (SNC) emerged from two branches in the late 90s: the theory of effective bandwidths and its predecessor, the Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and support their design and control. In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective to compute exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a system's maximal queue length that is violated by a known small probability only.

This work includes several contributions towards the theory of SNC. They are sorted into five main contributions:

(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations. They allow the derivation of performance bounds and the simplification of complex scenarios.

(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows users to implement their own functions, like traffic models or analysis methods.

(3) The problem of the initial modeling step is addressed with the development of a statistical network calculus. In many applications the properties of the included elements are mostly unknown. To that end, assumptions about the underlying processes are made and backed by measurement-based statistical methods. This thesis presents a way to integrate possible modeling errors into the bounds of SNC. As a byproduct, a dynamic view on the system is obtained that allows SNC to adapt to non-stationarities.

(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although still valid for every time t, only hold for one time instance at once. Sample-path bounds are only achieved by using Boole's inequality. This thesis presents an alternative method, by adapting the theory of extreme values.

(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago, but remained an open problem for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
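To make the notion of an MGF-based stochastic performance bound concrete, here is a sketch of a standard textbook backlog bound for a single queue (not the thesis's tool or models; the traffic scenario and numbers are illustrative). For i.i.d. per-slot arrivals A and constant service c per slot, a union-bound argument over the geometric sum of MGF terms gives P(Q > b) <= min over theta of rho/(1 - rho) * exp(-theta*b), with rho = E[exp(theta*A)] * exp(-theta*c) < 1:

```python
import math

def mgf_bernoulli_burst(theta, p=0.3, burst=2.0):
    """MGF of per-slot arrivals: a burst of size 2 with prob 0.3, else nothing."""
    return (1 - p) + p * math.exp(theta * burst)

def backlog_bound(b, c=1.0, thetas=None):
    """Optimize the MGF backlog bound over a grid of theta values."""
    thetas = thetas or [k / 100 for k in range(1, 400)]
    best = 1.0                             # a probability bound above 1 is vacuous
    for th in thetas:
        rho = mgf_bernoulli_burst(th) * math.exp(-th * c)
        if rho < 1:                        # stability condition at this theta
            best = min(best, rho / (1 - rho) * math.exp(-th * b))
    return best

for b in (2, 5, 10):
    print(f"P(Q > {b}) <= {backlog_bound(b):.3g}")
```

This is exactly the "inequalities instead of exact distributions" trade: the bound is loose but needs only the arrival MGF, and the free parameter theta is optimized numerically, mirroring contribution (2) in miniature.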

Accelerating the maturation process within the software engineering discipline may result in boosts of development productivity. One way to enable this acceleration is to develop tools and processes that mimic the evolution of traditional engineering disciplines. Principles established in traditional engineering disciplines represent high-level guidance for constructing these tools and processes. This paper discusses two principles found in the traditional engineering disciplines and how they can be applied to mature the software engineering discipline. The discussion is concretized through a description of the Collaborative Management Environment, a software system under collaborative development among several national laboratories.

This paper shows an approach to profit from type information about planning objects in a partial-order planner. The approach turns out to combine representational and computational advantages. On the one hand, type hierarchies allow better structuring of domain specifications. On the other hand, operators contain type constraints which reduce the search space of the planner as they partially achieve the functionality of filter conditions.

Integrated project management means that design and planning are interleaved with plan execution, allowing both the design and plan to be changed as necessary. This requires that the right effects of change are propagated through the plan and design. When this is distributed among designers and planners, no one may have all of the information to perform such propagation and it is important to identify what effects should be propagated to whom when. We describe a set of dependencies among plan and design elements that allow such notification by a set of message-passing software agents. The result is to provide a novel level of computer support for complex projects.

Based on the experiences from an autonomous mobile robot project called MOBOT-III, we found hard realtime constraints for the operating-system design. ALBATROSS is "a flexible multi-tasking and realtime network-operating-system kernel". The focus in this article is on a communication scheme fulfilling the previously demanded assurances. The central chapters discuss the shared buffer management and the way to design the communication architecture. Some further aspects beside the strict realtime requirements, like the possibilities to control and watch a running system, are mentioned. ALBATROSS is currently implemented on a multi-processor VMEbus system.

Based on experiences from an autonomous mobile robot project called MOBOT-III, we found hard realtime constraints for the operating-system design. ALBATROSS is "a flexible multi-tasking and realtime network-operating-system kernel", not limited to mobile-robot projects only, but useful wherever a high reliability of a realtime system has to be guaranteed. The focus in this article is on a communication scheme fulfilling the demanded (hard realtime) assurances without introducing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared buffer management without the need for interrupts, and a way to arrange the communication architecture in order to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.

Ultrasound is one of the most frequently used imaging modalities in cardiology, owing to its low cost, its non-invasiveness, and its harmlessness to patients. A drawback of existing devices is that only two-dimensional images can be generated. In addition, due to anatomical constraints, these images cannot be acquired from an arbitrary position. This complicates the analysis of the data and consequently the diagnosis. In this work, new algorithmic aspects of four-dimensional cardiac ultrasound were addressed, ranging from the acquisition of the raw data, their synchronization, and their reconstruction to visualization. In an additional chapter, a new technique was developed to further enhance the visualization as well as to visually edit the ultrasound data. The methods developed here make it possible to remove, or at least mitigate, certain limitations of cardiac ultrasound, above all the restriction to two-dimensional slice images and the limited choice of viewing directions.

Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques to guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is asymptotically optimal for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
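The counter-abstraction idea mentioned above can be illustrated in miniature (this is not the dissertation's tooling): instead of tracking the order of pending tasks, an abstract state records only how many instances of each task kind sit in the buffer, and reachability is explored over those counters, capped to keep the state space finite in the spirit of expand-enlarge-check. The task rules below are hypothetical.

```python
from collections import deque

def reachable(initial, rules, cap=3):
    """Explore counter states of an asynchronous program abstraction.

    initial: tuple of pending-task counts, one per task kind.
    rules:   list of (consumed_kind, [posted_kinds]) handler effects.
    cap:     counters saturate at this bound (crude finitization).
    """
    start = tuple(min(c, cap) for c in initial)
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for consumed, posted in rules:
            if state[consumed] == 0:
                continue                     # no such task pending
            nxt = list(state)
            nxt[consumed] -= 1               # dispatch one task
            for kind in posted:
                nxt[kind] = min(nxt[kind] + 1, cap)
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

For example, a handler for task kind 0 that posts two tasks of kind 1 is the rule `(0, [1, 1])`; starting from one pending task of kind 0, the abstraction reaches the quiescent state where all counters are zero.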

ALICE
(1994)

This paper presents a new kind of abstraction, which has been developed for the purpose of proof planning. The basic idea of this paper is to abstract a given theorem and to find an abstract proof of it. Once an abstract proof has been found, this proof has to be refined to a real proof of the original theorem. We present a goal-oriented abstraction for the purpose of equality proof planning, which is parameterized by common parts of the left- and right-hand sides of the given equality. Therefore, this abstraction technique provides an abstract equality problem which is more adequate than those generated by the abstractions known so far. The presented abstraction also supports the heuristic search process based on the difference reduction paradigm. We give a formal definition of the abstract space including the objects and their manipulation. Furthermore, we prove some properties in order to allow an efficient implementation of the presented abstraction.
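A generic version of "keeping the common parts of the two sides and abstracting the differences" is first-order anti-unification: the common skeleton of two terms retains matching function symbols and replaces mismatches by fresh variables. The sketch below is only an illustration of that general idea, not the paper's exact construction.

```python
def common_skeleton(s, t, counter=None):
    """Compute a common generalization of two first-order terms.

    Terms are atoms (str) or tuples (symbol, arg1, ..., argn).
    Positions where the terms differ become fresh variables X1, X2, ...
    """
    if counter is None:
        counter = [0]
    if s == t:
        return s                             # identical subterms are kept
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # same head symbol and arity: recurse into the arguments
        return (s[0],) + tuple(common_skeleton(a, b, counter)
                               for a, b in zip(s[1:], t[1:]))
    counter[0] += 1
    return f"X{counter[0]}"                  # a difference becomes a variable
```

For instance, the sides `f(a, g(b))` and `f(a, h(b))` share the skeleton `f(a, X1)`, which is exactly the kind of abstract equality problem a difference-reduction search can work on.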

There are two general approaches to providing isochronous streams in the current Internet. The first is resource reservation, through protocols such as RSVP or ATM technology. This provides bandwidth guarantees, but it also requires significant upgrading of resources in the underlying network. The other common approach is adaptive rate control, where the end-system adjusts its rate according to feedback from the client population. This approach cannot guarantee timely delivery and raises some scaling questions; however, a properly implemented scheme does improve quality, and it requires no changes to the underlying IP network. Hence, there exists a dichotomy of requirements: 1. to cater for reservation protocols or 'hooks' for future reservation components, and 2. to provide an architecture with an application-controlled QoS scheme that scales to the size of the current Internet in a best-effort architecture.
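A standard instance of the feedback-driven rate control described above (used here purely as an illustration, not taken from the paper) is additive-increase / multiplicative-decrease: the sender probes gently for bandwidth while receivers report healthy delivery, and backs off sharply when loss is reported. All constants below are illustrative.

```python
def aimd_step(rate, loss_reported, increase=50.0, decrease=0.5,
              floor=50.0, ceiling=10_000.0):
    """One feedback round of AIMD rate control; rates in kbit/s.

    loss_reported: aggregated feedback from the client population.
    """
    if loss_reported:
        rate *= decrease             # multiplicative back-off on loss
    else:
        rate += increase             # additive probing when healthy
    return max(floor, min(rate, ceiling))
```

Clamping to a floor and ceiling keeps the stream within what the application can render, which is one way an application-controlled QoS policy can sit on top of a best-effort network.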

An Adaptive and Dynamic Simulation Framework for Incremental, Collaborative Classifier Fusion
(2016)

To investigate incremental collaborative classifier fusion techniques, we have developed a comprehensive simulation framework. It is highly flexible and customizable, and can be adapted to various settings and scenarios. The toolbox is realized as an extension to the NetLogo multi-agent based simulation environment using its comprehensive Java API. The toolbox has been integrated in two different environments: one for demonstration purposes, and another, modeled on persons communicating in an ad hoc fashion using mobile devices, based on realistic motion data from Zurich.

An agent-based approach to managing distributed, multi-platform software development projects
(1999)

This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project, which aims to develop the Prompter tool, a 'decision-support tool to assist in the planning and managing of a software development project'. Prompter will have the ability to help software project managers assimilate best practice and 'know-how' in the field of software project management, and it incorporates expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on Prompter's agent-based approach to tackling the problems of distributed, platform-independent support.

A method for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers without incorporating AC as an underlying theory will be presented. The key to substantial efficiency gains lies in a more suitable representation of permutation-equations (such as f(x,f(y,z))=f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e. bijective functions π: {1,..,n} → {1,..,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
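The payoff of representing permutation-equations as actual permutations can be sketched as follows (an illustration of the general idea, not the paper's inference system): permutations compose and close under composition, so a candidate permutation-equation that is already a consequence of known ones shows up as group membership rather than requiring equational inference.

```python
from itertools import product

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations as tuples over positions 0..n-1."""
    return tuple(p[q[i]] for i in range(len(p)))

def closure(generators):
    """The group generated by the given argument-position permutations."""
    group = set(generators)
    frontier = set(generators)
    while frontier:
        new = {compose(p, q) for p, q in product(group, frontier)} - group
        group |= new                 # keep composing until nothing new appears
        frontier = new
    return group
```

For a ternary AC-like operator, the single 3-cycle `(1, 2, 0)` generates a group of three permutations including the identity; any permutation-equation whose permutation already lies in this group is redundant.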

In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.

We propose an approach to the problem of proof control for our new first-order inductive theorem prover QuodLibet that is characterized by a great deal of flexibility w.r.t. the forms of proof control the prover supports. The approach is based on so-called (proof) tactics, i.e. proof control routines written in a special proof control language named QML. QuodLibet provides a set of tactics (in addition to the elementary inference rules), which range from tactics for trivial simplification steps to tactics representing comprehensive inductive proof strategies. Moreover, QuodLibet allows new tactics that are written by the user in QML to be integrated into the system to dynamically extend its functionality.

We describe a technique to make application programs fault tolerant. This technique is based on the concept of checkpointing from an active program to one or more passive backup copies which serve as an abstraction of stable memory. If the primary copy fails, one of the backup copies takes over and resumes processing service requests. After each failure a new backup copy is created in order to restore the replication degree of the service. All mechanisms necessary to achieve and maintain fault tolerance can be added automatically to the code of a non-fault-tolerant server, thus making fault tolerance completely transparent for the application programmer.
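The primary/backup checkpointing scheme above can be rendered as a toy sketch (hypothetical code, not the paper's implementation): the primary processes each request and ships a state snapshot to a passive backup; on failure, the backup resumes from the last checkpoint and a fresh backup is created to restore the replication degree.

```python
class Server:
    """A trivial stateful server used to illustrate checkpointing."""

    def __init__(self):
        self.state = {}

    def handle(self, key, value):
        self.state[key] = value

    def checkpoint(self):
        return dict(self.state)          # snapshot acting as stable memory

    def restore(self, snapshot):
        self.state = dict(snapshot)

def run_with_backup(requests, fail_after):
    """Process requests; the primary 'crashes' before request fail_after."""
    primary, backup = Server(), Server()
    for i, (key, value) in enumerate(requests):
        if i == fail_after:
            primary = backup             # backup resumes from last checkpoint
            backup = Server()            # restore the replication degree
        primary.handle(key, value)
        backup.restore(primary.checkpoint())
    return primary.state
```

Because every request is followed by a checkpoint, no acknowledged update is lost across the failover, which is the transparency property the abstract claims for the application programmer.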

Open distributed systems are a class of distributed systems where (i) only partial information about the environment, in which they are running, is present, (ii) new resources may become available at runtime, and (iii) a subsystem may become aware of other subsystems after some interaction. Modeling and implementing such systems correctly is a complex task due to the openness and the dynamicity aspects. One way to ensure that the resulting systems behave correctly is to utilize formal verification.
Formal verification requires an adequate semantic model of the implementation, a specification of the desired behavior, and a reasoning technique. The actor model is a semantic model that captures the challenging aspects of open distributed systems by using actors as universal primitives to represent system entities, allowing them to create new actors and to communicate by sending directed messages in reply to received messages. To enable compositional reasoning, where the reasoning task is reduced to independent verification of the system parts, semantic entities at a higher level of abstraction than actors are needed.
This thesis proposes an automaton model and combines sound reasoning techniques to compositionally verify implementations of open actor systems. Based on I/O automata, the model allows automata to be created dynamically and captures dynamic changes in communication patterns. Each automaton represents either an actor or a group of actors. The specification of the desired behavior is given constructively as an automaton. As the basis for compositionality, we formalize a component notion based on the static structure of the implementation instead of the dynamic entities (the actors) occurring in the system execution. The reasoning proceeds in two stages. The first stage establishes the connection between the automata representing single actors and their implementation description by means of weakest liberal preconditions. The second stage employs this result as the basis for verifying whether a component specification is satisfied. The verification is done by building a simulation relation from the automaton representing the implementation to the component's automaton. Finally, we validate the compositional verification approach through a number of examples by proving correctness of their actor implementations with respect to system specifications.
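The second verification stage above, building a simulation relation from the implementation automaton to the component's automaton, can be illustrated on finite labeled transition systems (a sketch only; it does not capture the dynamically created I/O automata the thesis works with). The greatest simulation relation is computed by iterative refinement: a pair is removed whenever the implementation can make a move the specification cannot match.

```python
def greatest_simulation(impl, spec):
    """Transition systems as {state: {label: {successor, ...}}}."""
    rel = {(p, q) for p in impl for q in spec}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for label, succs in impl[p].items():
                spec_succs = spec[q].get(label, set())
                for p2 in succs:
                    # p --label--> p2 must be matched by some q --label--> q2
                    if not any((p2, q2) in rel for q2 in spec_succs):
                        rel.discard((p, q))
                        changed = True
                        break
                if (p, q) not in rel:
                    break
    return rel

def simulates(impl, spec, impl_init, spec_init):
    return (impl_init, spec_init) in greatest_simulation(impl, spec)
```

An implementation that only issues a `req` step is simulated by a specification allowing `req` and `ack`; one that issues a label the specification lacks is not.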

It has been observed that for understanding the biological function of certain RNA molecules, one has to study joint secondary structures of interacting pairs of RNA. In this thesis, a new approach for predicting the joint structure is proposed and implemented. For this, we introduce the class of m-dimensional context-free grammars --- an extension of stochastic context-free grammars to multiple dimensions --- and present an Earley-style semiring parser for this class. Additionally, we develop and thoroughly discuss an implementation variant of Earley parsers tailored to efficiently handle dense grammars, a class which includes the grammars used for structure prediction. A currently proposed partitioning scheme for joint secondary structures is transferred into a two-dimensional context-free grammar, which in turn is used as a stochastic model for RNA-RNA interaction. This model is trained on actual data and then used for predicting most likely joint structures for given RNA molecules. While this technique has been widely used for secondary structure prediction of single molecules, RNA-RNA interaction was hardly approached this way in the past. Although our parser has O(n^3 m^3) time complexity and O(n^2 m^2) space complexity for two RNA molecules of sizes n and m, it remains practically applicable for typical sizes if enough memory is available. Experiments show that our parser is much more efficient for this application than classical Earley parsers. Moreover, the predictions of joint structures are comparable in quality to current energy minimization approaches.
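The stochastic-grammar machinery above scores structures by rule probabilities. A compact one-dimensional analogue (a plain SCFG, without the multi-dimensional extension, and using the CYK-style inside algorithm rather than an Earley parser) is sketched below; the toy grammar in the test is hypothetical.

```python
from collections import defaultdict

def inside_probability(sequence, unary, binary, start):
    """Inside algorithm for a stochastic CFG in Chomsky normal form.

    unary:  {(A, terminal): probability} for rules A -> terminal.
    binary: {(A, B, C): probability} for rules A -> B C.
    Returns P(start derives the whole sequence).
    """
    n = len(sequence)
    table = defaultdict(float)           # table[i, j, A] = P(A => s[i:j])
    for i, symbol in enumerate(sequence):
        for (lhs, term), p in unary.items():
            if term == symbol:
                table[i, i + 1, lhs] += p
    for span in range(2, n + 1):         # fill longer spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for (lhs, left, right), p in binary.items():
                for k in range(i + 1, j):
                    table[i, j, lhs] += (p * table[i, k, left]
                                           * table[k, j, right])
    return table[0, n, start]
```

Training then amounts to estimating the rule probabilities from data, and prediction to finding the highest-probability derivation (replacing the sums with maxima).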

Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is well explored for Latin languages, some languages based on the Arabic cursive script are not yet explored, due mainly to the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language, and it needs considerable exploration towards OCR. To develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and a huge written collection. These written materials present contents of simple to complex nature, and layouts from hand-scribed to printed text. The Pashto language presents mainly two types of complexities: (i) generic, with respect to the cursive script, and (ii) specific, with respect to the Pashto language. Generic complexities are cursiveness, context dependency, breaker-character anomalies, and space anomalies. Pashto-specific complexities are variations in shape for a single character and shape similarity for some of the additional Pashto characters. Existing research in the area of Arabic OCR did not lead to an end-to-end solution for the mentioned complexities and therefore could not be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified those characters which are responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed contents. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, connecting the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by using the deep learning paradigm to recognize Pashto cursive script within the framework of Recurrent Neural Networks (RNNs). The proposed Pashto text recognition is based on the Long Short-Term Memory (LSTM) network and realizes a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application to provide a flexible and generic end-to-end Pashto OCR system.
The impact of this thesis is not specific to the Pashto language alone; it is also beneficial to other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight into, and proposes solutions for, almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among some 70 languages that mainly use the Arabic script, is deeply analyzed. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. First, a social impact, i.e., how society may benefit from it: the main advantages are bringing historical and almost vanished documents to life, and ensuring opportunities to explore, analyze, translate, share, and understand the contents of the Pashto language globally. Second, the advancement and exploration of the technical aspects, because this thesis empirically explores the recognition challenges which are solely related to the Pashto language, both regarding the character set and the materials which present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
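Among the practical-level steps listed above, text-line segmentation is the easiest to sketch. A classic baseline (illustrative only, not the thesis's method) is the horizontal projection profile: rows containing ink form a line, and a run of blank rows separates consecutive lines.

```python
def segment_lines(image):
    """Segment a binarized page image into text lines.

    image: 2D list of 0/1 pixels (1 = ink).
    Returns (top, bottom) row-index pairs, bottom exclusive, per line.
    """
    profile = [sum(row) for row in image]    # ink count per row
    lines, start = [], None
    for r, ink in enumerate(profile):
        if ink and start is None:
            start = r                        # a text line begins
        elif not ink and start is not None:
            lines.append((start, r))         # a blank row ends the line
            start = None
    if start is not None:                    # line touching the bottom edge
        lines.append((start, len(profile)))
    return lines
```

Real hand-scribed Pashto pages would first need skew correction and a smoothed profile with a noise threshold; the sketch assumes a clean, deskewed binary image.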

In recent years the demand for business process modelling (BPM) has become apparent in many different communities, e.g. information systems engineering, requirements engineering [KiB94], software engineering, and knowledge engineering (e.g. [BrV94], [SWH+94]). This suggests aiming at a unifying view on business process modelling across all these disciplines. To achieve the business goals, problems which obstruct these goals must be solved. This can be done either by restructuring the business process, by applying standard software, or by developing individual software components such as knowledge-based systems (KBSs). To be able to model business goals and to analyse problems occurring during the business processes, these processes, including organisational structures and activities, have to be modelled. This is also true when building a KBS in an enterprise environment. Because the KBS is only a small part of the whole business organisation, it must be embedded into, or at least linked to, all relevant business processes, i.e. it should not be a stand-alone solution. For this purpose, we extend the MIKE approach [AFS96] in the BMBF project WORKS (Work Oriented Design of Knowledge Systems) by offering business models for modelling relevant aspects of an enterprise. To be able to define an integrated framework with other possibilities for improving an enterprise (e.g. information systems engineering), we determine the standard views of an enterprise. Next we define the views that are necessary for developing a KBS.

We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.

The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case driven, incremental development method with SDL patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrated approach, we sketch some representative outcomes of a case study.

Mechanised reasoning systems and computer algebra systems have apparently different objectives. Their integration is, however, highly desirable, since in many formal proofs both of the two different tasks, proving and calculating, have to be performed. Even more importantly, proof and computation are often interwoven and not easily separable. In the context of producing reliable proofs, the question how to ensure correctness when integrating a computer algebra system into a mechanised reasoning system is crucial. In this contribution, we discuss the correctness problems that arise from such an integration and advocate an approach in which the calculations of the computer algebra system are checked at the calculus level of the mechanised reasoning system. This can be achieved by adding a verbose mode to the computer algebra system which produces high-level protocol information that can be processed by an interface to derive proof plans. Such a proof plan in turn can be expanded to proofs at different levels of abstraction, so the approach is well-suited for producing a high-level verbalised explication as well as a low-level machine-checkable calculus-level proof. We present an implementation of our ideas and exemplify them using an automatically solved extended example.
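The checking idea above, that the reasoner re-verifies the computer algebra system's verbose protocol rather than trusting its answer, can be rendered as a toy (far simpler than the actual interface): here the hypothetical CAS emits a factorization trace, and the checker validates every step with its own trusted rules.

```python
def check_factorization_trace(n, trace):
    """Check a verbose CAS 'protocol': a claimed prime factorization of n.

    Each step is re-verified independently: every factor must really be
    prime, and the product of the steps must reproduce the input.
    """
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    product = 1
    for factor in trace:
        if not is_prime(factor):
            return False                 # reject an unjustified step
        product *= factor
    return product == n                  # the whole derivation must close
```

The division of labour mirrors the paper's architecture: the untrusted system does the clever search, while correctness rests only on the small checker.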

The paper presents a process-oriented view on knowledge management in software development. We describe requirements on knowledge management systems from a process-oriented perspective, introduce the process modeling language MILOS, and describe its use for knowledge management. Then we explain how a process-oriented knowledge management system can be implemented using advanced but available information technologies.

This paper describes an Internet-scalable knowledge base infrastructure for managing the knowledge used by an intelligent software productivity infrastructure system. The infrastructure provides workable solutions for several significant issues: (1) Internet-unique names for pieces of knowledge; (2) multi-platform, multi-language support; (3) distributed knowledge base synchronization mechanisms; (4) support for extensive customized variations in knowledge content; and (5) knowledge caching mechanisms for improved system performance. The infrastructure described here is a workable example of the kind of infrastructure that will be required to manage the evolution and reuse of millions of pieces of knowledge in the future.

This report explains basic notions and concepts of Abstract State Machines (ASM) as well as notation for defining ASM models. The objective here is to provide an intuitive understanding of the formalism; for a rigorous definition of the mathematical foundations of ASM, the reader is referred to [2] and [3]. Further references on ASM-related material can be found on the ASM Web Pages [1].
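The core ASM notions the report covers can be conveyed with a tiny interpreter (a simplification for intuition only, not part of the formal definition): a state maps locations to values, each rule contributes updates, and a step fires all rules simultaneously, applying the update set only if it is consistent, i.e. no location is assigned two different values.

```python
def asm_step(state, rules):
    """Perform one ASM step.

    state: dict mapping locations to values.
    rules: callables, each mapping the current state to a dict of updates.
    """
    updates = {}
    for rule in rules:
        for loc, val in rule(state).items():
            if loc in updates and updates[loc] != val:
                # clashing updates: the step is undefined
                raise ValueError(f"inconsistent update set at {loc!r}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)            # simultaneous application
    return new_state
```

Simultaneity is what distinguishes this from sequential assignment: the rule pair `x := y` and `y := x` swaps the two locations in one step, because both updates read the old state.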