## Fachbereich Informatik


Building interoperation among separately developed software units requires checking their conceptual assumptions and constraints. However, eliciting such assumptions and constraints is time-consuming and challenging, as it requires analyzing each of the interoperating software units. To address this issue, we proposed a new conceptual interoperability analysis approach that aims at decreasing the analysis cost and the conceptual mismatches between the interoperating software units. In this report we present the design of a planned controlled experiment for evaluating the effectiveness, efficiency, and acceptance of our proposed conceptual interoperability analysis approach. The design includes the study objectives, research questions, statistical hypotheses, and experimental design. It also provides the materials that will be used in the execution phase of the planned experiment.

Most innovation in the automotive industry is driven by embedded systems. To remain safe, such systems make use of dynamic adaptation to environmental changes or component/subsystem failures. Following this evolution, fault tree analysis techniques have been extended with concepts for dynamic adaptation, but the resulting techniques, like state event fault tree analysis, are not widely used in practice.
In this report we present the results of a controlled experiment that analyzes two such techniques (state event fault trees and fault trees combined with Markov chains) with regard to their applicability and efficiency in modeling the dynamic behavior of dynamic embedded systems.
The experiment was conducted with students of the TU Kaiserslautern, who modeled different safety aspects of an ambient assisted living system.
The main result of the experiment is that SEFTs were easier and more effective to use.

Much of the evolution in ambient assisted living is due to embedded systems that dynamically adapt themselves to environmental changes or component/subsystem failures in order to maintain a certain level of safety. Following this evolution, fault tree analysis techniques have been extended with concepts for dynamic adaptation, but the resulting techniques, such as dynamic fault trees or state event fault tree analysis, are not as widely used as expected.
In this report we describe a controlled experiment that analyzes these two techniques with regard to their applicability and efficiency in modeling the dynamic behavior of ambient assisted living systems.
The results of the experiment show that dynamic fault trees are easier and more effective to use, although participants produced better results (models) with state event fault trees.

Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down and increasingly puts the expected advantages at risk. To investigate this variability erosion and keep productivity at a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy is to optimize variable code within the scope of CC, the other is to transition from CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as the applicability and cost of the improvement are also discussed.
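The abstract does not name the concrete erosion symptoms it measures; one symptom commonly associated with CC erosion is deeply nested `#ifdef` blocks. The following sketch, with an entirely hypothetical metric and a toy input, shows how such a symptom could be quantified; it is an illustration under our own assumptions, not the paper's analysis.

```python
import re

def ifdef_nesting_profile(source: str):
    """Hypothetical erosion metric: the #if/#ifdef nesting depth observed
    at each opening preprocessor conditional in a C-style source file."""
    depth, profile = 0, []
    for line in source.splitlines():
        stripped = line.strip()
        if re.match(r"#\s*(ifdef|ifndef|if)\b", stripped):
            depth += 1
            profile.append(depth)
        elif re.match(r"#\s*endif\b", stripped):
            depth = max(0, depth - 1)
    return profile

example = """
#ifdef FEATURE_A
#ifdef FEATURE_B
int f(void) { return 1; }
#endif
#endif
"""
# Maximum depth 2: doubly nested variability, a candidate erosion symptom.
print(max(ifdef_nesting_profile(example)))
```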

As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework to recover complex feature correlations from existing product configurations. These correlations are further pruned automatically and validated by domain experts. In the implementation, we use association mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded system domain, where we identify a large number of complex feature correlations.
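The abstract names association mining but not its exact algorithm. As a minimal sketch of the classic support/confidence scheme it builds on, restricted to pairwise rules, the following extracts candidate correlations from configurations; the feature names, thresholds, and pairwise restriction are illustrative assumptions only.

```python
from itertools import combinations
from collections import Counter

def mine_pair_rules(configs, min_support=0.3, min_confidence=0.9):
    """Extract 'f1 => f2' rules from product configurations (each a set of
    selected features), keeping only frequent, high-confidence pairs."""
    n = len(configs)
    single = Counter(f for c in configs for f in c)
    pair = Counter(frozenset(p) for c in configs
                   for p in combinations(sorted(c), 2))
    rules = []
    for p, cnt in pair.items():
        if cnt / n < min_support:          # rule must be frequent enough
            continue
        a, b = tuple(p)
        for lhs, rhs in ((a, b), (b, a)):  # test both directions
            conf = cnt / single[lhs]
            if conf >= min_confidence:
                rules.append((lhs, rhs, cnt / n, conf))
    return rules

# Hypothetical configurations of a toy embedded product line
configs = [{"heater", "thermostat"}, {"heater", "thermostat", "display"},
           {"fan"}, {"heater", "thermostat"}]
for lhs, rhs, sup, conf in mine_pair_rules(configs):
    print(f"{lhs} => {rhs}  support={sup:.2f} confidence={conf:.2f}")
```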

A translation contract is a binary predicate corrTransl(S,T) for source programs S and target programs T. It precisely specifies when T is considered to be a correct translation of S. A certifying compiler generates, in addition to the target T, a proof for corrTransl(S,T). Certifying compilers are important for the development of safety critical systems to establish the behavioral equivalence of high-level programs with their compiled assembler code. In this paper, we report on a certifying compiler, its proof techniques, and the underlying formal framework developed within the proof assistant Isabelle/HOL. The compiler uses a tiny C-like language as input, has an optimization phase, and generates MIPS code. The underlying translation contract is based on a trace semantics. We investigate design alternatives and discuss our experiences.

This paper deals with the handling of deformable linear objects (DLOs), such as hoses, wires, or leaf springs. It investigates usable features for the vision-based detection of a changing contact situation between a DLO and a rigid polyhedral obstacle and a classification of such contact state transitions. The result is a complete classification of contact state transitions and of the most significant features for each class. This knowledge enables reliable detection of changes in the DLO contact situation, facilitating implementation of sensor-based manipulation skills for all possible contact changes.

Today, the domain of surgical robots lies in milling operations on bony structures. Since robots offer extreme precision and do not tire, their use is particularly attractive for lengthy, high-precision milling tasks at the lateral skull base. In recent work, process parameters were determined for creating an implant bed, e.g. for a cochlear implant or for a robot-assisted mastoidectomy. The parameters force, torque, vibration, and temperature were measured at different feed rates, rotational speeds, and milling paths, and on different bone material (mastoid, calvaria). From these measurements, optimization parameters for such milling operations were derived. Strikingly, sudden force peaks far above the limit value occurred even while the mean values remained within the normal range. For this reason, a method was developed that computes a suitable milling path from a geometric description of the implant and implements force-controlled process monitoring of the milling operation. The investigations were carried out with a 6-axis articulated robot, primarily on animal specimens and, for optimization, on temporal bone specimens. Intraoperative online feedback from the force sensors enabled local navigation. When forces rose above the limit value, the feed rate was automatically regulated, and reaching the dura could be detected from the measured values. The implant bed could be milled out exactly by the developed computer program. The investigations showed that a satisfactory implant bed can be created in the calvaria by a force-controlled robotic milling operation, in the sense of local navigation.
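As a minimal sketch of the force-regulated feed described above: full nominal feed below the force limit, proportional throttling above it. The limit value, nominal feed, floor speed, and the proportional law itself are hypothetical stand-ins, not the measured process parameters of the study.

```python
def regulate_feed(force_n: float, f_max: float = 4.0,
                  v_nominal: float = 2.0, v_min: float = 0.2) -> float:
    """Map a measured milling force [N] to a feed rate [mm/s]:
    nominal speed below the limit, proportionally throttled above it.
    All numeric values here are hypothetical, not from the study."""
    if force_n <= f_max:
        return v_nominal
    return max(v_min, v_nominal * f_max / force_n)

for f in (2.0, 4.0, 8.0, 40.0):
    print(f, "N ->", round(regulate_feed(f), 2), "mm/s")
```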

Handling Deformable Linear Objects: Programming with Different Manipulation Skills
(2002)

This work describes several motion primitives for solving some frequently occurring problems in the handling of deformable linear objects. Using the example assembly of a spring, the usefulness of the individual manipulation skills, as well as of their combination, is demonstrated.

Today, the domain of surgical robots lies in milling operations on bony structures. Since robots offer extreme precision and do not tire, their use is particularly attractive for lengthy, high-precision milling tasks at the lateral skull base. For this reason, a method was developed that computes a suitable milling path from a geometric description of the implant and implements force-controlled process monitoring of the milling operation. The investigations were carried out with a 6-axis articulated robot, primarily on animal specimens and, for optimization, on temporal bone specimens.

Manipulating Deformable Linear Objects: Manipulation Skill for Active Damping of Oscillations
(2002)

While handling deformable linear objects (DLOs), such as hoses, wires, or leaf springs, with an industrial robot at high speed, unintended and undesired oscillations that delay further operations may occur. This paper analyzes oscillations based on a simple model with one degree of freedom (DOF) and presents a method for active open-loop damping. Different ways to interpret an oscillating DLO as a system with 1 DOF lead to translational and rotational adjustment motions. Both were implemented as a manipulation skill with a separate program that can be executed immediately after any robot motion. We showed how these manipulation skills can generate the needed adjustment motions automatically based on the readings of a wrist-mounted force/torque sensor. Experiments demonstrated the effectiveness under various conditions.
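One ingredient of such open-loop damping is estimating the dominant oscillation period from the wrist force/torque samples so that a counter-motion can be timed against it. The FFT-based estimator, sampling values, and the half-period timing hint below are our own assumptions in the spirit of the 1-DOF model, not the paper's implementation.

```python
import numpy as np

def dominant_period(signal: np.ndarray, dt: float) -> float:
    """Estimate the dominant oscillation period of a force signal
    via the peak of its FFT magnitude spectrum (assumed estimator)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), dt)
    f0 = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return 1.0 / f0

# Synthetic 2 Hz oscillation sampled at 500 Hz stands in for F/T readings
dt = 0.002
t = np.arange(0, 2.0, dt)
force = np.sin(2 * np.pi * 2.0 * t)
T = dominant_period(force, dt)
print(f"period ~ {T:.3f} s; a counter-motion could be phased at {T/2:.3f} s")
```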

Point-to-Point Trajectory Planning of Flexible Redundant Robot Manipulators Using Genetic Algorithms
(2001)

The paper focuses on the problem of point-to-point trajectory planning for flexible redundant robot manipulators (FRMs) in joint space. Compared with irredundant flexible manipulators, an FRM possesses additional possibilities during point-to-point trajectory planning due to its kinematic redundancy. A trajectory planning method to minimize vibration and/or execution time of a point-to-point motion is presented for FRMs based on Genetic Algorithms (GAs). Kinematic redundancy is integrated into the presented method as planning variables. Quadrinomial and quintic polynomials are used to describe the segments that connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. Case studies show that the method is applicable.
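As a sketch of the polynomial building block named here, the quintic segment for one joint with rest-to-rest boundary conditions can be computed as below. In the paper, the intermediate points and the redundant joint trajectory would then be encoded as GA variables; that optimization layer is omitted, and the boundary conditions chosen are standard assumptions.

```python
import numpy as np

def quintic(q0: float, qf: float, T: float) -> np.ndarray:
    """Coefficients of a quintic joint trajectory q(t) = sum c_i t^i with
    zero velocity and acceleration at both ends (standard rest-to-rest)."""
    A = np.array([[1, 0, 0,    0,      0,       0],
                  [0, 1, 0,    0,      0,       0],
                  [0, 0, 2,    0,      0,       0],
                  [1, T, T**2, T**3,   T**4,    T**5],
                  [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
                  [0, 0, 2,    6*T,    12*T**2, 20*T**3]])
    b = np.array([q0, 0, 0, qf, 0, 0])
    return np.linalg.solve(A, b)

c = quintic(0.0, 1.0, 2.0)          # one joint, 0 rad to 1 rad in 2 s
t = np.linspace(0, 2.0, 5)
q = sum(ci * t**i for i, ci in enumerate(c))
print(np.round(q, 3))                # sampled rest-to-rest joint motion
```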

This article presents contributions in the field of path planning for industrial robots with 6 degrees of freedom. It presents the results of our research over the last four years at the Institute for Process Control and Robotics at the University of Karlsruhe. The path planning approach we present works in an implicit and discretized C-space. Collisions are detected in the Cartesian workspace by a hierarchical distance computation. The method is based on the A* search algorithm and needs no essential off-line computation. A new optimal discretization method leads to smaller search spaces, thus speeding up the planning. For a further acceleration, the search was parallelized. With a static load distribution, good speedups can be achieved. By extending the algorithm to a bidirectional search, the planner is able to automatically select the easier search direction. The new dynamic switching of start and goal finally leads to multi-goal path planning, which is able to compute a collision-free path between a set of goal poses (e.g., spot welding points) while minimizing the total path length.
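The planner builds on textbook A* search. A minimal sketch over a 2-D grid standing in for the discretized C-space is shown below; the grid size, obstacle set, and Manhattan heuristic are illustrative assumptions, and the hierarchical distance computation, parallelization, and multi-goal extensions are omitted.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """Textbook A*: expand by f = g + h, return a start-to-goal path."""
    tie = count()  # tie-breaker so heap entries never compare nodes
    open_heap = [(h(start), next(tie), 0, start, None)]
    parents, best_g = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in parents:            # already expanded with a better g
            continue
        parents[node] = parent
        if node == goal:               # reconstruct path via parent links
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

# 2-D stand-in for a discretized C-space; blocked cells model obstacles
blocked = {(1, 1), (1, 2)}
def nbrs(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] < 4 and 0 <= q[1] < 4 and q not in blocked:
            yield q, 1
print(a_star((0, 0), (3, 3), nbrs, lambda p: abs(p[0] - 3) + abs(p[1] - 3)))
```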

The vibration induced in a deformable object upon automatic handling by robot manipulators can often be bothersome. This paper presents a force/torque sensor-based method for handling deformable linear objects (DLOs) in a manner suitable to eliminate acute vibration. An adjustment-motion that can be attached to the end of an arbitrary end-effector's trajectory is employed to eliminate vibration of deformable objects. Unlike model-based methods, the presented sensor-based method does not employ any information from previous motions. The adjustment-motion is generated automatically by analyzing data from a force/torque sensor mounted on the robot wrist. A template matching technique is used to find the matching point between the vibrational signal of the DLO and a template. Experiments are conducted to test the new method under various conditions. Results demonstrate the effectiveness of the sensor-based adjustment-motion.
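A hedged sketch of the template-matching step: locating where a stored vibration template best aligns with the measured force signal via normalized cross-correlation. The synthetic signal and the specific window scoring are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_template(signal: np.ndarray, template: np.ndarray) -> int:
    """Index where the template best matches the signal, scored by
    normalized cross-correlation over a sliding window (assumed scheme)."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    best_i, best_score = 0, -np.inf
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        s = w.std()
        if s == 0:                      # skip flat (information-free) windows
            continue
        score = np.dot((w - w.mean()) / s, t) / n
        if score > best_score:
            best_i, best_score = i, score
    return best_i

# Synthetic force signal: the template buried between quiet segments
t_axis = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * 5 * t_axis[:40])
signal = np.concatenate([np.zeros(80), template, np.zeros(80)])
print(match_template(signal, template))  # ~80, the embedding position
```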

The task of handling non-rigid one-dimensional objects with a robot manipulation system is investigated. In particular, approaches to calculate motions with specific behavior in point contacts between the object and the environment are considered. For single point contacts, motions based on generalized rotations solving the direct and inverse manipulation problem are investigated. The latter problem is additionally tackled by simple rotation and translation motions. For double and multiple point contacts, motions based on splines are suggested. In experimental results with steel springs, the predicted and measured effects of each approach are compared.

Manipulating Deformable Linear Objects: Attachable Adjustment-Motions for Vibration Reduction
(2001)

This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. Different types of adjustment-motions that eliminate vibration of deformable objects and can be attached to the end of an arbitrary end-effector trajectory are presented. For describing the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. Experiments are conducted to verify the presented manipulation method.

Manipulating Deformable Linear Objects: Model-Based Adjustment-Motion for Vibration Reduction
(2001)

This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. An adjustment-motion that eliminates vibration of DLOs and can be attached to the end of any arbitrary end-effector trajectory is presented, based on the concept of open-loop control. The presented adjustment-motion is a kind of agile end-effector motion with limited scope. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. In contrast to previous approaches, the presented method can be treated as one of the manipulation skills and can be applied to different cases without major changes to the method.

The paper focuses on the problem of trajectory planning of flexible redundant robot manipulators (FRMs) in joint space. Compared to irredundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematic redundancy. A trajectory planning method to minimize the vibration of FRMs is presented based on Genetic Algorithms (GAs). Kinematic redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.

Besides the work in the field of manipulating rigid objects, there are currently several research and development activities going on in the field of manipulating non-rigid or deformable objects. Several papers from various projects and countries have been published at international conferences in this field. But there has been no comprehensive work which both provides a representative overview of the state of the art and identifies the important aspects in this field. Thus, we collected these activities and invited the corresponding working groups to present an overview of their research. Altogether, nineteen authors from Japan, Germany, Italy, Greece, the United Kingdom, and Australia contributed to this book. Their research work covers all the different aspects that occur when manipulating deformable objects. The contributions can be characterized and grouped by the following four aspects: object modeling and simulation, planning and control strategies, collaborative systems, and applications and industrial experiences. In the following, we give a short motivation and overview of the single chapters of the book.

The simulation of deformable objects is one way to approach the problem of manipulating these objects by robots. Based on a physical model of the object and the occurring constraints, the resulting object shape is calculated. In Chapter 2, Hirai presents an energy-based approach, where the internal energy under the geometric constraints is minimized. Frugoli et al. introduce a force-based approach, where the forces between discrete particles are minimized while meeting given constraints. Finally, Remde and Henrich extend the energy-based approach to plastic deformation and give a solution of the inverse simulation problem.

Even if the object behavior is predicted by simulation, there is still the question of how to control the robot during a single manipulation operation. An additional question is how to obtain an overall plan for the concatenated manipulation operations. In Chapter 3, Wada investigates the control problems when positioning multiple points of a planar deformable object. McCarragher proposes a control scheme exploiting the flexibility, rather than minimizing it. Abegg et al. use a simple contact state model to describe typical assembly tasks and to derive robust manipulation primitives. Finally, Ono presents an automatic sewing system and suggests a strategy for unfolding fabric.

In several manipulation tasks it is reasonable to apply more than one robot, especially in cases where the deformable object has to take a specific shape. Since robots working on the same object influence each other, different control algorithms have to be introduced. In Chapter 4, Yoshida and Kosuge investigate this problem for the task of bending a sheet of metal and exploit the relationship between the static object deformation and the bending moments. Tanner and Kyriakopoulos regard the deformable object as an underactuated mechanical system and make use of the existence of non-holonomic constraints. Both approaches model the deformable object with finite elements.

All of the above aspects have their counterpart in different applications and industrial experiences. In Chapter 5, Rizzi et al. present test cases and applications of their approach to simulating the manipulation of fabric, wires, cables, and soft bags. Buckingham and Graham give an overview of two European projects on processing white fish, including locating, gripping, and deheading the fish. Maruyama outlines the three development phases of a robot system for performing outage-free maintenance of live-line power supply in Japan. Finally, Kämper presents the development of a flexible automatic cabling unit for the wiring of long-tube lighting with plug components.

Since developing today's increasingly complex applications requires many software developers working together, the trend is more and more toward spatially distributed work. This development is favored not least by the possibilities for communication and data exchange offered by the Internet. On this basis, tools are to be designed and developed that enable efficient distributed software development. Using the Internet for this purpose solves the connectivity problem over very large distances, and the use of web servers and browsers meets the requirements of operating system independence and of distribution in the sense of the client/server principle. The umbrella term "software configuration management" (SCM) covers the set of all tasks that arise in product management in software development. In this report, we first formulate the requirements for a web-based SCM system, mention some technical options, and then examine various existing SCM products offering a web interface against these requirements and compare them with each other.

At a time when the Internet has penetrated nearly all areas of human life and enjoys great popularity, not least because of its seemingly unlimited possibilities for obtaining and exchanging information and for worldwide communication, it is not only in the interest of computing centers and service providers to have a means of charging for the resources consumed. Opening up new regions, as well as upgrading existing networks to provide higher bandwidths and better transfer speeds, entails immense costs. It is not the task of this work to decide how these costs should be allocated or distributed among the users, nor do we want to make proposals in that direction, since such questions are the domain of other disciplines, for example business administration, economics, and politics. Our task, however, is to examine the computer-science-specific problems of collecting accounting information within the system and to hand the values gathered in this way over to the specialists of other fields for further processing. This work therefore first deals with the basic properties and models of the network traffic under consideration, and then identifies and elaborates the prerequisites and options for realizing user-oriented metering and accounting of the network resources used.

A new and systematic basic approach to force- and vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation between the deformable object and rigid obstacles, and are derived from the object image and its features. We give an enumeration of possible contact states and discuss the main characteristics of each state. We investigate the performance of robust transitions between the contact states and derive criteria and conditions for each of the states and for two sensor systems, i.e. a vision sensor and a force/torque sensor. This results in a new and task-independent approach to the handling of deformable objects and in a sensor-based implementation of manipulation primitives for industrial robots. Thus, the use of sensor processing is an appropriate solution for our problem. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task. Experimental results show the feasibility of our approach: a robot performs several contact state transitions which can be combined for solving a more complex task.
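As an illustration only (the state taxonomy and decision criteria in the paper are richer), a contact-state classifier fusing the force magnitude from the F/T sensor with the number of contact points segmented from the camera image might look like this; the state names, threshold, and fusion rule are all hypothetical.

```python
from enum import Enum

class Contact(Enum):
    NONE = 0      # DLO free in space (hypothetical state set)
    POINT = 1     # single contact point with an obstacle
    DOUBLE = 2    # two simultaneous contact points

def classify(force_norm: float, contact_points: int, f_eps: float = 0.2):
    """Hypothetical classifier: force magnitude gates contact vs. no
    contact; the vision-derived point count refines the contact state."""
    if force_norm < f_eps:
        return Contact.NONE
    return Contact.POINT if contact_points <= 1 else Contact.DOUBLE

def transitions(readings):
    """Emit (old, new) contact-state transitions along a trajectory."""
    state = Contact.NONE
    for force, points in readings:
        new = classify(force, points)
        if new != state:
            yield state, new
            state = new

print(list(transitions([(0.0, 0), (0.5, 1), (0.9, 2), (0.1, 0)])))
```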

In this chapter, the quantitative numerical simulation of the behavior of deformable linear objects, such as hoses, wires, and leaf springs, is studied. We first give a short review of the physical approach and the basic solution principle. Then, we give a more detailed description of some key aspects: we introduce a novel approach to dynamics based on an algorithm very similar to the one used for (quasi-)static computation. Next, we look at plastic workpiece deformation, involving a modified computation algorithm and a special representation of the workpiece shape. We then give alternative solutions for two key aspects of the algorithm and investigate the problem of performing the workpiece simulation efficiently, i.e., with the desired precision in a short time. In the end, we introduce the inverse modeling problem, which must be solved when the gripper trajectory for a given task is to be generated.

We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represent the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach that utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs was able to handle many different variants.
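The core decision rule is compact enough to sketch directly: act according to the nearest stored prototype. The pursuit-game encoding and distance measure below are illustrative assumptions, and the genetic algorithm that evolves the prototype set is omitted.

```python
def nearest_action(situation, prototypes, dist):
    """Behavior as a set of prototypical (situation, action) pairs:
    execute the action of the nearest stored prototype."""
    best = min(prototypes, key=lambda p: dist(situation, p[0]))
    return best[1]

# Toy pursuit-game situation: (dx, dy) offset from pursuer to prey
# (encoding and prototypes are invented for illustration)
prototypes = [((2, 0), "move_east"), ((-2, 0), "move_west"),
              ((0, 3), "move_north"), ((0, -3), "move_south")]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(nearest_action((1, 1), prototypes, manhattan))  # -> move_east
```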

The common wisdom that goal orderings can be used to improve planning performance is nearly as old as planning itself. During the last decades of research, several approaches emerged that computed goal orderings for different planning paradigms, mostly in the area of state-space planning. For partial-order, plan-space planners, goal orderings have not been investigated in much detail. Mechanisms developed for state-space planning are not directly applicable because partial-order planners do not have a current (world) state. Further, it is not completely clear how plan-space planners should make use of goal orderings. This paper describes an approach to extract goal orderings to be used by the plan-space planner CAPlan. The extraction of goal orderings is based on the analysis of an extended version of operator graphs, which previously have been found useful for the analysis of interactions and recursion of plan-space planners.

Applying existing planning approaches to real-world application problems usually leads quickly to the insight that a given problem is solvable in principle, but that the exponentially growing search space only permits the treatment of relatively small tasks. Human planning experts, by contrast, are able to reduce the search space for complex problems decisively through abstraction and through the use of known precedents as heuristics, and thus to reach an acceptable solution even for difficult tasks. In this work we present, using process planning as an example, a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system and thus reduces the complexity of the overall task to be solved.

We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.

We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain dependent reasoners, a feature-based CAD system, and domain independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been realized, which allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.

In recent years, case-based reasoning methods have frequently been used in areas where traditionally symbolic methods are applied, for example in classification. This inevitably raises the question of the differences between, and the power of, these learning methods. Jantke [Jantke, 1992] has already investigated commonalities between inductive inference and case-based classification. In this work we want to clarify some relationships between the case base, the similarity measure, and the concept to be learned. To this end, a simple symbolic learning algorithm (the version space of [Mitchell, 1982]) is transformed into an equivalent case-based variant. The presented results confirm the equivalence of symbolic and case-based approaches and show the strong dependency between the measure used in the system and the concept to be learned.

The majority of CBR systems in diagnosis use a numeric similarity measure for case retrieval. In this work we present an approach in which introducing a notion of similarity oriented toward the components of the technical system to be diagnosed not only substantially improves retrieval but also opens up the possibility of genuine case and solution transformation. This in turn leads to a considerable reduction of the case base. Using this notion of similarity requires the integration of additional knowledge, which is obtained from a qualitative model of the domain (in the sense of model-based diagnosis).

Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated into the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; Patdex is where essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.

In concurrent systems, the concept of atomicity of operations makes it easier to partition concurrent accesses into larger, more manageable sections. When we look at specifications in the formal description technique Estelle, however, it turns out that under certain circumstances it is difficult for implementations to adhere exactly to the atomicity of the so-called transitions, even though this atomicity is a conceptual foundation of the semantics of Estelle. We show how correct as well as efficient concurrent implementations can nevertheless be achieved. Finally, we point out that the actions triggering the problem can often easily be avoided by the specifier from the outset, and this holds beyond the context of Estelle as well.

Determining Similarity in Case-Based Diagnosis with Simulation-Capable Machine Models
(1999)

The component presented here draws on three knowledge sources:

- a case base of already solved diagnosis problems,
- knowledge about the structure of the machine, and
- knowledge about the function of the individual components (concrete and abstract).

It builds on the systems developed within the Moltke project: Patdex [Wes91] (case-based diagnosis) and iMake [Sch92] or Make [Reh91] (model-based generation of Moltke knowledge bases).

The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analyzed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes on-going work.

Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
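The (CB, sim) representation amounts to a nearest-neighbor classifier. A minimal sketch with an attribute-counting similarity, our own choice rather than the measure constructed in the article, looks like this:

```python
def cbr_classify(query, case_base, sim):
    """Concept represented implicitly by (CB, sim): a query gets the
    class of its most similar stored case."""
    case, label = max(case_base, key=lambda c: sim(query, c[0]))
    return label

# Boolean-feature cases; similarity counts matching attribute values
# (an assumed measure, for illustration only)
sim = lambda x, y: sum(a == b for a, b in zip(x, y))
CB = [((1, 1, 0), "+"), ((0, 0, 1), "-")]
print(cbr_classify((1, 0, 0), CB, sim))  # "+": 2 of 3 attributes match
```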

Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)

This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.

Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)

The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from manually checking any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.

Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.

MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied knowledge acquisition (KA) methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge, which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally, we give a short overview of the actual state of realization and our future plans.

Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.

In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and so prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic speed-ups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
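A toy rendering of the team work cycle, with a competition phase, refereeing, and cooperative merging, is sketched below. The experts, referee test, and supervisor policy are invented stand-ins for the equational-prover experts of the paper; only the compete-then-cooperate control structure is taken from the text.

```python
import random

def team_work(problem, experts, referee, supervisor, rounds=3, budget=100):
    """Experts compete for a fixed budget, the referee keeps the best
    results, the supervisor merges them into the next starting point."""
    state = problem
    for _ in range(rounds):
        results = [expert(state, budget) for expert in experts]  # compete
        kept = [r for r in results if referee(r)]                # judge
        state = supervisor(state, kept)                          # cooperate
    return state

# Toy instance: "experts" are random-step heuristics minimizing (x - 3)^2
f = lambda x: (x - 3) ** 2
def make_expert(step):
    def expert(x, budget):
        for _ in range(budget):
            cand = x + random.uniform(-step, step)
            if f(cand) < f(x):      # keep only improving moves
                x = cand
        return x
    return expert

experts = [make_expert(s) for s in (0.1, 1.0, 5.0)]
best = team_work(0.0, experts,
                 referee=lambda r: f(r) < f(0.0),
                 supervisor=lambda s, kept: min(kept + [s], key=f))
print(round(best, 2))  # converges near 3
```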

Constructing an analogy between a known and already proven theorem (the base case) and another yet to be proven theorem (the target case) often amounts to finding the appropriate representation at which the base and the target are similar. This is a well-known fact in mathematics, and it was corroborated by our empirical study of a mathematical textbook, which showed that a reformulation of the representation of a theorem and its proof is indeed more often than not a necessary prerequisite for an analogical inference. Thus machine supported reformulation becomes an important component of automated analogy-driven theorem proving too. The reformulation component proposed in this paper is embedded into a proof plan methodology based on methods and meta-methods, where the latter are used to change and appropriately adapt the methods. A theorem and its proof are both represented as a method and then reformulated by the set of meta-methods presented in this paper. Our approach supports analogy-driven theorem proving at various levels of abstraction and in principle makes it independent of the given and often accidental representation of the given theorems. Different methods can represent fully instantiated proofs, subproofs, or general proof methods, and hence our approach also supports these three kinds of analogy respectively. By attaching appropriate justifications to meta-methods, the analogical inference can often be justified in the sense of Russell. This paper presents a model of analogy-driven proof plan construction and focuses on empirically extracted meta-methods. It classifies and formally describes these meta-methods and shows how to use them for an appropriate reformulation in automated analogy-driven theorem proving.

Following Buchberger's approach to computing a Gröbner basis of a polynomial ideal in polynomial rings, a completion procedure for finitely generated right ideals in Z[H] is given, where H is an ordered monoid presented by a finite, convergent semi-Thue system (Σ, T). Taking a finite set F ⊆ Z[H], we get a (possibly infinite) basis of the right ideal generated by F, such that using this basis we have unique normal forms for all p ∈ Z[H] (in particular, the normal form is 0 in case p is an element of the right ideal generated by F). As the ordering and multiplication on H need not be compatible, reduction has to be defined carefully in order to make it Noetherian. Further, we no longer have p·x → p' for p ∈ Z[H], x ∈ H. Similar to Buchberger's s-polynomials, confluence criteria are developed and a completion procedure is given. In case T = ∅, or (Σ, T) is a convergent 2-monadic presentation of a group providing inverses of length 1 for the generators, or (Σ, T) is a convergent presentation of a commutative monoid, termination can be shown. So in these cases finitely generated right ideals admit finite Gröbner bases. The connection to the subgroup problem is discussed.

This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata that is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems, and the proofs of subproblems and their analogues, with the purpose of providing an empirical test set of cases for automated analogy-driven theorem proving. Theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols in the studied textbook. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a Natural Deduction calculus such that a formal analysis becomes possible, and mention reformulations that are necessary in order to reveal the analogy.

We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.

While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.

To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this thesis a frame-based knowledge representation formalism including higher-order sorted logic is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. In order to operationalize this knowledge, for instance, in an automated theorem proving system, a class of sound morphisms from higher-order into first-order logic is given; in addition, a sound and complete translation is presented. The translations are bijective and hence compatible with a later proof presentation. In order to prove certain theorems the comprehension axioms are necessary (but difficult to handle in an automated system); such theorems are called truly higher-order. Many apparently higher-order theorems (i.e. theorems that are stated in higher-order syntax), however, are essentially first-order in the sense that they can be proved without the comprehension axioms: for proving these theorems the translation technique as presented in this thesis is well-suited.

We transform a user-friendly formulation of a problem to a machine-friendly one, exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular, it is shown how tactical and resolution theorem proving can be combined.

There are well known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to conditional term rewriting systems as defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore, we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.

Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real life situations, more often than not, a typical example is used to describe the situation instead of a lengthy abstract characterization. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which are normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules, analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.

This paper concerns a knowledge structure called method, within a computational model for human oriented deduction. With human oriented theorem proving cast as an interleaving process of planning and verification, the body of all methods reflects the reasoning repertoire of a reasoning system. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This is achieved by postulating some standard types of knowledge we have identified, such as inference rules, assertions, and proof schemata, together with corresponding knowledge interpreters. Our approach in effect changes the way deductive knowledge is encoded: a new compound declarative knowledge structure, the proof schema, takes the place of complicated procedures for modeling specific proof strategies. This change of paradigm not only leads to representations easier to understand, it also enables us to model the even more important activity of formulating meta-methods, that is, operators that adapt existing methods to suit novel situations. In this paper, we first briefly introduce the general framework for describing methods. Then we turn to several types of knowledge with their interpreters. Finally, we briefly illustrate some meta-methods.

We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the speed-up achieved by various enhancements of the basic method. The simplicity of the superposition inference rule allows the construction of an abstract machine for completion, which is presented together with computation times for a concrete implementation.

This report presents the main ideas underlying the Ω-Mkrp system, an environment for the development of mathematical proofs. The motivation for the development of this system comes from our extensive experience with traditional first-order theorem provers and aims to overcome some of their shortcomings. After comparing the benefits and drawbacks of existing systems, we propose a system architecture that combines the positive features of different types of theorem-proving systems, most notably the advantages of human-oriented systems based on methods (our version of tactics) and the deductive strength of traditional automated theorem provers. In Ω-Mkrp a user first states a problem to be solved in a typed and sorted higher-order language (called POST) and then applies natural deduction inference rules in order to prove it. He can also insert a mathematical fact from an integrated database into the current partial proof, he can apply a domain-specific problem-solving method, or he can call an integrated automated theorem prover to solve a subproblem. The user can also pass control to a planning component that supports and partially automates his long-range planning of a proof. Toward the important goal of user-friendliness, machine-generated proofs are transformed in several steps into much shorter, better-structured proofs that are finally translated into natural language. This work was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D2, D3).

An important property, and also a crucial point, of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde & Lescanne and strongly based on a work of Bachmair & Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.

This report presents a methodology to guide equational reasoning in a goal directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma, we use the techniques of difference unification and rippling to enable its use.

This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality, reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase, are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.

We consider the problem of verifying confluence and termination of conditional term rewriting systems (TRSs). For unconditional TRSs the critical pair lemma holds, which enables a finite test for confluence of (finite) terminating systems. And for ensuring termination of unconditional TRSs, a couple of methods for constructing appropriate well-founded term orderings are known. If, however, termination is not guaranteed, then proving confluence is much more difficult. Recently we have obtained some interesting results for unconditional TRSs which provide sufficient criteria for termination plus confluence in terms of restricted termination and confluence properties. In particular, we have shown that any innermost terminating and locally confluent overlay system is complete, i.e. terminating and confluent. Here we generalize our approach to the conditional case and show how to solve the additional complications due to the presence of conditions in the rules. Our main result can be stated as follows: any conditional TRS which is an innermost terminating semantical overlay system such that all (conditional) critical pairs are joinable is complete.

We will answer a question posed in [DJK91], and will show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified such that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, especially the one to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a resolution calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
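For reference, the truth-functional part of strong Kleene logic that the paper builds on can be stated directly. This three-valued connective behavior is standard; the restricted quantification and the sorted resolution calculus are the paper's contribution and are not sketched here.

```python
# Strong Kleene three-valued connectives; None represents "undefined".
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False              # False dominates even an undefined operand
    if a is None or b is None:
        return None               # otherwise undefinedness propagates
    return True

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))  # De Morgan dual of k_and

for a in (True, None, False):
    print([k_and(a, b) for b in (True, None, False)])
# rows a = T, U, F: the strong Kleene conjunction truth table
```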

The team work method is a concept for distributing automated theorem provers and thus activating several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as the basic prover. In this paper we present three classes of experts working in a goal oriented fashion. In general, goal oriented experts perform their job "unfairly" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements is that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion based proof methods are frequently regarded as having the disadvantage of not being goal oriented. We believe that our approach overcomes this disadvantage to a large extent.

In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.

Over the past thirty years there have been significant achievements in the field of automated theorem proving with respect to the reasoning power of the inference engines. Although some effort has also been spent to facilitate more user friendliness of the deduction systems, most of them failed to benefit from more recent developments in the related fields of artificial intelligence (AI), such as natural language generation and user modeling. In particular, no model is available which accounts both for human deductive activities and for human proof presentation. In this thesis, a reconstructive architecture is suggested which substantially abstracts, reorganizes and finally translates machine-found proofs into natural language. Both the procedures and the intermediate representations of our architecture find their basis in computational models for informal mathematical reasoning and for proof presentation. User modeling is not incorporated into the current theory, although we plan to do so later.

This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that subtasks of a proof problem often cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently on the problem with different heuristics. After a certain amount of time, referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results into a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user. "Plans are always subject to change." (Shin'a'in proverb)

The background of this paper is the area of case-based reasoning, a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the types of problems where this kind of reasoning occurs very often is the diagnosis of diseases or of faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. One then has to select an appropriate case, which means finding one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
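
As a concrete (invented) illustration of selecting a "similar" case, a standard weighted nearest-neighbour retrieval might look like this; the attribute names, weights and the normalisation are ours, not the system's:

```python
# A toy case base for a diagnosis-style classification task.
case_base = [
    ({"temperature": 39.1, "cough": 1.0}, "influenza"),
    ({"temperature": 36.8, "cough": 1.0}, "common cold"),
    ({"temperature": 36.9, "cough": 0.0}, "healthy"),
]

def similarity(query, case, weights, spread=10.0):
    """Weighted sum of per-attribute similarities in [0, 1]; the linear
    local measure and the spread constant are arbitrary choices."""
    s = sum(w * max(0.0, 1.0 - abs(query[a] - case[a]) / spread)
            for a, w in weights.items())
    return s / sum(weights.values())

def retrieve(query, weights):
    """Select the stored case most similar to the actual problem."""
    return max(case_base, key=lambda c: similarity(query, c[0], weights))

print(retrieve({"temperature": 38.9, "cough": 1.0},
               {"temperature": 2.0, "cough": 1.0}))   # -> influenza case
```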

The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search space. It is therefore promising to treat sorts in higher-order theorem proving as well. In this paper we present a generalization of Huet's Constrained Resolution to an order-sorted type theory ΣT with term declarations. This system builds certain taxonomic axioms into the unification and conducts reasoning with them in a controlled way. We make this notion precise by giving a relativization operator that totally and faithfully encodes ΣT into simple type theory.

In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice-ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are based on the one hand on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be exploited by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
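
To make the idea of such "measures" concrete, here is a hedged sketch (the term representation and weights are invented; the paper's actual measures differ in detail): it scores a critical pair by occurrences and nesting of function symbols relative to the goal and selects the minimum.

```python
def symbols(term, depth=0):
    """Yield (function symbol, nesting depth) pairs of a term; terms are
    nested tuples such as ("f", ("i", "x"), "y"), variables are strings."""
    if isinstance(term, tuple):
        head, *args = term
        yield head, depth
        for a in args:
            yield from symbols(a, depth + 1)

def measure(pair, goal):
    """Smaller is better: penalize symbols foreign to the goal and deep
    nesting (the concrete weights are invented for illustration)."""
    goal_syms = {f for f, _ in symbols(goal)}
    return sum((1 if f in goal_syms else 3) + d
               for side in pair for f, d in symbols(side))

def select_next(critical_pairs, goal):
    return min(critical_pairs, key=lambda p: measure(p, goal))

goal = ("eq", ("f", "x"), "x")
pairs = [(("f", ("f", "x")), "x"), (("g", ("h", "x")), ("f", "x"))]
print(select_next(pairs, goal))   # prefers the pair built from goal symbols
```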

Planverfahren
(1999)

We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; the orientation must therefore be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimating orientation.
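
As illustration, orientation estimation from such a rate gyro reduces to scaling the voltage and integrating; the reference voltage and sensitivity below are placeholder values, not the tested device's datasheet figures.

```python
def integrate_heading(samples, dt, v_ref=2.5, scale=22.2e-3):
    """samples: voltages measured every dt seconds; v_ref is the reference
    voltage (V) and scale the sensitivity in V per (deg/s), both assumed.
    Any bias error in v_ref is integrated too, so the heading estimate
    drifts over time, which is what such tests have to quantify."""
    heading = 0.0
    for v in samples:
        omega = (v - v_ref) / scale   # angular velocity in deg/s
        heading += omega * dt         # rectangular integration
    return heading % 360.0

# e.g. 2 s of a constant 10 deg/s turn sampled at 100 Hz:
print(integrate_heading([2.5 + 10 * 22.2e-3] * 200, 0.01))  # approx. 20.0
```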

A map for an autonomous mobile robot (AMR) in an indoor environment, built for the purpose of continuous position and orientation estimation, is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown in which the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan and this representative yields the actual position and orientation in a global coordinate system. The map itself is built from representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.

One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurement of wheel rotations or step counts and step width. Unfortunately, dead reckoning leads to an accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its problems with slippage. To achieve this, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose, derivatives of the range-finder scans are calculated, which can be used to find position and orientation by cross-correlation.
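
The rotational part of this matching can be sketched as follows (the helper names and the 1-degree resolution are our assumptions): a rotation of the robot appears as a circular shift of the scan, so cross-correlating the scans' derivatives recovers it.

```python
import numpy as np

def rotation_between(scan_a, scan_b):
    """scan_a, scan_b: range readings at 1-degree steps (length 360).
    Returns the rotation (in degrees) that best aligns scan_b to scan_a."""
    da = np.diff(scan_a, append=scan_a[:1])   # circular derivative
    db = np.diff(scan_b, append=scan_b[:1])
    # circular cross-correlation by explicit shifts (an FFT would also do)
    corr = [np.dot(da, np.roll(db, -k)) for k in range(360)]
    return int(np.argmax(corr))

base = np.random.rand(360) * 10
print(rotation_between(base, np.roll(base, 25)))   # -> 25
```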

Dynamic Lambda Calculus
(1999)

The goal of this paper is to lay a logical foundation for discourse theories by providing an algebraic foundation of compositional formalisms for discourse semantics as an analogue to the simply typed λ-calculus. Just as that can be specialized to type theory by simply providing a special type for truth values and postulating the quantifiers and connectives as constants with fixed semantics, the proposed dynamic λ-calculus DLC can be specialized to λ-DRT by essentially the same measures, yielding a much more principled and modular treatment of λ-DRT than before; DLC is also expected to eventually provide a conceptually simple basis for studying higher-order unification for compositional discourse theories. Over the past few years, there have been a series of attempts [Zee89, GS90, EK95, Mus96, KKP96, Kus96] to combine the Montagovian type-theoretic framework [Mon74] with dynamic approaches, such as DRT [Kam81]. The motivation for these developments is to obtain a general logical framework for discourse semantics that combines compositionality and dynamic binding. Let us look at an example of compositional semantics construction in λ-DRT, which is one of the above formalisms [KKP96, Kus96]. By the use of β-reduction we arrive at a first-order DRT representation of the sentence "A_i man sleeps" (i denoting an index for anaphoric binding).

This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in R^1, and similar versions are also true in R^2, in topology, and in metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in R^1 as a guide in automatically producing a proof-plan of the Heine-Borel theorem in R^2 by analogy-driven proof-plan construction.

This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well-known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well-known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.

This paper addresses analogy-driven automated theorem proving that employs a source proof-plan to guide the search for a proof-plan of the target problem. The approach presented uses reformulations that go beyond symbol mappings and that incorporate frequently used re-representations and abstractions. Several realistic math examples were successfully processed by our analogy-driven proof-plan construction. One challenge example, a Heine-Borel theorem, is discussed here. For this example the reformulations are shown step by step and the modifying actions are demonstrated.

Analogy in CLAM
(1999)

CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates on inductive proofs. This paper addresses the question of how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the source plan methods hold in the target as a permission to transfer the method to the target plan. Since CLAM has very efficient heuristic search strategies, the main purpose of the analogy is to suggest lemmas, to replay methods that are not commonly loaded, to suggest induction variables and induction terms, and to override control, rather than to construct a target proof plan that CLAM could build more efficiently itself.

Distributed systems are an alternative to shared-memory multiprocessors for the execution of parallel applications. PANDA is a runtime system which provides architectural support for efficient parallel and distributed programming. PANDA supplies means for fast user-level threads and for a transparent and coordinated sharing of objects across a homogeneous network. The paper motivates the major architectural choices that guided our design. The problem of sharing data in a distributed environment is discussed, and the performance of appropriate mechanisms provided by the PANDA prototype implementation is assessed.

One main purpose for the use of formal description techniques (FDTs) is formal reasoning and verification. This requires a formal calculus and a suitable formal semantics of the FDT. In this paper, we discuss the basic verification requirements for Estelle, and how they can be supported by existing calculi. This leads us to the redefinition of the standard Estelle semantics using Lamport's temporal logic of actions and Dijkstra's predicate transformers.

The increasing use of distributed computer systems leads to an increasing need for distributed applications. Their development in various domains like office automation or computer-integrated manufacturing is not sufficiently supported by current techniques. New software engineering concepts are needed in the three areas 'languages', 'tools', and 'environments'. We believe that object-oriented techniques and graphics support are key approaches to major achievements in all three areas. As a consequence, we developed a universal object-oriented graphical editor ODE as one of our basic tools (a tool-building tool). ODE is based on the object-oriented paradigm, with some important extensions like built-in object relations. It has an extensible functional language which allows for customization of the editor. ODE was developed as part of DOCASE, a software production environment for distributed applications. The basic ideas of DOCASE will be presented and the requirements for ODE will be pointed out. Then ODE will be described in detail, followed by a sample customization of ODE: the one for the DOCASE design language.

The constantly increasing use of distributed computer systems leads to a sharply rising demand for distributed applications. Their development in the most diverse application fields, such as factory and office automation, has so far been barely manageable for users. New software engineering concepts are therefore necessary, namely in the three areas 'languages', 'tools', and 'environments'. In our work, object-oriented methods and graphical support have proven particularly suitable for achieving clear progress in all three areas. Accordingly, a universal object-oriented graphical editor, ODE, was developed as one of our central basic tools ('tool building tool'). ODE is based on the object-oriented paradigm and an easy-to-handle functional language for extensions; in addition, ODE allows simple integration with other tools and with imperatively programmed functions. ODE was created as part of DOCASE, a software production environment for distributed applications. The main features of DOCASE are presented and the requirements for ODE are derived from them. ODE is then described in more detail, followed by an exemplary description of an extension of ODE, namely the one for the DOCASE design language.

A growing share of all software development project work is being done by geographically distributed teams. To satisfy shorter product design cycles, expert team members for a development project may need to be recruited globally. Yet to avoid extensive travelling or replacement costs, distributed project work is preferred. Current-generation software engineering tools and associated systems, processes, and methods were for the most part developed to be used within a single enterprise. Major innovations have lately been introduced to enable groupware applications on the Internet to support global collaboration. However, their deployment for distributed software projects requires further research. In particular, groupware methods must be seamlessly integrated with project and product management systems to make them attractive for industry. In this position paper we outline the major challenges concerning distributed (virtual) software projects. Based on our experiences with software process modeling and enactment environments, we then propose approaches to solve those challenges.

Coordinating distributed processes, especially engineering and software design processes, has been a research topic for some time now. Several approaches have been published that aim at coordinating large projects in general, and large software development processes in particular. However, most of these approaches focus on the technical part of the design process and omit management activities like planning and scheduling the project, or monitoring it during execution. In this paper, we focus on coordinating the management activities that accompany the technical software design process. We state the requirements for a Software Engineering Environment (SEE) accommodating management, and we describe a possible architecture for such an SEE.

This paper describes the architecture and concept of operation of a Framework for Adaptive Process Modeling and Execution (FAME). The research addresses the absence of robust methods for supporting the software process management life cycle. FAME employs a novel, model-based approach in providing automated support for different activities in the software development life cycle including project definition, process design, process analysis, process enactment, process execution status monitoring, and execution status-triggered process redesign. FAME applications extend beyond the software development domain to areas such as agile manufacturing, project management, logistics planning, and business process reengineering.

CORBA Lacks Venom
(1999)

Distributed objects bring to distributed computing such desirable properties as modularisation, abstraction and reuse, easing the burden of development and maintenance by diminishing the gap between implementation and real-world objects. Distributed objects, however, need a consistent framework in which inter-object communication may take place. The Common Object Request Broker Architecture (CORBA) is a distributed object standard. CORBA's primary protocol, the Internet Inter-ORB Protocol (IIOP), is limited to blocking synchronous remote procedure calls over TCP/IP, which is inappropriate for systems requiring timeliness guarantees.

Multi-User Dimensions (MUDs) [3], and their Object-Oriented versions (MOOs) [6], are geographically distributed, programmable client-server systems that support the cooperation of multiple users according to the virtual environment metaphor. In this metaphor, users are allowed to concurrently navigate in a set of "virtual" rooms. Rooms are interconnected through doors and may contain objects. Users are allowed to explore the contents of rooms, create and manipulate objects, and contact other users visiting the same room.

This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMSs into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.

Although work processes, like software processes, include a number of process aspects such as defined phases and deadlines, they are not plannable in detail. However, the advantages of today's process management, such as effective document routing and timeliness, can only be achieved with detailed models of work processes. This paper suggests a concept that uses detailed process models in conjunction with the possibility of defining the way a process model determines the work of individuals. Based on the WAM approach, which allows workers to choose methods for their tasks according to the situation, we describe features to carry out planned parts of a process with workers always being able to start exceptional mechanisms. These mechanisms are based on the modelling paradigm of linked abstraction workflows (LAWs), which describe workflows at different levels of abstraction and classify refinements of tasks by the way lower tasks can be used.

Concept mapping is a simple and intuitive visual form of knowledge representation. Concept maps can be categorized as informal or formal, where the latter is characterized by implementing a semantics model constraining their components. Software engineering is a domain that has successfully adopted formal concept maps to visualize and specify complex systems. Automated tools have been implemented to support these models although their semantic constraints are hardcoded within the systems and hidden from users. This paper presents the Constraint Graphs and jKSImapper systems. Constraint Graphs is a flexible and powerful graphical system interface for specifying concept mapping notations. In addition, jKSImapper is a multi-user concept mapping editor for the Internet and the World Wide Web. Together, these systems aim to support user-definable formal concept mapping notations and distributed collaboration on the Internet and the World Wide Web.

We argue in this paper that sophisticated microplanning techniques are required even for mathematical proofs, in contrast to the belief that mathematical texts are only schematic and mechanical. We demonstrate why paraphrasing and aggregation significantly enhance the flexibility and the coherence of the text produced. To this end, we adopted the Text Structure of Meteer as our basic representation. The type-checking mechanism of Text Structure allows us to achieve paraphrasing by building comparable combinations of linguistic resources. Specified in terms of concepts in a uniform ontological structure called the Upper Model, our semantic aggregation rules are more compact than similar rules reported in the literature.

This paper outlines an implemented system called PROVERB that explains machine-found natural deduction proofs in natural language. Unlike earlier works, we pursue a reconstructive approach. Based on the observation that natural deduction proofs are at too low a level of abstraction compared with proofs found in mathematical textbooks, we first define the concept of so-called assertion-level inference rules. Derivations justified by these rules can intuitively be understood as the application of a definition or a theorem. Then an algorithm is introduced that abstracts machine-found ND proofs using the assertion-level inference rules. Abstracted proofs are then verbalized into natural language by a presentation module. The most significant feature of the presentation module is that it combines standard hierarchical text planning with techniques that locally organize argumentative texts based on the derivation relation under the guidance of a focus mechanism. The behavior of the system is demonstrated with the help of a concrete example throughout the paper.

We describe a technique to make application programs fault tolerant. This technique is based on the concept of checkpointing from an active program to one or more passive backup copies which serve as an abstraction of stable memory. If the primary copy fails, one of the backup copies takes over and resumes processing service requests. After each failure a new backup copy is created in order to restore the replication degree of the service. All mechanisms necessary to achieve and maintain fault tolerance can be added automatically to the code of a non-fault-tolerant server, thus making fault tolerance completely transparent for the application programmer.
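
A minimal sketch of the primary/backup checkpointing idea (the framing protocol and all names are invented for illustration; the actual system adds such mechanisms to the server automatically):

```python
import pickle
import socket

def primary_loop(handle_request, requests, backup_addr):
    """Active copy: serve a request, then checkpoint the state to the
    passive backup, which acts as an abstraction of stable memory."""
    state = {}
    for req in requests:
        state = handle_request(state, req)
        blob = pickle.dumps(state)
        with socket.create_connection(backup_addr) as s:
            s.sendall(len(blob).to_bytes(4, "big") + blob)

def backup_loop(listen_sock):
    """Passive copy: only store checkpoints; on primary failure it would
    take over and resume service from the last stored state."""
    state = None
    while True:
        conn, _ = listen_sock.accept()
        with conn:
            size = int.from_bytes(conn.recv(4), "big")
            data = b""
            while len(data) < size:
                data += conn.recv(size - len(data))
        state = pickle.loads(data)   # last complete checkpoint
```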

Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using a three-valued logic decades ago. This approach allows rejecting certain unwanted formulae as faulty, which the simpler two-valued ones accept. We have developed resolution and tableau calculi for automated theorem proving that take the restrictions of the three-valued logic into account, which however have the severe drawback that existing theorem provers cannot directly be adapted to the technique. Even recently implemented calculi for many-valued logics are not well-suited, since in those the quantification does not exclude the undefined element. In this work we show that it is possible to enhance a two-valued theorem prover by a simple strategy so that it can be used to generate proofs for the theorems of the three-valued setting. By this we are able to use an existing theorem prover for a large fragment of the language.
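
One simple strategy of this flavour (our sketch of the general relativization idea, not necessarily the paper's exact transformation) guards every quantifier with a definedness predicate D before handing the formula to a two-valued prover:

```python
def relativize(formula):
    """Formulas are tuples such as ("forall", "x", body); atoms are left
    unchanged. D(x) is the (assumed) definedness predicate that excludes
    the undefined element from the range of quantification."""
    op = formula[0]
    if op == "forall":
        _, x, body = formula
        return ("forall", x, ("implies", ("D", x), relativize(body)))
    if op == "exists":
        _, x, body = formula
        return ("exists", x, ("and", ("D", x), relativize(body)))
    if op == "not":
        return ("not", relativize(formula[1]))
    if op in ("and", "or", "implies"):
        return (op,) + tuple(relativize(f) for f in formula[1:])
    return formula   # atom

print(relativize(("forall", "x", ("P", "x"))))
# -> ('forall', 'x', ('implies', ('D', 'x'), ('P', 'x')))
```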

This paper addresses two modes of analogical reasoning. The first mode is based on the explicit representation of the justification for the analogical inference. The second mode is based on the representation of typical instances by concept structures. The two kinds of analogical inference rely on different forms of relevance knowledge that cause non-monotonicity. While the uncertainty and non-monotonicity of analogical inferences is not questioned, a semantic characterization of analogical reasoning has not been given yet. We introduce a minimal model semantics for analogical inference with typical instances.