
Object-oriented case representations require approaches to similarity assessment that allow the comparison of two differently structured objects, in particular objects belonging to different object classes. Currently, such similarity measures are developed in a more or less ad-hoc fashion, and it is largely unclear how the structure of an object-oriented case model, e.g., the class hierarchy, influences similarity assessment. Intuitively, it is obvious that the class hierarchy contains knowledge about the similarity of the objects; how this knowledge relates to the knowledge that could be represented in similarity measures, however, is not obvious at all. This paper analyzes several situations in which class hierarchies are used in different ways for case modeling and proposes a systematic way of specifying similarity measures for comparing arbitrary objects from the hierarchy. The proposed similarity measures have a clear semantics and are inexpensive to compute at run-time.
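One systematic option of the kind the abstract calls for can be sketched concretely: score two objects by the depth of their deepest common superclass in the class hierarchy. The Wu-Palmer-style formula and the toy vehicle hierarchy below are illustrative assumptions, not the measure the paper actually proposes.

```python
# Hypothetical class hierarchy, given as child -> parent (root maps to None).
HIERARCHY = {
    "Vehicle": None,
    "Car": "Vehicle",
    "Truck": "Vehicle",
    "SportsCar": "Car",
    "Sedan": "Car",
}

def path_to_root(cls):
    """Return the list of classes from cls up to the root."""
    path = []
    while cls is not None:
        path.append(cls)
        cls = HIERARCHY[cls]
    return path

def similarity(a, b):
    """Wu-Palmer-style similarity: 2*depth(LCA) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), path_to_root(b)
    ancestors_a = set(pa)
    # The deepest common ancestor is the first class met walking up from b.
    lca = next(c for c in pb if c in ancestors_a)
    depth = lambda c: len(path_to_root(c)) - 1  # root has depth 0
    da, db, dl = depth(a), depth(b), depth(lca)
    if da + db == 0:
        return 1.0  # both objects are instances of the root class
    return 2 * dl / (da + db)
```

Such a measure is cheap at run-time (two walks to the root) and its semantics come directly from the hierarchy: siblings deep in the tree score higher than classes that only meet near the root.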

Learning Abstraction Hierarchies to Optimize the Selection of Automatically Abstracted Plans
(1994)

With the help of multistrategy approaches that integrate explanation-based and inductive learning, the performance of planning systems can be improved significantly. Solved planning problems are first abstracted and generalized with a knowledge-intensive method. The inductive, incremental learning algorithm that is the focus of this paper then makes it possible to arrange the entirety of the deductively generated knowledge in an abstraction hierarchy. In doing so, the generally undecidable "more-specific-than" relation between generalized plans is learned inductively from the given planning cases. This abstraction hierarchy then serves to classify new problems and thereby to determine a most specific applicable abstract solution.

This paper addresses the role of abstraction in case-based reasoning. We develop a general framework for reusing cases at several levels of abstraction, which is particularly suited for describing and analyzing existing approaches of this kind and for designing new ones. We show that in synthetic tasks (e.g. configuration, design, and planning), abstraction can be successfully used to improve the efficiency of similarity assessment, retrieval, and adaptation. Furthermore, a case-based planning system, called Paris, is described and analyzed in detail using this framework. An empirical study done with Paris demonstrates significant advantages concerning retrieval and adaptation efficiency as well as flexibility of adaptation. Finally, we show how other approaches from the literature can be classified according to the developed framework.

This paper presents a new algorithm, called KNNcost, for learning feature weights for CBR systems used for classification. Unlike previously known algorithms, KNNcost considers the profit of a correct and the cost of a wrong decision. The need for this algorithm is motivated by two real-world applications in which the costs and profits of decisions play a major role. We introduce a representation of accuracy, costs, and profits of decisions and define the decision cost of a classification system. To compare accuracy optimization with cost optimization, we tested KNNacc against KNNcost. The first optimizes classification accuracy with a conjugate-gradient algorithm; the second optimizes the decision cost of the CBR system, respecting the costs and profits of the classifications. We present experiments with these two algorithms in a real application to demonstrate the usefulness of our approach.
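To make the notion of "decision cost" concrete, here is a minimal sketch of evaluating a feature-weighted k-NN classifier by its leave-one-out decision cost. The feature weighting, the cost table, and all names are hypothetical illustrations; KNNcost itself learns the weights, which this sketch does not attempt.

```python
import math
from collections import defaultdict

def weighted_distance(x, y, weights):
    """Euclidean distance with one non-negative weight per feature."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))

def knn_decision_cost(cases, weights, cost, k=3):
    """Leave-one-out decision cost of a weighted k-NN classifier.

    cases:   list of (feature_vector, label) pairs
    cost:    cost[(predicted, actual)] of a decision; negative values
             can encode profits (an illustrative encoding, not the
             paper's notation)
    """
    total = 0.0
    for i, (x, label) in enumerate(cases):
        others = [c for j, c in enumerate(cases) if j != i]
        others.sort(key=lambda c: weighted_distance(x, c[0], weights))
        votes = defaultdict(int)
        for _, lab in others[:k]:
            votes[lab] += 1
        predicted = max(votes, key=votes.get)
        total += cost[(predicted, label)]
    return total
```

A weight-learning algorithm would then search for the weight vector minimizing this quantity rather than maximizing plain accuracy.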

Although several systematic analyses of existing approaches to adaptation have been published recently, a general formal adaptation framework is still missing. This paper presents a step toward such a formal model of transformational adaptation. The model is based on the notion of the quality of a solution to a problem, where quality is meant in a general sense and can also denote some kind of appropriateness, utility, or degree of correctness. Adaptation knowledge is then defined in terms of functions transforming one case into a successor case. The notion of quality provides a semantics for adaptation knowledge and allows us to define terms such as soundness, correctness, and completeness. In this view, adaptation (and even the whole CBR process) appears to be a special instance of an optimization problem.

In this paper, we propose the PARIS approach for improving complex problem solving by learning from previous cases. In this approach, abstract planning cases are learned from given concrete cases. For this purpose, we have developed a new abstraction methodology that allows the representation language of a planning case to be changed completely, given concrete and abstract languages supplied by the user. Furthermore, we present a learning algorithm that is correct and complete with respect to the introduced model. An empirical study in the domain of process planning in mechanical engineering shows significant improvements in planning efficiency through learning abstract cases, whereas an explanation-based learning method yields only a very slight improvement.

Paris (Plan Abstraction and Refinement in an Integrated System) [4, 2] is a domain-independent case-based planning system that allows the flexible reuse of planning cases by abstraction and refinement. This approach is mainly inspired by the observation that the reuse of plans should not be restricted to a single description level. In domains with high variation in the problems, the reuse of past solutions must be achieved at various levels of abstraction.

Abstraction is one of the most promising approaches to improving the performance of problem solvers. In several domains, abstraction by dropping sentences of a domain description - as used in most hierarchical planners - has proven useful. In this paper we present examples that illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving a change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allow the representation language of planning cases to be changed completely from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself, as well as rules describing admissible ways of abstracting states, must be provided in the domain model. This new abstraction approach is the core of PARIS (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning.

Case-based problem solving can be significantly improved by applying domain knowledge (as opposed to problem-solving knowledge), which can be acquired with reasonable effort, to derive explanations of the correctness of a case. Such explanations, constructed at several levels of abstraction, can serve as the basis for similarity assessment as well as for adaptation by solution refinement. The general approach of explanation-based similarity can be applied to different real-world problem-solving tasks such as diagnosis and planning in technical areas. This paper presents the general idea as well as two specific, fully implemented realizations for a diagnosis and a planning task.

When problems are solved through reasoning from cases, the primary kind of knowledge is contained in the specific cases stored in the case base. However, in many situations additional background knowledge is required to cope with the requirements of an application. We describe an approach to integrating such general knowledge into the reasoning process so that it complements the knowledge contained in the cases. This general knowledge is not itself sufficient to perform any kind of model-based problem solving, but it is required to interpret the available cases appropriately. Background knowledge is expressed by two kinds of rules, both of which must be formalized by the knowledge engineer: completion rules describe how to infer additional features from the known features of an old case or the current query case; adaptation rules describe how an old case can be adapted to fit the current query. This paper shows how these kinds of rules can be integrated into an object-oriented case representation.
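The completion rules described above can be pictured as a simple fixpoint computation over a feature dictionary: rules keep firing until no rule adds a new feature. The rule format and the feature names are invented for illustration; the paper embeds such rules in an object-oriented case representation rather than in flat dictionaries.

```python
def complete(case, rules):
    """Apply completion rules until no rule adds or changes a feature.

    A rule is a (condition, derived) pair of feature dicts: if every
    feature in `condition` matches the case, the `derived` features
    are added. Terminates because each pass either changes a feature
    value or stops.
    """
    case = dict(case)  # do not mutate the caller's case
    changed = True
    while changed:
        changed = False
        for condition, derived in rules:
            if all(case.get(k) == v for k, v in condition.items()):
                for k, v in derived.items():
                    if case.get(k) != v:
                        case[k] = v
                        changed = True
    return case

# Hypothetical rules: a diesel engine implies diesel fuel, which in
# turn implies a glow-plug heater.
RULES = [
    ({"engine": "diesel"}, {"fuel": "diesel"}),
    ({"fuel": "diesel"}, {"heater": "glow_plug"}),
]
```

Completing both the stored case and the query before similarity assessment lets the measure compare features that were never stated explicitly.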

This paper motivates the necessity of support for negotiation during sales support on the Internet within Case-Based Reasoning solutions. Different negotiation approaches are discussed and a general model of the sales process is presented. Further, the traditional CBR cycle is modified in such a way that iterative retrieval during a CBR consulting session is covered by the new model. Several general characteristics of negotiation are described, and a case study is shown in which preliminary approaches are used to negotiate with a customer about his demands and the available products in a 'CBR-based' Electronic Commerce solution.

The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analysed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes ongoing work.

In concurrent systems, the concept of atomicity of operations makes it easier to partition competing accesses into larger, more manageable sections. When we consider specifications in the formal description technique Estelle, however, it turns out that under certain circumstances it is difficult for implementations to adhere exactly to the atomicity of the so-called transitions, although this atomicity is a conceptual foundation of the semantics of Estelle. We show how correct as well as efficient concurrent implementations can nevertheless be achieved. Finally, we point out that the actions triggering the problem can often easily be avoided by the specifier from the outset; and this also holds beyond the context of Estelle.

Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)

The feature interaction problem in Intelligent Networks increasingly obstructs the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem: the size of the systems and the sheer computational complexity prevent the system developer from manually checking every feature against every other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.

A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)

We present an approach for the treatment of feature interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.

One main purpose of the use of formal description techniques (FDTs) is formal reasoning and verification. This requires a formal calculus and a suitable formal semantics of the FDT. In this paper, we discuss the basic verification requirements for Estelle and how they can be supported by existing calculi. This leads us to the redefinition of the standard Estelle semantics using Lamport's temporal logic of actions and Dijkstra's predicate transformers.

In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.

This report is a first attempt at formalizing the diagonalization proof technique. We give a strategy for systematically constructing diagonalization proofs: (i) finding an indexing relation, (ii) constructing a diagonal element, and (iii) making the implicit contradiction of the diagonal element explicit. We suggest a declarative representation of the strategy and describe how it can be realized in a proof planning environment.

We present an empirical study of mathematical proofs by diagonalization; the aim is their mechanization based on proof planning techniques. We show that these proofs can be constructed according to a strategy that (i) finds an indexing relation, (ii) constructs a diagonal element, and (iii) makes the implicit contradiction of the diagonal element explicit. Moreover, we suggest how diagonal elements can be represented.

This report presents a methodology to guide equational reasoning in a goal-directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma, we use the techniques of difference unification and rippling to enable its use.

In recent years a form of software development that was previously dismissed as too ad hoc and chaotic for serious projects has suddenly taken center stage. With products such as Apache, Linux, Perl, and others, open-source software has emerged as a viable alternative to traditional approaches to software development. With its globally distributed developer force and extremely rapid code evolution, open source is arguably the extreme in "virtual software projects" [1], and exemplifies many of the advantages and challenges of distributed software development.

This paper describes the design and implementation of a process support system (PROSYT), which is intended to provide guidance in performing business processes and to support cooperation among people over a local or geographically distributed architecture. In particular, it can be used as a Process-centered Software Engineering Environment (PSEE) to support distributed software development. Our main purpose is to describe how complex applications of this kind can be developed systematically, and in particular how the requirements of high flexibility, reconfigurability, scalability, and efficiency demanded by these applications can be met through appropriate design choices.

There are two general approaches to providing for isochronous streams in the current Internet. The first is the resource reservation approach, through protocols such as RSVP or ATM technology. This provides bandwidth guarantees; however, it also requires significant upgrading of resources in the underlying network. The other common approach is adaptive rate control, where the end-system controls its rate according to feedback from the client population. This approach cannot guarantee timely delivery and raises some scaling questions; however, a properly implemented scheme does improve quality, and it requires no changes to the underlying IP network. Hence, there exists a dichotomy of requirements: 1. to cater for reservation protocols or 'hooks' for future reservation components, and 2. to provide an architecture with an application-controlled QoS scheme that scales to the size of the current Internet in a best-effort architecture.

CORBA Lacks Venom
(1999)

Distributed objects bring to distributed computing such desirable properties as modularisation, abstraction and reuse, easing the burden of development and maintenance by diminishing the gap between implementation and real-world objects. Distributed objects, however, need a consistent framework in which inter-object communication may take place. The Common Object Request Broker Architecture (CORBA) is a distributed object standard. CORBA's primary protocol is the Internet Inter-ORB Protocol (IIOP), limited to blocking synchronous remote procedure calls over TCP/IP, which is inappropriate for systems requiring timeliness guarantees.

In recent years the demand for business process modelling (BPM) has become apparent in many different communities, e.g. information systems engineering, requirements engineering [KiB94], software engineering and knowledge engineering (e.g. [BrV94], [SWH+94]). This suggests aiming at a unifying view of business process modelling across all these disciplines. To achieve the business goals, some problems which obstruct these goals must be solved. This can be done either by restructuring the business process, by applying standard software, or by developing individual software components such as knowledge-based systems (KBSs). To be able to model business goals and to analyse problems occurring during the business processes, these processes, including organisational structures and activities, have to be modelled. This is also true when building a KBS in an enterprise environment. Because the KBS is only a small part of the whole business organisation, it must be embedded into or at least linked to all relevant business processes, i.e. it should not be a stand-alone solution. For this purpose we extend the MIKE approach [AFS96] in the BMBF project WORKS (Work Oriented Design of Knowledge Systems) by offering business models for modelling relevant aspects of an enterprise. To be able to define an integrated framework with other possibilities for improving an enterprise (e.g. information systems engineering), we determine the standard views of an enterprise. Next, we define the views that are necessary for developing a KBS.

A large set of criteria for evaluating formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it, and some of the steps needed to do so are explained. As an example, the catalogue is applied within the context of the application domain of building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.

A combination of a state-based formalism and a temporal logic is proposed to obtain an expressive language for various descriptions of reactive systems. It thereby becomes possible to use a model-oriented as well as a property-oriented specification style in one description. The descriptions considered here are those of the environment, the specification, and the design of a reactive system. It is possible, for example, to express the requirements of a reactive system by states and transitions between them, together with further temporal formulas restricting the behaviors of the statecharts. It is shown how this combined formalism can be used: the specification of a small example is given, and a designed controller is proven correct with respect to this specification. The combination of the languages is based on giving a temporal semantics of a state-based formalism (statecharts) using a temporal logic (TLA).

There are well-known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to conditional term rewriting systems as defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.
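For readers unfamiliar with (unconditional) semi-Thue systems: such a system rewrites substrings of a word by string rules until no rule applies, and a finite canonical system gives every word a unique normal form. The sketch below assumes a terminating system; the example rules form a hypothetical presentation chosen for illustration, not one of the paper's monoids.

```python
def normalize(word, rules, max_steps=1000):
    """Rewrite `word` with a semi-Thue system until no rule applies.

    `rules` is a list of (lhs, rhs) string pairs; at each step the
    first applicable rule rewrites the leftmost occurrence of its
    left-hand side. Assumes the system terminates; `max_steps` guards
    against non-terminating inputs.
    """
    for _ in range(max_steps):
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i >= 0:
                word = word[:i] + rhs + word[i + len(lhs):]
                break
        else:
            return word  # no rule applied: word is in normal form
    raise RuntimeError("step limit exceeded; system may not terminate")

# Hypothetical presentation: commute b past a, cancel aa and bb.
RULES = [("ba", "ab"), ("aa", ""), ("bb", "")]
```

With a canonical system, two words present the same monoid element exactly when their normal forms coincide, which is what makes the word problem decidable by rewriting.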

Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and it guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts are required to combine their respective advantages. This paper concentrates on the resulting language concepts and their operationalization, which are necessary for building automated process support.

We present an approach to prove several theorems in slightly different axiom systems simultaneously. We represent the different problems as a taxonomy, i.e. a tree in which each node inherits all knowledge of its predecessors, and solve the problems using inference steps on rules and equations with simple constraints, i.e. words identifying nodes in the taxonomy. We demonstrate that a substantial gain can be achieved by using taxonomic constraints, not only by avoiding the repetition of inference steps in the different problems but also by achieving run times that are much shorter than the accumulated run times when proving each problem separately.

We present a concept for an automated theorem prover that employs a search control based on ideas from several areas of artificial intelligence (AI). The combination of case-based reasoning, several similarity concepts, a cooperation concept from distributed AI, and reactive planning enables a system using our concept to learn from previous successful proof attempts. In a kind of bootstrapping process, easy problems are used to solve more and more complicated ones. We provide case studies from two domains of interest in pure equational theorem proving taken from the TPTP library. These case studies show that an instantiation of our architecture achieves a high grade of automation and outperforms state-of-the-art conventional theorem provers.

We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represents the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach, which utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs, was able to handle many different variants.
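The action-selection mechanism described above is just the nearest-neighbor rule over situation/action pairs and can be sketched in a few lines. The squared-Euclidean distance and the example prototypes are assumptions for illustration; the paper's contribution, evolving a good prototype set with a genetic algorithm, is omitted here.

```python
def nearest_action(situation, prototypes):
    """Return the action of the closest prototypical situation.

    prototypes: list of (situation_vector, action) pairs. A plain
    squared-Euclidean distance stands in for whatever situation
    features a pursuit-game agent would actually use.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(prototypes, key=lambda p: sqdist(situation, p[0]))
    return action

# Hypothetical behavior: two prototypes partition the situation space
# into a "wait" region and a "chase" region.
PROTOTYPES = [((0.0, 0.0), "wait"), ((5.0, 0.0), "chase")]
```

Because the whole behavior is the prototype list, learning can operate directly on that list: a genetic algorithm mutates, adds, or drops situation/action pairs and keeps the sets whose induced behavior performs best.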

The team work method is a concept for distributing automated theorem provers and thus activating several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as the basic prover. In this paper we present three classes of experts working in a goal-oriented fashion. In general, goal-oriented experts perform their job "unfairly" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements is that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion-based proof methods are frequently regarded as having the disadvantage of not being goal oriented. We believe that our approach overcomes this disadvantage to a large extent.

We present an overview of various learning techniques used in automated theorem provers. We characterize the main problems arising in this context and classify the solutions to these problems from published approaches. We analyze the suitability of several combinations of solutions for different approaches to theorem proving and place these combinations in a spectrum ranging from provers using very specialized learning approaches to optimally adapt to a small class of proof problems, to provers that learn more general kinds of knowledge, resulting in systems that are less efficient in special cases but show improved performance for a wide range of problems. Finally, we suggest combinations of solutions for various proof philosophies.

This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that subtasks of a proof problem often cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently on the problem with different heuristics. After a certain amount of time, referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results into a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user.

"Plans are always subject to change" - Shin'a'in proverb

We present a distributed system, Dott, for approximately solving the Traveling Salesman Problem (TSP) based on the Teamwork method. So-called experts and specialists work independently and in parallel for given time periods. For TSP, specialists are tour construction algorithms, and experts use modified genetic algorithms in which, after each application of a genetic operator, the resulting tour is locally optimized before it is added to the population. After a given time period the work of each expert and specialist is judged by a referee. A new start population, including selected individuals from each expert and specialist, is generated by the supervisor, based on the judgments of the referees. Our system is able to find better tours than each of the experts or specialists working alone. Also, results comparable to those of single runs can be found much faster by a team.
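The local optimization applied after each genetic operator can be sketched as classic 2-opt improvement; the abstract does not name the exact algorithm, so 2-opt is an assumption chosen as the standard tour-improvement heuristic.

```python
def tour_length(tour, dist):
    """Length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while that shortens the tour.

    Classic 2-opt local search: a reversal that removes a crossing of
    two edges strictly shortens the tour, so the loop terminates at a
    local optimum.
    """
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour
```

In a Teamwork-style expert, each offspring tour produced by crossover or mutation would pass through such a routine before entering the population, so the genetic search only ever compares locally optimal tours.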

Automatic proof systems are becoming more and more powerful. However, the proofs generated by these systems are not met with wide acceptance, because they are presented in a way inappropriate for human understanding. In this paper we pursue two different, but related, aims. First, we describe methods to structure and transform equational proofs so that they conform to human reading conventions. We develop algorithms to impose a hierarchical structure on proof protocols from completion-based proof systems and to generate equational chains from them. Our second aim is to demonstrate the difficulties of obtaining such protocols from distributed proof systems and to present our solution to these problems for provers using the TEAMWORK method. We also show that proof systems using this method can give considerable help in structuring the proof listing in a way analogous to human behaviour. In addition to theoretical results we also include descriptions of algorithms, implementation notes, examples, and data on a variety of examples.

Multi-User Dimensions (MUDs) [3], and their Object-Oriented versions (MOOs) [6], are geographically distributed, programmable client-server systems that support the cooperation of multiple users according to the virtual environment metaphor. In this metaphor, users are allowed to concurrently navigate in a set of "virtual" rooms. Rooms are interconnected through doors and may contain objects. Users are allowed to explore the contents of rooms, create and manipulate objects, and contact other users visiting the same room.

This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMS into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.

In this paper the autonomous mobile vehicle MOBOT-IV is presented, which is capable of exploring an indoor environment while building up an internal representation of its world. This internal model is used for the navigation of the vehicle during and after the exploration phase. In contrast to methods that use a grid-based or line-based environment representation, the approach presented in this paper uses local sector maps as the basic data structure of the world model. The paper describes the method of view-point planning for map building, the use of this map for navigation, and the method of external position estimation, including the handling of a position error in a moving real-time system.

Komprimierungsverfahren
(1999)

Driven by the growth in both the demand for and the supply of information, more efficient ways of representing information of all kinds are needed. This can be achieved both by assimilating and optimizing the chosen data structure and its representation, and (additionally) by compressing it. This tutorial gives a pragmatic introduction to the field of compression. Three representative methods from different areas are presented: compression of texts using Huffman codes, compression of bit lists using run-length encoding (RLE), and compression of graphics based on the RGB color model using a method of our own. While the first two methods are representatives of lossless compression, the third is a representative of lossy compression. To round things off, the presented methods are practiced on concrete examples and finally even implemented in the programming language Pascal. A concrete realization in a given programming language naturally carries the risk of losing sight of the essentials. Therefore, in preparing this tutorial (in particular the programming exercises), meticulous care was taken to abstract away unnecessary details.
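To make the run-length idea concrete, here is a minimal sketch in Python rather than the Pascal used in the tutorial itself; the function names and the (first bit, run lengths) encoding are illustrative:

```python
def rle_encode(bits):
    """Run-length encode a bit string: store the value of the first bit
    and the lengths of the successive runs of equal bits."""
    if not bits:
        return '', []
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first, runs):
    """Rebuild the bit string from the first bit and the run lengths."""
    out, bit = [], first
    for n in runs:
        out.append(bit * n)
        bit = '1' if bit == '0' else '0'
    return ''.join(out)
```

This is lossless: decoding the encoded form reproduces the input exactly, and it pays off whenever the bit list contains long runs.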

KLUEDO, the Kaiserslautern university-wide electronic document server, allows members of the University of Kaiserslautern to publish scientific documents in electronic form on the Internet and thus worldwide. The document server was created on the initiative of the departments and the university library of Kaiserslautern within the GLOBAL INFO preliminary project funded by the German Federal Ministry of Education and Research (BMBF). In places, the document server is based on the mathematics preprint server of the University of Kaiserslautern, which was built up during the MathNet project of the German research network (DFN) and the MathBibNet project of the German Research Foundation (DFG). This article outlines the considerations behind the development of the document server in its present form and gives a rough overview of the system architecture. Detailed information and a description of the system can be found in my thesis "Propädeutik metadatenbasierter Publikationsserver: Ontologie und konkordante Implementierung".

The ART system (ASF RRL Translation) essentially provides an environment in which the modularizability of proofs (induction proofs over equational specifications) can be investigated. The existing specification language ASF (see [BeHeKl89]), which supports modularized specifications, was extended so that proof tasks can be specified as well; in the following, this extended specification language is also called ASF. RRL (see [KaZh89]) was chosen as the prover for the proof tasks of a specification. RRL can both process commands from a file and produce session protocols from which the proof runs and user inputs of the corresponding RRL session can be reconstructed. In ART, an ASF specification, which may include proof tasks, can be translated into a file that can be processed by RRL; this is referred to below as "translation from ASF to RRL". While processing such a file, RRL creates a session protocol. ART can use this session protocol to determine new results, such as the successful proof of a proof task, and to add these results to the original specification; this is referred to below as "back-translation from RRL to ASF". At its core, ART thus consists of a component for translating from ASF to RRL and a component for back-translating from RRL to ASF.

This paper provides a description of PLATIN. With PLATIN we present an implemented system for planning inductive theorem proofs in equational theories that are based on rewrite methods. We provide a survey of the underlying architecture of PLATIN and then concentrate on details and experiences of the current implementation.

This paper focuses on the issues involved when multiple mobile agents interact in multiagent systems. The application is an intelligent agent market place, where buyer and seller agents cooperate and compete to process sales transactions for their owners. The market place manager acts as a facilitator by giving necessary information to agents and managing communication between them, and also as a mediator by proposing solutions to agents or stopping them from getting into infinite loops of bargaining back and forth. The buyer and seller agents range from using hardcoded logic to rule-based inferencing in their negotiation strategies. However, these agents must support some communication skills using KQML or FIPA-ACL. So, in contrast with other approaches to multiagent negotiation, we introduce an explicit mediator (the market place manager) into the negotiation, and we propose a negotiation strategy based on dependence theory [1], implemented by our best buyers and best sellers.

So far, knowledge acquisition has often hindered the use of knowledge-based process planning systems in industrial practice. Most applications only allow capturing and editing domain-specific planning knowledge obtained through laborious elicitation, systematization, and formulation. Within a DFG project, the applicability of known machine learning methods to technological sequencing and assignment problems in generative process planning for parts manufacturing processes in mechanical engineering is to be demonstrated. To this end, a prototype is being developed with an available software tool; it is to enable machine learning from given examples and to communicate with an existing prototype for knowledge-based process planning. The following contribution gives an overview of the planning knowledge to be treated with learning methods and puts possible representations of this knowledge up for discussion.

Although work processes, like software processes, include a number of process aspects such as defined phases and deadlines, they are not plannable in detail. However, the advantages of today's process management, such as effective document routing and timeliness, can only be achieved with detailed models of work processes. This paper suggests a concept that uses detailed process models in conjunction with the possibility of defining the way a process model determines the work of individuals. Based on the WAM approach, which allows workers to choose methods for their tasks according to the situation, we describe features to carry out planned parts of a process with workers always being able to start exceptional mechanisms. These mechanisms are based on the modelling paradigm of linked abstraction workflows (LAWs), which describe workflows at different levels of abstraction and classify refinements of tasks by the way lower-level tasks can be used.

We present a way to describe Reason Maintenance Systems using the same formalism for justification-based as well as for assumption-based approaches. This formalism uses labelled formulae and thus is a special case of Gabbay's labelled deductive systems. Since our approach is logic based, we are able to get a semantics-oriented description of the systems in question. Instead of restricting ourselves to e.g. propositional Horn formulae, as was done in the past, we admit arbitrary logics. This enables us to characterize systems as a whole, including both the reason maintenance component and the problem solver, nevertheless maintaining a separation between the basic logic and the part that describes the label propagation. The possibility to freely vary the basic logic enables us not only to describe various existing systems, but can also help in the design of completely new ones. We also show that it is possible to implement systems based directly on our labelled logic, and plead for "incremental calculi" crafted to attack undecidable logics. Furthermore it is shown that the same approach can be used to handle default reasoning, if the propositional labels are upgraded to first order.

In order to improve the quality of software systems and to set up a more effective process for their development, many attempts have been made in the field of software engineering. Reuse of existing knowledge is seen as a promising way to solve the outstanding problems in this field. In previous work we have integrated the design pattern concept with the formal design language SDL, resulting in a certain kind of pattern formalization. For the domain of communication systems we have also developed a pool of SDL patterns with an accompanying process model for pattern application. In this paper we present an extension that combines the SDL pattern approach with the experience base concept. This extension supports a systematic method for empirical evaluation and continuous improvement of the SDL pattern approach. Thereby the experience base serves as a repository necessary for effective reuse of the captured knowledge. A comprehensive usage scenario is described which shows the advantages of the combined approach. To demonstrate its feasibility, first results of a research case study are given.

Learning from examples is a field of research in machine learning where class descriptions, like decision trees or implications (production rules or Horn clauses), are produced using positive and negative examples as information. Many different heuristic search strategies have been developed to solve this task so far. Search by specialization is the most widely used search strategy, whereas other approaches use search by generalization only. JoJo is an algorithm that combines both search directions into one search procedure. According to the estimated quality of the currently regarded rule, either a generalization or a specialization step is carried out by deleting or adding one premise in the conjunction part of the rule. But to create an even more flexible (and faster) algorithm, it should be possible to delete or add more than just one premise at a time. Relaxing this restriction of JoJo led to the new, highly flexible algorithm Frog, which additionally uses a third search direction.
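A single JoJo-style refinement step - choosing between deleting a premise (generalization) and adding one (specialization) according to an estimated rule quality - might be sketched as follows; the set-based rule encoding and the `quality` function are illustrative assumptions, not the paper's actual data structures:

```python
def refine_rule(premises, candidates, quality):
    """One bidirectional search step (illustrative sketch): try every rule
    obtained by deleting one premise (generalization) or adding one
    candidate premise (specialization), and keep the best-rated rule.
    `quality` maps a set of premises to an estimated score."""
    neighbors = [premises - {p} for p in premises]                          # generalize
    neighbors += [premises | {c} for c in candidates if c not in premises]  # specialize
    best = max(neighbors, key=quality, default=premises)
    # Move only if a neighbor is strictly better than the current rule.
    return best if quality(best) > quality(premises) else premises
```

Frog's relaxation would correspond to also generating neighbors that differ by several premises at once.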

We argue in this paper that sophisticated microplanning techniques are required even for mathematical proofs, in contrast to the belief that mathematical texts are only schematic and mechanical. We demonstrate why paraphrasing and aggregation significantly enhance the flexibility and the coherence of the text produced. To this end, we adopted the Text Structure of Meteer as our basic representation. The type checking mechanism of Text Structure allows us to achieve paraphrasing by building comparable combinations of linguistic resources. Specified in terms of concepts in a uniform ontological structure called the Upper Model, our semantic aggregation rules are more compact than similar rules reported in the literature.

This paper outlines the microplanner of PROVERB, a system that generates multilingual text from machine-found mathematical proofs. The main representational vehicle is the text structure proposed by Meteer. Following Panaget, we also distinguish between the ideational and the textual semantic categories, and use the upper model to replace the former. Based on this, a further extension is made to support aggregation before realization decisions are made. While the framework of our macroplanner is kept language-independent, our microplanner draws on language-specific linguistic sources such as realization classes and the lexicon. Since English and German are the first two languages to be generated, and because the sublanguage of our mathematical domain is relatively limited, the upper model and the textual semantic categories are designed to cope with both languages. Since the work reported is still in progress, we also discuss open problems we are facing.

This paper describes the linguistic part of a system called PROVERB, which transforms, abstracts, and verbalizes machine-found proofs into formatted texts. Linguistically, the architecture of PROVERB follows most application-oriented systems and is a pipelined control of three components. Its macroplanner linearizes a proof and plans mediating communicative acts by employing a combination of hierarchical planning and focus-guided navigation. The microplanner then maps communicative acts and domain concepts into linguistic resources, and paraphrases and aggregates such resources to produce the final Text Structure. A Text Structure contains all necessary syntactic information and can be executed by our realizer into grammatical sentences. The system works fully automatically and performs particularly well for textbook-size examples.

This paper outlines the linguistic part of an implemented system named PROVERB [3] that transforms, abstracts, and verbalizes machine-found proofs in natural language. It aims to illustrate that state-of-the-art techniques of natural language processing are necessary to produce coherent texts that resemble those found in typical mathematical textbooks, in contrast to the belief that mathematical texts are only schematic and mechanical. The verbalization module consists of a content planner, a sentence planner, and a syntactic generator. Intuitively speaking, the content planner first decides the order in which proof steps should be conveyed. It also generates messages to highlight global proof structures. Subsequently, the sentence planner combines and rearranges linguistic resources associated with messages produced by the content planner in order to produce connected text. The syntactic generator finally produces the surface text.

This paper outlines an implemented system named PROVERB that transforms and abstracts machine-found proofs to natural deduction style proofs at an adequate level of abstraction and then verbalizes them in natural language. The abstracted proofs, originally employed only as an intermediate representation, also prove to be useful for proof planning and proving by analogy.

Within the special research program SFB 314, project X9 "Learning and Analogy in Technical Expert Systems", the applicability of case-based reasoning techniques in knowledge-based systems was investigated. Process planning for rotationally symmetric workpieces was chosen as the prototypical application domain. This contribution describes a model of process planning that takes into account the various planning methods treated as independent so far. Based on model-based knowledge acquisition from process plans available in companies, one part of process planning, fixture planning, is elaborated in detail. Applicability was demonstrated by a prototypical implementation.

The goal of this project was to use empirical studies to compare classical statistical methods and current machine learning methods with an approach that was designed and theoretically analyzed in our research group. Five methods were implemented, some of them in several variants: feed-forward neural networks, decision trees, Bayes decisions based on Chow expansions, harmonic analysis, and the nearest-neighbor method. As a reference benchmark, predictions were used that forecast the trend or the mean of the most recent observations. The data consisted of 16 time series of stock and foreign exchange prices. Each time series comprised 2000 data points, of which the first 1500 were used for training and the remaining 500 for comparing the methods. It turned out that the naive reference methods were quite a good touchstone. The Bayes decisions and the decision trees proved particularly strong and almost always outperformed the reference methods. Neural networks and the nearest-neighbor method were about equally good, while harmonic analysis was worse for short-term predictions and better for long-term ones. For decision trees and neural networks it was noticeable that small trees or networks yielded better results than large ones.

Concept mapping is a simple and intuitive visual form of knowledge representation. Concept maps can be categorized as informal or formal, where the latter is characterized by implementing a semantics model constraining their components. Software engineering is a domain that has successfully adopted formal concept maps to visualize and specify complex systems. Automated tools have been implemented to support these models although their semantic constraints are hardcoded within the systems and hidden from users. This paper presents the Constraint Graphs and jKSImapper systems. Constraint Graphs is a flexible and powerful graphical system interface for specifying concept mapping notations. In addition, jKSImapper is a multi-user concept mapping editor for the Internet and the World Wide Web. Together, these systems aim to support user-definable formal concept mapping notations and distributed collaboration on the Internet and the World Wide Web.

This contribution presents connectionist learning methods for the knowledge-based diagnosis of technical systems. Two problems are investigated: the prediction of signal curves of technical state variables, and the diagnostic classification of system states; the results of these investigations are presented.

We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation has been done in [4]).

The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. Thus, we have accumulated some experience about how adaptation processes can be expressed and formalized. The literature on the subject is quite extensive but demonstrates a lack of formalism. At most, there exist some classifications of different types of adaptation.

Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control and suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but have weak calculi when their proof lengths are considered. In order to integrate both approaches, our method is to achieve cooperation between a top-down and a bottom-up prover: the top-down prover generates subgoal clauses, which are then processed by a bottom-up prover. We discuss theoretical aspects of this methodology and we introduce techniques for a relevancy-based filtering of generated subgoal clauses. Experiments with a model elimination and a superposition-based prover reveal the high potential of our cooperation approach. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).

We examine an approach for demand-driven cooperative theorem proving. We briefly point out the problems arising from the use of common success-driven cooperation methods, and we propose the application of our approach of requirement-based cooperative theorem proving. This approach allows for a better orientation on current needs of provers in comparison with conventional cooperation concepts. We introduce an abstract framework for requirement-based cooperation and describe two instantiations of it: requirement-based exchange of facts, and sub-problem division and transfer via requests. Finally, we report on experimental studies conducted in the areas of superposition and unfailing completion. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).

We examine different possibilities of coupling saturation-based theorem provers by exchanging positive/negative information. We discuss which positive or negative information is well-suited for cooperative theorem proving and show in an abstract way how this information can be used. Based on this study, we introduce a basic model for cooperative theorem proving. We present theoretical results regarding the exchange of positive/negative information as well as practical methods and heuristics that allow for a gain of efficiency in comparison with sequential provers. Finally, we report on experimental studies conducted in the areas of condensed detachment, unfailing completion, and superposition. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).

We investigate the usage of so-called inference rights. We point out the problems arising from the inflexibility of existing approaches to heuristically controlling the search of automated deduction systems, and we propose the application of inference rights, which are well-suited for controlling the search more flexibly. Moreover, inference rights allow for a mechanism of "partial forgetting" of facts that is not realizable in most controlling approaches. We study theoretical foundations of inference rights as well as the integration of inference rights into already existing inference systems. Furthermore, we present possibilities to control such modified inference systems in order to gain efficiency. Finally, we report on experimental results obtained in the area of condensed detachment. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).

We present a cooperation concept for automated theorem provers that is based on a periodical interchange of selected results between several incarnations of a prover. These incarnations differ from each other in the search heuristic they employ for guiding the search of the prover. Depending on the strengths and weaknesses of these heuristics, different knowledge and different communication structures are used for selecting the results to interchange. Our concept is easy to implement and can easily be integrated into already existing theorem provers. Moreover, the resulting cooperation allows the distributed system to find proofs much faster than single heuristics working alone. We substantiate these claims by two case studies: experiments with the DiCoDe system, which is based on the condensed detachment rule, and experiments with the SPASS system, a prover for first-order logic with equality based on the superposition calculus. Both case studies show the improvements achieved by our cooperation concept.

We present a methodology for coupling several saturation-based theorem provers (running on different computers). The methodology is well-suited for realizing cooperation between different incarnations of one basic prover. Moreover, heterogeneous provers - which differ from each other in the calculus and in the heuristic they employ - can also be coupled. Cooperation between the different provers is achieved by periodically interchanging clauses which are selected by so-called referees. We present theoretic results regarding the completeness of the system of cooperating provers as well as describe concrete heuristics for designing referees. Furthermore, we report on two experimental studies performed with homogeneous and heterogeneous provers in the areas of superposition and unfailing completion. The results reveal that the occurring synergetic effects lead to a significant improvement of performance.

We present a general framework for developing search heuristics for automated theorem provers. This framework allows for the construction of heuristics that are on the one hand able to replay (parts of) a given proof found in the past, but are on the other hand flexible enough to deviate from the given proof path in order to solve similar proof problems. We substantiate the abstract framework by the presentation of three distinct techniques for learning appropriate search heuristics based on so-called features. We demonstrate the usefulness of these techniques in the area of equational deduction. Comparisons with the renowned theorem prover Otter validate the applicability and strength of our approach.

We present an approach to automating the selection of search-guiding heuristics that control the search conducted by a problem solver. The approach centers on representing problems with feature vectors, that is, vectors of numerical values. Thus, similarity between problems can be determined by using a distance measure on feature vectors. Given a database of problems, each problem being associated with the heuristic that was used to solve it, heuristics to be employed to solve a novel problem are suggested in correspondence with the similarity between the novel problem and problems of the database. Our approach is strongly connected with instance-based learning and nearest-neighbor classification and therefore possesses incremental learning capabilities. In experimental studies it has proven to be a viable tool for achieving the final and crucial missing piece of automation of problem solving - namely selecting an appropriate search-guiding heuristic - in a flexible way. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
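The core lookup step can be sketched in a few lines; the Euclidean metric and the default of suggesting the three nearest neighbors are illustrative assumptions:

```python
import math

def suggest_heuristics(novel, database, k=3):
    """Suggest search-guiding heuristics for a novel problem by a
    nearest-neighbor lookup over feature vectors.
    `database` is a list of (feature_vector, heuristic_name) pairs."""
    def dist(u, v):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    ranked = sorted(database, key=lambda entry: dist(novel, entry[0]))
    return [name for _, name in ranked[:k]]
```

Incremental learning then simply means appending each newly solved (feature vector, heuristic) pair to the database.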

We are going to present two methods that allow one to exploit previous experience in the area of automated deduction. The first method adapts (learns) the parameters of a heuristic employed for controlling the application of inference rules in order to find a known proof with as little redundant search effort as possible. Adaptation is accomplished by a genetic algorithm. A heuristic learned that way can then be profitably used to solve similar problems. The second method attempts to re-enact a known proof in a flexible manner in order to solve an unknown problem whose proof is believed to lie in (close) vicinity. The experimental results obtained with an equational theorem prover show that these methods not only allow for impressive speed-ups, but also make it possible to handle problems that were out of reach before.

Problems stemming from the study of logic calculi in connection with an inference rule called "condensed detachment" are widely acknowledged as prominent test sets for automated deduction systems and their search-guiding heuristics. It is in the light of these problems that we demonstrate, with numerous experiments, the power of heuristics that make use of past proof experience. We present two such heuristics. The first heuristic attempts to re-enact a proof of a proof problem found in the past in a flexible way in order to find a proof of a similar problem. The second heuristic employs "features" in connection with past proof experience to prune the search space. Both these heuristics not only allow for substantial speed-ups, but also make it possible to prove problems that were out of reach when using so-called basic heuristics. Moreover, a combination of these two heuristics can further increase performance. We compare our results with the results the creators of Otter obtained with this renowned theorem prover and in this way substantiate our achievements.

We present a method for learning heuristics employed by an automated prover to control its inference machine. The hub of the method is the adaptation of the parameters of a heuristic. Adaptation is accomplished by a genetic algorithm. The necessary guidance during the learning process is provided by a proof problem and a proof of it found in the past. The objective of learning consists in finding a parameter configuration that avoids redundant effort w.r.t. this problem and the particular proof of it. A heuristic learned (adapted) this way can then be applied profitably when searching for a proof of a similar problem. So, our method can be used to train a proof heuristic for a class of similar problems. A number of experiments (with an automated prover for purely equational logic) show that adapted heuristics are not only able to speed up enormously the search for the proof learned during adaptation. They also reduce redundancies in the search for proofs of similar theorems. This not only results in finding proofs faster, but also enables the prover to prove theorems it could not handle before.

In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice-ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
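A goal-oriented measure of the kind described - counting occurrences of function symbols shared with the goal - could look like the following sketch, with terms encoded as nested tuples (an illustrative encoding, not the system's own):

```python
def symbols(term):
    """Yield every symbol occurrence of a term written as a nested tuple
    ('f', arg1, arg2, ...) or as a plain symbol (variable/constant)."""
    if isinstance(term, tuple):
        yield term[0]
        for sub in term[1:]:
            yield from symbols(sub)
    else:
        yield term

def goal_overlap(term, goal):
    """Count symbol occurrences in `term` that also occur in the goal;
    a larger overlap suggests selecting the critical pair earlier."""
    goal_syms = set(symbols(goal))
    return sum(1 for s in symbols(term) if s in goal_syms)
```

A selection heuristic would then prefer critical pairs whose terms maximize this overlap (or a weighted combination with term size and nesting depth).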

We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with a fully automated deduction system, AutoCoDe, have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.

Evolving Combinators
(1996)

One of the many abilities that distinguish a mathematician from an automated deduction system is to be able to offer appropriate expressions, based on intuition and experience, that are substituted for existentially quantified variables so as to simplify the problem at hand substantially. We propose to simulate this ability with a technique called genetic programming for use in automated deduction. We apply this approach to problems of combinatory logic. Our experimental results show that the approach is viable and actually produces very promising results. A comparison with the renowned theorem prover Otter underlines the achievements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).

We present a method for making use of past proof experience called flexible re-enactment (FR). FR is actually a search-guiding heuristic that uses past proof experience to create a search bias. Given a proof P of a problem solved previously that is assumed to be similar to the current problem A, FR searches for P and in the "neighborhood" of P in order to find a proof of A. This heuristic use of past experience has certain advantages that make FR quite profitable and give it a wide range of applicability. Experimental studies substantiate and illustrate this claim. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).

We present a novel approach to classification, based on a tight coupling of instance-based learning and a genetic algorithm. In contrast to the usual instance-based learning setting, we do not rely on (parts of) the given training set as the basis of a nearest-neighbor classifier, but we try to employ artificially generated instances as concept prototypes. The extremely hard problem of finding an appropriate set of concept prototypes is tackled by a genetic search procedure with the classification accuracy on the given training set as evaluation criterion for the genetic fitness measure. Experiments with artificial datasets show that - due to the ability to find concise and accurate concept descriptions that contain few but typical instances - this classification approach is considerably robust against noise, untypical training instances, and irrelevant attributes. These favorable (theoretical) properties are corroborated using a number of hard real-world classification problems.
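A minimal sketch of the coupling described above, with toy data, operators, and hyperparameters of our own choosing (the paper's actual datasets and genetic operators differ): candidate sets of artificial prototype instances are evolved, with classification accuracy on the training set as the fitness measure.

```python
import random

random.seed(0)

# Toy training set: 1-D instances from two well-separated classes.
train = [(x / 10.0, 0) for x in range(10)] + [(5.0 + x / 10.0, 1) for x in range(10)]

def classify(prototypes, x):
    """Nearest-neighbor rule over artificial prototype instances (value, label)."""
    return min(prototypes, key=lambda p: abs(p[0] - x))[1]

def fitness(prototypes):
    """Genetic fitness: classification accuracy on the training set."""
    return sum(classify(prototypes, x) == y for x, y in train) / len(train)

def random_individual(k=2):
    """A candidate concept description: k artificially generated prototypes."""
    return [(random.uniform(0.0, 6.0), random.randint(0, 1)) for _ in range(k)]

def mutate(individual):
    """Perturb prototype positions (labels are kept fixed in this sketch)."""
    return [(v + random.gauss(0.0, 0.5), label) for v, label in individual]

# Plain generational search with elitism: keep the best half, refill by mutation.
population = [random_individual() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best))
```

Because the evolved prototypes need not be training instances, the resulting concept description can stay small and typical even when the training set is noisy.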

We investigate to what extent interpolation mechanisms based on the nearest-neighbor rule (NNR) can support cancer research. The main objective is to use the NNR to predict the likelihood of tumorigenesis based on given risk factors. By using a genetic algorithm to optimize the parameters of the nearest-neighbor prediction, the performance of this interpolation method can be improved substantially. Furthermore, it is possible to detect risk factors which are hardly or not at all relevant to tumorigenesis. Our preliminary studies demonstrate that NNR-based interpolation is a simple tool that nevertheless has enough potential to be seriously considered for cancer research or related research.
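The following sketch (our own illustration, with made-up data, not the study's) shows the kind of parameter the genetic algorithm would optimize here: per-feature weights in the nearest-neighbor distance. Driving the weight of an irrelevant risk factor towards zero both improves prediction and flags that factor as irrelevant.

```python
# Weighted nearest-neighbor prediction over stored cases
# (risk-factor vector, outcome). The weights are the parameters
# a genetic algorithm would tune; here we set them by hand.

def weighted_dist(weights, a, b):
    """Squared Euclidean distance with per-feature weights."""
    return sum(w * (ai - bi) ** 2 for w, ai, bi in zip(weights, a, b))

def nnr_predict(cases, weights, query):
    """Return the outcome of the stored case nearest to `query`."""
    return min(cases, key=lambda c: weighted_dist(weights, c[0], query))[1]

# Toy cases: feature 0 is predictive, feature 1 is pure noise on a large scale.
cases = [((0.1, 9.0), 0), ((0.2, 1.0), 0), ((0.9, 5.0), 1), ((1.0, 2.0), 1)]

query = (0.9, 9.1)  # clearly class 1 on feature 0, but high on the noisy feature
print(nnr_predict(cases, (1.0, 1.0), query))  # -> 0, misled by the noisy feature
print(nnr_predict(cases, (1.0, 0.0), query))  # -> 1, noise weighted out
```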

A translation contract is a binary predicate corrTransl(S,T) for source programs S and target programs T. It precisely specifies when T is considered to be a correct translation of S. A certifying compiler generates --in addition to the target T-- a proof for corrTransl(S,T). Certifying compilers are important for the development of safety-critical systems to establish the behavioral equivalence of high-level programs with their compiled assembler code. In this paper, we report on a certifying compiler, its proof techniques, and the underlying formal framework developed within the proof assistant Isabelle/HOL. The compiler uses a tiny C-like language as input, has an optimization phase, and generates MIPS code. The underlying translation contract is based on a trace semantics. We investigate design alternatives and discuss our experiences.

Today's communication systems are typically structured into several layers, where each layer realizes a fixed set of protocol functionalities. These functionalities have been carefully chosen such that a wide range of applications can be supported and protocols work in a general environment of networks. However, due to evolving network technologies as well as increased and varying demands of modern applications, general-purpose protocol stacks are not always adequate. To improve this situation, new flexible communication architectures have been developed which enable the configuration of customized communication subsystems by composing a proper set of reusable building blocks. In particular, several approaches to automatic configuration of communication subsystems have been reported in the literature. This report gives an overview of these approaches (F-CCS, Da CaPo, x-Kernel, and ADAPTIVE) and, in particular, defines a framework which identifies common architectural issues and configuration tasks.

Due to the large variety of modern applications and evolving network technologies, a small number of general-purpose protocol stacks will no longer be sufficient. Rather, customization of communication protocols will play a major role. In this paper, we present an approach that has the potential to substantially reduce the effort for designing customized protocols. Our approach is based on the concept of design patterns, which is well-established in object-oriented software development. We specialize this concept to communication protocols, and - in addition - use formal description techniques (FDTs) to specify protocol design patterns as well as rules for their instantiation and composition. The FDTs of our choice are SDL-92 and MSCs, which offer suitable language support. We propose an SDL pattern description template and relate pattern-based configuring of communication protocols to existing SDL methodologies. Particular SDL patterns and the configuring of a customized resource reservation protocol are presented in detail.

The development of software products has become a highly cooperative and distributed activity involving working groups at geographically distinct places. These groups show increasing mobility and a very flexible organizational structure. Process methodology and technology have to take such evolutions into account. A possible direction for the emergence of new process technology and methodology is to benefit from recent advances within multiagent systems engineering: innovative methodologies for adaptable and autonomous architectures, which exhibit interesting features to support distributed software processes.

Within the present paper we investigate case-based representability as well as case-based learnability of indexed families of uniformly recursive languages. Since we are mainly interested in case-based learning with respect to an arbitrary fixed similarity measure, case-based learnability of an indexed family requires its representability first. We show that every indexed family is case-based representable by positive and negative cases. If only positive cases are allowed, the class of representable families is comparatively small. Furthermore, we present results that provide some bounds concerning the necessary size of case bases. We study, in detail, how the choice of a case selection strategy influences the learning capabilities of a case-based learner. We define different case selection strategies and compare their learning power to one another. Furthermore, we elaborate the relations to Gold-style language learning from positive and both positive and negative examples.

Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB; sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.

In recent years, methods of case-based reasoning have frequently been applied in areas where symbolic approaches are traditionally used, for example in classification. This inevitably raises the question of the differences between, and the relative power of, these learning approaches. Jantke [Jantke, 1992] has already investigated commonalities between inductive inference and case-based classification. In this paper, we clarify some of the relationships between the case base, the similarity measure, and the concept to be learned. To this end, a simple symbolic learning algorithm (the version space according to [Mitchell, 1982]) is transformed into an equivalent case-based variant. The results presented confirm the equivalence of symbolic and case-based approaches and show the strong interdependency between the measure used in the system and the concept to be learned.

While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly into a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB; d), i.e. by a set CB of cases and a distance measure d. Given the same information, both the symbolic and the case-based approach compute a classification when a new case is presented. This poses the question whether there are any differences concerning the learning power of the two approaches. In this work we will study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
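As a toy illustration (our own, not the construction used in the paper) of how a pair (CB, d) can implicitly represent the same concept that a symbolic rule states explicitly:

```python
# The concept "x >= 5" represented two ways.

def symbolic(x):
    """Explicit, symbolic representation: a rule."""
    return x >= 5

# Implicit, case-based representation: a case base CB of two labeled
# cases plus a distance measure d, classified by the nearest case.
CB = [(4, False), (5, True)]

def case_based(x, d=lambda a, b: abs(a - b)):
    """Classify x with the label of its nearest stored case."""
    return min(CB, key=lambda c: d(c[0], x))[1]

# Both classifiers agree on every integer query in this range.
print(all(symbolic(x) == case_based(x) for x in range(0, 11)))  # -> True
```

The choice of d is essential: with a different distance measure, the same two cases would represent a different concept, which is exactly the interdependency the abstract points out.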

In this paper, we present an approach to support distributed planning and scheduling, as well as the subsequent (also distributed) plan execution, in one system. The system will support the distributed planners and schedulers by providing task agendas for them, stating who needs to plan which tasks, and sending change notifications and warnings if a planning or scheduling decision needs to be updated. The plan built using these mechanisms is then enacted by a workflow engine in the same system. This approach enables us to support interleaved planning and plan enactment, allowing the user to change the plan and schedule while the project is already under way. Deviations of the actual project enactment from the plan and schedule can automatically be detected, and the necessary notifications will be sent to the concerned planner(s). This again facilitates keeping the plan up to date, avoiding the complete invalidation of the plan soon after enactment has started, as is often the case in conventional projects.

Correctness and runtime efficiency are essential properties of software in general and of high-speed protocols in particular. Establishing correctness requires the use of FDTs during protocol design and proving the protocol code correct with respect to its formal specification. Another approach to boost confidence in the correctness of the implementation is to generate protocol code automatically from the specification. However, the runtime efficiency of this code is often insufficient. This has turned out to be a major obstacle to the use of FDTs in practice. One of the FDTs currently applied to communication protocols is Estelle. We show how runtime efficiency can be significantly improved by several measures carried out during the design, implementation, and runtime of a protocol. Recent results of improvements in the efficiency of Estelle-based protocol implementations are extended and interpreted.

The purpose of this exposé is to explain the generic design of a customized communication subsystem. The exposé addresses both functional and non-functional aspects. The starting point is a real-time requirement from the application area of building automation. We show how this application requirement and some background information about the application area lead to a system architecture, a communication service, a protocol architecture, and to the selection, adaptation, and composition of protocol functionalities. The reader will probably be surprised how much effort is necessary in order to implement this innocuous-looking application requirement. Formal description techniques (FDTs) will be used in all design phases.

A non-trivial real-time requirement obeying a pattern that can be found in various instantiations in the application domain of building automation, and which is therefore called generic, is investigated in detail. The starting point is a description of a real-time problem in natural language augmented by a diagram, in a style often found in requirements documents. Step by step, this description is made more precise and finally transformed into a surprisingly concise formal specification, written in real-time temporal logic with customized operators. We reason why this formal specification precisely captures the original description - as far as this is feasible given the lack of precision of natural language.

We present a new criterion for confluence of (possibly) non-terminating left-linear term rewriting systems. The criterion is based on certain strong joinability properties of parallel critical pairs. We show how this criterion relates to other well-known results, consider some special cases, and discuss some possible extensions.

The well-known and powerful proof principle of well-founded induction says that for verifying \(\forall x : P (x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x : P (y)] \Rightarrow P (x)] \), provided \(<\) is a well-founded partial ordering on the domain of interest. Here we investigate a more general formulation of this proof principle which allows for a kind of parameterized partial orderings \(<_x\) which naturally arises in some cases. More precisely, we develop conditions under which the parameterized proof principle \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)]\) is sound in the sense that \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)] \Rightarrow \forall x : P (x)\) holds, and give counterexamples demonstrating that these conditions are indeed essential.
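One simple sufficient condition (our illustration of a special case; the paper develops more general conditions) can be verified directly: if all the parameterized orderings are uniformly bounded by a single well-founded ordering, the parameterized principle is sound.

```latex
\paragraph{An illustrative special case.}
Suppose there is a single well-founded ordering $\prec$ such that
\[
  \forall x\,\forall y:\; y <_x x \;\Rightarrow\; y \prec x .
\]
Then the parameterized principle is sound: assume
$\forall x : [[\forall y <_x x : P(y)] \Rightarrow P(x)]$ and prove
$\forall x : P(x)$ by ordinary well-founded induction on $\prec$.
Given the induction hypothesis $\forall y \prec x : P(y)$, every
$y <_x x$ also satisfies $y \prec x$, so $\forall y <_x x : P(y)$
holds, and the assumption yields $P(x)$.
```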